\section{Introduction} \label{sec:intro} We say that a graph $G$ contains a graph $H$ as a \emph{minor} if we can obtain a graph isomorphic to $H$ from a subgraph of $G$ by using edge contractions (discarding any loops and multiple edges that arise, since we are interested in simple graphs). A class of graphs $\cA$ is \emph{minor-closed} if for each $G \in \cA$, each minor $G'$ of $G$ is also in $\cA$. We say that $H$ is an \emph{excluded minor} for $\cA$ if $H$ is not in $\cA$ but each minor of $H$ (other than $H$ itself) is in $\cA$. Robertson and Seymour \cite{robertsonSeymour} showed that, for each minor-closed class $\cA$ of graphs, the set $\cH$ of excluded minors is finite. We say that $G$ is $H$-\emph{free} if it has no minor $H$; and given a set $\cH$ of graphs, $G$ is $\cH$-\emph{free} if it is $H$-free for all $H \in \cH$. We denote the class of all $\cH$-free graphs by $\Ex(\cH)$, and write $\Ex(H)$ when $\cH$ consists of just the graph $H$. Observe that $\Ex(\cH)$ contains $n$-vertex graphs for each $n$ as long as each graph in $\cH$ has at least one edge. Let us call a set $\cH$ of graphs \emph{suitable} if it is non-empty and each graph in $\cH$ has at least one edge. We shall restrict our attention to suitable sets $\cH$, and be interested in the number of edges in edge-maximal $\cH$-free graphs. Given a graph $G$, let $v(G)$ denote the number of vertices and $e(G)$ the number of edges. For all $n \geq 1$, let \[ E_\cH(n) = \{ e(G) : v(G) = n \text{ and } G \text{ is an edge-maximal $\cH$-free graph} \}. \] Also, let $M^+_\cH(n) = \max E_\cH(n)$, and $M^-_\cH(n) = \min E_\cH(n)$. Finally, let us define \[ \gap_{\cH}(n) = M^+_\cH(n) - M^-_\cH(n). \] As for $\Ex(H)$, we write $\gap_{H}(n)$ to denote $\gap_{\cH}(n)$ when $\cH$ consists of just the graph $H$. This is the case on which we focus. The function $M^+_\cH(n)$ (sometimes in the form of $2M^+_\cH(n)/n$, to analyse the maximum average degree of graphs in $\cA=\Ex(\cH)$) has been studied extensively for various suitable sets $\cH$. 
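For the smallest interesting case $\cH = \{K_3\}$, the quantities just defined can be computed by exhaustive search. The following Python sketch (ours, for illustration only; all function names are ad hoc) uses the fact that a graph is $K_3$-minor-free exactly when it is acyclic:

```python
from itertools import combinations

def is_forest(n, edges):
    # K_3-minor-free means acyclic: union-find detects a cycle when an
    # edge joins two vertices already in the same component
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def E_K3(n):
    # E_{K_3}(n): edge counts of the edge-maximal K_3-minor-free
    # (i.e. maximal acyclic) graphs on n labelled vertices
    pairs = list(combinations(range(n), 2))
    sizes = set()
    for k in range(len(pairs) + 1):
        for edges in combinations(pairs, k):
            if is_forest(n, edges) and all(
                    not is_forest(n, edges + (p,))
                    for p in pairs if p not in edges):
                sizes.add(len(edges))
    return sizes

# the maximal acyclic graphs are exactly the trees: gap_{K_3}(n) = 0
for n in range(2, 6):
    assert E_K3(n) == {n - 1}
```

In line with the purity of $\Ex(K_3)$ discussed below, every edge-maximal graph found is a tree with $n-1$ edges.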
Mader \cite{minorsLinear} showed that, given a graph $H$, there is a constant $c=c(H)$ such that $e(G) \leq c\, v(G)$ for each graph $G \in \Ex(H)$. Let us define $\beta_{\cH}$ by setting \begin{equation} \label{eqn.betadef} \beta_{\cH} := \sup_{G \in \cA} \frac{e(G)}{v(G)} = \sup_{n \geq 1} \frac{M^+_{\cH}(n)}{n}, \end{equation} noting that $\beta_{\cH}$ is finite. Write $\beta_{H}$ when $\cH$ consists just of the graph $H$. Building on work of Mader \cite{maderComplete}, Kostochka \cite{kostochkaComplete} and Fernandez de la Vega \cite{fdlvComplete}, Thomason \cite{completeMinors} showed that, for each positive integer $r$, we have $M^+_{K_r}(n) \sim \beta_{K_r} n$ as $n \to \infty$; and $\beta_{K_r} \sim \alpha \, r \log r$ as $r \to \infty$, where $\alpha \approx 0.319$. The value of $\beta_H$ for dense graphs $H$ was studied by Myers and Thomason \cite{nonCompleteMinors}. Reed and Wood \cite{sparseMinors} analysed this parameter for sparse forbidden minors $H$. Cs\'oka, Lo, Norin, Wu and Yepremyan \cite{disconnectedMinors} focused on $H$ being a union of disjoint cycles and, more generally, of disjoint $2$-connected graphs. Much less is known about the function $M^-_\cH(n)$ and, consequently, about $\gap_{\cH}(n)$, for a suitable set $\cH$. From Mader's result it follows that we always have $\gap_{\cH}(n) = O(n)$. We say that the class $\cA = \Ex(\cH)$ is \emph{pure} if we have $\gap_{\cH}(n) = 0$ for each positive integer $n$. For example, the class $\Ex(K_3)$ of forests is pure, since the $n$-vertex edge-maximal forests are the trees, each with $n-1$ edges. Our first main theorem is: \begin{theorem} \label{thm:connpure} The connected graphs $H$ on at least two vertices such that the class $\Ex(H)$ is pure are precisely the complete graphs $ K_2, K_3, K_4$ and the 3-vertex path $P_3$. \end{theorem} We say that $\Ex(\cH)$ is \emph{near-pure} if it is not pure, but we still have $\gap_{\cH}(n) = O(1)$. 
Also, we define the `linear impurity parameter' \[ \limp(\cH) = \liminf_{n \to \infty} \frac{\gap_{\cH}(n)}{n}; \] and we say that $\Ex(\cH)$ is \emph{linearly impure} if $\limp(\cH)>0$. Our second main result shows that all connected graphs $H$ fall into one of only three categories according to the purity of the class $\Ex(H)$. \begin{theorem} \label{thm:threeClasses} For each connected graph $H$ on at least two vertices, the class of $H$-free graphs is either pure, near-pure or linearly impure. \end{theorem} In other words, Theorem \ref{thm:threeClasses} says that it is not possible for the impurity of a class of $H$-free graphs to be unbounded but not grow linearly fast in $n$. We have seen in Theorem~\ref{thm:connpure} that if $H$ is $K_3$ or $K_4$ then $\gap_H(n)=0$ for each $n$, and $\limp(H)=0$. More generally, whenever $H$ is $2$-connected, $\gap_{H}(n)/n$ tends to a limit, so the `liminf' in the definition of limp could be replaced by the more satisfactory `lim' (see also Theorem~\ref{thm:AddableLimit}). \begin{theorem} \label{thm:PositiveLimit} Let $H$ be a 2-connected graph other than $K_3$ or $K_4$. Then, as $n \to \infty$, \begin{equation} \label{eq:positiveLimit} \frac{\gap_{H}(n)}{n} \; \to \; \limp(H) \: >0. \end{equation} \end{theorem} An important example of a pure minor-closed class is the class of planar graphs. Indeed, for each $n \geq 3$, all $n$-vertex edge-maximal graphs $G$ embeddable in the plane are triangulations, satisfying $e(G) = 3n-6$. However, somewhat surprisingly, a similar statement does not hold for graphs embeddable in the torus: it was shown in~\cite{torusNonTriangulation} that the complete graph on $8$ vertices with the edges of a $5$-cycle $C_5$ removed (thus containing $23$ edges) is an edge-maximal graph embeddable in the torus, while each $8$-vertex triangulation of the torus, by Euler's formula, contains $24$ edges. 
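The counts in the torus example can be checked with two lines of arithmetic (a sketch of ours: the torus has Euler characteristic $0$, so Euler's formula reads $n - e + f = 0$, and a triangulation has $f = 2e/3$):

```python
n = 8
# K_8 has 8*7/2 = 28 edges; removing the 5 edges of a C_5 leaves 23
e_K8_minus_C5 = n * (n - 1) // 2 - 5
assert e_K8_minus_C5 == 23

# triangulation of the torus: n - e + 2e/3 = 0, hence e = 3n
e_triangulation = 3 * n
assert e_triangulation == 24
```

So the edge-maximal graph of~\cite{torusNonTriangulation} falls one edge short of every $8$-vertex triangulation of the torus.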
However, for every surface $S$, the (minor-closed) class of graphs embeddable in $S$ is pure or near-pure, as shown by McDiarmid and Wood~\cite{embeddableImpurity}. At this point, let us check that the four connected graphs listed in Theorem~\ref{thm:connpure}, namely $K_2$, $K_3$, $K_4$ and $P_3$, give rise to pure $H$-free classes of graphs. The case of $K_2$ is trivial, as $\Ex(K_2)$ consists of the graphs without edges. We already noted that the class $\Ex(K_3)$ of forests is pure. If $H=P_3$, the path on $3$ vertices, then the $n$-vertex edge-maximal $H$-free graphs are the maximal matchings, each with $\lfloor n/2 \rfloor$ edges. Finally, the class $\Ex(K_4)$ is the class of series-parallel graphs, which is also the class of graphs of treewidth at most 2. For each $n \geq 2$ each $n$-vertex edge-maximal such graph has exactly $2n-3$ edges. In fact, for each fixed $k \geq 1$ the edge-maximal graphs of treewidth at most $k$ are the $k$-trees, and each $n$-vertex $k$-tree has $kn-\binom{k+1}{2}$ edges for $n \geq k$ (and $\binom {n}{2}$ for $n<k$). Thus for each $k \geq 1$ the class of graphs of treewidth at most $k$ is pure. We will have to work much harder to prove that the four graphs listed are the only connected graphs $H$ for which $\Ex(H)$ is pure! The rest of the paper is organised as follows. In the next section we introduce addable graph classes, and prove a general limiting result, Theorem~\ref{thm:AddableLimit}, which yields the `limit part' of Theorem~\ref{thm:PositiveLimit}. We also sketch a useful consequence of purity or near-purity for such a class of graphs. In Section~\ref{sec.noleaf} we show that for each connected graph $H$ with no leaf (that is, with minimum degree $\delta(H) \geq 2$), if $H$ is not $K_3$ or $K_4$ then $\Ex(H)$ is linearly impure. 
This is a step towards proving both Theorems~\ref{thm:connpure} and~\ref{thm:threeClasses}, and together with Theorem~\ref{thm:AddableLimit} proves Theorem~\ref{thm:PositiveLimit}, concerning a 2-connected graph $H$. In Section~\ref{sec.leaf} we complete the proof of Theorem~\ref{thm:threeClasses}, showing that for a connected excluded minor there are only the three possibilities of purity, near-purity or linear impurity. In Section~\ref{sec:allNonPure} we complete the proof of Theorem~\ref{thm:connpure}, showing that only four connected graphs $H$ give rise to pure $H$-free classes. In Section \ref{sec:generalizations} we give some extensions of our results to suitable sets $\cH$ of two or more excluded graphs, and to forbidding disconnected graphs; and finally we propose some natural open problems. \section{Addable graph classes} \label{sec:addable} In this section we introduce addable graph classes. We show that, for an addable minor-closed class $\cA$ of graphs with suitable set $\cH$ of excluded minors, $ \gap_{\cH}(n)/n $ tends to a limit, and we identify that limit as a difference of two terms (see~(\ref{eq:addableLimit})). Finally we describe a consequence of purity or near-purity for growth constants when we have a given average degree. \smallskip We say that a graph class $\cA$ is \emph{addable} when \begin{enumerate} \item $G \in \cA$ if and only if every component of $G$ is in $\cA$ (following Kolchin \cite{randomMappings}, if $\cA$ satisfies this property we call it \emph{decomposable}), and \item whenever $G \in \cA$ and $u,v$ belong to different components of $G$ then the graph obtained from $G$ by adding the edge $\{u,v\}$ is also in $\cA$ (following \cite{randomMinorClosed}, such a class $\cA$ is called \emph{bridge-addable}). \end{enumerate} A minor-closed class is decomposable if and only if each excluded minor is connected, and it is addable if and only if each excluded minor is $2$-connected. 
For example, the classes of forests ($\Ex(K_3)$), series-parallel graphs ($\Ex(K_4)$), and planar graphs ($\Ex(\{ K_5, K_{3,3} \})$) are each addable. The following general limiting result shows that in the addable case, the `liminf' in the definition of limp can be replaced by `lim'. \begin{theorem} \label{thm:AddableLimit} Let $\cA$ be an addable minor-closed class of graphs, with suitable set $\cH$ of excluded minors. Then, as $n \to \infty$, \begin{equation} \label{eq:AddableLimit} \frac{\gap_{\cH}(n)}{n} \to \limp(\cH). \end{equation} \end{theorem} To prove this result, we use two lemmas, treating $M^+_{\cH}(n)$ and $M^-_{\cH}(n)$ separately. Recall that $\beta_{\cH}$ was defined in~(\ref{eqn.betadef}). In the following lemma, it is easy to see that $\beta_{\cH} \geq 1$, since $\cA$ contains all the forests. \begin{lemma} \label{lem.decomp} Let $\cA$ be a decomposable minor-closed class of graphs, with suitable set $\cH$ of excluded minors. Then \[ \frac1{n} M^+_\cH(n) \to \beta_{\cH} \;\; \mbox{ as } n \to \infty. \] \end{lemma} \begin{proof} Denote $M^+_\cH(n)$ by $f(n)$. For $i=1,2$ let $n_i$ be a positive integer and let $G_i \in \cA_{n_i}$ satisfy $e(G_i)=f(n_i)$. Since the disjoint union $G_1 \cup G_2$ is in $\cA_{n_1+n_2}$ we have \[ f(n_1+n_2) \geq f(n_1)+f(n_2); \] that is, $f$ is superadditive. Hence by Fekete's Lemma (see for example van Lint and Wilson \cite{combinatoricsCourse}) \[ \frac{f(n)}{n} \, \to \; \sup_k \frac{f(k)}{k} = \beta_\cH \;\; \mbox{ as } n \to \infty. \] \end{proof} \begin{lemma} Let $\cA$ be an addable minor-closed class of graphs, with suitable set $\cH$ of excluded minors. Then there is a constant $\beta^-_\cH \geq 1$ such that \[ \frac1{n} M^-_\cH(n) \to \beta^-_{\cH} \;\; \mbox{ as } n \to \infty. \] \end{lemma} \begin{proof} Let $h = \min \{ v(H) : H \in \cH \},$ and note that $h \geq 3$. Consider the function $f(n) = M^-_{\cH}(n) + (h-2)^2$. 
Note that each edge-maximal graph in $\cA$ is connected, so $f(n) \geq n$ for each $n$. Let $\beta^-_\cH = \inf_{k} f(k)/k \geq 1$. We claim that $f(n)$ is subadditive (that is $f(a+b) \leq f(a) + f(b)$), so by Fekete's Lemma, as $n \to \infty$ we have $f(n)/n \to \beta^-_\cH$ and thus also $M^-_{\cH}(n)/n \to \beta^-_\cH$. It remains to establish the claim that $f$ is subadditive. Let $n_1, n_2 \geq 1$ and let $G_1, G_2$ be edge-maximal $\cH$-free graphs with $v(G_1) = n_1$, $v(G_2) = n_2$, and such that $e(G_1) = M^-_{\cH}(n_1), e(G_2) = M^-_{\cH}(n_2)$. Note that $G_1$ and $G_2$ are connected. As in the proof of the last lemma, the disjoint union $G = G_1 \cup G_2$ is $\cH$-free. It will be enough to show that we cannot add more than $(h-2)^2$ edges to $G$ without creating an $H$-minor for some $H \in \cH$. Indeed, let $u_1 \neq v_1$ be in $V(G_1)$ and let $u_2,v_2$ be in $V(G_2)$, and assume that we can (simultaneously) add the edges $\{u_1,u_2 \}$ and $\{v_1,v_2 \}$ to $G$ without creating any $H$-minor. Then the edge $\{u_1, v_1\}$ must be present in $G_1$ since otherwise, after adding $\{u_1,u_2 \}$ and $\{v_1,v_2 \}$ to $G$, by the connectedness of $G_2$ there is a path between $u_1$ and $v_1$ that uses only vertices in $G_2$, and we may contract this path to an edge between $u_1$ and $v_1$: this would necessarily create an $H$-minor for some $H \in \cH$ by the edge-maximality of $G_1$. Hence if we add edges to $G$ without creating any $H$-minor then the vertices in $G_1$ incident to the edges that we add must induce a clique in $G_1$, with an analogous statement holding for $G_2$. By the definition of $h$, these cliques can have size at most $h-2$ (if there were an $(h-1)$-clique in $G_1$ say, and we contracted $G_2$ to a single vertex, we would obtain an $h$-clique), hence we can add at most $(h-2)^2$ edges. 
Consequently, \begin{align*} f(n_1+n_2) & = M^-_{\cH}(n_1+n_2) + (h-2)^2 \\ & \leq \left ( M^-_{\cH}(n_1) + M^-_{\cH}(n_2) + (h-2)^2 \right ) + (h-2)^2 \\ & = f(n_1)+f(n_2 ). \end{align*} Thus $f(n)$ is subadditive, and the proof is complete. \end{proof} The last two lemmas show that, if $\cA$ is an addable minor-closed class of graphs with suitable set $\cH$ of excluded minors, then \begin{equation} \label{eq:addableLimit} \frac{\gap_{\cH}(n)}{n} \to \beta_\cH - \beta^-_\cH \;\; \mbox{ as } n \to \infty. \end{equation} Thus $ \limp(\cH) = \beta_\cH - \beta^-_\cH$, and $\frac{\gap_{\cH}(n)}{n} \to \limp(\cH)$ as $n \to \infty$, which completes the proof of Theorem~\ref{thm:AddableLimit}. We close this section by sketching a useful consequence of purity or near-purity. Let $\cA$ be a minor-closed class of graphs, with non-empty set $\cH$ of excluded minors. Let $\cA_n$ denote the set of graphs in $\cA$ on vertex set $[n]=\{1,2,\ldots,n\}$, let $a_n = |\cA_n|$, and let \[ \gamma(\cA) = \limsup_{n \to \infty} \left ( \frac{a_n}{n!} \right )^{1/n}. \] Norine, Seymour, Thomas and Wollan \cite{properSmall} (see also Dvo\v{r}\'ak and Norine \cite{smallClasses}) showed that $\gamma(\cA) < \infty$. Now suppose that $\cA$ is addable, that is, the excluded minors are $2$-connected. Then (see, for example \cite{randomMinorClosed}), $( a_n / n! )^{1/n}$ converges to $\gamma(\cA)$ and we say that $\cA$ has \emph{growth constant} $\gamma(\cA)$. Defining $a_{n,q} = |\cA_{n,q}|$ to be the number of graphs in $\cA_n$ with $\lfloor qn \rfloor$ edges, following the methods in Gerke, McDiarmid, Steger and Wei{\ss}l \cite{planarSoda} it can be shown that $(a_{n,q} / n!)^{1/n}$ tends to a limit $\gamma(\cA,q)$. If $\cA$ is pure or near-pure then, again following the analysis in \cite{planarSoda}, we may see that $\gamma(\cA,q)$ as a function of $q$ is log-concave, and hence continuous, for $q \in (1, \beta_\cH)$. 
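Both lemmas above rest on Fekete's Lemma: for a superadditive $f$ (meaning $f(a+b) \geq f(a)+f(b)$), the ratio $f(n)/n$ converges to $\sup_k f(k)/k$. A minimal numeric illustration (our own sketch, with the toy choice $f(n) = n-1$, the number of edges of an $n$-vertex tree, so that the limit is $1$):

```python
def f(n):
    # edges of an n-vertex tree, i.e. M^+ for the class of forests
    return n - 1

# superadditivity: disjoint unions never lose edges
assert all(f(a + b) >= f(a) + f(b)
           for a in range(1, 30) for b in range(1, 30))

# f(n)/n increases towards its supremum sup_k f(k)/k = 1
ratios = [f(n) / n for n in (10, 100, 1000, 10000)]
assert ratios == sorted(ratios)
assert 1 - ratios[-1] < 1e-3
```

For the subadditive variant used in the second lemma the same statement holds with $\sup$ replaced by $\inf$ and the inequality reversed.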
\section{Purity and linear impurity: excluding a leafless graph} \label{sec.noleaf} In this section we prove the following lemma, which shows linear impurity for some excluded minors~$H$. It is a step towards proving both Theorems~\ref{thm:connpure} and~\ref{thm:threeClasses}, and together with Theorem~\ref{thm:AddableLimit} immediately yields Theorem~\ref{thm:PositiveLimit}. \begin{lemma} \label{lem.noleaf} Let $H$ be a connected graph with $\delta(H) \geq 2$, other than $K_3$ and $K_4$. Then the class of $H$-free graphs is linearly impure. \end{lemma} We shall often use the following fact, proved by Sylvester in 1884. \begin{fact} \label{fact:frobeniusNumber} Let $a_1, a_2$ be a pair of coprime positive integers. Then for every integer $N > a_1 a_2 - a_1 - a_2$ there are some non-negative integers $b_1, b_2$ such that \[ N = a_1 b_1 + a_2 b_2. \] \end{fact} Let us call a vertex $v$ in a connected $h$-vertex graph $H$ a \emph{strong separating vertex} if each component of $H-v$ has at most $h-3$ vertices (so $v$ is a separating vertex which does not just cut off a single leaf). In order to prove Lemma~\ref{lem.noleaf} we first consider complete graphs, and then non-complete graphs with no leaves. In the next lemma we deal with complete graphs. \begin{lemma} \label{lem:completeImpure} For each $r \geq 5$ the class of $K_r$-free graphs satisfies $\limp(K_r) \geq \frac76$. \end{lemma} \begin{proof} We prove the lemma by induction on $r$. First, let $r = 5$. Wagner \cite{wagnerK5} showed that any edge-maximal $K_5$-free graph on at least $4$ vertices can be constructed recursively, by identifying edges or triangles, from edge-maximal planar graphs (i.e., triangulations) and copies of the Wagner graph (recall that the Wagner graph is formed from the cycle $C_8$ by joining the four opposite pairs of vertices, hence it has $8$ vertices and $12$ edges). If $n = 6k+2$, we can take $G_1$ to be an arbitrary plane triangulation on $n$ vertices with $e(G_1) = 3n-6 = 18k$. 
We then take $G_2$ to be a clique-sum of $k$ copies of the Wagner graph $W_8$ that all overlap in one common edge. Then $e(G_2) = 11k+1$ and \[ \frac{e(G_1) - e(G_2)}{n} = \frac{7k-1}{6k+2} \to 7/6 \] as $k \to \infty$. For general $n$ we can modify the construction of $G_2$ by taking a clique-sum of $k$ copies of $W_8$ and a triangulation on $3 \leq m \leq 7$ vertices (in fact, by the above characterisation of the edge-maximal $K_5$-free graphs, it is easy to check that $\limp(K_5) = \frac76$). Therefore the lemma holds for $r=5$. The statement for $r+1$ follows from the statement for $r$ by observing that if we take any edge-maximal $K_r$-free graph $G$, add to it one vertex and connect it to all vertices of $G$, then the resulting graph is edge-maximal $K_{r+1}$-free. \end{proof} \begin{remark} Recall from \cite{completeMinors} that $M^+_{K_r}(n) \sim \alpha \, r \log r \, n$ for $\alpha \approx 0.319$, while the constructions in Lemma \ref{lem:completeImpure} have both $e(G_1)$ and $e(G_2)$ that grow linearly with $r$. Thus we see that $\limp(K_r) \sim \alpha \, r \log r$. \end{remark} \smallskip We next consider connected graphs that are not complete but do not have any leaves. We say that $G$ has \emph{connectivity $k$} if $k$ is the minimum size of a vertex cut of $G$ (except that, for $n \geq 2$, $K_n$ has connectivity $n-1$). Also, we say that $G$ is $j$-connected if $G$ has connectivity at least $j$. Recall that $\delta(G)$ denotes the minimum degree and, for $u \in V(G)$, let \[ N(u) = \{ v \in V(G) : \{u,v\} \in E(G)\} \] denote the neighbourhood of $u$ in $G$. The following simple fact will be very useful to us. \begin{fact} \label{fact:highlyConnected} Let $G$ be a non-complete graph on $n$ vertices with $\delta(G) = \delta$. Then $G$ has connectivity at least $2\delta-n+2$. \end{fact} \begin{proof} Let $u$ and $v$ be a non-adjacent pair of vertices. 
Then \[ 2 \delta \leq \deg(u) + \deg(v) = |N(u) \cup N(v)| + |N(u) \cap N(v)| \leq n - 2 + |N(u) \cap N(v)|, \] so $u$ and $v$ have at least $2\delta-n+2$ common neighbours, and any vertex cut separating $u$ and $v$ must contain all of these vertices. \end{proof} \begin{lemma} \label{lem:noLeaves} Let $H$ be a connected non-complete graph on $h \geq 4$ vertices with $\delta := \delta(H) \geq 2$. Then the class of $H$-free graphs satisfies $\limp(H) \geq \frac1{2h}$. \end{lemma} \begin{proof} Since $H$ is connected, $H$ has connectivity $k$ for some $k \geq \max \{2\delta-h+2, 1\}$. We first show that for all $m \geq 1$ there exist two graphs $G_1, G_2$, both on \[ n = (h-k)(h-k+1) m+k-1 \] vertices, that are edge-maximal $H$-free and such that \[ e(G_1) - e(G_2) \geq \frac{(h-k)m}{2} = (1+o(1))\frac{n}{2(h-k+1)}. \] We construct the ``dense'' graph $G_1$ as follows. We take $(h-k+1)m$ copies of $K_{h-1}$ that all overlap in a fixed set of $k-1$ vertices. Clearly $G_1$ is $H$-free: each copy of $K_{h-1}$ has too few vertices to contain an $H$-minor on its own, so an $H$-minor in $G_1$ would have to spread across more than one of the copies; but these copies overlap in only $k-1$ vertices, while $H$ has connectivity $k$. Also, $G_1$ has $(h-k)(h-k+1) m+k-1$ vertices and \[ \begin{split} e(G_1) & = (h-k+1) m \left ( \binom{h-k}{2} + (h-k)(k-1) \right ) + \binom{k-1}{2} \\ & = (h-k) m (h-k+1) \frac{h+k-3}{2} + \binom{k-1}{2}. \end{split} \] We construct the ``sparse'' graph $G_2$ similarly. We start by taking $(h-k)m$ copies of $K_{h-1}$ that all overlap in a fixed set $I$ of $k-1$ vertices. The resulting graph $G'_2$ has $(h-k)^2 m + k-1$ vertices, i.e., $(h-k)m$ fewer than $G_1$. We complete the construction of $G_2$ by adding these $(h-k)m$ missing vertices and joining each of them to $\delta-1$ vertices in a distinct copy of $K_{h-1}$ in such a way that the neighbourhood of each new vertex does not contain the whole of $I$ (see Figure \ref{figure:noLeaves}). 
Note that $G_2$ is $H$-free: for if $G_2$ had a minor $H$ then so would $G'_2$ (since vertices $v$ of degree $< \delta(H)$ with $N(v)$ complete are redundant), and we may see as for $G_1$ that $G_2'$ has no minor $H$. We have \[ \begin{split} e(G_2) & = (h-k) m \left ( \binom{h-k}{2} + (h-k)(k-1) \right ) + \binom{k-1}{2} + (h-k)m(\delta-1) \\ & = (h-k) m \left ( (h-k) \frac{h+k-3}{2} + \delta-1 \right ) + \binom{k-1}{2}. \end{split} \] \begin{figure}[htb] \centering \begin{tikzpicture} \tikzstyle{vertex}=[draw,shape=circle,minimum size=5pt,inner sep=0pt] \draw (2,0) ellipse (2cm and 1cm); \draw (0,0) ellipse (2cm and 1cm); \draw (1,1) ellipse (1cm and 2cm); \draw (1,-1) ellipse (1cm and 2cm); \draw[black] (-1,0) node {$h-k$}; \draw[black] (1,2) node {$h-k$}; \draw[black] (1,-2) node {$h-k$}; \draw[black] (3,0) node {$h-k$}; \draw[black] (1,0) node {$k-1$}; \foreach \name/\x/\y in {1/-2/-2, 2/3/-3, 3/4/2, 4/-1/3} \node[vertex] (P-\name) at (\x,\y) {~}; \foreach \x/\y in {-1/-0.45, -0.75/-0.5, -0.5/-0.55} \draw (P-1) -- (\x,\y); \foreach \x/\y in {1.65/-2, 1.7/-1.75, 1.75/-1.5} \draw (P-2) -- (\x,\y); \foreach \x/\y in {3/0.45, 2.75/0.5, 2.5/0.55} \draw (P-3) -- (\x,\y); \foreach \x/\y in {0.35/2, 0.3/1.75, 0.25/1.5} \draw (P-4) -- (\x,\y); \end{tikzpicture} \caption{Graph $G_2$ as defined in Lemma \ref{lem:noLeaves}.} \label{figure:noLeaves} \end{figure} Consequently, \[ e(G_1) - e(G_2) = (h-k) m \left ( \frac{h+k-3}{2} - \delta + 1 \right ). \] By Fact \ref{fact:highlyConnected} we have $h+k-3 \geq 2 \delta - 1$ hence \[ e(G_1) - e(G_2) \geq \frac {(h-k) m}{2} = \frac{n-k+1}{2(h-k+1)} \sim \frac{n}{2(h-k+1)}. \] To show that $G_2$ is edge-maximal $H$-free, assume that we add an edge $e$ to $G_2$. If $e$ connects vertices not in $I$ in two distinct copies of $K_{h-1}$, then by contracting it we obtain two copies of $K_{h-1}$ that overlap in $k$ vertices and the resulting graph contains $H$ as a subgraph because $H$ has connectivity $k$. 
If $e$ connects a vertex $v$ of degree $\delta-1$ to a vertex in the copy of $K_{h-1}$ that contains the whole of $N(v)$ then this graph contains $H$ as a subgraph because now $\deg(v) = \delta = \delta(H)$. If finally $e$ connects a vertex $v$ of degree $\delta-1$ to another vertex $u$ that either has degree $\delta-1$ or is located in some other copy of $K_{h-1}$ then we can contract the path between $u$ and a vertex in $I \setminus N(v)$. The resulting graph again contains $H$ as a subgraph, because now $\deg(v) = \delta = \delta(H)$. To complete the proof of the lemma we observe that $h-k$ and $h-k+1$ are coprime. Thus by Fact \ref{fact:frobeniusNumber} for all $n$ large enough we can build approximations $G'_1, G'_2$ of the above graphs $G_1, G_2$ using the building blocks described above ($K_{h-1}$, and $K_{h-1}$ plus a vertex of degree $\delta-1$), with $\frac{e(G'_1) - e(G'_2)}{n} \to \frac{1}{2(h-k+1)} \geq \frac{1}{2h}$. \end{proof} At this stage, we have seen by Lemmas~\ref{lem:completeImpure} and~\ref{lem:noLeaves} that, if the connected graph $H$ has $\delta(H) \geq 2$ and $H$ is not $K_3$ or $K_4$, then $\limp(H)>0$; that is, we have proved Lemma~\ref{lem.noleaf}. Now Theorem~\ref{thm:PositiveLimit} follows from Theorem~\ref{thm:AddableLimit}. \section{Purity, near-purity and linear impurity: excluding a graph with a leaf} \label{sec.leaf} In this section we complete the proof of Theorem~\ref{thm:threeClasses}, which says that for a connected excluded minor $H$ there are only the three possibilities of purity, near-purity or linear impurity for $\Ex(H)$. We first deal quickly with graphs $H$ which have a strong separating vertex, treating the claw graph $K_{1,3}$ separately in Observation \ref{obs:claw}; and then we consider graphs $H$ with at least one leaf and no strong separating vertex. \begin{lemma} \label{lem:separatingVertex} Let $H$ be a connected graph on $h \geq 5$ vertices which contains a strong separating vertex~$v$. 
Then the class of $H$-free graphs satisfies $\limp(H) \geq \frac12$. \end{lemma} \begin{proof} The construction here is very simple. For $m \geq 1$, let $G_1$ consist of $(h-2)m$ disjoint copies of $K_{h-1}$ and let $G_2$ consist of $(h-1)m$ disjoint copies of $K_{h-2}$. Both graphs contain $n = (h-1)(h-2)m$ vertices and are trivially $H$-free. They are edge-maximal $H$-free because whenever we add an edge $e$ to either $G_1$ or $G_2$, we can then contract it and identify the resulting common vertex of two cliques of size either $h-1$ or $h-2$ with $v$. The resulting graph contains $H$ as a subgraph because $h \geq 5$ and consequently $h-2 +h- 3 \geq h$. We clearly have $e(G_1) = (h-1)(h-2)^2 m/2$ and $e(G_2) = (h-1)(h-2)(h-3) m/2$. Hence \[ e(G_1) - e(G_2) = \frac{(h-1)(h-2) m}{2} = \frac{n}{2}. \] The construction for general $n$ follows easily from Fact \ref{fact:frobeniusNumber} since $h-1$ and $h-2$ are coprime. \end{proof} \begin{observation} \label{obs:claw} The only connected graph on $h=4$ vertices with a strong separating vertex is the claw $K_{1,3}$. The class of $K_{1,3}$-free graphs is not pure, since for all $n \geq 4$ the cycle $C_n$ and the union of a cycle $C_{n-1}$ and an isolated vertex are edge-maximal $K_{1,3}$-free with $n$ and $n-1$ edges respectively. However, this class is near-pure with $\gap_{K_{1,3}}(n) = 1$ for all $n \geq 4$. Indeed, note that any connected component of an edge-maximal $K_{1,3}$-free graph $G$ on $n$ vertices is either a cycle, an edge or an isolated vertex. Moreover, $G$ can have at most one component of size less than $3$ to preserve edge-maximality. Hence $G$ must have either $n$ or $n-1$ edges. \end{observation} For the rest of this section we consider the case when the connected graph $H$ on $h$ vertices has at least one leaf and has no strong separating vertex. 
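Observation~\ref{obs:claw} can be confirmed by brute force for small $n$. The sketch below (ours, purely illustrative) uses the fact that a graph has a $K_{1,3}$ minor precisely when some vertex has degree at least $3$, so the $K_{1,3}$-free graphs are exactly the disjoint unions of paths and cycles:

```python
from itertools import combinations

def claw_free(n, edges):
    # K_{1,3}-minor-free iff maximum degree is at most 2
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return max(deg) <= 2

def E_claw(n):
    # edge counts of the edge-maximal K_{1,3}-free graphs on n vertices
    pairs = list(combinations(range(n), 2))
    sizes = set()
    for k in range(len(pairs) + 1):
        for edges in combinations(pairs, k):
            if claw_free(n, edges) and all(
                    not claw_free(n, edges + (p,))
                    for p in pairs if p not in edges):
                sizes.add(len(edges))
    return sizes

# the extremal graphs are C_n (n edges) and C_{n-1} plus an isolated
# vertex (n-1 edges), so gap_{K_{1,3}}(n) = 1
for n in (4, 5, 6):
    assert E_claw(n) == {n - 1, n}
```

This matches the near-purity claim: $\gap_{K_{1,3}}(n) = 1$ for each $n$ tested.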
We say that a connected graph $G$ is \emph{leaf-and-edge-maximal $H$-free} if $G$ is edge-maximal $H$-free and attaching a new leaf to an arbitrary vertex of $G$ creates an $H$-minor. \begin{lemma} \label{lem:IOmaximal} Suppose that the connected graph $H$ has a leaf, and the class of $H$-free graphs is not linearly impure. Then each leaf-and-edge-maximal $H$-free graph $G$ satisfies $e(G)/v(G) = (h-2)/2$; and each $H$-free graph $G$ satisfies $e(G)/v(G) \leq (h-2)/2$. \end{lemma} \begin{proof} Indeed, if there existed two leaf-and-edge-maximal $H$-free graphs $G_1, G_2$ with $\frac{e(G_1)}{v(G_1)} > \frac{e(G_2)}{v(G_2)}$ then we could trivially construct two arbitrarily large edge-maximal $H$-free graphs with the same number of vertices: $G'$ consisting of disjoint copies of $G_1$, and $G''$ consisting of disjoint copies of $G_2$, such that \[ e(G')-e(G'') = \left ( \frac{e(G_1)}{v(G_1)} - \frac{e(G_2)}{v(G_2)} \right ) v(G'). \] Further, to handle general $n$, to both $G'$ and $G''$ we could add a union of at most $\frac{\max\{v(G'),v(G'')\}}{h-1}$ disjoint copies of $K_{h-1}$ and a $K_{i}$ for some $1 \leq i \leq h-2$, keeping the graph edge-maximal $H$-free. The claim now follows from the observation that, since $H$ has a leaf, $K_{h-1}$ is always a leaf-and-edge-maximal $H$-free graph. The second statement in the lemma follows similarly, by taking $G_1$ as $G$ and $G_2$ as $K_{h-1}$. \end{proof} \begin{observation} \label{obs:atMostOneNotIO} Suppose that $H$ has no strong separating vertex. Then any edge-maximal $H$-free graph contains at most one component that is not leaf-and-edge-maximal $H$-free. Otherwise we could connect two such components by a suitably attached edge, and the resulting graph would still be $H$-free because $H$ has no strong separating vertex and the components we started with were not leaf-and-edge-maximal $H$-free. \end{observation} The next lemma is the final step towards proving Theorem \ref{thm:threeClasses}. 
\begin{lemma} \label{lem:noGreyTerritory} Let $H$ be a graph on $h$ vertices that is connected, has at least one leaf and has no strong separating vertex. If there exist $n > 0$ and two edge-maximal $H$-free graphs $G_1, G_2$ on $n$ vertices such that \begin{equation} \label{eq:boundOnM} e(G_1) - e(G_2) \geq M = \frac{h-2}{2} + 2\beta_H^2+1 \end{equation} then the class of $H$-free graphs is linearly impure. \end{lemma} \begin{proof} Assume for a contradiction that the class of $H$-free graphs is not linearly impure. Let $G_1$ and $G_2$ be edge-maximal $H$-free graphs on the same vertex set such that $e(G_1)-e(G_2) \geq M$, for $M$ as in \eqref{eq:boundOnM} (we observe that $\beta_H \leq \beta_{K_h}$). If all components of $G_2$ were leaf-and-edge-maximal $H$-free then, by Lemma \ref{lem:IOmaximal}, we would have $e(G_2) = \frac{h-2}{2} n$, and so some component $C$ of $G_1$ would be $H$-free with $e(C)/v(C) > (h-2)/2$, which contradicts Lemma~\ref{lem:IOmaximal}. Hence, by Observation \ref{obs:atMostOneNotIO}, $G_2$ has exactly one component $C$, with $v(C)=c$, that is edge-maximal $H$-free but not leaf-and-edge-maximal $H$-free. By Lemma~\ref{lem:IOmaximal} we have \[ e(C) \leq \frac{h-2}{2}c - M. \] Let $A$ be the set of all vertices $v$ in $C$ such that attaching a leaf to $v$ does not create an $H$-minor, and let $a = |A| \geq 1$. Clearly the subgraph induced by $A$ must be $H$-free, so the set $A$ induces at most $\beta_H a$ edges. Let $v$ be a vertex in $A$ with the minimum number of neighbours in $A$: clearly, $\deg_A(v) \leq 2 \beta_H$. Let $n = (c-1)m+1+t(h-1)+s$, where $t \geq 0$, $0 \leq s \leq h-2$, and $t(h-1)+s < c-1$. We take $m$ isomorphic copies of $C$ and turn them into one connected graph on $n' = (c-1)m+1$ vertices and $me(C)$ edges by identifying the vertices $v$ in all these copies into one vertex (still called $v$). Next, we add a copy of $K_{s}$ and join all (if $s \leq h-3$) or one (if $s = h-2$) of its vertices to $v$ by an edge. 
Finally, to this graph we add $t$ disjoint copies of $K_{h-1}$. The resulting graph $G$ on $n$ vertices is $H$-free by the definition of $A$ and by the fact that $H$ has no strong separating vertex (this latter property is the reason why we can join all the vertices of $K_s$ with $v$ if $s \leq h-3$). We do not know if this graph is edge-maximal $H$-free. However, observe that we can only add edges to $G$ between distinct copies of the set $A$, or between one of the copies of $A$ and the clique $K_{s}$, or between $v$ and the clique $K_s$ (if $s = h-2$). Moreover, we are not allowed to add edges incident to vertices in $A$ that are not adjacent to $v$. Indeed, assume that we add an edge $\{u,w\}$ such that $u \notin N(v)$. Then by contracting a path from $w$ to $v$ (recall that $C$ is connected) we ``add'' the edge $\{u,v\}$ to a copy of $C$, which creates an $H$-minor by the edge-maximality of $C$ (see Figure \ref{figure:noGreyTerritory}). Hence there are at most $2 \beta_H m + s$ vertices other than $v$ between which we can add edges and keep the graph $H$-free. 
\begin{figure}[htb] \centering \begin{tikzpicture} \tikzstyle{vertex}=[draw,shape=circle,minimum size=3pt,fill, inner sep=0pt] \draw[thick] (2.5,0) ellipse (2cm and 0.5cm); \draw[thick] (-0.5,0) ellipse (2cm and 0.5cm); \draw[thick] (1,1.5) ellipse (0.5cm and 2cm); \draw[thick] (1,-1.5) ellipse (0.5cm and 2cm); \draw[black] (-1.75,0) node {$C$}; \draw[black] (1,2.75) node {$C$}; \draw[black] (1,-2.75) node {$C$}; \draw[black] (3.75,0) node {$C$}; \draw (1.85,0) ellipse (1.35cm and 0.35cm); \draw (0.15,0) ellipse (1.35cm and 0.35cm); \draw (1,0.85) ellipse (0.35cm and 1.35cm); \draw (1,-0.85) ellipse (0.35cm and 1.35cm); \draw[black] (-0.75,0) node {$A$}; \draw[black] (1,1.75) node {$A$}; \draw[black] (1,-1.75) node {$A$}; \draw[black] (2.75,0) node {$A$}; \foreach \name/\x/\y in {v/0.9/0, u/0.9/1, w/0.1/0} \node[vertex] (P-\name) at (\x,\y) {~}; \foreach \name/\x/\y in {$v$/1.1/0, $u$/1.1/1, $w$/-0.15/0} \draw[black] (\x,\y) node {\name}; \draw [color=black] (P-u) .. controls +(-0.5,0) and +(0,0.5) .. (P-w); \draw [color=black, dashed] (P-w) .. controls +(0.5,0.25) and +(-0.5,-0.25) .. (P-v); \end{tikzpicture} \caption{An $H$-free graph $G$ with $t=s=0$ as defined in Lemma \ref{lem:noGreyTerritory}.} \label{figure:noGreyTerritory} \end{figure} Again, to avoid creating an $H$-minor we can add at most $2 \beta_H^2 m + \beta_H s$ edges between these vertices. Since we can add at most $h-3$ edges incident to $v$, there exists an edge-maximal $H$-free graph $G'$ on $n = (c-1)m+1+t(h-1)+s$ vertices with \[ e(G') \leq e(C)m + 2 \beta_H^2 m + \beta_H s + h-3 < m \left( \frac{h-2}{2}c - M + 2 \beta_H^2 \right ) + h(\beta_H+1). \] We take $G''$ to be an edge-maximal $H$-free graph on $n$ vertices consisting of $\lfloor n/(h-1) \rfloor$ disjoint copies of $K_{h-1}$ and one copy of $K_i$ for some $0 \leq i \leq h-2$. Hence we have \[ e(G'') > \frac{n(h-2)}{2} - \frac{(h-2)^2}{2}. 
\] Therefore \[ \begin{split} e(G'') - e(G') & \geq ((c-1)m+1)\frac{h-2}{2} - \frac{(h-2)^2}{2} - m \left ( \frac{h-2}{2}c - M + 2 \beta_H^2 \right ) - h(\beta_H+1) \\ & \geq m \left ( M- \frac{h-2}{2} - 2 \beta_H^2 \right ) - \frac{(h-3)(h-2)}{2} - h(\beta_H+1) \\ & \geq m - \frac{(h-3)(h-2)}{2} - h(\beta_H+1), \end{split} \] where the last inequality follows from \eqref{eq:boundOnM}. Consequently, \[ \frac{e(G'') - e(G')}{n} \geq \frac{m - \frac{(h-3)(h-2)}{2} - h(\beta_H+1)}{(c-1)m+1+t(h-1)+s} \to \frac{1}{c-1} \] as $n \to \infty$. This completes the proof of Lemma~\ref{lem:noGreyTerritory}, and thus of Theorem \ref{thm:threeClasses}. \end{proof} \section{Purity with one forbidden connected minor} \label{sec:allNonPure} In this section we complete the proof of Theorem~\ref{thm:connpure}, showing that $K_2, K_3, K_4$ and $P_3$ are indeed the only connected graphs yielding pure $H$-free classes of graphs. \begin{lemma} \label{thm:allNonPure} Let $H \notin \{K_2, K_3, K_4, P_3\}$ be a connected graph. Then $\Ex(H)$ is not a pure class of graphs. \end{lemma} By Lemma~\ref{lem.noleaf}, $\Ex(H)$ is not pure if the minimum degree $\delta(H) \geq 2$. By Lemma \ref{lem:separatingVertex} and Observation \ref{obs:claw}, $\Ex(H)$ is not pure if there is a strong separating vertex. Note that, in particular, Lemma \ref{lem:separatingVertex} and Observation \ref{obs:claw} cover all graphs $H$ such that some vertex of $H$ has at least two leaves attached to it. Hence in this section we focus on graphs $H$ with at least one leaf and with no strong separating vertex. \begin{remark} In what follows, for various classes of excluded minors $H$ with $v(H) = h$ we prove that there exists some $n \in \NN$ such that $\gap_H(n) > 0$. Since we consider graphs $H$ with $\delta(H)=1$, this immediately implies that for all $k \geq 0$ we have $\gap_H(n+k(h-1)) > 0$. 
Indeed, a disjoint union of an edge-maximal $H$-free graph $G$ and $k$ copies of $K_{h-1}$ is again an edge-maximal $H$-free graph. \end{remark} \begin{lemma} \label{lem:twoDisjointLeaves} Let $H$ be a graph on $h > 5$ vertices with at least two leaves, and with no strong separating vertex. Then the class of $H$-free graphs is not pure. \end{lemma} \begin{proof} Let $G_1$ be the union of $K_{h-1}$ and an isolated vertex. Clearly $G_1$ is $H$-free and is maximal since $H$ has leaves. Also, $e(G_1) = \binom{h-1}{2}$. Let $G_2$ be formed from a $K_{h-2}$ and a $K_3$ that have one vertex in common. To see that $G_2$ is $H$-free notice that the removal of the common vertex would leave no component of size at least $h-2$. Also, $G_2$ is edge-maximal $H$-free since adding an extra edge would allow us to place two leaves of $H$ in the initial $K_3$. Obviously, $e(G_2) = \binom{h-2}{2}+3$ and $e(G_1) > e(G_2)$ for all $h > 5$. \end{proof} \begin{observation} \label{obs:n=4twoLeaves} The only connected graph $H$ on $4$ vertices with at least two leaves and with no strong separating vertex is $P_4$, the path on $4$ vertices. However, let us show that $\limp(P_4) = \frac12$. Indeed, every edge-maximal $P_4$-free graph has at most one isolated vertex, thus we have $M^-_{P_4}(n) \geq \frac{n-1}{2}$. Also, a perfect matching for $n$ even, or a triangle plus a perfect matching on the remaining $n-3$ vertices for $n$ odd, is edge-maximal $P_4$-free, so $M^-_{P_4}(n) \leq \frac{n+3}{2}$. On the other hand, any component of a $P_4$-free graph must be acyclic or unicyclic, as otherwise it would contain a $C_4$ or a \emph{bowtie graph} (two triangles with one common vertex) as a minor, thus it would not be $P_4$-free. Hence $M^+_{P_4}(n) \leq n$. Since a star on $n$ vertices guarantees $M^+_{P_4}(n) \geq n-1$, we have $\limp(P_4) = 1-\frac12 = \frac12$. 
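For completeness, the four bounds above combine to give the limit explicitly:
\[
\frac{\gap_{P_4}(n)}{n} \geq \frac{(n-1) - \frac{n+3}{2}}{n} = \frac{n-5}{2n} \to \frac12
\qquad \text{and} \qquad
\frac{\gap_{P_4}(n)}{n} \leq \frac{n - \frac{n-1}{2}}{n} = \frac{n+1}{2n} \to \frac12
\]
as $n \to \infty$.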
\end{observation} \begin{observation} \label{obs:n=5twoLeaves} The only connected graph $H$ on $5$ vertices with at least two leaves and with no strong separating vertex consists of a triangle on $\{1,2,3\}$ and two additional edges $\{1,4\}, \{2,5\}$ (it is the so-called \emph{bull graph}). Let us show that $\limp(H) = \frac12$. Since every edge-maximal $H$-free graph has at most one acyclic component, we have $M^-_{H}(n) \geq n-1$. On the other hand, for all $n \geq 5$ the cycle $C_n$ is edge-maximal $H$-free, so $M^-_{H}(n) \leq n$ for all $n \geq 5$. Let us show that $M^+_{H}(n) \leq \frac{3n}{2}$. Let $C$ be a component of size at least $5$ of an edge-maximal $H$-free graph (components of size at most $4$ trivially have edge-to-vertex ratio at most $3/2$). Observe that for any cycle in $C$, at most one vertex of the cycle has degree higher than $2$. Otherwise we can immediately find a bull graph in $C$, or $C$ contains the \emph{diamond graph} ($K_4$ less an edge) as a subgraph (hence also the bull, since $|C| \geq 5$). Thus $C$ is obtained from a tree by adding disjoint cycles, and then identifying one vertex of each cycle with one vertex of the tree. Hence, to maximise the ratio $e(C)/v(C)$ we should take all cycles to be triangles, and the tree to be just one vertex. This gives $e(C) \leq \frac{3(v(C)-1)}{2}$. Therefore $M^+_{H}(n) \leq \frac{3n}{2}$. On the other hand, a disjoint union of $\lfloor n/4 \rfloor$ copies of $K_4$ is $H$-free, so we have $M^+_{H}(n) \geq \frac{3(n-3)}{2}$. This gives $\limp(H) = \frac32-1 = \frac12$. \end{observation} We claim that the only graphs that remain to be checked are graphs $H$ with exactly one leaf $v$ and such that the graph $H' = H - v$ is $2$-connected. Indeed, if $\delta (H') = 1$ then either $H$ has two leaves or the unique vertex of degree $1$ in $H'$ is the neighbour $u$ of $v$ in $H$. In the latter case, let the unique neighbour of $u$ in $H'$ be $w$.
Then either all the components of $H - w$ have size at most $v(H)-3$ (so $w$ is a strong separating vertex), or $H$ is a $P_4$. Since neither of these is possible, we have $\delta (H') \geq 2$. But then, if $H'$ has connectivity $1$ then clearly $H$ has a strong separating vertex. This establishes our claim. Unfortunately, it will require several more steps to deal with the case in the claim. \begin{lemma} \label{lem:cliquePlusALeaf} Let $H$ be a graph on $h \geq 5$ vertices consisting of a clique on $h-1$ vertices and one pendant edge. Then the class of $H$-free graphs satisfies $\limp(H) \geq \frac{h-4}{2}$. \end{lemma} \begin{proof} Let $n = m(h-1)+k$, $0 \leq k \leq h-2$. Let $G_1$ be the union of $m$ disjoint copies of $K_{h-1}$ and one copy of $K_k$. Clearly $G_1$ is edge-maximal $H$-free. Also, $v(G_1) = n = m(h-1)+k$ and \[ e(G_1) = m \binom{h-1}{2}+\binom{k}{2} \leq \frac{h-2}{2} n. \] We construct a denser $n$-vertex graph $G_2$ as follows. We start with a clique on $h-4$ vertices and a cycle $C_{n-h+4}$. We then build a complete bipartite graph between the clique and the cycle (see Figure \ref{figure:cliquePlusALeaf}). To see that $G_2$ is $H$-free note that in order to obtain a clique on $h-1$ vertices we would need to contract the cycle $C_{n-h+4}$ to a triangle, but then we would only have $h-1$ vertices left in the graph. But \[ e(G_2) = \binom{h-4}{2}+n-h+4 + (h-4)(n-h+4) = (h-3) n - \frac{(h-1) (h-4)}{2}. \] Hence \[ \frac{e(G_2) - e(G_1)}{n} \geq \frac{n \left (h-3 - \frac{h-2}{2} \right )-\frac{(h-1) (h-4)}{2} }{n} \to \frac{h-4}{2} \] as $n \to \infty$. 
\end{proof} \begin{figure}[htb] \centering \begin{tikzpicture} \tikzstyle{vertex}=[draw,shape=circle,minimum size=5pt,inner sep=0pt] \node[vertex] (Q-1) at (0,0) {~}; \node[vertex] (Q-2) at (0,1) {~}; \node[vertex] (Q-3) at (0,2) {~}; \node[vertex] (Q-4) at (0,3) {~}; \node[vertex] (Q-5) at (0,4) {~}; \foreach \x/\y in {1/2,2/3,3/4,4/5} { \draw [color=black] (Q-\x) -- (Q-\y); } \draw [color=black] (Q-5) .. controls +(-0.5,0.5) and +(-0.5,-0.5) .. (Q-1); \draw[black] (-1,2) node {$C_5$}; \node[vertex] (P-1) at (3,3) {~}; \node[vertex] (P-2) at (2.75,2) {~}; \node[vertex] (P-3) at (3,1) {~}; \draw [color=black] (P-1) -- (P-2); \draw [color=black] (P-1) -- (P-3); \draw [color=black] (P-2) -- (P-3); \draw[black] (3.5,2) node {$K_3$}; \draw[black] (1,-0.5) node {$G_2$}; \foreach \x in {1,2,3,4,5} \foreach \y in {1,2,3} { \draw [color=black] (P-\y) -- (Q-\x); } \end{tikzpicture} \caption{Graph $G_2$ as defined in Lemma \ref{lem:cliquePlusALeaf} for $h=7$ and $n=8$.} \label{figure:cliquePlusALeaf} \end{figure} \begin{observation} \label{obs:pan} In Lemma \ref{lem:cliquePlusALeaf} we prove linear impurity of $\Ex(H)$ when the clique in $H$ contains at least $4$ vertices. Indeed, when $H$ is the \emph{pan graph} on $4$ vertices, consisting of a triangle and a pendant edge, then $\Ex(H)$ is near-pure with $\gap_H(n) = 1$ for all $n \geq 4$. To see this, observe that every connected component of an $H$-free graph is either a cycle or a tree, and an edge-maximal $H$-free graph has at most one acyclic component (in fact, this component can be any tree except a path $P_m$ on $m \geq 3$ vertices which we could close to a cycle without creating an $H$-minor). \end{observation} \begin{lemma} \label{lem:leafAdjacentToABiggie} Let the connected graph $H$ have exactly one leaf $v$, with neighbour $u$. Let $H' = H - v$ satisfy $\delta' := \delta(H') \geq 2$, and suppose that there is a vertex $w \neq u$ in $H'$ with $\deg_{H'}(w) = \delta'$. Then the class $\Ex(H)$ is not pure. 
\end{lemma} \begin{proof} By Lemma~\ref{lem:cliquePlusALeaf}, we may assume that $H'$ is not complete. Thus $2 \leq \delta' \leq h-3$, and so $h \geq 5$. Let $G_1$ be the graph on vertex set $[h+1]$ constructed as follows. Start with a clique on $\{1,2,\ldots,h-2\}$. Next, for $i=1,2,3$, connect the vertex $h-2+i$ to $1, 2, \ldots, \delta'-2$, as well as to $\delta'-2+i$ (see Figure \ref{figure:leafAdjacentToABiggie}). Clearly, $e(G_1) = \binom{h-2}{2} + 3(\delta'-1)$. To see that $G_1$ is $H$-free, note that it has an independent set of 3 vertices each of degree $< \delta'$, so after one edge-contraction there must still be at least two vertices of degree $< \delta'$. Next we show that $G_1$ is edge-maximal $H$-free. Suppose that we add an edge $e$ to $G_1$, where wlog $e$ is incident to vertex $h-1$. There are now two cases. (a) Suppose that $e$ is incident to $h$ or $h+1$, wlog to $h$. Contract $e$ to form a new vertex called $w$, and place $v$ at $h+1$. If $uw \in E(H)$ then place $u$ at vertex 1; and if not then place $u$ at vertex $\delta'+1$. (b) Suppose that $e$ is incident to a vertex in $\{\delta',\ldots,h-2\}$. Then $e$ is not incident to at least one of vertices $\delta', \delta'+1$, wlog the former. Place $w$ at vertex $h-1$, place $v$ at $h$, and delete vertex $h+1$. If $uw \in E(H)$ then place $u$ at vertex 1; and if not then place $u$ at vertex $\delta'$. We construct the graph $G_2$ as a disjoint union of $K_{h-1}$ and the edge $\{h,h+1\}$. Clearly $G_2$ is edge-maximal $H$-free, and $e(G_2) = \binom{h-1}{2} + 1$. We have $e(G_1) \neq e(G_2)$ unless $\delta' = (h+2)/3$. Note that the smallest value of $h$ for which this could hold with both $h$ and $\delta'$ being integers is $h=7$ (which gives $\delta' = 3$). 
\begin{figure}[htb] \centering \begin{tikzpicture} \tikzstyle{vertex}=[draw,shape=circle,minimum size=5pt,inner sep=0pt] \draw (1.25,0) ellipse (3cm and 1cm); \node[vertex] (R-1) at (-0.25,2) {~}; \draw[black] (-0.25,2.35) node {\footnotesize $h-1$}; \node[vertex] (R-2) at (0.75,2) {~}; \draw[black] (0.75,2.35) node {\footnotesize $h$}; \node[vertex] (R-3) at (1.75,2) {~}; \draw[black] (1.75,2.35) node {\footnotesize $h+1$}; \node[vertex] (Q-1) at (-1,0.25) {~}; \node[vertex] (Q-2) at (-0.5,0.25) {~}; \draw[black] (0,0.25) node {$\cdots$}; \node[vertex] (Q-3) at (0.5,0.25) {~}; \node[vertex] (Q-4) at (1,0.25) {~}; \node[vertex] (Q-5) at (1.5,0.25) {~}; \node[vertex] (Q-6) at (2,0.25) {~}; \node[vertex] (Q-7) at (2.5,0.25) {~}; \draw[black] (3,0.25) node {$\cdots$}; \node[vertex] (Q-8) at (3.5,0.25) {~}; \draw[black] (-1.3,-1) node {$h-2$}; \draw[black] (1.25,-1.5) node {$G_1$}; \draw [decorate,decoration={brace,amplitude=5pt,mirror}, xshift=0pt,yshift=-3pt] (-1.15,0.2) -- (0.6,0.2) node [black,midway,yshift=-0.5cm] {\footnotesize $\delta'-2$}; \foreach \x in {1,2,3} \foreach \y in {1,2,3} \draw (R-\x) -- (Q-\y); \foreach \x/\y in {1/4,2/5,3/6} \draw (R-\x) -- (Q-\y); \end{tikzpicture} \caption{Graph $G_1$ as defined in Lemma \ref{lem:leafAdjacentToABiggie}.} \label{figure:leafAdjacentToABiggie} \end{figure} If we have $\delta' = (h+2)/3$ and $h > 7$, which implies that $\delta'-2 = (h-4)/3 < h-6$, then we alter our constructions of $G_1$ and $G_2$ as follows. We take the graph $G_1'$ consisting of $K_{h-2}$ together with four extra vertices $h-1, h, h+1, h+2$ such that $h-2+i$ is connected to $1, 2, \ldots, \delta'-2, \delta'-2+i$ for $1 \leq i \leq 4$ (observe that $\delta'-2+4 < h-2$). Then we have \[ e(G_1') = \binom{h-2}{2} + 4(\delta'-1) = \binom{h-2}{2} + 4\frac{h-1}{3}. \] We compare $G_1'$ to $G_2'$ formed of disjoint copies of $K_{h-1}$ and $K_3$, which has $e(G_2') = \binom{h-1}{2}+3$. We then obtain \[ e(G_1')-e(G_2') = \frac{h-7}{3} > 0. 
\] In the last remaining case where $h=7, \delta'=3$, we alter the construction a little bit. We build $G_1''$ on $10$ vertices, starting from a Hamiltonian cycle of edges $\{i,i+1\}$ (as usual, we identify vertex $11$ with $1$). Then we add edges to make the even vertices into a clique. Thus we have $e(G_1'') = 20$. Graph $G_1''$ is edge-maximal $H$-free for exactly the same reasons as our previous constructions: the odd vertices all have degree $2 < \delta'$ and form an independent set, while the union of the neighbourhoods of any two of them has size either $3$ or $4$. We take $G_2''$ to be a disjoint union of $K_6$ and $K_4$, so clearly $e(G_2'') = 15+6 = 21$. This completes the proof of the lemma. \end{proof} \begin{lemma} \label{lem:leafAdjacentToATiny} Let $H$ be a graph on $h \geq 6$ vertices with exactly one leaf $v$ and such that the graph $H' = H - v$ has connectivity $k$ for some $2 \leq k \leq h-4$. Also, let the unique neighbour of $v$ in $H$ be $u$. If $\deg_H(u) = \delta(H')+1$ then the class $\Ex(H)$ is not pure. \end{lemma} \begin{proof} We use a similar construction as in Lemma \ref{lem:noLeaves}. Let $A$ and $B$ be $(h-2)$-sets with $|A \cap B|=k-1$. Let $G_1$ be the union of a clique on $A$ and a clique on $B$. This graph clearly has $|A \cup B| =2h-k-3 \geq h+1$ vertices and $2\binom{h-2}{2}-\binom{k-1}{2}$ edges. To see why $G_1$ is $H$-free we note that it is $H'$-free, since we cannot have a model of $H'$ within $A$ or within $B$. On the other hand, adding a single edge to $G_1$ and contracting it gives us a graph on at least $h$ vertices consisting of a union of two cliques on $h-2$ vertices each that overlap in $k$ vertices. Let us show that this graph is not $H$-free. We can obviously find $H'$ in this graph as a subgraph because $H'$ has $h-1$ vertices and connectivity (exactly) $k$. 
The only time we need to worry about being able to add the leaf $v$ to our minor is when all but one of the vertices of $H'$ are located in $A$ and only one in $B \setminus A$ (or vice-versa). But then that one vertex (say vertex $x$) would have degree at most $k$ in $H'$, so $\deg_{H'}(x)= \delta(H') = k$, and now we can place vertex $u$ at $x$. We take $G_2$ to be a disjoint union of $K_{h-1}$ and $K_{h-k-2}$, which is clearly seen to be edge-maximal $H$-free. We have $e(G_2) = \binom{h-1}{2}+\binom{h-k-2}{2}$. The only integer solutions to $e(G_1) = e(G_2)$ are $h=1, k=0$ and $h=5, k=2$: indeed, the equation simplifies to $h(k-1) = k^2+k-1 = (k-1)(k+2)+1$, so $k-1$ must divide $1$, forcing $k=2$ (which gives $h=5$) or $k=0$ (which gives $h=1$). Since here $h \geq 6$ and $k \geq 2$, we have $e(G_1) \neq e(G_2)$. This completes the proof. \end{proof} The next lemma fills one of the gaps left by Lemma \ref{lem:leafAdjacentToATiny}. \begin{lemma} \label{lem:leafPlusAlmostClique} Let $H$ be a graph on $h \geq 6$ vertices with exactly one leaf $v$ and such that the graph $H' = H - v$ has connectivity $h-3$. Then the class $\Ex(H)$ satisfies $\limp(H) \geq \frac{h-5}{2}>0$. \end{lemma} \begin{proof} For each $m \geq 2$, let $n = h-4+2m$ and let the $n$-vertex graph $G_1$ be the union of $m$ cliques, each on $h-2$ vertices, that overlap in a common set of $h-4$ vertices (see Figure \ref{figure:leafPlusAlmostClique}). As in Lemma \ref{lem:leafAdjacentToATiny}, $G_1$ is $H$-free and has size $e(G_1) = \binom{h-4}{2} + m(2(h-4)+1)$. Let $G_2$ be a disjoint union of (as many as possible) cliques on $h-1$ vertices and possibly one smaller clique containing the remaining $k$ vertices, where $0 \leq k \leq h-2$. Then $G_2$ is edge-maximal $H$-free. It is easy to see that, as $n \to \infty$, we have $e(G_1) = (1+o(1))(2(h-4)+1)n/2$, while $e(G_2) = (1+o(1))(h-2)n/2$. Thus \[ \frac{e(G_1)-e(G_2)}{n} \to \frac{h-5}{2}, \] as desired. Note that we do not need $G_1$ to be edge-maximal here, and so for $n = h-4+2m+1$ we can just take $G_1$ plus an isolated vertex.
\end{proof} \begin{figure}[htb] \centering \begin{tikzpicture} \tikzstyle{vertex}=[draw,shape=circle,minimum size=5pt,inner sep=0pt] \draw (0,0) ellipse (1cm and 1cm); \draw[black] (0,0) node {$h-4$}; \foreach \name/\x/\y in {11/-2/0.5, 12/-2/-0.5, 21/2/0.5, 22/2/-0.5, 31/0.5/2, 32/-0.5/2, 41/0.5/-2, 42/-0.5/-2} \node[vertex] (P-\name) at (\x,\y) {~}; \foreach \x/\y in {11/12, 21/22, 31/32, 41/42} \draw (P-\x) -- (P-\y); \foreach \y in {0.4, 0.15, -0.1} \draw (P-11) -- (-0.6,\y); \foreach \y in {-0.4, -0.15, 0.1} \draw (P-12) -- (-0.6,\y); \foreach \y in {0.4, 0.15, -0.1} \draw (P-21) -- (0.6,\y); \foreach \y in {-0.4, -0.15, 0.1} \draw (P-22) -- (0.6,\y); \foreach \x in {0.4, 0.15, -0.1} \draw (P-31) -- (\x,0.6); \foreach \x in {-0.4, -0.15, 0.1} \draw (P-32) -- (\x,0.6); \foreach \x in {0.4, 0.15, -0.1} \draw (P-41) -- (\x,-0.6); \foreach \x in {-0.4, -0.15, 0.1} \draw (P-42) -- (\x,-0.6); \end{tikzpicture} \caption{Graph $G_1$ with $m=4$ as defined in Lemma \ref{lem:leafPlusAlmostClique}.} \label{figure:leafPlusAlmostClique} \end{figure} \begin{observation} \label{obs:h=5oneLeaf} The last remaining graphs that we need to consider are the connected graphs $H$ on $5$ vertices that have exactly one leaf $v$ and are such that $H-v$ is 2-connected but is not a complete graph. Up to isomorphism, there are exactly three such graphs $H$, and they each give rise to classes satisfying $\limp(H)\geq \frac12$. Consider $n>4$. In each case, our `dense' example is the disjoint union of $\lfloor n/4 \rfloor$ copies of $K_4$, together with a copy of $K_t$ where $t = n-4\lfloor n/4 \rfloor$ if $4 \nmid n$, which is an edge-maximal $H$-free graph with $3n/2+O(1)$ edges. \begin{enumerate} \item Let $H_1$ be $C_4$ with an added leaf. Then $C_n$ is an edge-maximal $H_1$-free graph with $n$ edges. Hence $\limp(H_1)\geq \frac12$. \item Let $H_2$ be a diamond ($K_4$ minus an edge), with an added leaf adjacent to a vertex of degree $2$ of the diamond. 
Then the graph obtained from $C_{n-1}$ by adding one vertex and joining it to two adjacent vertices on the cycle is an edge-maximal $H_2$-free graph with $n+1$ edges; and it follows that $\limp(H_2)\geq \frac12$. \item Let $H_3$ be a diamond ($K_4$ minus an edge), with an added leaf adjacent to a vertex of degree $3$ of the diamond. Then the graph obtained from $K_4$ by subdividing one edge $n-4$ times (or equivalently, from $C_{n-1}$ by adding a vertex and joining it to three consecutive vertices on the cycle) is an edge-maximal $H_3$-free graph with $n+2$ edges; and it follows that $\limp(H_3) \geq \frac12$. \end{enumerate} \end{observation} \begin{remark} In fact, it can be shown that $\limp(H_1) = 1/2$, $\limp(H_2) = 1$ and $\limp(H_3) = 2/3$ (see Appendix \ref{app:v5delta1}). \end{remark} This completes the proof of Lemma~\ref{thm:allNonPure}, and thus of Theorem~\ref{thm:connpure}. \section{Forbidding several minors or disconnected minors} \label{sec:generalizations} We start this section by generalising Lemma \ref{lem:noLeaves} to a case where we may have more than one excluded minor, and the excluded minors need not be connected. For our proof to work, the forbidden set $\cH$ needs to satisfy specific and rather strict conditions. Roughly, we require that one component of one excluded minor is `smallest' in several senses. However, cases like $\cH = \{m C_h\}$ (that is, $m$ disjoint copies of the cycle $C_h$) for $h \geq 4$, or $\cH = \{ K_{2,3}, C_5 \}$, can be dealt with using the following result, which shows that in these cases the classes $\Ex(\cH)$ are linearly impure. \begin{lemma} \label{lem:multipleMinors} Let $\cH = \{H_1, H_2, \ldots, H_m\}$ be a set of $m \geq 1$ excluded minors. Let $t_1,\ldots,t_m$ be positive integers. For each $1 \leq i \leq m$, let $H_i = \bigcup_{j=1}^{t_i} H_i^j$; that is, let each graph $H_i$ be a disjoint union of connected graphs $H_i^j$ for $1 \leq j \leq t_i$.
Assume that the following conditions hold: \begin{enumerate} \item $v(H_1) = \min_{1 \leq i \leq m} v(H_i)$ and $v(H_1^1) = \min \left \{ v \left (H_i^j \right ): 1 \leq i \leq m, 1 \leq j \leq t_i \right \} :=h$. \item $\delta(H_1^1) = \min_{1 \leq i \leq m} \delta(H_i) := \delta$ and $\delta$ satisfies $2 \leq \delta \leq v \left (H_1^1 \right )-2$. \item Taking $k_i^j$ to be the connectivity of $H_i^j$ we have $k_1^1 = \min \left \{ k_i^j: 1 \leq i \leq m, 1 \leq j \leq t_i \right \} : = k$. \end{enumerate} Then we have $\limp(\cH) \geq \frac{1}{2h}$. \end{lemma} \begin{proof} The proof of this lemma is nearly identical to the proof of Lemma \ref{lem:noLeaves} when we take $H = H_1^1$. We amend the constructions of graphs $G_1$ and $G_2$ by adding to both of them a clique of size $v(H_1)-1$ and identifying $k-1$ vertices of that new clique with the ``small cut'' $I$ consisting of the central $k-1$ vertices in the previously built graphs. By our assumptions on $\cH$, these graphs are $\cH$-free, and adding an arbitrary edge to the graph creates an $H_1$-minor: we trivially find the graphs $H_1^2, \ldots, H_1^{t_1}$ in the ``large'' clique on $v(H_1)-1$ vertices (if $t_1 \geq 2$), and $H_1^1$ is created like $H$ was in Lemma \ref{lem:noLeaves}. \end{proof} \begin{remark} \label{rem:disjointTriangles} One interesting case that is not covered by Lemma \ref{lem:multipleMinors} is $\Ex(mK_3)$, i.e., the class of graphs with $m$ disjoint triangles excluded for some $m \geq 2$. However, building on the work of Corradi and Hajnal \cite{independentCircuits} on the number of disjoint cycles in graphs of given density, it was observed in \cite{sparseMinors} that every graph $G$ with $e(G) \geq (2m-1)v(G)$ contains $mK_3$ as a minor. Moreover, this bound is asymptotically sharp as demonstrated by the complete bipartite graph $G = K_{2m-1,n-2m+1}$. 
On the other hand, any maximal $mK_3$-free graph can have at most one acyclic component, so $M^-_{mK_3}(n) \geq n-1$, and by analysing $G$ constructed from $K_{3m-1}$ by adding $n-3m+1$ pendant edges we see that $M^-_{mK_3}(n) = n+O(1)$. Hence we can conclude that $\limp(mK_3) = 2m-2$ for all $m \geq 2$. \end{remark} So far we have seen only two graphs $H$ such that the class $\Ex(H)$ is near-pure, namely the claw and the pan graph. However, in both cases $\gap_H(n) \leq 1$, and it is unclear whether there are more connected graphs $H$ such that $\Ex(H)$ is near-pure, and if the answer to that question is positive, whether $\gap_H(n)$ can take arbitrarily large values (or, in fact, any value larger than $1$). Larger gaps do occur when we forbid more complex sets of graphs. In the following proposition we only take $t \geq 16$ to avoid complications in the statement that would make the conclusions more difficult to observe. \begin{prop} \label{prop:excludingStars} Let $t \geq 16$ be an integer, and let $\cH = \{K_{1,t}, 2K_{1,3} \}$. Then the class $\Ex(\cH)$ is near-pure with $t-10 \leq \gap_{\cH}(n) \leq t-1$. \end{prop} \begin{proof} We first claim that for all $n \geq 1$, every edge-maximal $\cH$-free graph $G$ satisfies $e(G) \geq n-1$. Indeed, every $\cH$-free graph must have at most one component that is neither a cycle nor a path, to avoid creating a $2K_{1,3}$-minor. Consequently, a maximal $\cH$-free graph has at most one acyclic component, because we could connect one of the endpoints of any path to a leaf of any other tree without creating any of the forbidden minors. Now let $G$ be an edge-maximal $\cH$-free graph on $n \geq 4$ vertices. Let $\Delta = \Delta(G)$ denote the maximum degree of a vertex in $G$. Clearly $\Delta \geq 3$. We consider three cases -- when $3 \leq \Delta \leq 5$, $\Delta=6$ and $\Delta \geq 7$.
In each case, let $v$ be a vertex of degree $\Delta$, let $V_i$ denote the set of vertices at distance $i$ from $v$ in $G$, and let $W_2$ denote $\bigcup_{i \geq 2} V_i$, the set of vertices at distance at least $2$ from $v$ (recall that $G$ might not be connected). Suppose first that $3 \leq \Delta \leq 5$. Each vertex in $V_1$ can have at most 2 edges to $V_2$ (this is immediate if $\Delta = 3$, while for $\Delta \geq 4$ it follows from the fact that otherwise we would have a $2K_{1,3}$-minor), so there are at most $2\Delta$ edges between $V_1$ and $V_2$. Similarly, each vertex in $W_2$ can have at most 2 edges to vertices in $W_2$. Hence the degree sum is at most \[ (\Delta+1) \cdot \Delta +2\Delta +(n-\Delta-1) \cdot 2=2n+(\Delta+1)\Delta-2 \leq 2n+28. \] If $\Delta = 6$ then each vertex in $W_2$ has degree at most 2, so the degree sum is at most $7 \cdot 6 + (n-7)\cdot 2=2n+28$. (Observe that the disjoint union of $K_7$ and $C_{n-7}$ achieves this bound.) If $\Delta \geq 7$ then also each vertex in $V_1$ has degree at most 3, so the degree sum is at most \[ \Delta+ \Delta \cdot 3 + (n-\Delta-1) \cdot 2 = 2n + 2 \Delta -2; \] and since also $\Delta \leq t-1$ this is at most $2n+2t-4$. Thus for $\Delta \leq 6$ we have $e(G) \leq n+14$, and for $\Delta \geq 7$ we have $e(G) \leq n+t-2$. Hence, since $t \geq 16$, we always have $e(G) \leq n+t-2$. It follows that $\gap_{\cH}(n) \leq n+t-2 -(n-1) \leq t-1$. The upper bound $n+t-2$ is achieved. Let $G'$ be the disjoint union of the $t$-vertex wheel (a~$C_{t-1}$ plus a central vertex) and $C_{n-t}$. Then $G'$ is (edge-maximal) $\cH$-free, and $e(G')=n+t-2$. We now find a much sparser edge-maximal $\cH$-free graph. Start with $K_5$, choose two vertices $u$ and $v$ in the $K_5$, and add two new vertices $x$ and $y$, both adjacent to both of $u$ and $v$ (this gives a total of $10+4$ edges so far). Finally we add $n-7$ vertices which form a path of $n-6$ edges between $x$ and $y$.
The resulting graph is edge-maximal $\cH$-free with $n+8$ edges. Hence $\gap_{\cH}(n) \geq n+t-2 -(n+8) =t-10$. \end{proof} \section{Concluding remarks and open problems} When the connected graph $H$ satisfies $\delta(H)=1$, a natural example of a leaf-and-edge-maximal $H$-free graph is a union of disjoint copies of $K_{h-1}$, where $h=v(H)$. It often turns out to be a ``dense'' example of such a graph, though in some cases we can find denser $H$-free graphs (see, e.g., Lemmas \ref{lem:cliquePlusALeaf} and \ref{lem:leafPlusAlmostClique}). In general, it appears that graphs with minimum degree $1$ cause the most trouble when analysing purity, as illustrated in the following example. \begin{example} Let the graph $H$ on $h \geq 6$ vertices consist of a clique on $h-2$ vertices and two pendant, non-incident edges. Two obvious examples of edge-maximal $H$-free graphs are a union of disjoint cliques each on $h-1$ vertices, and a union of cliques each on $h-2$ vertices that share one common vertex. It is easy to check that in both cases the density of these graphs tends to $(h-2)/2$ as the number of cliques constituting them tends to infinity. As finding other edge-maximal $H$-free graphs appears non-trivial, this might suggest that $\Ex(H)$ is near-pure. This is, however, not true: comparing the following sparse construction with the disjoint union of copies of $K_{h-1}$ (together with a smaller clique if necessary) will show that we have \begin{equation} \label{eqn.twoleaves} \limp(H) \geq \frac{h-4}{2} \end{equation} for all $h \geq 6$. Let $G'$ be a subdivision of $K_{h-2}$, obtained from $K_{h-2}$ by subdividing every edge at least once. Let $H^-$ be $H$ less a leaf (that is, $K_{h-2}$ plus one pendant edge). Clearly, adding an edge joining two vertices created through subdivisions of the same edge of $K_{h-2}$ creates an $H^-$-minor in $G'$.
In fact, by case analysis, it is easy to check that $G'$ is leaf-and-edge-maximal $H^-$-free (it is enough to check it for $h=6$ because the edge we add to $G'$ can be ``wrapped'' in a $K_4$ containing it). Importantly, when we add an edge to $G'$ then we always have at least two choices of an original vertex of $K_{h-2}$ to which we can attach a leaf of the $H^-$-minor (see Figure \ref{figure:cliquePlusTwoLeafs}). For $n$ large enough, it is then enough to take a union of two such (not necessarily identical) subdivisions of $K_{h-2}$ of sizes that sum up to $n+1$, and connect them by picking an original vertex of $K_{h-2}$ from each subdivided graph and identifying them. The resulting graph is edge-maximal $H$-free with density tending to $1$ as $n$ tends to infinity. This establishes~\eqref{eqn.twoleaves}, and completes the example. \begin{figure}[htb] \centering \begin{tikzpicture}[scale = \subdivisionscale] \tikzstyle{vertex}=[draw,shape=circle,minimum size=5pt,inner sep=0pt] \node[vertex] (P-1) at (0,0) {~}; \node[vertex] (P-2) at (0,2) {~}; \node[above = 0.6em, left = 0.1em] () at (0,2) {$1$}; \node[vertex] (P-3) at (2,2) {~}; \node[vertex] (P-4) at (2,0) {~}; \node[below = 0.6em, right = 0.1em] () at (2,0) {$2$}; \node[vertex] (Q-11) at (0,2/3) {~}; \node[vertex] (Q-12) at (0,4/3) {~}; \node[vertex] (Q-21) at (1/2,0) {~}; \node[vertex] (Q-22) at (1,0) {~}; \node[vertex] (Q-23) at (3/2,0) {~}; \node[vertex] (Q-3) at (2,1) {~}; \node[right = 0.2em] () at (2,1) {$b$}; \node[vertex] (Q-4) at (1,2) {~}; \node[above = 0.2em] () at (1,2) {$a$}; \node[vertex] (Q-5) at (1,1) {~}; \node[left = 0.6em, below = 0.2em] () at (1,1) {$c$}; \node[vertex] (Q-6) at (2.75,-0.75) {~}; \foreach \x/\y in {1/11, 1/21, 2/12, 2/4, 2/5, 3/4, 3/3, 4/23, 4/3, 4/5} { \draw [color=black] (P-\x) -- (Q-\y); } \draw [color=black] (Q-11) -- (Q-12); \draw [color=black] (Q-21) -- (Q-22); \draw [color=black] (Q-22) -- (Q-23); \draw [color=black] (P-1) .. controls +(1,-1) and +(-0.5,-0.5) .. 
(Q-6); \draw [color=black] (P-3) .. controls +(1,-1) and +(0.5,0.5) .. (Q-6); \draw [color=black, dashed] (Q-3) -- (Q-4); \end{tikzpicture} \caption{Graph $G'$ for $h=6$ which is a subdivision of $K_4$. When, for example, we add the edge $\{a,b\}$ to $G'$, we can contract $\{1,a\}$ and $\{2,b\}$ and delete either $\{1,c\}$ or $\{2,c\}$, hence finding a minor consisting of a $K_4$ and a pendant edge attached to either $1$ or $2$.} \label{figure:cliquePlusTwoLeafs} \end{figure} \end{example} Let us recall the definition of the set \[ E_\cH(n) = \{ e(G) : v(G) = n \text{ and } G \text{ is an edge-maximal $\cH$-free graph} \}. \] The main objects of study in this paper were the extreme values of the set $E_\cH(n)$, i.e., $M^-_\cH(n)$ and $M^+_\cH(n)$. However, once we know that $\Ex(\cH)$ is not pure (i.e., that $M^-_\cH(n) \neq M^+_\cH(n)$ for some $n$), we can ask additional questions about the structure of $E_\cH(n)$. As a test case, let us consider $\cH = \{K_5\}$. Recall again the result of Wagner, who proved that edge-maximal $K_5$-free graphs are obtained as $2$- or $3$-clique-sums of planar graphs and of the Wagner graph $W_8$ (the sums must be ``maximal''; in particular, we only take a $2$-clique-sum of two graphs along an edge if that edge is not in any triangle in at least one of those graphs). Consequently, taking clique-sums of only planar graphs always leads to edge-maximal $K_5$-free graphs on $n$ vertices with $3n-6$ edges. Therefore, the first interesting case is $n=8$. The only possible edge-numbers of edge-maximal $K_5$-free graphs on $8$ vertices are $12$ (the Wagner graph $W_8$) and $18$ ($3$-clique-sums of planar graphs). For $n=9$ these edge-numbers are $14$ ($W_8$ glued to a triangle) and $21$, while for $n=10$ it can be $16$ ($W_8$ plus two triangles glued to different edges of $W_8$), $17$ ($W_8$ and $K_4$ glued along an edge) or $24$.
Continuing this way, for $n=14$, we can build edge-maximal $K_5$-free graphs with any number of edges between $23$ and $29$, as well as $36$. More generally, taking $0 \leq i \leq 5$ and $n = 6k+2+i$ large, we have $M^-_{K_5}(n) = \frac{11(n-2-i)}{6}+1+2i$, and $E_{K_5}(n)$ contains all values between $M^-_{K_5}(n)$ and $3n-13$ (obtained, e.g., using one copy of $W_8$ glued along an edge with a triangulation on $n-6$ vertices), as well as $3n-6$. Hence in general $E_\cH(n)$ does not form an interval, but do we always have $\gap_\cH(n) - |E_\cH(n)| = O(1)$, or at least is it always the case that if $\Ex(\cH)$ is linearly impure then $|E_\cH(n)| / \gap_\cH(n) \to 1$ as $n$ tends to infinity? We have determined the complete list of four connected graphs $H$ leading to pure minor-closed classes $\Ex(H)$. For connected $H$ we also know that $\Ex(H)$ is linearly impure if \begin{itemize} \item $\delta(H) \geq 2$, see Lemma \ref{lem:noLeaves}, or \item $H$ has a strongly separating vertex (except for the claw $K_{1,3}$), see Lemma \ref{lem:separatingVertex}, or \item $H$ is the path $P_4$ (Observation \ref{obs:n=4twoLeaves}), the bull graph (Observation \ref{obs:n=5twoLeaves}), or a clique on at least four vertices with one additional leaf (Lemma \ref{lem:cliquePlusALeaf}) or two additional leaves (see the discussion at the beginning of this section), or \item $H$ consists of a clique on at least five vertices minus a matching, plus a pendant edge, see Lemma \ref{lem:leafPlusAlmostClique}, or \item $H$ is one of the three graphs discussed in Observation \ref{obs:h=5oneLeaf}. \end{itemize} Additionally, we know that $\Ex(H)$ is near-pure with $\gap_H(n) = 1$ if $H$ is the claw (Observation \ref{obs:claw}) or the pan graph (Observation \ref{obs:pan}). What about the remaining connected graphs $H$ which are not pure? Are there any more near-pure minor-closed classes $\Ex(H)$ for some connected graph $H$? Can we find an example such that $\gap_H(n) \geq 2$ for some $n$?
We defined $\limp(\cH) = \liminf_{n \to \infty} \gap_{\cH}(n)/n$. Theorem~\ref{thm:AddableLimit} says that $\gap_{\cH}(n)/n$ tends to a limit if all graphs in $\cH$ are $2$-connected, so that in this case we could define $\limp(\cH)$ as the limit of $\gap_{\cH}(n)/n$. Do we always have $\gap_{\cH}(n)/n \to \limp(\cH)$? Finally, what about minor-closed classes with two or more connected excluded minors, whose analysis we started in Section \ref{sec:generalizations}: which are the pure classes, and is every such class pure, near-pure or linearly impure? For example, the classes $\Ex(K_5, K_{3,3})$ of planar graphs, $\Ex(K_3, K_{1,3})$ of `forests of paths', $\Ex(2K_2, K_{3})$ of graphs consisting of a star together with isolated vertices, $\Ex(\text{Diamond}, \text{Bowtie})$ of graphs consisting of unicyclic and acyclic components, and $\Ex(K_4, K_{2,3})$ of outerplanar graphs are all pure; while for all $t \geq 5$, the class $\Ex(C_t, K_{1,3})$, in which each component is a path or a short cycle, is near-pure with $\gap(n) = 1$ for all $n \geq \max\{t,6\}$ (two examples of $\{C_t, K_{1,3}\}$-free edge-maximal graphs are a path on $n$ vertices with $n-1$ edges, and a disjoint union of copies of $C_3$ and $C_4$ with a total of $n$ vertices and $n$ edges, which exists for all $n \geq 6$ by Fact \ref{fact:frobeniusNumber}). Note that $\Ex(C_4, K_{1,3})$ is an interesting case with $\gap(3k) = 1$ for all $k \geq 2$, and $\gap(n) = 0$ otherwise. Obviously, similar questions could be asked about excluding disconnected minors. \paragraph{Acknowledgements.} We would like to thank Andrius Vaicenavicius for stimulating discussions during the course of this work.
\begin{appendices} \section{Connected graphs $H$ on $5$ vertices with $\delta(H)=1$} \label{app:v5delta1} In this appendix we refine the analysis in Observation \ref{obs:h=5oneLeaf}: in the following three propositions, we study the purity of the connected graphs $H$ on $5$ vertices that have exactly one leaf $v$ and are such that $H-v$ is $2$-connected but not a complete graph. \begin{prop} \label{prop:C4plusLeaf} Let $H$ be $C_4$ with an added leaf. Then $\gap_{H}(n)= \tfrac12 n +O(1)$ and so $\limp(H) = \frac12$. \end{prop} \begin{proof} Any edge-maximal $H$-free graph has at most one acyclic component, thus $M^-_H(n) \geq n-1$. Also, for $n \geq 6$, the cycle $C_{n-1}$ together with an isolated vertex is an edge-maximal $H$-free graph with $n-1$ edges. Hence $M^-_H(n) = n-1$ for $n \geq 6$. On the other hand, a disjoint union of $\lfloor n/4 \rfloor$ copies of $K_4$, together with a copy of $K_t$ where $t = n-4\lfloor n/4 \rfloor$ if $4 \nmid n$, is an edge-maximal $H$-free graph with $3n/2+O(1)$ edges. Hence we have $M^+_H(n) \geq 3n/2+O(1)$. It remains to show that every $H$-free graph $G$ has $e(G) \leq 3v(G)/2+O(1)$. Let $C$ be a component of $G$. If $v(C) \leq 4$ then clearly $e(C) \leq 3v(C)/2$. If $v(C) > 4$ and $C$ is not $C_4$-free then $C$ must be a cycle, hence $e(C) = v(C)$. Finally, if $C$ is $C_4$-free then each block of $C$ is an edge or a triangle, so $e(C) \leq (3v(C)-3)/2$. (Recall that a \emph{block} of a graph is a maximal connected subgraph that has no cut-vertex.) This implies that $M^+_H(n) \leq 3n/2$, and so $M^+_H(n) = 3n/2 +O(1)$. Thus we have $\gap_{H}(n)= n/2 +O(1)$. \end{proof} \begin{prop} \label{prop:diamondPlusLeaf1} Let $H$ be a diamond ($K_4$ minus an edge), with an added leaf adjacent to a vertex of degree $2$ of the diamond. Then $\gap_H(n) = n-3$ for each $n \geq 6$ and so $\limp(H) = 1$. \end{prop} \begin{proof} Let $G$ be an edge-maximal $H$-free graph. Then at most one component of $G$ contains at most one cycle.
Further, any acyclic component has at most two vertices. So $M^-_H(n) \geq n$ for $n \geq 3$. Now let $n \geq 6$ and let $G$ be formed from an $(n-3)$-cycle and three non-incident pendant edges. It is easy to check that $G$ is an edge-maximal $H$-free graph. Thus $M^-_H(n) = n$ for $n \geq 6$. If $G$ is a $K_{2,n-2}$ with an extra edge joining the two vertices in the small class then $G$ is edge-maximal $H$-free, and so $M^+_H(n) \geq 2n-3$. Let us show that any $H$-free graph $G$ with $v(G) \geq 5$ satisfies $e(G) \leq 2v(G) -3$. Let $C$ be a component of $G$. If $C$ is $K_4$-free then we are done. So assume that $C$ is not $K_4$-free; then it has a subgraph homeomorphic to $K_4$. We observe that this subgraph must span $C$ (as otherwise we would have an $H$-minor). Also, if any of the edges of the $K_4$ were subdivided, this would also create an $H$-minor. Therefore $C$ must be a $K_4$. Hence $M^+_H(n)=2n-3$ for each $n \geq 2$, except $M^+_H(4)=6$. Thus $\gap_H(n) = n-3$ for each $n \geq 6$. \end{proof} \begin{prop} \label{prop:diamondPlusLeaf2} Let $H$ be a diamond, with an added leaf adjacent to a vertex of degree $3$ of the diamond. Then $\gap_{H}(n)= \tfrac23 n+O(1)$ and so $\limp(H) = \tfrac23$. \end{prop} \begin{proof} We argue as before for $M^-_H(n)$. Let $G$ be an edge-maximal $H$-free graph. Then at most one component of $G$ contains at most one cycle, and any acyclic component has at most two vertices. So $M^-_H(n) \geq n$ for $n \geq 3$. Now let $n \geq 7$ and let $G$ be formed from an $(n-3)$-cycle and three non-incident pendant edges. It is easy to check that $G$ is an edge-maximal $H$-free graph. Thus $M^-_H(n) = n$ for $n \geq 7$. Now, let $n = 3k + 1 + i \geq 6$ for some $i \in \{0,1,2\}$, and consider the graph obtained as follows: take $k$ copies of the diamond graph ($K_4$ minus an edge) and a $K_{i+1}$ (if $i > 0$), and connect them into one graph by identifying one vertex of degree $2$ from every diamond and an arbitrary vertex of the $K_{i+1}$ into a single vertex.
The resulting graph is edge-maximal $H$-free with $5(n-1-i)/3+\binom{i+1}{2}$ edges, hence $M^+_H(n) \geq 5n/3+O(1)$. Proving the inequality in the other direction requires a bit more work. Let $G$ be an edge-maximal $H$-free graph. Suppose $G$ has a component $C$ with a $K_4$-minor. Then $C$ must be a subdivision of $K_4$. Indeed, $C$ has a subgraph which is a subdivision of $K_4$, and it is easy to see that this subgraph must be spanning. By case analysis we may check that no edge can be added, or we would obtain $H$ as a minor. So we have $e(C) \leq 3 v(C)/2$. Now, let $C'$ be a block of $G$ with no $K_4$-minor, and suppose that $C'$ is not just an edge or a cycle. Then $C'$ contains a subgraph $D$ which is a subdivision of the diamond graph (since $C'$ is $2$-connected). Then $D$ consists of two vertices, $a$ and $b$, and three internally vertex-disjoint $ab$-paths $P_1, P_2, P_3$ (one of which may be just an edge). We claim that $D$ is spanning in $C'$. For suppose there is a vertex of $C'$ not in $D$. Then, since $C'$ is $2$-connected, there are distinct vertices $u$ and $v$ in $D$ and a path $P_{out}$ of length at least $2$ between them outside $D$. Let $u_{out}$ be the neighbour of $u$ on $P_{out}$. Clearly $\{u,v\} \cap \{a,b\} = \emptyset$ (or we would obtain $H$ as a minor). Also, $u$ and $v$ must be on the same path $P_i$ (or we would obtain a $K_4$-minor). Now $u$ is not adjacent in $P_i$ to at least one of $a, b$, say not adjacent to $b$. If we contract the segment of $P_i$ between $u$ and $a$ to a new vertex $a'$, we obtain a copy of $D$ with three paths between $a'$ and $b$, plus the edge $a'u_{out}$; and so we have an $H$-minor, a contradiction. We now know that $D$ spans $C'$. We claim that $C'$ is in fact equal to $D$. Indeed, suppose that there is an extra edge $xy$ in $C'$. This edge cannot be between internal vertices on distinct paths $P_i$ (or we get $K_4$ as a minor).
If $\{x,y\}=\{a,b\}$ then each path $P_i$ has length at least $2$ and we find an $H$-minor. If $\{x,y\} \cap \{a,b\}$ is a singleton, without loss of generality $x=a$, then we find $H$ (with the ``extra'' vertex being the neighbour of $a$ on the path $P_i$ from $a$ to $y$). The last case is when $x$ and $y$ are internal vertices of the same path $P_i$, without loss of generality appearing along $P_i$ in the order $a,x,y,b$ (with some vertices in between, in particular between $x$ and $y$). Now we can contract the segment of $P_i$ between $a$ and $x$, and we are back in the case when $\{x,y\} \cap \{a,b\}$ is a singleton. We have now seen that $C' = D$. Hence $e(C') \leq (5/3)(v(C')-1)$. Thus each block $B$ of $G$ with no $K_4$-minor has $e(B) \leq (5/3)(v(B)-1)$. Hence each component $C$ of $G$ with no $K_4$-minor has $e(C) \leq 5(v(C)-1)/3$. We may also have components $\tilde{C}$ which are subdivisions of $K_4$, and then $e(\tilde{C})< 5v(\tilde{C})/3$. Hence $e(G)< 5v(G)/3$, so $M^+_H(n)< 5n/3$. We have now seen that $M^+_H(n)= 5n/3+O(1)$, so $\gap_{H}(n)= 2n/3+O(1)$. This completes the proof of the proposition. \end{proof} \end{appendices}
\section{Introduction} Although Newton's law of gravity and Einstein's general relativity have given us a marvelous understanding of gravity, gravity remains one of the most mysterious problems in the whole of science \cite{Weinberg}. It is widely believed that the unification of gravity with quantum mechanics, and the unification of gravity with the other three fundamental forces, will remain out of reach in the foreseeable future. One of the main reasons is that anomalous effects observed in terrestrial experiments are extremely scarce. To observe possible anomalous effects, Adler, Mueller and Perl \cite{Perl} recently proposed a terrestrial search for dark contents of the vacuum using atom interferometry, somewhat in the spirit of the Michelson-Morley experiment. Recently, Verlinde's work \cite{Verlinde} has renewed enthusiasm \cite{add,add2} for the possibility that gravity is an entropic force, rather than a fundamental force \cite{Jacobson}. Of course, if gravity is not a fundamental force, we should reconsider the unification of quantum mechanics and gravity. Inspired by the thermodynamic origin of gravity for classical objects, we consider several fundamental problems: (i) Why is the gravity between two classical objects attractive? (ii) Does there exist a new and observable quantum gravity effect? (iii) Is there a new clue to the mechanism of the accelerating universe? In this paper, the quantum effect of gravity is studied based on the general principle that gravity originates from the coupling and thermal equilibrium between matter and the vacuum background. For classical particles, this general principle yields Newton's law of gravitation. For particles described by quantum wave packets, we predict an abnormal quantum effect of gravity. Based on this abnormal quantum gravity effect, we consider a possible origin of dark energy from the coupling and thermal equilibrium between matter and the vacuum background.
Quite surprisingly, the ratio of the dark energy obtained in our simple calculation is $2.2$, which agrees well with the value $7/3$ inferred from various astronomical observations \cite{d1,d2,d3,d4,d5}. Our work also shows that, with a sphere filled with superfluid helium and a gravimeter placed inside it, there is a feasible experimental scheme to test our idea. Sensitivities of $\Delta g/g$ below $10^{-8}$ would suffice for this test; this is within reach of present atom interferometers \cite{AI}, free-fall absolute gravimeters \cite{Freefalling} and superconducting gravimeters \cite{superconductor}. The paper is organized as follows. First, in Sec. II, we consider the change of entropy for a particle undergoing a displacement in space. For a particle with an acceleration, we derive the vacuum temperature due to the coupling between matter and vacuum. Based on the consideration of local thermal equilibrium, we give the acceleration of a particle in the presence of a finite vacuum temperature field distribution. In Sec. III, we give a derivation of Newton's law of gravitation. In particular, we explain the physical mechanism of the attractive gravity between two classical objects. In Sec. IV, we briefly discuss the physical mechanism of the equivalence principle based on the thermodynamic origin of gravity. In Sec. V, we consider an abnormal quantum gravity effect for a particle described by a wave function in quantum mechanics. In Sec. VI, we consider an experimental scheme to test this abnormal quantum gravity effect with a superfluid helium sphere. The application of this abnormal quantum gravity effect to testing the many-worlds interpretation and de Broglie-Bohm theory is discussed, as is its application to condensed matter physics. In Sec. VII, we give the field equation including the quantum gravity effect of vacuum excitations.
This gives a possible interpretation of the repulsive gravity effect of dark energy. In Sec. VIII, we calculate the dark energy density from the general principle of this work. In Sec. IX, the general field equation including the classical and quantum gravity effects of matter and radiation is given. A summary and discussion are given in the last section. \section{Entropy, vacuum temperature and the inertia law} Compared with electromagnetism, the weak interaction and the strong interaction, gravitation has several distinctive features. (i) Gravitation is universal. (ii) The gravitational ``charge'' is in a sense the energy-momentum tensor; hence, the gravitational ``charge'' is not quantized. (iii) The coupling between the energy-momentum tensor and spacetime leads to a gravity force on another particle. (iv) The laws of gravity closely resemble the laws of thermodynamics and hydrodynamics \cite{Bekenstein,Bardeen,Hawking,Davies,Unruh,Pand}. These features strongly suggest that gravitation deserves to be studied with completely different ideas than the other forces. It is well known that the forces in classical and quantum gases can be understood in a natural way with statistical mechanics. Following the intensive pioneering works suggesting that gravitation is not a fundamental force, we study the thermodynamic origin of gravitation, and in particular the quantum gravity effect that arises when both quantum mechanics and thermodynamics are considered. Although the theoretical prediction of the abnormal quantum gravity effect involves a number of subtle problems, our starting points are the following two formulas: one for the change of entropy under a displacement of a particle, and one for the vacuum temperature due to the vacuum excitations induced by an accelerating object. (1) The formula for the change of entropy $S$ after a displacement $x$ of a particle with mass $m$:
\begin{equation} S=2\pi k_{B}\frac{mc}{\hbar}x.\label{entropy}\end{equation} In the original work by Verlinde \cite{Verlinde}, this postulate, motivated by Bekenstein's work \cite{Bekenstein} on black holes and entropy, plays a key role in deriving Newton's law of gravitation. The above formula means that after a displacement $x$ of a particle with mass $m$, there is an entropy increase $S$ for the whole system. To understand this formula, we emphasize three aspects. (i) The whole system whose entropy is considered includes the vacuum background. (ii) There is a strong coupling between the particle and the vacuum background. This can be understood after a little thought: without a strong coupling, it would be meaningless to define the location and time of a particle existing in spacetime (${\it {i.e.}}$, the vacuum background). (iii) As a medium for all matter, the zero-point (or ground-state) energy density of the vacuum background is extremely large in standard quantum field theory: it is of the order of $10^{122}\,eV/cm^{3}$ if the energy cutoff is taken at the Planck energy. This also explains why there is a strong coupling between matter and vacuum. In a sense, the motion of a particle in the vacuum background is a little like a speedboat moving on the sea. The speedboat leaves behind a navigation path in the sea. After a navigation of distance $x$, the speedboat stops; after waiting a sufficiently long time, the navigation path in the sea can no longer be identified. If the location resolution along the navigation path is $l_{c}$, about $x/l_{c}$ bits of information are lost. In this situation, there is an entropy increase $S\sim k_{B}x/l_{c}$. For matter (similar to the speedboat) moving in the vacuum background (similar to the sea), it is similarly natural to expect the relation $S\sim k_{B}x/l_{c}$, with $l_{c}$ the location resolution as seen by the vacuum background.
Here we describe a method to estimate the location resolution $l_{c}$. From special relativity, the rest energy of the particle is $E=mc^{2}$. Together with quantum mechanics, the corresponding eigenfrequency is $\omega=E/\hbar=mc^{2}/\hbar$. The various gauge fields in the vacuum propagate at the velocity of light $c$; hence, the relative velocity between matter and a gauge field in the vacuum is $c$. In this situation, we obtain the coherence length $l_{c}$ of the particle as seen by the vacuum background: by the standard quantum-mechanical relation, $l_{c}=2\pi c/\omega=2\pi\hbar/mc$. This coherence length can also be regarded as the location resolution as seen by the vacuum background. More detailed discussions of this coherence length and the entropy are given in the Appendix. (2) The vacuum temperature induced by a uniformly accelerating object in the vacuum background. Consider a particle with acceleration $\mathbf{a}$, so that \begin{equation} x_{j}=\frac{a_{j}t^{2}}{2}.\label{x}\end{equation} Here $j=1,2,3$; in this paper, all bold symbols represent vectors. From Eq. (\ref{entropy}), we have \begin{equation} dS=\frac{2\pi k_{B}mc}{\hbar}\sqrt{\sum_{j}\left(a_{j}t\right)^{2}}dt.\label{ds}\end{equation} In addition, from $E=\sqrt{m^{2}c^{4}+p^{2}c^{2}}$, we have\begin{equation} dE\approx m\sum_{j}a_{j}a_{j}tdt.\label{de}\end{equation} Using the fundamental thermodynamic relation $dE=T_{V}dS$ for the whole system including the vacuum background, we have\begin{equation} k_{B}T_{V}\approx\frac{\hbar}{2\pi c}\frac{\sum_{j}a_{j}^{2}}{\sqrt{\sum_{j}a_{j}^{2}}}.\label{t}\end{equation} In this process, there is no entropy increase for the particle itself; the entropy increase comes from the vacuum. Hence, $T_{V}$ refers to the vacuum temperature at the location of the particle.
Because temperature is a concept of statistical average, rigorously speaking, the right-hand side of the above expression should be written as a statistical average, ${\it {i.e.}}$,\begin{equation} k_{B}T_{V}\approx\frac{\hbar}{2\pi c}\frac{\sum_{j}\left\langle a_{j}^{2}\right\rangle }{\left\langle \sqrt{\sum_{j}\left(a_{j}\right)^{2}}\right\rangle }.\label{eq:statemperature}\end{equation} If the fluctuations of the acceleration can be neglected, we have\begin{equation} T_{V}\approx\frac{\hbar\left\vert \mathbf{a}\right\vert }{2\pi k_{B}c}.\label{Unruh}\end{equation} This shows that the acceleration of a particle induces vacuum excitations, and thus leads to a finite vacuum temperature. Although the above formula is the same as the Unruh temperature \cite{Unruh}, its physical meaning is in a sense different from that of the Unruh effect: in the present work, the temperature $T_{V}$ denotes the vacuum temperature due to the vacuum excitations. Because the Unruh effect itself involves a number of subtle problems, in this paper we will not discuss the detailed difference between $T_{V}$ and the Unruh temperature; we will simply apply the above expression with our understanding of $T_{V}$. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Slide01} \caption{(Color online) The red line shows a vacuum temperature field distribution. The red spheres show two classical particles at locations A and B. For an acceleration $\mathbf{a}_{B}$ of the particle B, the acceleration induces a temperature field distribution shown by the blue dashed line, with the peak value given by Eq. (\ref{eq:U2}). Local thermal equilibrium requires that this peak value equal the temperature of the vacuum temperature field (red line) at location B. This gives the physical mechanism by which the particle B accelerates in the presence of a finite vacuum temperature field.} \end{figure} The meaning of the above equation is further illustrated in Fig. 1.
For a particle at location B with acceleration $\mathbf{a}_{B}$ (shown by the dashed red arrow), the coupling between the particle and the vacuum establishes a temperature field distribution shown by the dashed blue line, with peak value $T_{V}\left(\mathbf{a}_{B}\right)=\hbar\left\vert \mathbf{a}_{B}\right\vert /2\pi k_{B}c$. In a sense, the strong coupling between the particle and the vacuum leads to a ``dressed'' state comprising the local vacuum excitations and the particle itself. If the particle has no size, the width of the local vacuum excitations is of the order of the Planck length, by the same considerations as in the derivation of Newton's law of gravitation in the following section. In Fig. 1, there is a temperature field distribution shown by the red line, and at location B there is a particle denoted by a red sphere. To establish local thermal equilibrium, the red sphere will accelerate so that the peak temperature of the dressed state equals the temperature of the vacuum temperature field at location B, ${\it {i.e.}}$, $T_{V}\left(\mathbf{a}_{B}\right)=T_{V}\left(B\right)$. In this situation, we have\begin{equation} \left\vert \mathbf{a}_{B}\right\vert \approx\frac{2\pi k_{B}cT_{V}\left(B\right)}{\hbar}.\label{eq:U2}\end{equation} However, the acceleration is a vector, while the temperature is a scalar; therefore, the above formula does not determine the direction of the acceleration. We further consider the free energy of the whole system, defined as $F_{fe}=U-T_{V}S$, where $U$ is the overall energy of the system, which is a conserved quantity.
Because the system evolves so as to decrease the free energy in the most effective way, we obtain the following formula for the magnitude and direction of the acceleration in the presence of a vacuum temperature field:\begin{equation} \mathbf{a}\left(\mathbf{R}\right)\approx\frac{2\pi k_{B}cT_{V}\left(\mathbf{R}\right)}{\hbar}\frac{\nabla_{\mathbf{R}}T_{V}\left(\mathbf{R}\right)}{\left\vert \nabla_{\mathbf{R}}T_{V}\left(\mathbf{R}\right)\right\vert }.\label{Unruhvector}\end{equation} Here $\mathbf{R}$ denotes a three-dimensional spatial vector. In Fig. 1, for the particle at location B, the direction of the acceleration is determined from this consideration of the free energy. We will show in due course that the above equation explains why the gravity force between two spatially separated objects is attractive. Note that Eq. (\ref{Unruhvector}) is invalid for a uniformly distributed vacuum temperature field. For a uniformly distributed vacuum temperature field, such as at location A in Fig. 1, the consideration of the free energy implies that the direction of the acceleration of a particle is completely random; that is, the direction of the acceleration fluctuates strongly. Although $\left\langle {\bf \mathbf{a}}\right\rangle =0$ in this case, $\delta a\equiv\sqrt{\left\langle \left|\mathbf{a}-\left\langle \mathbf{a}\right\rangle \right|^{2}\right\rangle }>0$ if the uniformly distributed vacuum temperature is nonzero. By using Eq. (\ref{eq:statemperature}), we have $\delta a\sim2\pi k_{B}cT_{V}/\hbar$. From $dE=\mathbf{F}\cdot d\mathbf{x}$ and $dE=T_{V}dS$, we have $\mathbf{F}\cdot d\mathbf{x}=T_{V}dS$. Using Eqs. (\ref{x})-(\ref{t}), we have\begin{equation} \sum_{j}F_{j}a_{j}=m\sum_{j}a_{j}a_{j}.\end{equation} From the above equation, we get the inertia law (Newton's second law) \begin{equation} \mathbf{F}=m\mathbf{a}.\end{equation} Of course, from $dE=\mathbf{F}\cdot d\mathbf{x}$ we can also obtain this inertia law directly.
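To get a feeling for the magnitudes involved in Eq. (\ref{Unruh}), the following numerical sketch (our own illustration, using CODATA values for the constants) evaluates the vacuum temperature $T_{V}=\hbar\left\vert \mathbf{a}\right\vert /2\pi k_{B}c$ for some representative accelerations:

```python
import math

# Physical constants (SI units, CODATA values).
hbar = 1.054571817e-34   # reduced Planck constant [J s]
k_B  = 1.380649e-23      # Boltzmann constant [J/K]
c    = 2.99792458e8      # speed of light [m/s]

def vacuum_temperature(a):
    """T_V = hbar * |a| / (2 * pi * k_B * c), for acceleration a in m/s^2."""
    return hbar * a / (2 * math.pi * k_B * c)

# For Earth's surface gravity the associated temperature is minute,
# of the order of 4e-20 K.
T_g = vacuum_temperature(9.81)
assert 3e-20 < T_g < 5e-20

# Conversely, reaching T_V = 1 K requires an enormous acceleration,
# of the order of 2.5e20 m/s^2.
a_1K = 2 * math.pi * k_B * c * 1.0 / hbar
assert 2e20 < a_1K < 3e20
```

These tiny values illustrate why such thermodynamically induced accelerations have so far escaped direct detection.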
The above derivation of the inertia law shows that it is self-consistent to derive the origin of classical forces from statistical mechanics and the coupling between matter and the vacuum background. If $T_{V}=0$, we have $\left|\mathbf{a}\right|=0$; this is Newton's first law. The coupling between the particle and the vacuum background, and the consistency with Newton's first law, suggest that the vacuum background is in a sense a superfluid. In this superfluid, the propagation velocity of the various gauge fields is the velocity of light $c$. If $c$ is regarded as the sound velocity of the vacuum background, the critical velocity for breaking the superfluidity is $c$. \section{A derivation of Newton's law of gravitation} As discussed in the previous section, the coupling between a particle and the vacuum background leads to an entropy increase under a displacement; the physical mechanism of this entropy increase is the strong coupling between matter and vacuum. In this section, we give a derivation of Newton's law of gravitation as physically originating from these vacuum excitations. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Slide02} \caption{(Color online) The grid shows the correlation length ($\sim l_{p}$) of the vacuum fluctuations. The red sphere shows a particle which leads to a vacuum temperature field distribution through its coupling with the vacuum.} \end{figure} When there are vacuum excitations due to the coupling between matter and space, we denote by $l_{G}$ the correlation length of the vacuum excitations. At least for the situations considered in the present work, the vacuum excitation energy density is much smaller than the vacuum zero-point energy density. We will show in due course that $l_{G}$ is of the order of the Planck length $l_{p}\equiv\sqrt{\hbar G/c^{3}}$, with $G$ the gravitational constant; this is the length scale at which the structure of spacetime becomes dominated by quantum gravity effects. In Fig.
2, we consider the three-dimensional space, with $l_{G}$ indicating the structure of the space due to quantum gravity effects. Although the microscopic mechanism behind $l_{G}$ is not completely clear, it is not unreasonable to assume the existence of $l_{G}$ in the structure of the space. These lowest-level space structures are somewhat similar to atoms in a solid. We assume that the energy of the particle shown by the red sphere in Fig. 2 is $\varepsilon$, and that the number of degrees of freedom of the lowest-level space structure is $i$. From local thermal equilibrium, at the location of the particle, the temperature of the vacuum is determined by\begin{equation} \frac{i}{2}k_{B}T_{V}\left(R=0\right)=\gamma\varepsilon.\end{equation} Here $\gamma$ denotes a dimensionless coupling strength between matter and space. From ordinary statistical mechanics, $\gamma$ should be of the order of $1$. We have\begin{equation} T_{V}\left(R=0\right)=\frac{2\gamma\varepsilon}{ik_{B}}.\label{eq:Tvr0}\end{equation} When $R\rightarrow\infty$, $T_{V}=0$. Hence, one expects the temperature field distribution shown in Fig. 3. For another particle with mass $m$ in this temperature field distribution, from Eq. (\ref{Unruhvector}), the acceleration field distribution is then\begin{equation} \mathbf{a}=-\frac{2\pi k_{B}cT_{V}}{\hbar}{\bf \mathbf{e}}_{R}.\label{eq:ar}\end{equation} Here the radial unit vector $\mathbf{e}_{R}\equiv{\bf \mathbf{R}/\left|\mathbf{R}\right|}$. To obtain the above expression, the spherical symmetry of the system for $R\gg l_{p}$ is also used. This explains the attractive gravity force between two classical objects. It is worth pointing out that, both in Newton's law of gravitation and in Einstein's general relativity, this attractive gravity force is imposed from observations rather than derived from a microscopic mechanism.
One merit of thermodynamics lies in the fact that, even if we do not know the exact collision properties, such as the scattering length between atoms, macroscopic forces such as pressure can still be derived. When the thermodynamic origin of gravity is adopted, one must likewise recover the correct direction of the gravity force. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Slide03} \caption{(Color online) Shown is the temperature field distribution for a classical particle at $R=0$.} \end{figure} From the continuity property (Gauss's flux theorem) of the force $\mathbf{F}=m{\bf \mathbf{a}}$, we have\begin{equation} T_{V}\left(\mathbf{R}\right)=\frac{\eta}{R^{2}}.\end{equation} Note that the above expression holds for $R\gg l_{G}$, so that the spherical symmetry approximation can be used. Matching with Eq. (\ref{eq:Tvr0}) at $R\sim l_{G}$ gives $\eta/l_{G}^{2}=2\gamma\beta\varepsilon/ik_{B}$, where $\beta$ is a dimensionless matching factor. Although we do not know the exact value of $\beta$, it is understandable that $\beta$ is of the order of $1$. In this situation, we have\begin{equation} T_{V}\left(\mathbf{R}\right)=\frac{2\gamma\beta Mc^{2}l_{G}^{2}}{ik_{B}}\frac{1}{R^{2}}.\end{equation} To obtain the above expression, we have used $\varepsilon=Mc^{2}$. Using Eq. (\ref{eq:ar}), we have\begin{equation} \mathbf{a}=-\frac{4\pi\gamma\beta c^{3}l_{G}^{2}}{i\hbar}\frac{M}{R^{2}}{\bf \mathbf{e}}_{R}.\label{eq:ar-1}\end{equation} Assuming\[ l_{G}=\sqrt{\frac{i}{4\pi\gamma\beta}}l_{p},\] we get the standard result of Newton's law of gravitation,\begin{equation} \mathbf{a}=-\frac{GM}{R^{2}}{\bf \mathbf{e}}_{R}.\end{equation} We see that the correlation length of the vacuum excitations is of the order of the Planck length. The temperature field distribution for $R\gg l_{p}$ becomes\begin{equation} T_{V}\left({\bf \mathbf{R}}\right)=\frac{\hbar GM}{2\pi k_{B}c}\frac{1}{R^{2}}.\label{eq:Tr}\end{equation} Above we have considered a particle whose size is of the order of the Planck length.
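The algebra leading from Eq. (\ref{eq:ar-1}) to Newton's law can be checked numerically: with $l_{G}=\sqrt{i/4\pi\gamma\beta}\,l_{p}$, the prefactor collapses to $G$ for any order-one choice of $i$, $\gamma$ and $\beta$. The following sketch (our own check, with arbitrary illustrative parameter values) verifies this cancellation:

```python
import math

# Physical constants (SI units).
hbar = 1.054571817e-34   # [J s]
c    = 2.99792458e8      # [m/s]
G    = 6.67430e-11       # [m^3 kg^-1 s^-2]
l_p  = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m

# Arbitrary order-one microscopic parameters (illustrative only).
i_dof, gamma, beta = 3.0, 1.2, 0.8

# l_G chosen as in the text: l_G = sqrt(i / (4*pi*gamma*beta)) * l_p.
l_G = math.sqrt(i_dof / (4 * math.pi * gamma * beta)) * l_p

# Prefactor of the acceleration field: 4*pi*gamma*beta*c^3*l_G^2 / (i*hbar).
prefactor = 4 * math.pi * gamma * beta * c**3 * l_G**2 / (i_dof * hbar)

# It reduces to G, independently of the chosen i, gamma, beta.
assert abs(prefactor - G) / G < 1e-12
```

The cancellation is exact by construction, so the numerical check passes for any positive choice of the three parameters.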
If the size of the particle is larger than the Planck length, then by Gauss's flux theorem the above result still holds for $R$ much larger than the size of the particle. It is worth considering further the meaning of the vacuum excitations due to the coupling between matter and space. The above derivations give the temperature field distribution. Denoting the vacuum zero-point energy density by $\rho_{VG}$, we consider the situation in which a particle with mass $M$ suddenly appears at the location $R=0$. This leads to the establishment of the vacuum temperature field distribution (\ref{eq:Tr}). However, one should note that in the establishment of the temperature field distribution, the total energy must be conserved. Hence, denoting by $\rho_{ex}$ the vacuum energy density in the presence of $M$, we have\begin{equation} \int\rho_{ex}dV\simeq\int\rho_{VG}dV,\end{equation} \begin{equation} \sqrt{\left\langle \left(\rho_{ex}\left({\bf \mathbf{R}}\right)-\rho_{VG}\right)^{2}\right\rangle }l_{p}^{3}\sim\frac{i}{2}k_{B}T_{V}\left({\bf \mathbf{R}}\right).\end{equation} This physical picture is further illustrated in Fig. 4. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Slide04} \caption{(Color online) For a particle at $R=0$, the blue line shows the fluctuating vacuum energy distribution $\rho_{ex}$ around the zero-point vacuum energy density $\rho_{VG}$. We will show in due course that, when the whole universe is considered, $\rho_{ex}$ exhibits fluctuations around $\rho_{VG}+\rho_{V}$, with $\rho_{V}$ the dark energy density.} \end{figure} For an assembly of classical fundamental particles (here ``classical'' means that quantum wave-packet effects are negligible), shown in Fig. 5, we denote the temperature field distribution due to the $i$th particle by $T_{Vi}\left(\mathbf{R}\right)$.
Because there is no quantum interference effect between different classical particles, the force (a measurement result in quantum mechanics) on an object with mass $m$ is \begin{equation} \frac{\mathbf{F}\left(\mathbf{R}\right)}{m}=\mathbf{a}\left(\mathbf{R}\right)=\frac{2\pi k_{B}c}{\hbar}\sum_{i}\frac{T_{Vi}\left(\mathbf{R}\right)\mathbf{\nabla}_{\mathbf{R}}T_{Vi}\left(\mathbf{R}\right)}{\left\vert \mathbf{\nabla}_{\mathbf{R}}T_{Vi}\left(\mathbf{R}\right)\right\vert }.\label{classicalacc}\end{equation} The above expression is based on the assumption of the linear superposition of the gravity force, $\mathbf{F}=\Sigma_{i}\mathbf{F}_{i}$. In Newton's law of gravitation for an assembly of classical particles, this implicit assumption is also used. We stress that, rigorously speaking, the summation in the above expression runs over all fundamental particles. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Slide05} \caption{(Color online) Shown is the gravity force on the red particle due to an assembly of classical blue particles.} \end{figure} One may still consider the following alternative way of calculating the acceleration field:\begin{eqnarray} T_{V} & = & \sum_{i}T_{Vi},\notag\\ \mathbf{a} & = & \frac{2\pi k_{B}cT_{V}}{\hbar}\frac{\nabla_{\mathbf{R}}T_{V}}{\left|\nabla_{\mathbf{R}}T_{V}\right|}.\label{wrong}\end{eqnarray} Obviously, this expression contradicts Newton's law of gravitation. Considering, e.g., the astrodynamics of three-body systems, this method of calculating the gravity force should be ruled out. The error in the above calculation lies in the method of obtaining $T_{V}$. Even in ordinary thermodynamics, the above method of obtaining $T_{V}$ is not correct. Consider the presence of $N$ thermal sources at different locations $\mathbf{x}_{i}$, and let the temperature increase for an observer due to these $N$ thermal sources be $\delta T$.
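The disagreement between Eq. (\ref{classicalacc}) and Eq. (\ref{wrong}) can be made concrete numerically. In the sketch below we work in dimensionless units with $2\pi k_{B}c/\hbar=1$ and assign each source the temperature field $T_{i}=m_{i}/|\mathbf{R}-\mathbf{x}_{i}|^{2}$, so that $T_{i}$ itself equals the Newtonian magnitude $Gm_{i}/r^{2}$ in these units; the two unit masses and the test point are arbitrary illustrative choices.

```python
import math

# Two equal point sources placed symmetrically; test point on the symmetry axis.
sources = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)]
R = (0.0, 1.0, 0.0)

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(v, s): return tuple(x * s for x in v)
def norm(v): return math.sqrt(sum(x * x for x in v))

def T(x):          # temperature field of one unit-mass source, T = 1/r^2
    return 1.0 / norm(sub(R, x)) ** 2

def gradT(x):      # gradient of 1/r^2; points toward the source
    d = sub(R, x)
    return scale(d, -2.0 / norm(d) ** 4)

# Eq. (classicalacc): per-source terms summed -> the Newtonian vector sum
a_correct = (0.0, 0.0, 0.0)
for x in sources:
    g = gradT(x)
    a_correct = add(a_correct, scale(g, T(x) / norm(g)))

# Eq. (wrong): total temperature first, then a single direction
T_tot = sum(T(x) for x in sources)
g_tot = (0.0, 0.0, 0.0)
for x in sources:
    g_tot = add(g_tot, gradT(x))
a_wrong = scale(g_tot, T_tot / norm(g_tot))

print(norm(a_correct), norm(a_wrong))  # 1/sqrt(2) vs 1: the magnitudes disagree
```

On the symmetry axis both formulas give the same direction, but Eq. (\ref{wrong}) adds the magnitudes of the two contributions instead of the vectors, overestimating the acceleration.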
If only one thermal source exists, we assume that the temperature increase for the observer is $\delta T_{i}$. It is obvious that when $N$ thermal sources coexist, the temperature increase for the observer is not $\delta T=\Sigma_{i=1}^{N}\delta T_{i}$. \section{the equivalence principle} It is well known that the equivalence principle is based on the assumption that the gravitational mass $m_{g}$ is equal to the inertial mass $m_{I}$. It is still a mystery why $m_{g}=m_{I}$, because the gravitational mass and inertial mass appear to be different physical concepts if the gravity force is regarded as a fundamental force, similar to the other fundamental forces. For example, in electromagnetism, the inertial mass is completely different from the electric charge that gives rise to the electromagnetic force. The preceding studies clearly show that, if the thermodynamic origin of the gravity force is adopted, the gravitational mass and the inertial mass are the same mass appearing in the mass-energy relation of special relativity. In all our derivations, we do not need to introduce the gravitational mass separately at all. In this situation, the gravitational mass and inertial mass are in fact the same physical concept. Hence, it is not surprising that $m_{g}=m_{I}$. As discussed previously, the velocity of light is in a sense the sound velocity of the vacuum. If the vacuum is regarded as a superfluid, there is a breakdown of superfluidity if the velocity of an object exceeds $c$. This gives a possible physical mechanism for why $c$ is the speed limit for all objects. Hence, the validity of special relativity is in a sense determined by the vacuum background, where there is a finite and extremely large zero-point energy density. In other words, it is the vacuum background that leads to the theory of special relativity. This is the reason why there is an internal relation between $c$ and the vacuum zero-point energy density.
In a sufficiently small region of a freely falling system, the temperature due to the acceleration of this system is equal to the vacuum temperature field. In this small region, the object will not experience the finite vacuum temperature effect. This suggests the validity of the strong principle of equivalence. Hence, when the quantum effect is included in the field equation of gravitation, we will adopt the strong principle of equivalence, which was also adopted by Einstein in deriving his field equation for gravitation. From $S\sim k_{B}x/l_{c}$ and $l_{c}\sim\hbar/mc$, we see that there is another definition of the mass. For different fundamental particles, there are different correlation lengths in the coupling between particles and vacuum. The mass is then $m\sim\hbar/l_{c}c$. Considering that Newton's first law, second law and law of gravitation can be derived from $S\sim k_{B}x/l_{c}$, this gives a definition of mass from the viewpoint of information. For a particle with inertial mass $m$, we consider the possible change of $m$ due to a large nearby mass, such as the Milky Way galaxy. Assuming that, at the location of this particle, the presence of the Milky Way galaxy leads to a change of the correlation length $l_{c}$, we have $\delta m/m=\delta l_{c}/l_{c}$. Assuming that the vacuum temperature field at the location of the particle due to the Milky Way galaxy is $T_{V}$, and because $l_{c}$ comes from the coupling between the vacuum and the particle, to first order $\delta l_{c}/l_{c}$ would be of the order of $k_{B}T_{V}/E_{Pl}$, with $E_{Pl}$ being the Planck energy. In this situation, we have $\delta m/m\sim k_{B}T_{V}/E_{Pl}$. We see that, to observe a possible change of the inertial mass, an extremely large $T_{V}$ would be needed. In most situations we can imagine, we cannot distinguish between the gravitational effects predicted by Einstein's general relativity and by Mach's principle.
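To see just how small this Machian correction is, one can estimate $\delta m/m\sim k_{B}T_{V}/E_{Pl}$ with $T_{V}=\hbar GM/(2\pi k_{B}cR^{2})$ for the Milky Way. The round figures used below for the galactic mass interior to the Sun's orbit and for the galactocentric distance are illustrative assumptions, not values from the text.

```python
import math

hbar = 1.054571817e-34   # J s
G = 6.67430e-11          # m^3 kg^-1 s^-2
k_B = 1.380649e-23       # J/K
c = 2.99792458e8         # m/s

M_gal = 2e41             # kg, ~1e11 solar masses interior to the Sun (assumed)
R_gal = 2.5e20           # m, ~8 kpc galactocentric distance (assumed)
E_Pl = math.sqrt(hbar * c**5 / G)  # Planck energy, ~2e9 J

T_V = hbar * G * M_gal / (2 * math.pi * k_B * c * R_gal**2)
dm_over_m = k_B * T_V / E_Pl

print(f"T_V ~ {T_V:.1e} K, dm/m ~ {dm_over_m:.1e}")
```

The estimate gives $T_{V}\sim10^{-30}$ K and $\delta m/m\sim10^{-62}$, far below any conceivable measurement, consistent with the statement above.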
It is not clear whether, at the singular point of a black hole or during the Planck epoch, etc., there would be an observable effect predicted by Mach's principle. \section{abnormal quantum effect of gravity} In the thermodynamic origin of gravity, an object with mass $M$ establishes a temperature field $T_{V}\left(\mathbf{R}\right)\sim M/\left\vert \mathbf{R}\right\vert ^{2}$. Using the formula $\mathbf{a}=2\pi k_{B}cT_{V}\nabla_{\mathbf{R}}T_{V}/\hbar\left|\nabla_{\mathbf{R}}T_{V}\right|$ relating acceleration and temperature, we obtain Newton's law of gravitation, $\mathbf{F}=-GMm\mathbf{R}/R^{3}$, between two classical objects. In particular, the attractive gravity force between two classical objects is explained. In this section, we consider the quantum gravity effect obtained by including quantum mechanics in the thermodynamic origin of gravity. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Slide06} \caption{(Color online) Fig. (a) shows the wave-packet distribution of a particle in the black sphere. Fig. (b) shows the gravity acceleration due to this quantum wave packet. Figs. (c) and (d) show the corresponding classical situation.} \end{figure} To give a general study of the quantum gravity effect, we consider the following wave function for a fundamental particle (such as an electron) with mass $m_{q}$,\begin{eqnarray} \phi_{q}\left(\mathbf{x},t\right) & \simeq & \frac{1}{\sqrt{V}},\left(R<R_{0}\right),\notag\\ \phi_{q}\left(\mathbf{x},t\right) & \simeq & 0,\left(R>R_{0}\right).\label{wavefunction}\end{eqnarray} The average density distribution $\left|\phi_{q}\left(\mathbf{x},t\right)\right|^{2}$ is shown by the black quantum sphere in Fig. 6(a). For $R>R_{0}$, similarly to the case of a classical particle, it is easy to get \begin{equation} T_{Vq}=\frac{\hbar Gm_{q}}{2\pi k_{B}cR^{2}}.\label{Tq1}\end{equation} At $R=0$, from the consideration of spherical symmetry, $\mathbf{a}\left(R=0\right)=\mathbf{0}$.
Hence, we have $T_{Vq}\left(R=0\right)=0$. The resulting temperature field distribution is shown in Fig. 7. For $R<R_{0}$, using again the spherical symmetry, we have\begin{equation} T_{Vq}=\frac{\hbar Gm_{q}R}{2\pi k_{B}cR_{0}^{3}}.\label{Tq2}\end{equation} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Slide07} \caption{The temperature field distribution for a particle with the wave function given by Eq. (\ref{wavefunction}).} \end{figure} For the quantum wave packet shown in Fig. 6(a), from the relation between the acceleration field and the temperature field distribution given by Eq. (\ref{Unruhvector}), we have\begin{eqnarray} \mathbf{a} & = & \frac{Gm_{q}\mathbf{R}}{R_{0}^{3}},\left(R<R_{0}\right),\notag\\ \mathbf{a} & = & -\frac{Gm_{q}\mathbf{R}}{R^{3}},\left(R>R_{0}\right).\label{anomousacc}\end{eqnarray} This is a remarkable prediction: in the interior of the quantum sphere, the gravity force is repulsive! This abnormal effect is further shown in Fig. 6(b). This abnormal quantum gravity effect physically originates from the quantum wave-packet effect for the particle in the black sphere. At $R=R_{0}$, the average acceleration field is zero. However, because of the finite vacuum temperature, the direction of the acceleration is highly fluctuating, \textit{i.e.} $\left|\mathbf{a}\left(R_{0}\right)\right|\sim Gm_{q}/R_{0}^{2}$ and $\left\langle \mathbf{a}\left(R_{0}\right)\right\rangle =\mathbf{0}$. It is this highly fluctuating acceleration field that leads to different values of $\oint\mathbf{F}\cdot d\mathbf{S}$ in the interior and exterior of the quantum sphere. If there are $N$ particles in the same quantum state given by Eq.
(\ref{wavefunction}), we have\begin{eqnarray} \mathbf{a} & = & \frac{2\pi k_{B}c}{\hbar}\sum_{i}\frac{T_{Vi}\left(\mathbf{R}\right)\mathbf{\nabla}_{\mathbf{R}}T_{Vi}\left(\mathbf{R}\right)}{\left\vert \mathbf{\nabla}_{\mathbf{R}}T_{Vi}\left(\mathbf{R}\right)\right\vert }=\frac{GNm_{q}\mathbf{R}}{R_{0}^{3}},\left(R<R_{0}\right),\notag\\ \mathbf{a} & = & \frac{2\pi k_{B}c}{\hbar}\sum_{i}\frac{T_{Vi}\left(\mathbf{R}\right)\mathbf{\nabla}_{\mathbf{R}}T_{Vi}\left(\mathbf{R}\right)}{\left\vert \mathbf{\nabla}_{\mathbf{R}}T_{Vi}\left(\mathbf{R}\right)\right\vert }=-\frac{GNm_{q}\mathbf{R}}{R^{3}},\left(R>R_{0}\right).\label{anomousacc-1}\end{eqnarray} One should be very careful with this abnormal quantum gravity effect. Without experimental verification, it is only a theoretical prediction. Although we will show that this abnormal quantum gravity effect may provide an interpretation of the accelerating universe in terms of dark energy, it is necessary to consider whether this theoretical prediction is self-consistent. To further understand the abnormal quantum gravity effect, we consider a classical sphere with the following density distribution\begin{eqnarray} n\left(\mathbf{x},t\right) & \simeq & \frac{N}{V},\left(R<R_{0}\right),\notag\\ n\left(\mathbf{x},t\right) & \simeq & 0,\left(R>R_{0}\right).\label{densitdis}\end{eqnarray} This density distribution for a classical sphere is shown in Fig. 6(c). At first sight, one might expect the same acceleration field distribution as in the case of the quantum sphere. This is not correct. For a classical sphere containing $N$ particles, it is clear that the wave packets of all the particles are highly localized. Hence, for a particle at location $\mathbf{x}_{j}$, the temperature field distribution due to this particle is $T_{Vj}\left(\mathbf{R}\right)\sim m_{j}/\left|\mathbf{R}-\mathbf{x}_{j}\right|^{2}$.
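The sign structure of Eqs. (\ref{anomousacc}) and (\ref{anomousacc-1}) follows directly from Eqs. (\ref{Tq1}) and (\ref{Tq2}): the magnitude of the acceleration is set by $T_{Vq}$ and its sign by the slope of $T_{Vq}$. A minimal numerical sketch, in dimensionless units with $2\pi k_{B}c/\hbar=Gm_{q}=R_{0}=1$ (an illustrative choice), reproduces the piecewise result for a single particle:

```python
# T(R) = R for R < 1 (Eq. Tq2) and 1/R^2 for R > 1 (Eq. Tq1), in units where
# 2*pi*k_B*c/hbar = G*m_q = R_0 = 1. Radial acceleration: magnitude T, sign of dT/dR.
def T(R):
    return R if R < 1.0 else 1.0 / R**2

def a(R, h=1e-6):
    dT = (T(R + h) - T(R - h)) / (2 * h)  # finite-difference temperature gradient
    return T(R) * (1.0 if dT > 0 else -1.0)

print(a(0.5), a(2.0))  # +0.5 (repulsive inside), -0.25 (attractive outside)
```

Inside the sphere the temperature rises outward, so the acceleration points outward ($a=+R/R_{0}^{3}$); outside it falls, recovering the attractive Newtonian $-1/R^{2}$.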
From $\mathbf{a}\sim\Sigma_{j}T_{Vj}\nabla T_{Vj}/\left|\nabla T_{Vj}\right|$, we obtain the result given by Newton's law of gravitation, shown in Fig. 6(d). It is easy to show that the quantum states of the quantum sphere and the classical sphere are completely different. In the quantum sphere, the many-body wave function is\begin{equation} \Psi_{q}\left(\mathbf{x}_{1},\cdots,\mathbf{x}_{N}\right)=\phi_{q}\left(\mathbf{x}_{1}\right)\cdots\phi_{q}\left(\mathbf{x}_{N}\right).\end{equation} Here $\phi_{q}$ is given by Eq. (\ref{wavefunction}). For the classical sphere, the many-body wave function is\begin{equation} \Psi_{c}\left(\mathbf{x}_{1},\cdots,\mathbf{x}_{N}\right)=c\Sigma_{P}P\left[\phi_{1}\left(\mathbf{x}_{1}\right)\cdots\phi_{N}\left(\mathbf{x}_{N}\right)\right].\end{equation} Here $\phi_{1},\cdots,\phi_{N}$ are highly localized wave functions, $P$ denotes all permutations of the particles among the different single-particle states, and $c$ is a normalization factor. We consider here the situation of bosons; the case of a Fermi system is similar. It is this essential difference between the many-body wave functions that leads to the different gravitational effects. Assuming there are $N$ fundamental particles whose wave functions are $\phi_{1}\left(\mathbf{x},t\right),\cdots,\phi_{j}\left(\mathbf{x},t\right),\cdots,\phi_{N}\left(\mathbf{x},t\right)$, we now give the formulas for calculating the acceleration field due to these $N$ particles. The preceding studies directly lead to the following two formulas.
\begin{equation} \mathbf{a}\left(\mathbf{R}\right)=\frac{2\pi k_{B}c}{\hbar}\sum_{j=1}^{N}T_{Vj}\left(\mathbf{R}\right)\frac{\nabla_{\mathbf{R}}T_{Vj}\left(\mathbf{R}\right)}{\left\vert \nabla_{\mathbf{R}}T_{Vj}\left(\mathbf{R}\right)\right\vert },\label{acc}\end{equation} and\begin{equation} T_{Vj}\left(\mathbf{R}\right)=\frac{\hbar Gm_{j}}{2\pi k_{B}c}\left\vert \int d^{3}\mathbf{x}\phi_{j}^{\ast}\left(\mathbf{x},t\right)\frac{\mathbf{x}-\mathbf{R}}{\left\vert \mathbf{x}-\mathbf{R}\right\vert ^{3}}\phi_{j}\left(\mathbf{x},t\right)\right\vert .\label{temfield}\end{equation} Here $m_{j}$ is the mass of the $j$th fundamental particle. The integral on the right-hand side of Eq. (\ref{temfield}) is due to the quantum wave packet of the $j$th fundamental particle, while the norm of the vector obtained from this integral is due to the fact that $T_{Vj}$ is a positive scalar field and an observable quantity, by Eq. (\ref{acc}). It is easy to show that if all these $N$ particles are highly localized classical particles, we recover Newton's law of gravitation, $\mathbf{a}\left(\mathbf{R}\right)=-\sum_{j}Gm_{j}\left(\mathbf{R-x}_{j}\right)/\left\vert \mathbf{x}_{j}-\mathbf{R}\right\vert ^{3}$, with $\mathbf{x}_{j}$ being the location of the $j$th particle. We now assume that a fundamental particle with mass $M$ is uniformly distributed in a sphere with radius $R_{M}$.
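The classical limit of Eq. (\ref{temfield}) can be checked numerically: for a narrow wave packet centered at $\mathbf{x}_{0}$, the integral approaches the point-mass kernel $\left(\mathbf{x}_{0}-\mathbf{R}\right)/\left|\mathbf{x}_{0}-\mathbf{R}\right|^{3}$. The Monte Carlo sketch below uses a Gaussian packet in dimensionless units; the packet width, sample count and geometry are arbitrary illustrative choices.

```python
import math, random

random.seed(0)

x0 = (1.0, 0.0, 0.0)   # wave-packet center (illustrative)
R = (-2.0, 0.0, 0.0)   # field point, well outside the packet
sigma = 0.05           # packet width << |x0 - R|
N = 100_000

# Monte Carlo average of (x - R)/|x - R|^3 over |phi|^2 ~ Gaussian(x0, sigma)
acc = [0.0, 0.0, 0.0]
for _ in range(N):
    x = tuple(random.gauss(mu, sigma) for mu in x0)
    d = tuple(xi - Ri for xi, Ri in zip(x, R))
    r3 = sum(c * c for c in d) ** 1.5
    for k in range(3):
        acc[k] += d[k] / (r3 * N)

# Point-particle kernel: (x0 - R)/|x0 - R|^3
d0 = tuple(a - b for a, b in zip(x0, R))
r0 = math.sqrt(sum(c * c for c in d0))
exact = [c / r0**3 for c in d0]

print(acc, exact)  # the integral reduces to the Newtonian point-mass kernel
```

For a field point far from the packet the two agree to high accuracy, which is the statement that highly localized wave packets reproduce Newton's law.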
For the vacuum temperature field due to this particle, the maximum temperature is\begin{equation} T_{V}^{\max}=\frac{1}{2\pi}\frac{\hbar GM}{k_{B}cR_{M}^{2}}.\end{equation} From the mass-energy relation, even if all the energy is converted into thermal energy with only one degree of freedom, we get a limiting temperature\begin{equation} T_{V}^{0}=\frac{2Mc^{2}}{k_{B}}.\end{equation} Obviously, there is a requirement that\begin{equation} T_{V}^{\max}\leq T_{V}^{0}.\end{equation} This means that \begin{equation} R_{M}\geq\sqrt{\frac{\hbar G}{4\pi c^{3}}}=\sqrt{\frac{1}{4\pi}}l_{p}.\end{equation} This result shows that any object having strong coupling with the vacuum background cannot be confined within a sphere of radius $l_{p}/\sqrt{4\pi}$, irrespective of its mass. This provides a physical mechanism for why, in our calculations of $T_{V}$, the volume of the smallest spatial unit is of the order of $l_{p}^{3}$. \section{experimental scheme and potential applications of quantum gravity effect} We now turn to a feasible experimental scheme to further test the quantum gravity effect with superfluid $^{4}$He, shown in Fig. 8. For brevity, we consider a sphere full of superfluid $^{4}$He with a hole in it. In this situation, from Eqs. (\ref{acc}) and (\ref{temfield}), the gravity acceleration in the sphere due to the superfluid $^{4}$He can be approximated as \begin{equation} \mathbf{a}=\frac{4\pi}{3}Gn_{He}\mathbf{R}.\label{ahe}\end{equation} Here the liquid helium density is taken as $n_{He}\approx550$ kg/m$^{3}$. From this, the anomalous acceleration is $\mathbf{a}=1.5\times10^{-7}\mathbf{R}/s^{2}$, and its gradient is $1.5\times10^{-7}/s^{2}$. Even if only the condensate component of the superfluid $^{4}$He is considered, for a superfluid $^{4}$He sphere with a radius of $1$ m, the maximum anomalous acceleration is about $10^{-8}$ m/s$^{2}$.
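The numbers quoted here follow directly from Eq. (\ref{ahe}). The short script below reproduces them, taking the text's density value $n_{He}=550$ kg/m$^{3}$ as given and a condensate fraction of roughly $8\%$ as an assumed input.

```python
import math

G = 6.67430e-11      # m^3 kg^-1 s^-2
n_He = 550.0         # kg/m^3, density value used in the text
f_cond = 0.08        # assumed condensate fraction of superfluid 4He

grad_a = 4 * math.pi / 3 * G * n_He   # gradient of the anomalous acceleration, 1/s^2
a_max = f_cond * grad_a * 1.0         # maximum anomalous acceleration at R = 1 m

print(f"gradient = {grad_a:.2e} /s^2, a_max (condensate only) = {a_max:.1e} m/s^2")
```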
Quite interestingly, this value is well within the reach of present atom-interferometer techniques \cite{AI} for measuring the gravity acceleration. Nevertheless, this is a very weak observable effect, so it is unlikely that this anomalous acceleration can be verified or falsified without dedicated future experiments. Apart from atom interferometry, the measurement of the gravity acceleration with Bloch oscillations \cite{Bloch} of cold atoms in optical lattices, superfluid helium interferometry \cite{HeliumAI}, free-fall absolute gravimeters \cite{Freefalling} and superconducting gravimeters \cite{superconductor} provide other methods to test the abnormal quantum gravity effect. In particular, the standard deviation of free-fall absolute gravimeters with present techniques is about $10^{-8}$ m/s$^{2}$ \cite{Freefalling}, while superconducting gravimeters have achieved sensitivities of one thousandth of one billionth ($10^{-12}$) of the Earth's surface gravity. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Slide08} \caption{(Color online) An experimental scheme to test the abnormal quantum gravity effect. In the hole of the superfluid helium sphere, various apparatuses measuring the gravity acceleration are placed. As an example, we consider the application of an atom interferometer, where the vacuum tube of the interference region is placed in the interior of the superfluid helium sphere, while the magneto-optical trap (abbreviated MOT) for cooling and trapping cold atoms may be placed outside the sphere.} \end{figure} Because this abnormal quantum gravity effect could be tested by a contrast experiment with superfluid and normal helium, and because the predicted effect is location-dependent, the sensitivities of these gravimeters could be used to test it. This makes it very promising to test the abnormal quantum gravity effect in future experiments.
Because highly localized helium atoms do not lead to the quantum effect of gravity, the measurement of the acceleration gives us a new way to measure the fraction of highly localized helium atoms, which is still an important and challenging topic in condensed matter physics. In our theory, the quantum effect of gravity does not rely on superfluid behavior; it depends on whether the wave packet of a particle is localized. It is well known that wave-packet localization is a central topic in condensed matter physics and material physics, for example in the long-range-order problem at a phase transition. Considering the remarkable advances in various gravimeters, it is promising that the quantum gravity effect would have potential applications in our understanding of condensed matter physics and material physics, \textit{etc}. A possible risk in a decisive test of the abnormal quantum gravity effect with a superfluid helium sphere lies in our understanding of the superfluid behavior of liquid helium. In the ordinary understanding of superfluid helium, the superfluid fraction can reach almost $100\%$, while the condensate fraction is only about $8\%$ \cite{Penrose-1}. Because of the strong interaction between helium atoms, liquid helium is a very complex, strongly correlated system. Considering that there are many open questions in strongly correlated systems, we cannot absolutely exclude the possibility that the wave packets of all helium atoms are localized even though the whole system still exhibits superfluid behavior. This significantly differs from Bose-Einstein condensates in dilute gases, where the wave function of the atoms in the condensate is certainly delocalized over the whole condensate. The abnormal quantum gravity effect also suggests a possible experimental scheme to test the many-world interpretation \cite{Many}. In the many-world interpretation, there is no ``true'' wave-packet collapse process.
For a particle described by a wave packet, the measurement result of the particle at a location does not mean that the wave packet of this particle at other locations disappears. Rather, the wave function of the whole universe evolves into a series of orthogonal many-body wave functions due to the interaction between the measurement apparatus and the particle. The observed result of the particle at a location corresponds to one of these orthogonal many-body wave functions. Considering again the superfluid helium sphere, if we increase the temperature so that it becomes a normal liquid, then based on the many-world interpretation, the wave packets of the helium atoms (at least the fraction of the helium atoms initially in the condensate) are still delocalized over the whole sphere. In this situation, if the many-world interpretation is correct, it is possible that one may also observe the abnormal quantum gravity effect. This would imply a gravity effect dependent on the history of a system. At least, it seems that all previous experiments and astronomical observations do not rule out this possibility. The present work clearly shows that it is time to consider more seriously this new view of gravity, in particular through future experiments. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Slide09} \caption{(Color online) Shown is the physical picture of $N$ particles with the same wave function $\phi$, based on the de Broglie-Bohm theory.} \end{figure} In the above studies of the quantum gravity effect, we adopted in a sense the ordinary understanding of quantum mechanics. It is worthwhile to consider the quantum gravity effect based on other interpretations of quantum mechanics. Here we give a brief discussion based on the de Broglie-Bohm theory. We consider $N$ particles with the same wave function $\phi\left(\mathbf{x}\right)$, shown in Fig. 9.
In this pilot-wave theory, the particle is still a highly localized particle with a well-defined trajectory guided by the wave function. We cannot predict the exact location of a particle because the initial position of the particle is not controllable by the experimenter, and the motion of the particle guided by the wave function is determined by its initial position. If we further assume that the energy is mainly carried by the particle, rather than by the wave, then for the situation shown in Fig. 9, the gravity effect due to these $N$ particles would be the same as that of $N$ classical particles whose density is $N\left|\phi\left(\mathbf{x}\right)\right|^{2}$. Hence, this experimental scheme also gives us a chance to test the de Broglie-Bohm theory. If the abnormal quantum gravity effect is verified, the de Broglie-Bohm theory would, in a sense, be excluded. \section{field equation including quantum gravity effect of vacuum excitations} As shown in Sec. IV, the gravitational mass and inertial mass are the same mass appearing in Einstein's mass-energy relation when the gravity force is regarded as a thermodynamic effect. This explains the equivalence principle in a natural way. For the vacuum excitations at location $\mathbf{x}$, a local reference system with the acceleration given by Eq. (\ref{acc}) will not experience these vacuum excitations. Therefore, in this reference system, the physical laws of special relativity hold. This strongly suggests that Einstein's general relativity should be incorporated when considering the quantum gravity effect. Because different locations can have different accelerations, different locations have different reference systems satisfying the physical laws of special relativity. To construct the connection between the reference systems at different locations, Riemannian geometry is the most convenient mathematical tool.
Therefore, although we argue here that thermodynamics and the coupling between matter and vacuum are the essential mechanisms of the gravity force, Riemannian geometry is still needed to construct a systematic theory, for the same reason as in the construction of Einstein's general relativity. To clearly introduce the new field equation including the quantum effect of gravity, we first give a brief introduction to Einstein's field equation for classical objects (see Ref. \cite{Weinberg}). In the weak-field approximation, we have $g_{00}\simeq-\left(1+2\phi_{g}\right)$, with $\phi_{g}$ being the gravitational potential. From $\bigtriangledown^{2}\phi_{g}=4\pi Gn$ and $T_{00}\simeq n$, we have \begin{equation} \bigtriangledown^{2}g_{00}=-8\pi GT_{00}.\end{equation} If $T_{00}$ is due to classical particles, the negative sign on the right-hand side of the above equation follows from the attractive gravity force between classical objects, which originates from the consideration of the free energy. We stress again that, in Einstein's derivation of his field equation, the negative sign on the right-hand side is due to the observed phenomenon that the gravity force between two classical objects is attractive, rather than being derived from a fundamental principle. The generalization of the above equation to the relativistic case gives\begin{equation} G_{\mu\nu}=-8\pi GT_{\mu\nu}.\end{equation} With general considerations of Riemannian geometry and general covariance, we get the following Einstein field equation\begin{equation} R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=-8\pi GT_{\mu\nu}.\label{EFE}\end{equation} Note that we have adopted units with $c=1$ and the following Minkowski space-time\begin{equation} \eta_{\mu\nu}=\left(\begin{array}{cccc} -1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{array}\right).\end{equation} The presence of matter and radiation in the universe establishes various temperature field distributions.
These temperature field distributions lead to forces on the matter and radiation in the universe. As shown previously, the physical mechanism of these temperature field distributions is the vacuum excitations. In higher-order calculations, the gravity effect of the vacuum excitations themselves should also be considered. Although this would be a complex nonlinear coupling process, general conditions (such as the principle of general covariance) can be imposed to attack this challenging problem. In a reasonable field equation including the gravitational effect of the vacuum excitations themselves, one should introduce an extra energy-momentum tensor $T_{\mu\nu}^{vac}$, apart from $T_{\mu\nu}$ for ordinary matter and radiation. Because the vacuum excitations are virtual processes from the viewpoint of quantum field theory, they are different from ordinary matter. Because of this, these vacuum excitations cannot be described by the form of the energy-momentum tensor for ordinary matter or radiation. In a sense, these vacuum excitations are integrated into the space-time. Fortunately, in principle, one can add an extra term to Eq. (\ref{EFE}) without violating Riemannian geometry and the principle of general covariance. The unique form of the field equation is \cite{Weinberg}\begin{equation} R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=-8\pi G\left(T_{\mu\nu}+T_{\mu\nu}^{vac}\right).\label{NewEFE}\end{equation} Here\begin{equation} T_{\mu\nu}^{vac}=\lambda g_{\mu\nu}.\end{equation} This general consideration leads to the requirement that the energy density in $T_{\mu\nu}^{vac}$ be uniform.
Assuming $\rho_{VG}$ is the ground-state energy density of the vacuum and $\rho_{ex}\left(\mathbf{x},t\right)$ is the energy density of the vacuum in the presence of matter and radiation, we have $\delta\rho_{V}\left(\mathbf{x},t\right)\equiv\sqrt{\left\langle \left(\rho_{ex}\left(\mathbf{x},t\right)-\rho_{VG}\right)^{2}\right\rangle }\sim\frac{i}{2}k_{B}T_{Veff}\left(\mathbf{x},t\right)/l_{p}^{3}$. Here $T_{Veff}$ is determined by Eq. (\ref{Unruh}). The role of the spatially dependent $\delta\rho_{V}\left(\mathbf{x},t\right)$ and $T_{Veff}\left(\mathbf{x},t\right)$ has been included in the Einstein field equation (\ref{EFE}). The energy density of the vacuum excitations is $\rho_{V}\left(\mathbf{x},t\right)\equiv\left\langle \rho_{ex}\left(\mathbf{x},t\right)-\rho_{VG}\right\rangle $. There are two equivalent ways to understand the average $\left\langle \cdot\right\rangle $ considered here: the average over a time interval $\delta t_{0}$ much larger than the response time of the vacuum, and the average over a spatial scale $l_{0}$ much larger than the Planck length. We see that although $\delta\rho_{V}\left(\mathbf{x},t\right)$ is spatially dependent, in principle $\rho_{V}\left(\mathbf{x},t\right)$ can be spatially independent. Of course, this is only a statement that there is no contradiction between a spatially dependent $\delta\rho_{V}\left(\mathbf{x},t\right)$ and a spatially independent $\rho_{V}\left(\mathbf{x},t\right)$. The spatial independence of $\rho_{V}\left(\mathbf{x},t\right)$ is due to the general principle of covariance for space-time itself. The role of $\rho_{V}\left(\mathbf{x},t\right)$ is included through $T_{\mu\nu}^{vac}$ in Eq. (\ref{NewEFE}).
From $T_{00}^{vac}=\rho_{V}$, we have\begin{equation} T_{\mu\nu}^{vac}=\pm\left(\begin{array}{cccc} -\rho_{V}g_{00} & 0 & 0 & 0\\ 0 & \rho_{V}g_{11} & 0 & 0\\ 0 & 0 & \rho_{V}g_{22} & 0\\ 0 & 0 & 0 & \rho_{V}g_{33}\end{array}\right).\end{equation} For a flat universe, taking the scale factor to be $1$ in the Friedmann-Robertson-Walker (FRW) metric at the present time of our universe, we have\begin{equation} T_{\mu\nu}^{vac}=\pm\left(\begin{array}{cccc} -\rho_{V} & 0 & 0 & 0\\ 0 & \rho_{V} & 0 & 0\\ 0 & 0 & \rho_{V} & 0\\ 0 & 0 & 0 & \rho_{V}\end{array}\right).\end{equation} Considering that the various fields in the vacuum propagate at the velocity $c$, it is not unreasonable to assume that they have delocalized wave packets over the whole observable universe. Another reason is that when the big-bang model of the universe is adopted, the various fields have had just sufficient time to propagate through the whole observable universe. This could lead to delocalized wave packets for the vacuum excitation fields, at least within the observable universe. A third reason is that when the coupling between matter and vacuum is considered, the vacuum has the characteristic of superfluidity. These analyses suggest a repulsive gravity effect from a nonzero and positive $\rho_{V}$, which requires taking the negative sign in the above expression for $T_{\mu\nu}^{vac}$. We finally get\begin{equation} T_{\mu\nu}^{vac}=\left(\begin{array}{cccc} \rho_{V} & 0 & 0 & 0\\ 0 & p_{V} & 0 & 0\\ 0 & 0 & p_{V} & 0\\ 0 & 0 & 0 & p_{V}\end{array}\right).\label{eq:darkenergytensor}\end{equation} Here $p_{V}=-\rho_{V}$. We see that the vacuum excitations as a whole have positive energy and an abnormal negative pressure. In our theory, it is clearly shown that the abnormal negative pressure physically originates from the quantum characteristics of the vacuum excitations.
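That the tensor (\ref{eq:darkenergytensor}) indeed acts repulsively can be checked against the second Friedmann (acceleration) equation, a standard result quoted here for completeness: with $p_{V}=-\rho_{V}$,

```latex
% Acceleration equation with the vacuum term alone:
\frac{\ddot{a}}{a}
  =-\frac{4\pi G}{3}\left(\rho_{V}+3p_{V}\right)
  =-\frac{4\pi G}{3}\left(\rho_{V}-3\rho_{V}\right)
  =+\frac{8\pi G}{3}\,\rho_{V}>0,
```

so a positive $\rho_{V}$ with the negative pressure $p_{V}=-\rho_{V}$ accelerates the expansion, consistent with the sign choice made above.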
\section{dark energy density} In this section, we try to calculate $\rho_{V}$ based on the overall thermal equilibrium in the coupling between matter and vacuum. If the gravity effect is regarded as arising from thermodynamics and the coupling between matter and vacuum, it is a natural requirement that the overall vacuum excitations (the dark energy) be determined from the thermodynamic origin of gravity. We now consider the overall average effect of these vacuum excitations on the large scale of the universe, \textit{i.e.}, we try to calculate the overall average vacuum excitations, the dark energy density. In calculating the dark energy density, we use the assumption of isotropy and homogeneity on the large scale of the universe. Another assumption is the spatially independent $\rho_{V}$ discussed in the previous section. Using spherical polar coordinates for three-dimensional space, we have\begin{equation} d\tau^{2}=dt^{2}-a^{2}\left(t\right)\left[\frac{dr^{2}}{1-Kr^{2}}+r^{2}d\Omega\right].\end{equation} Here $d\Omega\equiv d\theta^{2}+\sin^{2}\theta d\phi^{2}$; $r$, $\theta$, $\phi$ are time-independent co-moving coordinates; $a\left(t\right)$ is the scale factor; and $K=1$, $-1$, $0$ for the spherical, hyperspherical and Euclidean cases, respectively. In the present work, we consider the case $K=0$, supported by various astronomical observations. If the evolution of the universe is considered, the radial coordinate $r\left(z\right)$ of a source that is observed now with redshift $z$ is \cite{Cosmology} \begin{equation} r\left(z\right)=\frac{1}{a_{0}H_{0}}\int_{1/\left(1+z\right)}^{1}\frac{dx}{x^{2}\sqrt{\Omega_{\Lambda}+\Omega_{M}x^{-3}+\Omega_{R}x^{-4}}}.\label{eq:radial}\end{equation} Here $\Omega_{\Lambda}\equiv8\pi G\rho_{V0}/3H_{0}^{2}$, $\Omega_{M}\equiv8\pi G\rho_{M0}/3H_{0}^{2}$, and $\Omega_{R}\equiv8\pi G\rho_{R0}/3H_{0}^{2}$.
$\rho_{V0}$, $\rho_{M0}$ and $\rho_{R0}$ are the present values of the dark energy density, the average cold matter (${\it {e.g.}}$ dust) energy density and the hot matter (${\it {e.g.}}$ radiation) energy density. $a_{0}\equiv1$ and $H_{0}$ are the present values of the scale factor and the Hubble constant. Because the gravity field (${\it {i.e.}}$ the vacuum excitations) propagates with velocity $c$, for the observer at $r=0$, the above expression for $r$ as a function of $z$ has the merit that $r\left(z\right)$ is the radial coordinate of a source whose gravity field is observed now with redshift $z$. This gravity field, emitted at an earlier time, is what the observer experiences at the present time. For the same reason, the above expression is very useful in calculating the luminosity of distant stars. Various astronomical observations have given precision measurements of $\Omega_{M}$ and $\Omega_{R}$. Defining $\alpha=\Omega_{\Lambda}/\left(\Omega_{M}+\Omega_{R}\right)$, we have\begin{equation} r\left(\alpha,z\right)=\frac{1}{a_{0}H_{0}\sqrt{\Omega_{M}+\Omega_{R}}}\int_{1/\left(1+z\right)}^{1}\frac{dx}{x^{2}\sqrt{\alpha+\left(\Omega_{M}/\left(\Omega_{M}+\Omega_{R}\right)\right)x^{-3}+\left(\Omega_{R}/\left(\Omega_{M}+\Omega_{R}\right)\right)x^{-4}}}.\label{eq:radial2}\end{equation} From this equation, we can also get $z\equiv z\left(\alpha,r\right)$. Note that the radial coordinate $r\left(\alpha,z\right)$ for co-moving sources is time-independent. We will use this sort of radial coordinate to calculate the dark energy density. 
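Eq. (\ref{eq:radial2}) is straightforward to evaluate numerically. The following sketch (ours, not the authors' code) uses illustrative assumed values for $\alpha$ and for the density fractions, and sets the overall prefactor $1/(a_{0}H_{0}\sqrt{\Omega_{M}+\Omega_{R}})$ to $1$; it shows that $r\left(\alpha,z\right)$ grows with $z$ and saturates toward a finite horizon value:

```python
import math

# Illustrative (assumed) parameter values; only ratios enter Eq. (radial2).
ALPHA = 2.2        # Omega_Lambda / (Omega_M + Omega_R), the value found later in the text
F_M = 0.9997       # Omega_M / (Omega_M + Omega_R)
F_R = 1.0 - F_M    # Omega_R / (Omega_M + Omega_R)

def r_of_z(z, n=2000):
    """Comoving radial coordinate r(alpha, z) of Eq. (radial2) by the midpoint
    rule, in units where 1/(a0*H0*sqrt(Omega_M+Omega_R)) = 1."""
    lo = 1.0 / (1.0 + z)
    h = (1.0 - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        total += h / (x * x * math.sqrt(ALPHA + F_M / x**3 + F_R / x**4))
    return total

# r(z) increases with z and saturates toward a finite horizon value.
print(r_of_z(0.5), r_of_z(5.0), r_of_z(100.0))
```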
For an observer at $r=0$ at the present time, the overall energy of cold matter and radiation the observer experiences is then\begin{eqnarray} E_{MR} & \left(\alpha\right)= & E_{M}\left(\alpha\right)+E_{R}\left(\alpha\right).\end{eqnarray} Here $E_{M}$ for cold matter is given by\begin{eqnarray} E_{M} & \left(\alpha\right)=\int_{0}^{\infty}\rho_{Mr}\left(r\right) & \sqrt{\frac{1}{1-v^{2}\left(\alpha,r\right)}}\frac{1}{\left(1+z\left(\alpha,r\right)\right)^{2}}r^{2}drd\Omega.\end{eqnarray} $\rho_{Mr}\left(r\right)$ is the cold matter density in the co-moving radial coordinate $r\left(\alpha,z\right)$, without considering the expansion of the universe. When this radial coordinate is adopted, the cold matter density in this coordinate system does not depend on the scale factor. Hence, when the present value of the scale factor $a$ is taken as $1$, $\rho_{Mr}\left(r\right)=\rho_{M0}$. The average relative velocity $v\left(\alpha,r\right)$ between the observer and the matter at $r$ makes the cold matter energy density for the observer become $\rho_{M0}\sqrt{1/\left(1-v^{2}\left(\alpha,r\right)\right)}$. This is the physical origin of the factor $\sqrt{1/\left(1-v^{2}\left(\alpha,r\right)\right)}$ in the above expression. Because $z\left(\alpha,r\right)$ denotes the redshift, we have $v\left(\alpha,r\right)=\left(\left(1+z\left(\alpha,r\right)\right)^{2}-1\right)/\left(\left(1+z\left(\alpha,r\right)\right)^{2}+1\right)$. The factor $1/\left(1+z\left(\alpha,r\right)\right)^{2}$ on the right-hand side of the above equation originates from two physical effects. When the thermodynamic origin of gravity is considered, there are various sounds in the temperature field of the vacuum due to the presence of matter. In this situation, for the observer, the rate of arrival of the individual sounds in the gravitation field (temperature field) is reduced by the redshift factor $1/\left(1+z\left(\alpha,r\right)\right)$. 
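For later numerical work it is convenient to note that the Lorentz factor above reduces to a simple function of the redshift; writing $w\equiv\left(1+z\right)^{2}$ in the stated $v\left(\alpha,r\right)$, a short check gives:

```latex
1-v^{2}=\frac{\left(w+1\right)^{2}-\left(w-1\right)^{2}}{\left(w+1\right)^{2}}=\frac{4w}{\left(w+1\right)^{2}},
\qquad
\sqrt{\frac{1}{1-v^{2}}}=\frac{w+1}{2\sqrt{w}}=\frac{\left(1+z\right)^{2}+1}{2\left(1+z\right)}.
```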
On the other hand, the energy of the individual sounds experienced by the observer is also reduced by the redshift factor $1/\left(1+z\left(\alpha,r\right)\right)$. Hence, the effective energy density is reduced by the factor $1/\left(1+z\left(\alpha,r\right)\right)^{2}$ in the above equation. There is another way to understand this factor. For the matter with radial coordinate $r\left(z\right)$, we consider the gravity field emitted at the corresponding time $t\left(z\right)$. When this gravity field emitted at time $t\left(z\right)$ just arrives at the observer, the effective distance between the matter under consideration and the observer has become $r\left(z\right)\left(1+z\right)$ because of the expansion of the universe. For the observer, this is equivalent to the energy experienced from the matter at $r\left(z\right)$ being reduced by the factor $1/\left(1+z\left(\alpha,r\right)\right)^{2}$. As for $E_{R}$, because the radiation field propagates at the speed $c$, we have\begin{eqnarray} E_{R} & \left(\alpha\right)=\int_{0}^{\infty}\rho_{R0} & \frac{1}{\left(1+z\left(\alpha,r\right)\right)^{2}}r^{2}drd\Omega.\end{eqnarray} It is understandable that the vacuum excitations leading to the gravity effect also have the propagation velocity $c$. 
In this situation, $E_{V}$ for the vacuum excitations (dark energy) is obtained similarly:\begin{eqnarray} E_{V} & \left(\alpha\right)=\int_{0}^{\infty}\rho_{V0} & \frac{1}{\left(1+z\left(\alpha,r\right)\right)^{2}}r^{2}drd\Omega.\end{eqnarray} Assuming that there is a thermal equilibrium between matter and vacuum on the large scale of the universe, we have\begin{equation} E_{MR}\left(\alpha\right)=E_{V}\left(\alpha\right).\end{equation} The above equation also relies on the fact that the smallest space unit ($\sim l_{p}^{3}$) is the same for cold matter, hot matter and vacuum excitations, so that the overall number of degrees of freedom ($\sim V/l_{p}^{3}$, with $V$ the volume of the universe) is the same for cold matter, hot matter and vacuum excitations. It is straightforward to get\begin{equation} \alpha=\frac{\int_{0}^{\infty}\left(\frac{\rho_{M0}}{\rho_{M0}+\rho_{R0}}\sqrt{\frac{1}{1-v^{2}\left(\alpha,r\right)}}+\frac{\rho_{R0}}{\rho_{M0}+\rho_{R0}}\right)\frac{1}{\left(1+z\left(\alpha,r\right)\right)^{2}}r^{2}drd\Omega}{\int_{0}^{\infty}\frac{1}{\left(1+z\left(\alpha,r\right)\right)^{2}}r^{2}drd\Omega}.\end{equation} Combined with Eq. (\ref{eq:radial2}) to get $z\left(\alpha,r\right)$, one can obtain the ratio $\alpha$. The numerical result is $\alpha\thickapprox2.2$, which agrees quantitatively with the value $7/3$ from astronomical observations \cite{d1,d2,d3,d4,d5}. In the co-moving coordinate, the dark energy density is then\begin{equation} \rho_{V0}=\alpha\left(\rho_{M0}+\rho_{R0}\right).\label{eq:dark}\end{equation} Because the co-moving radial coordinate $r\left(z\right)$ is time-independent, $\rho_{M0}$ and $\rho_{R0}$ can also be regarded as the cold matter density and radiation energy density in the co-moving coordinate. For an observer at other times, in the co-moving coordinate, $\rho_{M0}$ and $\rho_{R0}$ are also the cold matter density and radiation energy density for this observer. 
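The self-consistency condition above can be solved by fixed-point iteration. Below is a minimal numerical sketch (ours, not the authors' code): substituting $u=1/\left(1+z\right)$ turns both integrals into integrals over $\left(0,1\right)$, the overall prefactor of $r$ cancels between numerator and denominator, and the present radiation-to-matter ratio is an assumed illustrative value.

```python
import math

# Illustrative present-day ratio rho_R0 / rho_M0 (an assumed value; only the
# condition rho_R0 << rho_M0 matters for the result).
RATIO_R_TO_M = 3.0e-4
A = 1.0 / (1.0 + RATIO_R_TO_M)  # rho_M0 / (rho_M0 + rho_R0)
B = 1.0 - A                     # rho_R0 / (rho_M0 + rho_R0)

def alpha_update(alpha, n=4000):
    """One step of the fixed-point map alpha -> right-hand side of the
    self-consistency equation, after substituting u = 1/(1+z):
      r(u)              = int_u^1 dx / sqrt(alpha x^4 + A x + B),
      (1+z)^-2 r^2 dr   = u^2 r(u)^2 f(u) du  with  f(x) = 1/sqrt(alpha x^4 + A x + B),
      1/sqrt(1 - v^2)   = (1 + u^2) / (2 u).
    The prefactor of r cancels between numerator and denominator."""
    h = 1.0 / n
    mid = [(i + 0.5) * h for i in range(n)]
    f = [1.0 / math.sqrt(alpha * u**4 + A * u + B) for u in mid]
    num = den = r = 0.0
    for i in range(n - 1, -1, -1):          # accumulate r(u) from u = 1 downward
        u = mid[i]
        r += f[i] * h                       # midpoint rule for int_u^1 f dx
        w = r * r * u * u * f[i] * h        # common measure (1+z)^-2 r^2 dr
        gamma = (1.0 + u * u) / (2.0 * u)   # Lorentz factor 1/sqrt(1 - v^2)
        num += (A * gamma + B) * w
        den += w
    return num / den

alpha = 2.0
for _ in range(40):
    alpha = alpha_update(alpha)
print(round(alpha, 2))  # compare with the value alpha ~ 2.2 reported in the text
```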
It is easy to show that the dark energy density is still given by the above equation. Together with this time-independent dark energy density in the co-moving coordinate, we see that $\alpha\approx2.2$ is a universal value once the following conditions are satisfied. (i) There is a big-bang origin of the universe. (ii) The universe and its evolution are isotropic and homogeneous. (iii) In the co-moving coordinate, the radiation energy density is much smaller than the cold matter energy density. (iv) The evolution of the universe is a quasi-equilibrium process. This condition suggests that for the cosmic inflation process, the calculation of $\alpha$ deserves further study. One may ask why we do not introduce the analogous concept of the luminosity distance $d_{L}=r\left(1+z\right)$, so that the temperature field due to matter and radiation at the location of the observer becomes\begin{equation} T_{MR}\left(\alpha\right)\propto\int_{0}^{\infty}\left(\rho_{M0}\sqrt{\frac{1}{1-v^{2}\left(\alpha,r\right)}}+\rho_{R0}\right)\frac{1}{d_{L}^{2}}r^{2}drd\Omega.\end{equation} With the similar idea, the temperature field at the location of the observer due to vacuum excitations is\begin{equation} T_{V}\left(\alpha\right)\propto\int_{0}^{\infty}\rho_{V0}\frac{1}{d_{L}^{2}}r^{2}drd\Omega.\end{equation} From $T_{MR}=T_{V}$, we get\begin{equation} \alpha=\frac{\int_{0}^{\infty}\left(\frac{\rho_{M0}}{\rho_{M0}+\rho_{R0}}\sqrt{\frac{1}{1-v^{2}\left(\alpha,r\right)}}+\frac{\rho_{R0}}{\rho_{M0}+\rho_{R0}}\right)\frac{1}{\left(1+z\left(\alpha,r\right)\right)^{2}}drd\Omega}{\int_{0}^{\infty}\frac{1}{\left(1+z\left(\alpha,r\right)\right)^{2}}drd\Omega}.\end{equation} This method to calculate the dark energy density is wrong. In this method, at the location of the observer, the sum of the vacuum temperature due to the matter and radiation at different locations is adopted. 
As shown previously, this sort of sum of the vacuum temperature due to the matter and radiation is not physically meaningful. With the above equation, the numerical result for $\alpha$ is about $1.2$. As expected, it does not agree with the astronomical observations. In Eq. (\ref{eq:dark}), the dark energy density is given in the time-independent co-moving coordinate. If the evolution of the scale factor $a\left(t\right)$ is considered, $\rho_{V}\left(t\right)$, $\rho_{M}\left(t\right)$ and $\rho_{R}\left(t\right)$ in the ordinary coordinate (where the proper distance $d\left(r,t\right)=a\left(t\right)r$ is adopted) need further study. As shown previously, if the energy-momentum tensor given by Eq. (\ref{eq:darkenergytensor}) is adopted for the dark energy, the whole energy-momentum tensor can be written as\begin{equation} T_{\mu\upsilon}=T_{\mu\upsilon}^{vac}+T_{\mu\nu}^{mat}+T_{\mu\nu}^{rad}.\end{equation} Here $T_{\mu\nu}^{mat}$ and $T_{\mu\nu}^{rad}$ are the energy-momentum tensors for cold matter and radiation. In the evolution of the universe, when the Friedmann-Robertson-Walker metric is used, the energy conservation law $T_{;\mu}^{0\mu}=0$ leads to\begin{equation} \frac{d\rho}{dt}+\frac{3\dot{a}}{a}\left(p+\rho\right)=0.\end{equation} From this we have $\rho\left(t\right)\propto a\left(t\right)^{-3-3w}$. Since $w=-1$ for dark energy, as discussed in the previous section, the dark energy density is constant during the evolution of the universe. The well-known results $\rho_{M}\left(t\right)=\rho_{M0}\left(a\left(t\right)\right)^{-3}$ and $\rho_{R}\left(t\right)=\rho_{R0}\left(a\left(t\right)\right)^{-4}$ follow similarly. For coexisting cold matter, radiation and dark energy, when there is no interchange of energy between different components, these evolutions always hold. This is consistent with the result of the dark energy density in the previous calculations. 
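The scaling $\rho\left(t\right)\propto a^{-3-3w}$ implied by the continuity equation can be checked numerically. The following small sketch (ours, added for illustration) Euler-integrates $d\rho/da=-3\left(1+w\right)\rho/a$ for the three components and recovers $a^{-3}$, $a^{-4}$ and a constant:

```python
import math

def evolve_density(rho0, w, a0=1.0, a1=2.0, steps=100000):
    """Euler-integrate the continuity equation d(rho)/da = -3 (1 + w) rho / a
    from scale factor a0 to a1; the exact solution is rho0 * (a1/a0)**(-3*(1+w))."""
    h = (a1 - a0) / steps
    rho, a = rho0, a0
    for _ in range(steps):
        rho += -3.0 * (1.0 + w) * rho / a * h
        a += h
    return rho

# matter (w=0): rho ~ a^-3 ; radiation (w=1/3): rho ~ a^-4 ; dark energy (w=-1): constant
for w, label in [(0.0, "matter"), (1.0 / 3.0, "radiation"), (-1.0, "dark energy")]:
    print(label, evolve_density(1.0, w))
```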
The above energy conservation law clearly shows a further condition in the previous calculations of the dark energy density, ${\it {i.e.}}$ the time-independent dark energy density relies on the condition that there is no energy interchange (or material conversion) between cold matter and hot matter including radiation. If this energy interchange happens, it is possible that the dark energy density becomes time-dependent. This deserves further theoretical study for the early non-equilibrium universe, to find observable effects. Most quantum field theories predict a huge value for the quantum vacuum energy. It is generally agreed that this huge value should be suppressed by a factor of $10^{-120}$ to satisfy the observational results. Because there are no ``true'' wave functions for these ``quantum zero-point states'', based on our theory, even if there is a huge vacuum energy due to ``quantum zero-point states'', the gravity effect should be multiplied by zero! Another reason is that the temperature of the vacuum including only ``quantum zero-point states'' is zero. The finite temperature characteristic of the vacuum is due to the excitations from the vacuum, which influence the motion of matter. Based on our theory, the cosmological constant problem is not a ``true'' problem at all. In this work, the vacuum energy calculated by us is due to the coupling and thermal equilibrium between matter and the vacuum background. The coupling and thermal equilibrium lead to various ``true'' excitations from the vacuum. What we calculated in this work in fact concerns those excitations which can be described by sophisticated and delocalized wave functions. In Fig. 10, we give a summary of the role of different forms of vacuum energy. 
\begin{figure} \centering \includegraphics[width=0.8\textwidth]{Slide10} \caption{(Color online) Shown are the different roles of the fluctuating vacuum energy $\rho_{ex}$. Because of the overall thermal equilibrium between matter and vacuum, and the requirement of general covariance on the gravity effect of the vacuum excitations, $\rho_{ex}$ fluctuates around $\rho_{VG}+\rho_{V}$. The fluctuations in $\rho_{ex}$ lead to the gravity effect of general relativity for ordinary matter and radiation, while $\rho_{V}$ (dark energy) leads to a repulsive gravity effect which gives a physical mechanism for the accelerating universe.} \end{figure} For the sake of completeness, we now briefly consider the evolution of our universe. $g_{\mu\nu}\left(t\right)$ for a flat universe is\begin{equation} g_{\mu\nu}\left(t\right)=\left(\begin{array}{cccc} -1 & 0 & 0 & 0\\ 0 & a^{2}\left(t\right) & 0 & 0\\ 0 & 0 & a^{2}\left(t\right) & 0\\ 0 & 0 & 0 & a^{2}\left(t\right)\end{array}\right).\label{FRW}\end{equation} We take the present value of the scale factor $a$ to be $1$. If the matter (including dark matter) and radiation are regarded as classical when the evolution of the whole universe is studied, $T_{\mu\nu}$ in Eq. (\ref{NewEFE}) takes the ordinary form. From Eqs. (\ref{NewEFE}) and (\ref{FRW}), we have \cite{Peeble} \begin{equation} \left(\frac{\dot{a}}{a}\right)^{2}=\frac{8\pi G}{3}\left[\rho_{M0}\left(1+z\right)^{3}+\rho_{R0}\left(1+z\right)^{4}+\rho_{V}\right],\end{equation} and\begin{equation} \frac{\ddot{a}}{a}=-\frac{8\pi G}{3}\left[\rho_{M0}\left(1+z\right)^{3}/2+\rho_{R0}\left(1+z\right)^{4}-\rho_{V}\right].\end{equation} \section{field equation including quantum gravity effect of matter} The quantum gravity effect of vacuum excitations has been included in Einstein's field equation, which explains in a simple way the remarkable astronomical observations of dark energy. 
The basic guidelines for combining the quantum gravity effect of matter with general relativity are: (1) The direction of the acceleration field due to the gravity force should be determined by the tendency to decrease the free energy. (2) The field equation should satisfy the principle of general covariance. We first consider further the abnormal quantum gravity effect based on Eqs. (\ref{acc}) and (\ref{temfield}). For a particle with mass $m$ and wave function $\phi$, Eqs. (\ref{acc}) and (\ref{temfield}) have the following equivalent form.\begin{equation} \mathbf{a}_{c}\left(\mathbf{R}\right)=Gm\int d^{3}\mathbf{x}\phi^{\ast}\left(\mathbf{x},t\right)\frac{\mathbf{x}-\mathbf{R}}{\left\vert \mathbf{x}-\mathbf{R}\right\vert ^{3}}\phi\left(\mathbf{x},t\right)\label{aclassical}\end{equation} \begin{equation} \mathbf{a}_{q}\left(\mathbf{R}\right)=a_{c}\left(\mathbf{R}\right)\frac{\nabla_{\mathbf{R}}a_{c}\left(\mathbf{R}\right)}{\left\vert \nabla_{\mathbf{R}}a_{c}\left(\mathbf{R}\right)\right\vert }.\label{aquantum}\end{equation} Here $\mathbf{a}_{c}$ is the acceleration field without considering the quantum effect of gravity, and $\mathbf{a}_{q}$ is the acceleration field when the quantum effect of gravity is considered. For a classical particle, $\mathbf{a}_{q}=\mathbf{a}_{c}$. Note that $a_{c}=\left\vert \mathbf{a}_{c}\right\vert $. From Eq. (\ref{aclassical}), we have $\bigtriangledown\times\mathbf{a}_{c}=0$. In this situation, we get a simple relation between $\mathbf{a}_{c}$ and $\mathbf{a}_{q}$, which is given by\begin{equation} \mathbf{a}_{q}\left(\mathbf{R}\right)=f_{q}\left(\mathbf{R}\right)\mathbf{a}_{c}\left(\mathbf{R}\right).\end{equation} Here $f_{q}\left(\mathbf{R}\right)=\pm1$. 
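Eqs. (\ref{aclassical}) and (\ref{aquantum}) can be illustrated with a toy numerical sketch (ours, under assumptions not in the text: units $G=m=1$ and a spherically symmetric Gaussian wave packet). For a spherical $\left\vert \phi\right\vert ^{2}$, the angular integral in Eq. (\ref{aclassical}) reduces to a shell-theorem form, $\left\vert \mathbf{a}_{c}\left(R\right)\right\vert =M\left(<R\right)/R^{2}$, and the sign rule gives $f_{q}=-1$ (repulsive) inside the radius where $\left\vert \mathbf{a}_{c}\right\vert $ peaks:

```python
import math

SIGMA = 1.0  # assumed width of the Gaussian wave packet (arbitrary units)

def prob_density(r):
    """|phi(r)|^2 for a normalized 3D Gaussian wave packet of width SIGMA."""
    return math.exp(-r * r / SIGMA**2) / (math.pi ** 1.5 * SIGMA**3)

def a_c(R, n=2000):
    """|a_c(R)| from Eq. (aclassical) for a spherical packet with G = m = 1:
    by the shell theorem the angular integral leaves M(<R) / R^2."""
    h = R / n
    m_enc = sum(4.0 * math.pi * ((i + 0.5) * h) ** 2
                * prob_density((i + 0.5) * h) * h for i in range(n))
    return m_enc / (R * R)

def f_q(R, dR=1e-4):
    """Sign rule: a_q points toward increasing |a_c|; f_q = +1 keeps the
    inward-pointing classical direction, f_q = -1 reverses it."""
    grad = (a_c(R + dR) - a_c(R - dR)) / (2.0 * dR)
    return -1 if grad > 0 else +1  # |a_c| still rising outward -> repulsive

# |a_c| grows roughly like R near the center and falls off like 1/R^2 far
# outside, so f_q flips from -1 (inside the packet) to +1 (outside).
print(f_q(0.3), f_q(3.0))
```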
The sign $\pm$ at a location $\mathbf{R}$ is determined by the rule that the direction of the acceleration $\mathbf{a}_{q}\left(\mathbf{R}\right)$ should point toward increasing $T\left(\mathbf{R}\right)\sim\left\vert \mathbf{a}_{c}\left(\mathbf{R}\right)\right\vert =\left\vert \mathbf{a}_{q}\left(\mathbf{R}\right)\right\vert $ in the neighboring region including $\mathbf{R}$. In the classical case, this rule, which physically originates from the property of the free energy, explains why the gravity force between two classical objects is attractive. It is natural to generalize this rule to quantum wave packets, because classical mechanics has been replaced by quantum mechanics when fundamental physical laws are addressed. In reality, the wave function of many particles may be very complex. For brevity's sake, we consider a system of identical bosons, which can be directly generalized to more complex cases. The many-body wave function is assumed to be\begin{equation} \Psi_{matter}=\Psi_{matter}\left(\mathbf{x}_{1},\mathbf{x}_{2},\cdot\cdot\cdot,\mathbf{x}_{N},t\right).\end{equation} The single-particle density matrix is given by\begin{eqnarray} \rho_{1}\left(\mathbf{x},\mathbf{x}^{\prime},t\right) & = & \int\Psi_{matter}^{\ast}\left(\mathbf{x},\mathbf{x}_{2},\cdot\cdot\cdot,\mathbf{x}_{N},t\right)\Psi_{matter}\left(\mathbf{x}^{\prime},\mathbf{x}_{2},\cdot\cdot\cdot,\mathbf{x}_{N},t\right)d^{3}\mathbf{x}_{2}\cdot\cdot\cdot d^{3}\mathbf{x}_{N}\notag\\ & \equiv & \left\langle \widehat{\Psi}^{\dag}\left(\mathbf{x},t\right)\widehat{\Psi}\left(\mathbf{x}^{\prime},t\right)\right\rangle .\end{eqnarray} This single-particle density matrix can be diagonalized, i.e. 
written in the form \cite{Leggett}\begin{equation} \rho_{1}\left(\mathbf{x},\mathbf{x}^{\prime},t\right)=\sum_{j}N_{j}\phi_{j}^{\ast}\left(\mathbf{x},t\right)\phi_{j}\left(\mathbf{x}^{\prime},t\right).\end{equation} It is natural to consider the field equation of quantum gravity with this diagonalized single-particle density matrix. To calculate the gravity field of the whole system when the quantum effect is fully considered, we first calculate the following field equation for $N_{1}$ bosons described by the wave function $\phi_{1}$ \begin{equation} R_{\mu v}\left(1\right)-\frac{1}{2}g_{\mu\nu}\left(1\right)R\left(1\right)=-8\pi GT_{\mu\upsilon}\left(1\right).\label{Eq1F}\end{equation} Here $T_{\mu\upsilon}\left(1\right)\equiv T_{\mu\upsilon}\left[N_{1},\phi_{1}\right]$ can be calculated with the standard method. The calculation based on the above equation does not yet take the sign problem into account. From the following formula\begin{equation} \frac{d^{2}x^{\mu}}{d\tau^{2}}+\Gamma_{\upsilon\lambda}^{\mu}\left(1\right)\frac{dx^{v}}{d\tau}\frac{dx^{\lambda}}{d\tau}=0,\label{Gamma}\end{equation} we can get the acceleration field $\mathbf{a}\left(1\right)$. With the rule to solve the sign problem, we get a new field equation\begin{equation} R_{\mu v}\left(1^{\prime}\right)-\frac{1}{2}g_{\mu\nu}\left(1^{\prime}\right)R\left(1^{\prime}\right)=-8\pi Gf_{1}\left(x^{\tau}\right)T_{\mu\upsilon}\left(1\right).\label{newFE}\end{equation} Here, $f_{1}\left(x^{\tau}\right)=\pm1$ is determined by the sign rule and by the acceleration field $\mathbf{a}\left(1\right)$ obtained from Eqs. (\ref{Eq1F}) and (\ref{Gamma}). It is clear that the above new field equation still satisfies general covariance. One should note that if, at $x^{\tau}=\beta^{\tau}$, $f_{1}\left(\beta^{\tau}\right)=-1$ based on the sign rule, then in calculating the above new field equation at $\beta^{\tau}$, $T_{\mu\upsilon}\left(1\right)$ in the whole space-time is still the result based on the standard method. 
More specifically, Eq. (\ref{newFE}) is not equivalent to the following field equation\begin{eqnarray} R_{\mu v}\left(1^{\prime}\right)-\frac{1}{2}g_{\mu\nu}\left(1^{\prime}\right)R\left(1^{\prime}\right) & = & -8\pi GT_{\mu\upsilon}\left(1^{\prime}\right),\notag\\ T_{\mu\upsilon}\left(1^{\prime}\right) & = & \left\{ \begin{array}{c} T_{\mu v}\left(x^{\tau}\right),x^{\tau}\in\Sigma_{1}\\ -T_{\mu v}\left(x^{\tau}\right),x^{\tau}\in\Sigma_{2}\\ T_{\mu v}\left(x^{\tau}\right),x^{\tau}\in\Sigma_{3}\\ \cdot\cdot\cdot\cdot\cdot\cdot\end{array}\right..\end{eqnarray} It is very clear that this `wrong' field equation is not self-consistent. The case of $N_{2}$ bosons described by the wave function $\phi_{2}$ is treated similarly. It is straightforward and natural to get the following field equation.\begin{equation} R_{\mu v}-\frac{1}{2}g_{\mu\nu}R=-8\pi G\left(\sum\limits _{j}f_{j}\left(x^{\tau}\right)T_{\mu\upsilon}\left(j\right)+T_{\mu\nu}^{vac}\right).\label{FnewFE}\end{equation} Here $T_{\mu\upsilon}\left(j\right)$ is the ordinary energy-momentum tensor for $N_{j}$ identical particles described by the wave function $\phi_{j}$, and $f_{j}\left(x^{\tau}\right)$ is determined by the sign rule for this mode. For the sake of completeness, $T_{\mu\nu}^{vac}$ is also included in the above equation. \section{summary and discussion} In summary, Einstein's field equation is extended to include the quantum effect of gravity, from the general assumption that the gravity force originates from the coupling and thermal equilibrium between matter and the vacuum background. With this new field equation and physical mechanism of the gravity force, without any fitting parameter, the accelerating universe revealed by remarkable astronomical observations is quantitatively explained. We stress that the pioneering works on the connection between thermodynamics and the gravity force, and the astronomical observations of the accelerating universe, played a key role in the construction of the present theory. 
In fact, the present work was directly stimulated by these two developments. The present paper is a significant extension of our previous works \cite{xiong1,xiong2,xiong3,xiong4}, with many physical concepts clarified and a new method added to calculate the dark energy density by including general relativity. The present theory has several significant features. (1) It is a simple extension of Einstein's field equation to include the quantum effect of gravity. (2) Different from Newton's gravity law and Einstein's general relativity, the present theory gives an interpretation of why the gravity force between two classical objects is attractive. At the least, the weak equivalence principle also becomes a derivable property. (3) With the same reasoning used to explain the attractive gravity force between two classical objects, the present theory predicts a new physical effect, i.e. the abnormal quantum gravity effect. (4) The present theory quantitatively agrees with the astronomical observations of the accelerating universe without any fitting parameter. (5) If the present theory is verified, it has wide applications, even in condensed matter physics and in experimental tests of the many-worlds interpretation and the de Broglie-Bohm theory, etc. (6) Last but maybe most important, our theory can be falsified by an experiment on earth measuring the gravity acceleration in the interior of a superfluid helium sphere. Because of the above features, we believe the present theory should be taken seriously. In particular, these features present a great opportunity for relevant experiments on earth and astronomical observations. One of the main results of the present work is the possible emergence of the quantum effect of gravity at the macroscopic scale. 
Nevertheless, it is an interesting issue to consider whether this macroscopic and abnormal quantum gravity effect could be derived from a microscopic mechanism of quantum gravity, such as superstring theory with a positive cosmological constant \cite{Kachru}, loop quantum gravity and twistor theory \cite{Penrose}, \textit{etc}. Recently, the relation between gravity and thermodynamics has been studied in the framework of loop quantum gravity \cite{Smolin}, which provides a possible clue to this problem. Although the microscopic interaction mechanism of quantum gravity at the Planck scale is an unsolved problem, the thermodynamics of the macroscopic quantum gravity effect of this work is still meaningful. When statistical mechanics was initiated in the 1870s by Boltzmann, the concept of the atom was still an unrecognized hypothesis, not to mention the collision mechanism between atoms due to the electromagnetic interaction. However, this did not diminish the power of statistical mechanics in the description of gas dynamics. On the contrary, the theoretical and experimental advances of statistical mechanics greatly promoted the further understanding of atoms. Once the thermodynamic origin of gravity is verified by future experiments and astronomical observations, the abnormal quantum gravity effect would also greatly promote the understanding of microscopic quantum gravity at the Planck scale. In this work, the repulsive gravity effect for a superfluid helium sphere and dark energy physically originate from the same mechanism: the wave packet of energy is delocalized on the spatial scale we study. If the de Broglie-Bohm theory is correct, we would not observe the abnormal gravity effect for superfluid helium. However, for dark energy originating from various vacuum excitations, it is still possible that the energy is carried by various wave packets, rather than particles, because there are no stable particles in the vacuum excitations. 
Hence, even if the abnormal gravity effect for superfluid helium is excluded in future experiments, it is still possible that the theory of the present work applies to the accelerating universe. One may ask whether there is an abnormal quantum gravity effect in the interior of a neutron star. If the temperature (not the vacuum temperature) of the neutron star were zero and the wave packets of all neutrons were delocalized in the whole interior of the neutron star, we would expect an abnormal quantum gravity effect. However, the temperature of the neutron star is in fact extremely high; the typical temperature of a neutron star is about $10^{6}$ kelvins. In addition, the neutron star is not an ideal Fermi system when the interaction between neutrons is considered. The neutron star comprises very complex structures, such as the outer core consisting of a neutron-proton Fermi liquid and the inner core consisting of a possible quark-gluon plasma. In quantum statistical mechanics, the approximate model of the neutron star as an ideal Fermi system holds because of the validity of the local density approximation. The validity of this ideal Fermi system does not mean that the wave packets of all neutrons are delocalized in the whole interior of the neutron star. Considering the extremely high temperature and complex structure of the neutron star, it is more reasonable to assume that the delocalization scale of the wave packets of the neutrons is much smaller than the size of the neutron star (the typical size of a neutron star is about $10$ km). In this situation, we still expect the normal gravity effect in the interior of the neutron star. \begin{acknowledgments} We thank Prof. Biao Wu for discussions and his great encouragement. We also thank Prof. W. Vincent Liu for his great encouragement. This work was supported by the National Key Basic Research and Development Program of China under Grant No. 2011CB921503 and NSFC 10875165. 
\end{acknowledgments} \newpage{} \section*{Appendix} To get Eq. (\ref{entropy}) about the entropy increase of a particle undergoing a displacement, there is another requirement: the loss of information about the trajectory of the particle. In Fig. 11(a), we show the motion of a particle along the dashed line. Because of the location resolution from the view of the vacuum background, the dashed line is partitioned by boxes with side length $l_{c}$. At time $t_{1}$, the wave packet of the particle is shown by the dashed line. At a later time $t_{2}$ ($=t_{1}+l_{c}/v$), the wave packet of the particle is shown by the solid line. At time $t_{2}$, the information about the location of the particle at time $t_{1}$ recorded by the vacuum background will be lost for two reasons: (1) At time $t_{2}$, although the location of the particle with spatial resolution $l_{c}$ is recorded by the vacuum background, the velocity information is highly uncertain. From $\Delta x\Delta p\geq\hbar/2$, the velocity uncertainty of the particle is $\Delta v\sim c$. In this situation, at time $t_{2}$, after the position of the particle is recorded by the vacuum background, the history of the particle is lost from the view of the vacuum background. (2) At time $t_{1}$, although the location with spatial resolution $l_{c}$ is recorded by the vacuum background, it will be lost rapidly after a displacement at time $t_{2}$. At time $t_{1}$, the location record is due to the coupling between the particle and the vacuum background. In other words, due to the coupling between the particle and the vacuum background, at the location of the particle at time $t_{1}$, the vacuum has larger fluctuations. It is these spatially dependent vacuum fluctuations that record the location of the particle with resolution $l_{c}$. 
Because the vacuum background is highly fluctuating, at time $t_{2}$, the location information of time $t_{1}$ will be lost because of the re-establishment of the thermal equilibrium in the vacuum background. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{SlideS1} \caption{(Color online) Figs. (a) and (b) illustrate the physical picture behind the starting point given by Eq. (\protect\ref{entropy}). Fig. (c) shows the essential difference between $l_{c}$ and $l_{de}$.} \end{figure} To further justify the above derivation of Eq. (\ref{entropy}), we give several discussions here. (i) In the above derivation, we assume a strong coupling between the particle and the vacuum background, so that the location of the particle can be recorded by the vacuum background. When quantum field theory is used, the vacuum background is in fact full of various gauge fields and virtual matter-antimatter pairs. This implies a strong coupling between the particle and the vacuum background. (ii) One should distinguish two sorts of coherence lengths: the coherence length $l_{c}$ and the ordinary thermal de Broglie wavelength. In the non-relativistic approximation, the thermal de Broglie wavelength reflects the spatial coherence length, which is $l_{de}\sim\hbar/\Delta p$. We stress that $l_{c}$ and $l_{de}$ have different physical origins. $l_{c}$ originates from the coupling between the particle and the vacuum background. Because of the extremely large vacuum zero-point energy and the strong coupling between matter and vacuum, special relativity and quantum mechanics are used to calculate $l_{c}$. In contrast to $l_{c}$, $l_{de}$ originates from the coupling or interaction between the particle we study and other particles or the environment. To clearly elaborate on this, we take a hydrogen atom as an example, which consists of an electron and a proton. 
For a system consisting of a large number of hydrogen atoms, $l_{de}=\sqrt{2\pi}\hbar/\sqrt{mk_{B}T_{sys}}$, with $T_{sys}$ being the ordinary temperature in thermal equilibrium. $l_{de}$ physically originates from the interatomic interaction or the coupling with the environment. For Bose-Einstein condensation of atomic hydrogen \cite{HydrogenBEC}, $l_{de}$ can reach $300$ \AA{}. In Fig. 11(c), we show $l_{c}$ and $l_{de}$ for an atom. (iii) Although there is a velocity uncertainty $\Delta v\sim c$ for a particle due to the strong coupling with the vacuum, we stress that this does not mean that the particle will have highly random motion in the vacuum without other forces. The reason lies in the fact that the coupling between a particle and the vacuum will lead to a ``dressed'' state including the local vacuum excitations. As a whole, the velocity uncertainty can be much smaller than $c$. This is somewhat similar to the electron in a hydrogen atom. From the point of view of the proton in the hydrogen atom, the velocity uncertainty of the electron is about $\hbar/m_{e}l_{hy}$, with $m_{e}$ being the electron mass and $l_{hy}$ the size of the hydrogen atom. However, for the hydrogen atom as a whole, the velocity uncertainty is determined by the wave packet of the hydrogen atom, rather than that of the electron. In a sense, it is possible that there is a physical mechanism of spatially resolved quantum non-demolition detection \cite{non} of the particle by the vacuum with spatial resolution $l_{c}$.
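The formula $l_{de}=\sqrt{2\pi}\hbar/\sqrt{mk_{B}T_{sys}}$ is easy to evaluate; the following quick sketch does so for atomic hydrogen (the sample temperature of a few mK is an illustrative choice of ours, picked to land near the quoted $300$ \AA{} scale, not a value from the text):

```python
import math

HBAR = 1.0545718e-34   # reduced Planck constant, J*s
K_B = 1.380649e-23     # Boltzmann constant, J/K
M_H = 1.6735575e-27    # mass of a hydrogen atom, kg

def l_de(mass, temperature):
    """Thermal de Broglie wavelength l_de = sqrt(2*pi)*hbar / sqrt(m*k_B*T), in meters."""
    return math.sqrt(2.0 * math.pi) * HBAR / math.sqrt(mass * K_B * temperature)

# At an illustrative temperature of ~3 mK, l_de for hydrogen is of order
# 300 Angstrom, the magnitude quoted above for hydrogen BEC experiments.
print(l_de(M_H, 3e-3) * 1e10, "Angstrom")
```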
\section{Cluster automorphisms}\label{sect 1} \subsection{ Cluster algebras} We recall that a {\it quiver} is a quadruple $Q=(Q_0,Q_1,s,t)$ consisting of two sets: $Q_0$ (whose elements are called \emph{points}) and $Q_1$ (whose elements are called \emph{arrows}), and of two maps $s,t:Q_1\to Q_0$ associating to each $\alpha \in Q_1$ its \emph{source} $s(\alpha)$ and its \emph{target} $t(\alpha)$, respectively. Given a point $i\in Q_0$, we denote by $i^-=\{ \alpha\in Q_1\mid t(\alpha)=i \}$ the set of arrows ending in $i$, and by $i^+=\{ \alpha\in Q_1\mid s(\alpha)=i \}$ the set of arrows starting in $i$. Let now $Q$ be a connected finite quiver without oriented cycles of length one or two. Let $n=|Q_0|$ denote the number of points in $Q$, let $\mathbf{x}=\{x_1,\dots,x_n\}$ be a set of $n$ variables, and denote the points by $Q_0=\{1,\ldots,n \}$, where we agree that the point $i$ corresponds to the variable $x_i$. We consider the field ${\cal F} = \mathbb{Q}(x_1,\dots,x_n)$ of rational functions in $x_1,\dots,x_n$, which we call the {\it ambient field}. The {\it cluster algebra} ${\cal A} = {\cal A}(\mathbf{x},Q)$ is a $\mathbb{Z}$-subalgebra of ${\cal F}$ defined by a set of generators obtained recursively from $\mathbf{x}$ in the following manner. Let $i$ be such that $1\leq i \leq n.$ The {\it mutation} $\mu_{x_i,\mathbf{x}}$ of $(\mathbf{x},Q)$ (or $\mu_{x_i}$ or $\mu_i$ for brevity if there is no ambiguity) is defined as follows. Firstly, $Q$ becomes a new quiver $Q^\prime$ obtained from $Q$ by: \begin{itemize} \item[(a)] inserting a new arrow $k\to j$ for each path $k\to i \to j$ of length two with midpoint $i$; \item[(b)] inverting all arrows of $Q$ passing through $i$; \item[(c)] deleting each occurrence of a cycle of length two. 
\end{itemize} Secondly, $\mathbf{x}$ becomes a new set of variables $\mathbf{x}^\prime=(\mathbf{x}\setminus\{x_i\}) \cup \{x_i^\prime\}$ where $x_i^\prime\in{\cal F}$ is defined by the so-called {\it exchange relation}: \begin{equation*} x_i x_i^\prime = \prod_{\alpha\in i^+} x_{t(\alpha)} + \prod_{\alpha\in i^-} x_{s(\alpha)}. \end{equation*} Let ${\cal X}$ be the union of all possible sets of variables obtained from $\mathbf{x}$ by successive mutations. Then ${\cal A} = {\cal A}(\mathbf{x}, Q)$ is the $\mathbb{Z}$-subalgebra of ${\cal F}$ generated by ${\cal X}$. Each pair $(\tilde{\mathbf{x}}, \tilde{Q})$ obtained from $(\mathbf{x}, Q)$ by successive mutations is called a {\it seed}, and $\tilde{\mathbf{x}}$ is called a {\it cluster}. The elements $\tilde{x}_1,\dots, \tilde{x}_n$ of a cluster $\tilde{\mathbf{x}}$ are {\it cluster variables}. Each cluster is a transcendence basis for the ambient field $\mathcal{F}$. The pair $(\mathbf{x}, Q)$ is the {\it initial seed} and $\mathbf{x}$ is the {\it initial cluster}. It has been shown in \cite[Theorem 3]{GSV} that for every seed $(\tilde\mathbf{x},\tilde Q)$ the quiver $\tilde Q$ is uniquely defined by the cluster $\tilde \mathbf{x}$, and we use the notation $Q(\tilde\mathbf{x})$ for the quiver of the cluster $\tilde\mathbf{x}$. More precisely, there is a canonical bijection $p$ from the cluster $\tilde \mathbf{x}$ to the set of points of the quiver $Q(\tilde\mathbf{x})$. We write $p_x$ for the point in $Q(\tilde\mathbf{x})$ corresponding to the cluster variable $x\in \tilde\mathbf{x}.$ We recall two results from the theory of cluster algebras. The so-called {Laurent phenomenon} is the fact that each cluster variable can be expressed as a Laurent polynomial in the $x_i$, with $i=1,\dots,n$, so that ${\cal A} \subseteq {\mathbb{Z}}[x_1^{\pm 1},\dots, x_n^{\pm 1}]$, see \cite{FZ1}. 
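Steps (a)-(c) together with the exchange relation can be encoded by the standard skew-symmetric matrix mutation of Fomin and Zelevinsky. The following minimal sketch is an illustration, not code from the paper; it assumes the convention $b_{ij}=\#\{\text{arrows } i\to j\}-\#\{\text{arrows } j\to i\}$ and checks the computation on the quiver $1\leftarrow 2\leftarrow 3$:

```python
from fractions import Fraction

def mutate_matrix(B, k):
    """Matrix mutation at index k.

    B is the skew-symmetric integer matrix of the quiver, with
    B[i][j] = (#arrows i -> j) - (#arrows j -> i); the single formula
    below realises steps (a)-(c) of quiver mutation at once.
    """
    n = len(B)
    return [[-B[i][j] if k in (i, j)
             else B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
             for j in range(n)]
            for i in range(n)]

def exchange(B, xs, k):
    """New cluster variable x_k' from the exchange relation
    x_k * x_k' = prod_{alpha in k^+} x_{t(alpha)} + prod_{alpha in k^-} x_{s(alpha)}."""
    into_k, out_of_k = Fraction(1), Fraction(1)
    for i, x in enumerate(xs):
        if B[i][k] > 0:            # b_{ik} arrows i -> k, i.e. alpha in k^-
            into_k *= x ** B[i][k]
        elif B[i][k] < 0:          # -b_{ik} arrows k -> i, i.e. alpha in k^+
            out_of_k *= x ** (-B[i][k])
    return (out_of_k + into_k) / xs[k]

# Quiver 1 <- 2 <- 3 (arrows 2 -> 1 and 3 -> 2), at a generic rational point:
B = [[0, -1, 0], [1, 0, -1], [0, 1, 0]]
xs = [Fraction(2), Fraction(3), Fraction(5)]
assert exchange(B, xs, 0) == (1 + xs[1]) / xs[0]   # x_1' = (1 + x_2)/x_1
assert mutate_matrix(mutate_matrix(B, 0), 0) == B  # mutation is an involution
```

Evaluating at a generic rational point, as in the last two lines, is a cheap way to verify identities between the rational functions produced by mutation.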
Also, ${\cal A}$ is {\it of finite type}, that is, the set ${\cal X}$ of all cluster variables of $\cal A$ is finite, if and only if there exists a sequence of mutations transforming $Q$ into a Dynkin quiver \cite{FZ2}. In the latter case (as well as in many others), the so-called {\it positivity conjecture} holds true, that is, the cluster variables are Laurent polynomials with non-negative coefficients or, equivalently, the cluster variables are contained in ${\mathbb{Z}_{\geq 0}}[x_1^{\pm 1},\dots, x_n^{\pm 1}]$, see \cite{MSW,N,AsRS}. \subsection{Main definition} We define cluster automorphisms as follows. \begin{definition} \label{def_main} Let $\cal A$ be a cluster algebra, and let $f:{\cal A} \to {\cal A}$ be an automorphism of ${\mathbb Z}$-algebras. Then $f$ is called a \emph{cluster automorphism} if there exists a seed $(\mathbf{x},Q)$ of ${\cal A}$ such that the following conditions are satisfied: \begin{itemize} \item [\textup{(CA1)}] $f(\mathbf{x})$ is a cluster; \label{def1} \item [\textup{(CA2)}] $f$ is compatible with mutations, that is, for every $x\in \mathbf{x}$, we have $$f(\mu_{x,\mathbf{x}}(x)) = \mu_{f(x),f(\mathbf{x})}(f(x)).$$ \label{def2} \end{itemize} \end{definition} \begin{remark} \label{rmk_initial} \begin{itemize} \item[\rm(a)] As we shall see in Proposition \ref{prop1} below, if $\mathcal{A}$ is a cluster algebra, and $f:\mathcal{A\rightarrow A}$ is an automorphism of $\mathbb{Z}$-algebras, then $f$ is a cluster automorphism if and only if it satisfies properties \textup{(CA1)} and \textup{(CA2)} for \emph{every} seed $\left( \mathbf{x}^{\prime},Q^{\prime }\right)$ of $\mathcal{A}$; thus it sends clusters to clusters and commutes with any sequence of mutations.
\item[\rm(b)] Every cluster automorphism $f$ is uniquely determined by its value on the initial cluster variables $x_1,\dots, x_n$ and thus extends in a unique way to an automorphism of the ambient field ${\cal F} = \mathbb{Q}(x_1, \dots, x_n)$ by \begin{equation*} \frac{p(x_1,\dots,x_n)}{q(x_1,\dots,x_n)} \mapsto \frac{p(f(x_1), \dots, f(x_n))}{q(f(x_1), \dots, f(x_n))}, \end{equation*} for all polynomials $p,q$. \end{itemize} The converse, however, is not true, as shown in the following Example. \end{remark} \begin{example}\label{ex 1} \label{ex_mutation} We give an example of a $\mathbb Z$-automorphism of the ambient field $\cal F$ which does not restrict to a cluster automorphism of $\cal A$. Let $Q$ be the following quiver \begin{equation*} \xymatrix{ 1&2\ar[l]&3\ar[l]} \end{equation*} and $\mathbf{x}=(x_1,x_2,x_3)$. Clearly, any change of transcendence basis of ${\cal F} = {\mathbb Q}(x_1,x_2,x_3)$ induces an automorphism of $\cal F$, and such a change is induced, for instance, by a mutation. Let us define $f:{\cal F}\to {\cal F}$ by $f=\mu_{x_1}$, that is \begin{eqnarray*} &&f(x_1) = \mu_{x_1}(x_1)= \frac{1+x_2}{x_1},\\ &&f(x_2) = \mu_{x_1}(x_2) = x_2,\\ &&f(x_3) = \mu_{x_1}(x_3) = x_3. \end{eqnarray*} Then $f(\mathbf{x}) = \mu_{x_1}(\mathbf{x})$ is a cluster. On the other hand, a straightforward calculation gives that \begin{equation*} f\mu_{x_2, \mathbf{x}}(x_2) = \frac{1+x_2+x_1x_3}{x_1x_2} \end{equation*} while \begin{equation*} \mu_{f(x_2),f(\mathbf{x})}f(x_2) = \frac{x_1+x_3+x_2x_3}{x_1x_2}. \end{equation*} Thus condition \textup{(CA2)} is not satisfied and $f$ is not a cluster automorphism of $\cal A$. The above automorphism does not even map cluster variables to cluster variables. 
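The inequality between the two displayed expressions can be confirmed by evaluating both sides of (CA2) at a generic rational point; exact rational arithmetic avoids any symbolic machinery. A minimal sketch (the sample values are arbitrary choices, not from the text):

```python
from fractions import Fraction

# Generic sample point for the initial cluster of the quiver 1 <- 2 <- 3.
x1, x2, x3 = Fraction(2), Fraction(3), Fraction(5)

# Mutation of x2 in the initial seed: the products run over 2 -> 1 and 3 -> 2.
mu_x2 = (x1 + x3) / x2

# f = mu_{x1} substitutes x1 -> (1 + x2)/x1 and fixes x2 and x3,
# so applying f to mu_x2 gives:
f_of_mu = ((1 + x2) / x1 + x3) / x2

# In the mutated seed the quiver is 1 -> 2 <- 3, so mutating f(x2) = x2 there
# multiplies the variables at the sources of the two incoming arrows.
mu_of_f = (1 + ((1 + x2) / x1) * x3) / x2

assert f_of_mu == (1 + x2 + x1 * x3) / (x1 * x2)   # first displayed expression
assert mu_of_f == (x1 + x3 + x2 * x3) / (x1 * x2)  # second displayed expression
assert f_of_mu != mu_of_f                          # so (CA2) fails
```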
Indeed, one of the nine cluster variables in the cluster algebra $\mathcal{A}$ has the Laurent polynomial expansion \[\frac{x_1 + x_1x_2 + x_3}{x_2x_3} \] and applying $f$ to this cluster variable gives \[ \frac{1+2x_2+x_2^2 + x_1x_3}{x_1x_2x_3} \] which is not a cluster variable in $\mathcal{A}$. \end{example} \subsection{Equivalent characterisations} Since mutations induce maps of a cluster algebra onto itself, we may wonder why in the previous example we do not obtain a cluster automorphism. As we see below, the reason is that the associated quiver $Q(f(\mathbf{x}))$ \begin{equation*} \xymatrix{p_{f(x_1)}\ar[r]&p_{f(x_2)}&p_{f(x_3)}\ar[l]}, \end{equation*} is isomorphic neither to the original quiver $Q$ nor to its opposite $Q^{\textup{op}}$. When we say that a sequence of mutations $\mu$ transforms a quiver $Q$ into itself, we mean that for any $\alpha \in Q_1$ we have $\mu(s(\alpha))=s(\mu(\alpha))$ and $\mu(t(\alpha))=t(\mu(\alpha))$. When we say that $\mu$ transforms $Q$ into $Q^{op}$, we mean that for any $\alpha\in Q_1$ we have $\mu(s(\alpha))=t(\mu(\alpha))$ and $\mu(t(\alpha))=s(\mu(\alpha))$. \begin{lemma}\label{lemma1} Let $f$ be a $\mathbb{Z}$-algebra automorphism of $\mathcal{A}$. Then $f$ is a cluster automorphism if and only if there exists a seed $(\mathbf{x},Q)$ such that $f(\mathbf{x})$ is a cluster and one of the following two conditions is satisfied: \begin{itemize} \item[(a)] there exists an isomorphism of quivers $\varphi : Q\longrightarrow Q(f(\mathbf{x}))$ such that $\varphi(p_{x})=p_{f(x)}$ for all $p_{x}\in Q_0$, or \item[(b)] there exists an isomorphism of quivers $\varphi : Q^{\textup{op}}\longrightarrow Q(f(\mathbf{x}))$ such that $\varphi(p_{x})=p_{f(x)}$ for all $p_x\in Q_0$. \end{itemize} \end{lemma} \begin{proof} Consider a cluster $\mathbf{x}$ in $\mathcal{A}$. 
For a cluster variable $x_i$ in $\mathbf{x}$, corresponding, say, to the point $i\in Q_0$, the exchange relation reads: \begin{equation*} \mu_{x_i,\mathbf{x}}(x_i) = \frac{1}{x_i} \left( \prod_{\alpha \in i^+ } x_{t(\alpha)} + \prod_{\alpha \in i^-} x_{s(\alpha)} \right). \end{equation*} Since $f$ is an algebra homomorphism, this implies \begin{equation} f(\mu_{x_i,\mathbf{x}}(x_i)) = \frac{1}{f(x_i)}\left(\prod_{\alpha \in i^+ } f(x_{t(\alpha)}) + \prod_{\alpha \in i^-} f(x_{s(\alpha)}) \right). \label{temp1} \end{equation} On the other hand, $f$ induces a map $\varphi :Q_0 \to Q(f(\mathbf{x}))_0$ defined by $\varphi(p_{x})=p_{f(x)}$. Hence \begin{equation} \mu_{f(x_i),f(\mathbf{x})} (f(x_i)) = \frac{1}{f(x_i)}\left( \prod_{\beta \in \varphi(i)^+ } f(\mathbf{x})_{t(\beta)} + \prod_{ \beta \in \varphi(i)^-} f(\mathbf{x})_{s(\beta)} \right) , \label{temp2} \end{equation} where we denote by $f(\mathbf{x})_j$ the cluster variable in the cluster $f(\mathbf{x})$ which corresponds to the point $j\in Q(f(\mathbf{x}))_0$. Then $f$ is a cluster automorphism if and only if the expressions (\ref{temp1}) and (\ref{temp2}) coincide for every $i$. This is the case if and only if we have one of the following two situations: either \begin{itemize} \item [(i)] \begin{equation*} \prod_{\alpha \in i^+} f(x_{t(\alpha)}) = \prod_{ \beta\in \varphi(i)^+ } f(\mathbf{x})_{t(\beta)} \qquad \mbox{and} \qquad \prod_{ \alpha\in i^-} f(x_{s(\alpha)}) = \prod_{ \beta \in \varphi(i)^- } f(\mathbf{x})_{s({\beta)}}; \end{equation*} or \item [(ii)] \begin{equation*} \prod_{ \alpha \in i^+} f(x_{t(\alpha)}) =\prod_{ \beta \in \varphi(i)^- } f(\mathbf{x})_{s({\beta)}} \qquad \mbox{and} \qquad \prod_{ \alpha\in i^-} f(x_{s(\alpha)}) = \prod_{ \beta\in \varphi(i)^+ } f(\mathbf{x})_{t(\beta)}. 
\end{equation*} \end{itemize} Since the set $f\left( \mathbf{x}\right) $ is a transcendence basis of the ambient field $\mathcal{F}$, this implies that we have either \begin{itemize} \item[(i)] $i^{+}=\varphi (i)^{+}$ and $i^{-}=\varphi (i)^{-}$, or \item[(ii)] $i^{+}=\varphi (i)^{-}$ and $i^{-}=\varphi (i)^{+}$, \end{itemize} for every $i=1,\dots,n.$ Let us now prove that if one of the two situations (i) or (ii) holds for the point $i$, then the same situation holds for every point of $Q$. Suppose that we are in the situation (i) for the point $i$, and let $j$ be a neighbour of $i$. Without loss of generality, we may assume that there is an arrow $\alpha :j\rightarrow i$ in the quiver $Q$, that is, $\alpha \in i^{-}$. This implies that there is no arrow from $i$ to $j$, because $Q$ has no cycles of length two. From the bijection between $i^{-}$ and $\varphi \left( i\right)^{-}$, we get an arrow $\varphi \left( \alpha \right) :\varphi \left( j\right) \rightarrow \varphi (i)$, and so $\varphi \left(\alpha \right) \in \varphi (j)^{+}$. Since $\alpha \in j^{+}$, this implies that there is a bijection $j^{+}\cong \varphi \left( j\right) ^{+}$. The proof is entirely similar if we choose $\alpha$ in $i^{+}$. Proceeding in this way from neighbour to neighbour, we see that the map $\varphi $ between points extends to an isomorphism of quivers $Q\cong Q(f(\mathbf{x}))$. Analogously, the situation (ii) yields an isomorphism of quivers $Q^{op}\rightarrow Q(f(\mathbf{x}))$. \end{proof} In the sequel, we shall mostly need quiver isomorphisms satisfying one of the conditions of Lemma \ref{lemma1}. Accordingly, if $f$ is a cluster automorphism of the cluster algebra $\mathcal{A}\left( \mathbf{x},Q\right) $, then $f$ induces a map (actually a bijection) between the points $\varphi :Q_{0}\rightarrow Q\left( f\left( \mathbf{x} \right) \right) _{0}$ by $ \varphi \left( p_{x}\right) =p_{f\left( x\right) }$ for every $x\in \mathbf{x}$.
If this bijection $\varphi $ extends to an isomorphism of quivers $\varphi :Q\rightarrow Q\left( f\left( \mathbf{x} \right) \right) $ then we say that the latter is \emph{induced} by $f$, and that $f$ is a \emph{direct cluster automorphism}. Similarly, if $\varphi $ extends to an isomorphism of quivers $\varphi :Q^{op}\rightarrow Q\left( f\left( \mathbf{x}\right) \right) $ then we also say that $\varphi $ is \emph{induced} by $f$ but then $f$ is called an \emph{inverse cluster automorphism}. \begin{proposition}\label{prop1} Let $f$ be a cluster automorphism. Then $f$ satisfies conditions \textup{(CA1)} and \textup{(CA2)} for \emph{every} seed. \end{proposition} \begin{proof} Any seed is obtained from the seed $(\mathbf{x},Q)$ of Definition \ref{def_main} by a finite sequence of mutations. It is therefore enough to show that if (CA1), (CA2) hold for a seed $(\mathbf{x},Q)$, then they hold for any seed $(\mathbf{x}',Q')$ that is obtained from $(\mathbf{x},Q)$ by a single mutation. Let $(\mathbf{x}',Q')$ be such a seed. Then $(\mathbf{x}',Q')=\mu_{x,\mathbf{x}}(\mathbf{x},Q)$, for some $x\in \mathbf{x}$, thus \[\mathbf{x}'=\left(\mathbf{x} \setminus\{x\}\right)\cup\{x'\},\] with the exchange relation $$x'=\frac{1}{x}\left(\prod_{p_{x}\to p_{x_i} \in Q_1} x_i +\prod_{p_{x}\leftarrow p_{x_j} \in Q_1} x_j \right).$$ It follows that $f(\mathbf{x}') =\left( f(\mathbf{x}) \setminus\{f(x)\}\right)\cup\{f(x')\}.$ By (CA1), $f(\mathbf{x})$ is a cluster and by (CA2) \[f(x')=f(\mu_{x,\mathbf{x}}(x))= \mu_{f(x),f(\mathbf{x})}(f(x)).\] Therefore $f(\mathbf{x}')=\mu_{f(x),f(\mathbf{x})}(f(\mathbf{x}))$; in particular, $f(\mathbf{x}')$ is a cluster. This shows (CA1). Let us show that condition (a) or (b) of Lemma \ref{lemma1} is satisfied for the seed $(\mathbf{x}',Q')$. We have $Q'=\mu_x Q$. 
On the other hand, \[Q(f(\mathbf{x}'))=Q(f(\mu_{x,\mathbf{x}}(\mathbf{x}))) = Q(\mu_{f(x),f(\mathbf{x})}f(\mathbf{x})) = \mu_{f(x)}(Q(f(\mathbf{x}))),\] where the second equality follows from the condition (CA2) for the seed $(\mathbf{x},Q)$. Now, one of the conditions (a) or (b) holds for the seed $(\mathbf{x},Q)$, thus, if (a) holds, then there is an isomorphism $\varphi:Q\to Q(f(\mathbf{x}))$ induced by $f$, that is, such that $\varphi(p_{x})=p_{f(x)}$, for every $x\in \mathbf{x}$, and therefore \[Q'=\mu_x Q\cong \mu_{f(x)}Q(f(\mathbf{x})) = Q(f(\mathbf{x}')),\] and this isomorphism sends every point $p_{x_i'}$ in $Q'$ to the point $p_{f(x_i')}$ in $Q(f(\mathbf{x}'))$. In other words, the condition (a) holds for the seed $(\mathbf{x}',Q')$. On the other hand, if condition (b) holds for $(\mathbf{x},Q)$, then there is an isomorphism $\varphi:Q^{\textup{op}}\to Q(f(\mathbf{x}))$ induced by $f$, that is, such that $\varphi(p_{x})=p_{f(x)}$, for every $x\in \mathbf{x}$, and therefore \[Q'^{\textup{op}}=\mu_x Q^{\textup{op}}\cong \mu_{f(x)}Q(f(\mathbf{x})) = Q(f(\mathbf{x}')),\] and this isomorphism sends any point $p_{x_i'}$ in $Q'^{\textup{op}}$ to the point $p_{f(x_i')}$ in $Q(f(\mathbf{x}'))$; thus condition (b) holds for the seed $(\mathbf{x}',Q')$. Therefore since the structure of $f(Q^\prime)$ coincides with that of $Q^\prime$ or $Q'^{\textup{op}}$, the expressions analogous to (\ref{temp1}) and (\ref{temp2}) for the cluster $\mathbf{x}'$ are equal, thus (CA2) is satisfied for the cluster $f\left( \mathbf{x}^{\prime }\right).$ \end{proof} \begin{corollary} \label{lemma_property} Let ${\cal A=A}(\mathbf{x},Q)$ and $f:\cal A\to A$ be a cluster automorphism. Then \begin{itemize} \item[(a)] If $f$ is direct, then it induces a quiver isomorphism $Q'\cong Q(f(\mathbf{x}'))$, for any seed $(\mathbf{x}',Q')$. \item[(b)] If $f$ is inverse, then it induces a quiver isomorphism $Q'^{op}\cong Q(f(\mathbf{x}'))$, for any seed $(\mathbf{x}',Q')$. 
\end{itemize} \end{corollary} \begin{proof} This follows from the proof of Proposition \ref{prop1} and (CA2). \end{proof} \begin{remark} \label{rmk_seeds} It follows from (CA1) that a cluster automorphism amounts to a replacement of a cluster by another cluster of the same algebra. Since seeds are uniquely determined by clusters, a cluster automorphism can equivalently be considered as a ``change of seed''. Condition (CA2) says that this change is compatible with mutations, so that if $(\mathbf{x},Q)$ is a seed of $\cal A$ and $x\in \mathbf{x}$, then we have the following commutative diagram: \begin{equation*} \xymatrix@R35pt@C75pt{ (\mathbf{x},Q) \ar[r]^{{f}}\ar[d]^{\mu_{(x,\mathbf{x})}} & (f(\mathbf{x}),Q(f(\mathbf{x})))\ar[d]^{ \mu_{(f(x),f(\mathbf{x}))} } \\ (\mu_{x,\mathbf{x}}(\mathbf{x}), \mu_{x,\mathbf{x}}(Q))\ar[r]^{f} & (f\mu_{x,\mathbf{x}}(\mathbf{x}), Q(f\mu_{x,\mathbf{x}}(\mathbf{x}))) .} \end{equation*} \end{remark} { We end this subsection with one more characterisation of cluster automorphisms. \begin{corollary}\label{cor2.7bis} Let $\mathcal{A}=\mathcal{A}\left( Q\right)$ and let $f:\mathcal{A}\rightarrow \mathcal{A}$ be a $\mathbb{Z}$-algebra automorphism. Then $f$ is a cluster automorphism if and only if $f$ maps each cluster to a cluster. \end{corollary} \begin{proof} Assume that for every cluster $\mathbf{x}$, $f(\mathbf{x})$ is a cluster. We must prove that for every $x\in \mathbf{x}$ we have a commutative diagram as in the remark above. Let $x^{\prime }$ be the variable obtained from $x$ by mutation; then $\mathbf{x}^{\prime }=(\mathbf{x}\setminus \left\{ x\right\} )\cup \left\{ x^{\prime }\right\} $ is a cluster. By our hypothesis, $f(\mathbf{x}^{\prime })=(f(\mathbf{x}) \setminus\left\{ f(x)\right\} )\cup \left\{ f(x^{\prime })\right\} $ is a cluster as well. On the other hand, mutating the cluster $f(\mathbf{x})$ in $f(x)$ yields the cluster $(f(\mathbf{x})\setminus\left\{ f(x)\right\} )\cup \left\{ y^{\prime }\right\}$.
These two clusters are obtained from $f(\mathbf{x})$ by mutating in the same variable $f(x)$; therefore $y^{\prime }=f(x^{\prime }).$ \end{proof} } As a consequence, we see that the notion of direct cluster automorphism coincides with that of strong automorphism of a cluster algebra \cite{FZ2}, that is, an isomorphism of $\mathbb{Z}$-algebras that maps every seed to an isomorphic seed. \subsection{The group of cluster automorphisms} Examples and construction techniques for cluster automorphisms are given below. Clearly, the identity on $\cal A$ is a cluster automorphism. In fact, the following lemma holds. \begin{lemma} The set ${\rm Aut} \,\cal A$ of all cluster automorphisms of $\cal A$ is a group under composition. \end{lemma} \begin{proof} Let $f,g \in {\rm Aut} \,\cal A$. By Remark \ref{rmk_seeds}, a cluster automorphism amounts to replacing the initial seed $\left( \mathbf{x},Q\right) $ by another seed whose quiver is isomorphic to either $Q$ or $Q^{op}$; therefore $f^{-1} \in {\rm Aut} \,\cal A$ and $gf \in {\rm Aut} \,\cal A$. \end{proof} \begin{lemma} \label{lemma_index} The set ${\rm Aut}^+ {\cal A}$ of all direct cluster automorphisms of $\cal A$ is a normal subgroup of ${\rm Aut} \,\cal A$ of index at most two. \end{lemma} \begin{proof} Clearly, the identity of $\cal A$ is a direct automorphism. Also, if $f,g \in {\rm Aut}^+ {\cal A}$, then $fg^{-1} \in {\rm Aut}^+ {\cal A}$, therefore ${\rm Aut}^+ {\cal A}$ is a subgroup of ${\rm Aut} \,\cal A$. The normality follows from the fact that if $f\in{\rm Aut}^+ {\cal A}$ and $g\in{\rm Aut} \,\cal A$, then $gfg^{-1}$ induces an automorphism of $Q$ even if $g$ induces an anti-isomorphism. In order to prove the statement about the index, let us consider a map $\phi: {\rm Aut} \,\cal A \to \mathbb{Z}_2$ defined by \begin{equation}\label{2.3} \phi(f) = \left\{\begin{array}{l} \bar{0}, \;\mbox{if} \;\; f\in {\rm Aut}^+ {\cal A} \\\bar{1}, \;\mbox{if} \;\; f\notin {\rm Aut}^+ {\cal A} \end{array}\right. .
\end{equation} The map $\phi$ is a group homomorphism. Indeed, if $f,g\in{\rm Aut} \,\cal A$, then $\phi(fg) = \bar{0}$ if and only if $fg \in {\rm Aut}^+ {\cal A}$, that is, if and only if $f$ and $g$ are both direct or both inverse. The latter condition may be written as $\phi(f)=\phi(g)$, which holds if and only if $\phi(f)+\phi(g)=\bar{0}.$ Thus $\phi(fg) = \phi(f)+\phi(g)$, and $\phi$ is a group homomorphism. Since $\textup{Ker}\, \phi={\rm Aut}^+ {\cal A}$ and ${\rm Im} \,\phi \subseteq \mathbb{Z}_2,$ the lemma is proved. \end{proof} \begin{example} Here is an example of an inverse cluster automorphism. Let $Q$ be the following quiver of type $\mathbb A_3$: \begin{equation*} \xymatrix{p_{x_1}&p_{x_2}\ar[l]\ar[r]&p_{x_3}} \end{equation*} and $\mathbf{x}=\{x_1,x_2,x_3\}$. The cluster variables computed inside the cluster category ${\cal C}_Q$ (see \cite{BMRRT} or Section \ref{sect_acyclic} below) are as follows: \begin{equation*} \scalebox{0.95}{$\begin{array}{ccccccccccccccc} & & x_3 & & & & \frac{1+x_2+x_1x_3}{x_2x_3} & & & & \frac{1+x_2}{x_1}& & & &x_1\\ & \nearrow & & \searrow & & \nearrow & & \searrow& & \nearrow & & \searrow& & \nearrow & \\ x_2& & & & \frac{1+x_1x_3}{x_2} &&&&\frac{x_2^2 + 2x_2+1+x_1x_3}{x_1x_2x_3}&&&&x_2 \\ & \searrow & & \nearrow & & \searrow & & \nearrow& & \searrow & & \nearrow& & \searrow & \\ & &x_1& &&&\frac{1+x_2+x_1x_3}{x_1x_2}&&&&\frac{1+x_2}{x_3}&&&&x_3 \end{array}$} \end{equation*} Define a map $f:\cal A\to A$ to be induced by the mutation $\mu_{x_2}$, so that on the initial cluster we have \begin{equation*} f(x_1) = x_1, \qquad f(x_2) = \frac{1+x_1x_3}{x_2}, \qquad f(x_3) = x_3. \end{equation*} Then $f$ extends to an algebra homomorphism. 
A straightforward computation gives the images under $f$ of the remaining cluster variables of the algebra: \begin{eqnarray*} && f \left( \frac{1+x_1x_3}{x_2} \right) = x_2, \qquad \qquad \qquad f \left( \frac{1+x_1x_3+x_2}{x_2x_3} \right) = \frac{x_2+1}{x_3}, \\ && f \left( \frac{1+x_1x_3+x_2}{x_1x_2} \right) = \frac{x_2+1}{x_1}, \qquad f \left( \frac{x_2^2 + 2x_2+1+x_1x_3}{x_1x_2x_3} \right) = \frac{x_2^2 + 2x_2+1+x_1x_3}{x_1x_2x_3}, \\ && f \left( \frac{1+x_2}{x_1} \right) = \frac{x_2+1+x_1x_3}{x_1x_2}, \qquad f \left( \frac{1+x_2}{x_3} \right) = \frac{x_2+1+x_1x_3}{x_2x_3} . \end{eqnarray*} Thus $f$ is a cluster automorphism sending the seed $$(\{x_1,x_2,x_3\}, \xymatrix{p_{x_1}&p_{x_2}\ar[l]\ar[r]&p_{x_3}} )$$ to the seed $$ (\{f(x_1),f(x_2),f(x_3)\},\xymatrix{p_{f(x_1)}\ar[r]&p_{f(x_2)}&p_{f(x_3)}\ar[l]})$$ hence $f$ induces an isomorphism of quivers $Q(\mathbf{x})^{op}\cong Q(f(\mathbf{x}))$. \end{example} Two quivers $Q$ and $Q'$ are called \emph{mutation equivalent} if there exists a sequence of mutations transforming $Q$ to $Q'$. \begin{theorem}\label{thm ref} Let $\mathcal{ A}=\mathcal{A}(\mathbf{x},Q)$. \begin{itemize} \item[(a)] {If $Q$ and $Q^{op}$ are mutation equivalent then the index of $\textup{Aut}^+\mathcal{A}$ in ${\rm Aut} \,\cal A$ is two.} \item[(b)] If $Q$ and $Q^{op}$ are not mutation equivalent then ${\rm Aut} \,\cal A={\rm Aut}^+ {\cal A}$. \end{itemize} \end{theorem} \begin{proof}{ If $Q$ and $Q^{op}$ are mutation equivalent then there exists a sequence of mutations $\mu$ such that $\mu(\mathbf{x},Q(\mathbf{x}))=(\mathbf{x}',Q(\mathbf{x}'))$ together with two isomorphisms of quivers $\varphi:Q(\mathbf{x})\stackrel{\cong }{\to}Q$ and $\varphi':Q^{op}\stackrel{\cong }{\to}Q(\mathbf{x}')$. 
{Observe that there is no reason for the isomorphisms $\varphi$ and $\varphi'$ to be compatible with the canonical bijection between the points $Q_0\cong Q(\mathbf{x}')_0$.} Define a map $f:\mathbf{x}\to\mathbf{x}'$ as the following composition of the bijections on points \[Q(\mathbf{x})_0\stackrel{\varphi}{\longrightarrow} Q_0 \stackrel{=}{\longrightarrow} (Q^{op})_0\stackrel{\varphi'}{\longrightarrow} Q(\mathbf{x}')_0.\] Since $\mathbf{x}$ and $\mathbf{x}'$ are transcendence bases of the ambient field, $f$ extends to a $\mathbb{Z}$-algebra automorphism of $\mathcal{A}$; moreover, $\varphi' :Q^{op}\stackrel{\cong }{\to}Q(\mathbf{x}')$ is an isomorphism of quivers that satisfies $\varphi'(p_x)=p_{f(x)}$, and it follows from Lemma \ref{lemma1} that $f\in {\rm Aut} \,\cal A\setminus{\rm Aut}^+ {\cal A}$. This shows (a).} In order to show (b), suppose that $Q$ and $Q^{op}$ are not mutation equivalent, and that there exists $f\in{\rm Aut} \,\cal A\setminus{\rm Aut}^+ {\cal A}$. Then there exists a seed $(\mathbf{x},Q(\mathbf{x}))$ in $\mathcal{A}$ with $Q(\mathbf{x})\cong Q$ whose image $(f(\mathbf{x}),Q(f(\mathbf{x})))$ under $f$ is a seed in $\mathcal{A}$ whose quiver is isomorphic to $Q^{op}$. Thus $Q$ and $Q^{op}$ are mutation equivalent, a contradiction. \end{proof} \begin{corollary}\label{cor two new} {If there exists $\sigma\in{\rm Aut} \,\cal A\setminus\textup{Aut}^+\mathcal{A}$ of order two, then ${\rm Aut} \,\cal A=\textup{Aut}^+\mathcal{A}\rtimes \mathbb{Z}_2$.} \end{corollary} \begin{proof} {If ${\rm Aut} \,\cal A\neq {\rm Aut}^+ {\cal A} $ then there is an exact sequence of groups } \[ 1 \to {\rm Aut}^+ {\cal A} \to {\rm Aut} \,\cal A \stackrel{\phi}{\to} \mathbb{Z}_2\to1\] {where $\phi$ is defined in equation (\ref{2.3}). We have ${\rm Aut} \,\cal A\cong {\rm Aut}^+ {\cal A} \rtimes\mathbb{Z}_2$ if and only if this sequence splits, and this is the case if and only if there exists an inverse automorphism $\sigma$ of order 2.
} \end{proof} \begin{example}\label{ex torus} Let $Q$ be the quiver $$\xymatrix@R20pt@C25pt{1\ar@<2pt>[rd]\ar@<-2pt>[rd] &&2 \ar@<2pt>[ll]\ar@<-2pt>[ll]\\ &3\ar@<2pt>[ru]\ar@<-2pt>[ru] }$$ and $\mathcal{A}=\mathcal{A}(\{x_1,x_2,x_3\},Q)$ its cluster algebra. Then the mutation of $Q$ in $1$ is the quiver $$ \xymatrix@R20pt@C25pt{1\ar@<2pt>[rr]\ar@<-2pt>[rr] &&2 \ar@<2pt>[ld]\ar@<-2pt>[ld]\\ &3\ar@<2pt>[lu]\ar@<-2pt>[lu] },$$ and thus it induces an inverse cluster automorphism $f$ given by $$f(x_1)=\frac{x_2^2+x_3^2}{x_1}, \ f(x_2)=x_2 \textup{ and } f(x_3)=x_3.$$ Mutating once more, this time in $x_2$, yields back the quiver $Q$, and thus induces a direct cluster automorphism $g$ given by $$g(x_1)=\frac{x_2^2+x_3^2}{x_1}, \ g(x_2)=\frac{1}{x_2} \left(\left(\frac{x_2^2+x_3^2}{x_1}\right)^2+x_3^2\right)\textup{ and } g(x_3)=x_3 .$$ Note that in this particular example, any sequence of mutations will either produce the quiver $Q$ (if the number of mutations is even) or its opposite (if the number of mutations is odd), and thus, in this example, any sequence of mutations induces a cluster automorphism. \end{example} \begin{example}\label{ex ref} Let $\mathcal{A}=\mathcal{A}(\mathbf{x},Q)$, where $Q$ is the quiver \[\xymatrix{&2\ar@<3pt>[rd]\ar@<0pt>[rd]\\1\ar@<3pt>[rr]\ar@<-3pt>[rr]\ar@<0pt>[rr] \ar[ur] &&3\ .}\] Then $Q$ is not mutation equivalent to $Q^{op}$ (and therefore $\textup{Aut}^+\mathcal{A}={\rm Aut} \,\cal A$). This can be seen using the associated cluster category, see \cite{BMRRT} or Section 3 below. Indeed, assume that $Q$ is mutation equivalent to $Q^{op}$. Then there exists a local slice $\Sigma$ (in the sense of \cite{ABS2}) in the cluster category ${\cal C}_Q$ of $Q$ whose quiver is isomorphic to $Q$. Since $Q$ and $Q^{op}$ are acyclic, the slice $\Sigma$ lies in the transjective component $\Gamma_{tr}$ of the Auslander-Reiten quiver of ${\cal C}_Q$, and this transjective component is of the form $\mathbb{Z} Q^{op}$.
In particular, the point corresponding to $3$ in $\Sigma$ is the target of five arrows in $\Gamma_{tr}$. The point $1$ in $\Sigma$ is the source of three of the arrows of target $3$, and the point $2$ is the source of the remaining two arrows. Since $\Gamma_{tr}\cong \mathbb{Z} Q^{op}$, this implies the existence of an arrow from $2$ to $1$, and then the quiver of $\Sigma$ is not isomorphic to $Q$, a contradiction. \end{example} \subsection{Cluster automorphisms induced by quiver automorphisms} We now show how any automorphism of the quiver $Q$ induces a direct cluster automorphism of ${\cal A} = {\cal A}(\mathbf{x},Q)$. Let $\sigma \in {\rm Aut}\, Q$. Define a map $f_\sigma: \mathbf{x} \to \mathbf{x}$ by $f_\sigma(x)= x'$, for $x \in \mathbf{x}$, where $x'\in\mathbf{x} $ is the unique cluster variable such that $\sigma(p_x)=p_{x'}$. Then $f_\sigma$ permutes the cluster variables in $\mathbf{x}$ and clearly extends to a unique automorphism $f_\sigma: \cal F \to F$ of the ambient field. We now show that $f_\sigma$ is a direct cluster automorphism of $\cal A.$ \begin{proposition}\label{prop 2.13} \label{prop_kernel} The map $F:\sigma \mapsto f_\sigma$ is a group homomorphism from ${\rm Aut}\, Q$ to ${\rm Aut}^+ {\cal A}$ whose kernel is given by the stabiliser ${\rm Stab} \, Q_0$ of the points of $Q.$ \end{proposition} \begin{proof} In order to show that $f_\sigma$ is a cluster automorphism, we must prove that conditions (CA1) and (CA2) are satisfied for the initial cluster $\mathbf{x}$. Since $f_\sigma(\mathbf{x}) = \mathbf{x}$, the first condition obviously holds. Let now $x \in \mathbf{x}$ and consider the mutation $\mu_x = \mu_{x,\mathbf{x}}$. Since $\sigma \in {\rm Aut}\, Q$, we have $\sigma Q = Q$ and hence $\mu_{f_\sigma(x),\mathbf{x}}$ is a mutation of the seed $(\mathbf{x}, Q)$.
We want to show (CA2), that is \[ f_\sigma(\mu_{x,\mathbf{x}}(x))=\mu_{f_\sigma(x),\mathbf{x}}(f_\sigma(x)).\] Using the exchange relations, we have \begin{equation} \label{eq25} f_\sigma(\mu_{x,\mathbf{x}}(x)) = \frac{1}{f_{\sigma}(x)} \left(\,\prod_{\alpha\in p_x^+}f_\sigma(x_{t(\alpha)}) + \prod_{\alpha\in p_x^-}f_\sigma(x_{s(\alpha)})\right) , \end{equation} while \begin{equation} \label{eq26} \mu_{f_\sigma(x),\mathbf{x}}(f_\sigma(x)) = \frac{1}{f_\sigma(x)} \left(\prod_{\beta\in \sigma(p_x)^+} x_{t(\beta)} + \prod_{\beta\in \sigma(p_x)^-}x_{s(\beta)}\right) . \end{equation} Since $\sigma $ is an automorphism of $Q$, we have $\sigma(t(\alpha))=t(\sigma(\alpha))$, $\sigma(s(\alpha))=s(\sigma(\alpha))$, and $\sigma(p_x^+)=\sigma(p_x)^+$, $\sigma(p_x^-)=\sigma(p_x)^-$. Therefore \begin{eqnarray*} \prod_{\alpha\in p_x^+}f_\sigma(x_{t(\alpha)}) = \prod_{\alpha\in p_x^+}x_{t(\sigma(\alpha))} =\prod_{\beta\in \sigma(p_x)^+} x_{t(\beta)} && \textup{and} \\ \prod_{\alpha\in p_x^-}f_\sigma(x_{s(\alpha)}) = \prod_{\alpha\in p_x^-}x_{s(\sigma(\alpha)) }=\prod_{\beta\in \sigma(p_x)^-} x_{s(\beta)}, \end{eqnarray*} which shows that the right hand sides of equations (\ref{eq25}) and (\ref{eq26}) are equal, and hence (CA2). This completes the proof that $f_\sigma$ is a cluster automorphism. It is direct because $\sigma Q \cong Q$. This shows that the map $F$ is well-defined. It is easy to see that $F$ is a group homomorphism with kernel ${\rm Stab} \, Q_0.$ \end{proof} \begin{example} The automorphism of the Kronecker quiver \begin{equation*} \xymatrix{1&2\ar@<2pt>[l]\ar@<-2pt>[l]} \end{equation*} which fixes the points and interchanges the arrows lies in the kernel of $F$. \end{example} We finally observe that an anti-automorphism of the quiver induces in the same way an inverse cluster automorphism. Let $\mathcal{A}\left( \mathbf{x},Q\right) $ be a cluster algebra and $\sigma $ be an anti-automorphism of $ Q$. 
We define $f_{\sigma }:\mathbf{x}\rightarrow \mathbf{x}$ by setting $ f_{\sigma }\left( x\right) =x^{\prime }$, where $x^{\prime }$ is the unique cluster variable such that $\sigma \left( p_{x}\right) =p_{x^{\prime }}.$ Then $f_{\sigma }$ is clearly an automorphism of the ambient field $\mathcal{ F}$. \begin{proposition}\label{prop anti} With the above notation, the map $ f_{\sigma }$ is an inverse cluster automorphism of $\mathcal{A} \left( \mathbf{x},Q\right) $. \end{proposition} \begin{proof} The proof is entirely similar to that of Proposition \ref{prop 2.13} and will be omitted. \end{proof} \begin{corollary}\label{cor anti new} {Let $\mathcal{A}$ be a cluster algebra with a seed $(\mathbf{x},Q)$ such that $Q$ admits an anti-automorphism. Then ${\rm Aut} \,\cal A={\rm Aut}^+ {\cal A}\rtimes\mathbb{Z}_2$.} \end{corollary} \begin{proof} {In this situation, Proposition \ref{prop anti} yields that there exists $\sigma\in{\rm Aut} \,\cal A\setminus\textup{Aut}^+\mathcal{A}$ of order two, and the result follows from Corollary \ref{cor two new}.} \end{proof} \begin{example}\label{ex anti atilde} {The cluster algebras of euclidean type $\tilde{\mathbb{A}}$ admit an inverse cluster automorphism. Indeed, for any cluster algebra of type $\tilde{\mathbb{A}}$, there exist integers $p,q$ such that the cluster algebra has a seed whose quiver is of the following form.} \[\xymatrix{&2\ar[r]&3\ar[r]&\cdots\ar[r]&p\ar[rd]\\ 1\ar[ur]\ar[dr]&&&&&p+q\\ &p+1\ar[r]&p+2\ar[r]&\cdots\ar[r]&p+q-1\ar[ur] } \] { This quiver has an anti-automorphism given on the points by the permutation that exchanges 1 with $p+q$, $\ell $ with $p+2-\ell$ for $\ell=2,3,\ldots,p$, and $p+m$ with $p+q-m$ for $m=1,2,\ldots,q-1$. Therefore Proposition \ref{prop anti} implies that the cluster algebra has an inverse cluster automorphism. } \end{example} \section{The acyclic case} \label{sect_acyclic} \subsection{The cluster category} In this section, we assume that $Q$ is an acyclic quiver.
In this case, the combinatorics of cluster variables are encoded in the cluster category. Let $k$ be an algebraically closed field, $kQ$ the path algebra of $Q$ and ${\rm mod} \, kQ$ the category of finitely generated right $kQ$-modules. We denote by ${\rm ind}\, kQ$ a full subcategory of ${\rm mod} \, kQ$ consisting of exactly one object from each isomorphism class of indecomposable $kQ$-modules. For $x \in Q_0$, we denote by $P_x$ the corresponding indecomposable projective $kQ$-module. For properties of ${\rm mod}\, kQ$ and its Auslander-Reiten quiver $\Gamma ({\rm mod}\, kQ)$, we refer the reader to \cite{ARS,ASS}. We denote by ${\cal D}^b({\rm mod}\,kQ)$ the bounded derived category over ${\rm mod} \, kQ$. This is a triangulated Krull-Schmidt category having Serre duality and hence almost split triangles. Since $kQ$ is hereditary, the Auslander-Reiten quiver $\Gamma({\cal D}^b({\rm mod}\,kQ))$ of ${\cal D}^b({\rm mod}\,kQ)$ is well-understood \cite{H}. The cluster category ${\cal C}_Q$ is defined to be the orbit category of ${\cal D}^b({\rm mod}\, kQ)$ under the action of the automorphism $\tau^{-1}[1]$, where $\tau$ is the Auslander-Reiten translation and $[1]$ is the shift of ${\cal D}^b({\rm mod}\, kQ)$, see \cite{BMRRT}. Then ${\cal C}_Q$ is also a triangulated Krull-Schmidt category having almost split triangles, and the projection functor ${\cal D}^b({\rm mod} \,kQ) \to {\cal C}_Q$ is a functor of triangulated categories commuting with the Auslander-Reiten translation \cite{K}. Moreover, ${\cal C}_Q$ is a $2$-Calabi-Yau category \cite{BMRRT}. Let ${\rm ind} \,{\cal C}_Q$ denote a full subcategory of ${\cal C}_Q$ consisting of exactly one object from each isomorphism class of indecomposable objects in ${\cal C}_Q$; then ${\rm ind}\, {\cal C}_Q$ can be identified with the disjoint union of ${\rm ind} \,kQ$ and $kQ[1] = \{P_x[1] \mid x\in Q_0\}$, the shifts of the indecomposable projective $kQ$-modules. We always use this identification in the sequel.
The Auslander-Reiten quiver $\Gamma({\cal C}_Q)$ of ${\cal C}_Q$ is the quotient of $\Gamma({\cal D}^b({\rm mod}\,kQ))$ under the action of the quiver automorphism $\tau^{-1}[1]$. This Auslander-Reiten quiver always has a unique component containing all the objects of $kQ[1]$. This is the \emph{transjective} component of $\Gamma({\cal C}_Q)$ and is denoted by $\Gamma_{tr}$. If $Q$ is a Dynkin quiver, then $\Gamma({\cal C}_Q) \cong \Gamma_{tr}. $ Otherwise, $\Gamma_{tr}$ is isomorphic to the repetition quiver ${\mathbb Z}Q$ of $Q$ (see \cite{ASS}), and there are infinitely many so-called \emph{regular} components which are either stable tubes (if $Q$ is euclidean) or of type $\mathbb{ZA}_\infty$ (if $Q$ is wild). Let $n = |Q_0|$. There exists a map \begin{equation*} X_? : ({\cal C}_Q)_0 \to \mathbb Z [x_1^{\pm 1}, \dots, x_n^{\pm 1}] \end{equation*} called the \emph{canonical cluster character}, or the \emph{Caldero-Chapoton map}. The map $X_?$ induces a bijection between the objects $M$ in ${\rm ind}\, {\cal C}_Q$ which have no self-extensions and the cluster variables $X_M$, see \cite{CK06}. Under this bijection, the clusters correspond to the so-called {\it tilting} objects (also known as cluster-tilting objects) in ${\cal C}_Q$. In practice, the map $X_?$ is difficult to compute explicitly. An easier method for computing the cluster variables is via the frieze functions \cite{AsRS,AD,ADSS}. \subsection{Cluster automorphisms in the acyclic case} In this section, we prove that if $Q$ is acyclic, then the cluster automorphisms of ${\cal A=A}(\mathbf{x}, Q)$ are entirely determined by the quiver automorphisms of the transjective component $\Gamma_{tr}$. \begin{lemma}\label{lemma 3.1} Let $f$ be a cluster automorphism of ${\cal A}={\cal A}(\mathbf{x},Q)$, where $Q$ is acyclic. \begin{itemize} \item[(a)] If $f$ is direct, then it induces a triangle equivalence $f_{{\cal D}}: {\cal D}^b({\rm mod} \; kQ) \to {\cal D}^b({\rm mod} \; kQ) $.
\item[(b)] If $f$ is inverse, then it induces a triangle equivalence $f_{{\cal D}}: {\cal D}^b({\rm mod} \; kQ) \to {\cal D}^b({\rm mod} \; kQ^{op}) $. \end{itemize} \end{lemma} \begin{proof} Let $X_?$ denote as before the canonical cluster character. For each $x$ in $\mathbf{x}$, there exists a unique indecomposable object $M_{x}$ in the cluster category $\mathcal{C}_{Q}$ such that $f(x)=X_{M_{x}}$. Because $\mathbf{x}$ is a cluster, $M=\oplus _{x\in \mathbf{x}}M_{x}$ is a tilting object in $\mathcal{C}_{Q}$. { If $f$ is direct then $\textup{End}\, M\cong kQ$.} Therefore, the set $\left\{ M_{x}\mid x\in \mathbf{x}\right\} $ forms a local slice in $ \mathcal{C}_{Q}$. Due to \cite{ABS2}, there exists a slice $\Sigma $ in a transjective component of ${\cal D}^{b}\left( \textup{mod}\,kQ\right) $, isomorphic to $Q$, such that, for each $x\in \mathbf{x}$, the object $M_{x}$ lifts to $\widetilde{M_{x}}$ in $\Sigma$. Because $\widetilde{M}=\oplus _{x\in \mathbf{x}}\widetilde{M_{x}}$ is a slice complex in ${\cal D}^{b}\left( \textup{mod}\, kQ\right)$, it is also a tilting complex. Therefore the triangle functor $ f_{\mathcal{D}}=-\otimes _{kQ}^{\mathbb{L}}\widetilde{M}[-1]:\mathcal{D}^{b}\left( \textup{mod}\,kQ\right) \rightarrow \mathcal{D}^{b}\left( \textup{mod}\,kQ\right) $ is a triangle equivalence. {This shows (a). If $f$ is inverse then $\textup{End}\, M\cong kQ^{op}$, and again the set $\left\{ M_{x}\mid x\in \mathbf{x}\right\} $ forms a local slice in $ \mathcal{C}_{Q}$. Again due to \cite{ABS2}, there exists a slice $\Sigma $ in a transjective component of ${\cal D}^{b}\left( \textup{mod}\,kQ\right) $, isomorphic to $Q^{op}$, such that, for each $x\in \mathbf{x}$, the object $M_{x}$ lifts to $\widetilde{M_{x}}$ in $\Sigma$. Because $\widetilde{M}=\oplus _{x\in \mathbf{x}}\widetilde{M_{x}}$ is a slice complex in ${\cal D}^{b}\left( \textup{mod}\, kQ\right)$, it is also a tilting complex.
Again the triangle functor $ f_{\mathcal{D}}=-\otimes _{kQ}^{\mathbb{L}}\widetilde{M}[-1]:\mathcal{D}^{b}\left( \textup{mod}\,kQ\right) \rightarrow \mathcal{D}^{b}\left( \textup{mod}\,kQ^{op}\right) $ is a triangle equivalence. } \end{proof} Recall that a morphism of translation quivers is a morphism of quivers which commutes with the translation. \begin{corollary} Let $f$ be a cluster automorphism of ${\cal A=A}(\mathbf{x},Q)$, where $Q$ is acyclic. \begin{itemize} \item[(a)] If $f$ is direct, then it induces a quiver automorphism of the transjective component $\Gamma_{tr}$ of $ \Gamma({\cal C}_Q)$. \item[(b)] If $f$ is inverse, then it induces a quiver anti-automorphism of the transjective component $\Gamma_{tr}$ of $ \Gamma({\cal C}_Q)$. \end{itemize} \end{corollary} \begin{proof} We only prove (a), because the proof of (b) is similar. Let $f$ be direct. As seen in Lemma \ref{lemma 3.1}, it induces a triangle equivalence $f_{\mathcal{D}}:\mathcal{D}^{b}\left( \textup{mod}\,kQ\right) \rightarrow\mathcal{D}^{b}\left( \textup{mod}\, kQ\right) $ mapping the slice $\mathcal{P}=\left\{ P_{x}\mid x\in \mathbf{x}\right\} $ consisting of the indecomposable projective $kQ$-modules to the isomorphic slice $\Sigma =\left\{ \widetilde{M_{x}}\mid x\in \mathbf{x}\right\}$, and both slices may be assumed to lie in the same transjective component $\Gamma $ of $\Gamma \left( \mathcal{D}^{b}\left( \textup{mod}\,kQ\right) \right)$. Clearly, $f_{\mathcal{D}}\left( P_{x}\right) =\widetilde{M_{x}}$, and moreover $f_{\mathcal{D}}$ induces a quiver isomorphism $\mathcal{P}\cong \Sigma ,$ which extends uniquely to a quiver automorphism of $\Gamma$. {On the other hand, $f_{\mathcal{D}}$ is a triangle equivalence, hence commutes with the shift of $ \mathcal{D}^{b}\left( \textup{mod}\,kQ\right)$ and the Auslander-Reiten translation of $\Gamma$. Therefore it induces an automorphism of the translation quiver $\Gamma_{tr}$.
} \end{proof} \begin{remark}\label{rem 3.3} As a direct consequence of the above proofs, if $f$ is nontrivial, then it acts nontrivially on the transjective component. \end{remark} \begin{example} {{To illustrate Remark \ref{rem 3.3} we give an example of a cluster automorphism defined by permuting two regular components of the cluster category and show that this automorphism induces a nontrivial action on the transjective component. } { Let $(\mathbf{x},Q)$ be the seed with cluster $\mathbf{x}=\{x_1,x_2,x_3,x_4\}$ and quiver $$\xymatrix@R=10pt{&2\ar[ld]\\1\ar@<2pt>[rr]\ar@<-2pt>[rr] &&4\ar[ul]\ar[dl]\ . \\ &3\ar[lu]}$$ This is a seed of type $\tilde {\mathbb{A}}_{2,2}$: indeed, mutating the seed in $x_2$ and then in $x_3$ one gets a new seed with cluster $\mathbf{x}'=\{x_1,\frac{x_1+x_4}{x_2},\frac{x_1+x_4}{x_3},x_4\}$ and quiver $$\xymatrix@R=10pt{&2\ar[rd]\\1\ar[ru]\ar[rd] &&4\ . \\ &3\ar[ru]}$$ } {In the corresponding cluster category, each of the objects associated with $x_2$ and $x_3$ sits in the mouth of a tube of rank 2. Therefore permuting the two tubes induces a cluster automorphism given on the cluster $\mathbf{x}$ by the permutation of the two variables $x_2 $ and $x_3$. On the cluster $\mathbf{x}'$ this automorphism acts by permutation of the variables $\frac{x_1+x_4}{x_2}$ and $\frac{x_1+x_4}{x_3}$. Therefore the induced action on the transjective component is nontrivial.}} \end{example} We now relate cluster automorphisms to automorphisms of the original quiver. We call a mutation an \emph{APR-mutation} if it is applied to a source or to a sink. The letters APR stand for Auslander, Platzeck and Reiten and evoke the similarity between such mutations and the so-called APR-tilts \cite{APR}. Equivalently, one may think of APR-mutations as a generalisation of the reflection functors of Bernstein, Gelfand and Ponomarev \cite{BGP}. We also note that, given an APR-mutation, the new cluster variable is obtained from the old ones by using the frieze function. 
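The explicit mutations carried out in the example above (type $\tilde{\mathbb{A}}_{2,2}$) can be checked mechanically. The following sketch is our own illustration, not part of the original text: it encodes the quiver by its skew-symmetric exchange matrix, implements Fomin-Zelevinsky matrix mutation together with the exchange relation, and specialises the initial cluster variables to rational numbers.

```python
from fractions import Fraction

def mutate_matrix(B, k):
    """Fomin-Zelevinsky mutation of the skew-symmetric matrix B at index k."""
    n = len(B)
    return [[-B[i][j] if k in (i, j)
             else B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
             for j in range(n)] for i in range(n)]

def mutate_seed(B, x, k):
    """Replace x[k] via the exchange relation at vertex k, then mutate B."""
    plus = minus = Fraction(1)
    for i in range(len(B)):
        if B[i][k] > 0:          # arrows i -> k contribute to one monomial
            plus *= x[i] ** B[i][k]
        elif B[i][k] < 0:        # arrows k -> i contribute to the other
            minus *= x[i] ** (-B[i][k])
    y = list(x)
    y[k] = (plus + minus) / x[k]
    return mutate_matrix(B, k), y

# exchange matrix of the quiver with arrows 2->1, 3->1, 1=>4 (double), 4->2, 4->3
B = [[0, -1, -1, 2],
     [1, 0, 0, -1],
     [1, 0, 0, -1],
     [-2, 1, 1, 0]]
x = [Fraction(v) for v in (2, 3, 5, 7)]   # a rational specialisation of x1..x4

B1, y = mutate_seed(B, x, 1)              # mutate at x2 (index 1)
B2, z = mutate_seed(B1, y, 2)             # then at x3 (index 2)

# the resulting cluster is {x1, (x1+x4)/x2, (x1+x4)/x3, x4}
assert z[1] == (x[0] + x[3]) / x[1]
assert z[2] == (x[0] + x[3]) / x[2]
assert (z[0], z[3]) == (x[0], x[3])
```

On any rational specialisation the two mutations return the cluster $\{x_1,(x_1+x_4)/x_2,(x_1+x_4)/x_3,x_4\}$, in agreement with the seed computed above.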
For our purposes, it is useful to understand how an APR-mutation translates into terms of the cluster category. Let $\mathcal{A}\left( \mathbf{x},Q\right) $ be a cluster algebra, with $Q$ acyclic, and $\mu _{x}$ be an APR-mutation corresponding to a source (say) in $Q$. Then $\mu _{x}$ maps $\mathcal{A}\left( \mathbf{x},Q\right) $ to $\mathcal{A}\left( \mu _{x}\mathbf{x},\mu _{x}Q\right)$. Identifying $Q$ with the full subquiver $kQ[1]=\left\{ P_{x}[1]\mid x\in \mathbf{x}\right\} $ of the transjective component $\Gamma _{tr}$ of $\Gamma \left( \mathcal{C}_{Q}\right) $, the application of $\mu _{x}$ to $kQ[1]$ clearly amounts to replacing $kQ[1]$ by the new tilting object $(kQ[1]\setminus \left\{ P_{x}[1]\right\} )\cup \left\{ P_{x}\right\} $ whose quiver is also a slice in $\Gamma _{tr}$, though not isomorphic to $Q$. \begin{lemma} \label{lemma_APR} Let $f$ be a cluster automorphism of ${\cal A} (\mathbf{x},Q)$, where $Q$ is acyclic. Then there exists a sequence $\mu$ of APR mutations such that $\mu(\mathbf{x})=f(\mathbf{x})$ {as sets}. \end{lemma} \begin{proof} Let $\mathbf{x}=\{x_1,\ldots,x_n\}$. If $f\in {\rm Aut} \,\cal A$, then $(f(\mathbf{x}), Q(f(\mathbf{x})))$ is a seed such that $Q(f(\mathbf{x}))$ is isomorphic to $Q$ or $Q^{\textup{op}}$. In particular, $Q(f(\mathbf{x}))$ is acyclic. Therefore, the cluster $f(\mathbf{x})$ corresponds to a tilting object $T=\oplus_{i=1}^n T_i$ in ${\cal C}_Q$ such that the indecomposable summand $T_i$ corresponds to the variable $f(x_i)$ for all $i$ such that $1\le i\le n$. Then $Q(f(\mathbf{x}))$ is the ordinary quiver of the cluster-tilted algebra ${\rm End_{{\cal C}_Q}}T$, and since $Q(f(\mathbf{x}))$ is acyclic, it follows that ${\rm End_{{\cal C}_Q}}T$ is hereditary. Therefore, $T$ is a local slice in the transjective component $\Gamma_{tr}$ of $\Gamma({\cal C}_Q)$, see \cite[Corollary 20]{ABS2}, and hence there exists a sequence $\mu$ of APR-mutations such that $\mu(\mathbf{x})=f(\mathbf{x})$ {as sets.
Observe that if one lifts the slice $(kQ[1]\setminus \{P_x [1]\}) \cup \{P_x \}$ to a slice $\Sigma$, say, in the derived category, then the endomorphism algebra of $\Sigma$ is obtained from $kQ$ by an APR-tilt (see \cite{APR}).} \end{proof} \begin{remark} Lemma \ref{lemma_APR} states that the action of a cluster automorphism on a cluster is given by a composition of APR-mutations. However, it is not true in general that the automorphism $f$ itself is given by a sequence of APR-mutations, as we show in the following example. \end{remark} \begin{example}\label{ex atilde} Let $Q$ be the quiver $ \xymatrix{p_{x_1}\ar@/^10pt/[rr]\ar[r]&p_{x_2}\ar[r]&p_{x_3}}$. The transjective component $\Gamma_{tr}$ of the Auslander-Reiten quiver is the infinite translation quiver illustrated in Figure \ref{fig atilde}. The position of the initial cluster $\{x_1,x_2,x_3\}$ as well as one other variable $u$ are depicted in the figure. There is a direct cluster automorphism $f\in\textup{Aut}^+\mathcal{A}$ defined by $f(x_1)=x_2, f(x_2)=x_3$ and $f(x_3)=u$. Note that $f$ induces an isomorphism of quivers $Q(\mathbf{x})\cong Q(f(\mathbf{x}))$. The corresponding APR-mutation $\mu$ of Lemma \ref{lemma_APR} is the mutation in $x_1$. We have $\mu_{x_1}(\mathbf{x})=\{u,x_2,x_3\}=f(\mathbf{x})$. But the mutation $\mu$ fixes the variables $x_2,x_3$ and sends $x_1$ to $u$, which shows that the automorphism $f$ is not given by $\mu$. Also note that $\mu$ sends the sink $p_{x_3}$ in the quiver $Q$ to a point $p_{x_3}$ of the quiver $Q(f(\mathbf{x}))$ which is neither a source nor a sink, whence $\mu$ does not induce a quiver isomorphism between $Q$ and $Q(f(\mathbf{x}))$ or between $Q^{\textup{op}}$ and $Q(f(\mathbf{x}))$; consequently, $\mu$ does not induce a cluster automorphism.
\begin{figure} \[ \xymatrix@R=10pt@C=10pt{ &&&\cdot\ar[rrd]\ar@/_5pt/[rdd]&&&x_3\ar[rrd]\ar@/_5pt/[rdd]&&&\cdot\ar[rrd]\ar@/_5pt/[rdd]&&&\cdot\ar@/_5pt/[rdd]&\\ \cdots&&\cdot\ar[ru]\ar[rrd]&&&x_2\ar[ru]\ar[rrd]&&&\cdot\ar[ru]\ar[rrd]&&&\cdot\ar[ru]\ar[rrd]&&\cdots\\ &\cdot\ar[ru]\ar@/^12pt/[rruu]&&&x_1\ar[ru]\ar@/^12pt/[rruu]&&&u\ar[ru]\ar@/^12pt/[rruu]&&&\cdot\ar[ru]\ar@/^12pt/[rruu]&&& } \] \caption{ $\Gamma_{tr}$ in Example \ref{ex atilde}} \label{fig atilde} \end{figure} \end{example} We have seen in Examples \ref{ex_mutation} and \ref{ex atilde} that not every sequence of APR-mutations is a cluster automorphism. As follows from Lemma \ref{lemma1}, if a sequence of mutations $\mu$ transforms $Q$ into itself or its opposite $Q^{op}$ in such a way that the bijection $\mu: Q_0 \to \mu(Q)_0$ extends to an isomorphism of quivers of the form either $\mu: Q \to \mu(Q)$ or $\mu: Q^{op} \to \mu(Q)$, then $\mu$ induces a cluster automorphism $f_{\mu }\in \textup{Aut}\, \mathcal{A}\left( \mathbf{x},Q\right) $ defined on the initial cluster by $f_{\mu }\left( x_{i}\right) =\mu \left( x_{i}\right) .$ Note that this applies also to quivers which are not acyclic. Now we are in a position to prove the first main theorem of this section. \begin{theorem} \label{theorem_main} The map $\phi: {\rm Aut}(\Gamma_{tr}) \to {\rm Aut}^+ {\cal A}$ defined by $\phi(g)(x_i) = X_{g(P_i[1])}$ for $g\in {\rm Aut}(\Gamma_{tr}) $ and $1\leq i\leq n,$ is a surjective group homomorphism, whose kernel equals the stabiliser ${\rm Stab}(\Gamma_{tr})_0$ of the points in $\Gamma_{tr}$. \end{theorem} \begin{proof} (a) First we prove that the map $\phi$ is well-defined. Let $f=\phi(g)$. Now, since $g\in {\rm Aut} (\Gamma_{tr})$, the local slice $g(kQ[1])$ of $\Gamma_{tr}$ has underlying quiver isomorphic to $Q$. Therefore $(f(\mathbf{x}),Q(f(\mathbf{x})))$ is a seed with quiver isomorphic to $Q$. This shows that $f$ satisfies the condition (CA1).
Moreover, $f$ satisfies the condition (CA2) because $g$ is an automorphism of $\Gamma_{tr}$, and the canonical cluster character map commutes with mutations. This proves that $f$ is a cluster automorphism. Since $g$ is an automorphism of $\Gamma_{tr}$, and so preserves the orientation of the arrows between the $P_i[1]$, the automorphism $f$ is direct. (b) We now prove that $\phi$ is a group homomorphism. Let $g_1,\, g_2 \in {\rm Aut}\, \Gamma_{tr}$, then the following equalities hold: \begin{eqnarray*} \phi(g_1 g_2)(x_i) = X_{g_1g_2(P_i[1])}, \\ \phi(g_1)\phi(g_2)(x_i) = \phi(g_1)(X_{g_2(P_i[1])}) \end{eqnarray*} with $1\leq i\leq n.$ We need to establish the equality of these two expressions. Note that $X_{g_2(P_i[1])}$ is a Laurent polynomial which we denote by $L_{2,i}(x_1,\dots,x_n)$ and, similarly, $X_{g_1(P_i[1])}$ is a Laurent polynomial which we denote by $L_{1,i}(x_1,\dots,x_n)$. It follows directly from the definition of a morphism of translation quivers that $g_1g_2kQ[1]$ is a local slice with quiver isomorphic to $Q$. Thus, denoting by $g_2(i)$ the image in $g_2kQ[1]$ of the point $P_i[1]$, we get \begin{equation*} X_{g_1g_2(P_i[1])} = L_{1,g_2(i)}( L_{2,1}(x_1,\dots,x_n) ,\dots, L_{2,n}(x_1,\dots,x_n)). \end{equation*} On the other hand, $g_2kQ[1]$ is also a local slice with quiver isomorphic to $Q$, therefore \begin{equation*} \phi(g_1)X_{g_2(P_i[1])} = L_{1,g_2(i)}( L_{2,1}(x_1,\dots,x_n) ,\dots, L_{2,n}(x_1,\dots,x_n)), \end{equation*} which establishes the required equality. (c) We now prove that the kernel of $\phi$ equals ${\rm Stab}(\Gamma_{tr})_0$. Assume that $g, \,g^\prime \in {\rm Aut}\,\Gamma_{tr}$ are such that $\phi(g)=\phi(g^\prime)$. By definition of $\phi$, this means that $g(P_i[1]) = g^\prime(P_i[1])$, for every $i$. Thus $g$ and $g^\prime$ coincide on the initial slice. Therefore $g(M)=g^\prime(M)$ for every indecomposable object $M$ of ${\cal C}_Q$, that is, $g^{-1}g^\prime \in {\rm Stab}(\Gamma_{tr})_0$. Conversely, every $g \in {\rm Stab}(\Gamma_{tr})_0$ satisfies $\phi(g)(x_i) = X_{P_i[1]} = x_i$ for every $i$, so that $\phi(g)$ is the identity.
(d) To show that $\phi$ is surjective, let $f\in{\rm Aut}^+ {\cal A}$, where ${\cal A=A}(\mathbf{x},Q)$. Then $Q(f(\mathbf{x}))\cong Q$. Let $M_i$ be an object in ${\rm ind}\, {\cal C}_Q$ such that $X_{M_i} = f(x_i) \in f(\mathbf{x})$ with $1\leq i\leq n.$ Due to the isomorphism $Q\cong Q(f(\mathbf{x})),$ the $M_i$'s form a local slice in the category ${\cal C}_Q$ and, in particular, $M_i$ lies in the transjective component of ${\cal C}_Q$. The correspondence $P_i[1] \mapsto M_i$ extends to an automorphism $g$ of $\Gamma_{tr}$ (actually to an automorphism of ${\cal C}_Q$) and we get $\phi(g)(x_i) = X_{g(P_i[1])} = X_{M_i} = f(x_i)$ for each $i.$ Therefore $\phi(g)=f.$ \end{proof} \begin{remark}\label{rem 3.11} There is always a distinguished automorphism of the transjective component, induced by the Auslander-Reiten translation $\tau $. If $Q$ is a Dynkin quiver, then $\tau $ is periodic so this automorphism is of finite order. If $Q$ is a representation-infinite quiver, then it is of infinite order and $\mathbb{Z}$ is a subgroup of Aut$^{+}\mathcal{A}$. \end{remark} \begin{lemma} \label{lemma_reflection} Let $\mathcal{A}$ be a cluster algebra with a seed $(\mathbf{x},Q)$ such that $Q$ is a tree. Then there exists an anti-automorphism $\sigma$ of $\Gamma_{tr}$ given by a reflection with respect to a vertical axis. Moreover, we have $\sigma^2=1$ and $\sigma \tau^m \sigma = \tau^{-m}$ for any $m \in {\mathbb Z}$, where $\tau$ stands for the Auslander-Reiten translation on $\Gamma_{tr}$. \end{lemma} \begin{proof} We may assume without loss of generality that the quiver $Q$ of the initial seed is a tree. Let $M$ be an arbitrary point in a transjective component $\Gamma $ of $\Gamma \left( \mathcal{D}^{b}\left( \textup{mod}\,kQ\right) \right)$. We recall that $\Gamma \cong \mathbb{Z}Q$ and we assume fixed a polarisation of $\Gamma $ (see \cite[p. 131]{ASS}). 
We define a reflection $\sigma _{\mathcal{D}}$ ``along the vertical axis passing through $M$'' in the following way. There exists a unique slice $\Sigma ^{+}$ in $\Gamma $ having $M$ as its unique source. This slice is the full subquiver of $\Gamma $ consisting of all the points $N$ in $\Gamma $ such that there exists a path from $M$ to $N$ and every such path is sectional. Dually, one constructs the unique slice $\Sigma ^{-}$ in $\Gamma $ having $M$ as its unique sink. Now there exists an obvious bijection between the sets of points of $\Sigma ^{+}$ and $\Sigma ^{-}$, mapping each point of $\Sigma ^{+}$ to the unique point in $\Sigma ^{-}$ lying in the same $\tau$-orbit. Since $Q$ is a tree, the very definition of $\mathbb{Z}Q$ implies that this bijection first extends to an anti-isomorphism between $\Sigma ^{+}$ and $\Sigma ^{-}$ and then to an anti-automorphism $\sigma _{\mathcal{D}}$ of $\Gamma$. Because $\sigma _{\mathcal{D}}$ is clearly compatible with the functor $\tau ^{-1}[1],$ it induces an anti-automorphism $\sigma $ of the transjective component $\Gamma _{tr}$ of $\Gamma \left( \mathcal{C}_{Q}\right) .$ Finally, the asserted relations for $\sigma $ and $\tau $ follow easily. \end{proof} \begin{theorem} \label{theorem_semidirect} {If $\mathcal{A}$ is a cluster algebra with a seed $(\mathbf{x},Q)$ such that $Q$ is a tree then the group of cluster automorphisms is the semidirect product ${\rm Aut} \, {\cal A} = {\rm Aut}^+ {\cal A} \rtimes \mathbb{Z}_2$. This product is not direct.} \end{theorem} \begin{proof} {The anti-automorphism $\sigma $ given by Lemma \ref{lemma_reflection} induces a cluster automorphism $f_\sigma\in{\rm Aut} \,\cal A\setminus\textup{Aut}^+\mathcal{A}$ of order two, and, therefore, Corollary \ref{cor two new} implies that ${\rm Aut} \, {\cal A} = {\rm Aut}^+ {\cal A} \rtimes \mathbb{Z}_2$.
This product is not direct because $f_\sigma$ does not commute with $\tau$.} \end{proof} \begin{corollary}{ Let $\mathcal{A}$ be a cluster algebra of Dynkin or euclidean type. Then ${\rm Aut} \,\cal A={\rm Aut}^+ {\cal A}\rtimes\mathbb{Z}_2$.} \end{corollary} \begin{proof}{ The only case that is not a tree is the euclidean type $\tilde{\mathbb{A}}$, and, for this case, the result follows from Corollary \ref{cor anti new} and Example \ref{ex anti atilde}.} \end{proof} \subsection{Computing the automorphism groups for quivers of types ${\mathbb A}, {\mathbb D}, {\mathbb E}, \tilde{{\mathbb A}}, \tilde{{\mathbb D}}, \tilde{{\mathbb E}}$} \label{sect 3.3} Using Theorems \ref{theorem_main} and \ref{theorem_semidirect}, we are able to compute the cluster automorphism groups for the cluster algebras of Dynkin and euclidean types explicitly by computing the automorphism groups of the corresponding transjective component $\Gamma_{tr}$ of the cluster category. This computation is straightforward and is done by using the fact that under such an automorphism, the local structure of the quiver is preserved. We refer the reader to \cite{Ri} for a similar calculation. The results are collected in Table \ref{table}. As an example, we discuss the case of the quiver $\tilde{\mathbb D}_{n-1}$ in more detail.
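The $\mathbb{A}_n$ rows of Table \ref{table} admit a quick computational sanity check via the classical model in which the cluster variables of type $\mathbb{A}_n$ correspond to the diagonals of a regular $(n+3)$-gon and cluster automorphisms act through symmetries of the polygon. The toy code below is our own (and assumes this polygon model); it generates the group of permutations induced on the diagonals:

```python
from itertools import combinations

def induced_group_orders(m):
    """Orders of the rotation subgroup and of the full symmetry group of a
    regular m-gon, both acting on the set of diagonals of the polygon."""
    diags = [frozenset(p) for p in combinations(range(m), 2)
             if (p[1] - p[0]) % m not in (1, m - 1)]   # exclude boundary edges
    index = {d: k for k, d in enumerate(diags)}

    def perm(vmap):
        # permutation of diagonal indices induced by a map on the vertices
        return tuple(index[frozenset(vmap(v) for v in d)] for d in diags)

    rot = perm(lambda v: (v + 1) % m)   # rotation by one vertex
    ref = perm(lambda v: (-v) % m)      # a reflection

    def closure(gens):
        # breadth-first closure of the generated permutation group
        seen = {tuple(range(len(diags)))}
        frontier = list(seen)
        while frontier:
            g = frontier.pop()
            for h in gens:
                gh = tuple(g[i] for i in h)
                if gh not in seen:
                    seen.add(gh)
                    frontier.append(gh)
        return len(seen)

    return closure([rot]), closure([rot, ref])

# A_3: hexagon, A_4: heptagon -- dihedral of order 2(n+3), rotations Z_{n+3}
assert induced_group_orders(6) == (6, 12)
assert induced_group_orders(7) == (7, 14)
# A_1: the square has only two diagonals, and the action collapses to Z_2
assert induced_group_orders(4) == (2, 2)
```

For $m=n+3\geq 5$ one obtains the dihedral group of order $2(n+3)$ with cyclic rotation subgroup ${\mathbb Z}_{n+3}$, matching the $\mathbb{A}_n$ rows of the table, while $m=4$ reproduces the exceptional behaviour of the $\mathbb{A}_1$ row.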
\begin{table}[htbp] \begin{center} \begin{tabular}{c|c|c} Dynkin Type & ${\rm Aut}^+ {\cal A}$ & ${\rm Aut}\, {\cal A}$ \\\hline ${\mathbb A}_n, n>1$ & ${\mathbb Z}_{n+3}$ & $D_{n+3}$ \\\hline ${\mathbb A}_1$ & ${\mathbb Z}_{2}$ & $\mathbb{Z}_{2}$ \\\hline ${\mathbb D}_n,\;n>4$ & ${\mathbb Z}_n\times {\mathbb Z}_2$ & $({\mathbb Z}_n\times {\mathbb Z}_2) \rtimes {\mathbb Z}_2$ \\\hline ${\mathbb D}_4$ & ${\mathbb Z}_4 \times S_3$ & $D_4 \times S_3$ \\\hline ${\mathbb E}_6$ & ${\mathbb Z}_{14}$& $D_{14}$ \\\hline ${\mathbb E}_7 $ &${\mathbb Z}_{10}$& $D_{10}$ \\\hline$ {\mathbb E}_8$ & ${\mathbb Z}_{16}$& $D_{16}$ \\ \\Euclidean Type & ${\rm Aut}^+ {\cal A}$ & ${\rm Aut}\, {\cal A}$ \\\hline $\tilde{\mathbb A}_{p,q},\;p\neq q$ & $H_{p,q}$ & $H_{p,q} \rtimes {\mathbb Z}_2$ \\\hline $\tilde{\mathbb A}_{p,p}$ & $H_{p,p}\rtimes {\mathbb Z}_2$ & $(H_{p,p} \rtimes {\mathbb Z}_2) \rtimes {\mathbb Z}_2$ \\\hline $\tilde{\mathbb D}_{n-1}, \; n\neq5$ & $G$ & $G\rtimes {\mathbb Z}_2$ \\\hline $\tilde{\mathbb D}_4$ & ${\mathbb Z} \times S_4$ & $({\mathbb Z} \times S_4)\rtimes {\mathbb Z}_2$ \\\hline $ \tilde{\mathbb E}_6$ & ${\mathbb Z} \times S_3$ & $({\mathbb Z} \times S_3) \rtimes {\mathbb Z}_2$ \\\hline $\tilde{\mathbb E}_7$ & ${\mathbb Z} \times {\mathbb Z}_2$ & $({\mathbb Z}\times {\mathbb Z}_2) \rtimes {\mathbb Z}_2$ \\\hline $\tilde{\mathbb E}_8$ & ${\mathbb Z}$ & ${\mathbb Z} \rtimes {\mathbb Z}_2$ \end{tabular} \caption{Cluster automorphism groups for quivers of Dynkin and euclidean types}\label{table} \end{center} \end{table} The notation in Table \ref{table} is as follows: $D$ stands for the dihedral group, $H_{p,q} = \langle r_1, r_2\mid r_1r_2 = r_2 r_1, r_1^p=r_2^q \rangle$ and $G$ is given by equation (\ref{G}) below. Note that for the type $\mathbb{A}_1$, the quiver $Q$ has no arrows, and therefore there are no anti-automorphisms in this case.
Moreover, the cluster algebra has exactly two clusters, each consisting of a single cluster variable and the Auslander-Reiten translation in the cluster category is of order 2. In the case of $\tilde{\mathbb{A}}_{p,q}$, the automorphisms $r_{1}$ and $ r_{2}$ are defined as follows. Let $\Sigma ^{+}$, as in the proof of Lemma \ref{lemma_reflection}, be the unique slice in the transjective component having $M$ as its only source. Let $\omega $ denote the unique path of length $p$ from $M$ to the only sink in $\Sigma^+$. The automorphism $r_{1}$ is defined as follows. We send $M$ to the target $N$ of the only arrow on $\omega $ with source $M$, and send $\Sigma ^{+}$ isomorphically to the unique slice having $N$ as its only source. This map extends to an automorphism $r_{1}$ of the transjective component. We define $r_{2}$ similarly, using the path of length $q$ from $M$ to the sink of $\Sigma ^{+}$. Note that $\tau^{-1} =r_{1}r_{2}.$ \begin{example} Consider a cluster algebra ${\cal A=A}(\mathbf{x}, Q)$, where $Q$ is the following euclidean quiver of type $\widetilde{\mathbb D}_{n-1}$ with $n\neq 5$ (the number of points is $n$). \begin{equation*} \xymatrix@R10pt{ 1\ar[rd]&&&&&&n-1\\ &3\ar[r]&4\ar[r]&\quad\cdots\quad\ar[r]&n-3\ar[r]&n-2\ar[ru]\ar[rd]\\ 2\ar[ru]&&&&&&n } \end{equation*} The transjective component $\Gamma_{tr}$ is represented by the infinite translation quiver $\mathbb{Z}Q$, an example with $n=8$ is given in Figure \ref{fig 22}. 
\begin{figure} \[\xymatrix@R10pt@C10pt@!{&&\cdot\ar[rd]&&\cdot\ar[rd]&&1\ar[rd] &&\cdot\ar[rd] &&\cdot\ar[rd]\\ &\cdot \ar[ru]\ar[r]\ar[rd]&\cdot\ar[r]&\cdot \ar[ru]\ar[r]\ar[rd] & \cdot \ar[r] &\sigma 6 \ar[ru]\ar[r]\ar[rd] & 2 \ar[r] &3 \ar[ru]\ar[r]\ar[rd] & \cdot \ar[r] &\cdot \ar[ru]\ar[r]\ar[rd] & \cdot \ar[r] &\cdot\\ \cdots&&\cdot \ar[ru]\ar[rd] && \sigma 5\ar[ru]\ar[rd] &&\cdot \ar[ru]\ar[rd] &&4\ar[ru]\ar[rd] &&\cdot \ar[ru]\ar[rd]&& \cdots \\ &\cdot\ar[ru]\ar[rd] &&\sigma 4\ar[ru]\ar[rd] &&\cdot\ar[ru]\ar[rd] &&\cdot\ar[ru]\ar[rd] &&5\ar[ru]\ar[rd] &&\cdot\\ &\sigma 2\ar[r] &\sigma 3 \ar[ru]\ar[rd]\ar[r] & \cdot\ar[r] & \cdot \ar[ru]\ar[rd]\ar[r] & \cdot\ar[r] & \cdot \ar[ru]\ar[rd]\ar[r] & \cdot\ar[r] & \cdot \ar[ru]\ar[rd]\ar[r] & \cdot\ar[r] & 6 \ar[ru]\ar[rd]\ar[r] & 7 &\\ &\sigma 1\ar[ru] &&\cdot\ar[ru] &&\cdot\ar[ru] &&\cdot\ar[ru] &&\cdot\ar[ru] &&8 } \] \caption{ Auslander-Reiten quiver $\Gamma_{tr}$ of the transjective component of type $\tilde{ \mathbb{D}}$}\label{fig 22} \end{figure} Recall that automorphisms of a translation quiver are by definition the quiver automorphisms commuting with the translation $\tau$. Besides $\tau$ we introduce three more generators of the automorphism group. Let $\rho_1$ denote the automorphism which interchanges the corresponding points in the $\tau$-orbits of $1$ and $2$ in $\Gamma_{tr}$ and fixes all other points. Let $\rho_n$ denote the automorphism which interchanges the corresponding points in the $\tau$-orbits of $n-1$ and $n$ in $\Gamma_{tr}$ and fixes all other points. Finally, let $\sigma$ be the automorphism given by the translation of the plane that sends the point $n$ to the point $1$ followed by the reflection with respect to the horizontal line through the point $1$; we have indicated the action of $\sigma$ on the points $1,2,\ldots,6$ in Figure \ref{fig 22}, and $\sigma 8=1$ and $\sigma 7=2$. 
Since for every point of $\Gamma_{tr}$ the number of incoming and outgoing arrows is preserved under a quiver automorphism, one sees that every automorphism of $\Gamma_{tr}$ can be expressed as a combination of $\tau,\rho_1,\rho_n$, and $\sigma.$ We note the following relations between these generators: \begin{enumerate} \item the translation $\tau$ commutes with all automorphisms and is of infinite order; \item $\rho_1$ and $\rho_n$ are of order two and commute with each other; \item $\sigma^2=\tau^{n-3}$; \item $\rho_1\sigma=\sigma\rho_n$ and $\sigma\rho_1=\rho_n\sigma$. \end{enumerate} Thus we get a presentation of the group of automorphisms of $\Gamma_{tr}$ as \begin{equation}\label{G} G=\left\langle \tau,\sigma,\rho_1,\rho_n \left| \begin{array}{c} \rho_i^2=1 ,\tau \rho_i=\rho_i \tau \ (i=1,n)\\ \tau\sigma=\sigma\tau,\ \sigma^2=\tau^{n-3} \\ \rho_1\sigma=\sigma\rho_n, \ \sigma\rho_1=\rho_n\sigma \end{array}\right.\right\rangle \end{equation} \end{example} \begin{section} {Cluster algebras from surfaces}\label{sect 3} Following \cite{FST}, we describe the construction of cluster algebras from surfaces. Let $S$ be an oriented Riemann surface with or without boundary, and let $M\subset S$ be a finite set of marked points such that $M$ contains at least one point of every connected component of the boundary. If the boundary is empty then the surface is said to be \emph{closed}. Points in $M$ that are in the interior of $S$ are called \emph{punctures}. We call the pair $(S,M)$ simply a \emph{surface}. For technical reasons, we require that $(S,M)$ is not a sphere with one, two or three punctures; a disc with one, two or three marked points on the boundary; or a punctured disc with one marked point on the boundary. Some simple examples of surfaces are given in Table \ref{table 1}.
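The ranks listed in Table \ref{table 1} can be checked against the arc count of \cite{FST}: an ideal triangulation of a surface of genus $g$ with $b$ boundary components, $p$ punctures and $c$ marked points on the boundary has exactly $6g-6+3b+3p+c$ arcs, and this number is the rank of the associated cluster algebra. A short check (our own code; recall that a pair of pants is a sphere with three boundary components, so $g=0$ and $b=3$):

```python
def rank(g, b, p, c):
    """Number of arcs in an ideal triangulation: 6g - 6 + 3b + 3p + c."""
    return 6 * g - 6 + 3 * b + 3 * p + c

n = 11  # any sufficiently large sample rank
assert rank(0, 1, 0, n + 3) == n   # polygon, type A_n
assert rank(0, 1, 1, n) == n       # once punctured polygon, type D_n
assert rank(0, 1, 2, n - 3) == n   # twice punctured polygon
assert rank(0, 2, 0, n) == n       # annulus
assert rank(0, 2, 1, n - 3) == n   # punctured annulus
assert rank(0, 0, 4, 0) == 6       # sphere with 4 punctures
assert rank(1, 0, 1, 0) == 3       # punctured torus
assert rank(0, 3, 0, n - 3) == n   # pair of pants
```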
\begin{table} \begin{center} \begin{tabular}{ c | c | c | c | l | l } \ g\ \ &\ \ b \ \ & \ \ c \ \ &\ \ p\ \ &\ surface & type \\ \hline 0& 1 & n+3 &0& \ polygon & $\mathbb{A}_n$\\ 0 & 1 & n &1& \ once punctured polygon & $\mathbb{D}_n$ \\ 0 & 1 & n-3 &2&\ twice punctured polygon& $\tilde{\mathbb{D}}_{n-1}$ \\ 0& 2 & n & 0&\ annulus& $\tilde{\mathbb{A}}_{n-1}$\\ 0 & 2 & n-3 & 1&\ punctured annulus& not acyclic\\ 0 & 0 & 0 & 4 & \ sphere with 4 punctures& not acyclic\\ 1 & 0 & 0 & 1 &\ punctured torus& not acyclic\\ 0 & 3 & n-3 & 0& \ pair of pants & not acyclic\\ \\ \end{tabular} \end{center} \caption{Examples of surfaces, where $g$ is the genus, $b$ the number of boundary components, $c$ the number of marked points on the boundary, $p$ the number of punctures and $n$ the rank of the cluster algebra.}\label{table 1} \end{table} \subsection{Arcs and triangulations} \begin{definition} An \emph{arc} $\gamma$ in $(S,M)$ is the isotopy class of a curve in $S$ such that \begin{itemize} \item[(a)] the endpoints of the curve are in $M$; \item[(b)] the curve does not cross itself, except that its endpoints may coincide; \item[(c)] except for the endpoints, the curve is disjoint from $M$ and from the boundary of $S$; \item[(d)] the curve does not cut out an unpunctured monogon or an unpunctured bigon. \end{itemize} \end{definition} For any two arcs $\gamma,\gamma'$ in $S$, let $e(\gamma,\gamma')$ be the minimal number of crossings of curves $\alpha$ and $\alpha'$, where $\alpha$ and $\alpha'$ range over all curves in the isotopy classes $\gamma$ and $\gamma'$, respectively. We say that two arcs $\gamma$ and $\gamma'$ are \emph{compatible} if $e(\gamma,\gamma')=0$. An \emph{ideal triangulation} is a maximal collection of pairwise compatible arcs. The arcs of an ideal triangulation cut the surface into \emph{ideal triangles}. \begin{figure} \begin{center} \includegraphics{fig2.eps} \caption{Two ideal triangulations of a punctured annulus related by a flip of the arc $6$.
The triangulation on the right hand side has a self-folded triangle.} \label{fig 2} \end{center} \end{figure} Examples of ideal triangulations are given in Figure \ref{fig 2}. There are two types of ideal triangles: triangles that have three distinct sides and triangles that have only two. The latter are called \emph{self-folded} triangles. For example, the triangle formed by the arcs $6$ and $1$ on the right hand side of Figure \ref{fig 2} is self-folded. In a self-folded triangle the arc incident to the puncture is called the \emph{radius} and the other arc is called the \emph{loop}. Ideal triangulations are connected to each other by sequences of {\it flips}. Each flip replaces a single arc $\gamma$ in a triangulation $T$ by the (unique) arc $\gamma' \neq \gamma$ that, together with the remaining arcs in $T$, forms a new ideal triangulation. Note that an arc $\gamma$ that is the radius inside a self-folded triangle in $T$ cannot be flipped. In \cite{FST}, the authors associated a cluster algebra to any bordered surface with marked points. Roughly speaking, the cluster variables correspond to arcs, the clusters to triangulations, and the mutations to flips. However, because arcs inside self-folded triangles cannot be flipped, the authors were led to introduce the slightly more general notion of {\it tagged arcs}. They showed that ordinary arcs can all be represented by tagged arcs and gave a notion of flip that applies to all tagged arcs. A {\it tagged arc} is obtained by taking an arc that does not cut out a once-punctured monogon and marking (``tagging'') each of its ends in one of two ways, {\it plain} or {\it notched}, so that \begin{itemize} \item[(a)] each end connecting to a marked point on the boundary of $S$ must be tagged plain; \item[(b)] if both ends of an arc connect to the same point then they must be tagged in the same way.
\end{itemize} \begin{definition} \label{compatible} Two tagged arcs $\alpha$ and $\beta$ are called {\it compatible} if the arcs $\alpha^0$ and $\beta^0$ obtained from $\alpha$ and $\beta$ by forgetting the taggings are compatible and \begin{itemize} \item[(a)] if $\alpha^0=\beta^0$ then at least one end of $\alpha$ must be tagged in the same way as the corresponding end of $\beta$; \item[(b)] if $\alpha^0\neq \beta^0$ but they share an endpoint $a$, then the ends of $\alpha$ and $\beta$ connecting to $a$ must be tagged in the same way. \end{itemize} \end{definition} One can represent an ordinary arc $\beta$ by a tagged arc $\iota(\beta)$ as follows. If $\beta$ does not cut out a once-punctured monogon, then $\iota(\beta)$ is simply $\beta$ with both ends tagged plain. Otherwise, $\beta$ is a loop based at some marked point $a$ and cutting out a punctured monogon with the sole puncture $b$ inside it. Let $\alpha$ be the unique arc connecting $a$ and $b$ and compatible with $\beta$. Then $\iota(\beta)$ is obtained by tagging $\alpha$ plain at $a$ and notched at $b$. Figure \ref{figtags} shows the tagged triangulation corresponding to the triangulation on the right hand side of Figure \ref{fig 2}. The notching is indicated by a bow tie. A maximal collection of pairwise compatible tagged arcs is called a {\it tagged triangulation}. \begin{figure} \begin{center} \includegraphics{figtags.eps} \caption{Tagged triangulation of the punctured annulus corresponding to the ideal triangulation of the right hand side of Figure \ref{fig 2}.} \label{figtags} \end{center} \end{figure} We are now ready to define the cluster algebra associated to the surface. For that purpose, we choose an ideal triangulation $T$ and then define a quiver $Q_T$ without loops or 2-cycles, or, equivalently, a skew-symmetric integer matrix $B_T$. Let $\tau_1,\tau_2,\ldots,\tau_n$ be the $n$ arcs of $T$. 
For any triangle $\Delta$ in $T$ which is not self-folded, we define a matrix $B^\Delta=(b^\Delta_{ij})_{1\le i\le n, 1\le j\le n}$ as follows. \begin{itemize} \item $b_{ij}^\Delta=1$ and $b_{ji}^{\Delta}=-1$ in each of the following cases: \begin{itemize} \item[(a)] $\tau_i$ and $\tau_j$ are sides of $\Delta$ with $\tau_j$ following $\tau_i$ in the clockwise order; \item[(b)] $\tau_j$ is a radius in a self-folded triangle enclosed by a loop $\tau_\ell$, and $\tau_i$ and $\tau_\ell$ are sides of $\Delta$ with $\tau_\ell$ following $\tau_i$ in the clockwise order; \item[(c)] $\tau_i$ is a radius in a self-folded triangle enclosed by a loop $\tau_\ell$, and $\tau_\ell$ and $\tau_j$ are sides of $\Delta$ with $\tau_j$ following $\tau_\ell$ in the clockwise order; \end{itemize} \item $b_{ij}^\Delta=0$ otherwise. \end{itemize} Then define the matrix $ B_{T}=(b_{ij})_{1\le i\le n, 1\le j\le n}$ by $b_{ij}=\sum_\Delta b_{ij}^\Delta$, where the sum is taken over all triangles in $T$ that are not self-folded. Note that $B_{T}$ is skew-symmetric and each entry $b_{ij}$ is either $0,\pm 1$, or $\pm 2$, since every arc $\tau$ is in at most two triangles. We associate a quiver $Q_T$ to the matrix $B_T$ as follows. The points of $Q_T$ are labeled by $1,2,\ldots,n$ and the number of arrows from $i$ to $j$ equals $b_{ij}$, with the convention that if $b_{ij}$ is a negative number, then having $b_{ij}$ arrows from $i$ to $j$ means having $|b_{ij}|$ arrows from $j$ to $i$. For example, the quiver corresponding to the triangulation on the right of Figure \ref{fig 2} is \[\xymatrix@C50pt@R10pt{1\ar[rd]&&3\ar[rd]\ar[dd]\\&2\ar[ru]&&4\\ 6\ar[ru]&&5\ar[ul]\ar[ru]}\] Since the matrix $B_T$ is skew-symmetric, it follows that $Q_T$ has no oriented cycles of length at most two. The cluster algebra $\mathcal{A}=\mathcal{A}(\mathbf{x},Q_T)$ given by the quiver $Q_T$ is said to be the \emph{cluster algebra} (with trivial coefficients) \emph{associated to the surface $(S,M)$}. 
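As a small illustration of this construction (our example; it is not one of the surfaces in the figures above), consider an annulus with exactly one marked point on each boundary component. A triangulation consists of two arcs $\tau_1,\tau_2$ joining the two marked points and cutting the annulus into two triangles, neither of which is self-folded. One checks that in both triangles $\tau_2$ follows $\tau_1$ in the clockwise order, so rule (a) contributes $b^\Delta_{12}=1$ twice, and the two contributions add instead of cancelling:
\[ B_T=\left(\begin{array}{cc} 0 & 2\\ -2 & 0\end{array}\right),\]
so that $Q_T$ is the Kronecker quiver with two arrows from $1$ to $2$. This realises the extremal value $b_{ij}=\pm 2$ permitted by the fact that every arc lies in at most two triangles.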
\begin{remark}\label{remtag} It has been shown in \cite{FST} that if the surface $(S,M)$ is not a closed surface with exactly one puncture then there is a bijection between tagged arcs in $(S,M)$ and cluster variables in the cluster algebra, such that compatible tagged arcs correspond to compatible cluster variables, and tagged triangulations correspond to clusters. The mutations in the cluster algebra are given by the flips in the tagged triangulations. If $(S,M)$ is a closed surface with exactly one puncture, for example a once punctured torus, then there is a bijection between arcs (not tagged arcs) and cluster variables. The reason for this is that a flip cannot change the tagging at the endpoint of a given arc: all arcs are incident to the unique puncture, so changing the tagging of one arc would make it incompatible with the others. This fact will be important when we consider the cluster automorphisms induced by changes of taggings, see Lemma \ref{lemtag} and Theorem \ref{theorem mg}. \end{remark} \subsection{Mapping class groups} \label{sect 4.2} In this section, we give the definitions and some basic properties of mapping class groups. For further details we refer the reader to \cite{FM}. Let $\textup{Homeo}^+(S)$ be the group of orientation preserving homeomorphisms from $S$ to $S$ and let $\textup{Homeo}^+(S,\partial S)$ be the subgroup of all $f\in \textup{Homeo}^+(S)$ such that the restriction $f|_{\partial S}$ of $f$ to the boundary is equal to the identity $1_{\partial S}$. Two homeomorphisms $f,g$ of $S$ are \emph{isotopic} if there is a continuous function $H:S\times [0,1]\to S$ such that $H(x,0)=f(x)$ and $H(x,1)=g(x)$ for all $x\in S$, and such that for each $t\in [0,1]$ the map $H(-,t):S\to S$ is a homeomorphism.
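As a simple example of an isotopy (ours, for illustration only): every rotation $R_\theta$ of a disc or an annulus about its centre is isotopic to the identity, via
\[ H(x,t)=R_{(1-t)\theta}(x), \qquad t\in[0,1],\]
since $H$ is continuous, $H(-,0)=R_\theta$, $H(-,1)=1_S$, and each $H(-,t)$ is a homeomorphism. Note that this isotopy moves the marked points; a rotation need not be isotopic to the identity \emph{relative to} a set $M$ of marked points, a distinction that is introduced below and used repeatedly in this section.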
Let $\textup{Homeo}_0(S,\partial S)$ be the subgroup of all $f\in \textup{Homeo}^+(S,\partial S)$ that are isotopic to the identity $1_S$ via an isotopy $H$ fixing $\partial S$ pointwise, thus $H(x,t)=x$ for all $x\in \partial S$ and $t\in [0,1]$. The \emph{mapping class group} $\mathcal{M}od\,S$ of the surface $S$ is defined as the quotient group \[\mathcal{M}od\,S=\textup{Homeo}^+(S,\partial S)/\textup{Homeo}_0(S,\partial S).\] We now define the mapping class group of the surface with marked points $(S,M)$ in a similar way. Let $\textup{Homeo}^+(S,M)$ be the group of orientation preserving homeomorphisms from $S$ to $S$ which map $M$ to $M$. Note that we do \emph{not} require that the points in $M$ are fixed, nor that the points on the boundary of $S$ are fixed, nor that each boundary component is mapped to itself. However, if a boundary component $C_1$ is mapped to another component $C_2$, then the two components must have the same number of marked points. We say that a homeomorphism $f$ is \emph{isotopic to the identity relative to $M$} if $f$ is isotopic to the identity via an isotopy $H$ that fixes $M$ pointwise, thus $H(x,t)=x$ for all $x\in M$ and $t\in [0,1]$. Let $\textup{Homeo}_0(S,M)$ be the subgroup of all $f\in \textup{Homeo}^+(S,M)$ that are isotopic to the identity relative to $M$. We define the \emph{mapping class group} $\mathcal{MG}(S,M)$ of the surface $(S,M)$ to be the quotient \[\mathcal{MG}(S,M)=\textup{Homeo}^+(S,M)/\textup{Homeo}_0(S,M).\] The two mapping class groups are related as follows. \begin{lemma}\label{mgg} $\mathcal{M}od\,S $ is isomorphic to a subgroup of $\mathcal{MG}(S,M)$. \end{lemma} \begin{proof} Clearly $\textup{Homeo}_0(S,\partial S)$ is a subgroup of $\textup{Homeo}_0(S,M)$, and thus there is a map from $\mathcal{M}od\,S$ to $\mathcal{MG}(S,M)$ sending the class of a homeomorphism $f$ in $\mathcal{M}od\,S$ to its class in $\mathcal{MG}(S,M)$. This map is an injective group homomorphism.
\end{proof} Next we review \emph{Dehn twists}. If $S$ is an annulus, we can parametrise $S$ as $S^1\times [0,1]$, where $S^1$ denotes the circle of radius one, such that the two boundary components of $S$ are $S^1\times\{0\}$ and $S^1\times\{1\}$. Then the map $T:S^1\times [0,1]\to S^1\times[0,1], (\theta,t)\mapsto ( \theta +2\pi t, t)$ is an orientation preserving homeomorphism that fixes both boundary components of $S$ pointwise. $T$ is called the \emph{Dehn twist} on $S$. Thus $T\in\textup{Homeo}^+(S,\partial S)$, and since $T$ is not isotopic to the identity relative to $\partial S$, the class of $T$ in $\mathcal{M}od\,S$ is nontrivial. It is clear that the class of $T$ has infinite order in $\mathcal{M}od\,S$, and hence it generates an infinite cyclic subgroup of $\mathcal{M}od\,S $. Since $S$ is an annulus, one can show that the class of $T$ actually generates the whole group $\mathcal{M}od\,S $. One can think of this Dehn twist as cutting the annulus along the equator $S^1\times\{1/2\}$, performing a full rotation of one end (keeping the boundary fixed) and gluing the two pieces back together, see Figure \ref{fig dehn}. \begin{figure} \includegraphics{dehn1.eps} \caption{Dehn twist on the annulus; the curve $a$ is mapped to the curve $a'$, and the equator is drawn as a dashed line} \label{fig dehn} \end{figure} Now suppose that $S$ is any Riemann surface, and that $c$ is a closed simple curve in $S$. Then one can define a Dehn twist about $c$ in $S$ by performing the Dehn twist $T$ on a regular neighbourhood $N$ of $c$ in $S$ which is homeomorphic to an annulus, see Figure \ref{fig regular}.
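Let us record explicitly why this local construction is well defined (a standard observation): in the parametrisation of the annulus $N$, the twist satisfies
\[ T(\theta,0)=(\theta,0), \qquad T(\theta,1)=(\theta+2\pi,1)=(\theta,1),\]
since angles on $S^1$ are taken modulo $2\pi$. Thus $T$ restricts to the identity on $\partial N$, and it can therefore be extended by the identity to the rest of $S$, yielding a homeomorphism of $S$.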
\begin{figure} \begin{center} \includegraphics{regular.eps} \end{center}\caption{Regular neighborhood of a closed curve $c$} \label{fig regular} \end{figure} \begin{remark}\label{rem mg1} If the surface $S$ has genus at least one or if $S$ has genus zero and at least two boundary components, then there exists a Dehn twist that generates an infinite cyclic subgroup of $\mathcal{M}od\,S$. \end{remark} \begin{remark}\label{rem mg2} We list the mapping class groups of some Riemann surfaces: \begin{center} \begin{tabular}{|c|c|c|c|c|}\hline S& disc & annulus& punctured disc & torus\\\hline $\mathcal{M}od (S)$ & $0$ &$\mathbb{Z}$&$0$&$\textup{SL}(2,\mathbb{Z})$ \\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{|c|c|c|} \hline S& sphere with $3$ punctures& disc with $p$ punctures\\ \hline $\mathcal{M}od (S)$ & $S_3$ &$B_p$ \\ \hline \end{tabular} \vspace{10pt} \end{center} where $S_3$ denotes the symmetric group on $3$ letters and $B_p$ the braid group on $p$ strands. In general, mapping class groups are very difficult to compute and are known only in a few cases. \end{remark} \subsection{Marked mapping class group} In order to describe the cluster automorphism group of a cluster algebra corresponding to a marked surface $\left( S,M\right) $, we need a group that contains the mapping class group $\mathcal{MG}(S,M)$ from the previous subsection, but also contains automorphisms that change the taggings at the punctures. We call this group the \emph{marked mapping class group.} \begin{definition} A \emph{marked mapping class} $(\bar f, \cal P) $ is an element $\bar f\in \mathcal{MG}(S,M)$ together with a subset $\cal P$ of the set of punctures of $(S,M)$. \end{definition} If the set $\cal P$ consists of a single element $z$, then we write $(\bar f, z)$ instead of $(\bar f, \{z\})$ for the marked mapping class.
A marked mapping class acts on the set of tagged arcs of the surface by applying the homeomorphism $f$ and changing the tagging at each puncture in the set $\mathcal P$. We can define a product on the set of marked mapping classes by \[(\bar f_1,\mathcal{P}_1)(\bar f_2, \mathcal {P}_2) =(\bar f_1\bar f_2, \mathcal {P}_1\ominus f_1(\mathcal {P}_2)),\] where $\ominus $ denotes the symmetric difference $A\ominus B= (A\cup B)\setminus (A\cap B)$. \begin{lemma} The set of marked mapping classes forms a group under the product above. \end{lemma} \begin{proof} Associativity follows from the associativity of the symmetric difference, the identity is given by $(1, \emptyset)$, where $1$ denotes the identity of $\mathcal{MG}(S,M)$, and inverses are given by $(\bar f,\mathcal{P})^{-1}=(\bar f^{-1},f^{-1}(\mathcal{P}))$; indeed, $(\bar f,\mathcal{P})(\bar f^{-1},f^{-1}(\mathcal{P}))=(1,\mathcal{P}\ominus f(f^{-1}(\mathcal{P})))=(1,\emptyset)$. \end{proof} \begin{definition} The \emph{marked mapping class group} $\mathcal{MG}_{\bowtie}(S,M)$ of the surface $(S,M)$ is the group of all marked mapping classes of $(S,M)$. \end{definition} We can also define $\mathcal{MG}_{\bowtie}(S,M)$ as a semidirect product as follows. Let $\{z_1,\ldots,z_p\}$ be the set of punctures and let $\mathcal{Z}$ be the power set of $\{z_1,\ldots,z_p\}$, which is a group with respect to the operation $\ominus$. Note that $\mathcal{Z}\cong\mathbb{Z}_2^p$. For each $\overline{f} \in \mathcal{MG}(S,M)$, the homeomorphism $f$ induces an automorphism of $\mathcal{Z}$. This defines an action of $\mathcal{MG}(S,M)$ on $\mathcal{Z}$. \begin{lemma}\label{lem sd} $\mathcal{MG}_{\bowtie}(S,M)$ is isomorphic to the semidirect product $\mathcal{Z}\rtimes \mathcal{MG}(S,M)$ with respect to the above action. \end{lemma} \begin{proof} The product in $\mathcal{Z}\rtimes\mathcal{MG}(S,M)$ is defined as \[(\overline{f}_1,\mathcal{P}_1)(\overline{f}_2,\mathcal{P}_2)=(\overline{f}_1\overline{f}_2,\mathcal{P}_1\ominus f_1(\mathcal{P}_2)),\] which coincides with the product of marked mapping classes defined above; this proves the statement.
\end{proof} \begin{remark}\label{rem 38} If the surface has precisely one puncture, then for each $\overline{f}\in \mathcal{MG}(S,M)$ the homeomorphism $f$ must fix the puncture, hence the action of $\mathcal{MG}(S,M)$ on $\mathcal{Z}$ is trivial, whence $\mathcal{MG}_{\bowtie}(S,M)$ is the direct product $\mathcal{Z}\times \mathcal{MG}(S,M)$. \end{remark} \begin{corollary}\label{cor 37} \begin{itemize} \item[{\rm (1)}] $\mathcal{MG}(S,M)$ is a subgroup of $\mathcal{MG}_{\bowtie}(S,M)$; \item[{\rm (2)}] $\mathcal{Z}$ is a normal subgroup of $\mathcal{MG}_{\bowtie}(S,M)$; \item[{\rm (3)}] $\mathcal{MG}_{\bowtie}(S,M)$ is generated by the elements $(\bar f, z)$ where $\bar f$ runs over all elements of $\mathcal{MG}(S,M)$ and $z$ runs over all punctures; \item [{\rm (4)}] $(1,z)(1,z)=(1,\emptyset)$, more generally $(1,\mathcal{ P})^m=(1, \emptyset)$ if $m$ is even, and $(1,\mathcal{ P})^m=(1,\mathcal{P})$ if $m$ is odd; \item[{\rm (5)}] If $z,z'$ are two punctures such that $f$ maps $z$ to $z'$, then \[ \big(1, z'\big)\,\big(\bar f,\emptyset\big) = \big(\bar f,\emptyset\big)\,\big(1,z\big).\] \end{itemize} \end{corollary} \begin{proof} (1), (2) and (3) are direct consequences of Lemma \ref{lem sd}, and (4) and (5) are easy computations; for instance, (5) follows from $\big(\bar f,\emptyset\big)\big(1,z\big)=\big(\bar f,\emptyset\ominus f(\{z\})\big)=\big(\bar f,\{z'\}\big)=\big(1,z'\big)\big(\bar f,\emptyset\big)$. \end{proof} \subsection{Mapping class group and cluster automorphism group} We now show that the group of cluster automorphisms has a subgroup isomorphic to $\mathcal{MG}_{\bowtie}(S,M)$. The change of tagging induces a cluster automorphism which is described in the following lemma. Let $z$ be a puncture in $(S,M)$ and $\alpha$ any arc. We denote by $\alpha^z$ the tagged arc that is isotopic to $\alpha$ and has the opposite tagging at each end incident to $z$. Essentially there are three different cases, which are illustrated in Figure \ref{figtag}: $\alpha$ can have one endpoint, both endpoints, or no endpoint equal to $z$.
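In all three cases one has
\[ (\alpha^z)^z=\alpha, \qquad\text{and}\qquad \alpha^z=\alpha \ \text{whenever $z$ is not an endpoint of $\alpha$},\]
so the change of tagging at $z$ is an involution on the set of tagged arcs. This observation is the geometric source of the identity $\psi_z^2=1$ satisfied by the cluster automorphisms $\psi_z$ introduced below.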
\begin{figure} \begin{center} \scalebox{0.8}{\input{figtag.pstex_t}} \caption{Change of tagging at $z$}\label{figtag} \end{center} \end{figure} \begin{lemma} \label{lemtag} Assume $(S,M)$ is not a closed surface with exactly one puncture, and let $T$ be a triangulation of $(S,M)$. Then, for every puncture $z$ of $(S,M)$, the map $\psi_z:\mathcal{A}\to\mathcal{A}$ defined by $\psi_z(x_\tau)=x_{\tau^z}$ for every arc $\tau\in T$, where $x_\tau$ is the cluster variable corresponding to $\tau$, and extended to the other cluster variables by the algebra homomorphism properties, is a cluster automorphism in $\textup{Aut}^+\mathcal{A}$. \end{lemma} \begin{proof} The cluster algebra $\mathcal{A}$ has an initial seed $(\mathbf{x}_T,B_T)$ associated to the triangulation $T$. As noted in \cite[Definition 9.2]{FST}, the compatibility of tagged arcs is invariant with respect to a simultaneous change of all tags at a given puncture, therefore the set $T'=\{\tau_1^z,\ldots,\tau_n^z\}$ is also a triangulation, and hence it defines another seed $(\mathbf{x}_{T'},B_{T'})$ of $\mathcal{A}$, where \[ B_{T'}=\left(b_{\tau_i^z\tau_j^z}\right)_{ij}=\left(b_{\tau_i\tau_j}\right)_{ij} =B_T.\] The automorphism $\psi_z$ sends the seed $(\mathbf{x}_T,B_T)$ to the seed $(\mathbf{x}_{T'},B_{T'})$, which shows (CA1). Moreover, since $B_{T'}=B_{T}$, the quivers corresponding to these two matrices are equal. By Lemma \ref{lemma1}, $\psi_{z}\in \textup{Aut}^{+}\mathcal{A}$. \end{proof} \begin{remark} In the case of a closed surface with exactly one puncture, $\psi_z$ is not defined. Indeed, in this case all arcs of a triangulation start and end at the puncture and thus must be tagged in the same way. Therefore flips do not change the tagging. \end{remark} \begin{theorem} \label{theorem mg} Let $(S,M)$ be a surface with $p$ punctures. Then $\mathcal{MG}(S,M)$ is isomorphic to a subgroup of $\textup{Aut}^+\mathcal{A}$.
If moreover $p\ge 2$, or if $\partial S\ne\emptyset$, then $\mathcal{MG}_{\bowtie}(S,M)$ is isomorphic to a subgroup of $\textup{Aut}^+\mathcal{A}$. \end{theorem} \begin{proof} We start by showing that $\mathcal{MG}(S,M)$ is isomorphic to a subgroup of $\textup{Aut}^+\mathcal{A}$. Fix a triangulation $T$ and let $\mathbf{x}_T$ be the corresponding cluster. Denote the elements of $T$ by $\tau_1,\ldots,\tau_n$ and those of $\mathbf{x}_T$ by $x_{\tau_1},\ldots,x_{\tau_n}$. Then $\{x_{\tau_1},\ldots,x_{\tau_n}\}$ is a transcendence basis of the ambient field $\mathcal{F}$ of the cluster algebra. For any element $f\in \textup{Homeo}^+(S,M)$, let $\overline{f}\in\mathcal{MG}(S,M)$ denote its class in the mapping class group. Define a map $\phi:\mathcal{MG}(S,M)\to \textup{Aut}^+\mathcal{A}$ by letting $\phi(\overline{f})$ be the map from $\mathcal{A}$ to $\mathcal{A}$ defined on the basis by $\phi(\overline{f})(x_{\tau_i})=x_{f(\tau_i)}$ and extended to $\mathcal{A}$ by the algebra homomorphism properties. We show that $\phi$ is an injective group homomorphism. To show that the definition of $\phi $ does not depend on the choice of the representative $f$, assume that $f$ is isotopic to the identity relative to $M$. Then $f(\tau_i)$ is isotopic to $\tau_i$ relative to $M$, and thus $f(\tau_i)$ and $\tau_i$ represent the same arc. It follows that $\phi$ does not depend on the choice of $f$. Next we show that $\phi(\overline{f})$ is a cluster automorphism. Since $f$ is a homeomorphism, any two arcs $\alpha,\beta$ in $(S,M)$ which are compatible have compatible images $f(\alpha),f(\beta)$ in $(S,M)$. Therefore any triangulation of $(S,M)$ is mapped under $f$ to a triangulation of $(S,M)$. Thus $\phi(\overline{f})$ maps clusters to clusters. By Corollary \ref{cor2.7bis}, $\phi(\overline{f})$ is a cluster automorphism.
Moreover, since $f$ is actually an orientation preserving homeomorphism, we have $\phi(\overline{f})\in \textup{Aut}^+\mathcal{A}$, and this shows that $\phi$ is well-defined. Now we show that the definition of $\phi$ does not depend on the choice of the triangulation $T$. That is, let us show that for any arc $\alpha $ in $(S,M)$ \begin{equation} \label{stern} x_{f(\alpha)}=\phi(\overline{f})(x_\alpha). \end{equation} Indeed, let $\alpha $ be an arc in $(S,M)$. Then there is a sequence of flips $\mu_T$ such that $\mu_T\,\tau=\alpha$ for some $\tau\in T$. Let $\overline{\mu}$ denote the corresponding sequence of mutations in $\mathcal{A}$. Then $x_{f(\alpha)}= x_{f(\mu_T\,\tau)}=x_{\mu_{f(T)}f(\tau)}$, where the last identity holds because $f$ commutes with flips. Since flips correspond to mutations, we get $x_{f(\alpha)}= \overline{\mu}_{\phi(\overline{f})(\mathbf{x}_T)}x_{f(\tau)}$, which by definition of $\phi(\overline{f})$ is equal to $\overline{\mu}_{\phi(\overline{f})(\mathbf{x}_T)}\phi(\overline{f})(x_{\tau})$. Now using the fact that $\phi(\overline{f})$ is a cluster automorphism, we get $x_{f(\alpha)}= \phi(\overline{f})(\overline{\mu}_{\mathbf{x}_T}(x_\tau))$, which is equal to $\phi(\overline{f})(x_{\mu_{T}(\tau)})=\phi(\overline{f})(x_\alpha)$, because flips correspond to mutations. To show that $\phi$ is a group homomorphism, let $\overline{f},\overline{g}\in \mathcal{MG}(S,M)$; then for any arc $\alpha$, \[\phi(\overline{f}\circ\overline{g})(x_\alpha)=x_{f\circ g (\alpha)} =\phi(\overline{f})(x_{g(\alpha)})=\phi(\overline{f})\big(\phi(\overline{g})(x_\alpha)\big). \] Finally, we show that $\phi $ is injective. Let $\overline{f}\in\textup{Ker}\,\phi$. Then $\phi(\overline{f})=1_{\mathcal{A}}$, and for any arc $\alpha$ in $(S,M)$, we have $x_\alpha=\phi(\overline{f})(x_\alpha)=x_{f(\alpha)}$, where the last identity holds by equation (\ref{stern}).
Thus $f(\alpha)$ is isotopic to $\alpha$, for every arc $\alpha$ in $(S,M)$, and in particular, $f$ fixes each point in $M$. Thus for every triangle $\Delta$, $f$ fixes the vertices of $\Delta$ and maps the arcs of $\Delta$ to isotopic arcs. Therefore $f$ is isotopic to the identity on each triangle, and hence on the whole surface. This shows that the class of $f$ is trivial in $\mathcal{MG}(S,M)$, and hence $\phi$ is injective. This shows that $\mathcal{MG}(S,M)$ is isomorphic to a subgroup of $\textup{Aut}^+\mathcal{A}$. Now suppose that $p\ge2$ or $\partial S \ne \emptyset$. For any puncture $z$, let $\psi_z$ be the cluster automorphism of Lemma \ref{lemtag}. Define a map $\chi:\mathcal{MG}_{\bowtie}(S,M)\to \textup{Aut}^+\mathcal{A}$ by $\chi(\overline{f},\mathcal{P})=(\prod_{z\in\mathcal{P}}\psi_z)\phi(\overline{f})$. In order to show that $\chi$ is a group homomorphism, we compute \begin{equation}\label{eq44} \chi( (\overline{f}_1,\mathcal{P}_1)(\overline{f}_2,\mathcal{P}_2) ) = (\prod_{z\in \mathcal{P}_1\ominus f_1(\mathcal{P}_2)} \psi_z) \phi(\overline{f}_1\overline{f}_2)\end{equation} and on the other hand \begin{eqnarray} \chi (\overline{f}_1,\mathcal{P}_1)\chi(\overline{f}_2,\mathcal{P}_2)&=& (\prod_{z\in \mathcal{P}_1} \psi_z) \phi(\overline{f}_1)(\prod_{z\in \mathcal{P}_2} \psi_z) \phi(\overline{f}_2) \nonumber \\ &=&(\prod_{z\in \mathcal{P}_1} \psi_z) (\prod_{z\in f_1(\mathcal{P}_2)} \psi_z) \phi(\overline{f}_1)\phi(\overline{f}_2),\label{eq45}\end{eqnarray} where the last identity follows from the equation $\phi(\overline{f})\psi_z=\psi_{f(z)}\phi(\overline{f})$. The equality of the expressions in equations (\ref{eq44}) and (\ref{eq45}) now follows because $\phi $ is a homomorphism and because $\psi_z^2=1$. This shows that $\chi $ is a homomorphism. To show that $\chi $ is injective, suppose that $\chi(\overline{f},\mathcal{P}) $ is the identity automorphism.
Then we have $\mathcal{P}=\emptyset$ and $\chi(\overline{f},\emptyset)=\phi(\overline{f})$, and from the injectivity of $\phi$ we get that $\overline{f}=1$. Thus $\chi $ is injective. \end{proof} The theorem does not describe the whole group $\textup{Aut}^+\mathcal{A}$ but only a subgroup, and it is not true in general that $\textup{Aut}^+\mathcal{A}= \mathcal{MG}_{\bowtie}(S,M)$. However, the only cases we know where this equality does not hold are the surfaces corresponding to the acyclic types $\mathbb{D}_4$ and $\tilde{\mathbb{D}}_4$. These two types correspond to star-shaped quivers with $3$ and $4$ branches, which have $S_3$-symmetry and $S_4$-symmetry, respectively. However, the corresponding surfaces do not have such symmetries. We conjecture that these are the only exceptions, since we know of no other surface that gives rise to a quiver having an $S_\ell$-symmetry, with $\ell>2$. \begin{conj}\label{conj} Let $(S,M)$ be any surface different from the disc with exactly one puncture and four marked points on the boundary or the disc with exactly two punctures and two marked points on the boundary. Then \begin{enumerate} \item if $(S,M)$ is not a closed surface with exactly one puncture then \[\textup{Aut}^+\mathcal{A}= \mathcal{MG}_{\bowtie}(S,M);\] \item if $(S,M)$ is a closed surface with exactly one puncture then \[\textup{Aut}^+\mathcal{A}= \mathcal{MG}(S,M).\] \end{enumerate} \end{conj} We can prove this conjecture using the results from section \ref{sect 3.3} in the cases where the cluster algebra from the surface is of acyclic type. \begin{theorem}\label{theorem a} Let $(S,M)$ be a disc or an annulus without punctures. Then \[\textup{Aut}^+\mathcal{A}=\mathcal{MG}(S,M)=\mathcal{MG}_{\bowtie}(S,M).\] \end{theorem} \begin{proof} Since $(S,M)$ has no punctures, we have $\mathcal{MG}(S,M)=\mathcal{MG}_{\bowtie}(S,M)$. Suppose first that $(S,M)$ is a disc, and let the number of marked points be $n+3$.
Then the cluster algebra is of Dynkin type $\mathbb{A}_n$, and we know from section \ref{sect 3.3} that $\textup{Aut}^+\mathcal{A}\cong \mathbb{Z}_{n+3}$. Thus we only need to show that $\mathcal{MG}(S,M)\cong\mathbb{Z}_{n+3}$. We may assume without loss of generality that the marked points are the points of a regular polygon, so that any rotation about the centre of the disc of angle $k \frac{2\pi}{n+3}$, with $k\in \mathbb{Z}$, maps $M$ to $M$. Each of these rotations is an orientation preserving homeomorphism of $(S,M)$, and it is isotopic to the identity relative to $M$ if and only if it fixes each point in $M$. This shows that the rotations form a subgroup of $\mathcal{MG}(S,M)$ isomorphic to $\mathbb{Z}_{n+3}$. Each element of $\mathcal{MG}(S,M)$ is determined by its values on $M$, because if two orientation preserving homeomorphisms $f,g$ agree on $M$, then $fg^{-1}$ fixes $M$, and we may suppose without loss of generality that $fg^{-1}\in \textup{Homeo}^+(S,\partial S)$. Since $\mathcal{M}od\,S=0$, it follows that $fg^{-1}\in \textup{Homeo}_0(S,\partial S)$, and therefore $fg^{-1}\in \textup{Homeo}_0(S,M)$. Since each element of $\mathcal{MG}(S,M)$ maps the boundary to itself and preserves orientation, each element can be represented by a rotation. This shows that $\mathcal{MG}(S,M)\cong\mathbb{Z}_{n+3}$. Now suppose that $(S,M)$ is an annulus, let $C_1,C_2$ be the two boundary components of $(S,M)$, and let $p$ be the number of marked points on $C_1$ and $q$ be the number of marked points on $C_2$.
Then the cluster algebra is of Euclidean type $\tilde{\mathbb{A}}_{p,q}$, and we know from section \ref{sect 3.3} that \[\textup{Aut}^+\mathcal{A}\cong \left\{ \begin{array} {ll} H_{p,q} &\textup{if $p\ne q$;}\\ H_{p,p}\times \mathbb{Z}_2 &\textup{if $p= q$;} \end{array} \right.\] where $H_{p,q}=\langle r_1,r_2\mid r_1r_2=r_2r_1, r_1^p=r_2^q\rangle.$ As in the case of the disc above, the rotations of each boundary component form a subgroup of $\mathcal{MG}(S,M)$. Note however that these subgroups are infinite cyclic. We can choose two generators $r_1 $ for the group given by rotating $C_1$ and $r_2 $ for the group given by rotating $C_2$ such that $r_1^p$ and $r_2^q$ fix every point in $M$ and $r_1^p=r_2^q$. (Thus $r_1,r_2$ are rotations in opposite directions.) Moreover $r_1r_2=r_2r_1$. This shows that $H_{p,q}$ is a subgroup of $\mathcal{MG}(S,M)$. Note that $r_1^p$ and $r_2^q$ are Dehn twists of the annulus described in section \ref{sect 4.2}. Suppose first that $p\ne q$. Then each element of $\mathcal{MG}(S,M)$ maps each boundary component to itself and, in particular, on each boundary component it is given by a rotation. Moreover, each element of $\mathcal{MG}(S,M)$ is determined by its values on $M$ up to composition with $r_1^p$, because if two elements $f,g$ agree on $M$, then $fg^{-1}$ fixes $M$, hence without loss of generality $fg^{-1}$ fixes each point on the boundary. It follows that $fg^{-1}\in\mathcal{M}od\,S$ by Lemma \ref{mgg}, and therefore $fg^{-1}$ is a power of the Dehn twist of section \ref{sect 4.2}, hence a power of $r_1^p$. This shows that $\mathcal{MG}(S,M)\cong H_{p,q}$. Now suppose that $p=q$. Then the elements of $\mathcal{MG}(S,M)$ may map one boundary component to the other. Exchanging the boundary components twice maps each boundary component to itself, and by the same argument as in the case $p\ne q$, we see that such an element is given by the rotations.
Thus exchanging the boundary components corresponds to a subgroup of order two, whence $\mathcal{MG}(S,M)\cong H_{p,p}\rtimes \mathbb{Z}_2$, as required. \end{proof} \begin{theorem}\label{theorem d} Let $(S,M)$ be a disc with $p$ punctures, where $p$ is equal to $1$ or $2$, and suppose that the number of marked points on the boundary is at least $5$ if $p=1$ and at least $3$ if $p=2$. Then\[\textup{Aut}^+\mathcal{A}= \mathcal{MG}_{\bowtie}(S,M).\] \end{theorem} \begin{proof} Suppose first that $(S,M)$ is a disc with one puncture, and let $n$ be the number of marked points on the boundary. By our assumption, we have $n>4$. Then the cluster algebra is of type $\mathbb{D}_n$ and we know from section \ref{sect 3.3} that $\textup{Aut}^+\mathcal{A} \cong \mathbb{Z}_n\times \mathbb{Z}_2$. On the other hand, the mapping class group of the once punctured disc is equal to the mapping class group of the unpunctured disc, see Remark \ref{rem mg2}, thus $\mathcal{MG}(S,M)\cong\mathbb{Z}_n$. Now it follows from Remark \ref{rem 38} that $\mathcal{MG}_{\bowtie}(S,M)\cong\mathbb{Z}_n\times \mathbb{Z}_2$ as required. Suppose now that $(S,M)$ is a disc with two punctures, and let $n-3$ be the number of marked points on the boundary. By our assumption, we have $n>5$. Then the cluster algebra is of type $\tilde{\mathbb{D}}_{n-1}$, and we know from section \ref{sect_acyclic} that $\textup{Aut}^+\mathcal{A}\cong G$, where \begin{equation}\label{GG} G=\left\langle \tau,\sigma,\rho_1,\rho_n \left| \begin{array}{c} \rho_i^2=1 ,\tau \rho_i=\rho_i \tau \ (i=1,n)\\ \tau\sigma=\sigma\tau,\ \sigma^2=\tau^{n-3} \\ \rho_1\sigma=\sigma\rho_n, \ \sigma\rho_1=\rho_n\sigma \end{array}\right.\right\rangle \end{equation} The mapping class group $\mathcal{M}od$ of the disc with $p$ punctures (without any marked points on the boundary) is isomorphic to the braid group $B_p$ on $p$ strands, see Remark \ref{rem mg2}, thus in our situation it is isomorphic to $B_2\cong \mathbb{Z}$. Let $s$ be a generator of $B_2$.
Then $s$ maps one puncture to the other and $s^2$ is isotopic to a rotation of the boundary by the angle $2\pi$ that fixes the punctures. On the other hand, the elements of $\mathcal{MG}(S,M)$ which are induced by the rotations of the boundary (fixing the punctures) form an infinite cyclic group, and we can choose a generator $r$ such that $r^{n-3}=s^2$. Clearly, $rs=sr$. Up to composition with $r^{n-3}$ and $s$, any element of $\mathcal{MG}(S,M)$ is determined by its values on $M$, because if $f,g\in \mathcal{MG}(S,M)$ agree on $M$, then $fg^{-1}$ fixes each point in $M$, hence we can suppose without loss of generality that $fg^{-1}$ fixes each point on the boundary. Thus, since $\mathcal{M}od\,S$ is generated by $s$, it follows that $fg^{-1}$ is a power of $s$. This shows that $\mathcal{MG}(S,M)=\langle r,s\mid s^2=r^{n-3}, rs=sr\rangle$. We must show that $G\cong\mathcal{MG}_{\bowtie}(S,M)$. Denote the two punctures by $z_1,z_2$, and let $\phi$ be the map from $G$ to $\mathcal{MG}_{\bowtie}(S,M)$ defined on the generators by \[\begin{array} {rcl} \phi(\tau) &=&(r\,,\,\{z_1,z_2\})\\ \phi(\sigma)&=& \left\{\begin{array}{ll} (s,\emptyset ) &\textup{if $n$ odd}\\ (s,z_1) &\textup{if $n$ even} \end{array}\right.\\ \phi(\rho_1) &=&(1,z_1)\\ \phi(\rho_n)&=& (1,z_2) \end{array} \] and extended to $G$ by the homomorphism property. One can easily check that $\phi $ preserves the relations of the group; for example, if $n$ is even then $$\phi(\sigma^2)=(s,z_1)^2=(s^2,z_1\ominus s(z_1))=(s^2, \{z_1,z_2\}) $$ which is equal to $$\phi(\tau^{n-3})=(r,\{z_1,z_2\})^{n-3} =(r^{n-3},\{z_1,z_2\}),$$ where the last identity follows from Corollary \ref{cor 37} (4), since $n$ is even. To show that $\phi $ is injective, suppose that $x=\tau^a\sigma^b\rho_1^c\rho_n^d\in \textup{Ker}\, \phi$ for some integers $a,b,c,d$. Then $(1,\emptyset)=\phi(x)$, and by computing the first coordinate of this equation, we get $1=r^a s^b $.
Consequently, since $\tau$ and $\sigma$ satisfy the same relations as $r$ and $s$, we have $1=\tau^a \sigma^b$, and therefore $x=\rho_1^c\rho_n^d$. Thus $$(1,\emptyset)=\phi(x)=(1,\{z_1\}^c\ominus\{z_2\}^d),$$ which implies that $c$ and $d$ are even, by Corollary \ref{cor 37} (4), and thus $x=1$. This shows that $\phi$ is injective. It remains to show that $\phi$ is surjective. Let $x=(r^as^b,\mathcal{P})\in \mathcal{MG}_{\bowtie}(S,M)$. Then $\phi(\tau^a\sigma^b)=(r^as^b,\overline{\mathcal{P}})$, for some subset $\overline{\mathcal{P}}\subset \{z_1,z_2\}$, and multiplying with $\phi(\rho_1)$ or $\phi(\rho_n)$ if necessary, we see that $x$ lies in the image of $\phi$. This shows that $\phi $ is surjective, and thus $\phi $ is an isomorphism. \end{proof} \begin{example} We end this section with another look at Example \ref{ex torus}. The quiver $$\xymatrix@R15pt@C10pt{x_1\ar@<2pt>[rd]\ar@<-2pt>[rd] &&x_2 \ar@<2pt>[ll]\ar@<-2pt>[ll]\\ &x_3\ar@<2pt>[ru]\ar@<-2pt>[ru] }$$ corresponds to a triangulation of the torus with one puncture, which can be seen easily using the plane as a universal cover and the triangulation shown on the left hand side of Figure \ref{fig torus}. The edges are labeled $1,2,3$ instead of $x_1,x_2,x_3$ for brevity. Edges that have the same label are to be identified, and each point in Figure \ref{fig torus} is identified with the puncture. Thus in the triangulation shown on the left hand side of Figure \ref{fig torus} there are exactly two triangles, both formed by edges $1,2,3$ and both having the same orientation. The picture in the middle of Figure \ref{fig torus} shows the triangulation corresponding to the seed obtained from the initial seed by mutating in $x_1 $, while the image on the right hand side of Figure \ref{fig torus} shows the triangulation corresponding to the seed obtained by mutating once more, this time in $x_2$.
Geometrically, one can deform the picture on the left into the picture on the right by dragging the right end upwards and the left end downwards. In the torus, this ``deformation'' corresponds to two Dehn twists along the closed curve labeled $3$. On the other hand, there is no orientation-preserving homeomorphism transforming the picture on the left into the one in the middle. Thus this mutation is not given by a mapping class. Of course, we could have deduced this simply from the observation in Example \ref{ex torus} that this mutation corresponds to an inverse cluster automorphism and not to a direct one. \begin{figure} \includegraphics{figtorus.eps} \caption{Three triangulations of the torus}\label{fig torus} \end{figure} \end{example} \end{section} \begin{section}{Finiteness of the automorphism group}\label{sect 4} In this section, we introduce the notion of \emph{automorphism finite} cluster algebras and prove that for acyclic cluster algebras and for cluster algebras from surfaces it is equivalent to the notion of finite type cluster algebras. We say that a cluster algebra $\mathcal{A}$ is \emph{automorphism finite} if its automorphism group ${\rm Aut} \,\cal A$ is finite. \begin{theorem}\label{finite} Let $\mathcal{A}$ be a cluster algebra arising from an acyclic quiver or from a surface. Then $\mathcal{A}$ is automorphism finite if and only if $\mathcal{A}$ is of Dynkin type. \end{theorem} \begin{proof} Sufficiency follows from Table \ref{table}. To prove necessity, suppose first that $\mathcal{A}$ arises from an acyclic quiver $Q$. By Theorem \ref{theorem_main}, $\textup{Aut}^+\mathcal{A}$ is isomorphic to the quotient of the group of automorphisms of the transjective component $\Gamma_{tr}$ of the Auslander-Reiten quiver of the cluster category $\mathcal{C}_Q$ modulo the stabiliser of the points in $\Gamma_{tr}$.
If $Q$ is not of Dynkin type, then the Auslander-Reiten translation induces an element of $\textup{Aut}(\Gamma_{tr})$ of infinite order, acting freely on the points of $\Gamma_{tr}$. Thus ${\rm Aut} \,\cal A$ is infinite if $Q$ is acyclic and not Dynkin. Suppose now that $\mathcal{A}$ arises from a surface $(S,M)$. By Lemma \ref{mgg}, the mapping class group $\mathcal{M}od\,S$ of the surface $S$ is a subgroup of $\mathcal{MG}(S,M)$, which in turn is isomorphic to a subgroup of ${\rm Aut} \,\cal A$, by Theorem \ref{theorem mg}. So in order to show that ${\rm Aut} \,\cal A$ is infinite, it suffices to find an infinite subgroup of $\mathcal{M}od\,S$. By Remark \ref{rem mg1}, there exists a Dehn twist which generates an infinite cyclic subgroup of $\mathcal{M}od\,S$ if $S$ has genus at least one or $S$ has genus zero and two or more boundary components. There remain the cases where $S$ is a disc or a sphere. If $S$ is a disc with $p\ge 2$ punctures, then the braid group $B_p$ is an infinite subgroup of the mapping class group of $S$. In the cases where $p=0$ or $1$, we have that $\mathcal{A}$ is of Dynkin type $\mathbb{A}$ or $\mathbb{D}$, respectively. Finally, if $S$ is a sphere with $p$ punctures, then $p\ge 4$ by our assumption, and it is known that the mapping class group of $S$ contains a free subgroup; see \cite[4.2]{FM}. \end{proof} \begin{remark} For a sphere with 3 punctures, the mapping class group is $S_3$, which is a finite group. However, the sphere with 3 or fewer punctures is excluded in the construction of cluster algebras from surfaces in \cite{FST}. \end{remark} \end{section}
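As a sanity check on the symmetric-difference bookkeeping used in the proof of the presentation of $\mathcal{MG}_{\bowtie}(S,M)$ given earlier, the following sketch can be run mechanically. It is a hypothetical representation that tracks only the induced permutation of the two punctures and the tagged subset, with the composition rule $(f,\mathcal{P})\,(g,\mathcal{Q})=(fg,\,\mathcal{P}\ominus f(\mathcal{Q}))$ read off from the computation of $\phi(\sigma^2)$ in that proof; the $r$-power part of a mapping class is not modelled, since it acts trivially on the punctures.

```python
# Hypothetical sketch: model only the puncture permutation and the tagged
# subset of punctures.  s swaps the two punctures; powers of r act trivially
# on the punctures and are therefore not tracked here.

def act(g, P):
    """Apply a puncture permutation (given as a dict) to a set of punctures."""
    return frozenset(g[z] for z in P)

def mult(x, y):
    """(f, P) * (g, Q) = (f g, P sym-diff f(Q)), as in the proof."""
    (f, P), (g, Q) = x, y
    fg = {z: f[g[z]] for z in g}
    return fg, P ^ act(f, Q)

SWAP = {'z1': 'z2', 'z2': 'z1'}          # action of s on the punctures
phi_sigma = (SWAP, frozenset({'z1'}))    # phi(sigma) for n even

perm, tagged = mult(phi_sigma, phi_sigma)
print(tagged == frozenset({'z1', 'z2'}))  # True: squaring tags both punctures
```

This reproduces the identity $\phi(\sigma^2)=(s^2,\{z_1,z_2\})$ used in checking the relations.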
\section{Introduction} Double-degenerate (DD) mergers are an alternative to the single-degenerate progenitor scenario for type Ia supernovae (SN) that has recently received a surge in attention \citep[e.g.][]{Saio2004,YoonDDSPH}. In this process, two white dwarfs (WD) merge, pushing one of them over the Chandrasekhar limit, which causes it to contract and finally explode. One of the main advantages of this model is that the predicted rate of such DD merger events is consistent with the observed SN rate, in contrast to many other SN Ia progenitor models. This DD merger scenario was ruled out early on by \citet{Nomoto}. They pointed out that a CO WD accreting at rates higher than $\sim10^{-6}M_{\sun}\,\mathrm{yr}^{-1}$ would undergo ``accretion-induced collapse'', turning it into an oxygen/magnesium/neon WD. When pushed over the Chandrasekhar limit, this WD would then be able to cool sufficiently through neutrino emission to avoid a type Ia SN, forming a neutron star instead. However, Nomoto \& Kondo's study assumed constant accretion rates, and did not account for a complex rotation profile. Promising new results have shed new light on their paradigm, opening up small windows of opportunity for SN Ia as a result of DD mergers \citep{YoonDDSPH}. Realistic high-precision simulations are now needed to determine whether DD mergers are indeed a viable path to SN Ia. Our goal is to conduct a rigorous verification and validation process and produce high-precision SPH simulations to answer these questions, using a modified version of the SNSPH code \citep{SNSPH,DiehlHYDEF07}. In this work, we put particular emphasis on the importance of appropriate initial conditions and the effects of shocks in the equation of state. \section{The Importance of Initial Conditions} We use the self-consistent field method to set up our initial conditions. This method was first developed by \citet{HachisuDDsetup} and employed by \citet{TohlineDD1}, \citet{TohlineDD2} and \citet{TohlineDD3}.
It iteratively solves the equilibrium configuration for unequal-mass close binaries in co-rotation. Its biggest advantage is that the initial conditions are in perfect equilibrium, imposing very few artifacts in the simulations due to an imprecise initial setup. Its main drawback at the moment is that one is bound to choose a rather simple equation of state to produce polytropes, though work is underway to generalize this method for a more realistic equation of state. In order to be able to model a rather gentle Roche-lobe overflow correctly with a particle-based method like SPH, we have developed a new SPH particle setup method that allows us to increase the resolution in the outer layers, while keeping an optimal distribution of particles. This new setup method is based on weighted Voronoi tessellations (WVT), allows an arbitrary spatial configuration without imposing a lattice geometry, and minimizes particle noise and density fluctuations \citep{DiehlWVTSPH}. Figure \ref{f.wvtsetup} shows an example setup for a $q=0.4$ mass ratio binary. The accretor is set up with a constant particle density on the left, while the donor on the right (filling 99.7\% of its Roche lobe) has an 8$\times$ higher resolution in its outer layers. \begin{figure} \hfil\includegraphics[width=0.75\textwidth]{setup}\hfil \caption{Initial condition setup for a $q=1.3$ mass ratio white dwarf binary system.\label{f.wvtsetup}} \end{figure} We also note a strong difference in behavior for corotating vs. non-corotating systems. Non-corotating systems tend to transfer a significant fraction of the orbital angular momentum into spinning up the stars. This results in the orbit shrinking, bringing the donor into deeper Roche-lobe contact and artificially shortening the merger time scale. For more details, refer to \citet{FryerHYDEF07}. \section{The Importance of the Equation of State: Shocks vs.
No Shocks} We have conducted various runs with different equations of state but otherwise identical initial conditions to determine the effect of the equation of state on the dynamics of the interaction. Figure \ref{f.eos} shows snapshots of two simulations with different equations of state at the same time in the simulation. The left side shows a polytropic equation of state, such that the pressure is a simple function of the density, $P=k\,\rho^\Gamma$. During the simulations, this essentially results in the absence of shocks, since the entropy of the gas is fixed on the adiabat described by the constants $k_D$ and $k_A$ for donor and accretor. This setup is identical to the $q=1.3$ mass-ratio setup of \citet{TohlineDD2}. \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{Q13_t183} \includegraphics[width=0.4\textwidth]{Q13i_t183} \end{center} \caption{Comparison of the same time step of two otherwise identical runs for an initial mass ratio of $q=1.3$, only differing in their equations of state. SPH particles are colored according to their density, ranging logarithmically from $10^{-4}$ to $10^{0}$. The left side uses the polytropic equation of state, which keeps entropy constant during the accretion process. The right side uses an ideal gas equation of state to include the effects of shocks. The donor (top) material now gets heavily shocked as it hits the accretor (bottom), which leads to the buildup of a halo that engulfs both stars. Due to our new setup method, we resolve the accretion stream with well over a thousand SPH particles on average. Also note that the binary now loses significant amounts of mass on the backside of both white dwarfs. \label{f.eos}} \end{figure} The right side of Figure \ref{f.eos} shows a run with an ideal gas equation of state instead. This simulation now includes shocks, which dramatically changes the dynamics of the simulation.
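The distinction between the two pressure laws can be summarized in a short sketch. The constants $k$, $\Gamma$ and the specific internal energy $u$ below are illustrative placeholders, not values from the simulations:

```python
# Minimal sketch of the two equations of state compared in the text.
# k and gamma are illustrative constants, not the simulation values.

def pressure_polytropic(rho, k=1.0, gamma=5.0 / 3.0):
    """P = k * rho**Gamma: entropy is fixed on a single adiabat, so
    shock heating cannot occur -- kinetic energy is never thermalized."""
    return k * rho ** gamma

def pressure_ideal_gas(rho, u, gamma=5.0 / 3.0):
    """P = (gamma - 1) * rho * u: pressure tracks the specific internal
    energy u, which shocks can raise, allowing a hot halo to form."""
    return (gamma - 1.0) * rho * u
```

With the polytropic law the pressure is a function of density alone, so shocked and unshocked gas of the same density are indistinguishable; the ideal-gas law lets shock-heated gas develop the high pressure that builds the halo seen on the right of Figure \ref{f.eos}.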
Gas accreting from the donor (top) is strongly shocked when it hits the surface of the accretor (bottom). The major difference between the two runs is that the ideal gas equation of state builds up a hot halo of shocked material around the accretor which quickly starts to engulf the binary, essentially forming a common envelope system with different dynamics. Also note how the shocked gas in the ideal gas run is blown off the backside of both stars. This fundamentally changes the dynamics of the merger, since the expelled mass carries angular momentum outward with it. The next section explores this shocked run in more detail. Our goal is to produce a quantitative comparison with the grid-based simulations of \citet{TohlineDD2} and \citet{TohlineDD3}. \section{Case Study: A Merger with a $q=1.3$ Mass Ratio with Shocks} \begin{figure} \begin{center} \includegraphics[width=0.3\textwidth]{Q13i_t280} \includegraphics[width=0.3\textwidth]{Q13i_t284} \includegraphics[width=0.3\textwidth]{Q13i_t288} \includegraphics[width=0.3\textwidth]{Q13i_t292} \includegraphics[width=0.3\textwidth]{Q13i_t297} \includegraphics[width=0.3\textwidth]{Q13i_t475} \end{center} \caption{SPH simulation of the $q=1.3$ merger with an ideal gas equation of state. The figure shows a time sequence starting at the onset of the merging process after about $9.5$ orbits. Note the importance of the strong shocks. The last panel shows the core of the relaxed merger remnant ($\sim16$ orbital periods).\label{f.Q13}} \end{figure} Figure \ref{f.Q13} shows the time evolution of a double-degenerate merger (rotating anti-clockwise) between two polytropes with a mass ratio of $q=1.3$. This sequence shows the crucial 1.5 orbits in which the most dynamic phase of the actual merger takes place (starting at around 9.5 orbits). This run assumes an ideal gas equation of state and includes shocks, which can easily be seen as strong density discontinuities in the snapshots.
The last panel on the lower right shows the merger remnant at a much later stage. Note that the remnant has settled into an almost spherically symmetric configuration and is differentially rotating, with a fast-rotating core and a hot envelope. At the end of our simulation, the halo extends to around 100 solar radii and is still expanding. However, we do emphasize that the actual behavior may very well differ quantitatively when a realistic equation of state is employed. \section{Conclusions and Outlook} We are on the path to producing high-precision 3D SPH simulations of double-degenerate mergers with various mass ratios. We are carrying out a detailed verification (code comparison with grid codes, and a numerical convergence study) and validation (comparison to R Coronae Borealis stars) effort. Particular emphasis is put on the initial condition setup and the choice of the equation of state, both of which we find to strongly affect the dynamics of the merger. Our goal is to implement more realistic equations of state, to use a stellar evolution model, and to follow dynamically important nuclear reactions and their energy input in the simulations. As part of the {\it NuGrid} collaboration\footnote{\texttt{http://forum.astro.keele.ac.uk:8080/nugrid}}, we will also use the new post-processing tool \texttt{tppnp} to follow the evolution of SPH particles with a complete nuclear network, and use this output to validate our simulations with abundance measurements observed in R Coronae Borealis stars \citep{ClaytonHdC2}. The ultimate goal of this project is to find out whether DD mergers are viable paths to type Ia supernovae and R Coronae Borealis stars. \bibliographystyle{apj}
\section*{Acknowledgements} \noindent We would like to thank the LHCb RTA team for supporting this publication, and in particular Vladimir Gligorov for the review and Manuel Schiller for the code optimisation. We express our gratitude to our colleagues in the CERN accelerator departments for the excellent performance of the LHC. We thank the technical and administrative staff at the LHCb institutes. We acknowledge support from CERN and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); MOST and NSFC (China); CNRS/IN2P3 (France); BMBF, DFG and MPG (Germany); INFN (Italy); NWO (Netherlands); MNiSW and NCN (Poland); MEN/IFA (Romania); MSHE (Russia); MinECo (Spain); SNSF and SER (Switzerland); NASU (Ukraine); STFC (United Kingdom); DOE NP and NSF (USA). We acknowledge the computing resources that are provided by CERN, IN2P3 (France), KIT and DESY (Germany), INFN (Italy), SURF (Netherlands), PIC (Spain), GridPP (United Kingdom), RRCKI, Yandex LLC, HPC facilities at NRU HSE (Russia), CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil), PL-GRID (Poland) and OSC (USA). We are indebted to the communities behind the multiple open-source software packages on which we depend. Individual groups or members have received support from AvH Foundation (Germany); EPLANET, Marie Sk\l{}odowska-Curie Actions and ERC (European Union); ANR, Labex P2IO and OCEVU, and R\'{e}gion Auvergne-Rh\^{o}ne-Alpes (France); Key Research Program of Frontier Sciences of CAS, CAS PIFI, and the Thousand Talents Program (China); Basic Research Program of the NRU HSE and Yandex LLC (Russia); GVA, XuntaGal and GENCAT (Spain); the Royal Society and the Leverhulme Trust (United Kingdom). 
\section{Correlated \texorpdfstring{$\chi^2$}{chi2}} \label{sec:newchi2} The misidentification of charged pions and kaons as muons has an almost irreducible component due to decays in flight, together with a combinatorial component that is relevant especially at $p<10\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$, where the hit coincidence is less stringent. The present muon identification algorithm was optimised for a low-occupancy scenario, without prioritising the rejection of the combinatorial background. The higher luminosity of Run 3 will instead require this background to be suppressed more effectively, especially in the central detector regions where the occupancies are higher. An obvious limitation of the present approach based on the $D^2$ variable is that it includes neither the information from the multiple scattering experienced by charged particles while traversing the calorimeters and the iron absorbers, nor the correlation between the hit positions across the muon system. The importance of taking into account these correlations is evident in \Figref{fig:muonDLL}, where two very different hit combinations are shown, yet yielding a similar \texttt{MuonDLL} value. On the left, a random combination of hits scattered around the track extrapolation is shown, receiving contributions from uncrossed logical pads, indicated by the larger error bars. Such events are more affected by electronic noise and spillover hits. On the right-hand side, a clear pattern of hits is visible, which are displaced with respect to the extrapolated track due to multiple scattering. \begin{figure}[tb!]
\centering \includegraphics[width=0.99\textwidth]{figs/muonDLL} \caption{Two different combinations of muon hits having a similar value of \texttt{MuonDLL}: a combinatorial background event (left) and a clear muon pattern (right).} \label{fig:muonDLL} \end{figure} These two topologies can be discriminated by using a $\chi^2$ variable, expressed as \begin{equation}\label{eq:chi2corr} \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace = \delta\overrightarrow{x}^T \text{V}_x^{-1} \delta\overrightarrow{x} + \delta\overrightarrow{y}^T \text{V}_y^{-1} \delta\overrightarrow{y}, \end{equation} where $\delta\overrightarrow{x}$ and $\delta\overrightarrow{y}$ are the distances, in the $x$ and $y$ directions, between the track extrapolation points and the closest hit positions, with indices running over the stations M2 to M5. The covariance matrices $\text{V}_x$ and $\text{V}_y$ both have a diagonal contribution from the detector resolution and an off-diagonal contribution from the multiple scattering (MS). The diagonal terms are of the form \begin{equation}\label{eq:Var_RES} \text{V}^{\text{RES}}_{jj} = \left( \text{pad}^j_{x,y}/\sqrt{12} \right)^2, \end{equation} where the pad size along $x$ or $y$ corresponding to the muon hit in the given station is used. The off-diagonal terms, accounting for MS, are modelled as \begin{equation}\label{eq:Var_MS} \text{V}^{\text{MS}}_{jk} = \sum_{z_i < z_j,z_k} (z_j - z_i)(z_k - z_i) \sigma_{\text{MS},i}^2, \end{equation} where $z_{j,k}$ represent the coordinates of stations M2 to M5 along the beam axis, $z_i$ represents the coordinates of the main absorbers, namely the calorimeters and the muon iron filters, as listed in \Tabref{tab:z/X0}, and $\sigma_{\text{MS},i}$ represents the MS angular deviation.
This term takes the usual expression~\cite{PDG2017} \begin{equation} \label{eq:ms} \sigma_{\text{MS},i} = \frac{13.6\text{ MeV}}{\beta c p} \sqrt{\Delta z_i/X_0}, \end{equation} where $p$ and $\beta c$ are the momentum and the velocity of the incident particle, respectively, and $\Delta z_i/X_0$ is the thickness of the absorber at the given position $z_i$ in units of radiation length, also listed in \Tabref{tab:z/X0}. \begin{table}[htb!] \centering \begin{tabular}{c|cc} \toprule Absorber & $z$ position (m) & $\Delta z_i/X_0$ \\ \midrule ECAL\xspace & 12.8 & 25 \\ HCAL\xspace & 14.3 & 53 \\ Muon filter 1 & 15.8 & 47.5 \\ Muon filter 2 & 17.1 & 47.5 \\ Muon filter 3 & 18.3 & 47.5 \\ \bottomrule \end{tabular} \caption{Position along the beam axis and thickness in units of radiation length for the main scattering media contributing to the multiple scattering experienced by particles reaching the muon detector.} \label{tab:z/X0} \end{table} The probability for a muon to penetrate the iron absorbers and reach a given muon station depends on its momentum. In particular, below $6\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ the probability to reach the M4 and M5 stations can be substantially smaller than one, so that hits falling within the FOI of the track are in this case often due to accidental background. For this reason, in that momentum interval only the hits in the M2 and M3 stations are included in the \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace computation. The performance of the \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace variable is evaluated on muons and protons from data control samples collected in 2016.
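Equations (\ref{eq:chi2corr})--(\ref{eq:ms}) translate directly into code. The sketch below is a hypothetical implementation: the station $z$ positions, pad sizes and test track are illustrative placeholders, while the absorber positions and thicknesses follow \Tabref{tab:z/X0}; residuals and pad sizes are taken in metres and the momentum in MeV$/c$.

```python
import numpy as np

# Absorber z positions (m) and thicknesses in radiation lengths, from the table.
ABSORBERS = [(12.8, 25.0), (14.3, 53.0), (15.8, 47.5), (17.1, 47.5), (18.3, 47.5)]
# Illustrative z positions (m) of stations M2-M5 (assumed, not actual geometry).
Z_STATIONS = [15.2, 16.4, 17.6, 18.8]

def covariance(pad_sizes, p_mev, beta=1.0):
    """V = V_RES (diagonal pad-resolution term) + V_MS (correlated MS term)."""
    n = len(Z_STATIONS)
    v = np.diag([(pad / np.sqrt(12.0)) ** 2 for pad in pad_sizes])
    for j in range(n):
        for k in range(n):
            for z_i, dz_over_x0 in ABSORBERS:
                # Only absorbers upstream of both stations contribute.
                if z_i < min(Z_STATIONS[j], Z_STATIONS[k]):
                    sigma_ms = 13.6 / (beta * p_mev) * np.sqrt(dz_over_x0)
                    v[j, k] += ((Z_STATIONS[j] - z_i) * (Z_STATIONS[k] - z_i)
                                * sigma_ms ** 2)
    return v

def chi2_corr(dx, pad_x, dy, pad_y, p_mev):
    """Correlated chi^2 of Eq. (1) for one track."""
    dx, dy = np.asarray(dx, float), np.asarray(dy, float)
    vx, vy = covariance(pad_x, p_mev), covariance(pad_y, p_mev)
    return float(dx @ np.linalg.solve(vx, dx) + dy @ np.linalg.solve(vy, dy))
```

Since the MS angle scales as $1/p$, the off-diagonal terms dominate at low momentum, which is precisely where the correlated treatment gains over the uncorrelated $D^2$ approach.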
An abundant source of muons is provided by $J/\psi \ensuremath{\rightarrow}\xspace \mu^+\mu^-$ decays: by requiring the reconstructed $J/\psi$ to have a large flight distance significance and good decay vertex quality, most of the combinatorial background from tracks originating from the primary vertex is removed, and the sample is enriched in $B \rightarrow J/\psi X$ candidates. To further reduce the background, one of the decay tracks, the \emph{tag} muon, is required to be positively identified in the muon detector; the other track, the \emph{probe} muon, is unbiased with respect to particle identification and trigger requirements and is therefore used to measure the algorithm performance. Protons are selected from $\Lambda \ensuremath{\rightarrow}\xspace p \pi^-$ decays, chosen with vertex quality criteria and requiring detachment of the decay vertex from the primary one. In addition, the invariant mass obtained by assigning the $\pi$ mass to the two daughters is required to be outside the nominal $K^0_S$ mass window. Examples of mass spectra for muon and proton calibration samples are shown in \Figref{fig:calib_samples}. \begin{figure}[tb!] \centering \includegraphics[width=0.47\textwidth]{figs/jpsimumu.pdf} \includegraphics[width=0.47\textwidth]{figs/lambdappi.pdf} \caption{Typical invariant mass distributions for $J/\psi\ensuremath{\rightarrow}\xspace\mu^+\mu^-$ (left) and $\Lambda\ensuremath{\rightarrow}\xspace p\pi^-$ (right) calibration samples. The superimposed fit (red line) is composed of a signal (dashed blue) and background (dash-dotted green) component~\cite{Lupton:2134057}.} \label{fig:calib_samples} \end{figure} For both samples, the residual background contribution is subtracted by using the \mbox{\em sPlot}\xspace method~\cite{Pivk:2004ty}. To perform unbiased studies, the muon and proton samples have been weighted in order to equalize their momentum, transverse momentum, and track multiplicity spectra.
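The equalization of the kinematic spectra can be sketched as a histogram-ratio reweighting. The snippet below is a hypothetical one-dimensional version (the actual procedure weights simultaneously in momentum, transverse momentum and track multiplicity, and the binning here is illustrative):

```python
import numpy as np

# Hypothetical 1D sketch: each source (e.g. proton) candidate receives the
# target/source histogram ratio of its bin, equalizing the two spectra.

def spectrum_weights(target, source, bins=20):
    """Per-entry weights mapping the source spectrum onto the target one."""
    h_target, edges = np.histogram(target, bins=bins, density=True)
    h_source, _ = np.histogram(source, bins=edges, density=True)
    # Guard against empty source bins.
    ratio = np.divide(h_target, h_source,
                      out=np.zeros_like(h_target), where=h_source > 0)
    idx = np.clip(np.digitize(source, edges) - 1, 0, len(ratio) - 1)
    return ratio[idx]
```

Applying such weights to the proton sample makes any residual difference in the identification variables attributable to the particle species rather than to the kinematics.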
In addition, since the main challenge for Run 3 is the fivefold luminosity increase with respect to Run 2, a weighting procedure that places more emphasis on high-multiplicity events is applied to each calibration sample. Since there is not enough data to accurately emulate the upgrade conditions, the samples have been weighted so as to reproduce an occupancy spectrum that lies between the two actual running conditions. The resulting \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace spectrum for muons and protons satisfying the \texttt{IsMuon} requirement is shown in \Figref{fig:chi2cor}, demonstrating a good separation between signal and background. \begin{figure}[tb!] \centering \includegraphics[height=0.5\textwidth]{figs/chi2_closest_P_adjusted} \caption{Spectrum of the \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace, normalised by the degrees of freedom, for muons and protons as evaluated on 2016 calibration samples.} \label{fig:chi2cor} \end{figure} A quantitative comparison between the performance of the \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace and \texttt{MuonDLL} variables is shown in \Figref{fig:run_2_chi2_proton}, where the proton rejection as a function of the muon efficiency (ROC curve in the following) is displayed for tracks satisfying the \texttt{IsMuon} requirement. The ROCs are shown in different momentum and transverse momentum intervals, which make it possible to probe the response of the muon identification algorithms in different regions of the detector and in different momentum regimes. \begin{figure}[tb!]
\raggedright \includegraphics[width=0.32\textwidth]{figs/Fig_1_Run2.pdf} \includegraphics[width=0.32\textwidth]{figs/legend_Run2.pdf}\\ \includegraphics[width=0.32\textwidth]{figs/Fig_2_Run2.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_6_Run2.pdf}\\ \includegraphics[width=0.32\textwidth]{figs/Fig_3_Run2.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_7_Run2.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_10_Run2.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_4_Run2.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_8_Run2.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_11_Run2.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_5_Run2.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_9_Run2.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_12_Run2.pdf} \caption{Proton rejection as a function of muon efficiency for tracks satisfying \texttt{IsMuon} obtained with the \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace (blue) and \texttt{MuonDLL} (black) variables on 2016 calibration data. Low momentum bins, which are not covered by the calibration samples, are not shown.} \label{fig:run_2_chi2_proton} \end{figure} The performance of the \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace variable is clearly better than that of the \texttt{MuonDLL} in all regions of the phase space, and especially at low momenta. In particular, at a muon efficiency of $\sim 98\%$, which is a good working point for efficient trigger selections, the gain in background rejection is a factor of $\sim 1.4$ in the region $p>10\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$, $\mbox{$p_{\mathrm{ T}}$}\xspace < 2\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$, and exceeds a factor of $2$ in the rest of the phase space. \subsection{Performance in HLT1\xspace} \label{ssec:perf} As discussed in Sec.~\ref{sec:Introduction}, for the HLT1\xspace in Run 3 it will be crucial to guarantee a high efficiency for muons and a fast execution time of the algorithms.
Moreover, a tighter rejection against combinatorial background with respect to the present \texttt{IsMuon} selection will certainly be needed. Given the good performance of the \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace variable in rejecting protons, which constitute pure combinatorial background to the muon detector, it is interesting to provide rejection estimates on trigger-unbiased events, which are mostly populated by pions, selected from a Run 2 data sample without any trigger requirement. As a preliminary selection for this benchmark, the events are filtered by requiring at least one track to satisfy \texttt{IsMuon} and the cuts $p_T>800\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace$ and $\text{IP}\chi^2>35$~\footnote{The impact parameter $\chi^2$, $\text{IP}\chi^2$, is defined as the difference in the primary vertex fit $\chi^2$ with and without the given track.}, which represent the main requirements of the Run 2 HLT1\xspace single muon line. The rejection is therefore computed relative to the above selection, with the \texttt{L0} trigger requirement removed, and thus represents the improvement with respect to the present HLT1\xspace. To select high-multiplicity events, only those having at least $3$ primary vertices (nPVs) are used, whereas average Run 2 events have one primary vertex. This study is done in three momentum intervals, $3$--$6$, $6$--$10$ and $p>10\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$, since the number of hits selected by \texttt{IsMuon} is different in each one, as described in Sec.~\ref{sec:Introduction}. In each interval, a \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace cut with $\sim 98\%$ muon efficiency, as evaluated on muon calibration data, is chosen. The results are shown in Tab.~\ref{tab:reduc} and demonstrate the effectiveness of this variable in rejecting about half of the trigger-unbiased events, with a very small efficiency loss, on top of the Run 2 HLT1\xspace muon selection.
In particular, the highest rejection is achieved for $6 < p < 10\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$, where the fraction of pion decays in flight is lower than in $3<p<6\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$, and the MS correlations provide appreciable discrimination since the momentum is not too high. \begin{table}[tb!] \centering \caption{Rejection factors of the \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace variable on trigger-unbiased events, for a muon efficiency of $\sim 98\%$. The rejection is evaluated on top of the $p_T>800\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace$, $\text{IP}\chi^2>35$, \texttt{IsMuon} and nPVs~$\geq3$ requirements.} \begin{tabular}{|c|c|} \hline Momentum range & Rejection factor \\ \hline \hline $3<p<6 \,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ & $ 1.8 $ \\ \hline $6<p<10\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ & $ 3.2 $ \\ \hline $p>10 \, \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ & $ 2.2$ \\ \hline \end{tabular} \label{tab:reduc} \end{table} Finally, the \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace execution time is tested within the HLT1\xspace upgrade sequence. Throughput tests\footnote{On nodes mounting two Intel Xeon E5-2630 v4 CPUs at $2.20$ GHz (40 threads/node).} are performed on simulated Run 3 data and show a \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace resource usage of about $0.4\%$ out of a total HLT1\xspace throughput rate of $\sim36$ MHz, to be compared with the expected data-taking rate of $30$ MHz. This result makes the \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace well suited for use in the upgraded HLT1\xspace trigger of the experiment. \section{Conclusions} \label{sec:conclusions} Two new muon identification algorithms have been developed in view of the \mbox{LHCb}\xspace Run 3 upgrade.
The first one, \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace, expands on the muon likelihood variable developed in Run 1 by including the correlation among the muon hits. The second one features a multivariate algorithm based on the CatBoost machine learning toolkit. The performance of both algorithms in terms of background rejection versus signal efficiency is characterised on 2016 proton calibration data, and in both cases is found to improve considerably on that of the muon likelihood used during Run 1 and Run 2, with the CatBoost classifier offering a slightly better performance. As far as the computational times are concerned, the \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace has been shown to be fast enough to be included in the upgrade muon trigger lines. The CatBoost algorithm, while outperforming competing state-of-the-art gradient-boosting libraries, can be computed in the HLT2\xspace, where the time constraints are less stringent. \section{Detector and simulation} \label{sec:Detector} The paragraph below can be used for the detector description. Modifications may be required in specific papers to fit within page limits, to enhance particular detector elements or to introduce acronyms used later in the text. For journals where strict word counts are applied (for example, PRL), and space is at a premium, it may be sufficient to write, as a minimum: ``The LHCb detector is a single-arm forward spectrometer covering the pseudorapidity range $2 < \eta < 5$, described in detail in Refs.~\cite{LHCb-DP-2008-001,LHCb-DP-2014-002}''. A slightly longer version could specify the most relevant sub-detectors, {\it e.g.} ``The LHCb detector~\cite{LHCb-DP-2008-001,LHCb-DP-2014-002} is a single-arm forward spectrometer covering the pseudorapidity range $2 < \eta < 5$, designed for the study of particles containing {\ensuremath{\Pb}}\xspace\ or {\ensuremath{\Pc}}\xspace\ quarks.
The detector elements that are particularly relevant to this analysis are: a silicon-strip vertex detector surrounding the $pp$ interaction region that allows {\ensuremath{\Pc}}\xspace\ and {\ensuremath{\Pb}}\xspace\ hadrons to be identified from their characteristically long flight distance; a tracking system that provides a measurement of the momentum, $p$, of charged particles; and two ring-imaging Cherenkov detectors that are able to discriminate between different species of charged hadrons.'' \begin{verbatim} In the following paragraph, references to the individual detector performance papers are marked with a * and should only be included if the analysis relies on numbers or methods described in the specific papers. Otherwise, a reference to the overall detector performance paper~\cite{LHCb-DP-2014-002} will suffice. Note also that the text defines the acronyms for primary vertex, PV, and impact parameter, IP. Remove either of those in case it is not used later on. \end{verbatim} The \mbox{LHCb}\xspace detector~\cite{LHCb-DP-2008-001,LHCb-DP-2014-002} is a single-arm forward spectrometer covering the \mbox{pseudorapidity} range $2<\eta <5$, designed for the study of particles containing {\ensuremath{\Pb}}\xspace or {\ensuremath{\Pc}}\xspace quarks. The detector includes a high-precision tracking system consisting of a silicon-strip vertex detector surrounding the $pp$ interaction region~\cite{LHCb-DP-2014-001}\verb!*!, a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about $4{\mathrm{\,Tm}}$, and three stations of silicon-strip detectors and straw drift tubes~\cite{LHCb-DP-2013-003,LHCb-DP-2017-001}\verb!*!\footnote{Cite Ref.~\cite{LHCb-DP-2013-003} for Run 1 analyses and Ref.~\cite{LHCb-DP-2017-001} if Run 2 data is used.} placed downstream of the magnet. 
The tracking system provides a measurement of the momentum, \mbox{$p$}\xspace, of charged particles with a relative uncertainty that varies from 0.5\% at low momentum to 1.0\% at 200\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace. The minimum distance of a track to a primary vertex (PV), the impact parameter (IP), is measured with a resolution of $(15+29/\mbox{$p_{\mathrm{ T}}$}\xspace)\ensuremath{{\,\upmu\mathrm{m}}}\xspace$, where \mbox{$p_{\mathrm{ T}}$}\xspace is the component of the momentum transverse to the beam, in\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace. Different types of charged hadrons are distinguished using information from two ring-imaging Cherenkov detectors~\cite{LHCb-DP-2012-003}\verb!*!. Photons, electrons and hadrons are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic and a hadronic calorimeter. Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers~\cite{LHCb-DP-2012-002}\verb!*!. The online event selection is performed by a trigger~\cite{LHCb-DP-2012-004}\verb!*!, which consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which applies a full event reconstruction. A more detailed description of the ``full event reconstruction'' could be: \begin{itemize} \item The trigger~\cite{LHCb-DP-2012-004}\verb!*! consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, in which all charged particles with $\mbox{$p_{\mathrm{ T}}$}\xspace>500\,(300)\ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace$ are reconstructed for 2011\,(2012) data. For triggers that require neutral particles, energy deposits in the electromagnetic calorimeter are analysed to reconstruct {\ensuremath{\pion^0}}\xspace and $\gamma$ candidates.
\end{itemize} The trigger description has to be specific for the analysis in question. In general, you should not attempt to describe the full trigger system. Below are a few variations from which inspiration can be taken. The first is from a hadronic analysis, and the second from an analysis with muons in the final state. In case you have to look up specifics of a certain trigger, a detailed description of the trigger conditions for Run 1 is available in Ref.~\cite{LHCb-PUB-2014-046}. {\bf Never cite this note in a PAPER or CONF-note.} \begin{itemize} \item At the hardware trigger stage, events are required to have a muon with high \mbox{$p_{\mathrm{ T}}$}\xspace or a hadron, photon or electron with high transverse energy in the calorimeters. For hadrons, the transverse energy threshold is 3.5\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace. The software trigger requires a two-, three- or four-track secondary vertex with a significant displacement from any primary $pp$ interaction vertex. At least one charged particle must have a transverse momentum $\mbox{$p_{\mathrm{ T}}$}\xspace > 1.6\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ and be inconsistent with originating from a PV. A multivariate algorithm~\cite{BBDT,LHCb-PROC-2015-018}\footnote{Ref.~\cite{LHCb-PROC-2015-018} is only for Run 2.} is used for the identification of secondary vertices consistent with the decay of a {\ensuremath{\Pb}}\xspace hadron.
\item The $\decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\kaon^{*0}}}\xspace{\ensuremath{\Pmu^+\Pmu^-}}\xspace}$ signal candidates are first required to pass the hardware trigger, which selects events containing at least one muon with transverse momentum $\mbox{$p_{\mathrm{ T}}$}\xspace>1.48\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ in the 7\ifthenelse{\boolean{inbibliography}}{\ensuremath{~T\kern -0.05em eV}}{\ensuremath{\mathrm{\,Te\kern -0.1em V}}}\xspace data or $\mbox{$p_{\mathrm{ T}}$}\xspace>1.76\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ in the 8\ifthenelse{\boolean{inbibliography}}{\ensuremath{~T\kern -0.05em eV}}{\ensuremath{\mathrm{\,Te\kern -0.1em V}}}\xspace data. In the subsequent software trigger, at least one of the final-state particles is required to have $\mbox{$p_{\mathrm{ T}}$}\xspace>1.7\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ in the 7\ifthenelse{\boolean{inbibliography}}{\ensuremath{~T\kern -0.05em eV}}{\ensuremath{\mathrm{\,Te\kern -0.1em V}}}\xspace data or $\mbox{$p_{\mathrm{ T}}$}\xspace>1.6\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ in the 8\ifthenelse{\boolean{inbibliography}}{\ensuremath{~T\kern -0.05em eV}}{\ensuremath{\mathrm{\,Te\kern -0.1em V}}}\xspace data, unless the particle is identified as a muon, in which case $\mbox{$p_{\mathrm{ T}}$}\xspace>1.0\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ is required. The final-state particles that satisfy these transverse momentum criteria are also required to have an impact parameter larger than $100\ensuremath{{\,\upmu\mathrm{m}}}\xspace$ with respect to all PVs in the event. Finally, the tracks of two or more of the final-state particles are required to form a vertex that is significantly displaced from any PV. \end{itemize} For analyses using the Turbo stream, the following paragraph may be used to describe the trigger.
\begin{itemize} \item The online event selection is performed by a trigger, which consists of a hardware stage followed by a two-level software stage. In between the two software stages, an alignment and calibration of the detector is performed in near real-time and their results are used in the trigger~\cite{LHCb-PROC-2015-011}. The same alignment and calibration information is propagated to the offline reconstruction, ensuring consistent and high-quality particle identification (PID) information between the trigger and offline software. The identical performance of the online and offline reconstruction offers the opportunity to perform physics analyses directly using candidates reconstructed in the trigger~\cite{LHCb-DP-2012-004,LHCb-DP-2016-001}, which the present analysis exploits. The storage of only the triggered candidates enables a reduction in the event size by an order of magnitude. \end{itemize} An example to describe the use of both TOS and TIS candidates: \begin{itemize} \item In the offline selection, trigger signals are associated with reconstructed particles. Selection requirements can therefore be made on the trigger selection itself and on whether the decision was due to the signal candidate, other particles produced in the $pp$ collision, or a combination of both.
\end{itemize} A good example of a description of long and downstream {\ensuremath{\kaon^0_{\mathrm{ \scriptscriptstyle S}}}}\xspace is given in Ref.~\cite{LHCb-PAPER-2014-006}: \begin{itemize} \item Decays of \decay{{\ensuremath{\kaon^0_{\mathrm{ \scriptscriptstyle S}}}}\xspace}{{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace} are reconstructed in two different categories: the first involving {\ensuremath{\kaon^0_{\mathrm{ \scriptscriptstyle S}}}}\xspace mesons that decay early enough for the pions to be reconstructed in the vertex detector; and the second containing {\ensuremath{\kaon^0_{\mathrm{ \scriptscriptstyle S}}}}\xspace that decay later such that track segments of the pions cannot be formed in the vertex detector. These categories are referred to as \emph{long} and \emph{downstream}, respectively. The long category has better mass, momentum and vertex resolution than the downstream category. \end{itemize} Before describing the simulation, explain in one sentence why simulation is needed. The following paragraph can act as inspiration, with variations according to the level of detail required and to whether mention of \mbox{\itshape e.g.}\xspace \mbox{\textsc{Photos}}\xspace is required. \begin{itemize} \item Simulation is required to model the effects of the detector acceptance and the imposed selection requirements. In the simulation, $pp$ collisions are generated using \mbox{\textsc{Pythia}}\xspace~\cite{Sjostrand:2006za,*Sjostrand:2007gs} (In case only \mbox{\textsc{Pythia}}\xspace 6 is used, remove \verb=*Sjostrand:2007gs= from this citation; if only \mbox{\textsc{Pythia}}\xspace 8 is used, then reverse the order of the papers in the citation.) with a specific \mbox{LHCb}\xspace configuration~\cite{LHCb-PROC-2010-056}. Decays of unstable particles are described by \mbox{\textsc{EvtGen}}\xspace~\cite{Lange:2001uf}, in which final-state radiation is generated using \mbox{\textsc{Photos}}\xspace~\cite{Golonka:2005pn}.
The interaction of the generated particles with the detector, and its response, are implemented using the \mbox{\textsc{Geant4}}\xspace toolkit~\cite{Allison:2006ve, *Agostinelli:2002hh} as described in Ref.~\cite{LHCb-PROC-2011-006}. \end{itemize} A quantity often used in LHCb analyses is \ensuremath{\chi^2_{\text{IP}}}\xspace. When mentioning it in a paper, the following wording could be used: ``$\ldots$\ensuremath{\chi^2_{\text{IP}}}\xspace\ with respect to any primary interaction vertex greater than X, where \ensuremath{\chi^2_{\text{IP}}}\xspace\ is defined as the difference in the vertex-fit \ensuremath{\chi^2}\xspace of a given PV reconstructed with and without the track under consideration/being considered.''\footnote{If this sentence is used to define \ensuremath{\chi^2_{\text{IP}}}\xspace\ for a composite particle instead of for a single track, replace ``track'' by ``particle'' or ``candidate''.} This definition can then be used to define the associated PV.\footnote{Known as the ``best'' PV in \mbox{\textsc{DaVinci}}\xspace. Use the word ``associated'', not ``best''.} However, \ensuremath{\chi^2_{\text{IP}}}\xspace should not be defined just to explain which PV is taken as associated. Instead one can write ``The PV that fits best to the flight direction of the {\ensuremath{\PB}}\xspace candidate is taken as the associated PV.'' Many analyses depend on boosted decision trees. It is inappropriate to use TMVA~\cite{Hocker:2007ht,*TMVA4} as the sole reference, as that is merely an implementation of the BDT algorithm. Rather it is suggested to write: ``In this paper we use a boosted decision tree~(BDT)~\cite{Breiman,AdaBoost} implemented in the TMVA toolkit~\cite{Hocker:2007ht,*TMVA4} to separate signal from background''.
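The signal/background separation that such a BDT performs can be illustrated outside TMVA as well. The sketch below is purely illustrative: it uses scikit-learn's gradient-boosted trees on two synthetic toy variables (both the library choice and the variables are assumptions for the example, not part of any LHCb analysis code).

```python
# Minimal sketch of BDT-based signal/background separation,
# standing in for the TMVA workflow described in the text.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Two toy discriminating variables per candidate (e.g. a pT-like and
# an IP-like quantity); signal and background have shifted means.
signal = rng.normal(loc=[2.0, 0.5], scale=[1.0, 0.3], size=(n, 2))
background = rng.normal(loc=[0.0, 1.5], scale=[1.0, 0.6], size=(n, 2))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = signal, 0 = background

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3)
bdt.fit(X_train, y_train)

# Fraction of test candidates classified correctly.
accuracy = bdt.score(X_test, y_test)
```

In a real analysis the classifier output would be used as a continuous discriminant (with a working point chosen from the efficiency/rejection curve) rather than as a hard accuracy figure.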
When describing the integrated luminosity of the data set, do not use expressions like ``1.0\ensuremath{\mbox{\,fb}^{-1}}\xspace of data'', but \mbox{\itshape e.g.}\xspace ``data sample corresponding to an integrated luminosity of 1.0\ensuremath{\mbox{\,fb}^{-1}}\xspace'', or ``a sample of data obtained from 3\ensuremath{\mbox{\,fb}^{-1}}\xspace of integrated luminosity''. For analyses where the periodical reversal of the magnetic field is crucial, \mbox{\itshape e.g.}\xspace in measurements of direct {\ensuremath{C\!P}}\xspace violation, the following description can be used as an example phrase: \begin{itemize} \item The magnetic field deflects oppositely charged particles in opposite directions and this can lead to detection asymmetries. Periodically reversing the magnetic field polarity throughout the data-taking almost cancels the effect. The configuration with the magnetic field pointing upwards (downwards), \mbox{\em Mag\kern -0.05em Up}\xspace (\mbox{\em MagDown}\xspace), bends positively (negatively) charged particles in the horizontal plane towards the centre of the LHC ring.\end{itemize} Only use the \mbox{\em Mag\kern -0.05em Up}\xspace, \mbox{\em MagDown}\xspace symbols if they are used extensively in tables or figures. If the momentum scaling has been applied and is relevant, add text along the lines of \begin{itemize} \item The momentum scale is calibrated using samples of $\decay{{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace}{{\ensuremath{\Pmu^+\Pmu^-}}\xspace}$ and $\decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace{\ensuremath{\kaon^+}}\xspace}$~decays collected concurrently with the~data sample used for this analysis~\cite{LHCb-PAPER-2012-048,LHCb-PAPER-2013-011}. 
The~relative accuracy of this procedure is estimated to be $3 \times 10^{-4}$ using samples of other fully reconstructed ${\ensuremath{\Pb}}\xspace$~hadrons, $\PUpsilon$~and ${\ensuremath{\kaon^0_{\mathrm{ \scriptscriptstyle S}}}}\xspace$~mesons. \end{itemize} \section{Introduction} \label{sec:Introduction} The \mbox{LHCb}\xspace experiment~\cite{LHCb-DP-2014-002} at the \mbox{LHC}\xspace is a single-arm forward spectrometer specialised in studying particles containing $b$ or $c$ quarks. Thanks to a versatile reconstruction and trigger system, the \mbox{LHCb}\xspace physics programme has been extended to electroweak, soft QCD and even heavy-ion physics. Many of the physics channels are identified by their very clean muon signatures; muon identification and triggering are therefore crucial to the success of the experiment. A brief description of the Run 2 muon detector and reconstruction techniques follows, which sets the basis for the improvements discussed later in view of Run 3. During Run 1 and Run 2, the tracking system of \mbox{LHCb}\xspace provided a measurement of the momentum $(p)$ of charged particles with a relative uncertainty that varied from 0.5\% at low momentum to 1.0\% at 200\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace. The minimum distance of a track to a primary $pp$ collision vertex (PV), the impact parameter (IP), was measured with a resolution of $(15+29/\mbox{$p_{\mathrm{ T}}$}\xspace)\,\ensuremath{{\,\upmu\mathrm{m}}}\xspace$, where \mbox{$p_{\mathrm{ T}}$}\xspace is the component of the momentum transverse to the beam, in \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace. Muons were identified and triggered by a system composed of five stations, M1--M5, of rectangular shape, placed along the beam axis as shown in \Figref{fig:muon_system}. \begin{figure}[b!]
\centering \includegraphics[width=0.6\textwidth]{figs/sidev2new} \caption{Side view of the muon system in the $y$--$z$ plane.} \label{fig:muon_system} \end{figure} Stations M2 to M5 were placed downstream of the calorimeters, and were interleaved with $80$~cm-thick iron absorbers to select penetrating muons. The M1 station was placed in front of the calorimeters and used to improve the \mbox{$p_{\mathrm{ T}}$}\xspace measurement in the trigger. Each muon station was subdivided into four regions, as shown in \Figref{fig:muon_chamber}, with different read-out schemes defining the $x$ and $y$ resolutions. The dimensions of the logical pads were chosen such that their contribution to the \mbox{$p_{\mathrm{ T}}$}\xspace resolution was approximately equal to the multiple scattering contribution~\cite{LHCb-TDR-004}. \begin{figure}[tb] \centering \includegraphics[width=0.6\textwidth]{figs/sectorsnew} \caption{Front view of one quadrant of M2 showing the four regions. The intersection of a horizontal and a vertical strip defines a logical pad. The region and channel dimensions scale by a factor of two from one region to the next.} \label{fig:muon_chamber} \end{figure} These logical pads were obtained from the crossing of horizontal and vertical strips (either cathodic pads or groups of wires), with the exception of the full M1 station and the innermost regions of stations M4 and M5, where the logical pads corresponded to physical channels on the detector and were read out directly. A schematic diagram of the trigger data flow in Run 2 is shown in \Figref{fig:trigger}. \begin{figure}[tb!]
\centering \includegraphics[width=0.4\textwidth]{figs/trigger} \caption{The LHCb trigger scheme in Run 2.} \label{fig:trigger} \end{figure} The trigger and reconstruction scheme followed three basic steps: \begin{itemize} \item A hardware trigger (L0\xspace), based on selected calorimeter and muon information, to reduce the interaction rate from 20\ensuremath{{\mathrm{ \,MHz}}}\xspace\footnote{Out of the total LHC bunch crossing rate of 40\ensuremath{{\mathrm{ \,MHz}}}\xspace, there are about 30\ensuremath{{\mathrm{ \,MHz}}}\xspace of inelastic collisions, of which around 2/3 are visible in the LHCb detector.} to 1\ensuremath{{\mathrm{ \,MHz}}}\xspace, which corresponds to the readout bandwidth of the detector. The L0\xspace muon trigger was based on the coincidence of one hit in each of the five stations, selected in a projective field of interest (FOI) defined in the $x$--$y$ plane, from which a muon standalone \mbox{$p_{\mathrm{ T}}$}\xspace reconstruction was performed with $\sim 20\%$ resolution~\cite{LHCb-TDR-004}. Candidate tracks above a \mbox{$p_{\mathrm{ T}}$}\xspace threshold of about $1.5\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ were then used to build single and dimuon topologies. \item A first software stage (HLT1\xspace) based on partial reconstruction of tracks from the spectrometer, which allowed tighter constraints to be put on the candidate \mbox{$p_{\mathrm{ T}}$}\xspace and IP. Concerning muons, a loose and efficient selection was performed, called \texttt{IsMuon}, based on the coincidence of hits in the M2 to M5 stations, combined with the information from the spectrometer.
The muon hits were selected in a FOI centred on the track extrapolation position in the muon stations: the number of hits required was two, three or four in the momentum ranges $3$--$6$, $6$--$10$ and above $10\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$, respectively, as expected from the muon penetration power in the iron absorbers~\cite{LHCb-PUB-2009-013}. \item A more refined software trigger (HLT2\xspace), exploiting the full reconstruction of the detector information to reconstruct more complex signal topologies. Concerning muons, a better discrimination than \texttt{IsMuon} was achieved by using a likelihood variable (\texttt{MuonDLL}), built upon the uncorrelated sum of the spatial residuals of the muon hits with respect to the track extrapolation position in each station~\cite{LHCb-PUB-2009-013}, defined as: \begin{equation} \label{eq:d2} D^2 = \frac{1}{N}\sum_{i=1}^N \left[ \left( \frac{x^i_{\text{closest}} - x^i_{\text{track}}}{\text{pad}^i_x} \right)^2 + \left( \frac{y^i_{\text{closest}} - y^i_{\text{track}}}{\text{pad}^i_y} \right)^2 \right], \end{equation} where the index $i$ runs over the $N$ stations containing hits inside the FOI, and the \textit{closest} coordinates represent the position of the hit which is closest to the track extrapolation point. The hit residuals were normalised to the logical pad size in the $x$ and $y$ directions, pad$_x$ and pad$_y$ respectively. The $D^2$ distribution for muons exhibits a narrow peak at 0, while hadrons satisfying the \texttt{IsMuon} criterion have a broader distribution. Using the $D^2$ spectra of muons as a signal proxy and of protons as a background proxy (pion samples are instead contaminated by real muons from decays in flight), the \texttt{MuonDLL} likelihood was defined, which measured the difference in probability for a candidate track to match the signal or background hypotheses.
With this variable applied on top of \texttt{IsMuon}, the misidentification probability for protons was kept at the 2--3 per mille level over the full momentum spectrum, while keeping the muon efficiency above 90\%. For pions, a similar performance was obtained only for momenta higher than 50\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace, with decays in flight contributing another few per mille at low momenta~\cite{LHCb-DP-2013-001}. \end{itemize} The \mbox{LHCb}\xspace detector will be upgraded for Run 3 to sustain a factor of five increase in the instantaneous luminosity, up to $2\times 10^{33}$\,cm$^{-2}$s$^{-1}$. The 1\ensuremath{{\mathrm{ \,MHz}}}\xspace readout limitation of the current detector will be removed, allowing the full event rate to be processed in software without the need for a hardware stage~\cite{LHCb-TDR-012}. For this reason, a full software trigger has been implemented, which will allow signal events to be selected with higher efficiency, with the goal of achieving an order of magnitude increase in the physics bandwidth with respect to Run 2. In preparation for Run 3, the M1 station has been removed due to the much higher occupancy which will be reached in front of the calorimeter, where the station was located (\Figref{fig:muon_system}). In addition, its main contribution, the improvement of the standalone muon \mbox{$p_{\mathrm{ T}}$}\xspace determination in the hardware L0\xspace trigger, is no longer relevant. When working on the implementation of the future software muon trigger lines, two aspects have to be taken into account: \begin{itemize} \item The need to keep a high efficiency at HLT1\xspace, with a smooth dependence on the running conditions and on the phase space, and with a fast execution time. Concerning the bandwidth, a high rejection power must be guaranteed against the combinatorial background, which is especially important at low momentum.
This background mainly originates from pion tracks extrapolated to the muon detector and paired to accidental hits in the muon stations, and is expected to increase almost linearly with the luminosity. \item The possibility of tuning highly selective cuts in order to achieve very low misidentification levels at HLT2\xspace, which is especially useful, for example, in rare-decay searches. The full information from the \mbox{LHCb}\xspace PID detectors may conveniently be used in this case, as the constraints on the execution time are less stringent. \end{itemize} These functionalities can be implemented following different approaches. In this paper we discuss a baseline strategy for the HLT1\xspace, which is an evolution of the present scheme. This assumes that tracks in the spectrometer are reconstructed upfront, and that the muon identification is applied in two steps: a first step based on \texttt{IsMuon} as it is now, plus a second step based on a correlated $\chi^2$ variable (Section~\ref{sec:newchi2}), which represents an improvement on the \texttt{MuonDLL}. At the HLT2\xspace stage, the muon identification performance is further refined by means of a multivariate classifier (Section~\ref{sec:MVAs}). \section{Layout} \begin{enumerate} \item Unnecessary blank space should be avoided, between paragraphs or around figures and tables. \item Figure and table captions should be concise and use a somewhat smaller typeface than the main text, to help distinguish them. This is achieved by inserting \verb!\small! at the beginning of the caption. (NB with the latest version of the file \verb!preamble.tex! this is automatic) Figure captions go below the figure, table captions go above the table. \item Captions and footnotes should be punctuated correctly, like normal text.
The use of too many footnotes should be avoided: typically they are used for giving commercial details of companies, or standard items like coordinate system definition or the implicit inclusion of charge-conjugate processes.\footnote{If placed at the end of a sentence, the footnote symbol normally follows the punctuation; if placed in the middle of an equation, take care to avoid any possible confusion with an index.}$^,$\footnote{The standard footnote reads: ``The inclusion of charge-conjugate processes is implied throughout.'' This may need to be modified, for example with ``except in the discussion of asymmetries.''} \item Tables should be formatted in a simple fashion, without excessive use of horizontal and vertical lines. Numbers should be vertically aligned on the decimal point and $\pm$ symbol. (\verb!\phantom{0}! may help, or defining column separators as \verb!@{\:$\pm$\:}!) See Table~\ref{tab:example} for an example. \begin{table}[t] \caption{ Background-to-signal ratio estimated in a $\pm 50\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ mass window for the prompt and long-lived backgrounds, and the minimum bias rate. 
In this table, as the comparison of numbers among columns is not critical, the value $11\pm2$ may also be typeset without the space.} \begin{center}\begin{tabular}{lr@{\:$\pm$\:}lr@{\:$\pm$\:}ll} \hline Channel & \multicolumn{2}{c}{$B_{\mathrm{pr}}/S$} & \multicolumn{2}{c}{$B_{\mathrm{LL}}/S$} & MB rate \\ \hline \decay{\Bs}{\jpsi\phi} & $ 1.6$ &$0.6$ & $0.51 $ & $ 0.08$ & $\sim 0.3$ Hz \\ \decay{\Bd}{\jpsi\Kstarz} & $ 11\phantom{.0}$ & $ 2$ & $1.5\phantom{0}$ & $ 0.1 $ & $\sim 8.1$ Hz \\ \decay{{\ensuremath{\Bu}}\xspace}{{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace{\ensuremath{\kaon^{*+}}}\xspace} & $ 1.6 $ & $ 0.2$ & $0.29 $ & $ 0.06$ & $\sim 1.4$ Hz \\ \hline \end{tabular}\end{center} \label{tab:example} \end{table} \item Figures and tables should normally be placed so that they appear on the same page as their first reference, but at the top or bottom of the page; if this is not possible, they should come as soon as possible afterwards. They must all be referred to from the text. \item If one or more equations are referenced, all equations should be numbered using parentheses as shown in Eq.~\ref{eq:CKM}, \begin{equation} \label{eq:CKM} V_{{\ensuremath{\Pu}}\xspace{\ensuremath{\Ps}}\xspace}V_{{\ensuremath{\Pu}}\xspace{\ensuremath{\Pb}}\xspace}^* + V_{{\ensuremath{\Pc}}\xspace{\ensuremath{\Ps}}\xspace}V_{{\ensuremath{\Pc}}\xspace{\ensuremath{\Pb}}\xspace}^* + V_{{\ensuremath{\Pt}}\xspace{\ensuremath{\Ps}}\xspace}V_{{\ensuremath{\Pt}}\xspace{\ensuremath{\Pb}}\xspace}^* = 0 \ . \end{equation} \item Displayed results like \begin{equation*} {\ensuremath{\mathcal{B}}}\xspace(\decay{{\ensuremath{\B^0_\squark}}\xspace}{{\ensuremath{\Pmu^+\Pmu^-}}\xspace}) < 1.5 \times 10^{-8} \text{ at 95\% CL} \end{equation*} should in general not be numbered. \item Numbered equations should be avoided in captions and footnotes. \item Displayed equations are part of the normal grammar of the text. 
This means that the equation should end in a full stop or a comma if required when reading aloud. The line after the equation should only be indented if it starts a new paragraph. \item Equations in text should be put between a single pair of \$ signs. \verb!\mbox{...}! ensures they are not split over several lines. So \mbox{$\epsilon_\text{trigger}=(93.9\pm0.2)\%$} is written as \verb!\mbox{$\epsilon_\text{trigger}=(93.9\pm0.2)\%$}! and not as \verb!$\epsilon_\text{trigger}$=(93.9$\pm$0.2)\%! which generates the oddly-spaced $\epsilon_\text{trigger}$=(93.9$\pm$0.2)\%. \item Sub-sectioning should not be excessive: sections with more than three levels of index (1.1.1) should be avoided. \item Acronyms should be defined the first time they are used, \mbox{\itshape e.g.}\xspace ``A dedicated boosted decision tree~(BDT) is designed to select doubly Cabibbo-suppressed~(DCS) decays.'' The abbreviated words should not be capitalised if they are not naturally written with capitals, \mbox{\itshape e.g.}\xspace quantum chromodynamics (QCD), impact parameter (IP), boosted decision tree (BDT). Avoid acronyms if they are used three times or fewer. A sentence should never start with an acronym, and it is better to avoid one as the last word of a sentence as well.
\end{enumerate} \section{List of all symbols} \label{sec:listofsymbols} \subsection{Experiments} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash lhcb} & \mbox{LHCb}\xspace & \texttt{\textbackslash atlas} & \mbox{ATLAS}\xspace & \texttt{\textbackslash cms} & \mbox{CMS}\xspace \\ \texttt{\textbackslash alice} & \mbox{ALICE}\xspace & \texttt{\textbackslash babar} & \mbox{BaBar}\xspace & \texttt{\textbackslash belle} & \mbox{Belle}\xspace \\ \texttt{\textbackslash belletwo} & \belletwo & \texttt{\textbackslash besiii} & \besiii & \texttt{\textbackslash cleo} & \mbox{CLEO}\xspace \\ \texttt{\textbackslash cdf} & \mbox{CDF}\xspace & \texttt{\textbackslash dzero} & \mbox{D0}\xspace & \texttt{\textbackslash aleph} & \mbox{ALEPH}\xspace \\ \texttt{\textbackslash delphi} & \mbox{DELPHI}\xspace & \texttt{\textbackslash opal} & \mbox{OPAL}\xspace & \texttt{\textbackslash lthree} & \mbox{L3}\xspace \\ \texttt{\textbackslash sld} & \mbox{SLD}\xspace & \texttt{\textbackslash cern} & \mbox{CERN}\xspace & \texttt{\textbackslash lhc} & \mbox{LHC}\xspace \\ \texttt{\textbackslash lep} & \mbox{LEP}\xspace & \texttt{\textbackslash tevatron} & Tevatron\xspace & \texttt{\textbackslash bfactories} & \bfactories \\ \texttt{\textbackslash bfactory} & \bfactory & \texttt{\textbackslash upgradeone} & \upgradeone & \texttt{\textbackslash upgradetwo} & \upgradetwo \\ \end{tabular*} \subsubsection{LHCb sub-detectors and sub-systems} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash velo} & VELO\xspace & \texttt{\textbackslash rich} & RICH\xspace & \texttt{\textbackslash richone} & RICH1\xspace \\ \texttt{\textbackslash richtwo} & RICH2\xspace & \texttt{\textbackslash ttracker} & TT\xspace & \texttt{\textbackslash 
intr} & IT\xspace \\ \texttt{\textbackslash st} & ST\xspace & \texttt{\textbackslash ot} & OT\xspace & \texttt{\textbackslash herschel} & \mbox{\textsc{HeRSCheL}}\xspace \\ \texttt{\textbackslash spd} & SPD\xspace & \texttt{\textbackslash presh} & PS\xspace & \texttt{\textbackslash ecal} & ECAL\xspace \\ \texttt{\textbackslash hcal} & HCAL\xspace & \texttt{\textbackslash MagUp} & \mbox{\em Mag\kern -0.05em Up}\xspace & \texttt{\textbackslash MagDown} & \mbox{\em MagDown}\xspace \\ \texttt{\textbackslash ode} & ODE\xspace & \texttt{\textbackslash daq} & DAQ\xspace & \texttt{\textbackslash tfc} & TFC\xspace \\ \texttt{\textbackslash ecs} & ECS\xspace & \texttt{\textbackslash lone} & L0\xspace & \texttt{\textbackslash hlt} & HLT\xspace \\ \texttt{\textbackslash hltone} & HLT1\xspace & \texttt{\textbackslash hlttwo} & HLT2\xspace & \\ \end{tabular*} \subsection{Particles} \subsubsection{Leptons} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash electron} & {\ensuremath{\Pe}}\xspace & \texttt{\textbackslash en} & \en & \texttt{\textbackslash ep} & {\ensuremath{\Pe^+}}\xspace \\ \texttt{\textbackslash epm} & \epm & \texttt{\textbackslash emp} & \emp & \texttt{\textbackslash epem} & {\ensuremath{\Pe^+\Pe^-}}\xspace \\ \texttt{\textbackslash muon} & {\ensuremath{\Pmu}}\xspace & \texttt{\textbackslash mup} & {\ensuremath{\Pmu^+}}\xspace & \texttt{\textbackslash mun} & \mun \\ \texttt{\textbackslash mupm} & {\ensuremath{\Pmu^\pm}}\xspace & \texttt{\textbackslash mump} & \mump & \texttt{\textbackslash mumu} & {\ensuremath{\Pmu^+\Pmu^-}}\xspace \\ \texttt{\textbackslash tauon} & {\ensuremath{\Ptau}}\xspace & \texttt{\textbackslash taup} & {\ensuremath{\Ptau^+}}\xspace & \texttt{\textbackslash taum} & {\ensuremath{\Ptau^-}}\xspace \\ \texttt{\textbackslash taupm} & \taupm & \texttt{\textbackslash taump} & \taump & \texttt{\textbackslash 
tautau} & {\ensuremath{\Ptau^+\Ptau^-}}\xspace \\ \texttt{\textbackslash lepton} & {\ensuremath{\ell}}\xspace & \texttt{\textbackslash ellm} & {\ensuremath{\ell^-}}\xspace & \texttt{\textbackslash ellp} & {\ensuremath{\ell^+}}\xspace \\ \texttt{\textbackslash ellell} & \ensuremath{\ell^+ \ell^-}\xspace & \texttt{\textbackslash neu} & {\ensuremath{\Pnu}}\xspace & \texttt{\textbackslash neub} & {\ensuremath{\overline{\Pnu}}}\xspace \\ \texttt{\textbackslash neue} & {\ensuremath{\neu_e}}\xspace & \texttt{\textbackslash neueb} & {\ensuremath{\neub_e}}\xspace & \texttt{\textbackslash neum} & {\ensuremath{\neu_\mu}}\xspace \\ \texttt{\textbackslash neumb} & {\ensuremath{\neub_\mu}}\xspace & \texttt{\textbackslash neut} & {\ensuremath{\neu_\tau}}\xspace & \texttt{\textbackslash neutb} & {\ensuremath{\neub_\tau}}\xspace \\ \texttt{\textbackslash neul} & {\ensuremath{\neu_\ell}}\xspace & \texttt{\textbackslash neulb} & {\ensuremath{\neub_\ell}}\xspace & \\ \end{tabular*} \subsubsection{Gauge bosons and scalars} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash g} & {\ensuremath{\Pgamma}}\xspace & \texttt{\textbackslash H} & {\ensuremath{\PH^0}}\xspace & \texttt{\textbackslash Hp} & {\ensuremath{\PH^+}}\xspace \\ \texttt{\textbackslash Hm} & {\ensuremath{\PH^-}}\xspace & \texttt{\textbackslash Hpm} & {\ensuremath{\PH^\pm}}\xspace & \texttt{\textbackslash W} & {\ensuremath{\PW}}\xspace \\ \texttt{\textbackslash Wp} & {\ensuremath{\PW^+}}\xspace & \texttt{\textbackslash Wm} & {\ensuremath{\PW^-}}\xspace & \texttt{\textbackslash Wpm} & {\ensuremath{\PW^\pm}}\xspace \\ \texttt{\textbackslash Z} & {\ensuremath{\PZ}}\xspace & \\ \end{tabular*} \subsubsection{Quarks} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} 
\texttt{\textbackslash quark} & {\ensuremath{\Pq}}\xspace & \texttt{\textbackslash quarkbar} & {\ensuremath{\overline \quark}}\xspace & \texttt{\textbackslash qqbar} & {\ensuremath{\quark\quarkbar}}\xspace \\ \texttt{\textbackslash uquark} & {\ensuremath{\Pu}}\xspace & \texttt{\textbackslash uquarkbar} & {\ensuremath{\overline \uquark}}\xspace & \texttt{\textbackslash uubar} & {\ensuremath{\uquark\uquarkbar}}\xspace \\ \texttt{\textbackslash dquark} & {\ensuremath{\Pd}}\xspace & \texttt{\textbackslash dquarkbar} & {\ensuremath{\overline \dquark}}\xspace & \texttt{\textbackslash ddbar} & {\ensuremath{\dquark\dquarkbar}}\xspace \\ \texttt{\textbackslash squark} & {\ensuremath{\Ps}}\xspace & \texttt{\textbackslash squarkbar} & {\ensuremath{\overline \squark}}\xspace & \texttt{\textbackslash ssbar} & {\ensuremath{\squark\squarkbar}}\xspace \\ \texttt{\textbackslash cquark} & {\ensuremath{\Pc}}\xspace & \texttt{\textbackslash cquarkbar} & {\ensuremath{\overline \cquark}}\xspace & \texttt{\textbackslash ccbar} & {\ensuremath{\cquark\cquarkbar}}\xspace \\ \texttt{\textbackslash bquark} & {\ensuremath{\Pb}}\xspace & \texttt{\textbackslash bquarkbar} & {\ensuremath{\overline \bquark}}\xspace & \texttt{\textbackslash bbbar} & {\ensuremath{\bquark\bquarkbar}}\xspace \\ \texttt{\textbackslash tquark} & {\ensuremath{\Pt}}\xspace & \texttt{\textbackslash tquarkbar} & {\ensuremath{\overline \tquark}}\xspace & \texttt{\textbackslash ttbar} & {\ensuremath{\tquark\tquarkbar}}\xspace \\ \end{tabular*} \subsubsection{Light mesons} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash hadron} & {\ensuremath{\Ph}}\xspace & \texttt{\textbackslash pion} & {\ensuremath{\Ppi}}\xspace & \texttt{\textbackslash piz} & {\ensuremath{\pion^0}}\xspace \\ \texttt{\textbackslash pip} & {\ensuremath{\pion^+}}\xspace & \texttt{\textbackslash pim} & 
{\ensuremath{\pion^-}}\xspace & \texttt{\textbackslash pipm} & {\ensuremath{\pion^\pm}}\xspace \\ \texttt{\textbackslash pimp} & {\ensuremath{\pion^\mp}}\xspace & \texttt{\textbackslash rhomeson} & {\ensuremath{\Prho}}\xspace & \texttt{\textbackslash rhoz} & {\ensuremath{\rhomeson^0}}\xspace \\ \texttt{\textbackslash rhop} & {\ensuremath{\rhomeson^+}}\xspace & \texttt{\textbackslash rhom} & {\ensuremath{\rhomeson^-}}\xspace & \texttt{\textbackslash rhopm} & {\ensuremath{\rhomeson^\pm}}\xspace \\ \texttt{\textbackslash rhomp} & {\ensuremath{\rhomeson^\mp}}\xspace & \texttt{\textbackslash kaon} & {\ensuremath{\PK}}\xspace & \texttt{\textbackslash Kbar} & {\kern 0.2em\overline{\kern -0.2em \PK}{}}\xspace \\ \texttt{\textbackslash Kb} & {\ensuremath{\Kbar}}\xspace & \texttt{\textbackslash KorKbar} & \kern 0.18em\optbar{\kern -0.18em K}{}\xspace & \texttt{\textbackslash Kz} & {\ensuremath{\kaon^0}}\xspace \\ \texttt{\textbackslash Kzb} & {\ensuremath{\Kbar{}^0}}\xspace & \texttt{\textbackslash Kp} & {\ensuremath{\kaon^+}}\xspace & \texttt{\textbackslash Km} & {\ensuremath{\kaon^-}}\xspace \\ \texttt{\textbackslash Kpm} & {\ensuremath{\kaon^\pm}}\xspace & \texttt{\textbackslash Kmp} & {\ensuremath{\kaon^\mp}}\xspace & \texttt{\textbackslash KS} & {\ensuremath{\kaon^0_{\mathrm{ \scriptscriptstyle S}}}}\xspace \\ \texttt{\textbackslash Vzero} & \Vzero & \texttt{\textbackslash KL} & {\ensuremath{\kaon^0_{\mathrm{ \scriptscriptstyle L}}}}\xspace & \texttt{\textbackslash Kstarz} & {\ensuremath{\kaon^{*0}}}\xspace \\ \texttt{\textbackslash Kstarzb} & {\ensuremath{\Kbar{}^{*0}}}\xspace & \texttt{\textbackslash Kstar} & {\ensuremath{\kaon^*}}\xspace & \texttt{\textbackslash Kstarb} & {\ensuremath{\Kbar{}^*}}\xspace \\ \texttt{\textbackslash Kstarp} & {\ensuremath{\kaon^{*+}}}\xspace & \texttt{\textbackslash Kstarm} & {\ensuremath{\kaon^{*-}}}\xspace & \texttt{\textbackslash Kstarpm} & {\ensuremath{\kaon^{*\pm}}}\xspace \\ \texttt{\textbackslash Kstarmp} & 
{\ensuremath{\kaon^{*\mp}}}\xspace & \texttt{\textbackslash KorKbarz} & \KorKbarz & \texttt{\textbackslash etaz} & \ensuremath{\Peta}\xspace \\ \texttt{\textbackslash etapr} & \ensuremath{\Peta^{\prime}}\xspace & \texttt{\textbackslash phiz} & \ensuremath{\Pphi}\xspace & \texttt{\textbackslash omegaz} & \ensuremath{\Pomega}\xspace \\ \end{tabular*} \subsubsection{Charmed mesons} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash Dbar} & {\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace & \texttt{\textbackslash D} & {\ensuremath{\PD}}\xspace & \texttt{\textbackslash Db} & {\ensuremath{\Dbar}}\xspace \\ \texttt{\textbackslash DorDbar} & \kern 0.18em\optbar{\kern -0.18em D}{}\xspace & \texttt{\textbackslash Dz} & {\ensuremath{\D^0}}\xspace & \texttt{\textbackslash Dzb} & {\ensuremath{\Dbar{}^0}}\xspace \\ \texttt{\textbackslash Dp} & {\ensuremath{\D^+}}\xspace & \texttt{\textbackslash Dm} & {\ensuremath{\D^-}}\xspace & \texttt{\textbackslash Dpm} & {\ensuremath{\D^\pm}}\xspace \\ \texttt{\textbackslash Dmp} & {\ensuremath{\D^\mp}}\xspace & \texttt{\textbackslash DpDm} & \DpDm & \texttt{\textbackslash Dstar} & {\ensuremath{\D^*}}\xspace \\ \texttt{\textbackslash Dstarb} & {\ensuremath{\Dbar{}^*}}\xspace & \texttt{\textbackslash Dstarz} & {\ensuremath{\D^{*0}}}\xspace & \texttt{\textbackslash Dstarzb} & {\ensuremath{\Dbar{}^{*0}}}\xspace \\ \texttt{\textbackslash theDstarz} & \theDstarz & \texttt{\textbackslash theDstarzb} & \theDstarzb & \texttt{\textbackslash Dstarp} & {\ensuremath{\D^{*+}}}\xspace \\ \texttt{\textbackslash Dstarm} & {\ensuremath{\D^{*-}}}\xspace & \texttt{\textbackslash Dstarpm} & {\ensuremath{\D^{*\pm}}}\xspace & \texttt{\textbackslash Dstarmp} & {\ensuremath{\D^{*\mp}}}\xspace \\ \texttt{\textbackslash theDstarp} & \theDstarp & \texttt{\textbackslash theDstarm} & \theDstarm & \texttt{\textbackslash theDstarpm} 
& \theDstarpm \\ \texttt{\textbackslash theDstarmp} & \theDstarmp & \texttt{\textbackslash Ds} & {\ensuremath{\D^+_\squark}}\xspace & \texttt{\textbackslash Dsp} & {\ensuremath{\D^+_\squark}}\xspace \\ \texttt{\textbackslash Dsm} & {\ensuremath{\D^-_\squark}}\xspace & \texttt{\textbackslash Dspm} & {\ensuremath{\D^{\pm}_\squark}}\xspace & \texttt{\textbackslash Dsmp} & {\ensuremath{\D^{\mp}_\squark}}\xspace \\ \texttt{\textbackslash Dss} & {\ensuremath{\D^{*+}_\squark}}\xspace & \texttt{\textbackslash Dssp} & {\ensuremath{\D^{*+}_\squark}}\xspace & \texttt{\textbackslash Dssm} & {\ensuremath{\D^{*-}_\squark}}\xspace \\ \texttt{\textbackslash Dsspm} & {\ensuremath{\D^{*\pm}_\squark}}\xspace & \texttt{\textbackslash Dssmp} & {\ensuremath{\D^{*\mp}_\squark}}\xspace & \\ \end{tabular*} \subsubsection{Beauty mesons} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash B} & {\ensuremath{\PB}}\xspace & \texttt{\textbackslash Bbar} & {\ensuremath{\kern 0.18em\overline{\kern -0.18em \PB}{}}}\xspace & \texttt{\textbackslash Bb} & {\ensuremath{\Bbar}}\xspace \\ \texttt{\textbackslash BorBbar} & \kern 0.18em\optbar{\kern -0.18em B}{}\xspace & \texttt{\textbackslash Bz} & {\ensuremath{\B^0}}\xspace & \texttt{\textbackslash Bzb} & {\ensuremath{\Bbar{}^0}}\xspace \\ \texttt{\textbackslash Bd} & {\ensuremath{\B^0}}\xspace & \texttt{\textbackslash Bdb} & {\ensuremath{\Bbar{}^0}}\xspace & \texttt{\textbackslash BdorBdbar} & \BdorBdbar \\ \texttt{\textbackslash Bu} & {\ensuremath{\B^+}}\xspace & \texttt{\textbackslash Bub} & {\ensuremath{\B^-}}\xspace & \texttt{\textbackslash Bp} & {\ensuremath{\Bu}}\xspace \\ \texttt{\textbackslash Bm} & {\ensuremath{\Bub}}\xspace & \texttt{\textbackslash Bpm} & {\ensuremath{\B^\pm}}\xspace & \texttt{\textbackslash Bmp} & {\ensuremath{\B^\mp}}\xspace \\ \texttt{\textbackslash Bs} & 
{\ensuremath{\B^0_\squark}}\xspace & \texttt{\textbackslash Bsb} & {\ensuremath{\Bbar{}^0_\squark}}\xspace & \texttt{\textbackslash BsorBsbar} & \BsorBsbar \\ \texttt{\textbackslash Bc} & {\ensuremath{\B_\cquark^+}}\xspace & \texttt{\textbackslash Bcp} & {\ensuremath{\B_\cquark^+}}\xspace & \texttt{\textbackslash Bcm} & {\ensuremath{\B_\cquark^-}}\xspace \\ \texttt{\textbackslash Bcpm} & {\ensuremath{\B_\cquark^\pm}}\xspace & \texttt{\textbackslash Bds} & \Bds & \texttt{\textbackslash Bdsb} & \Bdsb \\ \texttt{\textbackslash BdorBs} & \BdorBs & \texttt{\textbackslash BdorBsbar} & \BdorBsbar & \\ \end{tabular*} \subsubsection{Onia} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash jpsi} & {\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace & \texttt{\textbackslash psitwos} & {\ensuremath{\Ppsi{(2S)}}}\xspace & \texttt{\textbackslash psiprpr} & {\ensuremath{\Ppsi(3770)}}\xspace \\ \texttt{\textbackslash etac} & {\ensuremath{\Peta_\cquark}}\xspace & \texttt{\textbackslash chic} & {\ensuremath{\Pchi_{c}}}\xspace & \texttt{\textbackslash chiczero} & {\ensuremath{\Pchi_{\cquark 0}}}\xspace \\ \texttt{\textbackslash chicone} & {\ensuremath{\Pchi_{\cquark 1}}}\xspace & \texttt{\textbackslash chictwo} & {\ensuremath{\Pchi_{\cquark 2}}}\xspace & \texttt{\textbackslash chicJ} & \chicJ \\ \texttt{\textbackslash Upsilonres} & \Upsilonres & \texttt{\textbackslash OneS} & {\Y1S} & \texttt{\textbackslash TwoS} & {\Y2S} \\ \texttt{\textbackslash ThreeS} & {\Y3S} & \texttt{\textbackslash FourS} & {\Y4S} & \texttt{\textbackslash FiveS} & {\Y5S} \\ \texttt{\textbackslash chib} & \chib & \texttt{\textbackslash chibzero} & \chibzero & \texttt{\textbackslash chibone} & \chibone \\ \texttt{\textbackslash chibtwo} & \chibtwo & \texttt{\textbackslash chibJ} & \chibJ & \texttt{\textbackslash theX} & \theX \\ \end{tabular*} 
\subsubsection{Light baryons} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash proton} & {\ensuremath{\Pp}}\xspace & \texttt{\textbackslash antiproton} & {\ensuremath{\overline \proton}}\xspace & \texttt{\textbackslash neutron} & {\ensuremath{\Pn}}\xspace \\ \texttt{\textbackslash antineutron} & {\ensuremath{\overline \neutron}}\xspace & \texttt{\textbackslash Deltares} & {\ensuremath{\PDelta}}\xspace & \texttt{\textbackslash Deltaresbar} & {\ensuremath{\overline \Deltares}}\xspace \\ \texttt{\textbackslash Lz} & {\ensuremath{\PLambda}}\xspace & \texttt{\textbackslash Lbar} & {\ensuremath{\kern 0.1em\overline{\kern -0.1em\PLambda}}}\xspace & \texttt{\textbackslash LorLbar} & \kern 0.18em\optbar{\kern -0.18em \PLambda}{}\xspace \\ \texttt{\textbackslash Lambdares} & {\ensuremath{\PLambda}}\xspace & \texttt{\textbackslash Lambdaresbar} & {\ensuremath{\Lbar}}\xspace & \texttt{\textbackslash Sigmares} & {\ensuremath{\PSigma}}\xspace \\ \texttt{\textbackslash Sigmaz} & \Sigmaz & \texttt{\textbackslash Sigmap} & \Sigmap & \texttt{\textbackslash Sigmam} & \Sigmam \\ \texttt{\textbackslash Sigmaresbar} & {\ensuremath{\overline \Sigmares}}\xspace & \texttt{\textbackslash Sigmabarz} & \Sigmabarz & \texttt{\textbackslash Sigmabarp} & \Sigmabarp \\ \texttt{\textbackslash Sigmabarm} & \Sigmabarm & \texttt{\textbackslash Xires} & {\ensuremath{\PXi}}\xspace & \texttt{\textbackslash Xiresz} & \Xiresz \\ \texttt{\textbackslash Xiresm} & \Xiresm & \texttt{\textbackslash Xiresbar} & {\ensuremath{\overline \Xires}}\xspace & \texttt{\textbackslash Xiresbarz} & \Xiresbarz \\ \texttt{\textbackslash Xiresbarp} & \Xiresbarp & \texttt{\textbackslash Omegares} & {\ensuremath{\POmega}}\xspace & \texttt{\textbackslash Omegaresbar} & {\ensuremath{\overline \POmega}}\xspace \\ \texttt{\textbackslash Omegam} & \Omegam & \texttt{\textbackslash Omegabarp} & 
\Omegabarp & \\ \end{tabular*} \subsubsection{Charmed baryons} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash Lc} & {\ensuremath{\Lz^+_\cquark}}\xspace & \texttt{\textbackslash Lcbar} & {\ensuremath{\Lbar{}^-_\cquark}}\xspace & \texttt{\textbackslash Xic} & {\ensuremath{\Xires_\cquark}}\xspace \\ \texttt{\textbackslash Xicz} & {\ensuremath{\Xires^0_\cquark}}\xspace & \texttt{\textbackslash Xicp} & {\ensuremath{\Xires^+_\cquark}}\xspace & \texttt{\textbackslash Xicbar} & {\ensuremath{\Xiresbar{}_\cquark}}\xspace \\ \texttt{\textbackslash Xicbarz} & {\ensuremath{\Xiresbar{}_\cquark^0}}\xspace & \texttt{\textbackslash Xicbarm} & {\ensuremath{\Xiresbar{}_\cquark^-}}\xspace & \texttt{\textbackslash Omegac} & {\ensuremath{\Omegares^0_\cquark}}\xspace \\ \texttt{\textbackslash Omegacbar} & {\ensuremath{\Omegaresbar{}_\cquark^0}}\xspace & \texttt{\textbackslash Xicc} & \Xicc & \texttt{\textbackslash Xiccbar} & \Xiccbar \\ \texttt{\textbackslash Xiccp} & \Xiccp & \texttt{\textbackslash Xiccpp} & \Xiccpp & \texttt{\textbackslash Xiccbarm} & \Xiccbarm \\ \texttt{\textbackslash Xiccbarmm} & \Xiccbarmm & \texttt{\textbackslash Omegacc} & \Omegacc & \texttt{\textbackslash Omegaccbar} & \Omegaccbar \\ \texttt{\textbackslash Omegaccc} & \Omegaccc & \texttt{\textbackslash Omegacccbar} & \Omegacccbar & \\ \end{tabular*} \subsubsection{Beauty baryons} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash Lb} & {\ensuremath{\Lz^0_\bquark}}\xspace & \texttt{\textbackslash Lbbar} & {\ensuremath{\Lbar{}^0_\bquark}}\xspace & \texttt{\textbackslash Sigmab} & \Sigmab \\ \texttt{\textbackslash Sigmabp} & \Sigmabp & \texttt{\textbackslash Sigmabz} & \Sigmabz & \texttt{\textbackslash Sigmabm} & \Sigmabm \\ 
\texttt{\textbackslash Sigmabpm} & \Sigmabpm & \texttt{\textbackslash Sigmabbar} & \Sigmabbar & \texttt{\textbackslash Sigmabbarp} & \Sigmabbarp \\ \texttt{\textbackslash Sigmabbarz} & \Sigmabbarz & \texttt{\textbackslash Sigmabbarm} & \Sigmabbarm & \texttt{\textbackslash Sigmabbarpm} & \Sigmabbarpm \\ \texttt{\textbackslash Xib} & {\ensuremath{\Xires_\bquark}}\xspace & \texttt{\textbackslash Xibz} & {\ensuremath{\Xires^0_\bquark}}\xspace & \texttt{\textbackslash Xibm} & {\ensuremath{\Xires^-_\bquark}}\xspace \\ \texttt{\textbackslash Xibbar} & {\ensuremath{\Xiresbar{}_\bquark}}\xspace & \texttt{\textbackslash Xibbarz} & {\ensuremath{\Xiresbar{}_\bquark^0}}\xspace & \texttt{\textbackslash Xibbarp} & {\ensuremath{\Xiresbar{}_\bquark^+}}\xspace \\ \texttt{\textbackslash Omegab} & {\ensuremath{\Omegares^-_\bquark}}\xspace & \texttt{\textbackslash Omegabbar} & {\ensuremath{\Omegaresbar{}_\bquark^+}}\xspace & \\ \end{tabular*} \subsection{Physics symbols} \subsubsection{Decays} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash BF} & {\ensuremath{\mathcal{B}}}\xspace & \texttt{\textbackslash BR} & \BF & \texttt{\textbackslash BRvis} & {\ensuremath{\BF_{\mathrm{{vis}}}}} \\ \texttt{\textbackslash ra} & \ensuremath{\rightarrow}\xspace & \texttt{\textbackslash to} & \ensuremath{\rightarrow}\xspace & \\ \end{tabular*} \subsubsection{Lifetimes} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash tauBs} & {\ensuremath{\tau_{{\ensuremath{\B^0_\squark}}\xspace}}}\xspace & \texttt{\textbackslash tauBd} & {\ensuremath{\tau_{{\ensuremath{\B^0}}\xspace}}}\xspace & \texttt{\textbackslash tauBz} & {\ensuremath{\tau_{{\ensuremath{\B^0}}\xspace}}}\xspace \\ \texttt{\textbackslash tauBu} & 
{\ensuremath{\tau_{{\ensuremath{\Bu}}\xspace}}}\xspace & \texttt{\textbackslash tauDp} & {\ensuremath{\tau_{{\ensuremath{\D^+}}\xspace}}}\xspace & \texttt{\textbackslash tauDz} & {\ensuremath{\tau_{{\ensuremath{\D^0}}\xspace}}}\xspace \\ \texttt{\textbackslash tauL} & {\ensuremath{\tau_{\mathrm{ L}}}}\xspace & \texttt{\textbackslash tauH} & {\ensuremath{\tau_{\mathrm{ H}}}}\xspace & \\ \end{tabular*} \subsubsection{Masses} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash mBd} & {\ensuremath{m_{{\ensuremath{\B^0}}\xspace}}}\xspace & \texttt{\textbackslash mBp} & {\ensuremath{m_{{\ensuremath{\Bu}}\xspace}}}\xspace & \texttt{\textbackslash mBs} & {\ensuremath{m_{{\ensuremath{\B^0_\squark}}\xspace}}}\xspace \\ \texttt{\textbackslash mBc} & {\ensuremath{m_{{\ensuremath{\B_\cquark^+}}\xspace}}}\xspace & \texttt{\textbackslash mLb} & {\ensuremath{m_{{\ensuremath{\Lz^0_\bquark}}\xspace}}}\xspace & \\ \end{tabular*} \subsubsection{EW theory, groups} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash grpsuthree} & {\ensuremath{\mathrm{SU}(3)}}\xspace & \texttt{\textbackslash grpsutw} & {\ensuremath{\mathrm{SU}(2)}}\xspace & \texttt{\textbackslash grpuone} & {\ensuremath{\mathrm{U}(1)}}\xspace \\ \texttt{\textbackslash ssqtw} & {\ensuremath{\sin^{2}\!\theta_{\mathrm{W}}}}\xspace & \texttt{\textbackslash csqtw} & {\ensuremath{\cos^{2}\!\theta_{\mathrm{W}}}}\xspace & \texttt{\textbackslash stw} & {\ensuremath{\sin\theta_{\mathrm{W}}}}\xspace \\ \texttt{\textbackslash ctw} & {\ensuremath{\cos\theta_{\mathrm{W}}}}\xspace & \texttt{\textbackslash ssqtwef} & {\ensuremath{{\sin}^{2}\theta_{\mathrm{W}}^{\mathrm{eff}}}}\xspace & \texttt{\textbackslash csqtwef} & 
{\ensuremath{{\cos}^{2}\theta_{\mathrm{W}}^{\mathrm{eff}}}}\xspace \\ \texttt{\textbackslash stwef} & {\ensuremath{\sin\theta_{\mathrm{W}}^{\mathrm{eff}}}}\xspace & \texttt{\textbackslash ctwef} & {\ensuremath{\cos\theta_{\mathrm{W}}^{\mathrm{eff}}}}\xspace & \texttt{\textbackslash gv} & {\ensuremath{g_{\mbox{\tiny V}}}}\xspace \\ \texttt{\textbackslash ga} & {\ensuremath{g_{\mbox{\tiny A}}}}\xspace & \texttt{\textbackslash order} & {\ensuremath{\mathcal{O}}}\xspace & \texttt{\textbackslash ordalph} & {\ensuremath{\mathcal{O}(\alpha)}}\xspace \\ \texttt{\textbackslash ordalsq} & {\ensuremath{\mathcal{O}(\alpha^{2})}}\xspace & \texttt{\textbackslash ordalcb} & {\ensuremath{\mathcal{O}(\alpha^{3})}}\xspace & \\ \end{tabular*} \subsubsection{QCD parameters} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash as} & {\ensuremath{\alpha_s}}\xspace & \texttt{\textbackslash MSb} & {\ensuremath{\overline{\mathrm{MS}}}}\xspace & \texttt{\textbackslash lqcd} & {\ensuremath{\Lambda_{\mathrm{QCD}}}}\xspace \\ \texttt{\textbackslash qsq} & {\ensuremath{q^2}}\xspace & \\ \end{tabular*} \subsubsection{CKM, \boldmath {\ensuremath{C\!P}}\xspace violation} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash eps} & {\ensuremath{\varepsilon}}\xspace & \texttt{\textbackslash epsK} & {\ensuremath{\varepsilon_K}}\xspace & \texttt{\textbackslash epsB} & {\ensuremath{\varepsilon_B}}\xspace \\ \texttt{\textbackslash epsp} & {\ensuremath{\varepsilon^\prime_K}}\xspace & \texttt{\textbackslash CP} & {\ensuremath{C\!P}}\xspace & \texttt{\textbackslash CPT} & {\ensuremath{C\!PT}}\xspace \\ \texttt{\textbackslash T} & \T & \texttt{\textbackslash rhobar} & {\ensuremath{\overline \rho}}\xspace & \texttt{\textbackslash 
etabar} & {\ensuremath{\overline \eta}}\xspace \\ \texttt{\textbackslash Vud} & {\ensuremath{V_{\uquark\dquark}}}\xspace & \texttt{\textbackslash Vcd} & {\ensuremath{V_{\cquark\dquark}}}\xspace & \texttt{\textbackslash Vtd} & {\ensuremath{V_{\tquark\dquark}}}\xspace \\ \texttt{\textbackslash Vus} & {\ensuremath{V_{\uquark\squark}}}\xspace & \texttt{\textbackslash Vcs} & {\ensuremath{V_{\cquark\squark}}}\xspace & \texttt{\textbackslash Vts} & {\ensuremath{V_{\tquark\squark}}}\xspace \\ \texttt{\textbackslash Vub} & {\ensuremath{V_{\uquark\bquark}}}\xspace & \texttt{\textbackslash Vcb} & {\ensuremath{V_{\cquark\bquark}}}\xspace & \texttt{\textbackslash Vtb} & {\ensuremath{V_{\tquark\bquark}}}\xspace \\ \texttt{\textbackslash Vuds} & {\ensuremath{V_{\uquark\dquark}^\ast}}\xspace & \texttt{\textbackslash Vcds} & {\ensuremath{V_{\cquark\dquark}^\ast}}\xspace & \texttt{\textbackslash Vtds} & {\ensuremath{V_{\tquark\dquark}^\ast}}\xspace \\ \texttt{\textbackslash Vuss} & {\ensuremath{V_{\uquark\squark}^\ast}}\xspace & \texttt{\textbackslash Vcss} & {\ensuremath{V_{\cquark\squark}^\ast}}\xspace & \texttt{\textbackslash Vtss} & {\ensuremath{V_{\tquark\squark}^\ast}}\xspace \\ \texttt{\textbackslash Vubs} & {\ensuremath{V_{\uquark\bquark}^\ast}}\xspace & \texttt{\textbackslash Vcbs} & {\ensuremath{V_{\cquark\bquark}^\ast}}\xspace & \texttt{\textbackslash Vtbs} & {\ensuremath{V_{\tquark\bquark}^\ast}}\xspace \\ \end{tabular*} \subsubsection{Oscillations} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash dm} & {\ensuremath{\Delta m}}\xspace & \texttt{\textbackslash dms} & {\ensuremath{\Delta m_{{\ensuremath{\Ps}}\xspace}}}\xspace & \texttt{\textbackslash dmd} & {\ensuremath{\Delta m_{{\ensuremath{\Pd}}\xspace}}}\xspace \\ \texttt{\textbackslash DG} & {\ensuremath{\Delta\Gamma}}\xspace & \texttt{\textbackslash DGs} & 
{\ensuremath{\Delta\Gamma_{{\ensuremath{\Ps}}\xspace}}}\xspace & \texttt{\textbackslash DGd} & {\ensuremath{\Delta\Gamma_{{\ensuremath{\Pd}}\xspace}}}\xspace \\ \texttt{\textbackslash Gs} & {\ensuremath{\Gamma_{{\ensuremath{\Ps}}\xspace}}}\xspace & \texttt{\textbackslash Gd} & {\ensuremath{\Gamma_{{\ensuremath{\Pd}}\xspace}}}\xspace & \texttt{\textbackslash MBq} & {\ensuremath{M_{{\ensuremath{\PB}}\xspace_{\ensuremath{\Pq}}\xspace}}}\xspace \\ \texttt{\textbackslash DGq} & {\ensuremath{\Delta\Gamma_{{\ensuremath{\Pq}}\xspace}}}\xspace & \texttt{\textbackslash Gq} & {\ensuremath{\Gamma_{{\ensuremath{\Pq}}\xspace}}}\xspace & \texttt{\textbackslash dmq} & {\ensuremath{\Delta m_{{\ensuremath{\Pq}}\xspace}}}\xspace \\ \texttt{\textbackslash GL} & {\ensuremath{\Gamma_{\mathrm{ L}}}}\xspace & \texttt{\textbackslash GH} & {\ensuremath{\Gamma_{\mathrm{ H}}}}\xspace & \texttt{\textbackslash DGsGs} & {\ensuremath{\Delta\Gamma_{{\ensuremath{\Ps}}\xspace}/\Gamma_{{\ensuremath{\Ps}}\xspace}}}\xspace \\ \texttt{\textbackslash Delm} & {\mbox{$\Delta m $}}\xspace & \texttt{\textbackslash ACP} & {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace & \texttt{\textbackslash Adir} & {\ensuremath{{\mathcal{A}}^{\mathrm{ dir}}}}\xspace \\ \texttt{\textbackslash Amix} & {\ensuremath{{\mathcal{A}}^{\mathrm{ mix}}}}\xspace & \texttt{\textbackslash ADelta} & {\ensuremath{{\mathcal{A}}^\Delta}}\xspace & \texttt{\textbackslash phid} & {\ensuremath{\phi_{{\ensuremath{\Pd}}\xspace}}}\xspace \\ \texttt{\textbackslash sinphid} & {\ensuremath{\sin\!\phid}}\xspace & \texttt{\textbackslash phis} & {\ensuremath{\phi_{{\ensuremath{\Ps}}\xspace}}}\xspace & \texttt{\textbackslash betas} & {\ensuremath{\beta_{{\ensuremath{\Ps}}\xspace}}}\xspace \\ \texttt{\textbackslash sbetas} & {\ensuremath{\sigma(\beta_{{\ensuremath{\Ps}}\xspace})}}\xspace & \texttt{\textbackslash stbetas} & {\ensuremath{\sigma(2\beta_{{\ensuremath{\Ps}}\xspace})}}\xspace & \texttt{\textbackslash stphis} & 
{\ensuremath{\sigma(\phi_{{\ensuremath{\Ps}}\xspace})}}\xspace \\ \texttt{\textbackslash sinphis} & {\ensuremath{\sin\!\phis}}\xspace & \\ \end{tabular*} \subsubsection{Tagging} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash edet} & {\ensuremath{\varepsilon_{\mathrm{ det}}}}\xspace & \texttt{\textbackslash erec} & {\ensuremath{\varepsilon_{\mathrm{ rec/det}}}}\xspace & \texttt{\textbackslash esel} & \ensuremath{\epsilon_{\mathrm{sel}, {\ensuremath{\D^+_\squark}}\xspace}} \\ \texttt{\textbackslash etrg} & {\ensuremath{\varepsilon_{\mathrm{ trg/sel}}}}\xspace & \texttt{\textbackslash etot} & {\ensuremath{\varepsilon_{\mathrm{ tot}}}}\xspace & \texttt{\textbackslash mistag} & \ensuremath{\omega}\xspace \\ \texttt{\textbackslash wcomb} & \ensuremath{\omega^{\mathrm{comb}}}\xspace & \texttt{\textbackslash etag} & {\ensuremath{\varepsilon_{\mathrm{tag}}}}\xspace & \texttt{\textbackslash etagcomb} & {\ensuremath{\varepsilon_{\mathrm{tag}}^{\mathrm{comb}}}}\xspace \\ \texttt{\textbackslash effeff} & \ensuremath{\varepsilon_{\mathrm{eff}}}\xspace & \texttt{\textbackslash effeffcomb} & \ensuremath{\varepsilon_{\mathrm{eff}}^{\mathrm{comb}}}\xspace & \texttt{\textbackslash efftag} & {\ensuremath{\etag(1-2\omega)^2}}\xspace \\ \texttt{\textbackslash effD} & {\ensuremath{\etag D^2}}\xspace & \texttt{\textbackslash etagprompt} & {\ensuremath{\varepsilon_{\mathrm{ tag}}^{\mathrm{Pr}}}}\xspace & \texttt{\textbackslash etagLL} & {\ensuremath{\varepsilon_{\mathrm{ tag}}^{\mathrm{LL}}}}\xspace \\ \end{tabular*} \subsubsection{Key decay channels} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash BdToKstmm} & \decay{\Bd}{\Kstarz\mup\mun} & \texttt{\textbackslash BdbToKstmm} & 
\decay{\Bdb}{\Kstarzb\mup\mun} & \texttt{\textbackslash BsToJPsiPhi} & \decay{\Bs}{\jpsi\phi} \\ \texttt{\textbackslash BdToJPsiKst} & \decay{\Bd}{\jpsi\Kstarz} & \texttt{\textbackslash BdbToJPsiKst} & \decay{\Bdb}{\jpsi\Kstarzb} & \texttt{\textbackslash BsPhiGam} & \decay{\Bs}{\phi \g} \\ \texttt{\textbackslash BdKstGam} & \decay{\Bd}{\Kstarz \g} & \texttt{\textbackslash BTohh} & \decay{\B}{\Ph^+ \Ph'^-} & \texttt{\textbackslash BdTopipi} & \decay{\Bd}{\pip\pim} \\ \texttt{\textbackslash BdToKpi} & \decay{\Bd}{\Kp\pim} & \texttt{\textbackslash BsToKK} & \decay{\Bs}{\Kp\Km} & \texttt{\textbackslash BsTopiK} & \decay{\Bs}{\pip\Km} \\ \texttt{\textbackslash Cpipi} & \Cpipi & \texttt{\textbackslash Spipi} & \Spipi & \texttt{\textbackslash CKK} & \CKK \\ \texttt{\textbackslash SKK} & \SKK & \texttt{\textbackslash ADGKK} & \ADGKK & \\ \end{tabular*} \subsubsection{Rare decays} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash BdKstee} & \decay{\Bd}{\Kstarz\epem} & \texttt{\textbackslash BdbKstee} & \decay{\Bdb}{\Kstarzb\epem} & \texttt{\textbackslash bsll} & \decay{\bquark}{\squark \ell^+ \ell^-} \\ \texttt{\textbackslash AFB} & \ensuremath{A_{\mathrm{FB}}}\xspace & \texttt{\textbackslash FL} & \ensuremath{F_{\mathrm{L}}}\xspace & \texttt{\textbackslash AT\#1 \textbackslash AT2} & \AT2 \\ \texttt{\textbackslash btosgam} & \decay{\bquark}{\squark \g} & \texttt{\textbackslash btodgam} & \decay{\bquark}{\dquark \g} & \texttt{\textbackslash Bsmm} & \decay{\Bs}{\mup\mun} \\ \texttt{\textbackslash Bdmm} & \decay{\Bd}{\mup\mun} & \texttt{\textbackslash Bsee} & \Bsee & \texttt{\textbackslash Bdee} & \Bdee \\ \texttt{\textbackslash ctl} & \ensuremath{\cos{\theta_\ell}}\xspace & \texttt{\textbackslash ctk} & \ensuremath{\cos{\theta_K}}\xspace & \\ \end{tabular*} \subsubsection{Wilson coefficients and operators} 
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash C\#1 \textbackslash C9} & \C9 & \texttt{\textbackslash Cp\#1 \textbackslash Cp7} & \Cp7 & \texttt{\textbackslash Ceff\#1 \textbackslash Ceff9 } & \Ceff9 \\ \texttt{\textbackslash Cpeff\#1 \textbackslash Cpeff7} & \Cpeff7 & \texttt{\textbackslash Ope\#1 \textbackslash Ope2} & \Ope2 & \texttt{\textbackslash Opep\#1 \textbackslash Opep7} & \Opep7 \\ \end{tabular*} \subsubsection{Charm} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash xprime} & \ensuremath{x^{\prime}}\xspace & \texttt{\textbackslash yprime} & \ensuremath{y^{\prime}}\xspace & \texttt{\textbackslash ycp} & \ensuremath{y_{\CP}}\xspace \\ \texttt{\textbackslash agamma} & \ensuremath{A_{\Gamma}}\xspace & \texttt{\textbackslash dkpicf} & \decay{\Dz}{\Km\pip} & \\ \end{tabular*} \subsubsection{QM} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash bra[1] \textbackslash bra\{a\}} & \bra{a} & \texttt{\textbackslash ket[1] \textbackslash ket\{b\}} & \ket{b} & \texttt{\textbackslash braket[2] \textbackslash braket\{a\}\{b\}} & \braket{a}{b} \\ \end{tabular*} \subsection{Units (these macros add a small space in front)} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash unit[1] \textbackslash unit\{kg\} } & \unit{kg} & \\ \end{tabular*} \subsubsection{Energy and momentum } 
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash tev} & \ifthenelse{\boolean{inbibliography}}{\ensuremath{~T\kern -0.05em eV}}{\ensuremath{\mathrm{\,Te\kern -0.1em V}}}\xspace & \texttt{\textbackslash gev} & \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace & \texttt{\textbackslash mev} & \ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace \\ \texttt{\textbackslash kev} & \ensuremath{\mathrm{\,ke\kern -0.1em V}}\xspace & \texttt{\textbackslash ev} & \ensuremath{\mathrm{\,e\kern -0.1em V}}\xspace & \texttt{\textbackslash gevgev} & \gevgev \\ \texttt{\textbackslash mevc} & \ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace & \texttt{\textbackslash gevc} & \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace & \texttt{\textbackslash mevcc} & \ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace \\ \texttt{\textbackslash gevcc} & \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace & \texttt{\textbackslash gevgevcc} & \gevgevcc & \texttt{\textbackslash gevgevcccc} & \ensuremath{{\mathrm{\,Ge\kern -0.1em V^2\!/}c^4}}\xspace \\ \end{tabular*} \subsubsection{Distance and area (these macros add a small space)} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash km} & \ensuremath{\mathrm{ \,km}}\xspace & \texttt{\textbackslash m} & \ensuremath{\mathrm{ \,m}}\xspace & \texttt{\textbackslash ma} & \ensuremath{{\mathrm{ \,m}}^2}\xspace \\ \texttt{\textbackslash cm} & \ensuremath{\mathrm{ \,cm}}\xspace & \texttt{\textbackslash cma} & \ensuremath{{\mathrm{ \,cm}}^2}\xspace & \texttt{\textbackslash mm} & \ensuremath{\mathrm{ \,mm}}\xspace \\ \texttt{\textbackslash mma} & \ensuremath{{\mathrm{ \,mm}}^2}\xspace & \texttt{\textbackslash mum} & \ensuremath{{\,\upmu\mathrm{m}}}\xspace & 
\texttt{\textbackslash muma} & \ensuremath{{\,\upmu\mathrm{m}^2}}\xspace \\ \texttt{\textbackslash nm} & \ensuremath{\mathrm{ \,nm}}\xspace & \texttt{\textbackslash fm} & \ensuremath{\mathrm{ \,fm}}\xspace & \texttt{\textbackslash barn} & \ensuremath{\mathrm{ \,b}}\xspace \\ \texttt{\textbackslash mbarn} & \ensuremath{\mathrm{ \,mb}}\xspace & \texttt{\textbackslash mub} & \ensuremath{{\mathrm{ \,\upmu b}}}\xspace & \texttt{\textbackslash nb} & \ensuremath{\mathrm{ \,nb}}\xspace \\ \texttt{\textbackslash invnb} & \ensuremath{\mbox{\,nb}^{-1}}\xspace & \texttt{\textbackslash pb} & \ensuremath{\mathrm{ \,pb}}\xspace & \texttt{\textbackslash invpb} & \ensuremath{\mbox{\,pb}^{-1}}\xspace \\ \texttt{\textbackslash fb} & \ensuremath{\mbox{\,fb}}\xspace & \texttt{\textbackslash invfb} & \ensuremath{\mbox{\,fb}^{-1}}\xspace & \texttt{\textbackslash ab} & \ensuremath{\mbox{\,ab}}\xspace \\ \texttt{\textbackslash invab} & \ensuremath{\mbox{\,ab}^{-1}}\xspace & \\ \end{tabular*} \subsubsection{Time } \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash sec} & \ensuremath{\mathrm{{\,s}}}\xspace & \texttt{\textbackslash ms} & \ensuremath{{\mathrm{ \,ms}}}\xspace & \texttt{\textbackslash mus} & \ensuremath{{\,\upmu{\mathrm{ s}}}}\xspace \\ \texttt{\textbackslash ns} & \ensuremath{{\mathrm{ \,ns}}}\xspace & \texttt{\textbackslash ps} & \ensuremath{{\mathrm{ \,ps}}}\xspace & \texttt{\textbackslash fs} & \ensuremath{\mathrm{ \,fs}}\xspace \\ \texttt{\textbackslash mhz} & \ensuremath{{\mathrm{ \,MHz}}}\xspace & \texttt{\textbackslash khz} & \ensuremath{{\mathrm{ \,kHz}}}\xspace & \texttt{\textbackslash hz} & \ensuremath{{\mathrm{ \,Hz}}}\xspace \\ \texttt{\textbackslash invps} & \ensuremath{{\mathrm{ \,ps^{-1}}}}\xspace & \texttt{\textbackslash invns} & \ensuremath{{\mathrm{ \,ns^{-1}}}}\xspace & \texttt{\textbackslash yr} & \ensuremath{\mathrm{ 
\,yr}}\xspace \\ \texttt{\textbackslash hr} & \ensuremath{\mathrm{ \,hr}}\xspace & \\ \end{tabular*} \subsubsection{Temperature} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash degc} & \ensuremath{^\circ}{C}\xspace & \texttt{\textbackslash degk} & \ensuremath {\mathrm{ K}}\xspace & \\ \end{tabular*} \subsubsection{Material lengths, radiation} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash Xrad} & \ensuremath{X_0}\xspace & \texttt{\textbackslash NIL} & \ensuremath{\lambda_{int}}\xspace & \texttt{\textbackslash mip} & MIP\xspace \\ \texttt{\textbackslash neutroneq} & \ensuremath{\mathrm{ \,n_{eq}}}\xspace & \texttt{\textbackslash neqcmcm} & \ensuremath{\mathrm{ \,n_{eq} / cm^2}}\xspace & \texttt{\textbackslash kRad} & \ensuremath{\mathrm{ \,kRad}}\xspace \\ \texttt{\textbackslash MRad} & \ensuremath{\mathrm{ \,MRad}}\xspace & \texttt{\textbackslash ci} & \ensuremath{\mathrm{ \,Ci}}\xspace & \texttt{\textbackslash mci} & \ensuremath{\mathrm{ \,mCi}}\xspace \\ \end{tabular*} \subsubsection{Uncertainties} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash sx} & \sx & \texttt{\textbackslash sy} & \sy & \texttt{\textbackslash sz} & \sz \\ \texttt{\textbackslash stat} & \ensuremath{\mathrm{\,(stat)}}\xspace & \texttt{\textbackslash syst} & \ensuremath{\mathrm{\,(syst)}}\xspace & \\ \end{tabular*} \subsubsection{Maths} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash order} & {\ensuremath{\mathcal{O}}}\xspace & 
\texttt{\textbackslash chisq} & \ensuremath{\chi^2}\xspace & \texttt{\textbackslash chisqndf} & \ensuremath{\chi^2/\mathrm{ndf}}\xspace \\ \texttt{\textbackslash chisqip} & \ensuremath{\chi^2_{\text{IP}}}\xspace & \texttt{\textbackslash chisqvs} & \ensuremath{\chi^2_{\text{VS}}}\xspace & \texttt{\textbackslash chisqvtx} & \ensuremath{\chi^2_{\text{vtx}}}\xspace \\ \texttt{\textbackslash chisqvtxndf} & \ensuremath{\chi^2_{\text{vtx}}/\mathrm{ndf}}\xspace & \texttt{\textbackslash deriv} & \ensuremath{\mathrm{d}} & \texttt{\textbackslash gsim} & \gsim \\ \texttt{\textbackslash lsim} & \lsim & \texttt{\textbackslash mean[1] \textbackslash mean\{x\}} & \mean{x} & \texttt{\textbackslash abs[1] \textbackslash abs\{x\}} & \abs{x} \\ \texttt{\textbackslash Real} & \ensuremath{\mathcal{R}e}\xspace & \texttt{\textbackslash Imag} & \ensuremath{\mathcal{I}m}\xspace & \texttt{\textbackslash PDF} & PDF\xspace \\ \texttt{\textbackslash sPlot} & \mbox{\em sPlot}\xspace & \texttt{\textbackslash sFit} & \mbox{\em sFit}\xspace & \\ \end{tabular*} \subsection{Kinematics} \subsubsection{Energy, Momenta} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash Ebeam} & \ensuremath{E_{\mbox{\tiny BEAM}}}\xspace & \texttt{\textbackslash sqs} & \ensuremath{\protect\sqrt{s}}\xspace & \texttt{\textbackslash sqsnn} & \sqsnn \\ \texttt{\textbackslash pt} & \mbox{$p_{\mathrm{ T}}$}\xspace & \texttt{\textbackslash ptsq} & \ptsq & \texttt{\textbackslash ptot} & \mbox{$p$}\xspace \\ \texttt{\textbackslash et} & \mbox{$E_{\mathrm{ T}}$}\xspace & \texttt{\textbackslash mt} & \mbox{$M_{\mathrm{ T}}$}\xspace & \texttt{\textbackslash dpp} & \ensuremath{\Delta p/p}\xspace \\ \texttt{\textbackslash msq} & \ensuremath{m^2}\xspace & \texttt{\textbackslash dedx} & \ensuremath{\mathrm{d}\hspace{-0.1em}E/\mathrm{d}x}\xspace & \\ \end{tabular*} \subsubsection{PID} 
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash dllkpi} & \ensuremath{\mathrm{DLL}_{\kaon\pion}}\xspace & \texttt{\textbackslash dllppi} & \ensuremath{\mathrm{DLL}_{\proton\pion}}\xspace & \texttt{\textbackslash dllepi} & \ensuremath{\mathrm{DLL}_{\electron\pion}}\xspace \\ \texttt{\textbackslash dllmupi} & \ensuremath{\mathrm{DLL}_{\muon\pi}}\xspace & \\ \end{tabular*} \subsubsection{Geometry} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash degrees} & \ensuremath{^{\circ}}\xspace & \texttt{\textbackslash murad} & \murad & \texttt{\textbackslash mrad} & \ensuremath{\mathrm{ \,mrad}}\xspace \\ \texttt{\textbackslash rad} & \ensuremath{\mathrm{ \,rad}}\xspace & \\ \end{tabular*} \subsubsection{Accelerator} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash betastar} & \ensuremath{\beta^*} & \texttt{\textbackslash lum} & \lum & \texttt{\textbackslash intlum[1] \textbackslash intlum\{2 \,\ensuremath{\mbox{\,fb}^{-1}}\xspace\}} & \intlum{2 \,\ensuremath{\mbox{\,fb}^{-1}}\xspace} \\ \end{tabular*} \subsection{Software} \subsubsection{Programs} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash bcvegpy} & \mbox{\textsc{Bcvegpy}}\xspace & \texttt{\textbackslash boole} & \mbox{\textsc{Boole}}\xspace & \texttt{\textbackslash brunel} & \mbox{\textsc{Brunel}}\xspace \\ \texttt{\textbackslash davinci} & \mbox{\textsc{DaVinci}}\xspace & \texttt{\textbackslash dirac} & \mbox{\textsc{Dirac}}\xspace & 
\texttt{\textbackslash evtgen} & \mbox{\textsc{EvtGen}}\xspace \\ \texttt{\textbackslash fewz} & \mbox{\textsc{Fewz}}\xspace & \texttt{\textbackslash fluka} & \mbox{\textsc{Fluka}}\xspace & \texttt{\textbackslash ganga} & \mbox{\textsc{Ganga}}\xspace \\ \texttt{\textbackslash gaudi} & \mbox{\textsc{Gaudi}}\xspace & \texttt{\textbackslash gauss} & \mbox{\textsc{Gauss}}\xspace & \texttt{\textbackslash geant} & \mbox{\textsc{Geant4}}\xspace \\ \texttt{\textbackslash hepmc} & \mbox{\textsc{HepMC}}\xspace & \texttt{\textbackslash herwig} & \mbox{\textsc{Herwig}}\xspace & \texttt{\textbackslash moore} & \mbox{\textsc{Moore}}\xspace \\ \texttt{\textbackslash neurobayes} & \mbox{\textsc{NeuroBayes}}\xspace & \texttt{\textbackslash photos} & \mbox{\textsc{Photos}}\xspace & \texttt{\textbackslash powheg} & \mbox{\textsc{Powheg}}\xspace \\ \texttt{\textbackslash pythia} & \mbox{\textsc{Pythia}}\xspace & \texttt{\textbackslash resbos} & \mbox{\textsc{ResBos}}\xspace & \texttt{\textbackslash roofit} & \mbox{\textsc{RooFit}}\xspace \\ \texttt{\textbackslash root} & \mbox{\textsc{Root}}\xspace & \texttt{\textbackslash spice} & \mbox{\textsc{Spice}}\xspace & \texttt{\textbackslash urania} & \mbox{\textsc{Urania}}\xspace \\ \end{tabular*} \subsubsection{Languages} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash cpp} & \mbox{\textsc{C\raisebox{0.1em}{{\footnotesize{++}}}}}\xspace & \texttt{\textbackslash ruby} & \mbox{\textsc{Ruby}}\xspace & \texttt{\textbackslash fortran} & \mbox{\textsc{Fortran}}\xspace \\ \texttt{\textbackslash svn} & \mbox{\textsc{SVN}}\xspace & \texttt{\textbackslash git} & \git & \texttt{\textbackslash latex} & \latex \\ \end{tabular*} \subsubsection{Data processing} 
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash kbit} & \kbit & \texttt{\textbackslash kbps} & \kbps & \texttt{\textbackslash kbytes} & \ensuremath{{\mathrm{ \,kbytes}}}\xspace \\ \texttt{\textbackslash kbyps} & \kbyps & \texttt{\textbackslash mbit} & \mbit & \texttt{\textbackslash mbps} & \ensuremath{{\mathrm{ \,Mbyte/s}}}\xspace \\ \texttt{\textbackslash mbytes} & \ensuremath{{\mathrm{ \,Mbytes}}}\xspace & \texttt{\textbackslash mbyps} & \mbyps & \texttt{\textbackslash gbit} & \gbit \\ \texttt{\textbackslash gbps} & \gbps & \texttt{\textbackslash gbytes} & \ensuremath{{\mathrm{ \,Gbytes}}}\xspace & \texttt{\textbackslash gbyps} & \gbyps \\ \texttt{\textbackslash tbit} & \tbit & \texttt{\textbackslash tbps} & \tbps & \texttt{\textbackslash tbytes} & \ensuremath{{\mathrm{ \,Tbytes}}}\xspace \\ \texttt{\textbackslash tbyps} & \tbyps & \texttt{\textbackslash dst} & DST\xspace & \\ \end{tabular*} \subsection{Detector related} \subsubsection{Detector technologies} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash nonn} & \ensuremath{\mathrm{{ \mathit{n^+}} \mbox{-} on\mbox{-}{ \mathit{n}}}}\xspace & \texttt{\textbackslash ponn} & \ensuremath{\mathrm{{ \mathit{p^+}} \mbox{-} on\mbox{-}{ \mathit{n}}}}\xspace & \texttt{\textbackslash nonp} & \ensuremath{\mathrm{{ \mathit{n^+}} \mbox{-} on\mbox{-}{ \mathit{p}}}}\xspace \\ \texttt{\textbackslash cvd} & CVD\xspace & \texttt{\textbackslash mwpc} & MWPC\xspace & \texttt{\textbackslash gem} & GEM\xspace \\ \end{tabular*} \subsubsection{Detector components, electronics} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} 
\texttt{\textbackslash tell1} & TELL1\xspace & \texttt{\textbackslash ukl1} & UKL1\xspace & \texttt{\textbackslash beetle} & Beetle\xspace \\ \texttt{\textbackslash otis} & OTIS\xspace & \texttt{\textbackslash croc} & CROC\xspace & \texttt{\textbackslash carioca} & CARIOCA\xspace \\ \texttt{\textbackslash dialog} & DIALOG\xspace & \texttt{\textbackslash sync} & SYNC\xspace & \texttt{\textbackslash cardiac} & CARDIAC\xspace \\ \texttt{\textbackslash gol} & GOL\xspace & \texttt{\textbackslash vcsel} & VCSEL\xspace & \texttt{\textbackslash ttc} & TTC\xspace \\ \texttt{\textbackslash ttcrx} & TTCrx\xspace & \texttt{\textbackslash hpd} & HPD\xspace & \texttt{\textbackslash pmt} & PMT\xspace \\ \texttt{\textbackslash specs} & SPECS\xspace & \texttt{\textbackslash elmb} & ELMB\xspace & \texttt{\textbackslash fpga} & FPGA\xspace \\ \texttt{\textbackslash plc} & PLC\xspace & \texttt{\textbackslash rasnik} & RASNIK\xspace & \texttt{\textbackslash elmb} & ELMB\xspace \\ \texttt{\textbackslash can} & CAN\xspace & \texttt{\textbackslash lvds} & LVDS\xspace & \texttt{\textbackslash ntc} & NTC\xspace \\ \texttt{\textbackslash adc} & ADC\xspace & \texttt{\textbackslash led} & LED\xspace & \texttt{\textbackslash ccd} & CCD\xspace \\ \texttt{\textbackslash hv} & HV\xspace & \texttt{\textbackslash lv} & LV\xspace & \texttt{\textbackslash pvss} & PVSS\xspace \\ \texttt{\textbackslash cmos} & CMOS\xspace & \texttt{\textbackslash fifo} & FIFO\xspace & \texttt{\textbackslash ccpc} & CCPC\xspace \\ \end{tabular*} \subsubsection{Chemical symbols} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash cfourften} & \ensuremath{\mathrm{ C_4 F_{10}}}\xspace & \texttt{\textbackslash cffour} & \ensuremath{\mathrm{ CF_4}}\xspace & \texttt{\textbackslash cotwo} & \cotwo \\ \texttt{\textbackslash csixffouteen} & \csixffouteen & \texttt{\textbackslash mgftwo} & 
\mgftwo & \texttt{\textbackslash siotwo} & \siotwo \\ \end{tabular*} \subsection{Special Text } \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash eg} & \mbox{\itshape e.g.}\xspace & \texttt{\textbackslash ie} & \mbox{\itshape i.e.}\xspace & \texttt{\textbackslash etal} & \mbox{\itshape et al.}\xspace \\ \texttt{\textbackslash etc} & \mbox{\itshape etc.}\xspace & \texttt{\textbackslash cf} & \mbox{\itshape cf.}\xspace & \texttt{\textbackslash ffp} & \mbox{\itshape ff.}\xspace \\ \texttt{\textbackslash vs} & \mbox{\itshape vs.}\xspace & \\ \end{tabular*} \subsubsection{Helpful to align numbers in tables} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash phz} & \phz & \\ \end{tabular*} \section{Multivariate algorithm} \label{sec:MVAs} At the second stage of the trigger, HLT2\xspace, and offline, the timing budget allows the use of more complex algorithms. Besides the spatial information, each muon hit also carries two different time counters, one for each view, $x$ and $y$. The number of views ({\it i.e.} the fact that the hit is crossed or uncrossed) also provides valuable information, as noise or spillover hits typically have one view only. This information, along with its correlations, can be exploited in a multivariate operator. For this purpose, a recent variant of gradient tree boosting available in the CatBoost library from Yandex~\cite{prokhorenkova2018catboost} has been implemented~\cite{ml2019sweights-acat}. It uses oblivious decision trees as weak learners, as explained in the following. A regular decision tree selects each split independently, while an oblivious decision tree uses the same split at every node of a given level. The difference is illustrated in Fig.~\ref{fig:oblivious-tree}.
An oblivious decision tree is less expressive but is much faster to evaluate, as it makes it possible to unwrap the tree into a table and look up the correct leaf in one operation, instead of the multiple conditional jumps of a regular tree. According to a benchmark study by the CatBoost authors, this provides 30--100 times faster prediction compared to competing state-of-the-art gradient boosting libraries~\cite{catboostblog}. \begin{figure}[ht!] \begin{subfigure}{0.40\textwidth} \centering \includegraphics[width=\textwidth]{figs/regular_tree.pdf} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{figs/oblivious_tree.pdf} \end{subfigure} \caption{Classic versus oblivious decision trees. Reproduced from \cite{matrixnet}.} \label{fig:oblivious-tree} \end{figure} For the muon identification, five variables for each muon station M2 to M5 are used as input to the CatBoost algorithm: \begin{itemize} \item $x_{\text{res}}$: the difference between the closest hit $x$ position and the track extrapolation, normalised to the total uncertainty; \item $y_{\text{res}}$: the difference between the closest hit $y$ position and the track extrapolation, normalised to the total uncertainty; \item $t_x$: the time of the $x$ view; \item $dt = t_x-t_y$: the temporal difference between the $x$ and $y$ views; \item $N_{\text{views}}$: the number of views. \end{itemize} The uncertainty in the residuals $x_{\text{res}}$ and $y_{\text{res}}$ contains the pad size and the contribution from multiple scattering (Eq.~\ref{eq:ms}), summed in quadrature. In addition to the hit information, for each event the track extrapolation $x$ and $y$ coordinates on M2 are used, to allow the algorithm to discriminate between different detector regions. Finally, the aforementioned \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace variable of the track is added.
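Returning to the table-lookup evaluation described earlier, the idea can be sketched in a few lines of plain Python. This is only an illustrative sketch, not the CatBoost implementation; the features, thresholds and leaf values are invented.

```python
# Sketch of oblivious-tree evaluation as a single table lookup. Because a
# depth-d oblivious tree applies the same (feature, threshold) split at every
# node of a given level, the d binary split outcomes form an index into a
# table of 2**d leaf values -- no per-node conditional jumps are needed.

def eval_oblivious_tree(x, splits, leaf_values):
    """x: feature vector; splits: one (feature_index, threshold) per level;
    leaf_values: list of length 2**len(splits)."""
    index = 0
    for feature, threshold in splits:
        index = (index << 1) | (1 if x[feature] > threshold else 0)
    return leaf_values[index]

# Depth-2 example: level 0 splits on feature 0, level 1 on feature 1.
splits = [(0, 0.5), (1, 1.0)]
leaf_values = [10.0, 20.0, 30.0, 40.0]  # one value per split-outcome pair

print(eval_oblivious_tree([0.7, 0.2], splits, leaf_values))  # index 2 -> 30.0
```

In a real booster the prediction is the sum of such lookups over all trees, which is why the unwrapped representation evaluates so quickly.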
This set of up to 23 variables per event has been found to be the smallest one carrying the maximum associated information without introducing excessive correlations. This feature is very important in order to decrease the complexity, and hence the computation time, of the operator in the trigger. The classifier is trained using samples from 2016 data, to which the \texttt{IsMuon} requirement is applied. Since the \texttt{IsMuon} algorithm is very fast to execute and already provides rejections of $\mathcal{O}(1\%)$, evaluating the classifier only for events that pass the \texttt{IsMuon} requirement significantly reduces the computational cost and focuses the classifier on reducing the remaining background. The data samples used in the training are: \begin{itemize} \item muons from $J/\psi \rightarrow \mu^+\mu^-$ decays, \item pions from $D^{*+} \rightarrow D^0 (\rightarrow K^- \pi^+) \pi^+$ decays, \item protons from $\Lambda \rightarrow p\pi^-$ decays. \end{itemize} While protons represent pure combinatorial events, the pion sample is included to boost the training statistics and to account for another source of classification error: particles that decay in flight into muons before reaching the muon stations. These samples have been treated as described in Sec.~\ref{sec:newchi2}, including kinematic reweighting, background subtraction and multiplicity weights. To deal with negative sWeights, the solution proposed in Ref.~\cite{ml2019sweights-acat} is used: first, a machine-learning regression estimates the expected sWeight at each point of the training-variable phase space; second, the expected sWeight is used as the event weight during classification. Finally, since the classifier is trained on the same 2016 calibration data that are used to evaluate its performance, a cross-validation method is used to obtain unbiased predictions.
The dataset is split into $5$ subsets of equal size, and the model is independently trained on all subsets but the $i$-th, for which predictions are made. The ROC curves of the CatBoost algorithm are shown in \Figref{fig:runIIIp} for muon efficiencies above $90\%$. For comparison, the ROC curves for the \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace variable are superimposed. \begin{figure}[htb!] \raggedright \includegraphics[width=0.32\textwidth]{figs/Fig_1_Upgrade.pdf} \includegraphics[width=0.32\textwidth]{figs/legend_Upgrade.pdf}\\ \includegraphics[width=0.32\textwidth]{figs/Fig_2_Upgrade.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_6_Upgrade.pdf}\\ \includegraphics[width=0.32\textwidth]{figs/Fig_3_Upgrade.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_7_Upgrade.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_10_Upgrade.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_4_Upgrade.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_8_Upgrade.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_11_Upgrade.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_5_Upgrade.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_9_Upgrade.pdf} \includegraphics[width=0.32\textwidth]{figs/Fig_12_Upgrade.pdf} \caption{Proton rejection as a function of muon efficiency for tracks satisfying \texttt{IsMuon} obtained with the CatBoost algorithm (magenta) and \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace (blue) on 2016 calibration data. Low momentum bins, which are not covered by the calibration samples, are not shown. } \label{fig:runIIIp} \end{figure} As a result, for high muon efficiency the CatBoost algorithm has better discriminating power than \ensuremath{\chi^2_{\mathrm{CORR}}}\xspace in all the momentum bins. 
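The five-fold cross-validation scheme described above, in which predictions for each subset come from a model trained only on the other subsets, can be sketched in plain Python. The mean "model" and the round-robin split below are placeholders standing in for the CatBoost classifier and the actual splitting.

```python
# Sketch of k-fold cross-validation for unbiased (out-of-fold) predictions:
# the model used to predict on fold i is trained only on the other folds.

def kfold_predictions(data, train, predict, k=5):
    n = len(data)
    folds = [list(range(i, n, k)) for i in range(k)]  # round-robin split
    preds = [None] * n
    for i in range(k):
        train_idx = [j for f in range(k) if f != i for j in folds[f]]
        model = train([data[j] for j in train_idx])
        for j in folds[i]:
            preds[j] = predict(model, data[j])
    return preds

# Placeholder "training": the model is just the mean of the training targets,
# so each element's prediction is the mean of the other folds.
train = lambda rows: sum(rows) / len(rows)
predict = lambda model, row: model

print(kfold_predictions([1.0, 2.0, 3.0, 4.0, 5.0], train, predict, k=5))
```

The key property is that no element ever contributes to the model that predicts it, which removes the training bias mentioned in the text.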
The difference in background rejection has a break point around 98\% muon efficiency, where it lies in the 20--40\% range for $p<10\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ and in the 4--10\% range for $p\geq 10\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$. With a setup similar to the one used for HLT1\xspace (Sec.~\ref{ssec:perf}), throughput tests performed on simulated Run 3 data show a resource usage of about 0.4\% out of a total HLT2\xspace throughput rate of $\sim 129$\,Hz. Therefore, this CatBoost operator is fast enough to be employed in the upgraded HLT2\xspace trigger of the experiment.
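The working points quoted above (background rejection at a fixed muon efficiency, read off the ROC curves) can be computed from classifier scores as in the following sketch. The scores are invented for illustration and the threshold choice is only approximate at the stated efficiency.

```python
# Sketch: background rejection at a fixed signal-efficiency working point,
# as used when comparing classifiers at ~98% muon efficiency.

def rejection_at_efficiency(signal_scores, background_scores, efficiency):
    # Score threshold retaining (approximately) the requested fraction of signal.
    cut = sorted(signal_scores, reverse=True)[
        int(efficiency * len(signal_scores)) - 1]
    kept_bkg = sum(1 for s in background_scores if s >= cut)
    return 1.0 - kept_bkg / len(background_scores)

signal = [0.9, 0.8, 0.7, 0.6, 0.5]       # invented classifier scores
background = [0.75, 0.4, 0.3, 0.2, 0.1]  # invented classifier scores

print(rejection_at_efficiency(signal, background, 0.8))  # -> 0.8
```

Scanning the efficiency over a grid and plotting rejection against it reproduces a ROC-style curve like those in the figures.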
\section{Annotated Stack Trees} An \emph{annotated stack tree} is a tree whose nodes are labelled by annotated stacks. Furthermore, each leaf node is also labelled with a control state. Let $\stacktrees{\maxord}{\salphabet}$ denote the set of order-$\maxord$ annotated stack trees over $\salphabet$. \begin{definition}[Order-$\maxord$ Annotated Stack Trees] An \emph{order-$\maxord$ annotated stack tree} over an alphabet $\salphabet$ and set of control states $\controls$ is a $\brac{ \stacks{\maxord}{\salphabet} \cup \brac{ \controls \times \stacks{\maxord}{\salphabet} } }$-labelled tree $\tree = \tup{\treedom, \treelabelling}$ such that for all leaves $\tnode$ of $\tree$ we have $\ap{\treelabelling}{\tnode} \in \controls \times \stacks{\maxord}{\salphabet}$ and for all internal nodes $\tnode$ of $\tree$ we have $\ap{\treelabelling}{\tnode} \in \stacks{\maxord}{\salphabet}$. \end{definition} \subsection{Annotated Stack Tree Operations} \begin{definition}[Order-$\maxord$ Annotated Stack Tree Operations] Over a given finite alphabet $\salphabet$ and finite set of control states $\controls$, the set of \emph{order-$\maxord$ stack tree operations} is defined to be \[ \begin{array}{rcl} \stacktreeops{\maxord}{\salphabet}{\controls} &=& \setcomp{ \stpush{\control}{\control_1, \ldots, \control_\numof}, \stpop{\control_1, \ldots, \control_\numof}{\control} }{ \control, \control_1, \ldots, \control_\numof \in \controls } \cup \\ & & \setcomp{\stsop{\control}{\sop}{\control'}} {\sop \in \stackops{\maxord}{\salphabet} \land \control, \control' \in \controls} \ . \end{array} \] \end{definition} Stack operations may be applied to any leaf of the tree. Let $\tleaf{\tree}{\idxi}$ denote the $\idxi$th leaf of tree $\tree$. We define the local application of an operation to the $\idxi$th leaf as follows.
Let $\tree = \tup{\treedom, \treelabelling}$ and $\ap{\treelabelling}{\tleaf{\tree}{\idxi}} = \tup{\control, \stack}$ \[ \begin{array}{rcl} \apstop{\stsop{\control}{\sop}{\control'}}{\idxi}{\tree} &=& \treemod{\tree} {\tleaf{\tree}{\idxi}} {\tup{\control', \ap{\sop}{\stack}}} \\ \apstop{\stpush{\control}{\control_1, \ldots, \control_\numof}}{\idxi}{\tree} &=& \treemod{\treemod{\treemod{\tree} {\tleaf{\tree}{\idxi}} {\stack}} {\tleaf{\tree}{\idxi}1} {\tup{\control_1, \stack}} \cdots} {\tleaf{\tree}{\idxi}\numof} {\tup{\control_\numof, \stack}} \end{array} \] and when $\tleaf{\tree}{\idxi} = \tnode 1$, \ldots, $\tleaf{\tree}{\idxi+\numof-1} = \tnode \numof$ are the only children of $\tnode$, $\ap{\treelabelling}{\tleaf{\tree}{\idxi}} = \tup{\control_1, \stack_1}$, \ldots, $\ap{\treelabelling} {\tleaf{\tree}{\idxi+{\numof-1}}} = \tup{\control_\numof, \stack_\numof}$, and $\ap{\treelabelling}{\tnode} = \stack$, \[ \apstop{\stpop{\control_1, \ldots, \control_\numof}{\control}}{\idxi}{\tree} = \treemod{\brac{ \treedel{\tree} {\set{\tleaf{\tree}{\idxi}, \ldots, \tleaf{\tree}{\idxi+{\numof-1}}}} }} {\tnode} {\tup{\control, \stack}} \ . \] For all $\strule \in \stacktreeops{\maxord}{\salphabet}{\controls}$ we write $\ap{\strule}{\tree}$ to denote the set $\setcomp{\tree'}{\exists \idxi . \tree' = \apstop{\strule}{\idxi}{\tree}}$. \subsection{Ground Annotated Stack Tree Rewrite Systems} \begin{definition}[Order-$\maxord$ Ground Annotated Stack Tree Rewrite Systems] An \emph{order-$\maxord$ ground annotated stack tree rewrite system (GASTRS)} $\gstrs$ is a tuple $\tup{\salphabet, \controls, \rules}$ where $\salphabet$ is a finite stack alphabet, $\controls$ is a finite set of control states, and $\rules \subset \stacktreeops{\maxord}{\salphabet}{\controls}$ is a finite set of operations. \end{definition} A configuration of an order-$\maxord$ GASTRS is an order-$\maxord$ annotated stack tree $\tree$ over alphabet $\salphabet$.
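To illustrate the operations just defined, consider their direct instantiation on a one-node configuration, using illustrative control states $\control, \control_1, \control_2$: a push creates new leaves sharing the stack of the old leaf, and the matching pop undoes it.

```latex
% T is a single-node tree whose root (a leaf) is labelled (q, w).
\[
\apstop{\stpush{\control}{\control_1, \control_2}}{1}{\tree}
\quad\text{is the tree with root labelled}\quad \stack
\quad\text{and leaves}\quad \tup{\control_1, \stack}, \tup{\control_2, \stack} \ ,
\]
% and the matching pop removes both leaves and restores the original labelling:
\[
\apstop{\stpop{\control_1, \control_2}{\control}}{1}
       {\apstop{\stpush{\control}{\control_1, \control_2}}{1}{\tree}} = \tree \ .
\]
```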
We have a transition $\tree \tran \tree'$ whenever there is some $\strule \in \rules$ and $\tree' \in \ap{\strule}{\tree}$. We write $\tree \reaches \tree'$ when there is a run $\tree = \tree_0 \tran \cdots \tran \tree_\numof = \tree'$. \subsection{Regular Sets of Annotated Stack Trees} We define a notion of annotated stack tree automata for recognising regular sets of annotated stack trees. We give an initial exposition here, with more details (definitions and proofs) in Appendix~\ref{sec:aut-particulars}. In particular, \easyicalp{ we have the following result. \begin{proposition} Annotated stack tree automata form an effective boolean algebra, membership is in linear time, and emptiness is PSPACE-complete. \end{proposition} }{ stack tree automata form an effective boolean algebra, membership is in linear time, and emptiness is PSPACE-complete. } Transitions of stack tree automata are labelled by states of stack automata which have a further nested structure~\cite{BCHS12}. These automata are based on a similar automaton model by Bouajjani and Meyer~\cite{BM04}. We give the formal definition with intuition following.
\begin{definition}[Order-$\maxord$ Annotated Stack Tree Automata] An \emph{order-$\maxord$ stack tree automaton} over a given stack alphabet $\salphabet$ and set of control states $\controls$ is a tuple \[ \ta = \tup{ \tastates, \sastates_\maxord,\ldots,\sastates_1, \salphabet, \tadelta, \sadelta_\maxord,\ldots,\sadelta_1, \controls, \tafinals, \safinals_\maxord,\ldots,\safinals_1 } \] where % $\salphabet$ is a finite stack alphabet, % $\tastates$ is a finite set of states, % \[ \tadelta \subset \tastates \times \setcomp{\tup{\idxi, \numof}}{1 \leq \idxi \leq \numof} \times \brac{\tastates \setminus \tafinals} \times \sastates_\maxord \] is a finite set of transitions, % $\controls \subseteq \tastates$ and $\tafinals \subseteq \tastates$ are initial and final states respectively, % and \begin{enumerate} \item for all $\maxord \geq \midord \geq 2$, we have % $\sastates_\midord$ is a finite set of states, % $\sadelta_\midord \subseteq \sastates_\midord \times \sastates_{\midord-1} \times 2^{\sastates_\midord}$ is a transition relation, and % $\safinals_\midord \subseteq \sastates_\midord$ is a set of accepting states, and \item $\sastates_1$ is a finite set of states, % $\sadelta_1 \subseteq \bigcup\limits_{2 \leq \midord \leq \maxord} \brac{\sastates_1 \times \salphabet \times 2^{\sastates_\midord} \times 2^{\sastates_1}}$ is a transition relation, and % $\safinals_1 \subseteq \sastates_1$ is a set of accepting states. \end{enumerate} \end{definition} \subsubsection{Accepting Stacks} Order-$\midord$ stacks are recognised from states in $\sastates_\midord$. A transition $\tup{\sastate, \sastate', \sastateset} \in \sadelta_\midord$ from $\sastate$ to $\sastateset$ for some $\midord > 1$ is denoted $\sastate \satran{\sastate'} \sastateset$ and can be fired when the stack is $\stack \scomp{\midord} \stack'$ and $\stack$ is accepted from $\sastate' \in \sastates_{(\midord-1)}$. The remainder of the stack $\stack'$ must be accepted from all states in $\sastateset$. 
At order-$1$, a transition $\tup{\sastate, \cha, \sastateset_\branch, \sastateset} \in \sadelta_1$ is denoted $\sastate \satrancol{\cha}{\sastateset_\branch} \sastateset$ and is a standard alternating $\cha$-transition with the additional requirement that the stack annotating $\cha$ is accepted from all states in $\sastateset_\branch$. A stack is accepted if a subset of $\safinals_\midord$ is reached at the end of each order-$\midord$ stack. Note that we give a more formal definition of a run in Appendix~\ref{sec:aut-particulars}. We write $\stack \in \slang{\sastate}{\ta}$ whenever $\stack$ is accepted from a state $\sastate$. An order-$\maxord$ stack can be represented naturally as an edge-labelled tree over the alphabet $\set{\sopen{\maxord-1},\ldots,\sopen{1}, \sclose{1},\ldots,\sclose{\maxord-1}} \uplus \salphabet$, with $\salphabet$-labelled edges having a second target to the tree representing the annotation. For technical convenience, a tree representing an order-$\midord$ stack does not use $\sopen{\midord}$ or $\sclose{\midord}$ symbols (these appear uniquely at the beginning and end of the stack). An example order-$3$ stack is given below, with only a few annotations shown. The annotations are order-$3$ and order-$2$ respectively.
\begin{center} \vspace{4ex} \begin{psmatrix}[nodealign=true,colsep=2ex,rowsep=2ex] \bnode{N1} && \bnode{N2} && \bnode{N3} &\pnode{N34}& \bnode{N4} && \bnode{N5} && \bnode{N6} && \bnode{N7} & \bnode{N8} && \bnode{N9} && \bnode{N10} &\pnode{N1011}& \bnode{N11} && \bnode{N12} && \bnode{N13} & \bnode{N14} &\pnode{N1415}& \bnode{N15} && \bnode{N16} && \bnode{N17} \\ \psset{angle=-90,linearc=.2} \ncline{->}{N1}{N2}^{$\sopen{2}$} \ncline{->}{N2}{N3}^{$\sopen{1}$} \ncline{->}{N3}{N4}^{$\cha$} \ncbar{->}{N34}{N8} \ncline{->}{N4}{N5}^{$\chb$} \ncline{->}{N5}{N6}^{$\sclose{1}$} \ncline{->}{N6}{N7}^{$\sclose{2}$} \ncline{->}{N8}{N9}^{$\sopen{2}$} \ncline{->}{N9}{N10}^{$\sopen{1}$} \ncline{->}{N10}{N11}^{$\chc$} \ncbar{->}{N1011}{N14} \ncline{->}{N11}{N12}^{$\sclose{1}$} \ncline{->}{N12}{N13}^{$\sclose{2}$} \ncline{->}{N14}{N15}^{$\sopen{1}$} \ncline{->}{N15}{N16}^{$\chd$} \ncline{->}{N16}{N17}^{$\sclose{1}$} \end{psmatrix} \end{center} An example (partial) run over this stack is pictured below, using transitions $\sastate_3 \satran{\sastate_2} \sastateset_3 \in \sadelta_3$, $\sastate_2 \satran{\sastate_1} \sastateset_2 \in \sadelta_2$, and $\sastate_1 \satrancol{\cha}{\sastateset_\branch} \sastateset_1 \in \sadelta_1$. The node labelled $\sastateset_\branch$ begins a run on the stack annotating $\cha$. 
\begin{center} \vspace{2ex} \begin{psmatrix}[nodealign=true,colsep=2ex,rowsep=1.25ex] \Rnode{N1}{$\sastate_3$} & & \Rnode{N2}{$\sastate_2$} & & \Rnode{N3}{$\sastate_1$} & \pnode{N34} & \Rnode{N4}{$\sastateset_1$} & & \Rnode{N5}{$\cdots$} & & \Rnode{N6}{$\sastateset_2$} & & \Rnode{N7}{$\cdots$} & & \Rnode{N8}{$\sastateset_3$} & & \Rnode{N9}{$\cdots$} & & \Rnode{N10}{$\sastateset_\branch$} & & \Rnode{N11}{$\cdots$} \\ \psset{nodesep=.5ex,angle=-90,linearc=.2} \ncline{->}{N1}{N2}^{$\sopen{2}$} \ncline{->}{N2}{N3}^{$\sopen{1}$} \ncline{->}{N3}{N4}^{$\cha$} \ncbar[arm=1.5ex,nodesepA=0]{->}{N34}{N10} \ncline{->}{N4}{N5}^{$\cdots$} \ncline{->}{N5}{N6}^{$\sclose{1}$} \ncline{->}{N6}{N7}^{$\cdots$} \ncline{->}{N7}{N8}^{$\sclose{2}$} \ncline{->}{N8}{N9}^{$\cdots$} \ncline{->}{N10}{N11}^{$\cdots$} \end{psmatrix} \end{center} \subsubsection{Accepting Stack Trees} Annotated stack tree automata are bottom-up tree automata whose transitions are labelled by states from which stacks are accepted. We denote by \[ \tatran{\tastate}{\idxi}{\numof}{\tastate'}{\sastate} \] a transition $\tup{\tastate, \idxi, \numof, \tastate', \sastate} \in \tadelta$. Observe that $\tastate' \notin \tafinals$ by definition. When a node $\tnode$ has children $\tnode_1, \ldots, \tnode_\numof$, the transition above could be applied to the $\idxi$th child $\tnode_\idxi$. It can be applied when $\tnode_\idxi$ is already labelled by $\tastate'$ and the stack $\stack_\idxi$ attached to $\tnode_\idxi$ is accepted from state $\sastate$ of the stack automaton. If it is applied, then $\tastate$ will be set as the label of the parent $\tnode$. Over runs of the automaton we enforce that every child is present and the transitions applied at each child agree on the state assigned to its parent. Let $\ap{\treestacklab}{\tnode} = \stack$ when $\ap{\treelabelling}{\tnode} = \tup{\control, \stack}$ or $\ap{\treelabelling}{\tnode} = \stack$. 
Given an order-$\maxord$ annotated stack tree $\tree = \tup{\treedom, \treelabelling}$ a run of an automaton $\ta$ is a $\tastates$-labelled tree $\tup{\treedom, \treelabelling'}$ where each leaf $\tnode$ of $\tree$ has $\ap{\treelabelling'}{\tnode} = \control$ whenever $\ap{\treelabelling}{\tnode} = \tup{\control, \stack}$ for some $\stack$, and each internal node $\tnode$ with children $\tnode 1, \ldots, \tnode \numof$ has a label $\ap{\treelabelling'}{\tnode} = \tastate$ only if we have transitions \[ \tatran{\tastate}{1}{\numof}{\tastate_1}{\sastate_1}, \ldots, \tatran{\tastate}{\numof}{\numof}{\tastate_\numof}{\sastate_\numof}, \] and $\ap{\treelabelling'}{\tnode \idxi} = \tastate_\idxi$ and $\ap{\treestacklab}{\tnode \idxi} \in \slang{\sastate_\idxi}{\ta}$ for all $1 \leq \idxi \leq \numof$. Finally $\ap{\treelabelling'}{\troot} = \tastate$ and we have a transition $\tatran{\tastatef}{1}{1}{\tastate}{\sastate}$ with $\tastatef \in \tafinals$ and $\ap{\treestacklab}{\troot} \in \slang{\sastate}{\ta}$. We write $\langof{\ta}$ to denote the set of trees accepted by $\ta$. \subsection{Notation and Conventions} \label{ssec:notations} \subsubsection{Number of Transitions} We assume for all pairs of states $\tastate, \tastate' \in \tastates$ and each $\idxi, \numof$ there is at most one transition of the form $\tatran{\tastate}{\idxi}{\numof}{\tastate'}{\sastate}$. Similarly we assume for all $\sastate \in \sastates_\midord$ and $\sastateset \subseteq \sastates_\midord$ that there is at most one transition of the form $\sastate \satran{\sastate'} \sastateset \in \sadelta_\midord$. This condition can easily be ensured by replacing pairs of transitions $\sastate \satran{\sastate_1} \sastateset$ and $\sastate \satran{\sastate_2} \sastateset$ with a single transition $\sastate \satran{\sastate'} \sastateset$, where $\sastate'$ accepts the union of the languages of stacks accepted from $\sastate_1$ and $\sastate_2$. Similarly for transitions in $\tadelta$. 
\subsubsection{Short-form Notation} Consider the example run shown above. This run reads the top of every level of the stack: the transition to $\sastateset_3$ reads the topmost order-$2$ stack, the transition to $\sastateset_2$ reads the order-$1$ stack at the top of this stack, and the transition to $\sastateset_1$ and $\sastateset_\branch$ reads the top character of the order-$1$ stack. The saturation algorithm relies on stack updates only affecting the topmost part of the stack. Thus, we need a notation for talking about the beginning of the run. Hence, we will write the run in the figure above (that reads the topmost parts of the stack) as a ``short-form'' transition \[ \satranfull{\sastate_3} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_3} \ . \] In the following, we define this notation formally, and generalise it to transitions of a stack tree automaton. In general, we write \[ \satranfull{\sastate} {\cha} {\sastateset_\branch} {\sastateset_1,\ldots,\sastateset_\midord} \text{ and } \satranfullk{\sastate} {\sastate'} {\sastateset_{\midord'+1}, \ldots, \sastateset_\midord} . \] In the first case, $\sastate \in \sastates_\midord$ and there exist $\sastate_{\midord-1}, \ldots, \sastate_1$ such that $\sastate \satran{\sastate_{\midord-1}} \sastateset_\midord \in \sadelta_\midord$, $\sastate_{\midord-1} \satran{\sastate_{\midord-2}} \sastateset_{\midord-1} \in \sadelta_{\midord-1}$, \ldots, $\sastate_1 \satrancol{\cha}{\sastateset_\branch} \sastateset_1 \in \sadelta_1$. Since we assume at most one transition between any state and set of states, the intermediate states $\sastate_{\midord-1}, \ldots, \sastate_1$ are uniquely determined by $\sastate, \cha, \sastateset_\branch$ and $\sastateset_1, \ldots, \sastateset_\midord$. 
In the second case, either $\midord = \midord'$ and $\sastate = \sastate' \in \sastates_\midord$, or $\midord > \midord'$ and we have $\sastate \in \sastates_\midord$, $\sastate' \in \sastates_{\midord'}$, and there exist $\sastate_{\midord-1}, \ldots, \sastate_{\midord'+1}$ with $\sastate \satran{\sastate_{\midord-1}} \sastateset_\midord \in \sadelta_\midord$, $\sastate_{\midord-1} \satran{\sastate_{\midord-2}} \sastateset_{\midord-1} \in \sadelta_{\midord-1}$, \ldots, $\sastate_{\midord'+2} \satran{\sastate_{\midord'+1}} \sastateset_{\midord'+2} \in \sadelta_{\midord'+2}$ and $\sastate_{\midord'+1} \satran{\sastate'} \sastateset_{\midord'+1} \in \sadelta_{\midord'+1}$. We lift the short-form transition notation to transitions from sets of states. We assume that state-sets $\sastates_\maxord, \ldots, \sastates_1$ are disjoint. Suppose $\sastateset = \set{\sastate_1,\ldots,\sastate_\numof}$ and for all $1 \leq \idxi \leq \numof$ we have $\satranfull{\sastate_\idxi} {\cha} {\sastateset^\idxi_\branch} {\sastateset^\idxi_1,\ldots,\sastateset^\idxi_\midord}$. Then we have $\satranfull{\sastateset} {\cha} {\sastateset_\branch} {\sastateset_1,\ldots,\sastateset_\midord}$ where $\sastateset_\branch = \bigcup_{1 \leq \idxi \leq \numof} \sastateset^\idxi_\branch$ and for all $\midord$, $\sastateset_\midord = \bigcup_{1 \leq \idxi \leq \numof} \sastateset^\idxi_\midord$. Because an annotation can only be of one order, we insist that $\sastateset_\branch \subseteq \sastates_\midord$ for some $\midord$. We generalise this to trees as follows. 
We write \[ \tatranfull{\tastate} {\idxi} {\numof} {\tastate'} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \quad \text{ and } \quad \tatranfullk{\tastate} {\idxi} {\numof} {\tastate'} {\sastate'} {\sastateset_{\midord+1}, \ldots, \sastateset_\maxord} \] when $\tatran{\tastate}{\idxi}{\numof}{\tastate'}{\sastate}$ and $\satranfull{\sastate} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord}$ or, respectively, $\satranfullk{\sastate} {\sastate'} {\sastateset_{\midord+1}, \ldots, \sastateset_\maxord}$. Finally, we remark that a transition to the empty set is distinct from having no transition. \section{Particulars of Annotated Stack Tree Automata} \label{sec:aut-particulars} Here we discuss various particulars of our stack tree automata: the definition of runs, the effective boolean algebra, membership, emptiness, transformations to normal form, and comparisons with other possible stack tree automata definitions. \subsection{Definition of Runs over Stacks} We give a more formal definition of a run accepting a stack. First we introduce some notation. For $\maxord \geq \midord > 1$, we write $\sastateset_1 \satran{\sastateset'} \sastateset_2$ to denote an order-$\midord$ transition from a set of states whenever $\sastateset_1 = \set{\sastate_1, \ldots, \sastate_\numof}$ and for each $1 \leq \idxi \leq \numof$ we have $\sastate_\idxi \satran{\sastate'_\idxi} \sastateset_\idxi$ and $\sastateset' = \set{\sastate'_1, \ldots, \sastate'_\numof}$ and $\sastateset_2 = \bigcup_{1 \leq \idxi \leq \numof} \sastateset_\idxi$. The analogous notation at order-$1$ is a special case of the short-form notation defined in Section~\ref{ssec:notations}.
Formally, fix an annotated stack tree automaton \[ \ta = \tup{ \tastates, \sastates_\maxord,\ldots,\sastates_1, \salphabet, \tadelta, \sadelta_\maxord,\ldots,\sadelta_1, \controls, \tafinals, \safinals_\maxord,\ldots,\safinals_1 } \] We say a node \emph{contains} a character if its exiting edge is labelled by the character. Recall the tree view of an annotated stack, an example of which is given below. \begin{center} \vspace{4ex} \begin{psmatrix}[nodealign=true,colsep=2ex,rowsep=2ex] \bnode{N1} && \bnode{N2} && \bnode{N3} &\pnode{N34}& \bnode{N4} && \bnode{N5} && \bnode{N6} && \bnode{N7} & \bnode{N8} && \bnode{N9} && \bnode{N10} &\pnode{N1011}& \bnode{N11} && \bnode{N12} && \bnode{N13} & \bnode{N14} &\pnode{N1415}& \bnode{N15} && \bnode{N16} && \bnode{N17} \\ \psset{angle=-90,linearc=.2} \ncline{->}{N1}{N2}^{$\sopen{2}$} \ncline{->}{N2}{N3}^{$\sopen{1}$} \ncline{->}{N3}{N4}^{$\cha$} \ncbar{->}{N34}{N8} \ncline{->}{N4}{N5}^{$\chb$} \ncline{->}{N5}{N6}^{$\sclose{1}$} \ncline{->}{N6}{N7}^{$\sclose{2}$} \ncline{->}{N8}{N9}^{$\sopen{2}$} \ncline{->}{N9}{N10}^{$\sopen{1}$} \ncline{->}{N10}{N11}^{$\chc$} \ncbar{->}{N1011}{N14} \ncline{->}{N11}{N12}^{$\sclose{1}$} \ncline{->}{N12}{N13}^{$\sclose{2}$} \ncline{->}{N14}{N15}^{$\sopen{1}$} \ncline{->}{N15}{N16}^{$\chd$} \ncline{->}{N16}{N17}^{$\sclose{1}$} \end{psmatrix} \end{center} Some stack (tree) $\stack$ is accepted by $\ta$ from states $\sastateset_0 \subseteq \sastates_\midord$ --- written $\stack \in \slang{\sastateset_0}{\ta}$ --- whenever the nodes of the tree can be labelled by elements of $\bigcup\limits_{1 \leq \midord' \leq \maxord} 2^{\sastates_{\midord'}}$ such that \begin{enumerate} \item $\sastateset_0$ is a subset of the label of the node containing the first $\sopen{\midord-1}$ character of the word, or if $\midord = 1$, the first character $\cha \in \salphabet$, and \item for any node containing a character $\sopen{\midord'}$ labelled by $\sastateset$, then for all $\sastate_1 \in \sastateset$, there 
exists some transition $\tup{\sastate_1, \sastate_2, \sastateset_1} \in \sadelta_{\midord'+1}$ such that $\sastate_2$ appears in the label of the succeeding node and $\sastateset_1$ is a subset of the label of the node succeeding the matching $\sclose{\midord'}$ character, and \item for any node containing a character $\sclose{\midord'}$, the label $\sastateset$ is a subset of $\safinals_{\midord'}$, and the final node of an order-$\midord$ stack is labelled by $\sastateset \subseteq \safinals_\midord$, and \item for any node containing a character $\cha \in \salphabet$, labelled by $\sastateset$, for all $\sastate' \in \sastateset$, there exists some transition $\tup{\sastate', \cha, \sastateset_\branch, \sastateset'} \in \sadelta_1$ such that $\sastateset_\branch$ is a subset of the label of the node annotating $\cha$, and $\sastateset'$ is a subset of the label of the succeeding node. \end{enumerate} That is, a stack automaton is essentially a stack- and annotation-aware alternating automaton, where annotations are treated as special cases of the alternation. \subsection{Effective Boolean Algebra} In this section we prove the following. \begin{proposition} Annotated stack tree automata form an effective boolean algebra. \end{proposition} \begin{proof} This follows from Proposition~\ref{prop:aut-union}, Proposition~\ref{prop:aut-intersect}, and Proposition~\ref{prop:aut-negate} below. \end{proof} \begin{proposition} \label{prop:aut-union} Given two automata \[ \ta = \tup{ \tastates, \sastates_\maxord,\ldots,\sastates_1, \salphabet, \tadelta, \sadelta_\maxord,\ldots,\sadelta_1, \controls, \tafinals, \safinals_\maxord,\ldots,\safinals_1 } \] and \[ \ta' = \tup{ \tastates', \sastates_\maxord',\ldots,\sastates_1', \salphabet, \tadelta', \sadelta_\maxord',\ldots,\sadelta_1', \controls', \tafinals', \safinals_\maxord',\ldots,\safinals_1' } \] there is an automaton $\ta''$ which recognises the union of the languages of $\ta$ and $\ta'$. 
\end{proposition} \begin{proof} Supposing $\ta$ and $\ta'$ are disjoint except for $\controls$ and no state $\control \in \controls$ has any incoming transition, the automaton we construct is: \[ \ta'' = \tup{\begin{array}{l} \tastates \cup \tastates', \\ \sastates_\maxord \cup \sastates_\maxord', \ldots, \sastates_1 \cup \sastates_1', \\ \salphabet, \\ \tadelta \cup \tadelta', \sadelta_\maxord \cup \sadelta_\maxord', \ldots, \sadelta_1 \cup \sadelta_1', \\ \controls, \\ \tafinals \cup \tafinals', \safinals_\maxord \cup \safinals_\maxord', \ldots, \safinals_1 \cup \safinals_1' \end{array}} \] Every run in $\ta$ (resp.\ $\ta'$) is a run of $\ta''$ as every state and transition of $\ta$ is in $\ta''$. A run in $\ta''$ is a run of $\ta$ or of $\ta'$: every state and transition of $\ta''$ is in $\ta$ or in $\ta'$, and, as the sets of states and transitions are disjoint except for the initial states (which do not have incoming transitions), a valid run is either entirely in $\ta$ or entirely in $\ta'$. \end{proof} \begin{proposition} \label{prop:aut-intersect} Given two automata \[ \ta = \tup{ \tastates, \sastates_\maxord,\ldots,\sastates_1, \salphabet, \tadelta, \sadelta_\maxord,\ldots,\sadelta_1, \controls, \tafinals, \safinals_\maxord,\ldots,\safinals_1 } \] and \[ \ta' = \tup{ \tastates', \sastates_\maxord',\ldots,\sastates_1', \salphabet, \tadelta', \sadelta_\maxord',\ldots,\sadelta_1', \controls', \tafinals', \safinals_\maxord',\ldots,\safinals_1' } \] there is an automaton $\ta''$ which recognises the intersection of the languages of $\ta$ and $\ta'$.
\end{proposition} \begin{proof} We construct the following automaton: \[ \ta'' = \tup{ \tastates'', \sastates_\maxord'',\ldots,\sastates_1'', \salphabet, \tadelta'', \sadelta_\maxord'',\ldots,\sadelta_1'', \controls'', \tafinals'', \safinals_\maxord'',\ldots,\safinals_1'' } \] For any pair of states $\sastate, \sastate' \in \sastates_\maxord \cup \sastates'_\maxord$ we can assume a state $\sastate \cap \sastate'$ accepting the intersection of the stacks accepted from $\sastate$ and $\sastate'$. This comes from the fact that stack automata form an effective boolean algebra~\cite{BCHS12}. The states and transitions in $\sastates_\maxord'',\ldots,\sastates_1''$, $\sadelta_\maxord'',\ldots,\sadelta_1''$, and $\safinals_\maxord'',\ldots,\safinals_1''$ come from this construction. For $\tastate_1 \in \tastates$ and $\tastate_2 \in \tastates'$, we include a state $\tastate_{1,2}$ in $\tastates''$ and, for every pair of transitions $\tatran{\tastate_1}{\idxi}{\numof}{\tastate_1'}{\sastate_1}$ and $\tatran{\tastate_2}{\idxi}{\numof}{\tastate_2'}{\sastate_2}$, we add the transition $\tatran{\tastate_{1,2}}{\idxi}{\numof}{\tastate_{1,2}'}{\sastate_1 \cap \sastate_2}$. We have $\tastate_{1,2} \in \tafinals''$ if and only if $\tastate_1 \in \tafinals$ and $\tastate_2 \in \tafinals'$. A run exists in $\ta''$ if and only if there is a run in $\ta$ and one in $\ta'$, by construction. \end{proof} \begin{proposition} \label{prop:aut-negate} Given an automaton \[ \ta = \tup{ \tastates, \sastates_\maxord,\ldots,\sastates_1, \salphabet, \tadelta, \sadelta_\maxord,\ldots,\sadelta_1, \controls, \tafinals, \safinals_\maxord,\ldots,\safinals_1 } \] there is an automaton $\ta'$ which accepts a tree if and only if it is not accepted by $\ta$. \end{proposition} \begin{proof} We define the complement as follows. We first assume that for each $\sastate \in \sastates_\maxord$ we also have $\comp{\sastate} \in \sastates_\maxord$ that accepts the complement of $\sastate$.
This follows from the complementation of stack automata shown at ICALP 2012~\cite{BCHS12}. Then, we define the complement of $\ta$ to be \[ \ta' = \tup{ \tastates', \sastates_\maxord,\ldots,\sastates_1, \salphabet, \tadelta', \sadelta_\maxord,\ldots,\sadelta_1, \controls, \tafinals', \safinals_\maxord,\ldots,\safinals_1 } \] where, letting $\numof_{\text{max}}$ be the maximum number of children that can appear in a tree accepted by $\ta$ (this information is easily obtained from the transitions of $\ta$), we have \[ \tastates' = \bigcup\limits_{\numof \leq \numof_{\text{max}}} \brac{2^{\tastates}}^\numof \ . \] That is, the automaton will label nodes of the tree with a set of states for each child. The $\idxi$th set will be the set of all labels $\tastate$ that could have come from the $\idxi$th child in a run of $\ta$. Since all children have to agree on the $\tastate$ that labels a node, a label $\tup{\tastateset_1, \ldots, \tastateset_\numof}$ means that the set $\tastateset_1 \cap \cdots \cap \tastateset_\numof$ is the set of states $\tastate$ that could have labelled the node in a run of $\ta$.
The transition relation $\tadelta'$ is the set of transitions of the form \[ \tatran{\tup{\tastateset_1, \ldots, \tastateset_\numof}} {\idxi} {\numof} {\tup{\tastateset'_1, \ldots, \tastateset'_{\numof'}}} {\sastate} \] where $\numof, \numof' \leq \numof_{\text{max}}$ and for all $\idxj \neq \idxi$, the set $\tastateset_\idxj$ is any subset of $\tastates$, and $\tastateset_\idxi \subseteq \tastates$ and $\sastate$ are such that \begin{itemize} \item $\sastate = \bigcap\limits_{\tastate \in \tastates} \sastate_\tastate$, and \item if $\tastate \in \tastateset_\idxi$ then \[ \sastate_\tastate = \sastate_1 \cup \cdots \cup \sastate_\numofl \] where $\tatran{\tastate}{\idxi}{\numof}{\tastate_1}{\sastate_1}$, \ldots, $\tatran{\tastate}{\idxi}{\numof}{\tastate_\numofl}{\sastate_\numofl}$ are all transitions to $\tastate$ via the $\idxi$th of $\numof$ children with the property that \[ \tastate_\idxj \in \tastateset'_1 \cap \cdots \cap \tastateset'_{\numof'} \] for all $\idxj$. \item if $\tastate \notin \tastateset_\idxi$ then \[ \sastate_\tastate = \comp{\sastate_1} \cap \cdots \cap \comp{\sastate_\numofl} \] where $\tatran{\tastate}{\idxi}{\numof}{\tastate_1}{\sastate_1}$, \ldots, $\tatran{\tastate}{\idxi}{\numof}{\tastate_\numofl}{\sastate_\numofl}$ are all transitions to $\tastate$ via the $\idxi$th of $\numof$ children with the property that \[ \tastate_\idxj \in \tastateset'_1 \cap \cdots \cap \tastateset'_{\numof'} \] for all $\idxj$. \end{itemize} In each transition, the sets $\tastateset_\idxj$ for all $\idxj \neq \idxi$ have no constraints. The automaton effectively guesses the set of labels that could have come from sibling nodes. The set $\tastateset_\idxi$ contains all labellings that could have come from the $\idxi$th child given the set of labellings that could have labelled the child. The final condition above insists that transitions to any state not in $\tastateset_\idxi$ could not have been applied to the child. 
The set of accepting states is \[ \setcomp{\tup{\tastateset_1, \ldots, \tastateset_\numof}} {\nexists \tastatef \in \finals . \tastatef \in \tastateset_1 \cap \cdots \cap \tastateset_\numof} \ . \] For the initial states, we alias $\control = \set{\control}$. We prove that this automaton is the complement of $\ta$. Associate to each node $\tnode$ the set $\tastateset_\tnode$ such that $\tastate \in \tastateset_\tnode$ iff there is some (partial, starting from the leaves) run of $\ta$ that labels $\tnode$ with $\tastate$. We prove that all runs of $\ta'$ label $\tnode$ with some $\tup{\tastateset_1, \ldots, \tastateset_\numof}$ such that $\tastateset_\tnode = \tastateset_1 \cap \cdots \cap \tastateset_\numof$. At the leaves of the tree this is immediate since $\ta$ must label the node with some $\control$, and $\ta'$ must label it with $\set{\control}$. Now, suppose we have a node $\tnode$ with children $\tnode 1$, \ldots, $\tnode \numof$ and the property holds for all children. Take some $\tastate \in \tastateset_\tnode$. Let $\tatran{\tastate}{1}{\numof}{\tastate_1}{\sastate_1}$, \ldots, $\tatran{\tastate}{\numof}{\numof}{\tastate_\numof}{\sastate_\numof}$ be the transitions used in the run labelling $\tnode$ with $\tastate$. For each $\idxi$ we must have by induction $\tastate_\idxi$ appearing in all sets labelling $\tnode \idxi$ in a run of $\ta'$. Now suppose $\ta'$ labels $\tnode$ with $\tup{\tastateset_1, \ldots, \tastateset_\numof}$ and moreover $\tastate \notin \tastateset_\idxi$. Then, by construction, we must have that the stack labelling $\tnode \idxi$ is accepted from $\comp{\sastate_\idxi}$. However, since the stack must have been accepted from $\sastate_\idxi$ we have a contradiction. Thus, $\tastate \in \tastateset_\idxi$. Now take some $\tastate \notin \tastateset_\tnode$.
Thus, there is some $\idxi$ such that, letting $\tatran{\tastate}{\idxi}{\numof}{\tastate_1}{\sastate_1}$, \ldots, $\tatran{\tastate}{\idxi}{\numof}{\tastate_\numofl}{\sastate_\numofl}$ be all transitions with $\tastate_\idxj$ appearing in $\tastateset_{\tnode\idxi}$, we know the stack labelling $\tnode\idxi$ is not accepted from any $\sastate_\idxj$ (and is accepted from all $\comp{\sastate_\idxj}$). Now suppose $\ta'$ labels $\tnode$ with $\tup{\tastateset_1, \ldots, \tastateset_\numof}$ and moreover $\tastate \in \tastateset_\idxi$. Then, by construction, we must have that the stack labelling $\tnode \idxi$ is accepted from some $\sastate_\idxj$, which is a contradiction. Thus, $\tastate \notin \tastateset_\idxi$. Hence $\tastateset_\tnode = \tastateset_1 \cap \cdots \cap \tastateset_\numof$ as required. Now, assume there is some accepting run of $\ta$ via final state $\tastatef$, and suppose for contradiction that there is also an accepting run of $\ta'$. Then necessarily the run of $\ta'$ has as its final label some tuple such that $\tastatef \in \tastateset_1 \cap \cdots \cap \tastateset_\numof$. This contradicts the fact that the run of $\ta'$ is accepting. Conversely, take some accepting run of $\ta'$. The accepting state $\tup{\tastateset_1, \ldots, \tastateset_\numof}$ of this run has no final state $\tastatef \in \tastateset_1 \cap \cdots \cap \tastateset_\numof$ and thus there can be no accepting run of $\ta$. \end{proof} \subsection{Membership} In this section we prove the following. \begin{proposition} The membership problem for annotated stack tree automata is decidable in linear time. \end{proposition} \begin{proof} We give an algorithm which checks if a tree $\tree$ is recognised by an automaton. We start by labelling every leaf labelled with control $\control$ with $\set{\control}$.
For every node $\tnode$ such that all of its children have been labelled, we label it by every state $\tastate$ such that there exist transitions $\tatran{\tastate}{1}{\numof}{\tastate_1}{\sastate_1}, \cdots, \tatran{\tastate}{\numof}{\numof}{\tastate_\numof}{\sastate_\numof}$ such that each child $\tnode \idxi$ is labelled by a set containing $\tastate_\idxi$ and the stack labelling $\tnode \idxi$ is accepted from $\sastate_\idxi$. Note that checking the acceptance of a stack from $\sastate_\idxi$ can be done in linear time~\cite{BCHS12}. If we can label the root by a final state, the tree is accepted (since, at each step, if we can label a node by a state, there is a run in which it is labelled by this state); otherwise, it is not. As checking whether a stack is accepted from a given state is linear in the size of the stack, and we visit each node once, exploring each of its possible transitions once, the complexity of this algorithm is linear in the size of the tree. \end{proof} \subsection{Emptiness} In this section we prove the following. \begin{proposition} The emptiness problem for annotated stack tree automata is PSPACE-complete. \end{proposition} \begin{proof} We give the following marking algorithm. We set $\mathrm{Marked} = \controls$. If there exists a state $\tastate$ not in $\mathrm{Marked}$ and some $\numof$ such that for each $\idxi \leq \numof$ we have a transition $\tatran{\tastate}{\idxi}{\numof}{\tastate_\idxi}{\sastate_\idxi}$ with $\tastate_\idxi \in \mathrm{Marked}$ and some stack recognised from $\sastate_\idxi$, we add $\tastate$ to $\mathrm{Marked}$. We stop when no such state exists. If $\mathrm{Marked} \cap \finals = \emptyset$, the recognised language is empty; otherwise, at least one tree is recognised. There are at most $|\tastates|$ steps in the algorithm, and deciding whether some stack is recognised from a given state $\sastate$ is in PSPACE~\cite{BCHS12}. Thus, the algorithm runs in PSPACE; the matching lower bound already holds for the emptiness problem of the underlying stack automata~\cite{BCHS12}.
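For illustration only, the marking loop of this proof can be sketched in Python; the tuple encoding of transitions and the oracle \texttt{stack\_nonempty} (deciding whether some stack is accepted from a given stack-automaton state) are our own toy assumptions, not part of the formal development.

```python
def nonempty_tree_states(states, controls, delta, stack_nonempty):
    """Fixpoint marking for the emptiness check.  A state is marked once
    some arity n admits, for every child position i <= n, a transition
    (q, i, n, q2, s) to an already-marked q2 whose stack-automaton state
    s accepts at least one stack (tested by the oracle stack_nonempty)."""
    marked = set(controls)
    changed = True
    while changed:
        changed = False
        for q in states - marked:
            arities = {n for (q0, _i, n, _q2, _s) in delta if q0 == q}
            for n in arities:
                if all(any(q2 in marked and stack_nonempty(s)
                           for (q0, i0, n0, q2, s) in delta
                           if q0 == q and i0 == i and n0 == n)
                       for i in range(1, n + 1)):
                    marked.add(q)
                    changed = True
                    break
    return marked

# toy automaton: leaves labelled "c", an internal state "q", a root state "qf"
states = {"qf", "q", "c"}
delta = {("q", 1, 2, "c", "s"), ("q", 2, 2, "c", "s"), ("qf", 1, 1, "q", "s")}
marked = nonempty_tree_states(states, {"c"}, delta, lambda s: True)
```

The language is then non-empty exactly when a final state ends up in the marked set.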
\end{proof} \subsection{Automata Transformations} In this section we show that annotated stack tree automata can always be transformed to meet the assumptions of the saturation algorithm. Take a stack tree automaton \[ \ta = \tup{ \tastates, \sastates_\maxord,\ldots,\sastates_1, \salphabet, \tadelta, \sadelta_\maxord,\ldots,\sadelta_1, \controls, \tafinals, \safinals_\maxord,\ldots,\safinals_1 } \ . \] We normalise this automaton as follows. It is easy to see that each step preserves the language accepted by the automaton. First we ensure that there are no transitions \[ \tatran{\control}{\idxi}{\numof}{\tastate}{\sastate} \ . \] We do this by introducing a new state $\newtastate{\control}$ for each $\control \in \controls$. Then, we replace each \[ \tatran{\control}{\idxi}{\numof}{\tastate}{\sastate} \] with \[ \tatran{\newtastate{\control}}{\idxi}{\numof}{\tastate}{\sastate} \] and for each \[ \tatran{\tastate}{\idxi}{\numof}{\control}{\sastate} \] in the resulting automaton, add a transition (not replace) \[ \tatran{\tastate}{\idxi}{\numof}{\newtastate{\control}}{\sastate} \ . \] Thus, we obtain an automaton with no incoming transitions to any $\control$. To ensure unique states labelling transitions, we replace each transition \[ \tatran{\tastate}{\idxi}{\numof}{\tastate'}{\sastate} \] with a transition \[ \tatran{\tastate}{\idxi}{\numof}{\tastate'}{\newsastate{\tastate}{\tastate'}} \] where there is one $\newsastate{\tastate}{\tastate'}$ for each pair of states $\tastate, \tastate'$. Then, when $\maxord > 1$, we have a transition $\newsastate{\tastate}{\tastate'} \satran{\sastate'} \sastateset$ for each $\sastate \satran{\sastate'} \sastateset$. Notice that, if there are multiple possible $\sastate$, then $\newsastate{\tastate}{\tastate'}$ accepts the union of their languages. Furthermore, $\newsastate{\tastate}{\tastate'}$ has no incoming transitions.
Moreover, we do not remove any transitions from $\sastate$ but observe that $\sastate$ is no longer initial. When $\maxord = 1$ we have a transition $\newsastate{\sastate}{\sastateset} \satrancol{\cha}{\sastateset_\branch} \sastateset'$ for each $\sastate \satrancol{\cha}{\sastateset_\branch} \sastateset'$. We then iterate from $\midord = \maxord$ down to $\midord = 3$ performing a similar transformation to the above. That is, we replace each transition in the order-$\midord$ transition set \[ \sastate \satran{\sastate'} \sastateset \] with a transition \[ \sastate \satran{\newsastate{\sastate}{\sastateset}} \sastateset \] where there is one $\newsastate{\sastate}{\sastateset}$ for each pair of $\sastate$ and $\sastateset$. Then we have a transition $\newsastate{\sastate}{\sastateset} \satran{\sastate''} \sastateset'$ for each $\sastate' \satran{\sastate''} \sastateset'$. Again, if there are multiple possible $\sastate'$ then $\newsastate{\sastate}{\sastateset} \satran{\sastate''} \sastateset'$ accepts the union of their languages. Furthermore, $\newsastate{\sastate}{\sastateset}$ has no incoming transitions. Finally, for $\midord = 2$ the procedure is similar. We replace each transition in the order-$2$ transition set \[ \sastate \satran{\sastate'} \sastateset \] with a transition \[ \sastate \satran{\newsastate{\sastate}{\sastateset}} \sastateset \] where there is one $\newsastate{\sastate}{\sastateset}$ for each pair of $\sastate$ and $\sastateset$. Then we have a transition $\newsastate{\sastate}{\sastateset} \satrancol{\cha}{\sastateset_\branch} \sastateset'$ for each $\sastate' \satrancol{\cha}{\sastateset_\branch} \sastateset'$. 
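As an illustration of the first normalisation step (removing incoming transitions to the states $\control$), here is a small Python sketch; the tuple encoding of transitions and the function name are our own toy assumptions, not part of the formal development.

```python
def isolate_controls(states, controls, delta):
    """First normalisation step: no control state c keeps an incoming
    transition.  Each c gets a fresh copy ("hat", c); transitions whose
    head is c are redirected to the copy, and each transition with c in
    the child position is duplicated to also use the copy.  A transition
    (q, i, n, q2, s) reads: the i-th of n children, labelled q2 with a
    stack accepted from s, contributes to parent label q."""
    fresh = {c: ("hat", c) for c in controls}
    new_delta = set()
    for (q, i, n, q2, s) in delta:
        head = fresh.get(q, q)            # c never appears as a head any more
        new_delta.add((head, i, n, q2, s))
        if q2 in controls:                # keep c as a child and add its copy
            new_delta.add((head, i, n, fresh[q2], s))
    return states | set(fresh.values()), new_delta

# toy automaton with a single control state "c"
_, new_delta = isolate_controls({"c", "q", "p"}, {"c"},
                                {("c", 1, 1, "q", "s"), ("p", 1, 1, "c", "t")})
```

Both the original transition into "c" and its duplicate towards the fresh copy survive, while no transition has "c" as its head any longer.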
\subsection{Alternative Tree Automaton Definition} An alternative definition of stack tree automata would use transitions \[ \tastate \leftarrow \tup{\tastate_1, \sastate_1}, \ldots, \tup{\tastate_\numof, \sastate_\numof} \] instead of \[ \tatran{\tastate}{1}{\numof}{\tastate_1}{\sastate_1}, \ldots, \tatran{\tastate}{\numof}{\numof}{\tastate_\numof}{\sastate_\numof} \ . \] However, due to the dependency such transitions introduce between $\sastate_1, \ldots, \sastate_\numof$, it is no longer possible to have a unique sequence $\sastate_1, \ldots, \sastate_\numof$ for each sequence $\tastate, \tastate_1, \ldots, \tastate_\numof$ (one cannot simply union the candidates for each $\sastate_\idxi$). For example, suppose we had $\tastate \leftarrow \tup{\tastate_1, \sastate_1}, \tup{\tastate_2, \sastate_2}$ and $\tastate \leftarrow \tup{\tastate_1, \sastate'_1}, \tup{\tastate_2, \sastate'_2}$ where $\sastate_1$ accepts $\stack_1$, $\sastate'_1$ accepts $\stack'_1$, $\sastate_2$ accepts $\stack_2$, and $\sastate'_2$ accepts $\stack'_2$. If we were to replace these two transitions with $\tastate \leftarrow \tup{\tastate_1, \sastate_1 \cup \sastate'_1}, \tup{\tastate_2, \sastate_2 \cup \sastate'_2}$ we would mix up the two transitions, allowing, for example, the first child to be labelled by $\stack_1$ and the second by $\stack'_2$. At first glance, our tree automaton model may appear weaker since we cannot enforce dependencies between the candidate $\sastate_\idxi$s in \[ \tatran{\tastate}{1}{\numof}{\tastate_1}{\sastate_1}, \ldots, \tatran{\tastate}{\numof}{\numof}{\tastate_\numof}{\sastate_\numof} \ . \] However, it turns out that we can overcome this problem with new copies of $\tastate$. That is, suppose we had a set $\tadelta$ of transitions of the form \[ \tastate \leftarrow \tup{\tastate_1, \sastate_1}, \ldots, \tup{\tastate_\numof, \sastate_\numof} \ .
\] We could simulate the resulting tree automaton using our model by introducing a state $\tup{\tastate, \tatrant}$ for each $\tastate$ and $\tatrant$. Given a transition $\tatrant$ of the above form, we can use a family of rules \[ \tatran{\tup{\tastate, \tatrant}} {1} {\numof} {\tup{\tastate_1, \tatrant_1}} {\sastate_1}, \ldots, \tatran{\tup{\tastate, \tatrant}} {\numof} {\numof} {\tup{\tastate_\numof, \tatrant_\numof}} {\sastate_\numof} \] for all sequences $\tatrant_1, \ldots, \tatrant_\numof$ of $\tadelta$. (Note that, although there are an exponential number of such families, we can create them all from a polynomial number of transitions). Note that when $\tastate_\idxi = \control$ we would use $\control$ on the right hand side instead of $\tup{\tastate_\idxi, \tatrant_\idxi}$ (recalling that $\control$ has no incoming transitions). \section{Completeness of Saturation} \label{sec:completeness} \begin{namedlemma}{lem:completeness}{Completeness of Saturation} The automaton $\ta$ obtained by saturation from $\ta_0$ is such that $\prestar{\gstrs}{\ta_0} \subseteq \langof{\ta}$. \end{namedlemma} \begin{proof} Completeness is proved via a straightforward induction over the length of the run witnessing $\tree \in \prestar{\gstrs}{\ta_0}$. In the base case we have $\tree \in \langof{\ta_0}$ and since $\ta$ was obtained only by adding transitions to $\ta_0$, we are done. For the induction, take $\tree \in \ap{\strule}{\tree'}$ where $\tree' \in \prestar{\gstrs}{\ta_0}$ and by induction $\ta$ has an accepting run of $\tree'$. We show how the transitions added by saturation can be used to build from the run over $\tree'$ an accepting run over $\tree$. We first consider the cases where $\strule$ adds or removes nodes to/from the tree. The remaining cases when the stack contents are altered are almost identical to the ICALP 2012 proof, and hence are left until the end for the interested reader. 
\begin{itemize} \item When $\strule = \stpush{\control}{\control_1, \ldots, \control_\numof}$ was applied to node $\tleaf{\tree}{\idxj}$ of $\tree$, we have \[ \tree' = \treemod{\treemod{\treemod{\tree} {\tleaf{\tree}{\idxj}} {\stack}} {\tleaf{\tree}{\idxj}1} {\tup{\control_1, \stack}} \cdots} {\tleaf{\tree}{\idxj}\numof} {\tup{\control_\numof, \stack}} \] where $\tup{\control, \stack}$ labelled $\tleaf{\tree}{\idxj}$. Take the initial transitions over $\tleaf{\tree}{\idxj}$ and $\tleaf{\tree}{\idxj}1$ to $\tleaf{\tree}{\idxj}\numof$ of the accepting run of $\tree'$ \[ \tatranfull{\tastate} {\idxi} {\numof'} {\tastate_1} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \] and \[ \tatranfull{\tastate_1} {1} {\numof} {\control_1} {\cha} {\sastateset^1_\branch} {\sastateset^1_1, \ldots, \sastateset^1_\maxord}, \ldots, \tatranfull{\tastate_1} {\numof} {\numof} {\control_\numof} {\cha} {\sastateset^\numof_\branch} {\sastateset^\numof_1, \ldots, \sastateset^\numof_\maxord} \] where the components of $\stack$ were accepted from $\sastateset_\branch$, $\sastateset_1, \ldots, \sastateset_\maxord$ and $\sastateset^1_\branch$, $\sastateset^1_1, \ldots, \sastateset^1_\maxord$, \ldots, $\sastateset^\numof_\branch$, $\sastateset^\numof_1, \ldots, \sastateset^\numof_\maxord$. By saturation we also have \[ \tatrant = \tatranfull{\tastate} {\idxi} {\numof'} {\control} {\cha} {\sastateset'_\branch} {\sastateset'_1, \ldots, \sastateset'_\maxord} \] where $\sastateset'_\branch = \sastateset_\branch \cup \sastateset^1_\branch \cup \cdots \cup \sastateset^\numof_\branch$ and, for all $\midord$, $\sastateset'_\midord = \sastateset_\midord \cup \sastateset^1_\midord \cup \cdots \cup \sastateset^\numof_\midord$, from which we obtain a run of $\ta$ over $\tree$ by simply replacing the transitions of the run over $\tree'$ identified above with $\tatrant$.
\item When $\strule = \stpop{\control_1, \ldots, \control_\numof}{\control}$ was applied to nodes $\tleaf{\tree}{\idxj}$ to $\tleaf{\tree}{\idxj+\numof-1}$ of $\tree$, we have $\tree' = \treedel{\tree} {\set{\tleaf{\tree}{\idxj}, \ldots, \tleaf{\tree}{\idxj+\numof-1}}}$ and $\tleaf{\tree}{\idxj}$, \ldots, $\tleaf{\tree}{\idxj+\numof-1}$ were the only children of their parent $\tnode$. Moreover, let $\tup{\control_1, \stack_1}$ label $\tleaf{\tree}{\idxj}$, \ldots, and $\tup{\control_\numof, \stack_\numof}$ label $\tleaf{\tree}{\idxj+\numof-1}$, let $\tnode$ have the stack $\stack$ in $\tree$, and let $\tup{\control, \stack}$ label $\tnode$ in $\tree'$. The initial transition over $\tnode$ of the accepting run of $\tree'$ was from state $\control$. By saturation we have \[ \tatrant_1 = \tatranfull{\control} {1} {\numof} {\control_1} {\cha_1} {\emptyset} {\emptyset, \ldots, \emptyset}, \quad \ldots, \quad \tatrant_\numof = \tatranfull{\control} {\numof} {\numof} {\control_\numof} {\cha_\numof} {\emptyset} {\emptyset, \ldots, \emptyset} \] for the $\cha_1, \ldots, \cha_\numof$ at the top of $\stack_1$, \ldots, $\stack_\numof$ respectively. We get from this a run of $\ta$ over $\tree$ by adding $\tatrant_1, \ldots, \tatrant_\numof$ to the run over $\tree'$ to read the nodes $\tleaf{\tree}{\idxj}$ to $\tleaf{\tree}{\idxj+\numof-1}$. \end{itemize} We now consider the cases where $\strule$ applies a stack operation to a single node $\tleaf{\tree'}{\idxj}$ of $\tree'$. Let \[ \tatrant' = \tatranfull{\tastate} {\idxi} {\numof} {\control'} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \] be the transition applied at node $\tleaf{\tree'}{\idxj}$ in the run. Additionally, let $\stack'$ be the stack labelling the node, and $\control'$ be the control state. There is a case for each type of stack operation, all of which are almost identical to the ICALP 2012 proof.
In all cases below, $\tree$ has the same tree structure as $\tree'$ and only differs on the labelling of $\tleaf{\tree'}{\idxj} = \tleaf{\tree}{\idxj}$. \begin{itemize} \item When $\strule = \gtrule{\control}{\srew{\chb}{\cha}}{\control'}$ then we also added the transition \[ \tatrant = \tatranfull{\tastate} {\idxi} {\numof} {\control} {\chb} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \] to $\ta$. We have \[ \stack' = \annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{\maxord} \stack_\maxord \] and since $\tleaf{\tree}{\idxj}$ is labelled by $\control$ and the stack \[ \stack = \annot{\chb}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{\maxord} \stack_\maxord \] we obtain an accepting run of $\tree$ by simply replacing the application of $\tatrant'$ with $\tatrant$. \item When $\strule = \gtrule{\control}{\scpush{\midord}}{\control'}$ then, when $\midord > 1$, we have \[ \stack' = \annot{\cha}{\stack_\midord} \scomp{1} \annot{\cha}{\stack_\branch} \stack_1 \scomp{2} \cdots \scomp{\maxord} \stack_\maxord \ . \] Let \[ \satranfull{\sastateset_1} {\cha} {\sastateset'_\branch} {\sastateset'_1} \] be the first transitions used to accept $\annot{\cha}{\stack_\branch}$. From the saturation algorithm we also added \[ \tatrant = \tatranfull{\tastate} {\idxi} {\numof} {\control} {\cha} {\sastateset'_\branch} {\sastateset'_1, \sastateset_2, \ldots, \sastateset_{\midord-1}, \sastateset_\midord \cup \sastateset_\branch, \sastateset_{\midord+1}, \ldots, \sastateset_\maxord} \] to $\ta$. Since $\tleaf{\tree}{\idxj}$ is labelled by $\control$ and the stack \[ \stack = \annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{\maxord} \stack_\maxord \] we obtain an accepting run of $\tree$ by replacing the application of $\tatrant'$ with $\tatrant$.
This follows because $\stack_1$ was accepted from $\sastateset'_1$, $\stack_\branch$ from $\sastateset'_\branch$ and $\stack_\midord$ was accepted from both $\sastateset_\midord$ and $\sastateset_\branch$. When $\midord = 1$ we have \[ \stack' = \annot{\cha}{\stack_1} \scomp{1} \annot{\cha}{\stack_\branch} \stack_1 \scomp{2} \cdots \scomp{\maxord} \stack_\maxord \ . \] Let \[ \satranfull{\sastateset_1} {\cha} {\sastateset'_\branch} {\sastateset'_1} \] be the first transitions used to accept $\annot{\cha}{\stack_\branch}$. From the saturation algorithm we also added \[ \tatrant = \tatranfull{\tastate} {\idxi} {\numof} {\control} {\cha} {\sastateset'_\branch} {\sastateset'_1 \cup \sastateset_\branch, \sastateset_2, \ldots, \sastateset_\maxord} \] to $\ta$. Since $\tleaf{\tree}{\idxj}$ is labelled by $\control$ and the stack \[ \stack = \annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{\maxord} \stack_\maxord \] we obtain an accepting run of $\tree$ by replacing the application of $\tatrant'$ with $\tatrant$. This follows because $\stack_\branch$ was accepted from $\sastateset'_\branch$ and $\stack_1$ was accepted from both $\sastateset'_1$ and $\sastateset_\branch$. \item When $\strule = \gtrule{\control}{\spush{\midord}}{\control'}$ then we have \[ \stack' = \stack_\midord \scomp{\midord} \stack_\midord \scomp{\midord+1} \stack_{\midord+1} \cdots \scomp{\maxord} \stack_\maxord \quad \text{ and } \quad \stack_\midord = \annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{(\midord-1)} \stack_{\midord-1} \ . \] Let \[ \satranfull{\sastateset_\midord} {\cha} {\sastateset'_\branch} {\sastateset'_1, \ldots, \sastateset'_\midord} \] be the transitions used to accept the first character of the second appearance of $\stack_\midord$.
From the saturation algorithm we also added $\tatrant =$ \[ \tatranfull{\tastate} {\idxi} {\numof} {\control} {\cha} {\sastateset_\branch \cup \sastateset'_\branch} {\sastateset_1 \cup \sastateset'_1, \sastateset_2 \cup \sastateset'_2, \ldots, \sastateset_{\midord-1} \cup \sastateset'_{\midord-1}, \sastateset'_\midord, \sastateset_{\midord+1}, \ldots, \sastateset_\maxord} \] to $\ta$. Since $\tleaf{\tree}{\idxj}$ is labelled by $\control$ and the stack \[ \stack = \annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{\maxord} \stack_\maxord \] we obtain an accepting run of $\tree$ by replacing the application of $\tatrant'$ with $\tatrant$. This follows because stacks $\stack_1$ to $\stack_{\midord-1}$ are accepted from $\sastateset_1$ and $\sastateset'_1$ to $\sastateset_{\midord-1}$ and $\sastateset'_{\midord-1}$ respectively, $\stack_\branch$ from $\sastateset_\branch$ and $\sastateset'_\branch$, and the remainder of the stack from $\sastateset'_\midord$, $\sastateset_{\midord+1}$, \ldots, $\sastateset_\maxord$. \item When $\strule = \gtrule{\control}{\spop{\midord}}{\control'}$ then we have \[ \stack' = \stack_\midord \scomp{\midord+1} \stack_{\midord+1} \cdots \scomp{\maxord} \stack_\maxord \] and \[ \stack = \annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{\maxord} \stack_\maxord \] for some $\cha$, $\stack_\branch$, $\stack_1$, \ldots, $\stack_{\midord-1}$. We break down $\tatrant'$ to find $\sastate_\midord$ such that \[ \tatranfullk{\tastate} {\idxi} {\numof} {\control'} {\sastate_\midord} {\sastateset_{\midord+1}, \ldots, \sastateset_\maxord} \] where $\sastate_\midord$ accepts $\stack_\midord$ and $\sastateset_{\midord+1}$ through to $\sastateset_\maxord$ accept $\stack_{\midord+1}$ through to $\stack_\maxord$ respectively.
By saturation we added the transition \[ \tatrant = \tatranfull{\tastate} {\idxi} {\numof} {\control} {\cha} {\emptyset} {\emptyset, \ldots, \emptyset, \set{\sastate_\midord}, \sastateset_{\midord+1}, \ldots, \sastateset_\maxord} \] from which we obtain an accepting run of $\stack$ with $\control$ as required. \item When $\strule = \gtrule{\control}{\scollapse{\midord}}{\control'}$ then we have \[ \stack' = \stack_\branch \scomp{\midord+1} \stack_{\midord+1} \cdots \scomp{\maxord} \stack_\maxord \] and \[ \stack = \annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{\maxord} \stack_\maxord \] for some $\cha$, $\stack_\branch$, $\stack_1$, \ldots, $\stack_\midord$. We break down $\tatrant'$ to find $\sastate_\branch$ such that \[ \tatranfullk{\tastate} {\idxi} {\numof} {\control'} {\sastate_\branch} {\sastateset_{\midord+1}, \ldots, \sastateset_\maxord} \] where $\sastate_\branch$ accepts $\stack_\branch$ and $\sastateset_{\midord+1}$ through to $\sastateset_\maxord$ accept $\stack_{\midord+1}$ through to $\stack_\maxord$ respectively. By saturation we added the transition \[ \tatrant = \tatranfull{\tastate} {\idxi} {\numof} {\control} {\cha} {\set{\sastate_\branch}} {\emptyset, \ldots, \emptyset, \sastateset_{\midord+1}, \ldots, \sastateset_\maxord} \] from which we obtain an accepting run of $\stack$ with $\control$ as required. \end{itemize} Thus, in all cases we find an accepting run of $\ta$, which completes the proof. \end{proof} \section{Conclusions and Future Work} We gave a saturation algorithm for annotated stack trees -- a generalisation of annotated pushdown systems with the ability to fork and join threads. We build on the saturation method implemented by the \cshore tool, and in future work we would like to implement this algorithm as well. We may also investigate higher-order versions of senescent ground tree rewrite systems~\cite{H14}, which generalise scope-bounding~\cite{lTN11} to trees.
\section{Context Bounding} In the model discussed so far, communication between different nodes of the tree had to be done locally (i.e. from parent to child, via the destruction of nodes). In Appendix~\ref{sec:context-bounding}, we show that the saturation algorithm can be extended to allow a bounded amount of communication between distant nodes of the tree. This communication is via a global state that can always be read, but only changed an \textit{a priori} fixed number of times. \begin{definition} [Order-$\maxord$ GASTRSs with Global State] An \emph{order-$\maxord$ GASTRS with global state} $\gstrs$ is a tuple $\tup{\salphabet, \controls, \globals, \rules}$ where $\salphabet$ is a finite stack alphabet, $\controls$ is a finite set of control states, $\globals$ is a finite set of global states, and $\rules \subset \globals \times \stacktreeops{\maxord}{\salphabet}{\controls} \times \globals$ is a finite set of operations. \end{definition} A configuration is a pair $\config{\gstate}{\tree}$ where $\gstate \in \globals$ and $\tree$ is an order-$\maxord$ annotated stack tree. We have $\config{\gstate}{\tree} \tran \config{\gstate'}{\tree'}$ whenever $\tup{\gstate, \strule, \gstate'} \in \rules$ and $\tree' \in \ap{\strule}{\tree}$. We define and solve the context-bounded reachability problem formally in Appendix~\ref{sec:context-bounding}. Intuitively, a run is context bounded for a bound $\lifespan$ if there are at most $\lifespan$ changes to the global state that occur in the run. The global context-bounded backwards reachability problem is to construct the set of configurations that may reach a given target set. We obtain an algorithm for this reachability problem by guessing the sequence of global states seen, and then performing a sequence of saturation steps, one for each global state. We begin with $\ta_0$. A saturation step gives us the set of trees that reach $\ta_0$ for a fixed $\gstate$.
Then, we update the saturated automaton to reflect a single application in the tree of a rule that changes the global state. Applying another saturation step gives us the set of trees that run in one global state, perform a state change, then run to $\ta_0$. By iterating this procedure, we solve the context-bounded reachability problem. \begin{namedtheorem}{thm:context-bounded}{Context-Bounded Reachability} The global context-bounded backwards reachability problem for GASTRS with global state is decidable. \end{namedtheorem} \section{Context Bounding} \label{sec:context-bounding} \easyicalp{ In the model discussed so far, communication between different nodes of the tree had to be done locally (i.e. from parent to child, via the destruction of nodes). We show that the saturation algorithm can be extended to allow a bounded amount of communication between distant nodes of the tree without destroying the nodes. We begin by defining an extension of our model with global state. We then show that the algorithm for computing $\prestar{\gstrs}{\ta_0}$ can easily be adapted to allow a bounded number of global state changes. \subsection{GASTRS with Global State} \begin{definition} [Order-$\maxord$ Ground Annotated Stack Tree Rewrite Systems with Global State] An \emph{order-$\maxord$ ground annotated stack tree rewrite system (GASTRS) with global state} $\gstrs$ is a tuple $\tup{\salphabet, \controls, \globals, \rules}$ where $\salphabet$ is a finite stack alphabet, $\controls$ is a finite set of control states, $\globals$ is a finite set of global states, and $\rules \subset \globals \times \stacktreeops{\maxord}{\salphabet}{\controls} \times \globals$ is a finite set of operations. \end{definition} A configuration of an order-$\maxord$ GASTRS with global state is a pair $\config{\gstate}{\tree}$ where $\gstate \in \globals$ and $\tree$ is an order-$\maxord$ annotated stack tree over alphabet $\salphabet$.
We have a transition $\config{\gstate}{\tree} \tran \config{\gstate'}{\tree'}$ whenever there is some $\tup{\gstate, \strule, \gstate'} \in \rules$ and $\tree' \in \ap{\strule}{\tree}$. We write $\tree \reaches \tree'$ when there is a run $\tree = \tree_0 \tran \cdots \tran \tree_\numof = \tree'$. }{} \subsection{The Context-Bounded Reachability Problem} The context-bounded reachability problem is to compute the set of configurations from which there is a run to some target set of configurations, and moreover, the global state is only changed at most $\lifespan$ times, where $\lifespan$ is some bound given as part of the input. \begin{definition}[Global Context-Bounded Backwards Reachability Problem] Given a GASTRS with global state $\gstrs$, and a stack tree automaton $\ta^0_\gstate$ for each $\gstate \in \globals$, and a bound $\lifespan$, the \emph{global context-bounded backwards reachability problem} is to compute a stack tree automaton $\ta_\gstate$ for each $\gstate \in \globals$, such that $\tree \in \langof{\ta_\gstate}$ iff there is a run \[ \config{\gstate}{\tree} = \config{\gstate_0}{\tree_0} \tran \cdots \tran \config{\gstate_\numof}{\tree_\numof} = \config{\gstate'}{\tree'} \] with $\tree' \in \langof{\ta^0_{\gstate'}}$ and there are at most $\lifespan$ transitions during the run such that $\gstate_\idxi \neq \gstate_{\idxi+1}$. \end{definition} \subsection{Decidability of Context-Bounded Reachability} Since the number of global state changes is bounded, the sequence of global state changes for any run witnessing context-bounded reachability is of the form $\gstate_0, \ldots, \gstate_\numof$ where $\numof \leq \lifespan$. Let $\gstateseqs$ be the set of such sequences. 
Suppose we could compute for each such sequence $\gstateseq = \gstate_0, \ldots, \gstate_\numof$ an automaton $\ta_\gstateseq$ such that $\tree \in \langof{\ta_\gstateseq}$ iff there is a run from $\config{\gstate_0}{\tree}$ to $\config{\gstate_\numof}{\tree'}$ with $\tree' \in \langof{\ta^0_{\gstate_\numof}}$ where the sequence of global states appearing on the run is $\gstateseq$. We could then compute an answer to the global context-bounded backwards reachability problem by taking \[ \ta_\gstate = \bigcup\limits_{\gstate\gstateseq \in \gstateseqs} \ta_{\gstate\gstateseq} \ . \] To compute $\ta_\gstateseq$ we first make the simplifying assumption (without loss of generality) that for each $\gstate \neq \gstate'$ there is a unique $\tup{\gstate, \strule, \gstate'} \in \rules$ and moreover $\strule = \stsop{\control}{\srew{\cha}{\chb}}{\control'}$. Furthermore, for all $\gstate \in \globals$ we define $\gstrs_\gstate = \tup{\salphabet, \controls, \rules_\gstate}$ where \[ \rules_\gstate = \setcomp{\strule} {\tup{\gstate, \strule, \gstate} \in \rules} \ . \] We compute $\ta_\gstateseq$ by backwards induction. Initially, when $\gstateseq = \gstate$ we compute \[ \ta_\gstateseq = \prestar{\gstrs_\gstate}{\ta^0_\gstate} \ . \] It is immediate that $\ta_\gstateseq$ is correct. Now assume that $\gstateseq = \gstate\gstateseq'$ and that we have already computed $\ta_{\gstateseq'}$; we show how to compute $\ta_\gstateseq$. The first step is to compute $\ta'_\gstateseq$ such that $\tree \in \langof{\ta'_\gstateseq}$ iff $\config{\gstate}{\tree} \tran \config{\gstate'}{\tree'}$ where $\gstate'$ is the first state of $\gstateseq'$ and $\tree' \in \langof{\ta_{\gstateseq'}}$. That is, $\ta'_\gstateseq$ accepts all trees from which we can change the current global state to $\gstate'$ by a single application of the unique rule $\tup{\gstate, \strule, \gstate'}$.
Once we have computed this automaton we simply need to build \[ \ta_\gstateseq = \prestar{\gstrs_\gstate}{\ta'_\gstateseq} \] and we are done. We first define $\ta''_\gstateseq$ which is a version of $\ta_{\gstateseq'}$ that has been prepared for a single application of $\tup{\gstate, \strule, \gstate'}$. From this we compute $\ta'_\gstateseq$. The strategy for building $\ta''_\gstateseq$ is to mark in the states which child of a node, if any, has the global state change rule applied within its subtree. At each level of the tree, this marking information enforces that only one subtree contains the application. Thus, when the root is reached, we know there is only one application in the whole tree. Note that this automaton does not contain any transitions corresponding to the actual application of the global change rule. These are added afterwards to compute $\ta'_\gstateseq$. Thus, if \[ \ta_{\gstateseq'} = \tup{ \tastates, \sastates_\maxord, \ldots, \sastates_1, \salphabet, \tadelta, \sadelta_\maxord, \ldots, \sadelta_1, \controls, \tafinals, \safinals_\maxord,\ldots,\safinals_1 } \] then \[ \ta''_\gstateseq = \tup{ \tastates', \sastates_\maxord, \ldots, \sastates_1, \salphabet, \tadelta', \sadelta_\maxord, \ldots, \sadelta_1, \controls, \tafinals', \safinals_\maxord,\ldots,\safinals_1 } \] where, letting $\numof$ be the maximum number of children permitted by any transition of $\ta_{\gstateseq'}$, \[ \tastates' = \controls \cup \tastates \times \set{0, \ldots, \numof} \quad \text{ and } \quad \tafinals' = \setcomp{\tup{\tastatef, \idxi}} {\tastatef \in \tafinals \land 0 < \idxi \leq \numof} \] and we define \[ \begin{array}{rcl} \tadelta' &=& \tadeltainit \cup \tadeltanoapp \cup \tadeltapass \\ \\ \tadeltainit &=& \setcomp{\tatran{\tup{\tastate, 0}}{\idxi}{\numof}{\control}{\sastate}} {\tatran{\tastate}{\idxi}{\numof}{\control}{\sastate} \in \tadelta} \cup \\ & & \setcomp{\tatran{\tup{\tastate, \idxj}}{\idxi}{\numof}{\control}{\sastate}}
{\tatran{\tastate}{\idxi}{\numof}{\control}{\sastate} \in \tadelta \land \idxi \neq \idxj} \\ \\ \tadeltanoapp &=& \setcomp{\tatran{\tup{\tastate, 0}} {\idxi} {\numof} {\tup{\tastate', 0}} {\sastate}} {\tatran{\tastate}{\idxi}{\numof}{\tastate'}{\sastate} \in \tadelta} \\ \\ \tadeltapass &=& \setcomp{\tatran{\tup{\tastate, \idxi}} {\idxi} {\numof} {\tup{\tastate', \idxj}} {\sastate}} {\tatran{\tastate}{\idxi}{\numof}{\tastate'}{\sastate} \in \tadelta} \cup \\ & & \setcomp{\tatran{\tup{\tastate, \idxj}} {\idxi} {\numof} {\tup{\tastate', 0}} {\sastate}} {\tatran{\tastate}{\idxi}{\numof}{\tastate'}{\sastate} \in \tadelta \land \idxi \neq \idxj} \ . \end{array} \] In the above, $\tadeltainit$ has two kinds of transitions. The first set are the initial transitions for the nodes to which the rewrite rule is not applied (indicated by the $0$). The second set are the rules where the rewrite rule is applied at the $\idxj$th sibling of the $\idxi$th child. Next, $\tadeltanoapp$ contains the transitions for subtrees which have not been marked as containing the application. Finally, $\tadeltapass$ propagates information about where the application actually occurred up the tree. The first set of transitions in $\tadeltapass$ are used when the $\idxi$th child contains the application (hence it labels the parent with the information that the $\idxi$th child contains the application). The second set of transitions guess that the $\idxj$th sibling contains the application. Thus, at any node, at most one child subtree may contain the application. The set of final states enforces that the application has occurred in some child.
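As a concrete illustration of this marking, the following Python sketch builds the marked transition sets from the unmarked ones. The encoding is ours, purely for presentation: a transition is a tuple $(q, i, n, \mathit{target}, s)$, marked states are pairs of a state and a child index (with $0$ meaning "no application"), and leaf transitions (whose targets are control states) are passed separately:

```python
# Hedged sketch of the marking construction: each automaton state is
# paired with the index of the child subtree containing the single
# application (0 = no application below).  The transition format
# (q, i, n, target, s) is an assumption made for this illustration.

def mark_transitions(delta, leaf_delta, max_children):
    marked = set()
    for (q, i, n, tgt, s) in leaf_delta:       # initial (leaf) transitions
        marked.add(((q, 0), i, n, tgt, s))     # application not at this leaf
        for j in range(1, max_children + 1):
            if j != i:                         # applied at the j-th sibling
                marked.add(((q, j), i, n, tgt, s))
    for (q, i, n, tgt, s) in delta:            # internal transitions
        marked.add(((q, 0), i, n, (tgt, 0), s))        # no application below
        for j in range(1, max_children + 1):
            marked.add(((q, i), i, n, (tgt, j), s))    # app below i-th child
            if j != i:
                marked.add(((q, j), i, n, (tgt, 0), s))  # app in a sibling
    return marked
```

Note that, as in the construction above, the transition at the leaf where the application actually occurs is not produced here; it is added separately afterwards.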
To compute $\ta'_\gstateseq$, letting $\strule = \stsop{\control}{\srew{\cha}{\chb}}{\control'}$ be the operation on the global state change, we add to $\ta''_\gstateseq$ a transition \[ \tatranfull{\tup{\tastate, \idxi}} {\idxi} {\numof} {\control} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \] for each \[ \tatranfull{\tastate} {\idxi} {\numof} {\control'} {\chb} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \] in $\ta_{\gstateseq'}$. We remark that, as defined, $\ta'_\gstateseq$ does not satisfy the prerequisites of the saturation algorithm, since initial states reading stacks might have incoming transitions, and, moreover, an initial state may label more than one transition. We can convert $\ta'_\gstateseq$ to the correct format using the automata manipulations in Appendix~\ref{sec:aut-particulars}. \begin{lemma} We have $\tree \in \langof{\ta'_\gstateseq}$ iff $\config{\gstate}{\tree} \tran \config{\gstate'}{\tree'}$ via a single application of the transition $\tup{\gstate, \strule, \gstate'}$ and $\tree' \in \langof{\ta_{\gstateseq'}}$. \end{lemma} \begin{proof} First, assume $\tree \in \langof{\ta'_\gstateseq}$. We argue that there is exactly one leaf $\tleaf{\tree}{\idxi}$ read by a transition $\tatran{\tup{\tastate, \idxi}} {\idxi} {\numof} {\control} {\sastate}$ and all other leaves are read by some $\tatran{\tup{\tastate, 0}} {\idxi} {\numof} {\control} {\sastate}$ or $\tatran{\tup{\tastate, \idxj}} {\idxi} {\numof} {\control} {\sastate}$ with $\idxj \neq \idxi$. If there is no such $\tleaf{\tree}{\idxi}$ then all leaf nodes are read by some $\tatran{\tup{\tastate, 0}} {\idxi} {\numof} {\control} {\sastate}$. Thus, all parents of the leaf nodes are labelled by $\tup{\tastate, 0}$. Thus, take any node $\tnode$ and assume its children are labelled by some $\tup{\tastate, 0}$.
It must be the case that $\tnode$ is also labelled by some $\tup{\tastate, 0}$ since otherwise it is labelled $\tup{\tastate, \idxi}$ and its $\idxi$th child must be labelled by some $\tup{\tastate, \idxj}$ with $\idxj > 0$, which is a contradiction. Hence, the accepting state of the run must also be some $\tup{\tastatef, 0}$ which is not possible. If there are two or more leaves labelled by some $\tup{\tastate, \idxi}$ with $\idxi > 0$ then each ancestor must also be labelled by some $\tup{\tastate, \idxi}$ with $\idxi > 0$. Take the nearest common ancestor $\tnode$ and suppose it is labelled $\tup{\tastate, \idxi}$. However, since it has two children labelled with non-zero second components, we must have used a transition $\tatran{\tup{\tastate, \idxi}} {\idxj} {\numof} {\tup{\tastate', \idxj'}} {\sastate}$ which, by definition, cannot exist. Hence, we have only one leaf $\tleaf{\tree}{\idxi}$ where \[ \tatranfull{\tup{\tastate, \idxi}} {\idxi} {\numof} {\control} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \] is used. Obtain $\tree'$ by applying $\stsop{\control}{\srew{\cha}{\chb}}{\control'}$ at this leaf. We build an accepting run of $\ta_{\gstateseq'}$ by taking the run of $\ta'_\gstateseq$ over $\tree$, projecting out the second component of each label, and replacing the transition used at $\tleaf{\tree}{\idxi}$ with \[ \tatranfull{\tastate} {\idxi} {\numof} {\control'} {\chb} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \ . \] Hence, we are done. In the other direction, take $\tree$ and $\tree'$ obtained by applying $\stsop{\control}{\srew{\cha}{\chb}}{\control'}$ at leaf $\tleaf{\tree}{\idxi}$. We take the accepting run of $\ta_{\gstateseq'}$ over $\tree'$ and build an accepting run of $\ta'_\gstateseq$ over $\tree$. Let \[ \tatranfull{\tastate} {\idxi} {\numof} {\control'} {\chb} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \] be the transition used at $\tleaf{\tree}{\idxi}$.
We replace it with \[ \tatranfull{\tup{\tastate, \idxi}} {\idxi} {\numof} {\control} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \ . \] Starting from above the root node, let the $\idxj$th child be the first on the path to $\tleaf{\tree}{\idxi}$ (the root node is the $1$st child of ``above the root node''). For all children except the $\idxj$th, take the transition $\tatran{\tastate}{\idxj'}{\numof}{\tastate'}{\sastate}$ used in the run over $\tree'$ and replace it with $\tatran{\tup{\tastate, \idxj}}{\idxj'}{\numof}{\tup{\tastate', 0}}{\sastate}$. The remainder of the run in the descendants of these children requires us to use $\tatran{\tup{\tastate, 0}}{\idxi'}{\numof}{\tup{\tastate', 0}}{\sastate}$ or $\tatran{\tup{\tastate, 0}}{\idxi'}{\numof}{\control}{\sastate}$ instead of $\tatran{\tastate}{\idxi'}{\numof}{\tastate'}{\sastate}$. For the $\idxj$th child, we use, instead of $\tatran{\tastate}{\idxj}{\numof}{\tastate'}{\sastate}$, the transition $\tatran{\tup{\tastate, \idxj}}{\idxj}{\numof}{\tup{\tastate', \idxj'}}{\sastate}$ when the $\idxj'$th child of this child leads to $\tleaf{\tree}{\idxi}$ or the previously identified transition when the $\idxj'$th child of this child is the leaf. We repeat the routine above until we reach $\tleaf{\tree}{\idxi}$, at which point we have constructed an accepting run of $\ta'_\gstateseq$ over $\tree$. \end{proof} By iterating the above procedure, we obtain \easyicalp{% our result. \begin{theorem}[Context-Bounded Reachability] The global context-bounded backwards reachability problem for GASTRS with global state is decidable. \end{theorem} }{% \reftheorem{thm:context-bounded}. } \section{Correctness of Saturation} \label{sec:correctness} We prove that saturation is correct and runs in $\maxord$-EXPTIME. \begin{proof}[Proof of \refproperty{prop:sat-correct}] The proof of completeness is given in Lemma~\ref{lem:completeness} and soundness is given in Lemma~\ref{lem:soundness}.
The complexity is derived as follows. We add at most one transition of the form $\tatran{\tastate}{\idxi}{\numof}{\control}{\sastate}$ for each $\tastate$, $\idxi$, $\numof$ and $\control$. Hence we add at most a polynomial number of transitions to $\tadelta$. Thus, in $\sastates_\maxord$ we have a polynomial number of states. We add at most one transition of the form $\sastate \satran{\sastate'} \sastateset$ for each $\sastate$ and set of states $\sastateset$. Thus we have at most an exponential number of transitions in $\sadelta_\maxord$. Thus, in $\sastates_\midord$ we have a number of states bounded by a tower of exponentials of height $(\maxord - \midord)$. Since we add at most one transition of the form $\sastate \satran{\sastate'} \sastateset$ for each $\sastate$ and $\sastateset$ we have a number of transitions bounded by a tower of exponentials of height $(\maxord - \midord + 1)$ giving the number of states in $\sastates_{\midord-1}$. Thus, at order-$1$ the number of new transitions is bounded by a tower of height $\maxord$, giving the $\maxord$-EXPTIME complexity. \end{proof} \section{Lower Bounds on the Reachability Problem} \label{sec:lower-bound} We show that the global backwards reachability problem is $\maxord$-EXPTIME-hard for an order-$\maxord$ GASTRS. The proof is by reduction from the $\maxord$-EXPTIME-hardness of determining the winner in an order-$\maxord$ reachability game~\cite{CW07}. \begin{proposition}[Lower Bound] The global backwards reachability problem for order-$\maxord$ GASTRSs is $\maxord$-EXPTIME-hard. \end{proposition} \begin{proof} We reduce from the problem of determining the winner in an order-$\maxord$ pushdown reachability game~\cite{CW07}. We first need to define higher-order stacks and their operations. Essentially, they are just annotated stacks without collapse. That is, order-$1$ stacks are of the form $\kstack{1}{\cha_1 \ldots \cha_\numof}$ where $\cha_1 \ldots \cha_\numof \in \salphabet^\ast$.
Order-$\midord$ stacks for $\midord > 1$ are of the form $\kstack{\midord}{\stack_1 \ldots \stack_\numof}$ where $\stack_1, \ldots, \stack_\numof$ are order-$(\midord-1)$ stacks. Their operations are \[ \hostackops{\maxord}{\salphabet} = \setcomp{\hocpush{\cha}}{\cha \in \salphabet} \cup \setcomp{\spush{\midord}}{2 \leq \midord \leq \maxord} \cup \setcomp{\spop{\midord}}{1 \leq \midord \leq \maxord} \ . \] The $\spush{\midord}$ and $\spop{\midord}$ operations are analogous to those on annotated stacks. We define $\ap{\hocpush{\cha}}{\stack} = \cha \scomp{1} \stack$. Such a game is defined as a tuple $\tup{\controls, \salphabet, \rules, \finals}$ where $\controls = \controls_1 \cup \controls_2$ is a finite set of control states partitioned into those belonging to player 1 and player 2 respectively, $\salphabet$ is a finite set of stack characters, $\rules \subseteq \controls \times \salphabet \times \hostackops{\maxord}{\salphabet} \times \controls$ is a finite set of transition rules, and $\finals \subseteq \controls$ is a set of target control states. Without loss of generality, we assume that for all $\control \in \controls_2$ and $\cha \in \salphabet$ there are exactly two rules in $\rules$ of the form $\horule{\control}{\cha}{\sop}{\control'}$ for some $\sop$ and $\control'$. A configuration is a tuple $\config{\control}{\stack}$ of a control state and higher-order stack. A winning play of a game from an initial configuration $\config{\control_0}{\stack_0}$ for player 1 is a tree labelled by configurations such that \begin{itemize} \item all leaf nodes are labelled by configurations $\config{\control}{\stack}$ with $\control \in \finals$. \item if an internal node is labelled $\config{\control}{\stack}$ with $\control \in \controls_1$ then the node has one child labelled by $\config{\control'}{\stack'}$ such that for some $\horule{\control}{\cha}{\sop}{\control'} \in \rules$ we have $\stack = \cha \scomp{1} \stack''$ for some $\stack''$ and $\stack' = \ap{\sop}{\stack}$.
\item if an internal node is labelled $\config{\control}{\stack}$ with $\control \in \controls_2$, $\stack = \cha \scomp{1} \stack'$ for some $\stack'$, and we have the rules $\horule{\control}{\cha}{\sop_1}{\control_1}$ and $\horule{\control}{\cha}{\sop_2}{\control_2}$, then the node has two children labelled by $\config{\control_1}{\stack_1}$ and $\config{\control_2}{\stack_2}$ with $\stack_1 = \ap{\sop_1}{\stack}$ and $\stack_2 = \ap{\sop_2}{\stack}$. \end{itemize} Note that we assume that the players can always apply all available rules for a given $\control$ and $\cha$ in the game (unless a control in $\finals$ is reached). This is standard and can be done with the use of a ``bottom-of-stack'' marker at each order. Determining if player 1 wins the game is known to be $\maxord$-EXPTIME-hard~\cite{CW07}. This amounts to asking whether a winning game tree can be constructed from the initial configuration $\config{\control_0}{\stack_0}$. That the winning game trees are regular can be easily seen: we simply assert that all leaf nodes are labelled by some $\control \in \finals$. We build a GASTRS that constructs play trees. We simulate a move in the game via several steps in the GASTRS, hence its control states will contain several copies of the control states of the game. Suppose we have a rule $\horule{\control}{\cha}{\sop}{\control'}$ where $\control \in \controls_1$. The first step in the simulation will be to check that the top character is $\cha$, for which we will use $\gtrule{\control}{\srew{\cha}{\cha}}{\ccopy{\control}{1}}$ where $\ccopy{\control}{1}$ is a new control state. The next step will create a new node in the play tree using $\stpush{\ccopy{\control}{1}}{\ccopy{\control'}{2}}$ which uses the intermediate control state $\ccopy{\control'}{2}$. The final step is to apply the stack operation and move to $\control'$. When $\sop = \spush{\midord}$ or $\sop = \spop{\midord}$ we can use $\gtrule{\ccopy{\control'}{2}}{\sop}{\control'}$.
When $\sop = \hocpush{\chb}$ we use another intermediate control state and $\gtrule{\ccopy{\control'}{2}}{\scpush{1}}{\ccopy{\control'}{3}}$ and $\gtrule{\ccopy{\control'}{3}}{\srew{\cha}{\chb}}{\control'}$. When $\control \in \controls_2$ with the rules $\horule{\control}{\cha}{\sop_1}{\control_1}$ and $\horule{\control}{\cha}{\sop_2}{\control_2}$ we use $\gtrule{\control}{\srew{\cha}{\cha}}{\ccopy{\control}{1}}$, \[ \stpush{\ccopy{\control}{1}}{\ccopy{\control_1}{2}, \ccopy{\control_2}{2}} \ , \] and similar rules to the previous case to apply $\sop$ and move to $\control_1$ or $\control_2$. Let the above GASTRS be $\gstrs$. From the initial single-node tree $\tree_0$ whose node is labelled $\tup{\control_0, \stack_0}$ it is clear that a tree whose leaf nodes are only labelled by control states in $\finals$ can be reached iff there is a winning play of player 1 in the reachability game. We can easily build a tree automaton $\ta_0$ that accepts only these target trees. Since checking membership $\tree_0 \in \prestar{\gstrs}{\ta_0}$ is linear in the size of the tree automaton representing $\prestar{\gstrs}{\ta_0}$ we obtain our lower bound as required. \end{proof} \section{Introduction} Modern-day programming increasingly embraces higher-order programming, not only via the inclusion of higher-order constructs in languages such as C++, JavaScript and Python, but also via the importance of \emph{callbacks} in highly popular technologies such as jQuery and Node.js. For example, to read a file in Node.js, one would write \begin{minted}{javascript} fs.readFile('f.txt', function (err, data) { ..use data.. }); \end{minted} In this code, the call to \verb+readFile+ spawns a new thread that asynchronously reads \verb+f.txt+ and sends the \verb+data+ to the function argument. This function will have access to, and frequently use, the closure information of the scope in which it appears. The rest of the program runs \emph{in parallel} with this call.
This style of programming is fundamental to both jQuery and Node.js programming, as well as being popular for programs handling input events or slow IO operations such as fetching remote data or querying databases (e.g. HTML5's indexedDB). Analysing such programs is a challenge for verification tools, which usually do not model higher-order recursion or closures accurately. However, several higher-order model-checking tools have been developed recently. This trend was pioneered by Kobayashi\etal~\cite{K11}, who developed an \emph{intersection type} technique for analysing \emph{higher-order recursion schemes} -- a model of higher-order computation. This was implemented in the \trecs tool~\cite{K09}, which demonstrated the feasibility of higher-order model-checking in practice, despite the high theoretical complexities ($(\maxord-1)$-EXPTIME for an order-$\maxord$ recursion scheme). This success has led to the development of several new tools for analysing recursion schemes: \gtrecs~\cite{K11b,gtrecs2}, \travmc~\cite{NRO12}, \cshore~\cite{BCHS13}, \horsat~\cite{BK13}, and \preface~\cite{RNO14}. In particular, the \cshore tool is based on an automata model of recursion schemes called \emph{annotated (or collapsible) pushdown systems}~\cite{HMOS08}. This is a generalisation of pushdown systems -- which accurately model first-order recursion -- to the higher-order case. \cshore implements a \emph{saturation} algorithm to perform a backwards reachability analysis, which first appeared in ICALP 2012~\cite{BCHS12}. Saturation was popularised by Bouajjani\etal~\cite{BEM97} for the analysis of pushdown systems, and was implemented in the successful Moped tool~\cite{S02,SBSE07}. \paragraph*{Contributions} In this work we introduce a generalisation of annotated pushdown systems: \emph{ground annotated stack tree rewrite systems (GASTRS)}.
A configuration of a GASTRS is an \emph{annotated stack tree} -- that is, a tree where each node is labelled by the configuration of an annotated pushdown system. Operations may update the leaf nodes of the tree, either by updating the configuration, creating new leaf nodes, or destroying them. Nodes are created and destroyed using \[ \stpush{\control}{\control_1, \ldots, \control_\numof} \text{ and } \stpop{\control'_1, \ldots, \control'_\numof}{\control'} \] which can be seen as spawning $\numof$ copies of the current process (including closure information) using the first rule, and then later joining these processes with the second rule, returning control to the previous execution (parent node). Alternatively, we can just use $\stpush{\control}{\control_1, \control_2}$ for a basic fork that does not join. This model is a generalisation of \emph{higher-order stack trees} recently introduced by Penelle~\cite{P15}, where the tree nodes are labelled by a restriction of annotated pushdown automata called \emph{higher-order pushdown automata}. As our main contribution, we show that the global backwards reachability problem for GASTRSs can be solved via a saturation technique. That is, given a regular target set of annotated stack trees, we compute a regular representation of all trees from which there is a run of the system to the target set. Note that being able to specify a target set of trees allows us to identify error states such as race conditions between threads. Our result is a generalisation of the ICALP 2012 algorithm, and as such, may be implemented as part of the \cshore tool. Moreover, we define a notion of regularity amenable to saturation which is also closed under the standard boolean operations. As a final contribution, we show that the model can be extended to allow a bounded amount of communication between separate nodes of the tree. 
I.e., we add a global state to the system and perform a ``context-bounded'' analysis~\cite{QR05}, where the global state can only be changed an \textit{a priori} fixed number of times. \paragraph*{Related Work} Annotated pushdown systems are a generalisation of higher-order pushdown systems, which provide a model of recursion schemes subject to a technical constraint called \emph{safety}~\cite{M76,KNU02} and are closely related to the Caucal hierarchy~\cite{CW03}. Parys has shown that safety is a genuine constraint on definable traces~\cite{P11}. Panic automata provided the first model of order-$2$ schemes, while annotated pushdown systems model schemes of arbitrary order. These formalisms have good model-checking properties, e.g. $\mu$-calculus decidability~\cite{O06,HMOS08}. Krivine machines can also be used to model recursion schemes~\cite{SW11}. There has been some work studying concurrent variants of recursion scheme model checking, including a context-bounded algorithm for recursion schemes~\cite{KI13}, and further underapproximation methods such as phase-bounded, ordered, and scope-bounded analyses~\cite{H13,S09}. These works allow only a fixed number of threads. Dynamic thread creation is permitted by both Yasukata\etal~\cite{YKM14} and by Chadha and Viswanathan~\cite{CV07}. In Yasukata\etal's model, recursion schemes may spawn and join threads. Communication is permitted only via nested locks, whereas in our model we allow shared memory, but only a bounded number of memory updates. Their work is a generalisation of results for order-1 pushdown systems~\cite{GLMSW11}. Chadha and Viswanathan allow threads to be spawned, but only one thread runs at a time, and each must run to completion. Moreover, the tree structure is not maintained. Saturation methods also exist for \emph{ground tree rewrite systems} and related systems~\cite{L03,B69,LS98}, though they use different techniques. Our context-bounded model relates to the weak GTRS with state introduced by Lin~\cite{L12}.
Adding such weak state to process rewrite systems was considered by Kret{\'{\i}}nsk{\'{y}}\etal~\cite{KRS04}. A saturation technique has also been developed for dynamic trees of pushdown processes~\cite{BMT05}. These are trees where each process on each node is active (in our model, only the leaf nodes are active). However, their spawn operations do not copy the current process, losing closure information. It would be interesting and non-trivial to study the combination of both approaches. Penelle proves decidability of first-order logic with reachability over rewriting graphs of ground stack tree rewriting systems~\cite{P15}. This may be used for a context-bounded reachability result for higher-order stack trees. This result relies on MSO decidability over the configuration graphs of higher-order pushdown automata, through a finite set interpretation of any rewriting graph of a ground stack tree rewriting system into a configuration graph of a higher-order pushdown automaton. This does not hold for annotated pushdown automata. \section{Preliminaries} \subsubsection*{Trees} An ordered tree of arity at most $\maxarity$ over a set of labels $\treelabels$ is a tuple $\tup{\treedom, \treelabelling}$ where $\treedom \subset \set{1,\ldots,\maxarity}^\ast$ is a tree domain such that $\tnode \tdiri \in \treedom$ implies $\tnode \in \treedom$ (prefix closed) and $\tnode \tdirj \in \treedom$ for all $\tdirj < \tdiri$ (younger-sibling closed), and $\treelabelling : \treedom \rightarrow \treelabels$ is a labelling of the nodes of the tree. \easyicalp{ Let $\tnode \treeanc \tnode'$ denote that $\tnode$ is an ancestor (inclusive) of $\tnode'$ in the tree.
}{} We write $\treemod{\tree}{\tnode}{\treelab}$ to denote the tree $\tree' = \tup{\treedom \cup \set{\tnode}, \treelabelling'}$ where $\ap{\treelabelling'}{\tnode} = \treelab$ and $\ap{\treelabelling'}{\tnode'} = \ap{\treelabelling}{\tnode'}$ for $\tnode' \neq \tnode$, whenever $\tree = \tup{\treedom, \treelabelling}$ and $\treedom \cup \set{\tnode}$ is a valid tree domain. We will also write $\tree' = \treedel{\tree}{\tnodeset}$ to denote the tree obtained by removing all subtrees rooted at $\tnode \in \tnodeset$ from $\tree$. \easyicalp{ That is $\tree' = \tup{\treedom', \treelabelling'}$ when $\tree = \tup{\treedom, \treelabelling}$ and \[ \begin{array}{rcl} \treedom' &=& \treedom \setminus \setcomp{\tnode'} {\tnode \in \tnodeset \land \tnode \treeanc \tnode'} \\ \ap{\treelabelling'}{\tnode} &=& \begin{cases} \ap{\treelabelling}{\tnode} & \tnode \in \treedom' \\ \text{undefined} & \text{otherwise.} \end{cases} \end{array} \] }{} \subsubsection*{Annotated stacks} Let $\salphabet$ be a set of stack symbols. An annotated stack of order-$\maxord$ is an order-$\maxord$ stack in which stack symbols are annotated with stacks of order at most $\maxord$. For the rest of the paper, we fix the maximal order to $\maxord$, and use $\midord$ to range between $\maxord$ and $1$. We simultaneously define for all $1 \leq \midord \leq \maxord$, the set $\akstacks{\midord}{\maxord}{\salphabet}$ of stacks of order-$\midord$ whose symbols are annotated by stacks of order at most $\maxord$. Note, we use subscripts to indicate the order of a stack. We ensure all stacks are finite by using the least fixed-point. When the maximal order $\maxord$ is clear, we write $\stacks{\midord}{\salphabet}$ instead of $\akstacks{\midord}{\maxord}{\salphabet}$. 
\begin{definition}[Annotated Stacks] The family of sets $\brac{\akstacks{\midord} {\maxord} {\salphabet}}_{1 \leq \midord \leq \maxord}$ is the smallest family (for point-wise inclusion) such that: \begin{itemize} \item for all $2 \leq \midord \leq \maxord$, $\akstacks{\midord}{\maxord}{\salphabet}$ is the set of all (possibly empty) sequences $\kstack{\midord}{\stack_1 \ldots \stack_\numof}$ with $\stack_1, \ldots, \stack_\numof \in \akstacks{\midord-1}{\maxord}{\salphabet}$. \item $\akstacks{1}{\maxord}{\salphabet}$ is the set of all sequences $\kstack{1}{ \annot{\cha_1}{\stack_1} \ldots \annot{\cha_\numof}{\stack_\numof}}$ with $\numof \geq 0$ such that, for all $1 \leq \idxi \leq \numof$, $\cha_\idxi$ is a stack symbol in $\salphabet$ and $\stack_\idxi$ is an annotated stack in $\bigcup\limits_{1 \leq \midord \leq \maxord} \akstacks{\midord}{\maxord}{\salphabet}$. \end{itemize} \end{definition} We write $\stack \scomp{\midord} \stack'$ --- where $\stack$ is order-$(\midord-1)$ --- to denote the stack obtained by placing $\stack$ on top of $\stack'$. That is, \begin{itemize} \item if $\stack' = \kstack{\midord}{\stack_1 \ldots \stack_\numof}$ then $\stack \scomp{\midord} \stack' = \kstack{\midord} {\stack \stack_1 \ldots \stack_\numof}$, and \item if $\stack' = \kstack{\midord'}{\stack_1 \ldots \stack_\numof}$ with $\midord' > \midord$ then $\stack \scomp{\midord} \stack' = \kstack{\midord'} {\brac{\stack \scomp{\midord} \stack_1} \stack_2 \ldots \stack_\numof}$. \end{itemize} This composition associates to the right. For example, the order-$3$ stack $\kstack{3}{\kstack{2}{\kstack{1}{\annot{\cha}{\stack} \chb}}}$ can be written $\stack_1 \scomp{3} \stack_2$ where $\stack_1$ is the order-$2$ stack $\kstack{2}{\kstack{1}{\annot{\cha}{\stack} \chb}}$ and $\stack_2$ is the empty order-$3$ stack $\kstack{3}{}$. Then $\stack_1 \scomp{3} \stack_1 \scomp{3} \stack_2$ is $\kstack{3}{\kstack{2}{\kstack{1}{\annot{\cha}{\stack} \chb}} \kstack{2}{\kstack{1}{\annot{\cha}{\stack} \chb}}}$.
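To make the composition operation concrete, the following Python sketch encodes an order-$\midord$ stack as a tuple of its order and its list of substacks, and an annotated symbol as a triple carrying its character and annotation; this encoding is ours, purely for illustration, and is not part of the formal development.

```python
def compose(s, k, t):
    """s composed at order k onto t: place s (of order k-1) on top of the
    order-k part of t.

    A stack of order k is encoded as ("stk", k, [substacks]); an annotated
    symbol (order 0) as ("sym", character, annotation).
    """
    kind, order, items = t
    assert kind == "stk" and order >= k
    if order == k:
        # t is itself order k: prepend s to its sequence of substacks
        return ("stk", k, [s] + items)
    # t has higher order: compose s into the topmost substack of t
    assert items, "composition into a higher-order stack needs a top substack"
    return ("stk", order, [compose(s, k, items[0])] + items[1:])

# The order-3 example from the text, with the annotation left opaque:
# s1 is the order-2 stack containing the order-1 stack [a^u b],
# s2 is the empty order-3 stack.
u = "u"
s1 = ("stk", 2, [("stk", 1, [("sym", "a", u), ("sym", "b", None)])])
s2 = ("stk", 3, [])

once = compose(s1, 3, s2)       # order-3 stack holding one copy of s1
twice = compose(s1, 3, once)    # order-3 stack holding two copies of s1
assert twice == ("stk", 3, [s1, s1])
```

Composing $\stack_1$ onto $\stack_2$ twice yields the order-$3$ stack containing two copies of $\stack_1$, matching the example above; composition must associate to the right because each step requires its left argument to have order exactly one less than the composition order.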
Note that we cannot write $\brac{\stack_1 \scomp{\midord} \stack_2} \scomp{\midord} \stack_3$ since $\brac{\stack_1 \scomp{\midord} \stack_2}$ is not order-$(\midord-1)$. \subsubsection*{Operations on Order-$\maxord$ Annotated Stacks} For a given alphabet $\salphabet$, we define the set $\stackops{\maxord}{\salphabet}$ of stack operations inductively as follows: \[ \begin{array}{c} \stackops{0}{\salphabet} = \setcomp{\srew{\cha}{\chb}}{\cha, \chb \in \salphabet} \\ \stackops{\maxord}{\salphabet} = \set{\scpush{\maxord}, \spush{\maxord}, \spop{\maxord}, \scollapse{\maxord}} \cup \stackops{(\maxord-1)}{\salphabet} \end{array} \] We define each operation for a stack $\stack$. Annotations are created by $\scpush{\midord}$, which adds a character to the top of a stack $\stack \scomp{(\midord+1)} \stack'$ annotated by $\ap{\spop{\midord}}{\stack}$. This gives the new character access to the context in which it was created. \begin{enumerate} \item We set $\ap{\srew{\cha}{\chb}} {\annot{\cha}{\stack'} \scomp{1} \stack} = \annot{\chb}{\stack'} \scomp{1} \stack$. \item We set $\ap{\scpush{\midord}}{\stack} = \annot{\cha}{\stack_\midord} \scomp{1} \stack$ when $\stack = \annot{\cha}{\stack_1} \scomp{1} \stack_2 \scomp{2} \cdots \scomp{\midord} \stack_\midord \scomp{(\midord+1)} \cdots \scomp{\maxord} \stack_\maxord$. \item We set $\ap{\spush{\midord}}{\stack \scomp{\midord} \stack'} = \stack \scomp{\midord} \stack \scomp{\midord} \stack'$. \item We set $\ap{\spop{\midord}}{\stack \scomp{\midord} \stack'} = \stack'$. \item We set $\ap{\scollapse{\midord}}{\annot{\cha}{\stack} \scomp{1} \stack_1 \scomp{(\midord+1)} \stack_2} = \stack \scomp{(\midord+1)} \stack_2$ when $\stack$ is order-$\midord$ and $\maxord > \midord \geq 1$; and $\ap{\scollapse{\maxord}}{\annot{\cha}{\stack} \scomp{1} \stack'} = \stack$ when $\stack$ is order-$\maxord$.
\end{enumerate} \section{Backwards Reachability Analysis} Fix a GASTRS $\gstrs$ and automaton $\ta_0$ for the remainder of the article. We define \[ \prestar{\gstrs}{\ta_0} = \setcomp{\tree} {\tree \reaches \tree' \land \tree' \in \langof{\ta_0}} \ . \] We give a saturation algorithm for computing an automaton $\ta$ such that $\langof{\ta} = \prestar{\gstrs}{\ta_0}$. Indeed, we prove the following theorem. The upper bound is discussed in the sequel. The lower bound comes from alternating higher-order pushdown automata~\cite{CW07} and appears in Appendix~\ref{sec:lower-bound}. \begin{theorem} Given an order-$\maxord$ GASTRS $\gstrs$ and stack tree automaton $\ta_0$, $\prestar{\gstrs}{\ta_0}$ is regular and computable in $\maxord$-EXPTIME, which is optimal. \end{theorem} For technical reasons, we assume that for each $\control$ there is at most one rule $\stpop{\control_1, \ldots, \control_\numof}{\control}$. E.g., we cannot have both $\stpop{\control_1, \control_2}{\control}$ and $\stpop{\control'_1, \control'_2}{\control}$. This is not a real restriction since we can introduce intermediate control states. E.g. $\stpop{\control_1, \control_2}{\control_{1, 2}}$ and $\gtrule{\control_{1, 2}}{\srew{\cha}{\cha}}{\control}$ and $\stpop{\control'_1, \control'_2}{\control'_{1, 2}}$ and $\gtrule{\control'_{1, 2}}{\srew{\cha}{\cha}}{\control}$ for all $\cha \in \salphabet$. \subsubsection*{Initial States} We say that all states in $\controls$ are \emph{initial}. Furthermore, a state $\sastate$ is initial if there is a transition $\tatran{\tastate}{\idxi}{\numof}{\tastate'}{\sastate}$ or if there exists a transition $\sastate' \satran{\sastate} \sastateset$ in some $\sadelta_\midord$. We assume that initial states have no incoming transitions and that they are not final\footnote{Hence automata cannot accept empty stacks from initial states. This can be overcome by introducing a bottom-of-stack symbol.}.
Furthermore, we assume any initial state appears on only one transition. \subsubsection*{New Transitions} When we add a transition $\tatranfull{\tastate} {\idxi} {\numof} {\tastate'} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord}$ to the automaton, we add $\tatran{\tastate}{\idxi}{\numof}{\tastate'}{\sastate_\maxord}$ to $\tadelta$ if it does not exist, else we use the existing $\sastate_\maxord$; then for each $\maxord \geq \midord > 1$, we add $\sastate_\midord \satran{\sastate_{\midord-1}} \sastateset_\midord$ to $\sadelta_\midord$ if a transition between $\sastate_\midord$ and $\sastateset_\midord$ does not already exist, otherwise we use the existing transition and state $\sastate_{\midord-1}$; finally, we add $\sastate_1 \satrancol{\cha}{\sastateset_\branch} \sastateset_1$ to $\sadelta_1$. \subsubsection*{The Algorithm} We give the algorithm formally here, with intuitive explanations given in the following section. Saturation is a fixed point algorithm. We begin with a GASTRS $\gstrs = \tup{\salphabet, \rules}$ and a target set of trees given by $\ta_0$. Then, we apply the saturation function $\satfn$ and obtain a sequence of automata $\ta_{\idxi+1} = \ap{\satfn}{\ta_\idxi}$. The algorithm terminates when $\ta_{\idxi+1} = \ta_{\idxi}$, in which case we will have $\langof{\ta_{\idxi+1}} = \prestar{\gstrs}{\ta_0}$. Following the conventions described above for adding transitions to the automaton, we can only add a finite number of states to the automaton, which implies that only a finite number of transitions can be added. Hence, we must necessarily reach a fixed point for some $\idxi$. Given $\ta_\idxi$, we define $\ta_{\idxi+1} = \ap{\satfn}{\ta_\idxi}$ to be the automaton obtained by adding to $\ta_\idxi$ the following transitions and states.
\begin{itemize} \item For each rule $\stsop{\control}{\srew{\cha}{\chb}}{\control'} \in \rules$ and transition $\tatranfull{\tastate} {\idxj} {\numof} {\control'} {\chb} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord}$ in $\ta_\idxi$, add to $\ta_{\idxi+1}$ the transition $\tatranfull{\tastate} {\idxj} {\numof} {\control} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord}$. \item For each rule $\stsop{\control}{\scpush{\midord}}{\control'} \in \rules$, transition $\tatranfull{\tastate} {\idxj} {\numof} {\control'} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord}$, and $\sastateset_1 \satrancol{\cha}{\sastateset'_\branch} \sastateset'_1$ in $\ta_\idxi$, add \[ \tatranfull{\tastate} {\idxj} {\numof} {\control} {\cha} {\sastateset'_\branch} {\sastateset'_1, \sastateset_2, \ldots, \sastateset_{\midord-1}, \sastateset_\midord \cup \sastateset_\branch, \sastateset_{\midord+1}, \ldots, \sastateset_\maxord} \] to $\ta_{\idxi+1}$ when $\midord > 1$, and $\tatranfull{\tastate} {\idxj} {\numof} {\control} {\cha} {\sastateset'_\branch} {\sastateset'_1 \cup \sastateset_\branch, \sastateset_2, \ldots, \sastateset_\maxord}$ when $\midord = 1$. \item For each rule $\stsop{\control}{\spush{\midord}}{\control'} \in \rules$ and $\tatranfull{\tastate} {\idxj} {\numof} {\control'} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord}$ and $\satranfull{\sastateset_\midord} {\cha} {\sastateset'_\branch} {\sastateset'_1, \ldots, \sastateset'_\midord}$ in $\ta_\idxi$, add to $\ta_{\idxi+1}$ \[ \tatranfull{\tastate} {\idxj} {\numof} {\control} {\cha} {\sastateset_\branch \cup \sastateset'_\branch} {\sastateset_1 \cup \sastateset'_1, \ldots, \sastateset_{\midord-1} \cup \sastateset'_{\midord-1}, \sastateset'_\midord, \sastateset_{\midord+1}, \ldots, \sastateset_\maxord} \ . 
\] \item For each rule $\stsop{\control}{\spop{\midord}}{\control'} \in \rules$ and $\tatranfullk{\tastate} {\idxj} {\numof} {\control'} {\sastate_\midord} {\sastateset_{\midord+1}, \ldots, \sastateset_\maxord}$ in $\ta_\idxi$, add to $\ta_{\idxi+1}$ for each $\cha \in \salphabet$ \[ \tatranfull{\tastate} {\idxj} {\numof} {\control} {\cha} {\emptyset} {\emptyset, \ldots, \emptyset, \set{\sastate_\midord}, \sastateset_{\midord+1}, \ldots, \sastateset_\maxord} \ . \] \item For each rule $\stsop{\control}{\scollapse{\midord}}{\control'} \in \rules$ and $\tatranfullk{\tastate} {\idxj} {\numof} {\control'} {\sastate_\midord} {\sastateset_{\midord+1}, \ldots, \sastateset_\maxord}$ in $\ta_\idxi$, add to $\ta_{\idxi+1}$ for each $\cha \in \salphabet$ \[ \tatranfull{\tastate} {\idxj} {\numof} {\control} {\cha} {\set{\sastate_\midord}} {\emptyset, \ldots, \emptyset, \sastateset_{\midord+1}, \ldots, \sastateset_\maxord} \ . \] \item For each rule $\stpush{\control}{\control_1, \ldots, \control_\numof} \in \rules$ and $\tatranfull{\tastate} {\idxj} {\numof'} {\tastate'} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord}$ and \[ \tatranfull{\tastate'} {1} {\numof} {\control_1} {\cha} {\sastateset^1_\branch} {\sastateset^1_1, \ldots, \sastateset^1_\maxord}, \ldots, \tatranfull{\tastate'} {\numof} {\numof} {\control_\numof} {\cha} {\sastateset^\numof_\branch} {\sastateset^\numof_1, \ldots, \sastateset^\numof_\maxord} \] in $\ta_\idxi$, add to $\ta_{\idxi+1}$ \[ \tatranfull{\tastate} {\idxj} {\numof'} {\control} {\cha} {\sastateset'_\branch} {\sastateset'_1, \ldots, \sastateset'_\maxord} \] where $\sastateset'_\branch = \sastateset_\branch \cup \sastateset^1_\branch \cup \cdots \cup \sastateset^\numof_\branch$ and for all $\midord$, we have $\sastateset'_\midord = \sastateset_\midord \cup \sastateset^1_\midord \cup \cdots \cup \sastateset^\numof_\midord$.
\item For each rule $\stpop{\control_1, \ldots, \control_\numof}{\control} \in \rules$ and $\cha_1, \ldots, \cha_\numof \in \salphabet$ add to $\ta_{\idxi+1}$ the transitions $\tatranfull{\control} {\idxj} {\numof} {\control_\idxj} {\cha_\idxj} {\emptyset} {\emptyset, \ldots, \emptyset}$ for each $1 \leq \idxj \leq \numof$. \end{itemize} \subsubsection*{Intuition of the Algorithm} Since rules may only be applied to the leaves of the tree, the algorithm works by introducing new initial transitions that are derived from existing initial transitions. Consider a tree $\tree$ with a leaf node $\tnode$ labelled by $\brac{\annot{\chb}{\stack_\branch} \scomp{1} \stack}$. Suppose this tree were already accepted by the automaton, and the initial transition $\tatranfull{\tastate} {\idxi} {\numof} {\control} {\chb} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord}$ is applied to $\tnode$. If we had a rule $\stsop{\control'}{\srew{\cha}{\chb}}{\control}$ then we could apply this rule to a tree $\tree'$ that is identical to $\tree$ except that $\tnode$ is labelled by $\brac{\annot{\cha}{\stack_\branch} \scomp{1} \stack}$. After the application, we would obtain $\tree$. Thus, if $\tree$ is accepted by the automaton, then $\tree'$ should be accepted. The saturation algorithm will derive from the above rule and transition a new transition $\tatranfull{\tastate} {\idxi} {\numof} {\control'} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord}$. This transition simply changes the control state and top character of the stack. Thus, we can substitute this transition into the accepting run of $\tree$ to build an accepting run of $\tree'$. For a rule $\stpop{\control_1}{\control}$ we would introduce a transition $\tatranfull{\control} {1} {1} {\control_1} {\chb} {\emptyset} {\emptyset, \ldots, \emptyset}$.
We can add this transition to any accepting run of a tree with a leaf in control state $\control$ and it will have the effect of adding a new node with control state $\control_1$. Since we can obtain the original tree by applying the rule, the extended tree should also be accepted. The intuition is similar for the $\spop{\midord}$ and $\scollapse{\midord}$ operations. To understand the intuition for the $\spush{\midord}$, $\scpush{\midord}$ and $\stpush{\control}{\control_1, \ldots, \control_\numof}$ rules, one must observe that these rules, applied backwards, have the effect of replacing multiple copies of identical stacks with a single stack. Thus, the new transitions accept the intersection of the stacks that could have been accepted by multiple previous transitions: taking the union of two sets of automaton states means that the intersection of the languages must be accepted. \subsubsection*{Correctness} We have the following \easyicalp{ property. }{ property, proved in Appendix~\ref{sec:correctness}. } \begin{namedproperty}{prop:sat-correct}{Correctness of Saturation} Given an order-$\maxord$ GASTRS, saturation runs in $\maxord$-EXPTIME and builds an automaton $\ta$ such that $\langof{\ta} = \prestar{\gstrs}{\ta_0}$. \end{namedproperty} \easyicalp{ \begin{proof} The proof of completeness is given in Lemma~\ref{lem:completeness} and soundness is given in Lemma~\ref{lem:soundness}. The complexity is derived as follows. We add at most one transition of the form $\tatran{\tastate}{\idxi}{\numof}{\control}{\sastate}$ for each $\tastate$, $\idxi$, $\numof$ and $\control$. Hence we add at most a polynomial number of transitions to $\tadelta$, and thus a polynomial number of states to $\sastates_\maxord$. We add at most one transition of the form $\sastate \satran{\sastate'} \sastateset$ for each $\sastate$ and set of states $\sastateset$. Thus we have at most an exponential number of transitions in $\sadelta_\maxord$.
Thus, in $\sastates_\midord$ we have a number of states bounded by a tower of exponentials of height $(\maxord - \midord)$. Since we add at most one transition of the form $\sastate \satran{\sastate'} \sastateset$ for each $\sastate$ and $\sastateset$ we have a number of transitions bounded by a tower of exponentials of height $(\maxord - \midord + 1)$ giving the number of states in $\sastates_{\midord-1}$. Thus, at order-$1$ the number of new transitions is bounded by a tower of height $\maxord$, giving the $\maxord$-EXPTIME complexity. \end{proof} }{ Completeness is easily proved by induction over the length of the run to a target configuration. Soundness generalises the notion, used in ICALP 2012, of a ``sound'' stack automaton to trees and requires some non-trivial definitions to handle the tree structures. Finally, the complexity follows from the fact that only a polynomial number of states can be added at order-$\maxord$, which, due to alternation, blows up by one exponential for each level of nesting. } \section{Soundness of Saturation} We prove that the automaton $\ta$ constructed by saturation only accepts trees in $\prestar{\gstrs}{\ta_0}$. The proof relies on the notion of a ``sound'' automaton. There are several stages to the proof. \begin{itemize} \item We assign meanings to each state of the automaton that ultimately capture inclusion in $\prestar{\gstrs}{\ta_0}$. \item We use these meanings to derive a notion of sound transitions. \item We define a sound automaton based on the notion of sound transitions. \item We show sound tree automata only accept trees in $\prestar{\gstrs}{\ta_0}$. \item We show the initial automaton $\ta_0$ is sound, and moreover, each saturation step preserves soundness, from which we conclude soundness of the saturation algorithm. \end{itemize} To define the meanings of the states we need to reason about partial runs of our stack tree automata. 
Hence for a tree automaton $\ta$ we define \[ \tweaklang{\ta} \] to accept trees over the set of control states $\tastates$ (instead of $\controls$). That is, we can accept prefixes of trees accepted by $\ta$ by labelling the leaves with the states that would have appeared on an accepting run of the full tree. Furthermore, we write \[ \tlang{\tastate_1, \ldots, \tastate_\numof}{\ta} \] to denote the set of trees $\tree$ in $\tweaklang{\ta}$ such that $\tree$ has $\numof$ leaves and the ``control'' states (which now include all states in $\tastates$) appearing on the leaves are $\tastate_1, \ldots, \tastate_\numof$ respectively. As a special case, $\tlang{\tastatef}{\ta}$ for all $\tastatef \in \tafinals$ contains only the empty tree. \subsection{Meaning of a State} We assign to each state of the automaton a ``meaning''. This meaning captures the requirement that the states $\control$ of the automaton should accept $\prestar{\gstrs}{\ta_0}$, while the meanings of the non-initial states are given by the automaton itself (i.e. the states should accept everything they accept). For states accepting stacks, the non-initial states again have the trivial meaning (they should accept what they accept), while the meanings of the initial states are inherited from the transitions that they label. We write $\tastateseq$ to denote a sequence $\tastate_1, \ldots, \tastate_\numof$ and $\seqlen{\tastate_1, \ldots, \tastate_\numof}$ is $\numof$. Let $\tenv$ be a partial mapping of nodes to states in $\tastates$, let $\tenvempty$ be the empty mapping, and let \[ \ap{\envmod{\tenv}{\tnode}{\tastate}}{\tnode'} = \begin{cases} \tastate & \tnode = \tnode' \\ \ap{\tenv}{\tnode'} & \tnode \neq \tnode' \ . \end{cases} \] We use these mappings in the definition below to place conditions on nodes in the tree that restrict runs witnessing membership in $\prestar{\gstrs}{\ta_0}$.
\begin{definition}[$\tree \tmodels{\tenv} \tastate_1, \ldots, \tastate_\numof$] If $\tree$ has $\numof$ leaves labelled $\tastate_1, \ldots, \tastate_\numof$ respectively then $\tree \tmodels{\tenv} \tastate_1, \ldots, \tastate_\numof$ whenever $\tree \in \prestar{\gstrs}{\tweaklang{\ta_0}}$ and there is a run to some $\tree' \in \tweaklang{\ta_0}$ such that -- fixing an accepting run of $\ta_0$ over $\tree'$ -- for all nodes $\tnode$ of $\tree$ with $\ap{\tenv}{\tnode} = \tastate$: \begin{itemize} \item if $\tastate \in \controls$ then $\tnode$ appears as a leaf during the run and on the first such tree in the run, $\tnode$ has control state $\tastate$. \item if $\tastate \notin \controls$ then $\tnode$ is not a leaf of any tree on the run and the accepting run of $\ta_0$ over $\tree'$ labels $\tnode$ with $\tastate$. \end{itemize} As a special case, when $\tree$ is empty we have $\tree \tmodels{\tenvempty} \tastatef$ for all $\tastatef \in \tafinals$. \end{definition} Once we have assigned meanings to the states of $\tastates$, we need to derive meanings for the states in $\sastates_\maxord, \ldots, \sastates_1$. We first introduce some notation. \[ \tree \treeplus{\idxi} \tup{\tastate_1, \stack_1}, \ldots, \tup{\tastate_\numof, \stack_\numof} = \treemod{ \treemod{ \treemod{\tree} {\tleaf{\tree}{\idxi}} {\stack} } {\tleaf{\tree}{\idxi}1} {\tup{\tastate_1, \stack_1}} \cdots } {\tleaf{\tree}{\idxi}\numof} {\tup{\tastate_\numof, \stack_\numof}} \] when $\tree$ is non-empty and $\stack$ is the stack labelling $\tleaf{\tree}{\idxi}$ in $\tree$. When $\tree$ is empty, \[ \tree \treeplus{0} \tup{\tastate_1, \stack_1} \] is the single-node tree labelled by $\tup{\tastate_1, \stack_1}$. In the definition below we assign meanings to states accepting stacks. The first case is the simple case where a state is non-initial, and its meaning is to accept the set of stacks it accepts.
The second case derives a meaning of a state in $\sastates_\midord$ by inheriting the meaning from the states of $\sastates_{\midord+1}$. Intuitively, if we have a transition $\sastate_{\midord+1} \satran{\sastate_\midord} \sastateset_{\midord+1}$ then the meaning of $\sastate_\midord$ is that it should accept all stacks that could appear on top of a stack in the meaning of $\sastateset_{\midord+1}$ to form a stack in the meaning of $\sastate_{\midord+1}$. The final case is a generalisation of the above case to trees. The states in $\sastates_\maxord$ should accept all stacks that could appear on a node of the tree consistent with a run of the stack tree automaton and the meanings of the states in $\tastates$. \begin{definition}[$\stack \smodels \sastate$] For any $\sastateset \subseteq \sastates_\midord$ and any order-$\midord$ stack $\stack$, we write $\stack \smodels \sastateset$ if $\stack \smodels \sastate$ for all $\sastate \in \sastateset$. We define $\stack \smodels \sastate$ by a case distinction on $\sastate$. \begin{enumerate} \item When $\sastate$ is a non-initial state in $\sastates_\midord$, then we have $\stack \smodels \sastate$ if $\stack$ is accepted from $\sastate$. \item If $\sastate_\midord$ is an initial state in $\sastates_\midord$ with $\midord < \maxord$ labelling a transition $\sastate_{\midord+1} \satran{\sastate_\midord} \sastateset_{\midord+1} \in \sadelta_{\midord+1}$ then we have $\stack \smodels \sastate_\midord$ if for all stacks $\stack'$ such that $\stack' \smodels \sastateset_{\midord+1}$ we have $\stack \scomp{\midord+1} \stack' \smodels \sastate_{\midord+1}$.
\item \label{item:order-n-states} We have $\stack \smodels \sastate$, where $\sastate$ labels a transition $\tatran{\tastate}{\idxi}{\numof}{\tastate'}{\sastate}$, if for all transitions \[ \tatran{\tastate}{1}{\numof}{\tastate_1}{\sastate_1}, \ldots, \tatran{\tastate}{\numof}{\numof}{\tastate_\numof}{\sastate_\numof} \] all trees $\tree \tmodels{\tenv} \tastateseq_1, \tastate, \tastateseq_2$, and all stacks $\stack_1, \ldots, \stack_\numof$ such that \[ \tree \treeplus{\idxj} \tup{\tastate_1, \stack_1}, \ldots, \tup{\tastate_\numof, \stack_\numof} \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\tastate} \tastateseq_1, \tastate_1, \ldots, \tastate_\numof, \tastateseq_2 \] where $\idxj = \seqlen{\tastateseq_1} + 1$, we have \begin{multline*} \tree \treeplus{\idxj} \tup{\tastate_1, \stack_1}, \ldots, \tup{\tastate_{\idxi-1}, \stack_{\idxi-1}}, \tup{\tastate', \stack}, \tup{\tastate_{\idxi+1}, \stack_{\idxi+1}}, \ldots, \tup{\tastate_\numof, \stack_\numof} \\ \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\tastate} \tastateseq_1, \tastate_1, \ldots, \tastate_{\idxi-1}, \tastate', \tastate_{\idxi+1}, \ldots, \tastate_\numof, \tastateseq_2 \ . \end{multline*} \end{enumerate} \end{definition} Note that item \ref{item:order-n-states} of the definition of $\smodels$ contains a vacuity in that there may be no $\stack_1, \ldots, \stack_\numof$ satisfying the antecedent (in which case all stacks would be in the meaning of $\sastate$). Hence, we require a non-redundancy condition on the automata.
\begin{definition}[Non-Redundancy] An order-$\maxord$ annotated stack tree automaton \[ \ta = \tup{ \tastates, \sastates_\maxord,\ldots,\sastates_1, \salphabet, \tadelta, \sadelta_\maxord,\ldots,\sadelta_1, \controls, \tafinals, \safinals_\maxord,\ldots,\safinals_1 } \] is \emph{non-redundant} if for all $\tastate \in \tastates$ we have that either $\tastate$ has no incoming transitions, or there exist \[ \tatran{\tastate}{1}{\numof}{\tastate_1}{\sastate_1}, \ldots, \tatran{\tastate}{\numof}{\numof}{\tastate_\numof}{\sastate_\numof} \in \tadelta \] such that for all $\tree \tmodels{\tenv} \tastateseq_1, \tastate, \tastateseq_2$ there exist $\stack_1, \ldots, \stack_\numof$ such that \[ \tree \treeplus{\idxj} \tup{\tastate_1, \stack_1}, \ldots, \tup{\tastate_\numof, \stack_\numof} \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\tastate} \tastateseq_1, \tastate_1, \ldots, \tastate_\numof, \tastateseq_2 \] where $\idxj = \seqlen{\tastateseq_1} + 1$. \end{definition} This property can be easily satisfied in $\ta_0$ by removing states $\tastate$ that do not satisfy the non-redundancy conditions (this does not change the language since there were no trees that could be accepted using $\tastate$). We show later that the property is maintained by saturation. \subsection{Soundness of a Transition} After assigning meanings to states, we can define a notion of soundness for the transitions of the automata. Intuitively, a transition is sound if it respects the meanings of its source and target states. One may derive some more intuition by considering a transition $q \xrightarrow{a} q'$ of a finite word automaton. The transition would be sound if, for every word $w$ in the meaning of $q'$, the same word with an $a$ in front is in the meaning of $q$. That is, the transition is sound if an $a$ can appear in front of anything accepted from $q'$. The following definition translates the same idea to the case of stack trees.
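In symbols, writing $L(q)$ for the set of words accepted from a state $q$ (notation used only for this aside), soundness of the word-automaton transition $q \xrightarrow{a} q'$ amounts to the containment
\[
\{ a w \mid w \in L(q') \} \subseteq L(q) \ ,
\]
that is, prepending an $a$ maps the meaning of $q'$ into the meaning of $q$.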
\begin{definition}[Soundness of transitions] There are two cases given below. \begin{enumerate} \item A transition $\satranfull{\sastate_\midord} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\midord}$ is sound if for any $\stack_1 \smodels \sastateset_1$, \ldots, $\stack_\midord \smodels \sastateset_\midord$ and $\stack_\branch \smodels \sastateset_\branch$ we have $\annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{\midord} \stack_\midord \smodels \sastate_\midord$. \item A transition \[ \tatranfull{\tastate} {\idxi} {\numof} {\tastate'} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \] is sound if for all trees $\tree \tmodels{\tenv} \tastateseq_1, \tastate, \tastateseq_2$ and stacks $\stack_1 \smodels \sastateset_1$, \ldots, $\stack_\maxord \smodels \sastateset_\maxord$, and $\stack_\branch \smodels \sastateset_\branch$ and for all \[ \tatran{\tastate}{1}{\numof}{\tastate_1}{\sastate_1}, \ldots, \tatran{\tastate}{\numof}{\numof}{\tastate_\numof}{\sastate_\numof} \] and stacks $\stack'_1, \ldots, \stack'_\numof$ such that \[ \tree \treeplus{\idxj} \tup{\tastate_1, \stack'_1}, \ldots, \tup{\tastate_\numof, \stack'_\numof} \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\tastate} \tastateseq_1, \tastate_1, \ldots, \tastate_\numof, \tastateseq_2 \] where $\idxj = \seqlen{\tastateseq_1} + 1$, we have \begin{multline*} \tree \treeplus{\idxj} \tup{\tastate_1, \stack'_1}, \ldots, \tup{\tastate_{\idxi-1}, \stack'_{\idxi-1}}, \tup{\tastate', \stack}, \tup{\tastate_{\idxi+1}, \stack'_{\idxi+1}}, \ldots, \tup{\tastate_\numof, \stack'_\numof} \\ \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\tastate} \tastateseq_1, \tastate_1, \ldots, \tastate_{\idxi-1}, \tastate', \tastate_{\idxi+1}, \ldots, \tastate_\numof, \tastateseq_2 \end{multline*} where \[ \stack = \annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{\maxord} \stack_\maxord\ .
\] \end{enumerate} \end{definition} In the proof, we will have to show that saturation builds a sound automaton. This means proving soundness for each new transition. The following lemma shows that it suffices to only show soundness for the outer collections of transitions. \begin{namedlemma}{lem:sound-cascade}{Cascading Soundness} If a transition \[ \tatranfull{\tastate} {\idxi} {\numof} {\tastate'} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \] is sound then all transitions $\satranfull{\sastate_\midord} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\midord}$ appearing within the transition are also sound. \end{namedlemma} \begin{proof} We proceed by induction. Initially $\midord = \maxord$ and we have $\satranfull{\sastate} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord}$ where $\tatran{\tastate}{\idxi}{\numof}{\tastate'}{\sastate}$. To prove soundness of the transition from $\sastate$, take $\stack_1 \smodels \sastateset_1$, \ldots, $\stack_\maxord \smodels \sastateset_\maxord$, and $\stack_\branch \smodels \sastateset_\branch$. We need to show \[ \stack = \annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{\maxord} \stack_\maxord \smodels \sastate \ .
\] This is the case if, letting $\idxj = \seqlen{\tastateseq_1} + 1$, for all transitions \[ \tatran{\tastate}{1}{\numof}{\tastate_1}{\sastate_1}, \ldots, \tatran{\tastate}{\numof}{\numof}{\tastate_\numof}{\sastate_\numof} \] trees $\tree \tmodels{\tenv} \tastateseq_1, \tastate, \tastateseq_2$ and stacks $\stack'_1, \ldots, \stack'_\numof$ such that \[ \tree \treeplus{\idxj} \tup{\tastate_1, \stack'_1}, \ldots, \tup{\tastate_\numof, \stack'_\numof} \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\tastate} \tastateseq_1, \tastate_1, \ldots, \tastate_\numof, \tastateseq_2 \] we have \begin{multline*} \tree \treeplus{\idxj} \tup{\tastate_1, \stack'_1}, \ldots, \tup{\tastate_{\idxi-1}, \stack'_{\idxi-1}}, \tup{\tastate', \stack}, \tup{\tastate_{\idxi+1}, \stack'_{\idxi+1}}, \ldots, \tup{\tastate_\numof, \stack'_\numof} \\ \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\tastate} \tastateseq_1, \tastate_1, \ldots, \tastate_{\idxi-1}, \tastate', \tastate_{\idxi+1}, \ldots, \tastate_\numof, \tastateseq_2 \ . \end{multline*} These properties follow immediately from the soundness of \[ \tatranfull{\tastate} {\idxi} {\numof} {\tastate'} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \] hence we are done. When $\midord < \maxord$ we assume $\satranfull{\sastate_{\midord+1}} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_{\midord+1}}$ is sound and $\sastate_{\midord+1} \satran{\sastate_\midord} \sastateset_{\midord+1}$. We show $\satranfull{\sastate_\midord} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\midord}$ is also sound. For this, we take any stacks $\stack_1 \smodels \sastateset_1$, \ldots, $\stack_\midord \smodels \sastateset_\midord$, and $\stack_\branch \smodels \sastateset_\branch$. We need to show \[ \stack = \annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{\midord} \stack_\midord \smodels \sastate_\midord \ .
\] For this, we need for all $\stack' \smodels \sastateset_{\midord+1}$ that $\stack \scomp{(\midord+1)} \stack' \smodels \sastate_{\midord+1}$. From the soundness of $\satranfull{\sastate_{\midord+1}} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_{\midord+1}}$ we have \[ \stack \scomp{(\midord+1)} \stack' = \annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{(\midord+1)} \stack_{\midord+1} \smodels \sastate_{\midord+1} \] and we are done. \end{proof} \subsection{Soundness of Annotated Stack Tree Automata} We will prove that saturation constructs a sound automaton. We first define what it means for an automaton to be sound and prove that a sound automaton only accepts trees in $\prestar{\gstrs}{\ta_0}$. \begin{definition}[Soundness of Annotated Stack Tree Automata] An annotated stack tree automaton $\ta$ is sound if \begin{enumerate} \item $\ta$ is obtained from $\ta_0$ by adding new initial states to $\sastates_1, \ldots, \sastates_\maxord$ and transitions starting at initial states, and \item in $\ta$, all transitions \[ \tatranfull{\tastate} {\idxi} {\numof} {\tastate'} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \] and \[ \satranfull{\sastate_\midord} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\midord} \] are sound, and \item $\ta$ is non-redundant. \end{enumerate} \end{definition} We show that a sound annotated stack tree automaton can only accept trees belonging to $\prestar{\gstrs}{\ta_0}$. In fact, we prove a more general result. In the following lemma, note that in the particular case where $\tree \in \tlang{\tastateseq}{\ta}$ and $\tastateseq$ is a sequence of states in $\controls$, we have $\tree \in \prestar{\gstrs}{\ta_0}$. That is, $\langof{\ta} \subseteq \prestar{\gstrs}{\ta_0}$. \begin{namedlemma}{lem:sound-trees}{Sound Acceptance} Let $\ta$ be a sound annotated stack tree automaton.
For all $\tree \in \tlang{\tastateseq}{\ta}$ we have $\tree \tmodels{\tenvempty} \tastateseq$. \end{namedlemma} Before we can prove the result about trees, we first prove a related result about stacks. This result and its proof are taken almost directly from ICALP 2012~\cite{BCHS12}. \begin{namedlemma}{lem:sound-stacks}{Sound Acceptance of Stacks} Let $\ta$ be a sound annotated stack tree automaton. If $\ta$ accepts an order-$\midord$ stack $\stack$ from $\sastate \in \sastates_\midord$ then $\stack \smodels \sastate$. \end{namedlemma} \begin{proof} We proceed by induction on the size of the stack (where the size of an annotated stack is defined to be the size of a tree representing the stack). Let $\stack$ be an order-$\midord$ stack accepted from a state $\sastate \in \sastates_\midord$. We assume that the property holds for any smaller stack. If $\stack$ is empty then $\sastate$ is a final state. Recall that by assumption final states are not initial, hence $\sastate$ is not initial. It follows that the empty stack is accepted from $\sastate$ in $\ta_0$ and hence $\stack \smodels \sastate$. If $\stack$ is a non-empty stack of order-$1$, then $\stack = \annot{\cha}{\stack_\branch} \scomp{1} \stack_1$. As $\stack$ is accepted from $\sastate$, there exists a transition $\satranfull{\sastate}{\cha}{\sastateset_\branch}{\sastateset_1}$ such that $\stack_1$ is accepted from $\sastateset_1$ and $\stack_\branch$ is accepted from $\sastateset_\branch$. By induction we have $\stack_1 \smodels \sastateset_1$ and $\stack_\branch \smodels \sastateset_\branch$. Since the transition is sound, we have $\stack \smodels \sastate$. If $\stack$ is a non-empty stack of order-$\midord$, then $\stack = \stack_{\midord-1} \scomp{\midord} \stack_\midord$. As $\stack$ is accepted from $\sastate$, there exists a transition $\sastate \satran{\sastate'} \sastateset$ such that $\stack_\midord$ is accepted from $\sastateset$ and $\stack_{\midord-1}$ is accepted from $\sastate'$.
By induction we have $\stack_{\midord-1} \smodels \sastate'$ and $\stack_\midord \smodels \sastateset$. Thus, by the definition of $\stack_{\midord-1} \smodels \sastate'$ we also have $\stack = \stack_{\midord-1} \scomp{\midord} \stack_\midord \smodels \sastate$. \end{proof} We are now ready to prove \reflemma{lem:sound-trees}. \begin{proof}[Proof of \reflemma{lem:sound-trees}] We proceed by induction on the number of nodes in the tree. In the base case, we have $\tree \in \tlang{\tastatef}{\ta}$ for some $\tastatef \in \tafinals$ and $\tree$ is empty. Thus, we immediately have $\tree \tmodels{\tenvempty} \tastatef$. Now, take some non-empty $\tree \in \tlang{\tastateseq}{\ta}$. Let the sequence $\tleaf{\tree}{\idxi}, \ldots, \tleaf{\tree}{\idxi+\numof-1}$ be the first complete group of siblings that are all leaf nodes and let $\tastateseq = \tastateseq_1, \tastate_1, \ldots, \tastate_\numof, \tastateseq_2$ be the decomposition of $\tastateseq$ such that $\tastateseq_1$ is of length $(\idxi-1)$. That is, $\tastate_1, \ldots, \tastate_\numof$ label the identified leaves of $\tree$. Furthermore, let $\stack_1, \ldots, \stack_\numof$ be the respective stacks labelling these leaves. Take the set of transitions \[ \tatran{\tastate}{1}{\numof}{\tastate_1}{\sastate_1}, \ldots, \tatran{\tastate}{\numof}{\numof}{\tastate_{\numof}}{\sastate_{\numof}} \] that are used in the accepting run of $\tree$ at the identified leaves. Let $\tree'$ be the tree obtained by removing $\tleaf{\tree}{\idxi}, \ldots, \tleaf{\tree}{\idxi+\numof-1}$. We have $\tree' \in \tlang{\tastateseq_1, \tastate, \tastateseq_2}{\ta}$ and by induction $\tree' \tmodels{\tenvempty} \tastateseq_1, \tastate, \tastateseq_2$.
Since $\tastate$ has incoming transitions and $\ta$ is non-redundant, we know there exists \[ \tatran{\tastate}{1}{\numof}{\tastate'_1}{\sastate'_1}, \ldots \tatran{\tastate}{\numof}{\numof}{\tastate'_{\numof}}{\sastate'_{\numof}} \] and $\stack'_1, \ldots, \stack'_\numof$ such that \[ \tree' \treeplus{\idxi} \tup{\tastate'_1, \stack'_1}, \ldots, \tup{\tastate'_\numof, \stack'_\numof} \tmodelsm{\tenvempty}{\tleaf{\tree}{\idxi}}{\tastate} \tastateseq_1, \tastate'_1, \ldots, \tastate'_\numof, \tastateseq_2 \ . \] Since $\stack_1 \smodels \sastate_1$ we infer from the definition of $\smodels$ at $\sastate_1$ that \[ \tree' \treeplus{\idxi} \tup{\tastate_1, \stack_1}, \tup{\tastate'_2, \stack'_2}, \ldots, \tup{\tastate'_\numof, \stack'_\numof} \tmodelsm{\tenvempty}{\tleaf{\tree}{\idxi}}{\tastate} \tastateseq_1, \tastate_1, \tastate'_2, \ldots, \tastate'_\numof, \tastateseq_2 \ . \] By repeated applications of the above for each $1 < \idxj \leq \numof$, we obtain \[ \tree' \treeplus{\idxi} \tup{\tastate_1, \stack_1}, \tup{\tastate_2, \stack_2}, \ldots, \tup{\tastate_\numof, \stack_\numof} \tmodelsm{\tenvempty}{\tleaf{\tree}{\idxi}}{\tastate} \tastateseq_1, \tastate_1, \ldots, \tastate_\numof, \tastateseq_2 \ . \] This implies $\tree \tmodels{\tenvempty} \tastateseq$ since $\tmodels{\tenvempty}$ is less restrictive than $\tmodelsm{\tenvempty}{\tleaf{\tree}{\idxi}}{\tastate}$. \end{proof} \subsection{Soundness of Saturation} We first prove that $\ta_0$ is sound, and then that saturation maintains the property. \begin{namedlemma}{lem:init-sound}{Soundness of $\ta_0$} The initial automaton $\ta_0$ is sound. \end{namedlemma} \begin{proof} It is trivial that $\ta_0$ is obtained from $\ta_0$, and moreover, we assume the non-redundancy condition. 
Hence, from \reflemma{lem:sound-cascade} we only need to prove soundness of non-initial transitions of the form \[ \satranfull{\sastate_\midord} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\midord} \] and for transitions in $\tadelta$. We first show the case for non-initial \[ \satranfull{\sastate_\midord} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\midord} \] which is the same as in ICALP 2012~\cite{BCHS12}. First note that $\sastateset_1, \ldots, \sastateset_\midord$ and $\sastateset_\branch$ do not contain initial states. Then we take $\stack_1 \smodels \sastateset_1$, \ldots, $\stack_\midord \smodels \sastateset_\midord$ and $\stack_\branch \smodels \sastateset_\branch$. We have to show $\annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{\midord} \stack_\midord \smodels \sastate_\midord$. In particular, since $\sastate_\midord$ is not initial, we only need to construct an accepting run. Since the sets $\sastateset_\idxi$ and $\sastateset_\branch$ contain no initial states, we have accepting runs from these states. Hence, we immediately build the run beginning with $\satranfull{\sastate_\midord} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\midord}$. We now prove the case for \[ \tatranfull{\tastate} {\idxi} {\numof} {\tastate'} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \ .
\] Thus, take any $\stack_1 \smodels \sastateset_1$, \ldots, $\stack_\maxord \smodels \sastateset_\maxord$, and $\stack_\branch \smodels \sastateset_\branch$ and any tree $\tree \tmodels{\tenv} \tastateseq_1, \tastate, \tastateseq_2$ and, letting $\idxj = \seqlen{\tastateseq_1} + 1$, any \[ \tatran{\tastate}{1}{\numof}{\tastate_1}{\sastate_1}, \ldots, \tatran{\tastate}{\numof}{\numof}{\tastate_\numof}{\sastate_\numof} \] and any $\stack'_1, \ldots, \stack'_\numof$ such that $\tree \treeplus{\idxj} \tup{\tastate_1, \stack'_1} \ldots \tup{\tastate_\numof, \stack'_\numof} \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\tastate} \tastateseq_1, \tastate_1, \ldots, \tastate_\numof, \tastateseq_2$. Since initial states have no incoming transitions, we know $\tastate$ is not a control state. We thus have a run $\run$ from $\tree \treeplus{\idxj} \tup{\tastate_1, \stack'_1} \ldots \tup{\tastate_\numof, \stack'_\numof}$ to some $\tree' \in \tweaklang{\ta_0}$ such that $\tleaf{\tree}{\idxj}$ does not appear as a leaf of any tree in the run. To prove soundness we argue that \begin{multline} \tree \treeplus{\idxj} \tup{\tastate_1, \stack'_1}, \ldots, \tup{\tastate_{\idxi-1}, \stack'_{\idxi-1}}, \tup{\tastate', \stack}, \tup{\tastate_{\idxi+1}, \stack'_{\idxi+1}}, \ldots, \tup{\tastate_\numof, \stack'_\numof} \\ \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\tastate} \tastateseq_1, \tastate_1, \ldots, \tastate_{\idxi-1}, \tastate', \tastate_{\idxi+1}, \ldots, \tastate_\numof, \tastateseq_2 \label{eqn:initial-soundness-long} \end{multline} where $\stack = \annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{\maxord} \stack_\maxord$. To do so, we take the run $\run$ obtained above and build a run $\run'$ by removing all operations applied to nodes that are descendants of $\tleaf{\tree}{\idxj}\idxi$.
Observe that $\run'$ can be applied to \[ \tree \treeplus{\idxj} \tup{\tastate_1, \stack'_1}, \ldots, \tup{\tastate_{\idxi-1}, \stack'_{\idxi-1}}, \tup{\tastate', \stack}, \tup{\tastate_{\idxi+1}, \stack'_{\idxi+1}}, \ldots, \tup{\tastate_\numof, \stack'_\numof} \] since none of the operations apply to a descendant of $\tleaf{\tree}{\idxj}\idxi$. By applying this run we obtain a tree $\tree''$ which is $\tree'$ less all nodes that are strict descendants of $\tleaf{\tree}{\idxj}\idxi$ and where $\tleaf{\tree}{\idxj}\idxi$ is labelled by $\tup{\tastate', \stack}$. Thus, we take the accepting run of $\tree'$ witnessing $\tree' \in \tweaklang{\ta_0}$, remove all nodes that are strict descendants of $\tleaf{\tree}{\idxj}\idxi$ and label $\tleaf{\tree}{\idxj}\idxi$ by $\tastate'$. This gives us a run witnessing $\tree'' \in \tweaklang{\ta_0}$ by using \[ \tatranfull{\tastate} {\idxi} {\numof} {\tastate'} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \] at $\tleaf{\tree}{\idxj}\idxi$ and the accepting runs from the non-initial $\sastateset_\branch$, $\sastateset_1, \ldots, \sastateset_\maxord$. This gives us (\ref{eqn:initial-soundness-long}) as required. \end{proof} We now show that, at every stage of saturation, we maintain a sound automaton. \begin{namedlemma} {lem:saturation-soundness-step} {Soundness of the Saturation Step} Given a sound automaton $\ta$, we have that $\ta' = \ap{\satfn}{\ta}$ is sound. \end{namedlemma} \begin{proof} We analyse all new transitions \[ \tatranfull{\tastate} {\idxi} {\numof} {\control} {\cha} {\sastateset^\mnew_\branch} {\sastateset^\mnew_1, \ldots, \sastateset^\mnew_\maxord} \ . \] By \reflemma{lem:sound-cascade}, it is sufficient to prove that these transitions are sound and do not cause redundancy. Let us begin with the transitions introduced by rules that do not remove nodes from the tree.
We argue that for all trees $\tree \tmodels{\tenv} \tastateseq_1, \tastate, \tastateseq_2$ and stacks $\stack_1 \smodels \sastateset^\mnew_1$, \ldots, $\stack_\maxord \smodels \sastateset^\mnew_\maxord$, and $\stack_\branch \smodels \sastateset^\mnew_\branch$ and for all \[ \tatran{\tastate}{1}{\numof}{\tastate_1}{\sastate_1}, \ldots, \tatran{\tastate}{\numof}{\numof}{\tastate_\numof}{\sastate_\numof} \] and stacks $\stack'_1, \ldots, \stack'_\numof$ such that \[ \tree \treeplus{\idxj} \tup{\tastate_1, \stack'_1}, \ldots, \tup{\tastate_\numof, \stack'_\numof} \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\tastate} \tastateseq_1, \tastate_1, \ldots, \tastate_\numof, \tastateseq_2 \] where $\idxj = \seqlen{\tastateseq_1} + 1$ we have, letting \[ \tree_1 = \tree \treeplus{\idxj} \tup{\tastate_1, \stack'_1}, \ldots, \tup{\tastate_{\idxi-1}, \stack'_{\idxi-1}}, \tup{\control, \stack}, \tup{\tastate_{\idxi+1}, \stack'_{\idxi+1}}, \ldots, \tup{\tastate_\numof, \stack'_\numof} \] and $\tastateseq'_1 = \tastateseq_1, \tastate_1, \ldots, \tastate_{\idxi-1}$ and $\tastateseq'_2 = \tastate_{\idxi+1}, \ldots, \tastate_\numof, \tastateseq_2$ that \begin{equation} \label{eqn:soundness-step-prop} \tree_1 \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\tastate} \tastateseq'_1, \control, \tastateseq'_2 \end{equation} where $\stack = \annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{\maxord} \stack_\maxord$. We proceed by a case distinction on the rule $\strule$ which led to the introduction of the new transition. In each case, let $\tree_2 \in \ap{\strule}{\tree_1}$ be the result of applying $\strule$ at node $\tleaf{\tree}{\idxj}\idxi$. In all cases except when $\strule$ removes nodes, $\tastate$ already has an incoming transition, hence we do not need to argue non-redundancy (since $\ta$ is non-redundant).
\begin{itemize} \item When $\strule = \gtrule{\control'}{\srew{\chb}{\cha}}{\control}$ we derived the new transition from some transition \[ \tatranfull{\tastate} {\idxi} {\numof} {\control'} {\chb} {\sastateset^\mnew_\branch} {\sastateset^\mnew_1, \ldots, \sastateset^\mnew_\maxord} \] and since this transition is sound $\tree_2 \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\tastate} \tastateseq'_1, \control', \tastateseq'_2$. We take the run witnessing soundness for $\tree_2$ and prepend the application of $\strule$ to $\tree_1$. This gives us a run witnessing (\ref{eqn:soundness-step-prop}) as required. \item When $\strule = \gtrule{\control'}{\scpush{\midord}}{\control}$, then when $\midord > 1$ we derived the new transition from some \[ \tatranfull{\tastate} {\idxi} {\numof} {\control'} {\cha} {\sastateset_\branch} {\sastateset_1, \sastateset_2, \ldots, \sastateset_\maxord} \] and $\sastateset_1 \satrancol{\cha}{\sastateset'_\branch} \sastateset'_1$ and the new transition is of the form \[ \tatranfull{\tastate} {\idxi} {\numof} {\control} {\cha} {\sastateset'_\branch} {\sastateset'_1, \sastateset_2, \ldots, \sastateset_{\midord-1}, \sastateset_\midord \cup \sastateset_\branch, \sastateset_{\midord+1}, \ldots, \sastateset_\maxord} \] Furthermore, $\tree_2$ has at node $\tleaf{\tree}{\idxj}\idxi$ the stack \[ \annot{\cha}{\stack_\midord} \scomp{1} \annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{\maxord} \stack_\maxord \] and we have $\stack_\midord \smodels \sastateset^\mnew_\midord = \sastateset_\midord \cup \sastateset_\branch$ and $\stack_1 \smodels \sastateset^\mnew_1 = \sastateset'_1$ and from soundness of $\sastateset_1 \satrancol{\cha}{\sastateset'_\branch} \sastateset'_1$ we have $\annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \smodels \sastateset_1$. Thus, we can apply soundness of the transition from $\control'$ to obtain $\tree_2 \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\tastate} \tastateseq'_1, \control', \tastateseq'_2$.
We prepend to the run witnessing this property an application of $\strule$ to $\tree_1$ at node $\tleaf{\tree}{\idxj}\idxi$ to obtain a run witnessing (\ref{eqn:soundness-step-prop}) as required. When $\midord = 1$ we began with a transition \[ \tatranfull{\tastate} {\idxi} {\numof} {\control'} {\cha} {\sastateset_\branch} {\sastateset_1, \sastateset_2, \ldots, \sastateset_\maxord} \] and $\sastateset_1 \satrancol{\cha}{\sastateset'_\branch} \sastateset'_1$ and the new transition is of the form \[ \tatranfull{\tastate} {\idxi} {\numof} {\control} {\cha} {\sastateset'_\branch} {\sastateset'_1 \cup \sastateset_\branch, \sastateset_2, \ldots, \sastateset_\maxord} \ . \] Furthermore, $\tree_2$ has at node $\tleaf{\tree}{\idxj}\idxi$ the stack \[ \annot{\cha}{\stack_1} \scomp{1} \annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{\maxord} \stack_\maxord \] and we have $\stack_1 \smodels \sastateset^\mnew_1 = \sastateset'_1 \cup \sastateset_\branch$ and from $\stack_\branch \smodels \sastateset^\mnew_\branch = \sastateset'_\branch$ and soundness of $\sastateset_1 \satrancol{\cha}{\sastateset'_\branch} \sastateset'_1$ we have $\annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \smodels \sastateset_1$. Thus, we can apply soundness of the transition from $\control'$ using $\stack_1 \smodels \sastateset_\branch$ (since $\stack_1 \smodels \sastateset^\mnew_1 = \sastateset'_1 \cup \sastateset_\branch$) to obtain $\tree_2 \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\tastate} \tastateseq'_1, \control', \tastateseq'_2$. We prepend to the run witnessing this property an application of $\strule$ to $\tree_1$ at node $\tleaf{\tree}{\idxj}\idxi$ to obtain a run witnessing (\ref{eqn:soundness-step-prop}) as required.
\item When $\strule = \gtrule{\control}{\spush{\midord}}{\control'}$ we started with a transition \[ \tatranfull{\tastate} {\idxi} {\numof} {\control'} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \] and $\satranfull{\sastateset_\midord} {\cha} {\sastateset'_\branch} {\sastateset'_1, \ldots, \sastateset'_\midord}$ and the new transition is of the form \[ \tatranfull{\tastate} {\idxi} {\numof} {\control} {\cha} {\sastateset_\branch \cup \sastateset'_\branch} {\sastateset_1 \cup \sastateset'_1, \ldots, \sastateset_{\midord-1} \cup \sastateset'_{\midord-1}, \sastateset'_\midord, \sastateset_{\midord+1}, \ldots, \sastateset_\maxord} \ . \] Let $\stack' = \annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{(\midord-1)} \stack_{\midord-1} \scomp{\midord} \stack_\midord$. Then $\tree_2$ has at node $\tleaf{\tree}{\idxj}\idxi$ the stack \[ \annot{\cha}{\stack_\branch} \scomp{1} \stack_1 \scomp{2} \cdots \scomp{(\midord-1)} \stack_{\midord-1} \scomp{\midord} \stack' \scomp{(\midord+1)} \stack_{\midord+1} \scomp{(\midord+2)} \cdots \scomp{\maxord} \stack_\maxord \ . \] Note, by assumption we have $\stack_1 \smodels \sastateset^\mnew_1 = \sastateset_1 \cup \sastateset'_1$, \ldots, $\stack_{\midord-1} \smodels \sastateset^\mnew_{\midord-1} = \sastateset_{\midord-1} \cup \sastateset'_{\midord-1}$, $\stack_\midord \smodels \sastateset^\mnew_\midord = \sastateset'_\midord$, and $\stack_\branch \smodels \sastateset^\mnew_\branch = \sastateset_\branch \cup \sastateset'_\branch$. Thus from soundness of $\satranfull{\sastateset_\midord} {\cha} {\sastateset'_\branch} {\sastateset'_1, \ldots, \sastateset'_\midord}$ we have $\stack' \smodels \sastateset_\midord$. Consequently, from the soundness of the transition from $\control'$ we have $\tree_2 \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\tastate} \tastateseq'_1, \control', \tastateseq'_2$. We prepend to the run witnessing this property an application of $\strule$ to $\tree_1$ at node $\tleaf{\tree}{\idxj}\idxi$ to obtain a run witnessing (\ref{eqn:soundness-step-prop}) as required.
\item When $\strule = \gtrule{\control}{\spop{\midord}}{\control'}$ we derived the new transition from \[ \tatranfullk{\tastate} {\idxi} {\numof} {\control'} {\sastate_\midord} {\sastateset_{\midord+1}, \ldots, \sastateset_\maxord} \] and the new transition is of the form \[ \tatranfull{\tastate} {\idxi} {\numof} {\control} {\cha} {\emptyset} {\emptyset, \ldots, \emptyset, \set{\sastate_\midord}, \sastateset_{\midord+1}, \ldots, \sastateset_\maxord} \ . \] The tree $\tree_2$ has at node $\tleaf{\tree}{\idxj}\idxi$ the stack $\stack' = \stack_\midord \scomp{(\midord+1)} \cdots \scomp{\maxord} \stack_\maxord$ and since $\stack_{\midord+1} \smodels \sastateset_{\midord+1}$, \ldots, $\stack_{\maxord} \smodels \sastateset_{\maxord}$ we have from the definition of $\tmodels{\tenv}$ and $\stack_\midord \smodels \sastate_\midord$ that $\tree_2 \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\tastate} \tastateseq'_1, \control', \tastateseq'_2$. As before, we prepend to the run witnessing this property an application of $\strule$ to $\tree_1$ at node $\tleaf{\tree}{\idxj}\idxi$ to obtain a run witnessing (\ref{eqn:soundness-step-prop}) as required.
\item When $\strule = \gtrule{\control}{\scollapse{\midord}}{\control'}$ we began with a transition \[ \tatranfullk{\tastate} {\idxi} {\numof} {\control'} {\sastate_\midord} {\sastateset_{\midord+1}, \ldots, \sastateset_\maxord} \] and the new transition has the form \[ \tatranfull{\tastate} {\idxi} {\numof} {\control} {\cha} {\set{\sastate_\midord}} {\emptyset, \ldots, \emptyset, \sastateset_{\midord+1}, \ldots, \sastateset_\maxord} \ . \] The tree $\tree_2$ has at node $\tleaf{\tree}{\idxj}\idxi$ the stack $\stack' = \stack_\branch \scomp{(\midord+1)} \stack_{\midord+1} \scomp{(\midord+2)} \cdots \scomp{\maxord} \stack_\maxord$ and since $\stack_{\midord+1} \smodels \sastateset_{\midord+1}$, \ldots, $\stack_{\maxord} \smodels \sastateset_{\maxord}$ we have from the definition of $\tmodels{\tenv}$ and $\stack_\branch \smodels \sastate_\midord$ that $\tree_2 \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\tastate} \tastateseq'_1, \control', \tastateseq'_2$. As before, we prepend to the run witnessing this property an application of $\strule$ to $\tree_1$ at node $\tleaf{\tree}{\idxj}\idxi$ to obtain a run witnessing (\ref{eqn:soundness-step-prop}) as required.
\item When $\strule = \stpush{\control}{\control_1, \ldots, \control_{\numof'}}$ we had transitions \[ \tatranfull{\tastate} {\idxi} {\numof} {\tastate'} {\cha} {\sastateset_\branch} {\sastateset_1, \ldots, \sastateset_\maxord} \] and \[ \tatranfull{\tastate'} {1} {\numof'} {\control_1} {\cha} {\sastateset^1_\branch} {\sastateset^1_1, \ldots, \sastateset^1_\maxord}, \ldots, \tatranfull{\tastate'} {\numof'} {\numof'} {\control_{\numof'}} {\cha} {\sastateset^{\numof'}_\branch} {\sastateset^{\numof'}_1, \ldots, \sastateset^{\numof'}_\maxord} \] and the new transition added is of the form \[ \tatranfull{\tastate} {\idxi} {\numof} {\control} {\cha} {\sastateset^\mnew_\branch} {\sastateset^\mnew_1, \ldots, \sastateset^\mnew_\maxord} \] where $\sastateset^\mnew_\branch = \sastateset_\branch \cup \sastateset^1_\branch \cup \cdots \cup \sastateset^{\numof'}_\branch$ and, for all $\midord$, we have $\sastateset^\mnew_\midord = \sastateset_\midord \cup \sastateset^1_\midord \cup \cdots \cup \sastateset^{\numof'}_\midord$. Letting $\tree'_1 =$ \[ \tree \treeplus{\idxj} \tup{\tastate_1, \stack'_1}, \ldots, \tup{\tastate_{\idxi-1}, \stack'_{\idxi-1}}, \tup{\tastate', \stack}, \tup{\tastate_{\idxi+1}, \stack'_{\idxi+1}}, \ldots, \tup{\tastate_{\numof}, \stack'_{\numof}} \] and $\tenv' = \envmod{\tenv}{\tleaf{\tree}{\idxj}}{\tastate}$ we have from $\sastateset^\mnew_\branch = \sastateset_\branch \cup \sastateset^1_\branch \cup \cdots \cup \sastateset^{\numof'}_\branch$ and $\sastateset^\mnew_1 = \sastateset_1 \cup \sastateset^1_1 \cup \cdots \cup \sastateset^{\numof'}_1$, \ldots, $\sastateset^\mnew_\maxord = \sastateset_\maxord \cup \sastateset^1_\maxord \cup \cdots \cup \sastateset^{\numof'}_\maxord$, and by soundness of the transition from $\tastate'$ that $\tree'_1 \tmodels{\tenv'} \tastateseq'_1, \tastate', \tastateseq'_2$.
Thus, from non-redundancy and repeated applications of the soundness of the transitions from $\control_1$, \ldots, $\control_{\numof'}$ (as in the proof of \reflemma{lem:sound-trees}) we have \[ \tree_2 = \tree'_1 \treeplus{(\idxj+\idxi)} \tup{\control_1, \stack}, \ldots, \tup{\control_{\numof'}, \stack} \tmodelsm{\tenv'}{\tleaf{\tree}{\idxj}\idxi}{\tastate'} \tastateseq'_1, \control_1, \ldots, \control_{\numof'}, \tastateseq'_2 \ . \] We prepend to the run witnessing this property an application of $\strule$ to $\tree_1$ at node $\tleaf{\tree}{\idxj}\idxi$ to obtain a run witnessing (\ref{eqn:soundness-step-prop}) as required. \end{itemize} The remaining case is for the operations that remove nodes from the tree. For $\stpop{\control_1, \ldots, \control_\numof}{\control}$ we introduced \[ \tatranfull{\control} {1} {\numof} {\control_1} {\cha} {\emptyset} {\emptyset, \ldots, \emptyset} \] to \[ \tatranfull{\control} {\numof} {\numof} {\control_\numof} {\cha} {\emptyset} {\emptyset, \ldots, \emptyset} \ . \] We prove soundness of the first of these transitions, with the others being symmetrical. Take any sequence of transitions \[ \tatran{\control}{1}{\numof}{\tastate_1}{\sastate_1}, \ldots, \tatran{\control}{\numof}{\numof}{\tastate_\numof}{\sastate_\numof} \] any $\tree \tmodels{\tenv} \tastateseq_1, \control, \tastateseq_2$ and $\stack_1$, \ldots, $\stack_\numof$ such that, letting $\idxj = \seqlen{\tastateseq_1} + 1$, \[ \tree' = \tree \treeplus{\idxj} \tup{\tastate_1, \stack_1}, \ldots, \tup{\tastate_\numof, \stack_\numof} \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\control} \tastateseq_1, \tastate_1, \ldots, \tastate_\numof, \tastateseq_2 \ . \] We need to show for any stack with top character $\cha$ that \[ \tree \treeplus{\idxj} \tup{\control_1, \stack}, \tup{\tastate_2, \stack_2}, \ldots, \tup{\tastate_\numof, \stack_\numof} \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\control} \tastateseq_1, \control_1, \tastate_2, \ldots, \tastate_\numof, \tastateseq_2 \ .
\] Take the run witnessing the property for $\tree'$. This must necessarily pass through some tree where $\tleaf{\tree}{\idxj}$ is exposed and contains control state $\control$. Moreover, this is the first such exposure of the node. Since we assume, for all $\control$, there is only one rule $\stpop{\control'_1, \ldots, \control'_\numof}{\control}$ for any $\control'_1, \ldots, \control'_\numof$, the node must be exposed by an application of $\strule$. Thus, we can remove from the run all operations applied to a descendant of $\tleaf{\tree}{\idxj}1$ before its exposure. This run can then be applied to \[ \tree \treeplus{\idxj} \tup{\control_1, \stack}, \tup{\tastate_2, \stack_2}, \ldots, \tup{\tastate_\numof, \stack_\numof} \] to witness $\tree \treeplus{\idxj} \tup{\control_1, \stack}, \tup{\tastate_2, \stack_2}, \ldots, \tup{\tastate_\numof, \stack_\numof} \tmodelsm{\tenv}{\tleaf{\tree}{\idxj}}{\control} \tastateseq_1, \control_1, \tastate_2, \ldots, \tastate_\numof, \tastateseq_2$. To prove non-redundancy, we simply take any stacks $\stack_1$, \ldots, $\stack_\numof$ and apply $\strule$ to $\tree \treeplus{\idxj} \tup{\control_1, \stack_1}, \ldots, \tup{\control_\numof, \stack_\numof}$ to obtain $\tree$ from which the remainder of the run exists by assumption. \end{proof} \begin{namedlemma}{lem:soundness}{Soundness of Saturation} The automaton $\ta$ obtained by saturation from $\ta_0$ is such that $\langof{\ta} \subseteq \prestar{\gstrs}{\ta_0}$. \end{namedlemma} \begin{proof} By \reflemma{lem:init-sound} we have that $\ta_0$ is sound. Thus, by induction, assume $\ta$ is sound. We have $\ta' = \ap{\satfn}{\ta}$ and by \reflemma{lem:saturation-soundness-step} we have that $\ta'$ is sound. Thus, the $\ta$ that is the fixed point of saturation is sound, and we have from \reflemma{lem:sound-trees} that $\langof{\ta} \subseteq \prestar{\gstrs}{\ta_0}$. \end{proof}
\section{Introduction} \label{sec:introduction} In 1913, Georges Sagnac published a short communication~\cite{Sagnac:1913tx} in the proceedings of the French Academy of Sciences describing an experiment to prove the existence of the ether. He elaborated on this a couple of months later~\cite{Sagnac:1913tz}. The experiment consisted of an interferometer on a rotating table in which two light rays propagating in opposite directions along the same path are brought to interference and a phase shift is detected depending on the angular velocity of the table and the area enclosed by the traveling light. We refer to this as the ``classical Sagnac effect''. Unbeknownst to Sagnac, Franz Harre\ss, a German graduate student, conducted a similar experiment in 1911~(see the report by Knopf~\cite{Knopf:1920df}) where he considered counter-propagating light in a ring of totally reflecting prisms. His objective was entirely different from Sagnac's and, in fact, his experiment did not agree with his expectations since he neglected the very effect that Sagnac exhibited. In 1920, Max von Laue compared both experiments from a special relativistic point of view~\cite{vonLaue:1920fd}. The Sagnac effect has many applications and ramifications in experiments and technology. Post~\cite{Post:1967hu} gives an extensive review until 1967. Since then, many new developments have occurred. We mention only a few which have relevance for Relativity. The Sagnac effect is present in the GPS and other systems of satellites and has to be accounted for in order to achieve the high precision of operation~\cite{Allan:1985gr}. In a similar way, the Hafele-Keating experiment~\cite{Hafele:1972gn} can be regarded as a manifestation of the Sagnac effect. The Sagnac effect is also present in counter-propagating matter waves, see~\cite{Hasselbach:1993kq}.
The Sagnac interferometer with laser light propagating in opposite directions \emph{along the same path} has several advantages over the Michelson interferometer as already recognised by Michelson himself. The classical Sagnac effect is proportional to the area enclosed by the path. However, this is the \emph{signed} area (for details see below). This implies that by choosing the path appropriately, one can make the classical Sagnac effect disappear. The resulting \emph{zero-area Sagnac} interferometers are insensitive to rotations (but not to accelerations, see below). Due to this property, this type of interferometer has become of interest to the gravitational wave community. They are considered a possible alternative to the traditional Michelson layout for third generation gravitational wave detectors, see~\cite{Huttner:2016ep,Bond:2017vy}. There are several theoretical improvements over the classical result. Ori and Avron~\cite{Ori:2016bf} present a special-relativistic analysis of deformable interferometers such as the Sagnac interferometer and also give an explanation of the Wang experiment~\cite{Wang:2004ef}. Tartaglia~\cite{Tartaglia:1998dn} derives general relativistic corrections to the classical Sagnac effect in a Kerr space-time. In this paper we present an approach to discuss the Sagnac and related experiments in rather general situations. We rederive the classical effect and some corrections to it due to the motion of the reference frame defined by the laboratory and possible curvature effects. The plan of the paper is as follows. We explain our setup in sect.~\ref{sec:gener-sagn-effect} and derive the Sagnac effect in full generality. Sect.~\ref{sec:sagnac-effect-fermi} to \ref{sec:some-example-paths} are devoted to a discussion of three contributions to the effect which arise in a certain approximation based on (generalized) Fermi coordinates.
In sect.~\ref{sec:stat-space-times} we apply our framework to general relativistic stationary space-times and sect.~\ref{sec:fizeau-experiment} discusses the Fizeau experiment as a special case. The conventions used here are those of Penrose and Rindler~\cite{Penrose:1984wm}. \section{The general Sagnac effect} \label{sec:gener-sagn-effect} Generally speaking, the Sagnac effect can be described as the difference in travel time between two photons traveling along the same path in opposite directions. In this section we will derive a formula for this quantity in general and then evaluate it in the following section using some reasonable approximations. We assume that things happen in a 4-dimensional space-time $(\mathscr{M},g)$ with $g$ a Lorentzian metric. We choose an observer, i.e., a time-like world-line $O$, parameterized by proper time $\tau$, which is not necessarily a geodesic. Let $a^b$ be the acceleration of the world-line; then the 4-velocity $t^a$ of $O$ satisfies the equation \begin{equation} \label{eq:1} \nabla_t t^b = a^b. \end{equation} This motivates the introduction of the Fermi derivative $\mathcal{F}_t$ along $O$ defined by \begin{equation} \label{eq:2} \mathcal{F}_t X^b = \nabla_t X^b + a_c X^c t^b - t_c X^c a^b \end{equation} for any vector field $X^a$ defined along $O$. The Fermi derivative has the well-known properties that \textbf{(i)} $t^b$ is Fermi constant along $O$: $\mathcal{F}_t t^b = 0$ and \textbf{(ii)} it is compatible with the metric: $\mathcal{F}_t g = 0$. The expression~\eqref{eq:2} is not the most general one satisfying the conditions \textbf{(i)} and \textbf{(ii)}.
In fact, we can incorporate a skew-symmetric tensor $\omega_{ab} = -\omega_{ba}$ perpendicular to $t^b$ resulting in the generalized Fermi transport law \begin{equation} \label{eq:3} \mathcal{F}_t X^b - \omega^b{}_c X^c = 0 \end{equation} We can now define a Fermi frame adapted to the observer $O$ by transporting a tetrad $(\mathbf{e}_0,\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3)$ along $O$ using the generalized Fermi transport \begin{equation} \label{eq:4} \mathcal{F}_t \mathbf{e}^b_k - \omega^b{}_c \mathbf{e}^c_k = 0, \quad \text{for } k=0:3 \qquad \text{and } \mathbf{e}_0^a = t^a. \end{equation} Physically, this models an observer together with his lab in which he carries out experiments. The three space-like frame vectors span the lab, the space $\Sigma_\tau$ of simultaneity at a given instant of time $\tau$. The lab is allowed to accelerate and to rotate. When $a^b=0$ and $\omega^a{}_b=0$ then the reference frame is freely falling and non-rotating, i.e., it is an inertial frame. The simultaneity spaces foliate $\mathscr{M}$ near the world-line $O$ of the observer. For a given $\tau$ the space $\Sigma_\tau$ intersects $O$ perpendicularly at the proper time~$\tau$. We assume that this foliation is global and define a global time function $\tau: \mathscr{M} \to \mathbb{R}$ so that $\Sigma_\tau = \{\tau = \text{const}\}$. Then we can write the metric in the familiar $1+3$-form \begin{equation} \label{eq:5} g = g_{00} \mathrm{d} \tau^2 + 2 g_{0k} \mathrm{d} \tau \mathrm{d} x^k + g_{ik} \mathrm{d} x^i \mathrm{d} x^k \end{equation} where $(x^k)_{k=1:3}$ are arbitrary coordinates on $\Sigma_\tau$. Let us now fix an arbitrary event on $O$ which we may label without loss of generality by $\tau=0$ and let $\gamma$ be an arbitrary closed curve in $\Sigma_0$ starting at $P\in\Sigma_0$ having the coordinates $x^i_0$. Let $v$ be the tangent vector to $\gamma$. Then we have $v^a\nabla_a \tau = 0$, i.e., $v = v^i\partial_i$. 
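Before turning to the photon travel time, the transport law~\eqref{eq:2}/\eqref{eq:4} can be made concrete numerically. The following sketch (not part of the derivation; the worldline, step sizes and all names are illustrative) integrates the Fermi transport equation with $\omega_{ab}=0$ for a tetrad carried by a uniformly rotating observer in Minkowski space and verifies property \textbf{(ii)}: the Gram matrix of the tetrad stays $\mathrm{diag}(1,-1,-1,-1)$, and by property \textbf{(i)} the leg $\mathbf{e}_0$ tracks the 4-velocity.

```python
import math

# Minkowski inner product, signature (+,-,-,-).
def mink(u, v):
    return u[0]*v[0] - u[1]*v[1] - u[2]*v[2] - u[3]*v[3]

# Illustrative worldline: an observer on a circle of radius R with
# coordinate angular velocity Om in Minkowski space.
R, Om = 1.0, 0.3
gam = 1.0 / math.sqrt(1.0 - R*R*Om*Om)

def four_velocity(tau):
    phi = Om * gam * tau
    return [gam, -gam*R*Om*math.sin(phi), gam*R*Om*math.cos(phi), 0.0]

def four_acceleration(tau):
    phi = Om * gam * tau
    c = gam*gam*R*Om*Om
    return [0.0, -c*math.cos(phi), -c*math.sin(phi), 0.0]

def rhs(tau, e):
    # Fermi transport (eq. (2) with omega_ab = 0):
    #   de^b/dtau = -(a_c e^c) t^b + (t_c e^c) a^b
    u, a = four_velocity(tau), four_acceleration(tau)
    ae, ue = mink(a, e), mink(u, e)
    return [-ae*u[i] + ue*a[i] for i in range(4)]

def rk4_step(tau, e, h):
    k1 = rhs(tau, e)
    k2 = rhs(tau + h/2, [e[i] + h/2*k1[i] for i in range(4)])
    k3 = rhs(tau + h/2, [e[i] + h/2*k2[i] for i in range(4)])
    k4 = rhs(tau + h, [e[i] + h*k3[i] for i in range(4)])
    return [e[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(4)]

# Initial tetrad: e_0 = t^a plus three orthonormal space-like legs.
tetrad = [four_velocity(0.0),
          [0.0, 1.0, 0.0, 0.0],
          [gam*R*Om, 0.0, gam, 0.0],
          [0.0, 0.0, 0.0, 1.0]]

tau, h = 0.0, 0.01
for _ in range(2000):
    tetrad = [rk4_step(tau, e, h) for e in tetrad]
    tau += h

# Property (ii): the Gram matrix must remain diag(1,-1,-1,-1).
gram = [[mink(ek, el) for el in tetrad] for ek in tetrad]
```

Since the transport law is metric compatible, any drift of the Gram matrix away from $\mathrm{diag}(1,-1,-1,-1)$ measures only the integration error of the fourth-order scheme.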
In order to compute the travel time of a photon around $\gamma$ we need to find the null curve $\hat\gamma$ in $\mathscr{M}$ which projects to $\gamma$. The null tangent vector $l$ to $\hat\gamma$ can be written as $l = L\partial_\tau + v^i\partial_i$ with $L>0$. At each point of $\hat\gamma$ the equation \begin{equation} \label{eq:6} g_{00} L^2 + 2 g_{0i}L v^i + g_{ik}v^iv^k = 0 \end{equation} holds. It can be solved for $L$ \begin{equation} \label{eq:7} L = \frac1{g_{00}} \left(\sqrt{(g_{0i}g_{0k} - g_{00}g_{ik})v^iv^k} - g_{0k}v^k \right) \end{equation} where the sign was chosen so that $L>0$. We abbreviate this expression as $L(g,v)$. The null curve $\hat\gamma(s) = (\tau(s),\mathbf{x}(s))$, where $\mathbf{x}(s)$ is an abbreviation for a parametrization $(x^i(s))_{i=1:3}$ of $\gamma$, can now be obtained by solving the system of differential equations \begin{equation} \left. \begin{aligned} \dot \tau(s) &= L(\tau(s),\mathbf{x}(s)) , \\ \dot x^k(s) &= v^k(\mathbf{x}(s)) \end{aligned}\right\} \qquad \text{with } \tau(0) = 0, \quad \mathbf{x}(0) = \mathbf{x}_0.\label{eq:8} \end{equation} Since the solution of the spatial part of this equation is, by construction, the curve $\gamma$ we may regard $\mathbf{x}(s)$ and hence $\mathbf{v}(s) = \dot\mathbf{x}(s)$ as known in the time component of \eqref{eq:8}. Then, this equation is of the form $\dot\tau(s) = L(g(\tau(s),\mathbf{x}(s)),\mathbf{v}(s)) = f(\tau(s),s)$ which cannot be solved explicitly unless we know $f$, i.e., the curve $\gamma(s)$ and the metric components. For the sake of simplicity we take a parametrization of $\gamma$ with $s\in [0,1]$ so that $P=\gamma(0) = \gamma(1)$. Let $\tau(s)$ be the solution of \eqref{eq:8}. Then the travel time $T[\gamma]$ of the photon traveling along $\gamma$ is given formally by the integral \begin{equation} \label{eq:9} T[\gamma] = \tau(1) = \int_0^1 L(g(\tau(s),\mathbf{x}(s)),\mathbf{v}(s))\, \mathrm{d} s. 
\end{equation} Next, let $\tilde\gamma$ be the reversed curve parameterized by $\tilde\gamma(s) = \gamma(1-s)$ and let $\tilde\tau(s)$ be the solution of the equation \begin{equation} \dot{\tilde\tau}(s) = L(g(\tilde\tau(s),\tilde\mathbf{x}(s)),\tilde\mathbf{v}(s)), \quad \tilde\tau(0) = 0.\label{eq:10} \end{equation} Then we obtain the travel time of a photon traveling in the opposite direction as \[ T[\tilde\gamma] = \tilde\tau(1) = \int_0^1 L(g(\tilde\tau(s),\tilde\mathbf{x}(s)),\tilde\mathbf{v}(s))\, \mathrm{d} s. \] Thus, the time difference is $\Delta_\gamma T = T[\gamma] - T[\tilde\gamma] = \tau(1) - \tilde\tau(1)$. In order to evaluate this formula for a given curve $\gamma$ in a space-time with a given metric $g$ we need to solve the differential equations \eqref{eq:8} and \eqref{eq:10}. We can rewrite this formula in a slightly different way. Using the relationship between the two paths we get $\tilde\mathbf{x}(s) = \mathbf{x}(1-s)$ and $\tilde\mathbf{v}(s) = - \mathbf{v}(1-s)$ and hence \[ L(g(\tilde\tau(s),\tilde\mathbf{x}(s)),\tilde\mathbf{v}(s)) = L(g(\tilde\tau(s),\mathbf{x}(1-s)),-\mathbf{v}(1-s)) \] and the travel time becomes \begin{equation} \label{eq:11} T[\tilde\gamma] = \int_0^1 L(g(\tilde\tau(1-s),\mathbf{x}(s)),-\mathbf{v}(s))\, \mathrm{d} s. \end{equation} Thus, the time difference $\Delta_\gamma T$ is given as one integral \begin{equation} \label{eq:12} \Delta_\gamma T = \int_0^1 L(g(\tau(s),\mathbf{x}(s)),\mathbf{v}(s)) - L(g(\tilde\tau(1-s),\mathbf{x}(s)),-\mathbf{v}(s))\,\mathrm{d} s. \end{equation} \section{The Sagnac effect in Fermi coordinates} \label{sec:sagnac-effect-fermi} The general expression~(\ref{eq:12}) for the time difference due to the Sagnac effect is implicit and too general to allow any detailed statements. One could start to approximate the solutions $\tau$ and $\tilde\tau$ by iterating the differential equations but this would lead to complicated formulae and probably to the same results as what we will do next.
To proceed further we introduce Fermi coordinates adapted to the observer world-line $O$. Then the metric coefficients up to terms cubic in $x$ are (see app.~\ref{sec:fermi-coordinates}) \begin{align} g_{00} &= 1 - 2 a_l x^l + 3 (a_mx^m)^2 + \omega_{im} \omega^i{}_n x^m x^n + R_{m0n0} x^m x^n + O(x^3),\label{eq:13}\\ g_{0k} &= \omega_{kl} x^l + \frac23 R_{m0nk} x^m x^n+ O(x^3),\label{eq:14}\\ g_{kl} &= -\delta_{kl} + \frac13 R_{mlnk} x^m x^n+ O(x^3).\label{eq:15} \end{align} Here, $\omega_{ik}$, $a_k$ and the Riemann coefficients are functions of $\tau$ defined on $O$. In order to proceed we will now make the following two reasonable but significant approximations: \begin{itemize}[wide] \item The expected time difference $\Delta T$ is small compared to the time scale of changes in the reference frame and the gravitational field. \item The path $\gamma$ is not ``too extended'' so that we remain in the $O(x^3)$ neighbourhood of the coordinate system. \end{itemize} By the first assumption we can take $a^k$, $\omega_{ik}$ and the Riemann tensor coefficients as time independent. Then the metric coefficients are time independent so that $L(g(\tau,\mathbf{x}),\mathbf{v}) = L(g(\mathbf{x}),\mathbf{v})$. Inserting this into the formula for $\Delta_\gamma T$ yields \[ \begin{aligned} \Delta_\gamma T &= \int_0^1 L(g(\mathbf{x}(s)),\mathbf{v}(s)) - L(g(\mathbf{x}(s)),-\mathbf{v}(s))\,\mathrm{d} s \\ &= -2 \int_0^1 \frac{g_{0i}}{g_{00}}v^i\, \mathrm{d} s \end{aligned} \] since the terms involving the square root cancel each other. We may express this formula in terms of the \emph{Sagnac 1-form} ${\boldsymbol{\sigma}} = -2\frac{g_{0i}}{g_{00}}\mathrm{d} x^i$ as \begin{equation} \label{eq:16} \Delta_\gamma T = \int_\gamma {\boldsymbol{\sigma}} = \int_{S} \mathrm{d} {\boldsymbol{\sigma}} \end{equation} where $S$ is any spanning 2-surface bounded by $\gamma$\footnote{Such surfaces always exist.
One particular class consists of minimal surfaces which can be constructed as solutions of Plateau's problem~\cite{Douglas:1931bf}}. By the second assumption, the metric is given in the ${\mathcal{O}}(x^3)$ neighbourhood defined by Fermi coordinates by the equations~(\ref{eq:13}-\ref{eq:15}). Up to cubic terms we find \[ \frac1{g_{00}} = 1 + 2 a_l x^l + (a_mx^m)^2 - \omega_{im} \omega^i{}_n x^m x^n - R_{m0n0} x^m x^n + O(x^3) \] and the Sagnac form becomes \begin{equation} -2\frac{g_{0l}}{g_{00}} = -2\omega_{lk}x^k + 4 (a_mx^m) \omega_{lk}x^k - \frac43 R_{m0nl}x^m x^n + O(x^3).\label{eq:17} \end{equation} These are three terms with different characteristic properties. We discuss them in order in the next section. \section{Discussion of the contributions} \label{sec:discussion-effects} In this section we will make use of the usual 3-vector notation writing $a^k$, $x^k$ as $\mathbf{a}$, $\mathbf{x}$ etc. This is justified by the fact that the vector field $x^i\partial_i$ behaves like the Euclidean position vector $\mathbf{x}$ to the order we are interested in. We also introduce the angular velocity vector ${\boldsymbol{\omega}}$ with components $\omega^i = \frac12\epsilon^{ikl}\omega_{kl}$ and the area form $\mathrm{d}^2S$ of a surface $S$ by $\mathrm{d} x^l\wedge\mathrm{d} x^k = \epsilon^{lkm}n_m\,\mathrm{d}^2S$, where $n_k$ is the unit-normal to $S$. Also, we note that we raise and lower spatial indices $i,j,k,\ldots$ with $\delta_{ik} = -\eta_{ik}$. This allows us to also make use of the notation $\mathbf{x}\cdot \mathbf{a} = x_i a^i$ for the usual Euclidean inner product. \subsection{The pure rotation effect} \label{sec:pure-rotation-effect} Consider the first term in \eqref{eq:17}. It depends only on the angular velocity and contributes a time difference of \[ \Delta_\omega T = -2\int_\gamma\omega_{lk}x^k \mathrm{d} x^l = 2\int_S \omega_{lk}\mathrm{d} x^l \wedge \mathrm{d} x^k = 4 \int_S ({\boldsymbol{\omega}}\cdot \mathbf{n})\, \mathrm{d}^2S. 
\] Thus, the time difference due to this term is a multiple of the ``rotation flux'' through the surface $S$ spanned by $\gamma$. This contribution is invariant under translations~\cite{Schwartz:2017jx}, i.e., under the transformation $x^k \mapsto x^k + q^k$ for constant $q^k$ and rotations $x^k \mapsto \alpha^k{}_l x^l$, where $\alpha^k{}_l$ is a constant orthogonal matrix. This is the classical Sagnac effect as described in~\cite{Sagnac:1913tz} for, if we assume that $\omega_{ik}$ describes a rotation around the $3$-axis, i.e., when ${\boldsymbol{\omega}} = \omega \mathbf{e}_3$ and that, further, $\gamma$ is a simple closed curve in the $(12)$-plane then we obtain \[ \Delta_\omega T = 4 \omega \int_S (\mathbf{n}\cdot \mathbf{e}_3)\,\mathrm{d}^2S = \pm4 \omega\, \text{area}(S). \] Of course, the sign depends on the relative orientation between the angular velocity and the curve~$\gamma$. It is easy to design curves for which the time difference vanishes. Trivial examples are curves which lie in a plane parallel to the rotation axis. Non-trivial curves have the shape of a figure eight or something more complicated when projected parallel to the rotation axis. These curves give rise to the ``zero-area'' Sagnac configurations~\cite{Bond:2017vy}. More complicated examples are straightforward to construct. \subsection{The acceleration dependent effect} \label{sec:accel-depend-effect} Next, we consider the second term in~\eqref{eq:17}. This term, which vanishes when $a^k=0$, contributes a time difference \[ \Delta_a T = 4 \int_\gamma (a_mx^m) \omega_{lk}x^k \mathrm{d} x^l = -8 \int_S a_{(k} \omega_{m)l} x^m \mathrm{d} x^k\mathrm{d} x^l . \] Again, this can be written as the flux of a vector field $\mathbf{V}$ through the surface $S$ spanned by $\gamma$. 
In this case, the vector field is $\mathbf{V} = (\mathbf{a} \cdot {\boldsymbol{\omega}}) \mathbf{x} - 3 (\mathbf{a} \cdot \mathbf{x}) {\boldsymbol{\omega}}$ so that \begin{equation} \label{eq:18} \Delta_a T = 4 \int_S (\mathbf{a} \cdot {\boldsymbol{\omega}}) (\mathbf{n}\cdot \mathbf{x}) - 3 (\mathbf{a} \cdot \mathbf{x}) ({\boldsymbol{\omega}}\cdot\mathbf{n})\,\mathrm{d}^2S. \end{equation} In contrast to the pure rotation effect, the acceleration effect is \emph{not} translation invariant: under a translation $\mathbf{x}\mapsto \mathbf{x} + \mathbf{q}$ the time difference changes according to \[ \Delta_a T \mapsto \Delta_a T + 4 \int_S (\mathbf{a} \cdot {\boldsymbol{\omega}}) (\mathbf{n}\cdot \mathbf{q}) - 3 (\mathbf{a} \cdot \mathbf{q}) ({\boldsymbol{\omega}}\cdot\mathbf{n})\,\mathrm{d}^2S. \] This means that we can make the time difference vanish by shifting the curve. \subsection{The gravitational effect} \label{sec:gravitational-effect} Finally, we come to the third term which is entirely due to the gravitational field in the form of the Riemann tensor. Before the detailed discussion we need to briefly digress to introduce the decomposition of the Riemann tensor $R_{abc}{}^d$ into the Schouten or ``Rho'' tensor $P_{ab}$ (representing the Ricci tensor~\cite{Penrose:1984wm}) and the Weyl tensor $C_{abc}{}^d$. \[ R_{ab}{}^{cd} = C_{ab}{}^{cd} -4 \delta_{[a}{}^{[c} P_{b]}{}^{d]}. \] The right dual of the Riemann tensor is \[ R^\star_{abcd} = C^\star_{abcd} - \epsilon_{abc}{}^{e} P_{ed} + \epsilon_{abd}{}^e P_{ce} . \] With $t^a$ the time-like 4-velocity of the observer we obtain \[ t^ct^d R^\star_{acbd} = t^ct^dC^\star_{acbd} - t^c t^d \epsilon_{acb}{}^{e} P_{ed}. \] With the relationship $P_{ab} = -\frac12 \left(G_{ab} - \frac16 g_{ab} G\right)$ and the Einstein equation $G_{ab} = - 8\pi T_{ab}$ this becomes \[ t^ct^d R^\star_{acbd} = B_{ab} - 4\pi t^dt^c\epsilon_{abc}{}^{e} \left(T_{de} - \frac16 g_{de} T\right) = B_{ab} - 4\pi \epsilon_{ab}{}^{c}j_c.
\] Here, we have used the definitions $\epsilon_{abc} = t^e\epsilon_{eabc}$, $t^eT_{ea} = j_a$ and $B_{ab} = C^\star_{acbd}t^ct^d$ for the 3-dimensional volume form, the momentum density and the magnetic part of the Weyl tensor with respect to the time-direction $t^a$. Expressed in terms of the Fermi coordinates this equation becomes\footnote{keeping in mind that indices $i$, $k$ etc.\ are moved with $\delta_{ik} = -\eta_{ik}$.} \begin{equation} \label{eq:19} R^\star_{i0k0} = B_{ik} + 4\pi \epsilon_{ik}{}^{l}j_l. \end{equation} The time delay from the Riemann tensor is \[ \Delta_R T = \frac43 \int_\gamma R_{0mnl}x^m x^n \,\mathrm{d} x^l = 4 \int_S R_{0(mn)l}x^n\mathrm{d} x^m\wedge\mathrm{d} x^l. \] It can be expressed in terms of the right dual of the Riemann tensor \begin{equation} \label{eq:20} \Delta_R T = 4 \int_S R^\star_{0n}{}^{0i}x^nn_i\mathrm{d}^2S \end{equation} which in turn, using~\eqref{eq:19}, can be cast into the form \begin{equation} \label{eq:21} \Delta_R T = 4 \int_S B_{i}{}^k x^in_k\mathrm{d}^2S - 16\pi \int_S (\mathbf{x} \times \mathbf{j})\cdot \mathbf{n}\,\mathrm{d}^2S. \end{equation} This shows that the gravitational Sagnac effect in the first order is entirely due to ``magnetic'' interaction, both in the gravitational wave part due to $B_{ab}$ and the matter part due to the flux of angular momentum density through $S$. Like the acceleration effect, the gravitational effect is not translation invariant. \section{Some example paths} \label{sec:some-example-paths} In order to get some idea about how different shapes of paths influence the time delay we now consider a restricted class of paths, 3-dimensional Lissajous curves, given in parameterised form by \begin{equation} \label{eq:22} {\boldsymbol{\gamma}}(s) = \begin{bmatrix} A_1 \sin (l s + \alpha_1)\\ A_2 \sin (m s + \alpha_2)\\ A_3 \sin (n s + \alpha_3) \end{bmatrix}, \qquad s,\alpha_1,\alpha_2,\alpha_3 \in [0, 2\pi], \quad l,m,n \in \mathbb{Z}.
\end{equation} We assume these curves have period $2\pi$ which implies $\gcd(l,m,n) = 1$, i.e., $l$, $m$ and $n$ are relatively prime. It is straightforward to insert the parameterisation into the expressions for the time delay. Again, we discuss the different contributions sequentially. \subsection{The rotation term} \label{sec:rotation-term} By choosing the axes of the frame appropriately we can arrange that ${\boldsymbol{\omega}} = \omega\,\mathbf{e}_3$ and then the time delay due to $\omega$ becomes \begin{equation} \label{eq:23} \Delta_\omega T = 4\pi m\omega\,A_1 A_2 \left(\delta_{l,-m}\sin(\alpha_1+\alpha_2) + \delta_{l,m} \sin(\alpha_1-\alpha_2) \right). \end{equation} This shows that $\Delta_\omega T$ vanishes unless the projection of the curve perpendicular to ${\boldsymbol{\omega}}$ is a non-degenerate ellipse. Choosing $l\ne \pm m$ yields ``zero-area'' paths. Since the behaviour of the curve in the direction of ${\boldsymbol{\omega}}$ is irrelevant, these paths can be chosen without self-intersections. \subsection{The acceleration term} \label{sec:acceleration-term} Keeping ${\boldsymbol{\omega}}$ along the $\mathbf{e}_3$ axis we can rotate the frame around ${\boldsymbol{\omega}}$ to make the acceleration vector $\mathbf{a}$ lie in the plane spanned by $\mathbf{e}_1$ and $\mathbf{e}_3$. Then we can write $\mathbf{a} = a(\mathbf{e}_3 + \lambda \mathbf{e}_1)$ for some real $a$ and $\lambda$.
With these simplifications we can write the contribution of the acceleration term due to the class of curves~\eqref{eq:22} as $\Delta_a T = 8\pi A_1A_2 a \,\omega J(l,m,n)$ where \begin{equation} \label{eq:24} \begin{aligned} J(l,m,n) &= A_1 \lambda \left[(l-m) \cos (2 \alpha_1+\alpha_2)\,\delta_{m,-2l} - (l+m) \cos (2 \alpha_1-\alpha_2)\,\delta_{m,2l} \right] \\ + &A_3 \left[(l+m) \bigl(\cos (\alpha_1-\alpha_2-\alpha_3)\, \delta_{n,l-m} - \cos(\alpha_1-\alpha_2+\alpha_3)\,\delta_{n,m-l}\bigr)\right.\\ +& \left.(l-m) \bigl(\cos (\alpha_1+\alpha_2+\alpha_3) \delta_{n,-(l+m)} - \cos (\alpha_1+\alpha_2-\alpha_3)\,\delta_{n,l+m}\bigr)\right]. \end{aligned} \end{equation} The first two terms in this expression are due to the misalignment of angular velocity vector and acceleration. They vanish for $\lambda =0$. Let us first discuss this case. Then the time delay is proportional to $A_1A_2A_3$, i.e., to the volume of the rectangular box which contains the space curve. It vanishes unless at least one of the four equations \begin{equation} \label{eq:25} n+l+m = 0,\quad n+l-m = 0,\quad n-l+m = 0,\quad n-l-m = 0 \end{equation} holds. It is easy to see that for a non-degenerate curve these equations cannot all hold simultaneously. It is also not possible for just one of them to be violated. Thus, at most two of the equations can hold simultaneously. When two equations hold, it follows that one integer must vanish, while the other two are equal in magnitude and then they must be equal to $\pm1$. In these cases, the curve is planar, being contained in a plane perpendicular to one of the coordinate axes. If this plane is perpendicular to the $\mathbf{e}_i$-axis, then it is a distance $A_i\sin(\alpha_i)$ away from the origin. Thus, one can make the time-delay vanish by choosing $A_i=0$ or $\alpha_i=0$. This is a consequence of the translation dependence of the acceleration term.
Due to the geometry, the case $n=0$ is different from $l=0$ which, in turn, is equivalent to $m=0$. In the former case, we obtain for $l=m=1$ (the case $l=-m$ can be obtained by reversing the orientation of the curve and replacing $\alpha_2$ by its negative) \[ J(1,1,0) = 4 A_3 \sin(\alpha_1-\alpha_2)\sin(\alpha_3) \] while the case $l=0$ with $n=m=1$ yields \[ J(0,1,1) = 2 A_3 \sin(\alpha_1-\alpha_3)\sin(\alpha_2), \] the case $m=-1$ again corresponding to an orientation reversal. For the general case, when only one of equations~\eqref{eq:25} holds, we may take as an example $n=l+m$. Then $l$ and $m$ are non-zero and relatively prime and the corresponding curve is non-planar. Its contribution becomes \[ J(l,m,l+m) = -A_3(l-m) \cos(\alpha_1+\alpha_2-\alpha_3), \] which is non-zero unless the phases are chosen in a very specific way. When angular velocity and acceleration are not aligned then there are two additional possible terms in~\eqref{eq:24}. They are proportional to $\lambda$ and they contribute only when $m=\pm2l$. This condition does not involve $n$ which can therefore be chosen so that one of the equations~\eqref{eq:25} is satisfied. One possibility is $l=1$, $m=2$, $n=3$ which yields \begin{equation} \label{eq:26} J(1,2,3) = -3 A_1 \lambda \cos(2\alpha_1-\alpha_2) + A_3 \cos(\alpha_1+\alpha_2-\alpha_3). \end{equation} \subsection{The gravitational term} \label{sec:gravitational-term} As mentioned in sec.~\ref{sec:gravitational-effect} this term contains two contributions, one due to the Weyl tensor and another due to the matter. We first discuss the Weyl term. It is mediated by the magnetic part $B_{ik}$ of the Weyl tensor. Let us assume that this term is due to a gravitational wave propagating in the $\mathbf{e}_3$ direction. Then $B_{ik}$ has the form \[ B_{ik} = \begin{bmatrix} a_1 & a_2 & 0 \\ a_2 & -a_1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \] for some real constants $a_1$ and $a_2$. 
Inserting the parameterisation for the curves~\eqref{eq:22} we find the time delay for the Weyl contribution to be \begin{equation} \label{eq:27} \begin{aligned} \Delta_B T = \frac{3\pi}2 a_1 A_1 A_2 A_3 n \;\bigl( &\cos(\alpha_1 - \alpha_2 - \alpha_3)\;\delta_{n,l-m} + \cos(\alpha_1 - \alpha_2 + \alpha_3)\;\delta_{n,m-l}\\ - &\cos(\alpha_1 + \alpha_2 - \alpha_3)\;\delta_{n,l+m} - \cos(\alpha_1 + \alpha_2 + \alpha_3)\;\delta_{n,-l-m}\bigr)\\ +\frac{3\pi}2 A_3 a_2\bigl( l A_1^2 &\left[\cos(2 \alpha_1 - \alpha_3)\,\delta_{n,2l} - \cos(2 \alpha_1 + \alpha_3)\,\delta_{n,-2l} \right]\\ + m A_2^2 &\left[\cos(2 \alpha_2 + \alpha_3)\,\delta_{n,-2m} - \cos(2 \alpha_2 - \alpha_3)\delta_{n,2m}\right]\bigr). \end{aligned} \end{equation} This comes in two pieces, each corresponding to a different polarisation state of the wave. The first is proportional to $a_1$ and corresponds to the $+$-polarisation. Its ``signature'' is the same as the one for the aligned acceleration case --- they are non-zero for the same class of curves. As an example we pick a curve with $n=l+m$ and obtain \begin{equation} \label{eq:28} \Delta_+ T = -\frac{3\pi}2 a_1 A_1 A_2 A_3 n \; \cos(\alpha_1 + \alpha_2 - \alpha_3). \end{equation} The $\times$-polarisation contributes the term proportional to $a_2$. It has the same signature as the misaligned acceleration case. It is non-zero only if the curve has a figure eight projection in a direction perpendicular to the propagation of the wave. Choosing $n=2l\ne2|m|$ yields the contribution \begin{equation} \label{eq:29} \Delta_\times T = \frac{3\pi}2 A_1^2 A_3 a_2 l \cos(2 \alpha_1 - \alpha_3). \end{equation} \section{Stationary space-times} \label{sec:stat-space-times} As a further application we discuss the Sagnac formula~\eqref{eq:12} in a stationary space-time $\mathscr{M}$ where we have a time-like Killing vector $\xi^a$.
The length of the Killing vector is a scalar function on $\mathscr{M}$ defined by \begin{equation} \label{eq:30} \xi_a\xi^a = {\mathrm{e}}^{2U} \end{equation} and we have the following relations \begin{equation} \label{eq:31} {\mathscr{L}}_\xi g_{ab} = 2\nabla_{(a}\xi_{b)} = 0, \quad \xi^a\nabla_aU = 0, \quad \nabla_a\xi_b = 2 \nabla_{[a}U \xi_{b]} + \omega_{ab}, \end{equation} where $\omega_{ab} = -\omega_{ba}$ and $\xi^a\omega_{ab} = 0$. We write $t^a:={\mathrm{e}}^{-U}\xi^a$ for the unit-vector in the direction of $\xi^a$. We also pick one integral curve $O$ of $\xi^a$. Since $U$ is constant along $O$ we can scale $\xi^a$ to become a unit-vector along $O$. With $t$ the parameter along $\xi^a$, i.e., $\xi^a\nabla_at=1$, we find that $t$ measures proper time for an observer along $O$, i.e., with 4-velocity $t^a = \xi^a$. The metric can be written in the form \[ g = g_{00} \mathrm{d} t^2 + 2 g_{0k}\mathrm{d} t \mathrm{d} x^k + g_{ik} \mathrm{d} x^i \mathrm{d} x^k \] with $\partial_tg_{\mu\nu}=0$ and $g_{00}={\mathrm{e}}^{2U}$. With transformations of the form $t\mapsto t+\alpha_ix^i$ for constants $\alpha_i$ we can arrange that $g_{i0}=0$ on $O$ and $x^i \mapsto x^i + \beta^i{}_kx^k$ for constants $\beta^i{}_k$ achieves that $g_{ik} = - \delta_{ik}$ on $O$. In terms of these coordinates, $\xi^a \doteq \partial_t$ and $\xi_a \doteq g_{0\mu}\mathrm{d} x^\mu$. Furthermore, the observer along $O$ is accelerated since \[ a_b = t^a\nabla_at_b = {\mathrm{e}}^{-2U} \xi^a\nabla_a \xi_b = -\nabla_b U. \] We now set up a \emph{stationary frame} for the observer on $O$ by choosing in a neighbourhood of $O$ three vector fields $\mathbf{e}_i$ which, together with $t^a$, form an orthonormal basis along $O$ and which are invariant under $\xi^a$, i.e., for which ${\mathscr{L}}_\xi \mathbf{e}_i=0$ holds. These vector fields can be chosen to be the coordinate vector fields $\partial_i$. We are now in the same situation as for the derivation of the Sagnac formula~\eqref{eq:12}.
Due to the stationarity of the space-time there is no dependence on $t$ and we can evaluate the integrals as before resulting in the same formula \begin{equation} \label{eq:32} \Delta T = -2 \int_\gamma \frac{g_{0i}}{g_{00}}\,\mathrm{d} x^i, \end{equation} except that now this formula is exact. The integrand is easily identified as the pull-back to the curve of the $1$-form $\alpha_a:=\xi_a/(\xi_c\xi^c)$, the ``inverted Killing vector''. Using the Stokes theorem as before we can write the integral as a surface integral over a spanning surface $S$ for the curve $\gamma$ of the $2$-form \[ \nabla_{[a}\alpha_{b]} = \frac{\nabla_{[a}\xi_{b]}}{\xi_c\xi^c} - \frac{\xi_{[b}\nabla_{a]} \left( \xi_c\xi^c\right)}{(\xi_c\xi^c)^2} = {\mathrm{e}}^{-2U} \left( \nabla_{[a}\xi_{b]} - 2\xi_{[b}\nabla_{a]} U\right) = {\mathrm{e}}^{-2U} \omega_{ab}. \] The time difference therefore becomes \begin{equation} \label{eq:33} \Delta T = -2 \int_S {\mathrm{e}}^{-2U}\omega_{ik}\mathrm{d} x^i\wedge\mathrm{d} x^k. \end{equation} The quantity $\omega_{ik}$ is in fact closely related to the angular velocity of the stationary frame with respect to a locally non-rotating Fermi transported frame. This can be seen by comparing Fermi- and Killing transport: let $v^a$ be Lie dragged along the Killing vector so that $\xi^c\nabla_cv^a = v^c\nabla_c \xi^a$ holds. We compute the Fermi derivative of $v^a$ along the unit-vector $t^c$ \[ \begin{multlined} \mathcal{F}_t v^a = t^c\nabla_c v^a + t^aa_cv^c - a^at_c v^c = {\mathrm{e}}^{-U}\left( v^c\nabla_c \xi^a + \xi^aa_cv^c - a^a\xi_c v^c\right) \\= {\mathrm{e}}^{-U}\left( v^c(\xi^a \nabla_cU - \xi_c \nabla^aU + \omega_c{}^a )+ \xi^aa_cv^c - a^a\xi_c v^c\right) = {\mathrm{e}}^{-U}\left( v^c\omega_c{}^a\right) \end{multlined} \] which shows that the angular velocity of the stationary frame with respect to the Fermi frame is $-{\mathrm{e}}^{-U}\omega_{ik}$. In contrast to the discussion in sect.~\ref{sec:gener-sagn-effect}, here the formula is exact. 
The acceleration terms which appear there correspond to the factor ${\mathrm{e}}^{-2U}$ here. This factor partly corrects for the difference between the Killing time $t$ and proper time, and partly serves to introduce the ``gravitational force'' $\nabla_aU$ which is responsible for the acceleration. \section{Light in a moving medium and the Fizeau experiment} \label{sec:fizeau-experiment} As a final example we consider Minkowski space $\mathbb{M}$ with its flat metric $\eta_{ab}$ filled with a homogeneous, isotropic medium which is transparent and without dispersion. Then the light rays move with a different velocity $\bar{c}$ which is related to the speed of light in vacuum $c$ by $\bar{c} = c/n$ where $n$ is the refractive index of the material, defined in terms of its permittivity $\epsilon$ and permeability $\mu$. By assumption these, and therefore $n$, are constant. The material is described by a 4-velocity $u^a$ with $u_au^a = 1$. Let $t^a$ be the 4-velocity of an observer. To simplify things we assume that $t^a$ is covariantly constant. Together with a spatial frame of covariantly constant unit-vectors, $t^a$ forms a basis and we can introduce global Cartesian coordinates $(t,x^i)$ on $\mathbb{M}$. Following \cite{Gordon:1923vm} and \cite{Ehlers:1967dk} we describe the motion of the light by null geodesics with respect to the ``optical metric'' \begin{equation} g_{ab} = \eta_{ab} - u_a u_b (1-1/n^2).\label{eq:34} \end{equation} The observer splits the matter 4-velocity $u^a$ into time and space components\footnote{We use the Minkowski metric $\eta_{ab}$ for raising and lowering indices.}, \[ u^a = \gamma \left(t^a + v^a\right), \qquad\text{with } \gamma = (1-v^2)^{-\tfrac12}, \quad t_av^a=0 \] where we have defined $v^2 := -v_av^a = \mathbf{v}\cdot\mathbf{v}$. The 4-velocity $t^a$ is a time-like Killing vector for $\eta_{ab}$ and, assuming that $u^a$ is Lie dragged along $t^a$, the optical metric has the property that ${\mathscr{L}}_t g_{ab}=0$. 
Thus, we are in the situation of sect.~\ref{sec:stat-space-times} describing a stationary system. Considering a spatial path $\gamma$ traversed by light in opposite directions we find in general the time difference given by~\eqref{eq:32}. Evaluating the metric coefficients we find \[ g_{00} = 1-(1-1/n^2) \gamma^2, \qquad g_{0i} = -(1-1/n^2)\gamma^2 v_i. \] If the material velocity is such that its spatial part $v^a$ has closed stream-lines, then we can take the path $\gamma$ of the light to be parallel to a stream-line of length $L$, traversing it parallel to the motion of the medium. Thus, we may write $v^i = v \dot x^i$ if we assume that $\gamma$ is parametrised by arc-length. Finally, let us assume that $v$ is constant along a stream-line; then we obtain for the time difference for the light moving along $\gamma$ in opposite directions \[ \Delta T = 2 \int_\gamma \frac{(1-1/n^2)\gamma^2}{ 1-(1-1/n^2) \gamma^2} v_i\dot{x}{}^i(s) \, \mathrm{d} s = - 2 \frac{(1-1/n^2)\gamma^2}{ 1-(1-1/n^2) \gamma^2} v L. \] This formula can be simplified to \begin{equation} \label{eq:35} \Delta T = -2L v \frac{n^2 - 1}{1 - n^2 v^2}. \end{equation} This setup describes the classical ``aether-drag'' experiment by Fizeau~\cite{Fizeau:1851ta} to determine the difference of the speed of light in a medium moving in opposite directions. For a very nice summary of that experiment and the classical theoretical background we refer to the paper by Lahaye et al.~\cite{Lahaye:2012fa}. The classical derivation makes use of the special relativistic addition formula for velocities to find the speed of light (in units of $c$) in (opposite to) the direction of a moving medium (water) as \[ v_{\pm} = \frac{1/n \pm v}{1\pm v/n}. \] Therefore, the difference in travel time along a path of length $L$ in opposite directions is \[ \Delta T = \frac{L}{v_+} - \frac{L}{v_-} = -2L v \frac{n^2 - 1}{1 - n^2 v^2} \] in complete agreement with~\eqref{eq:35}. 
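For completeness, the simplification leading to~\eqref{eq:35} follows by inserting $\gamma^2 = (1-v^2)^{-1}$ into the prefactor,
\[
\frac{(1-1/n^2)\gamma^2}{1-(1-1/n^2)\gamma^2}
= \frac{1-1/n^2}{(1-v^2)-(1-1/n^2)}
= \frac{1-1/n^2}{1/n^2-v^2}
= \frac{n^2-1}{1-n^2v^2},
\]
which also makes the agreement between the relativistic and the classical derivation manifest.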
This shows that the effects of Sagnac and Fizeau are merely two sides of the same coin, a fact which seems to have been suspected for some time, see e.g.,~\cite{Leeb:1979wa}. Obviously, the formula~\eqref{eq:35} given above can be readily generalised to non-homogeneous media and cases where the light path is not aligned with a closed stream-line. However, it may not be possible to give a closed form expression. \section{Conclusion} \label{sec:conclusion} In this paper we derived the Sagnac effect, i.e., the difference in travel time for light moving in opposite directions along the same spatial path, from first principles within Einstein's general theory of relativity. The resulting formula is difficult to evaluate in full generality since one needs to solve a differential equation along the path. We have considered several special cases where the evaluation is possible. The first case addressed a general space-time but considered only a small neighbourhood around an observer. Introducing Fermi coordinates and assuming that the travel time of the light is much smaller than the time scale for changes in the observer's frame, we were able to give a closed expression for the time difference. Within the approximation used, there are three contributions to this time difference: the first one is the classical Sagnac effect caused by the rotation of the reference frame. The next order term is caused by a combination of the rotation and the acceleration of the frame and, in the same order, there is a contribution from the curvature of the space-time. The structure of the terms is such that one can set up (combinations of) light paths which are sensitive to one single term only. One possible application of this could be to measure acceleration and rotation of a reference frame or, alternatively, to measure gravitational wave signals. Having said that, one should point out that we have not at all discussed the size of the expected time differences in concrete situations. 
We should also point out that our approach cannot immediately accommodate experiments in the spirit of Wang~\cite{Wang:2004ef}, since we assume that the path is contained in a hyper-surface of constant time. In contrast, in the Wang setup the path is allowed to change its shape during the experiment. However, it should be possible to include this effect in a more or less straightforward way within our framework. The second case we discussed involved stationary space-times. We found that the time difference there was caused by the flux of the curl of the ``inverted Killing vector'', which amounts to the rescaled rotation part of the Killing vector. In this case, the time difference is due to the appropriately measured angular velocity of the stationary frame with respect to a freely falling frame. Again, without looking at the size of the effect, this could provide a means to measure the dragging of inertial frames in a rotating gravitational system. Finally, we specialised the stationary case to a moving homogeneous and isotropic medium. We showed that with the use of the appropriate optical metric it is possible to reproduce the classical explanation for the Fizeau experiment, which demonstrated the dependence of the speed of light on the relative motion between medium and observer. \section{Acknowledgments} \label{sec:acknowledgment} I wish to thank the CNRS of France for a visiting position at the Département de Mathématiques at Université de Bourgogne in Dijon, France where some of this research was carried out. My thanks also go to Eyal Schwartz for sparking my interest in the Sagnac effect and to Niels Kjaergaard for pointing me to reference~\cite{Lahaye:2012fa}.
\section{Appendix: Additional Experimental Results} \subsection{Perturb One Feature for CERT} In this section, we present the experimental results of applying different anomaly explanation methods by perturbing only one of the twelve features in the CERT data set as an injected anomaly in Figure \ref{fig:onepert}. The left column shows the contributions of each feature calculated by the anomaly explanation method, and the right column the KL-divergence between the distribution of the calculated contributions and the uniform distribution. As we can see from Figure \ref{fig:onepert}, ACE and ACE-KL perform well across all six examples consistently, while Autoencoder and LIME fail to capture the contribution of the anomaly in some cases even when there is only one anomalous feature. \begin{figure}[!ht] \centering \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figs/cert/0.pdf} \caption{Feature 0 is perturbed.} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figs/cert/2.pdf} \caption{Feature 2 is perturbed.} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figs/cert/4.pdf} \caption{Feature 4 is perturbed.} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figs/cert/6.pdf} \caption{Feature 6 is perturbed.} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figs/cert/8.pdf} \caption{Feature 8 is perturbed.} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figs/cert/10.pdf} \caption{Feature 10 is perturbed.} \end{subfigure} \caption{Feature contribution calculated by different methods on 6 synthetic examples, where each of them has one feature perturbed. 
The left side shows the contributions of each feature calculated using each method, and the right side the KL-divergence for each method.} \label{fig:onepert} \vspace{-1em} \end{figure} \subsection{Perturb Two Features for CERT} In this section, we present the experimental results of applying different anomaly explanation methods by perturbing two of the twelve features in the CERT data set as an injected anomaly in Figure \ref{fig:twopert}. As previously described, the left column shows the contributions of each feature calculated by each anomaly explanation method, and the right column the KL-divergence between the distribution of the calculated contributions and the true distribution. As seen from Figure \ref{fig:twopert}, ACE and ACE-KL perform well across all four examples consistently, while Autoencoder only captures the second anomaly in the third and the fourth examples, and LIME fails to capture any of the anomalies accurately, with a higher KL-divergence from the true distribution. These results further empirically support our claim that LIME is not suitable for anomaly explanation in the security domain while ACE and ACE-KL are very powerful tools in this application domain. \begin{figure}[!ht] \centering \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figs/cert/0-1.pdf} \caption{Features 0, 1 are perturbed.} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figs/cert/0-2.pdf} \caption{Features 0, 2 are perturbed.} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figs/cert/0-3.pdf} \caption{Features 0, 3 are perturbed.} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figs/cert/0-4.pdf} \caption{Features 0, 4 are perturbed.} \end{subfigure} \caption{Feature contribution calculated by different methods on 4 synthetic examples, where each of them has two features perturbed. 
The left side shows the contributions of each feature calculated using each method, and the right side the KL-divergence for each method.} \label{fig:twopert} \vspace{-1em} \end{figure} \section{Conclusions} In this paper we proposed methods for explaining results of complex security anomaly detection models in terms of feature contributions, which we define as the percentage of a particular feature contributing to the anomaly score. Based on our experimental results on synthetic and real data sets, we demonstrated that ACE consistently outperforms the baseline approaches for anomaly detection explanation. ACE-KL helps provide a simpler explanation focusing on the most significant contributors. Both approaches have valuable applications in the area of anomaly detection explanation in security. In the future, we plan to further validate our approach in other security problems and other domains. \section{Experiments and Results} \subsection{Data sets} We validate our methods on three security-related data sets. The first data set is the CERT Insider Threat v6.2 (abbreviated as CERT) \cite{lindauer2014,glasser2013}. It is a synthetically generated, realistic data set consisting of application-level system logs, such as HTTP get/post requests, emails and user login/logout events. The second data set contains \textit{pcap} traces from UNB \cite{Shiravi2012}, which we converted to netflow logs using \textit{nfdump}. It is partially labelled with port scanning and intrusion events. Lastly, the third data set--AndroidMalware \cite{Zhou2012}--is a collection of about 1,200 malware samples observed on Android devices. \subsection{Feature Extraction} To evaluate our methods, we build anomaly detection models on these data sets. Note that the models can be supervised or unsupervised as long as they produce an anomaly score. 
Furthermore, while we ensure these models have reasonable accuracy, building the best possible anomaly detection models for these data sets is not the focus of this work. We extract the following features from the data sets. \noindent \textbf{CERT.} Similar to a previous study \cite{Tuor2017}, we extract count features conditioned on time of day, where a day is uniformly discretized into four intervals. In our experiments, we use one day as the smallest time window and each example is the composite record of day-user. We examine three different Internet activities: \textit{``WWW visit"}, \textit{``WWW upload"} and \textit{``WWW download"}. Hence, in this setting, the total number of features is $ 3\times4= 12$, and so is the input dimensionality of the autoencoder model (one of the baselines). \noindent \textbf{UNB Netflow.} We extracted 108 features that can be categorized into three sets: Count, Bitmap, and Top-K. The Count features count the number of bytes/packets for incoming and outgoing traffic; the Bitmap features include type of services, TCP flags, and protocols; the Top-K features encode the IP addresses with traffic flows ranked in the top $k$ over all the addresses. \noindent \textbf{AndroidMalware.} 122 binary features are extracted, mainly related to frequent permission requests from apps. \subsection{Evaluation Metrics} We consider the contributions to be a distribution over features. To quantitatively evaluate contributions produced by a method, we use its Kullback-Leibler (KL) divergence with respect to ground truth contributions. The KL divergence measures how one probability distribution diverges from another. Given the distribution of modeled contributions, $ Q, $ and the ground truth contributions distribution of the data point, $ P, $ the KL divergence is formulated as: \par\nobreak{\small \begin{equation} KL(P||Q)= \sum_{i=1}^{M}P(f_i)\log \dfrac{P(f_i)}{Q(f_i)}, \end{equation} } where $ f_i $ is the $ i $th feature. 
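A minimal sketch of this metric in Python (the small $\epsilon$-smoothing that guards against division by zero is our own assumption, not part of the definition above):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(P||Q) between two feature-contribution distributions.

    p: ground-truth contributions, q: modeled contributions; both are
    non-negative vectors over the M features that sum to one.  The eps
    term is an assumed smoothing to keep the logarithm finite.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

The divergence is zero exactly when the modeled contributions match the ground truth, and grows as they drift apart.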
The lower the KL divergence, the closer the modeled contribution is to the real contribution for that data point. Note that this KL divergence metric is different from the regularizer term in ACE-KL, which forces the formulated distribution away from a uniform distribution. \subsection{Baseline Methods} We use LIME \cite{Ribeiro2016a} as our main baseline. We consider it representative of similar methods since it is recent and well cited. LIME only works for classification problems; however, most anomaly detection problems require an anomaly score to express the confidence of detection. We therefore extend LIME to support regression problems. This extension is straightforward: in classification problems, LIME maps each feature onto a class by inspecting the estimated weight of each feature to decide its importance to that particular class. For anomaly detection, we treat the regression problem as a one-class classification problem: we transform LIME from multi-class classification to a one-class setting, and examine the importance of each feature to the anomaly score. \subsection{Evaluation on CERT} To evaluate ACE and ACE-KL on CERT, we first train an autoencoder as our black-box model, although in principle it could be any model. Its anomaly score ($As$) on a data point is computed as the \textit{mean squared error} (MSE) between the input and the output vector. In addition to applying ACE and ACE-KL, we compute feature contributions from the autoencoder model using the reconstruction error of each of the inputs, similar to \cite{Tuor2017}. Thus, the autoencoder model serves as an additional baseline. While the CERT data set has some anomalies, we also artificially inject some by perturbing the input features. The data set contains two years of activities. We use the first year of the data set for injected-anomaly detection, as it has no anomalies marked. We also detect anomalies present in the second year. 
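The one-class extension of LIME described above can be sketched as a locally weighted linear surrogate of the anomaly score. This is our own minimal reconstruction of the idea, not the LIME library's API; the sampling scale and kernel width are assumed hyper-parameters:

```python
import numpy as np

def local_surrogate_contributions(score_fn, x, n_samples=500,
                                  sigma=0.5, kernel_width=1.0, seed=0):
    """Explain a scalar anomaly score via a locally fitted linear model.

    Sketch of the one-class LIME-style extension: perturb around x,
    query the black-box score, fit a proximity-weighted linear model,
    and read per-feature contributions off its absolute weights.
    """
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=sigma, size=(n_samples, x.size))
    y = np.array([score_fn(z) for z in X])
    dist = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(dist / kernel_width) ** 2)       # proximity kernel
    A = np.hstack([X, np.ones((n_samples, 1))])   # features + intercept
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    imp = np.abs(coef[:-1])                       # drop the intercept
    return imp / imp.sum()
```

For a score that depends strongly on one feature, the surrogate assigns that feature the dominant share of the contribution.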
\subsubsection{Evaluation on Injected Anomalies} We perturb individual features and groups of features. However, due to space limitations, we only present perturbation of groups of five features. The rest of the results on injected anomalies are described in the appendix. \paragraph{Multiple Feature Perturbation} The synthetic anomalies are created as follows. We first calculate the mean value of each feature from the non-preprocessed raw data, and draw from a Poisson distribution with that mean using $ P(k) = e^{-\lambda}\frac{\lambda^{k}}{k!}. $ This sampling approach ensures that, first, all the synthesized features are integers and, second, the sampled value is around the mean of the raw data. After we sample from this distribution, we perturb the result by adding the value $ \lambda$ to the feature $x$: $ x' = x + \lambda $. This new value's expectation is $ \mathbb{E}[x']=2\lambda, $ which exceeds the mean value of $ \lambda $ by a large margin, so this perturbation represents an anomaly relative to the original data. We randomly chose five features to perturb. Each feature of a data point is standardized to $\mathcal{N}(0, 1)$ using the mean and the standard deviation of that feature in the training set, and fed into the trained black-box model to obtain an anomaly score. The results are shown in Figure~\ref{fig:fivevpert}. ACE accurately identified the contributions in both anomalies, and performed significantly better than both baselines according to the KL-divergence metric. ACE-KL, while not as accurate as ACE, highlights the top contributors. 
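The anomaly-injection procedure above can be sketched as follows (the array shapes and the fixed seed are our assumptions):

```python
import numpy as np

def inject_anomalies(raw, anomaly_features, seed=0):
    """Synthesize one anomalous example from raw count data.

    Sketch of the perturbation described in the text: draw each feature
    from a Poisson distribution with the per-feature mean lambda of the
    raw data, then add lambda to the chosen anomalous features
    (x' = x + lambda, so E[x'] = 2 * lambda for those features).

    raw: (n_examples, n_features) non-negative count matrix.
    anomaly_features: indices of the features to perturb.
    """
    rng = np.random.default_rng(seed)
    lam = raw.mean(axis=0)                        # per-feature mean
    x = rng.poisson(lam).astype(float)            # integer draw near mean
    x[anomaly_features] += lam[anomaly_features]  # shift by lambda
    return x
```

The resulting example is then standardized with the training-set statistics before being scored by the black-box model.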
\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figs/cert/1-3-10-8-7.pdf} \caption{Features 1, 3, 10, 8, 7 were perturbed.} \label{fig:eg0} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figs/cert/6-0-9-7-3.pdf} \caption{Features 6, 0, 9, 7, 3 were perturbed.} \label{fig:eg1} \end{subfigure} \caption{\textbf{(left)} Feature contributions on two synthetic examples, with perturbation on five randomly chosen features. Contribution is the percentage of a feature towards the anomaly score. \textbf{(right)} KL-divergence of each method with respect to the ground truth.} \label{fig:fivevpert} \end{figure} \subsubsection{Evaluation on Real Anomalies} The CERT data set contains labeled scenarios where insiders behave maliciously. Figure~\ref{fig:realinsider} shows contribution analysis on the days that have the malicious activities. In Figure \ref{fig:realinsider}(a) and \ref{fig:realinsider}(c), feature $7$ captures the malicious activities, while in Figure \ref{fig:realinsider}(b) feature $8$ is the ground-truth anomalous feature. The experimental results and the corresponding KL-divergence are shown in Figure~\ref{fig:realinsider}. ACE and ACE-KL accurately capture the feature responsible for the anomalies. Both ACE and ACE-KL have significantly lower KL divergence, outperforming the baselines. 
\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figs/cert/CMP2946-398.pdf} \label{fig:ceg0} \caption{\footnotesize WWW download anomaly, feature 7, day 398.} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figs/cert/CMP2946-404.pdf} \label{fig:ceg1} \caption{\footnotesize WWW upload anomaly, feature 8, day 404.} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figs/cert/CMP2946-409.pdf} \label{fig:ceg2} \caption{\footnotesize WWW download anomaly, feature 7, day 409.} \end{subfigure} \caption{Three real anomalies in the CERT data set. \textbf{(left)} Feature contributions using ACE, ACE-KL and two baselines. \textbf{(right)} KL-divergence between feature contributions computed by the methods and the ground truth contributions. ACE and ACE-KL have the most similar contributions to the ground truth (which is always 1.0). } \label{fig:realinsider} \end{figure} \subsection{Evaluation on UNB Netflow Data Set} This section presents the evaluation of ACE and ACE-KL on UNB Netflow with settings similar to those used for CERT. A separately trained autoencoder is used as the black-box anomaly detection model. Due to space limitations, we present results of applying ACE and ACE-KL to only two anomalies here. Table~\ref{tab:x} provides a short description of the top 10 features that are useful to interpret the results. Figure \ref{fig:netcontribution} shows the feature contributions for the anomalies, and Table~\ref{tab:table1} provides details on the feature values and their contributions. Since the annotation is at the packet level, it is not easy for a person to manually determine the root cause of the anomaly. 
\begin{table*}[t] \small \begin{subtable}{1.0\textwidth} \centering \begin{tabular}{ |p{0.3\textwidth}|p{0.5\textwidth}| } \hline \# std src ports & Number of standard source ports\\ avg std src ports per dst ip & Average number of standard source ports per destination IP\\ protos out 3 & Third bit in the Protocol feature (3 bit feature indicating TCP, UDP, or Other)\\ top1 out & Top 1st outgoing IP address (in terms of bytes)\\ top3 out & Top 3rd outgoing IP address (in terms of bytes)\\ \hline \end{tabular} \caption{\small Features for outgoing flows (when IP is source)} \label{tab:xa} \end{subtable}% \vfill \begin{subtable}{1.0\textwidth} \centering \begin{tabular}{ |p{0.3\textwidth}|p{0.5\textwidth}| } \hline max duration in & Maximum incoming flow duration\\ \# std dst ports & Number of standard destination ports\\ avg std dst ports per src ip & Average number of standard destination ports per source IP\\ Flags in 3 & Third bit in the flags field\\ total duration in & Total duration of the incoming flows\\ \hline \end{tabular} \caption{\small Features for incoming flows (when IP is destination)} \label{tab:xb} \end{subtable}% \caption{Short descriptions of top features in the results.} \label{tab:x} \end{table*} For anomaly 1, the highest contributing feature is `\textbf{max\_duration\_in}', which is the maximum duration of an incoming flow into this IP address (192.168.1.103). After examining the netflow records, we found that the high value for this feature was related to long-lived (i.e., persistent) TCP connections. Although benign, this was an unusual activity relative to other recorded traffic. The other high values correspond to the number of standard source and destination ports. This was found to be related to a port-scanning activity, which was not previously discovered, i.e., was not labeled. Anomaly 2 is almost identical to Anomaly 1, with a similar port-scanning activity. 
\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{figs/netflow/1-newmag.pdf} \caption{Anomaly 1} \label{fig:net1} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{figs/netflow/2-newmag.pdf} \caption{Anomaly 2} \label{fig:net2} \end{subfigure} \caption{Contribution analysis on two anomalies in netflow data.} \label{fig:netcontribution} \end{figure} \begin{table*}[t] \small \begin{subfigure}[b]{0.5\textwidth} \centering \begin{tabular}{ |l|p{0.3\textwidth}|c|c|c| } \hline Index & Feature Name & ACE-KL & ACE & value\\ \hline 43 & max duration in & \textbf{0.207} & \textbf{0.317} & \textbf{239.961}\\ 0 & \# std src ports & \textbf{0.195} & 0.030 & \textbf{158}\\ 30 & \# std dst ports & \textbf{0.174} & 0.071 & \textbf{156}\\ 13 & max duration out & 0.100 & 0.011 & 240.085\\ 3 & avg std src ports per dst ip & 0.064 & \textbf{0.177} & \textbf{1}\\ 26 & min n bytes out & 0.062 & 0.034 & 20\\ 33 & avg std dst ports per src ip & 0.052 & \textbf{0.250} & \textbf{1}\\ 70 & protos out 3 & 0.046 & 0.021 & 1\\ 56 & min n bytes in & 0.041 & 0.035 & 20\\ 98 & top1out & 0.025 & 0.005 & \tiny{192.168.1.101}\\ 12 & total duration out & 0.018 & 0.008 & 62154.867\\ 42 & total duration in & 0.018 & 0.041 & 44405.557\\ \hline \end{tabular} \caption{\footnotesize Anomaly1: 192.168.1.103, Sunday} \label{tab:table1_b} \end{subfigure} \hspace{0.5em} \begin{subfigure}[b]{0.5\textwidth} \centering \begin{tabular}{ |l|p{0.3\textwidth}|c|c|c| } \hline Index & Feature Name & ACE-KL & ACE & value\\ \hline 0 & \# std src ports & \textbf{0.275} & 0.087 & \textbf{158}\\ 30 & \# std dst ports & \textbf{0.246} & \textbf{0.175} & \textbf{156}\\ 92 & flags in 3 & \textbf{0.104} & 0.0496 & \textbf{0}\\ 3 & avg std src ports per dst ip & 0.090 & 0.097 & 0\\ 33 & avg std dst ports per src ip & 0.074 & \textbf{0.156} & \textbf{0}\\ 70 & protos out 3 & 0.065 & \textbf{0.130} & \textbf{1}\\ 101 & top4 out & 0.030 & 0.024 
& \tiny{67.220.214.50}\\ 105 & top3 in & 0.030 & 0.029 & \tiny{61.112.44.178}\\ 104 & top2 in & 0.023 & 0.109 & \tiny{125.6.176.113}\\ 107 & top5 in & 0.023 & 0.059 & \tiny{192.168.5.122}\\ 13 & max duration out & 0.020 & 0.005 & 280.53\\ 102 & top5 out & 0.020 & 0.078 & \tiny{203.73.24.75}\\ \hline \end{tabular} \caption{\footnotesize Anomaly2: 192.168.2.110, Sunday} \label{tab:table1_c} \end{subfigure} \caption{Contributions and feature values for top two anomalies in netflow data. The contributions in bold are the top ones.} \label{tab:table1} \end{table*} Identifying anomalies from netflow records is a time-consuming and laborious (and thus error-prone) task. Since our method is able to systematically provide a basic explanation (in terms of features) of why some of the anomalies were identified as such, the internal security expert whom we consulted was convinced that our method is trustworthy and practical. As noted earlier, several of the IP addresses exhibited multiple distinct anomalous behaviors, as well as benign characteristics such as the persistent TCP connections for certain applications. As future work, the expert recommended investigating how to systematically discern between multiple anomalies involving a single IP address, to make it easier for a security analyst to understand which are malicious and require their attention, and which are benign and can be ignored. This would enable an analyst to respond faster to malicious activities, and therefore improve the security of the analyst's organization. \subsection{Evaluation on Android Malware Data Set} Finally, we evaluate ACE and ACE-KL on the Android malware data set~\cite{Zhou2012}. This data set captures various features related to app activities, including their installation methods and activation mechanisms, as well as their susceptibility to carry malicious payloads. 
In this data set, each example is a numeric, binary vector of 122 dimensions, representing features for malware detection. Peng et al. \cite{Peng2012} successfully built probabilistic generative models for ranking the risks of these Android malware samples in a semi-supervised learning setting by using a large amount of additional unlabeled data. The risk scoring procedure is a form of anomaly detection, and the risk scores equate to anomaly scores. Thus, in this evaluation, we used the pre-built hierarchical mixture of naive Bayes (HMNB) model \cite{Peng2012} as the black-box model to generate an anomaly score, and applied our approach to explain the anomaly. As the HMNB model calculates the likelihood of a malware sample in the population, we use the negative log-likelihood as the anomaly score. We inspected the four malware samples that obtained the highest anomaly scores from the pre-trained HMNB model. Before we analyzed the anomalies using ACE and ACE-KL, all the $0$s in the features were replaced by $-1$s, since a feature value of $0$ would result in a constant contribution for that feature. The contributions of each feature are calculated using ACE and ACE-KL. The final results are presented in Figure~\ref{fig:malwarecontribution}. The feature indices are sorted by the contributions calculated by ACE, and we only show the top 10 features. In all four cases, ACE and ACE-KL produce consistent contributions, although their results differ from LIME. 
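The preprocessing and scoring described above amount to the following sketch, where \texttt{log\_likelihood\_fn} is a hypothetical stand-in for the pre-trained HMNB model:

```python
import numpy as np

def to_signed(x):
    """Map binary features {0, 1} -> {-1, +1}, as described above."""
    return 2.0 * np.asarray(x, dtype=float) - 1.0

def anomaly_score(log_likelihood_fn, x):
    """Negative log-likelihood of a sample used as its anomaly score.

    log_likelihood_fn is an assumed stand-in for the black-box
    generative model, not an actual HMNB implementation.
    """
    return -log_likelihood_fn(x)
```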
\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{figs/android/mw1.pdf} \caption{Anomaly 1} \label{fig:mw1} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{figs/android/mw2.pdf} \caption{Anomaly 2} \label{fig:mw2} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{figs/android/mw3.pdf} \caption{Anomaly 3} \label{fig:mw3} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{figs/android/mw4.pdf} \caption{Anomaly 4} \label{fig:mw4} \end{subfigure} \caption{Contribution analysis on four anomalies in Android malware data. We only show the top 10 features that contribute most significantly to the anomaly score in terms of percentage.} \label{fig:malwarecontribution} \end{figure} To gain a better understanding of the difference between ACE, ACE-KL and LIME, we show the probability mass graph of all the features as the contributions for Malware 1 in Figure \ref{fig:androiddist}. As stated, both ACE and ACE-KL identified the same features that contribute most to the anomaly. Further, the contribution distribution induced by ACE-KL forms a more skewed distribution, highlighting those features that contribute most to the anomaly while neglecting those with small contributions. In contrast, the contribution distribution calculated by LIME is relatively flat compared to ACE and ACE-KL. \begin{figure}[!htb] \centering \includegraphics[width=0.5\textwidth]{figs/android/dist.pdf} \caption{Probability mass function for each feature in Malware 1. This forms the whole contribution distribution to the anomaly score for this Malware.} \label{fig:androiddist} \end{figure} \textbf{Anomaly Remediation:} Although the Android Malware data set is labeled with anomalies, the contributing features to these anomalies are unknown, making it difficult to validate our results. 
To get some degree of validation, we conducted additional experiments which we call ``anomaly remediation''. Essentially, we change input feature values (flip binary features) to repair a particular anomaly, i.e., to see if the anomaly score decreases significantly for a particular example. In these experiments, we first flip the top 10 binary contributing features detected by ACE (or ACE-KL; in all four cases the top 10 features are identical for ACE and ACE-KL) for the four anomalies, and the top 10 features selected by LIME. We also randomly sample 10 features among all the 122 features, and flip them. Our conjecture is as follows: if the true features causing the Android app to be classified as malware correspond to those detected by ACE, then fixing the anomaly (by flipping the features) should result in a much higher drop in the anomaly score than if the 10 features were randomly picked. The results of our experiments are summarized in Figure \ref{fig:androidamendment}. \begin{figure}[!htb] \centering \includegraphics[width=0.5\textwidth]{figs/android/amendment.pdf} \caption{Comparison of the original anomaly scores, the scores after anomaly remediation for ACE/ACE-KL and LIME, and the scores after random feature selection. Remediation with ACE/ACE-KL greatly reduces the anomaly score once the features contributing most to it are identified, while remediation with LIME and with randomly chosen features increases the anomaly score (by flipping features that did not contribute significantly to the original anomaly).} \label{fig:androidamendment} \end{figure} As can be seen in Figure \ref{fig:androidamendment}, by flipping the top 10 features detected by ACE/ACE-KL, the anomaly scores generated by the well-trained black-box model drop significantly for all four malware samples. If we randomly pick the 10 features, the anomaly scores increase for all four malware samples. 
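The remediation check itself is simple to sketch; here \texttt{score\_fn} is a hypothetical stand-in for the black-box scorer and \texttt{x} a feature vector over $\{-1,+1\}$:

```python
import numpy as np

def remediate(score_fn, x, feature_idx):
    """Flip the selected binary features (+1 <-> -1) and return the
    resulting change in anomaly score (negative = score dropped).

    Sketch of the "anomaly remediation" experiment described above.
    """
    repaired = x.copy()
    repaired[feature_idx] *= -1      # flip the chosen binary features
    return score_fn(repaired) - score_fn(x)
```

A large negative return value for the top-ranked features, versus an increase for random ones, is the behavior the conjecture above predicts.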
This is expected since only a small number of features are likely to cause a particular anomaly, and random sampling is more likely to select non-contributing features. Surprisingly, remediation of the top 10 features selected by LIME results in a larger increase in the score than random selection, which further shows that LIME is not suitable for this problem. We suspect this is because LIME only considers the weight vector of the regression framework, neglecting whether the feature value is 1 or -1. \section{Introduction} Cyber-security is a key concern for both private and public organizations, given the high cost of security compromises and attacks; malicious cyber-activity cost the U.S. economy between \$57 billion and \$109 billion in 2016 \cite{whitehousereport}. As a result, spending on security research and development, and on security products and services to detect and combat cyber-attacks, has been increasing \cite{forbesreport}. Organizations produce large amounts of network, host and application data that can be used to gain insights into cyber-security threats, misconfigurations, and network operations. While security domain experts can manually sift through some amount of data to spot attacks and understand them, it is virtually impossible to do so at scale, considering that even a medium-sized enterprise can produce terabytes of data in a few hours. Thus there is a need to automate the process of detecting security threats and attacks, which can more generally be referred to as security anomalies. Major approaches to detecting such anomalies fall into two broad categories: human expert driven (mostly rules-based) and machine learning based (mostly unsupervised) \cite{Veeramachaneni2016}. The first approach involves codifying domain expertise into rules, for example: the number of login attempts exceeds a threshold, more than a threshold number of bytes are transferred during the night, and so on.
While rules formulated by security experts are useful, they are ineffective against new (zero-day) and evolving attacks; furthermore, they are brittle and difficult to maintain. On the other hand, enabled by the vast amounts of data collected in modern enterprises, machine learning based approaches have become the preferred choice for detecting security anomalies. Machine learning models for detecting security anomalies typically output a severity or anomaly score; this score allows ranking and prioritization of the anomalies. A security analyst can then further investigate these anomalies to understand their root causes, determine whether they are true positives, and decide whether any remedial action is required. However, anomaly detectors typically do not provide any assistance in this process. In fact, any pointers to the features, or groups of features, responsible for a high anomaly score would let an analyst prioritize which causes to investigate first and thus save time and effort; this would help even though the information may not directly reveal the root cause of an anomaly. For example, based on the contributions an anomaly detector assigns to features related to external traffic volume, number of active ports, number of external hosts, etc., analysts would decide the course of their investigation into the underlying causes of that particular anomaly. However, most anomaly detection models are black boxes that output an anomaly score without any associated explanation or reasoning. In fact, there is a tension between building complex models that can make accurate predictions and explaining these predictions in a human-interpretable way. For example, explaining the predictions of simpler models, such as linear regression, logistic regression or decision trees, is considerably easier compared to complex models such as random forests or deep neural networks, which build complex non-linear relationships between the input and the predicted output.
As a result, when models that can explain their output are needed, as is often the case, for example, in medical diagnosis (a doctor needs to provide a detailed explanation of the diagnosis to the patient \cite{Caruana2015}) or in credit card applications (an explanation of why a particular application is or is not approved is usually required \cite{Shi2012}), simpler models are preferred. However, interpretability comes at a cost, since in most instances complex models tend to have higher accuracy. Therefore, there is an unavoidable trade-off between model interpretability and model accuracy. Recently, deep learning models have been successfully used for cyber-security applications \cite{Tuor2017, yousefi2017autoencoder, berman2019survey, cui2018detection}. In fact, part of the focus of a recently organized workshop is the application of deep learning to security \cite{dls}. In this paper, we focus on explaining the outputs of complex models in the cyber-security anomaly detection domain, where the outputs are usually anomaly scores. We propose ACE -- \textit{A}nomaly \textit{C}ontribution \textit{E}xplainer -- to bridge the gap between the predictions provided by an anomaly detection model and the interpretation required to support human intervention in realistic applications. Specifically, ACE provides explanations, in terms of feature contributions, by building a specialized linear model to locally approximate the anomaly score that a black-box anomaly detection model generates. These explanations help a security analyst quickly diagnose a reported anomaly. Our source code is publicly available\footnote{Source code available at \url{https://github.com/cosmozhang/ACE-KL}}. Our key contributions are: \begin{itemize} \item We design and implement two methods, ACE and ACE-KL, for explaining the scores of individual anomalies detected by black-box models in terms of feature contributions.
\item We validate our methods on three data sets: 1) a synthetically generated insider threat data set; 2) a real-world netflow data set; and 3) a real-world Android malware data set. In all of these cases, the results are encouraging and improve upon a recent method \cite{Ribeiro2016a} used as a baseline. \end{itemize} The high-level overview of our approach is shown in Figure~\ref{fig:overview}; we focus on the meta-model approximation used to explain the score of a particular anomaly. \begin{figure}[!htb] \centering \includegraphics[width=0.48\textwidth]{figs/overview.png} \caption{Overview of the anomaly detection and interpretation workflow for the ACE or ACE-KL meta-model.} \label{fig:overview} \end{figure} \section{Methods} \subsection{Problem Statement} Formally, the model explanation problem in the context of anomaly detection can be stated as follows: Given 1) a black-box anomaly detection model $f$, an arbitrary function with an input $\bm{x}$ having $M$ features $x^1,\dots,x^M$, which outputs an anomaly score $As$, that is, $ f: \bm{x} \rightarrow As, $ where $ As \in [0, \infty) $ is a scalar; and 2) a data point $\bm{x}'$ that produces score $As'$, the goal is to estimate $c^1,\dots, c^M$, the normalized contributions of each feature of $\bm{x}'$. Note that $c^i$ may be zero if the $i$th feature does not contribute to the anomaly. \subsection{Assumptions and Observations} We assume the output of the anomaly detector is an anomaly score in the range $[0, \infty)$, with 0 indicating no anomaly and the score monotonically increasing with the severity of the anomaly. Such a score is widely used in anomaly detectors. Note that if an anomaly detector outputs an anomaly probability $P_A(X)$, it is easy to convert it to such a score via the transformation $As = -\log(1- P_A(X))$. A careful study of existing techniques such as LIME \cite{Ribeiro2016a} reveals their unsuitability for anomaly explanation.
These methods can only explain the importance of a feature locally, but not the whole contribution of that feature. For example, consider a linear regression scenario, i.e., $ \sum\limits_{i=1}^{M}x^i\cdot w^i = As$, where $ x^i $ is the $i$th feature of the vector (here we also encode the bias term $b$ as $w^M$, and the corresponding $x^M$ is always $ 1 $). Assume a given feature $ x^a $ is of ``small importance'', as indicated by its small corresponding weight $ w^a $ (say, a value close to $ 0.01 $). However, if $ x^a $ is extremely large in a new example, for instance $ 1000 $, and $ As=50 $, the product $ x^a\cdot w^a $ still makes a large contribution to the predicted value $ As $. This observation has practical implications in anomaly detection problems, especially security-related ones. For example, when a feature tends to appear in some range of values during training, a trained black-box model will weigh it accordingly. After the well-trained model is deployed, a new attack prototype can evolve focusing on specific attributes which were neglected at training time but now take high values. Even if the anomaly is detected by the well-trained black-box model because it results in a high output score, the underlying reason might escape the security analysts' attention. \subsection{Anomaly Contribution Explainer (ACE)} ACE explains a specific anomaly by computing its feature contribution vector, obtained through a local linear approximation of the anomaly score. Under this simple approximation, the real contribution that the $i$th feature $ x^i $ makes to $ As $ is naturally $ x^i\cdot w^i $. However, it is possible that some $ x^i\cdot w^i $ are negative. These terms correspond to features that negatively impact the anomaly score, and thus cannot be its cause. We want to discard these terms and focus on the features positively contributing to an anomaly.
Therefore, we use the ``softplus'' function \cite{Dugas2000}, a ``smoothed'' \textit{relu} function, to model the contribution of $ x^i\cdot w^i $ towards the entire anomaly. The intuition behind this choice is simple: the function pushes negative components towards $0$ while keeping positive components approximately linear in their original value; further, its convexity simplifies the computation. We define $As$ as the anomaly score calculated by the black-box model. Further, to normalize all of the contributions towards the anomaly score, denoting the normalized contribution of feature $ i $ as $ c^i $, we formally define the normalized contribution (``contribution'' hereafter) of each feature as \par\nobreak{\small \begin{equation} c^{i}=\dfrac{\log(1+e^{ x^i\cdot w^i})}{\sum\limits_{j=1}^{M}\log(1+e^{x^j\cdot w^j})}. \label{cal_contribution} \end{equation} } To approximate a particular anomaly score $As$ generated by a black-box model at a point $\bm{x}$ of interest, we form the loss function of a modified linear regression, sampling the neighborhood of $\bm{x}$ to obtain $N$ neighbors and their corresponding $N$ anomaly scores: \begin{equation} loss = \dfrac{1}{N}\sum\limits_{j=1}^{N} \pi_{\bm{x}}(\bm{x}_{j})\cdot(\bm{w}^{\intercal}\bm{x}_{j}-As_{j})^{2} + \alpha ||\bm{w}||^{2}_{2}\notag, \end{equation} where $ As_{j} $ is the anomaly score generated by the black-box model for the $ j $th neighbor, $ \alpha $ (set to $1$ in this study) is the hyperparameter that controls the $ L_2 $-norm regularizer, and $ \pi_{\bm{x}}(\bm{x}_{j}) $ is the weight calculated by a distance kernel for the $ j $th neighbor. The parameters are estimated by minimizing the loss function over the neighborhood of the original example formed through sampling.
Since this neighborhood lies close to the point at which the tangent plane touches the score surface, we use it to approximate the tangent plane, which is the linear regression. We choose the normal distribution $ \mathcal{N}(\bm{x}, 0.01\mathbf{I}) $, where $ \mathbf{I} $ is an identity matrix, as the neighborhood region for continuous features, to ensure the samples are close enough to the examined point; and a $\operatorname{Bern} \left(0.1\right)$ distribution to flip the value of binary features $\bm{x} \in \{-1, 1\}^{M}$ for the same reason. A distance kernel $ \pi_{\bm{x}} $ is used to weight the neighbors: $ \pi_{\bm{x}}(\bm{x}_{j}) = \exp(-D(\bm{x}, \bm{x}_{j})^{2}/\sigma^{2}) $, where $ D(\bm{x}, \bm{x}_{j}) $ is the distance between the original point $ \bm{x} $ and the neighbor $ \bm{x}_{j} $, for which our study uses the Euclidean distance, and $ \sigma $ is a pre-defined kernel width; here we use $ 0.75\times \sqrt{M} $. Thus, the larger the distance, the smaller the weight of that neighbor in parameter estimation, and vice versa. The overview of this approach is shown in Algorithm~\ref{ace}.
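For the continuous-feature case, this procedure can be sketched in a few lines of Python. This is a minimal sketch under our own naming: \texttt{score} stands for an arbitrary black-box anomaly detector, and the weighted ridge regression is solved in closed form rather than with whatever optimizer an actual implementation might use.

```python
import numpy as np

def softplus(z):
    return np.log1p(np.exp(z))

def ace_explain(score, x, n_neighbors=500, alpha=1.0, rng=None):
    """Sketch of ACE for continuous features: fit a weighted ridge
    regression around x, then normalize softplus(x_i * w_i)."""
    rng = np.random.default_rng(rng)
    M = x.shape[0]
    sigma = 0.75 * np.sqrt(M)                     # kernel width from the paper
    # Sample neighbors from N(x, 0.01 I), i.e. std 0.1 per feature.
    X = rng.normal(loc=x, scale=0.1, size=(n_neighbors, M))
    y = np.array([score(z) for z in X])           # black-box scores
    # Distance-kernel weights pi(x_j) = exp(-||x - x_j||^2 / sigma^2).
    pi = np.exp(-((X - x) ** 2).sum(axis=1) / sigma ** 2)
    # Weighted ridge regression with a bias column, solved in closed form.
    Xb = np.hstack([X, np.ones((n_neighbors, 1))])
    A = Xb.T @ (pi[:, None] * Xb) + alpha * np.eye(M + 1)
    w = np.linalg.solve(A, Xb.T @ (pi * y))
    # Contribution of feature i: softplus(x_i * w_i), normalized (Eq. 1).
    terms = softplus(x * w[:M])
    return terms / terms.sum()

# Toy black-box score dominated by feature 0.
f = lambda z: 5.0 * z[0] + 0.1 * z[1]
c = ace_explain(f, np.array([10.0, 1.0, 0.2]), rng=0)
# Feature 0 receives almost all of the normalized contribution.
```

A full implementation would also handle binary features via the Bernoulli flips described above.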
\begin{algorithm}[!htb] \caption{Anomaly Contribution Explainer (ACE)} \label{ace} \begin{algorithmic}[1] \Require{black-box model \textit{f}, Number of neighbors \textit{N}} \Require{The sample \textit{x} to be examined} \Require{Distance kernel $ \pi_{\bm{x}} $, Number of features \textit{K}} \Comment{$ \pi_{\bm{x}} $ measures the distance between a sample and $ \bm{x} $, which is used as the inverse weight} \State{$ \mathcal{Z} \leftarrow \{\} $} \For{$ j \in \{1, 2, 3, \dots, N\} $} \If{$\bm{x} \in \mathbb{R}^{M}$} \State{$ \bm{x}_{j} \leftarrow \textit{sample\_from}\, \mathcal{N}(\bm{x}, 0.01\mathbf{I}) $} \Comment{$ \mathcal{N} $ is a normal distribution} \ElsIf{$\bm{x} \in \{-1,1\}^{M}$} \State{$\bm{x}_{j} \leftarrow flip(\bm{x})\_with\,\operatorname{Bern} \left(0.1\right)$} \Comment{$ \operatorname{Bern} $ is a Bernoulli distribution} \EndIf \State{$As_{j} \leftarrow \textit{f}(\bm{x}_{j})$} \State{$ \mathcal{Z} \leftarrow \mathcal{Z}\cup \langle \bm{x}_{j}, \textit{f}(\bm{x}_{j}), \pi_{\bm{x}}(\bm{x}_{j}) \rangle $} \EndFor \State{$ \bm{w} \leftarrow \min\limits_{\bm{w}}\,loss(\mathcal{Z}) $} \State{Compute and sort $ w^{i}\cdot x^{i} $ for each $ i $} \Comment{$ i $ is the index for the $ i $th feature} \State{Pick the top $ K $ from the sorted results and calculate their contributions (Eq. \ref{cal_contribution})} \end{algorithmic} \end{algorithm} \vspace{0em} \subsection{Anomaly Contribution Explainer with KL Regularizer (ACE-KL)} The ACE-KL model extends the ACE model by adding a regularizer that maximizes the KL divergence between a uniform distribution and the calculated distribution of contributions over all the inspected features. With this regularizer, our anomaly contribution explainer assigns contributions to the inspected features in a more distinguishable way, emphasizing the contributions of the dominant features and reducing those of the less dominant ones.
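To see why rewarding divergence from uniform sharpens an explanation, consider a toy comparison (the numbers here are illustrative and of our own choosing, not from our experiments):

```python
import numpy as np

def kl_from_uniform(q):
    """KL(P || Q) where P is the uniform distribution over q's support."""
    q = np.asarray(q, dtype=float)
    p = np.full_like(q, 1.0 / q.size)
    return float(np.sum(p * np.log(p / q)))

flat  = np.array([0.26, 0.25, 0.25, 0.24])  # near-uniform contributions
peaky = np.array([0.85, 0.05, 0.05, 0.05])  # one dominant feature

# The peaky distribution lies farther from uniform, so a loss term that
# rewards large KL(P || Q) pushes the fit toward peaky explanations.
assert kl_from_uniform(peaky) > kl_from_uniform(flat) >= 0.0
```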
The KL divergence between a uniform distribution and a particular distribution takes the following form: \par\nobreak{\small \begin{equation} KL(P||Q)= \sum_{i=1}^{M}P(i)\log \dfrac{P(i)}{Q(i)},\quad P(i) \sim Uniform, \end{equation} } where $ P(i) $ is the uniform distribution and $ Q(i) $ is the calculated distribution. Hence, the loss function is formalized as follows: \begin{align*} loss = &\dfrac{1}{N}\sum\limits_{j=1}^{N} \pi_{\bm{x}}(\bm{x}_{j})\cdot(\bm{w}^{\intercal}\bm{x}_{j}-As_{j})^{2}\\ &+ \alpha ||\bm{w}||^{2}_{2} - \beta KL(P||Q),\nonumber \end{align*} where $ \beta $ (set to $50$ in this study) is the hyperparameter that controls the KL regularizer. This formulation forces the calculated distribution to be peaky; therefore, in terms of contributions, the features that contribute most are explained more clearly than the others. Intuitively, this characteristic yields a better visualization for security analysts in real applications. A further merit of the ACE-KL model is that the new loss function is still convex. We sketch the proof using the \textit{Scalar Composition Theorem} \cite{Boyd2004}: \begin{corollary} The loss function of the ACE-KL model is a convex function, w.r.t. its model parameters. \end{corollary} \begin{proof} The loss function of ACE-KL consists of two parts: a regular ridge regression and an additional regularizer. It is trivial to show that a ridge regression is a convex function w.r.t.
its parameters.\\ Now we show that the additional regularizer is also a convex function: \par\nobreak{\small \begin{align*} KL(P||Q) & = \sum\limits_{i=1}^{M}P(i)\log \dfrac{P(i)}{Q(i)}\\ & = \sum\limits_{i=1}^{M}\left(P(i)\log P(i) - P(i)\log Q(i)\right)\\ & = \sum\limits_{i=1}^{M} \left(C - c\log Q(i)\right) \\ &\textit{($ P(i) \sim Uniform $, so $ P(i) $ is a constant $c$, and $C = c\log c$)}\nonumber\\ & = \sum\limits_{i=1}^{M} \left(C - c\log \dfrac{\log(1+e^{w^{i}\cdot x^{i}})}{\sum\limits_{j=1}^{M}\log(1+e^{w^{j}\cdot x^{j}})}\right)\\ & = \sum\limits_{i=1}^{M} \left(C - c\log\log(1+e^{w^{i}\cdot x^{i}})+c\log\sum\limits_{j=1}^{M}\log(1+e^{w^{j}\cdot x^{j}})\right), \end{align*} } where $\log(1+e^{w^{i}\cdot x^{i}})$ is convex. By the \textit{Scalar Composition Theorem}, the $KL$ regularizer is convex. A linear combination of the convex ridge regression part and the $KL$ regularizer then retains the property of convexity. \end{proof} \section{Related Work} While model interpretability and explanation have a long history \cite{biranSurvey2017,doshi2017,Lipton2018}, the recent success and rise in popularity of complex machine learning models (such as deep neural networks) has led to a surge of interest in model interpretability, as these complex models offer no explanation of their output predictions. Given the extensive literature on this subject, we only discuss the work most related to ours; Guidotti et al.~\cite{GuidottiSurvey2018} provide a comprehensive survey on explainability. Methods for generating explanations from complex models fall into two main categories: 1) model-specific methods~\cite{suermondt1992,feraud2002methodology,robnik2011,landecker2013,martens2008}, which exploit a model's internal structure and as a result only work for that specific model type; and 2) model-agnostic \cite{robnik2008,kononenko2013,baehrens2010} or black-box methods, which do not depend on the underlying model type. Our work belongs to the second category.
Several model-agnostic methods investigate the sensitivity of the output with respect to the inputs to explain the output. An early attempt called \textit{ExplainD} used additive models to weight the importance of features with a graphical explanation \cite{Poulin2006}. Strumbelj et al.~\cite{Strumbelj2010} exploited notions from coalitional game theory to explain the contributions of individual feature values. \textit{LIME}~\cite{Ribeiro2016a}, designed for classification problems, was built to explain a new data point after a black-box model is trained. \textit{MFI} \cite{Vidovic2016} is a non-linear method able to detect features impacting the prediction through their interaction with other features. More recently, \textit{LORE} \cite{Guidotti2018} used a synthetic neighborhood generated through a genetic algorithm to explain feature importance, and \textit{SHAP} \cite{Lundberg2017} assigns each feature an importance value for a particular prediction. Our proposed methods belong to this category, and are closest to LIME \cite{Ribeiro2016a}. Anomaly detection is a widely studied and important topic in data mining. However, explanation of the detected anomalies has received relatively little attention from researchers. For instance, one of the most widely cited surveys on anomaly detection \cite{chandola2009} makes no reference to explainability. Research on anomaly detection and explainability includes: work on anomaly localization \cite{Hara2015,Jiang2011}, which refers to the task of identifying faulty sensors from a population of sensors; feature selection or importance \cite{Hara2017}; estimating model sensitivity \cite{Woodall2003}; and method-specific techniques \cite{Hirose2009,Tsuyoshi2009}.
Despite their advantages, these methods are either tailored too closely to specific anomaly detection methods, or only consider the sensitivity of inputs rather than their entire contribution, and are therefore not suitable for the security domain, where anomalies and the methods to detect them evolve rapidly.
\section{Introduction} \label{sec:intro} \Qn There is a discussion about the Church-Turing thesis at the Computer Science Theory StackExchange \cite{StackExchange}. It involves your paper \cite{G188} with Nachum Dershowitz where you prove the thesis. Peter Shor is skeptical about it: \begin{quote} ``The Dershowitz-Gurevich paper says nothing about probabilistic or quantum computation. It does write down a set of axioms about computation, and prove the Church-Turing thesis assuming those axioms. However, we're left with justifying these axioms. Neither probabilistic nor quantum computation is covered by these axioms (they admit this for probabilistic computation, and do not mention quantum computation at all), so it's quite clear to me these axioms are actually false in the real world, even though the Church-Turing thesis is probably true.'' \end{quote} What do you say? \Ar The Church-Turing thesis asserts that if a string function is effectively computable then it is computable by a Turing machine. Here a string function is a partial function from strings in a finite alphabet to strings in a finite alphabet. In the 1930s, when Church and Turing worked on their versions of the thesis, there was a robust notion of algorithm. These traditional algorithms are known also as classical or sequential algorithms. It is this notion of algorithm which is axiomatized in \cite{G141}. In the original thesis, the effective computability of a string function means that it is computable by an effective classical algorithm. It is that original thesis which is proven in the Dershowitz-Gurevich paper \cite{G188}. Since the 1930s, many new species of algorithms have been introduced, and the notion of algorithm continues to evolve \cite{G209}. Apparently Peter Shor thinks that we pretend to prove the unconstrained version of the thesis, for the algorithms of all species, and that the unconstrained thesis is true. 
\Qn But surely the validity of the thesis is not restricted to the classical algorithms. \Ar I believe that the thesis can be proven for a number of well-understood species of algorithms, in particular for algorithms in the quantum circuit model. But the unconstrained version of the thesis cannot possibly be true. \Qn Please explain. \section{The original Church-Turing Thesis}\label{sec:orig} \Ar Let me quickly revisit the original thesis; details and relevant references are found in \cite{G188}. I will address the unconstrained version later. \subsection{Classical algorithms}\label{sub:seq} The 1930s notion of algorithm was robust. People recognized algorithms when they saw them. These algorithms compute in steps, one step after another, and the steps are of bounded complexity \cite{Kolmogorov}. Various names are used today for those algorithms: traditional, classical, sequential. \Qn None of the three names seems perfect to me. Tradition changes with time. ``Classical'' may mean merely not quantum. ``Sequential'' seems consistent with unbounded complexity of steps. \Ar This is true. To distinguish between bounded and unbounded complexity of steps, we spoke about small-step and wide-step algorithms in \cite{G164}. But even that neglects the distinction between classical algorithms and algorithms interacting with their environment, as well as the distinction between classical and learning algorithms. \begin{terms} \emph{Classical algorithms} are algorithms in the sense of the 1930s (rather than merely non-quantum algorithms). \end{terms} Classical algorithms were analyzed and axiomatized in \cite{G141}. The analysis and axiomatization were refined in \cite{G201}, mostly because the original analysis abstracted from details of intra-step computation. \subsection{The thesis}\label{sub:thesis} There are many equivalent formulations of the Church-Turing thesis. The Dershowitz-Gurevich article \cite{G188} was published in a logic journal.
There, respecting logic tradition, we formulated the thesis in terms of partial recursive functions. Here, respecting computer science tradition, we formulate the thesis in terms of Turing machines. \begin{terms}\label{pro:terms}\mbox{}\rm \begin{itemize} \item A \emph{string function} is a partial function from strings in a finite alphabet to strings in a finite alphabet. \item A string function is \emph{Turing computable} if it is computable by some Turing machine. \end{itemize} \end{terms} Now we can formulate the Church-Turing thesis succinctly. Here is a generic version of the thesis which leaves open what is meant by effective computability. \begin{thesis}[Generic Church-Turing thesis]\label{the:gen} If a string function is effectively computable then it is Turing computable. \end{thesis} \noindent The appropriate version of the original/classical thesis is this: \begin{thesis}[Classical Church-Turing thesis]\label{the:cls} If a string function is computed by an effective classical algorithm then it is Turing computable. \end{thesis} \Qn What does it mean exactly that an algorithm computes a string function? \Ar Without loss of generality, we can define this in a way convenient for our purposes. An algorithm $A$ computes a string function $f$ if \begin{itemize} \item[-] inputs of $A$ are strings $x$ in the input alphabet of $f$, \item[-] if $f$ is defined at $x$ then the computation of $A$ on $x$ eventually converges and outputs $f(x)$, \item[-] if $f$ is not defined at $x$ then the computation of $A$ on $x$ produces an error message or diverges, i.e., goes on forever. \end{itemize} Notice that this definition abstracts from limited resources. In the real world, a computation of an algorithm $A$ on input $x$ may break because we ran out of time or money or something else. \Qn What does it mean that an algorithm $A$ is effective?
\Ar An algorithm $A$ is effective if, given sufficient resources, the computation of $A$ on any input $x$ can be carried out in the real world. \Qn Show me some noneffective algorithms. \subsection{Noneffective classical algorithms}\label{sub:ineff} \Ar One example is Euclid's algorithm for lengths. You know Euclid's algorithm for natural numbers; given two natural numbers, the algorithm computes their greatest common divisor. Euclid used a similar algorithm for lengths; today we can think of lengths as nonnegative real numbers. Given two lengths, the algorithm finds their greatest common divisor if the two lengths are commensurable and diverges otherwise. Another example is the Gauss elimination algorithm for real numbers. \Qn In both cases, reals can be approximated by rationals as closely as desired, and the computation on rationals can be carried out effectively. \Ar This is true, though the approximating algorithm will be much more involved, and there are some subtleties. For example, two reals may or may not be commensurable, while any two rationals are commensurable. Besides, noneffective classical algorithms may be more abstract. For example, Gauss elimination works over every field. \Qn Neither of the two noneffective algorithms computes a string function. \Ar Oracle algorithms, which compute string functions, may be and often are noneffective. In particular, a Turing machine with an appropriate oracle solves the halting problem for oracle-free Turing machines. \Qn Using oracles looks like cheating. \Ar But it may be useful. Turing used oracle machines already in 1939 \cite{Turing39}. \subsection{Proving the original thesis}\label{sub:proof} \Qn Your proof of the thesis appeared only in 2008 \cite{G188}. How come the thesis wasn't proven earlier? \Ar One reason for that could be that it is easier to axiomatize all classical algorithms rather than only effective ones.
The proof of the thesis builds on the axiomatization of classical algorithms in \cite{G141}. \Qn But people could think of all classical algorithms earlier on. \Ar It was natural to restrict attention to effective algorithms. Turing, for example, ignores noneffective algorithms completely in his thesis paper \cite{Turing}. With time, software grew more involved, and software specifications started to use oracles and even work with genuine reals. \Qn Did you axiomatize algorithms with an eye on proving the Church-Turing thesis? \Ar No, not at all. I introduced abstract state machines (originally called evolving algebras) and posited a thesis that every algorithm is an abstract state machine \cite{G103}. The purpose of the axiomatization in \cite{G141} was to prove the new thesis for classical algorithms. Later, Nachum Dershowitz and I extended that axiomatization with an axiom saying essentially that there is no funny stuff in the initial state of the algorithm. This allowed us to derive the Church-Turing thesis \cite{G188}. \section{Unconstrained Church-Turing thesis}\label{sec:gen} \Ar Let's formulate the unconstrained thesis more explicitly. \medskip \begin{thesis}[Unconstrained Church-Turing thesis]\label{the:un} If a string function is computed by any effective algorithm whatsoever then it is Turing computable. \end{thesis} \noindent Now I am ready to posit my antithesis. \medskip \begin{anti*} The unconstrained thesis cannot possibly be true. \end{anti*} \Qn How do you justify the Antithesis? \Ar Let me give you three arguments. \subsection{A moving target}\label{sub:evol} \smallskip My first argument is related to the evolution of the notion of algorithm. The notion of algorithm keeps evolving and getting more liberal \cite{G209}. This makes it a moving target. In that sense it is analogous to the notion of number.
We have already many species of numbers, e.g., \begin{itemize} \item integers, rationals and reals, \item complex numbers and algebraic numbers, \item quaternions, octonions, sedenions, \item ordinal numbers and cardinal numbers, \item non-standard numbers, introduced by Abraham Robinson, and surreal numbers, introduced by John Conway. \end{itemize} And surely new species of numbers will be introduced. One should be careful about claiming that a property is common to all species of numbers. \newpage Similarly, we have already many species of algorithms, e.g., \begin{itemize} \item sequential and parallel algorithms, \item nondeterministic algorithms, \item real-time and analog algorithms, \item randomized and probabilistic algorithms, \item distributed algorithms, \item quantum algorithms, \item biology-inspired algorithms, \item learning algorithms. \end{itemize} And surely new species of algorithms will be introduced. One should be careful about claiming that a property is common to all species of algorithms. \Qn I cannot think of any intrinsic property of all numbers. Some but not all numbers are quantities, some but not all numbers represent orderings. Yet, as far as I know, addition and multiplication are defined for all species of numbers. This property seems to survive the introduction of new species of numbers; it may be common to all species of numbers, present and future. By analogy, there should be properties common to all species of algorithms, present and future. It is possible a priori that the validity of the Church-Turing thesis is such a property. \Ar I will argue that this is not the case. \subsection{Engineering}\label{sub:eng} Classical algorithms are mathematical objects. Large real-world algorithms of today are engineering systems. Typically they perform tasks and provide services, but sometimes they compute string functions as well.
My second argument in favor of the Antithesis is that, in the case of large real-world algorithms, the Church-Turing thesis is sort of trivially true and therefore uninteresting. Consider for example a popular industrial compiler for some common programming language, e.g.\ C++, which has been written by many people. Typically such a compiler runs on numerous platforms, but for simplicity let's fix a platform. The compiler computes a string function: In comes a source code, and out goes an object code or an error message. \Qn But are compilers algorithms? \Ar Semantically, any software product is an algorithm, in my opinion. But notice that the generic Thesis~\ref{the:gen} does not use the term algorithm. It is about effective computability. We could reformulate the unconstrained Thesis~\ref{the:un} by replacing ``algorithm'' with a term that sounds more inclusive, e.g.\ ``computing system.'' \Qn A question arises whether the Church-Turing thesis holds for real-world algorithms --- or computing systems --- like compilers. \Ar Any real-world compiler accepts only finitely many source programs. It doesn't accept source programs which are too long or too involved. The function computed by the compiler is finite and therefore recursive. \Qn This is disappointing. The thesis is true but uninteresting. Can we abstract from limited resources in this case? \Ar Any popular industrial compiler is updated from time to time. Some bugs are fixed, and the new version may accept some source programs which had not been accepted earlier. Assume that the compiler will be updated forever and that there are infinitely many source programs $P$ such that some version of the compiler accepts $P$. \Qn For our purposes, there is an ambiguity problem with such a continuously developing compiler. It does not compute a single-valued string function. Different versions may treat the same source program differently. 
\Ar Furnish every application of (any version of) the compiler with a unique identity. Formally, the identity is a part of the compiler input, and this way we solve the ambiguity problem. But the compilation process does not use the identity. \Qn The resulting string function does not seem to be Turing computable, which challenges the Church-Turing thesis. But people may disagree that a continuously developing compiler is an algorithm or even a computing system. \Ar This brings me to my third argument. \subsection{Changing attitude\label{sub:attitude}}\mbox{} Let me start with another example and then formulate my third argument in favor of the Antithesis. Consider Google Translate \cite{GoogleTranslate} and fix some source language, say English, and some target language, say Russian. An English text (a query) is translated into Russian. I presume that every application of Google Translate is furnished with a unique identity. Such an application can be seen as a pair $(X,Y)$ where $X$ is a so-called unique query, i.e.\ an English text with the unique identity, and $Y$ is the resulting translation to Russian. All such pairs $(X,Y)$ form a function which I will call GT. The abstraction of unlimited resources renders GT infinite. \Qn I do not like it when you apply the abstraction of unlimited resources to real-world systems. Companies come and go, and so do their tools. But at least in this case the abstraction looks more natural than in the compiler case. Even though Google Translate is continuously learning and thus continuously changing, or maybe because of this, it is more naturally perceived as one entity than as a sequence of compiler versions. \Ar Do you think that GT is Turing computable? \Qn Surely not. Let's suppose that a Turing machine $T$ computes GT. Then $T$ ``knows'' how English and Russian will develop, in particular what English slang will emerge and how it will be translated to Russian with its own new slang. This is absurd.
\Ar Would you consider GT effectively computable? \Qn Hmm, GT is certainly computable in practice. As a frequent user of Google Translate, I know that it works. Furthermore, it works fast, almost in no time. The translation may be poor but this is beside the point. If effective algorithms are algorithms that work in practice, then Google Translate is an effective algorithm. I am somewhat bothered that Google Translate is so different from the algorithms of my college days. It is being trained on huge data. Its program keeps changing. What do you think? \Ar My opinion is that practically computable functions like GT are effectively computable. My third argument in favor of the Antithesis is that this opinion will become more and more common. There is an informative analogy between the following two questions. \begin{itemize} \item Are practically computable functions effectively computable? \item Can machines think? \end{itemize} Here is an instructive quote from Turing \cite[\S6]{Turing50}: \begin{quote} ``The original question, `Can machines think?' I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.'' \end{quote} \noindent Life is a better opinion-changer than arguments. \subsection{Finale\label{sec:final}}\mbox{} \Qn Let me review your arguments to get a better overall picture. Your first argument in favor of the Antithesis is that the notion of algorithm is a moving target and therefore one should be cautious with universal claims about all algorithms. Your second argument is that, in the case of large real-world programs, the Church-Turing thesis is at best uninteresting. This erodes the thesis somewhat but does not demolish it. It is the third argument that is most damaging to the thesis.
String functions like the translation function GT are practically computable but not Turing computable. So why don't you claim that the unconstrained thesis is plainly false? \Ar My opponents may argue that Google Translate is not really an algorithm because, in addition to a given text in the source language, huge data has been used to train Google Translate, or because Google Translate keeps changing its program. In my view, the continuing progress will render these counterarguments less and less convincing. \Qn Do you expect that it will be recognized eventually that the unconstrained Church-Turing thesis is false? \Ar This outcome is possible. Notice, however, that the thesis requires the unlimited-resources abstraction. In the case of Google Translate, this abstraction requires that Google Translate works forever. In the real world, the unlimited-resources abstraction is absurd and, I expect, will be viewed as such. The unconstrained thesis itself will be considered meaningless. In either case, whether the unconstrained thesis is considered false or meaningless, it is not true, and so the Antithesis holds. \subsection*{Acknowledgments}\mbox{} Many thanks to Andreas Blass for most useful discussions throughout my work on this dialog. I am grateful also to my colleagues who took time to comment on the penultimate version of the dialog: Cris Calude, Anuj Dawar, Pierre Lescanne, Leonid Levin, Naphtali Rishe, Alexander Shen, Volodya Vovk.
\section{\textbf{Introduction and Problem Statement}} Let $\mathbb{S}$ be a subset of the natural numbers $\mathbb{N}$ and let $k\in \mathbb{N}$ be fixed. Then $\mathbb{S}$ is said to be an \textbf{additive base} of order $k$ if every natural number can be expressed as a sum of $k$ elements of $\mathbb{S}$. The weak Goldbach conjecture suggests that the set of prime numbers is an additive base of order \textbf{three} \cite{helfgott2013ternary}. The Erd\H{o}s-Tur\'{a}n additive bases conjecture asserts, roughly speaking, that the representation function of any additive base must be unbounded. In particular, we have the following conjecture of Erd\H{o}s and Tur\'{a}n (see \cite{erdos1941problem}) \begin{conjecture}[Erd\H{o}s-Tur\'{a}n]\label{J-erdos_turan} Let $\mathbb{B}\subset \mathbb{N}$ and consider \begin{align} r_{\mathbb{B}}(n):=\# \left \{(a,b)\in \mathbb{B}^2|~ a+b=n\right \}.\nonumber \end{align} If $r_{\mathbb{B}}(n)>0$ for all sufficiently large values of $n$, then \begin{align} \limsup \limits_{n\longrightarrow \infty}~r_{\mathbb{B}}(n)=\infty.\nonumber \end{align} \end{conjecture} This conjecture has garnered the attention of many authors but remains unresolved \cite{tao2006additive}.
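To make the object of the conjecture concrete, the representation function $r_{\mathbb{B}}(n)$ above can be evaluated directly for small instances. The following Python sketch is our own illustration (the function names are our choices, not taken from the paper); it counts ordered representations for the base of primes:

```python
# Sketch (ours): the representation function r_B(n) from the
# Erdos-Turan conjecture, r_B(n) = #{(a, b) in B x B : a + b = n},
# evaluated for B = primes below a small bound.

def r(B, n):
    """Number of ordered pairs (a, b) in B x B with a + b = n."""
    S = set(B)
    return sum(1 for a in B if (n - a) in S)

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, limit + 1, p):
                sieve[q] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

if __name__ == "__main__":
    B = primes_up_to(100)
    # r_B(10): the ordered pairs are (3, 7), (7, 3), (5, 5) -> 3
    print(r(B, 10))
```

For even $n$ this is exactly the Goldbach-type count, so the conjecture's hypothesis $r_{\mathbb{B}}(n)>0$ for large $n$ corresponds to $\mathbb{B}$ being an asymptotic additive base of order two.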
By introducing the language of \textbf{Circles of Partition} and associated statistics, we reformulate the conjecture in the following manner \begin{conjecture}[Erd\H{o}s-Tur\'{a}n] Let $\mathbb{B}\subset \mathbb{N}$ and consider \begin{align} \mathcal{G}_{\mathbb{B}}(n)=\nu(n,\mathbb{B})=\# \left \{ \mathbb{L}_{[x],[y]} ~ \hat{\in}~ \mathcal{C}(n,\mathbb{B})\right \}.\nonumber \end{align} If $\mathcal{G}_{\mathbb{B}}(n)>0$ for all sufficiently large values of $n$, then \begin{align} \limsup \limits_{n\longrightarrow \infty}\mathcal{G}_{\mathbb{B}}(n)=\infty.\nonumber \end{align} \end{conjecture} By exploiting the notion of the density of circles of partition, the notion of ascending, descending and stationary circles of partition, and the $l^{th}$ fold energy of circles of partition, we study the Erd\H{o}s-Tur\'{a}n additive bases conjecture. \section{\textbf{The Circle of Partition}} In this section we introduce the notion of the circle of partition. We study this notion in depth and explore some potential applications in the sequel. \begin{definition}\label{major} Let $n\in \mathbb{N}$ and $\mathbb{M}\subset \mathbb{N}$. We denote with \begin{align} \mathcal{C}(n,\mathbb{M})=\left \{[x]\mid x,y\in \mathbb{M},n=x+y\right \}\nonumber \end{align} the Circle of Partition generated by $n$ with respect to the subset $\mathbb{M}$. We will abbreviate this in the further text as CoP. We call the members of $\mathcal{C}(n,\mathbb{M})$ points and denote them by $[x]$. For the special case $\mathbb{M}=\mathbb{N}$ we denote the CoP shortly as $\mathcal{C}(n)$. \end{definition} \begin{definition}\label{axis} We denote the line $\mathbb{L}_{[x],[y]}$ joining the points $[x]$ and $[y]$ as an axis of the CoP $\mathcal{C}(n,\mathbb{M})$ if and only if $x+y=n$. We say the axis point $[y]$ is an axis partner of the axis point $[x]$ and vice versa. We do not distinguish between $\mathbb{L}_{[x],[y]}$ and $\mathbb{L}_{[y],[x]}$, since it is essentially the same axis.
The point $[x]\in \mathcal{C}(n,\mathbb{M})$ such that $2x=n$ is the \textbf{center} of the CoP. If it exists, then it is the only point of the CoP which is not an axis point. The line joining any two arbitrary points which are not axis partners on the CoP will be referred to as a \textbf{chord} of the CoP. The length of the chord joining the points $[x],[y]\in \mathcal{C}(n,\mathbb{M})$, denoted as $\mathcal{D}([x],[y])$, is given by \begin{align} \mathcal{D}([x],[y])=|x-y|.\nonumber \end{align} \end{definition} \bigskip It is important to point out that the \textbf{midpoint} of the weights of the two points on each axis coincides with the center of the underlying CoP if it exists. That is to say, given all the axes of the CoP $\mathcal{C}(n,\mathbb{M})$ as \begin{align} \mathbb{L}_{[u_1],[v_1]},\mathbb{L}_{[u_2],[v_2]},\ldots, \mathbb{L}_{[u_k],[v_k]}\nonumber \end{align} then the following relations hold \begin{align} \frac{u_1+v_1}{2}=\frac{u_2+v_2}{2}=\cdots=\frac{u_k+v_k}{2}=\frac{n}{2}\nonumber \end{align} which is equivalent to the conditions, for any pair of axes $\mathbb{L}_{[u_i],[v_i]},\mathbb{L}_{[u_j],[v_j]}$ with $1\leq i,j\leq k$, \begin{align} \mathcal{D}([u_i],[u_j])=\mathcal{D}([v_i],[v_j])\nonumber \end{align} and \begin{align} \mathcal{D}([v_j],[u_i])=\mathcal{D}([u_j],[v_i]).\nonumber \end{align} \bigskip The above language can in many ways be seen as a criterion determining the feasibility of carrying out a partition in a specified set. Indeed this feasibility is trivial if we take the set $\mathbb{M}$ to be the set of natural numbers $\mathbb{N}$. The situation becomes harder if we take the set $\mathbb{M}$ to be a special subset of the natural numbers $\mathbb{N}$, as the corresponding CoP $\mathcal{C}(n,\mathbb{M})$ may not be non-empty for all $n\in \mathbb{N}$. One archetype of problems of this flavour is the binary Goldbach conjecture, obtained when we take the base set $\mathbb{M}$ to be the set of all prime numbers $\mathbb{P}$.
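The definitions above lend themselves to direct computation. Below is a small Python sketch (our own illustration; all identifiers are our choices) that builds the weight set of a CoP, enumerates its axes as unordered pairs, and locates the center when it exists:

```python
# Sketch (ours) of the CoP C(n, M), its axes and its center,
# following the definitions in the text.

def cop(n, M):
    """Weight set of the CoP C(n, M): weights x with x, n - x in M."""
    S = set(M)
    return {x for x in S if (n - x) in S and 0 < x < n}

def axes(n, M):
    """Unordered axis pairs {x, y} with x + y = n; the center is excluded."""
    pts = cop(n, M)
    return {frozenset((x, n - x)) for x in pts if 2 * x != n}

def center(n, M):
    """The weight of the center [n/2] if it exists, else None (deleted center)."""
    return n // 2 if n % 2 == 0 and (n // 2) in set(M) else None

if __name__ == "__main__":
    n = 10
    naturals = range(1, n)           # M = N, restricted below n
    print(sorted(cop(n, naturals)))  # all weights 1..n-1
    print(len(axes(n, naturals)))    # floor((n-1)/2) = 4 axes
    print(center(n, naturals))       # 5
```

Taking $\mathbb{M}=\mathbb{P}$ recovers the Goldbach-type situation: `axes(10, [2, 3, 5, 7])` yields only the axis $\{3,7\}$, with $[5]$ as center.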
One could imagine the same sort of difficulty if we extend our base set to other special subsets of the natural numbers. \begin{remark} It is important to notice that a typical CoP need not have a center. If the center is absent, then we say the CoP has a deleted center. It is easy to see that the CoP $\mathcal{C}(n)$ contains all points whose weights are positive integers from $1$ to $n-1$ inclusive: $$\mathcal{C}(n)=\left \{[x]\mid~x\in \mathbb{N},x<n\right \}.$$ Therefore the CoP $\mathcal{C}(n)$ has $\left \lfloor \frac{n-1}{2}\right \rfloor$ different axes. \end{remark} \bigskip In the sequel we will denote the assignment of an axis $\mathbb{L}_{[x],[y]}$ to a CoP $\mathcal{C}(n,\mathbb{M})$ as \begin{align*} \mathbb{L}_{[x],[y]}&~ \hat{\in}~ \mathcal{C}(n,\mathbb{M}) \mbox{ which means}\\ [x], [y] &\in \mathcal{C}(n,\mathbb{M}) \quad \textbf{and} \quad x+y=n \end{align*} and the number of axes of a CoP as \begin{align} \nu(n,\mathbb{M}):=\#\lbrace\mathbb{L}_{[x],[y]}~ \hat{\in}~ \mathcal{C}(n,\mathbb{M})\rbrace.\nonumber \end{align} Additionally we let \begin{align} \mathbb{N}_n=\left \{m\in \mathbb{N}\mid ~m\leq n\right \}\nonumber \end{align} be the \textbf{sequence} of the first $n$ natural numbers. Further we will denote \begin{align} \Vert[x]\Vert:=x\nonumber \end{align} as the \textbf{weight} of the point $[x]$ and correspondingly the weight set of points in the CoP $\mathcal{C}(n,\mathbb{M})$ as $||\mathcal{C}(n,\mathbb{M})||$. \bigskip \begin{proposition}\label{unique} Each axis is uniquely determined by any of its points $[x]\in \mathcal{C}(n,\mathbb{M})$. \end{proposition} \begin{proof} Let $\mathbb{L}_{[x],[y]}$ be an axis of the CoP $\mathcal{C}(n,\mathbb{M})$. Suppose as well that $\mathbb{L}_{[x],[z]}$ is also an axis with $z\neq y$. Then it follows by Definition \ref{axis} that we must have $n=x+y=x+z$ and therefore $y=z$. This cannot be, and the claim follows immediately.
\end{proof} \begin{corollary}\label{partner} Each point of a CoP $\mathcal{C}(n,\mathbb{M})$ has exactly one axis partner. \end{corollary} \begin{proof} Let $[x]\in \mathcal{C}(n,\mathbb{M})$ be a point without an axis partner. Then for every point $[y]\neq [x]$ we have \[ \Vert[x]\Vert+\Vert[y]\Vert\neq n, \] which contradicts Definition \ref{major}. Due to Proposition \ref{unique}, the case of more than one axis partner is impossible. This completes the proof. \end{proof} \section{\textbf{The Density of Points on the Circle of Partition}} In this section we introduce the notion of the density of points on a CoP $\mathcal{C}(n,\mathbb{M})$ for $\mathbb{M}\subseteq \mathbb{N}$. We introduce the following language in that regard and exploit this notion in a careful manner to study the Erd\H{o}s-Tur\'{a}n additive bases conjecture. \begin{definition} Let $\mathbb{H}\subset\mathbb{N}$. Then the quantity \[ \mathcal{D}\left(\mathbb{H}\right)=\lim_{n\rightarrow\infty} \frac{\vert\mathbb{H}\cap \mathbb{N}_n\vert}{n} \] denotes the density of $\mathbb{H}$. \end{definition} \begin{definition} Let $\mathcal{C}(n,\mathbb{M})$ be a CoP with $\mathbb{M}\subset \mathbb{N}$ and $n\in \mathbb{N}$. Suppose $\mathbb{H}\subset \mathbb{M}$; then by the density of points $[x]\in \mathcal{C}(n,\mathbb{M})$ such that $x\in \mathbb{H}$, denoted $\mathcal{D}(\mathbb{H}_{\mathcal{C}(\infty,\mathbb{M})})$, we mean the quantity \begin{align} \mathcal{D}\left(\mathbb{H}_{\mathcal{C}(\infty,\mathbb{M})}\right)=\lim \limits_{n\longrightarrow \infty}\frac{\# \lbrace\mathbb{L}_{[x],[y]} ~ \hat{\in}~ \mathcal{C}(n,\mathbb{M})|~ \{x,y\} \cap \mathbb{H}\neq \emptyset \rbrace}{ \nu(n,\mathbb{M})}.\nonumber \end{align} \end{definition} \bigskip \begin{proposition}\label{propertydensity} Let $\mathbb{H}\subset \mathbb{M}$ with $\mathbb{M}\subseteq \mathbb{N}$ and suppose $\mathcal{D}(\mathbb{H}_{\mathcal{C}(\infty,\mathbb{M})})$ exists.
Then the following properties hold: \begin{enumerate} \item [(i)] $\mathcal{D}(\mathbb{M}_{\mathcal{C}(\infty,\mathbb{M})})=1$ and $\mathcal{D}(\mathbb{H}_{\mathcal{C}(\infty,\mathbb{M})})\leq 1$. \item [(ii)] $1-\lim \limits_{n\longrightarrow \infty}\dfrac{\nu(n,\mathbb{M}\setminus\mathbb{H})}{\nu(n,\mathbb{M})}=\mathcal{D}(\mathbb{H}_{\mathcal{C}(\infty,\mathbb{M})})$. \item [(iii)] If $|\mathbb{H}|<\infty$ then $\mathcal{D}(\mathbb{H}_{\mathcal{C}(\infty,\mathbb{M})})=0$. \end{enumerate} \end{proposition} \begin{proof} \textbf{Property} $(i)$ and \textbf{Property} $(iii)$ are immediate consequences of the definition of the density of points on the CoP $\mathcal{C}(n,\mathbb{M})$. We establish \textbf{Property} $(ii)$, which is the less obvious case. We observe by the uniqueness of the axes of CoPs that we can write \begin{align*} 1&=\lim \limits_{n\longrightarrow \infty}\frac{\nu(n,\mathbb{M})}{\nu(n,\mathbb{M})}\\ &=\lim \limits_{n\longrightarrow \infty}\frac{\# \lbrace\mathbb{L}_{[x],[y]} ~ \hat{\in}~ \mathcal{C}(n,\mathbb{M})|~ x\in \mathbb{H}~,y\in \mathbb{M}\setminus \mathbb{H}\rbrace}{\nu(n,\mathbb{M})}\\ &+\lim \limits_{n\longrightarrow \infty}\frac{\nu(n,\mathbb{H})}{\nu(n,\mathbb{M})} +\lim \limits_{n\longrightarrow \infty}\frac{\nu(n,\mathbb{M}\setminus\mathbb{H})}{\nu(n,\mathbb{M})}\\ &=\mathcal{D}(\mathbb{H}_{\mathcal{C}(\infty,\mathbb{M})}) +\lim \limits_{n\longrightarrow \infty}\frac{\nu(n,\mathbb{M}\setminus\mathbb{H})}{\nu(n,\mathbb{M})} \end{align*} and $(ii)$ follows immediately. \end{proof} \begin{proposition}\label{inequality} Let $\mathcal{C}(n)$ with $n\in \mathbb{N}$ be a CoP and $\mathbb{H}\subset \mathbb{N}$.
Then the following inequality holds \begin{align} \mathcal{D}(\mathbb{H})=\lim \limits_{n\longrightarrow \infty}\frac{\left \lfloor \frac{|\mathbb{H}\cap \mathbb{N}_n|}{2}\right \rfloor}{\left \lfloor \frac{n-1}{2}\right \rfloor}\leq \mathcal{D}(\mathbb{H}_{\mathcal{C}(\infty)})\leq \lim \limits_{n\longrightarrow \infty}\frac{|\mathbb{H}\cap \mathbb{N}_n|}{\left \lfloor \frac{n-1}{2}\right \rfloor}=2\mathcal{D}(\mathbb{H}).\nonumber \end{align} \end{proposition} \begin{proof} The upper bound is obtained from a configuration where no two points $[x],[y]\in \mathcal{C}(n)$ such that $x,y\in \mathbb{H}$ lie on the same axis of the CoP. That is, by the uniqueness of the axes of CoPs with $\nu(n,\mathbb{H})=0$, we can write \begin{align} \# \left \{\mathbb{L}_{[x],[y]}\in \mathcal{C}(n)|~\{x,y\}\cap \mathbb{H}\neq \emptyset \right \}&=\nu(n,\mathbb{H})+\# \left \{\mathbb{L}_{[x],[y]}\in \mathcal{C}(n)|~x\in \mathbb{H},~y\in \mathbb{N}\setminus \mathbb{H}\right \} \nonumber \\&=\# \left \{\mathbb{L}_{[x],[y]}\in \mathcal{C}(n)|~x\in \mathbb{H},~y\in \mathbb{N}\setminus \mathbb{H}\right \} \nonumber \\&=|\mathbb{H}\cap \mathbb{N}_n|.\nonumber \end{align} The lower bound, however, follows from a configuration where any two points $[x],[y]\in \mathcal{C}(n)$ with $x,y\in \mathbb{H}$ are joined by an axis of the CoP. That is, by the uniqueness of the axes of CoPs with $\# \left \{\mathbb{L}_{[x],[y]}\in \mathcal{C}(n)|~x\in \mathbb{H},~y\in \mathbb{N}\setminus \mathbb{H}\right \} =0$, we can write \begin{align} \# \left \{\mathbb{L}_{[x],[y]}\in \mathcal{C}(n)|~\{x,y\}\cap \mathbb{H}\neq \emptyset \right \}&=\nu(n,\mathbb{H})\nonumber \\&=\left \lfloor \frac{|\mathbb{H}\cap \mathbb{N}_n|}{2}\right \rfloor. \nonumber \end{align} \end{proof} \begin{remark} Though we are nowhere near the proof of this conjecture, we prove a weaker version by imposing some suitable conditions. The result is encapsulated in the following theorem.
\end{remark} \begin{theorem} Let $\mathbb{B}\subset \mathbb{N}$ with \begin{align} \lim \limits_{n\longrightarrow \infty}\frac{|\mathbb{B}\cap \mathbb{N}_n|}{n}>0\nonumber \end{align} such that \begin{align} \# \left \{\mathbb{L}_{[x],[y]}~ \hat{\in}~ \mathcal{C}(n)|~x\in \mathbb{N}\setminus \mathbb{B},~y\in \mathbb{B}\right\}\leq \nu(n,\mathbb{B}).\nonumber \end{align} If $\mathcal{G}_{\mathbb{B}}(n)=\nu(n,\mathbb{B})>0$ for all sufficiently large values of $n$, then \begin{align} \limsup \limits_{n\longrightarrow \infty} \mathcal{G}_{\mathbb{B}}(n)=\infty.\nonumber \end{align} \end{theorem} \begin{proof} Suppose $\mathbb{B}\subset \mathbb{N}$ and let $\mathcal{G}_{\mathbb{B}}(n)>0$ for all sufficiently large values of $n$. Suppose to the contrary that \begin{align} \limsup \limits_{n\longrightarrow \infty} \mathcal{G}_{\mathbb{B}}(n)<\infty.\nonumber \end{align} Consider the CoP $\mathcal{C}(n,\mathbb{B})$; then we note that, by the uniqueness of axes of CoPs, we can compute the density of points $[x]\in \mathcal{C}(n)$ with $||[x]||\in \mathbb{B}$ in the following way \begin{align*} \mathcal{D}(\mathbb{B}_{\mathcal{C}(\infty)}) &=\lim \limits_{n\longrightarrow \infty}\frac{\# \lbrace\mathbb{L}_{[x],[y]}~ \hat{\in}~ \mathcal{C}(n)|\lbrace x,y\rbrace\cap \mathbb{B}\neq \emptyset\rbrace}{\left \lfloor \frac{n-1}{2}\right \rfloor}\\ &=\lim \limits_{n\longrightarrow \infty}\frac{\# \lbrace\mathbb{L}_{[x],[y]}~ \hat{\in}~ \mathcal{C}(n)|~x\in \mathbb{N}\setminus \mathbb{B},~y\in \mathbb{B}\rbrace}{\left \lfloor \frac{n-1}{2}\right \rfloor} +\lim \limits_{n\longrightarrow \infty}\frac{\nu(n,\mathbb{B})}{\left \lfloor \frac{n-1}{2}\right \rfloor}\\ &\leq 2\lim \limits_{n\longrightarrow \infty}\frac{\nu(n,\mathbb{B})}{\left \lfloor \frac{n-1}{2}\right \rfloor}\\ &=0 \end{align*} by virtue of the earlier assumption.
By applying Proposition \ref{inequality}, it follows that \begin{align} \lim \limits_{n\longrightarrow \infty}\frac{\left \lfloor \frac{|\mathbb{B}\cap \mathbb{N}_n|}{2}\right \rfloor}{\left \lfloor \frac{n-1}{2}\right \rfloor}=0.\nonumber \end{align} It follows that $\mathcal{D}(\mathbb{B})=0$, thereby contradicting the requirement of the statement. \end{proof} \section{\textbf{Ascending, Descending and Stationary Circles of Partition}} In this section we introduce the notion of \textbf{ascending}, \textbf{descending} and \textbf{stationary} CoPs between generators. We formalize this notion in the following language and exploit it to improve on the result concerning the Erd\H{o}s-Tur\'{a}n additive bases conjecture obtained in the previous section. \begin{definition} Let $\mathbb{M}\subset \mathbb{N}$ and let $\mathcal{C}(n,\mathbb{M})$ be a CoP. Then we say the CoP $\mathcal{C}(n,\mathbb{M})$ is \textbf{ascending} from $n$ to the \textbf{spot} $m$ if for $n<m$ we have \begin{align*} \nu(n,\mathbb{M})<\nu(m,\mathbb{M}). \end{align*} Similarly, we say it is \textbf{descending} from $n$ to the \textbf{spot} $m$ if for $n<m$ we have \begin{align*} \nu(n,\mathbb{M})>\nu(m,\mathbb{M}). \end{align*} We say it is \textbf{globally} ascending (resp.\ descending) if it is ascending (resp.\ descending) at every spot $m \in \mathbb{N}$. We say the CoP $\mathcal{C}(n,\mathbb{M})$ is \textbf{stationary} from $n$ to the \textbf{spot} $m$ if for $n<m$ we have \begin{align*} \nu(n,\mathbb{M})=\nu(m,\mathbb{M}). \end{align*} Similarly, we say it is \textbf{globally stationary} if it is stationary at all spots $m\in \mathbb{N}$. If the CoP $\mathcal{C}(n,\mathbb{M})$ is neither globally ascending, descending nor stationary, then we say it is globally \textbf{oscillatory}. \end{definition} \begin{theorem}\label{ascendingspots} Let $\mathbb{H}\subset \mathbb{N}$ and let $\mathcal{C}(n,\mathbb{H})$ be a CoP.
If \begin{align} \lim \limits_{n\longrightarrow \infty}\frac{|\mathbb{H}\cap \mathbb{N}_n|}{n}>0\nonumber \end{align} with \begin{align} \lim \limits_{n\longrightarrow \infty}\frac{|(\mathbb{N}\setminus \mathbb{H})\cap \mathbb{N}_n|}{n}<\frac{1}{2}\lim \limits_{n\longrightarrow \infty}\frac{|\mathbb{H}\cap \mathbb{N}_n|}{n}\nonumber \end{align} then $\mathcal{C}(n,\mathbb{H})$ is ascending at infinitely many spots. \end{theorem} \begin{proof} Let $\mathcal{C}(n,\mathbb{H})$ be a CoP and assume to the contrary that there are finitely many spots at which it is ascending. Let us name and arrange these spots as follows: $m_1<m_2<\cdots<m_k$. It follows that \begin{align} \nu(n,\mathbb{H})\geq \nu(m_{k+1},\mathbb{H})\geq \cdots \geq \nu(m_{k+i},\mathbb{H})\geq \cdots \nonumber \end{align} for all $i\geq 1$. The upshot is that \begin{align} \lim \limits_{n\longrightarrow \infty}\frac{\nu(n,\mathbb{H})}{\left \lfloor \frac{n-1}{2}\right \rfloor}=0.\nonumber \end{align} Next, by virtue of the uniqueness of axes of CoPs, we can compute the density of points with weight in $\mathbb{H}$ on the CoP $\mathcal{C}(n)$ as follows \begin{align*} \mathcal{D}(\mathbb{H}_{\mathcal{C}(\infty)})&=\lim \limits_{n\longrightarrow \infty}\frac{\# \left \{\mathbb{L}_{[x],[y]}~ \hat{\in}~ \mathcal{C}(n)|~\{x,y\}\cap \mathbb{H}\neq \emptyset\right \}}{\left \lfloor \frac{n-1}{2}\right \rfloor}\\ &=\lim \limits_{n\longrightarrow \infty}\frac{\# \left \{\mathbb{L}_{[x],[y]}~ \hat{\in}~ \mathcal{C}(n)|~x\in \mathbb{H},~y\in \mathbb{N}\setminus \mathbb{H}\right \}}{\left \lfloor \frac{n-1}{2}\right \rfloor} +\lim \limits_{n\longrightarrow \infty}\frac{\nu(n,\mathbb{H})}{\left \lfloor \frac{n-1}{2}\right \rfloor}\\ &=\lim \limits_{n\longrightarrow \infty}\frac{\# \left \{\mathbb{L}_{[x],[y]}~ \hat{\in}~ \mathcal{C}(n)|~x\in \mathbb{H},~y\in \mathbb{N}\setminus \mathbb{H}\right \}}{\left \lfloor \frac{n-1}{2}\right \rfloor}\\ &\leq \lim \limits_{n\longrightarrow \infty}\frac{|(\mathbb{N}\setminus \mathbb{H})\cap \mathbb{N}_n|}{\left \lfloor \frac{n-1}{2}\right \rfloor}\\ &\leq 2\lim \limits_{n\longrightarrow \infty}\frac{|(\mathbb{N}\setminus \mathbb{H})\cap \mathbb{N}_n|}{n}. \end{align*} Invoking Proposition \ref{inequality}, we have the inequality \begin{align} \lim \limits_{n\longrightarrow \infty}\frac{|\mathbb{H}\cap \mathbb{N}_n|}{n}\leq 2\lim \limits_{n\longrightarrow \infty}\frac{|(\mathbb{N}\setminus \mathbb{H})\cap \mathbb{N}_n|}{n}.\nonumber \end{align} This, however, violates the requirement of the statement, thereby ending the proof. \end{proof} \begin{remark} Next we obtain from this result another weak variant of the Erd\H{o}s-Tur\'{a}n conjecture. Roughly speaking, it asserts that sufficiently dense sequences qualify as additive bases. \end{remark} \begin{corollary} Let $\mathbb{H}\subset \mathbb{N}$ with $\mathcal{D}(\mathbb{H})>0$ such that $\mathcal{D}(\mathbb{N}\setminus \mathbb{H})<\frac{1}{2}\mathcal{D}(\mathbb{H})$. Setting \begin{align} r_{\mathbb{H}}(n):=\# \left \{(a,b)\in \mathbb{H}^2|~a+b=n\right \}\nonumber \end{align} we have $\lim \limits_{n\longrightarrow \infty}r_{\mathbb{H}}(n)=\infty$. \end{corollary} \section{\textbf{The $l^{th}$ Fold Energy of Circles of Partition}} In this section we introduce and study the notion of the $l^{th}$ fold energy of CoPs and explore some applications in this context. This notion tends to be more effective and extends to sequences that do not necessarily have positive density. \begin{definition} Let $\mathbb{M}\subset \mathbb{N}$ and let $\mathcal{C}(n,\mathbb{M})$ be a CoP. Then by the $l^{th}$-fold energy of the CoP $\mathcal{C}(n,\mathbb{M})$, we mean the quantity \begin{align} \mathcal{E}(l,\mathbb{M}):=\sum \limits_{n=3}^{\infty} \frac{\nu(n^l,\mathbb{M})}{\left \lfloor \frac{n^l-1}{2}\right \rfloor}\nonumber \end{align} for a fixed $l\in \mathbb{N}$.
\end{definition} \bigskip It is important to remark that the $l^{th}$ fold energy of a typical CoP $\mathcal{C}(n,\mathbb{M})$ could be either finite or infinite. To that effect we state the following proposition. \begin{proposition}\label{boundedenergy} Let $\mathbb{J}^l\subset \mathbb{N}$ be the set of all $l^{th}$ powers. Then $\mathcal{E}(l,\mathbb{J}^l)<\infty$ for all $l\geq 3$ and $\mathcal{E}(2,\mathbb{J}^2)=\infty$. \end{proposition} \begin{proof} Let $l\geq 3$ be fixed and consider the CoP $\mathcal{C}(n^l,\mathbb{J}^l)$, where $\mathbb{J}^l\subset \mathbb{N}$ is the set of all $l^{th}$ powers. Then the configuration of CoPs yields the following inequality \begin{align} \mathcal{E}(l,\mathbb{J}^l)&=\sum \limits_{n=3}^{\infty}\frac{\nu(n^l,\mathbb{J}^l)}{\left \lfloor \frac{n^l-1}{2}\right \rfloor}\nonumber \\ &\leq \sum \limits_{n=3}^{\infty}\frac{\frac{n}{2}}{\left \lfloor \frac{n^l-1}{2}\right \rfloor}\nonumber \\ & \ll \sum \limits_{n=3}^{\infty}\frac{1}{n^{l-1}}<\infty \nonumber \end{align} for all $l\geq 3$. \end{proof} \begin{proposition}\label{spotenergy} Let $\mathbb{M}\subset \mathbb{N}$ and let $\mathcal{C}(n,\mathbb{M})$ be a CoP. If $\mathcal{E}(l,\mathbb{M})=\infty$ for $l\geq 2$, then $\mathcal{C}(n^l,\mathbb{M})$ is ascending at infinitely many spots. \end{proposition} \begin{proof} Let $\mathcal{E}(l,\mathbb{M})=\infty$ and assume to the contrary that the CoP $\mathcal{C}(n^l,\mathbb{M})$ is ascending at only finitely many spots. Then $\nu(n^l,\mathbb{M})$ remains bounded as $n\longrightarrow \infty$. Since the terms $\nu(n^l,\mathbb{M})/\left \lfloor \frac{n^l-1}{2}\right \rfloor$ are then $O(n^{-l})$, this implies $\mathcal{E}(l,\mathbb{M})<\infty$, thereby contradicting the requirement of the statement.
\end{proof} \begin{theorem}\label{proofconjecture} Let $\mathbb{B}\subset \mathbb{N}$ with $\# \left \{n\leq x|~n\in \mathbb{B}\right \}\sim x^{1-\epsilon}$ for any $0<\epsilon\leq \frac{1}{2}$ and consider \begin{align*} \mathcal{G}_{\mathbb{B}}(n)=\nu(n,\mathbb{B}). \end{align*} If $\mathcal{G}_{\mathbb{B}}(n)>0$ for all sufficiently large values of $n$, then \begin{align} \limsup \limits_{n\longrightarrow \infty}\mathcal{G}_{\mathbb{B}}(n)=\infty.\nonumber \end{align} \end{theorem} \begin{proof} First we compute the two-fold energy $\mathcal{E}(2,\mathbb{B})$ of the CoP $\mathcal{C}(n,\mathbb{B})$. Since $\mathcal{G}_{\mathbb{B}}(n)>0$ for all sufficiently large values of $n$, it follows that $\mathcal{G}_{\mathbb{B}}(n^2)>0$ for all sufficiently large values of $n$, so that for all $k$ large enough there exists some constant $\mathcal{L}=\mathcal{L}(k)>0$ such that we can write \begin{align} \sum \limits_{n=3}^{k}\frac{\mathcal{G}_{\mathbb{B}}(n^2)}{\left \lfloor \frac{n^2-1}{2}\right \rfloor}&=\mathcal{L}(k)(1+o(1))\sum \limits_{n=3}^{k}\frac{\left \lfloor \frac{n^{2-2\epsilon}-1}{2}\right \rfloor}{\left \lfloor \frac{n^2-1}{2}\right \rfloor}\nonumber \\& \gg_k \sum \limits_{n=3}^{k}\frac{1}{n^{2\epsilon}}.\nonumber \end{align} By taking limits on both sides as $k\longrightarrow \infty$ and noting that $0<\epsilon \leq \frac{1}{2}$, we deduce $\mathcal{E}(2,\mathbb{B})=\infty$. Appealing to Proposition \ref{spotenergy}, it follows that \begin{align} \limsup \limits_{n\longrightarrow \infty}\mathcal{G}_{\mathbb{B}}(n^2)=\infty.\nonumber \end{align} Since $\left \{n^2 : n\in \mathbb{N}\right \}\subset \mathbb{N}$, it follows that $\limsup \limits_{n\longrightarrow \infty}\mathcal{G}_{\mathbb{B}}(n)=\infty$.
\end{proof} \bigskip Let $\mathbb{B}$ be an additive base of order $2$; then it is well-known that \begin{align} \# \left \{n\leq x|~n\in \mathbb{B}\right \}\geq \sqrt{x}.\nonumber \end{align} Combining this with Theorem \ref{proofconjecture}, the solution to the Erd\H{o}s-Tur\'{a}n additive bases conjecture follows as an easy consequence. \bibliographystyle{amsplain}
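As a numerical companion to the energy computations above, the partial sums of the $l^{th}$ fold energy can be evaluated directly. The Python sketch below is our own illustration (identifiers are our choices); for $l=3$ every term vanishes, since by Fermat's Last Theorem for exponent $3$ no cube is a sum of two positive cubes, which is consistent with the finiteness claim for $l\geq 3$.

```python
# Sketch (ours): partial sums of the l-th fold energy
# E(l, J^l) = sum_{n>=3} nu(n^l, J^l) / floor((n^l - 1)/2),
# where J^l is the set of l-th powers and nu counts axes of the CoP.

def nu(m, M):
    """Number of axes of C(m, M): unordered pairs {x, y} in M with x + y = m,
    excluding the center (2x = m)."""
    S = {x for x in M if 0 < x < m}
    return len({frozenset((x, m - x)) for x in S
                if (m - x) in S and 2 * x != m})

def energy_partial(l, N):
    """Partial sum of E(l, J^l) over generators n = 3..N."""
    powers = {a ** l for a in range(1, N + 1)}
    return sum(nu(n ** l, powers) / ((n ** l - 1) // 2)
               for n in range(3, N + 1))

if __name__ == "__main__":
    print(energy_partial(2, 50))  # Pythagorean axes contribute, e.g. 3^2 + 4^2 = 5^2
    print(energy_partial(3, 50))  # 0.0: no axes at all for l = 3
```

The contrast between $l=2$ (positive partial sums, driven by Pythagorean triples) and $l=3$ (identically zero) mirrors the dichotomy stated in the proposition on bounded energies, though a finite computation of course cannot by itself decide convergence or divergence of the full series.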
\section{Introduction} \label{sec:introduction} {The widespread use of personal devices generates new challenges while opening new appealing scenarios for future applications, such as, for example, those entailing \ac{D2D} interactions or Big Data management issues.} To meet these new {trends}, different disruptive technologies have been recently proposed for the next \ac{5G} wireless communications networks \cite{chin2014emerging, khaitan2011indoor}. In particular, large-scale antenna arrays at \acp{BS} or femtocell \acp{AP} make it possible to smartly direct the power flux towards intended users, thus increasing data rates, whereas \ac{mm-wave} communication provides a less crowded and larger spectrum \cite{larsson2014massive,rusek2013scaling,swindlehurst2014millimeter}. {In the coming years, it is expected that the localization and communication capabilities of personal devices will play a crucial role \cite{di2014location}: in fact, the possibility of localizing nodes in indoor environments will be an essential feature of future devices.} In this context, the \ac{AP} could be used as a single-anchor node, i.e., a node whose position is \textit{a-priori} known, from a radio-localization perspective, permitting mobile users to be aware of their own position. {Furthermore, the adoption of more than one antenna at the {\ac{Tx}} and {\ac{Rx}} will enable user orientation estimation with an accuracy higher than that provided by compasses and gyroscopes.
Such a feature could play a key role in applications beyond \ac{5G}, such as augmented reality and \ac{SLAM}, where trajectory errors, comprising both position and orientation estimation inaccuracies, dramatically affect the performance \cite{guidi2016personal}.} Contrary to traditional scenarios where dedicated multiple anchor nodes are necessary to allow classic triangulation/multilateration techniques \cite{dardari2015indoor}, here the possibility to centralize both communication and localization capabilities in a single multi-antenna \ac{AP} working at \ac{mm-wave} frequencies is envisioned, with the advantage of drastically decreasing the overall system complexity and cost. Moreover, when operating at such high frequencies, not only \ac{AP}s but also user terminals could adopt massive arrays {thanks to the reduced wavelength \cite{hong2014study},} thus increasing the localization accuracy even more, given the potentially huge set of measurements \cite{razavizadeh2014three,witrisal2016high,guerra2015position,garcia2016direct}. While at microwave frequencies the antenna array technology is quite mature, at \ac{mm-wave} frequencies severe technological constraints are still present and must be taken into account when designing positioning systems. Recently, massive antenna prototypes with electronic beamsteering capabilities have been proposed. In order to reduce the complexity, they adopt simple switches and thus {the resulting imperfect signal phasing operations} could impact the array radiation characteristics \cite{kaouach2011wideband,clemente2013wideband,GuiEtAl:J17}. {In such a scenario, it becomes of great interest to understand the fundamental limits on the localization error with massive antenna arrays both at the \ac{AP} and at mobile terminals using only a single reference node.} {Concerning the ultimate localization performance evaluation, a rich literature has been produced for the analysis of wideband multiple-anchor systems.
Specifically, in \cite{shen2010accuracy,shen2010fundamental} the authors explore the localization accuracy of a wideband sensor network composed of several independent anchors. Their results are further discussed in \cite{han2016performance}, where a more realistic {\ac{Rx} architecture} able to exploit the carrier phase information is taken into consideration while deriving the localization performance. Differently from these works, where anchors send orthogonal waveforms, we consider a signal model dependent {on} the particular array architecture chosen, in which both orthogonal and non-orthogonal waveforms can be transmitted. {Moreover, our work is not focused on a specific {\ac{Rx}} structure, as in \cite{han2016performance}, but it aims to {compare} different {{\ac{Tx}} array} architectures.}} In \cite{shahmansoori20155g,mallat2009crbs}, a joint delay-angle estimation is reported considering different array technologies and frequency bandwidths. Nevertheless, these works analyze the performance in terms of the \ac{CRB} on {delay and angular information} rather than {directly} on localization, and neither a comparison between different array schemes, nor the time synchronization issue, nor the impact of multipath is treated. {In our previous work {\cite{guerra2015position,guerra2016position_c}}, some preliminary results on positioning accuracy considering only beamforming strategies were presented, but the comparison with \ac{MIMO}, as well as the impact of \acp{MPC}, was not considered.} Stimulated by this framework, in this paper we conduct a \ac{CRB}-based analysis of a localization system exploiting the potential of next \ac{5G} technologies.
{Differently from the state of the art, we adopt a 1-step approach in which the {\ac{Tx}} position and orientation are directly inferred from the received signals and, thus, without the need to estimate intermediate parameters (e.g., \ac{TOA}-\ac{DOA}) or to apply geometrical methods, which do not ensure the optimality of the approach \cite{cover2012elements_c}.} {The main contributions of this work can be summarized as follows: \begin{itemize} \item Derivation of the theoretical performance limits on the localization and orientation error for different array configurations in a single-anchor scenario; \item Proposal of a signal model valid for any antenna array geometry, configuration (i.e., MIMO, phased, timed arrays), and frequency band. As a case study, the numerical results focus on the adoption of \ac{mm-wave} massive arrays due to their expected attractiveness in next \ac{5G} applications; \item Introduction of a low-complexity random weighting approach, i.e., randomly chosen beamforming weights, and analysis of its performance compared to that of classical beamforming and \ac{MIMO} solutions; \item Investigation of the \ac{CRB} tightness in the massive array regime (i.e., letting the number of antennas $\rightarrow \infty$) for any \ac{SNR} condition; \item Analysis of the \textit{trade-off} between the \ac{SNR} enhancement obtained via beamforming and the diversity gain of \ac{MIMO}, considering the impact of different types of uncertainties such as, for example, the \acp{MPC}, beamforming weights and time synchronization errors; \item Demonstration that, in the massive array regime (i.e., array antennas $\rightarrow \infty$), the effect of multipath can be made negligible on average. \end{itemize} } { The rest of the paper is organized as follows. Sec.~\ref{sec:SystemModel} describes the geometry of the localization system. Then, Sec.~\ref{sec:signalmodel} introduces the signal model taking into account different array structures.
In Sec.~\ref{sec:posbound}, the derivation of the localization performance limits is reported. Sec.~\ref{sec:crbtight} analyzes the asymptotic conditions under which the \ac{CRB} can be considered a tight bound. Sec.~\ref{sec:idealscenario} derives compact formulas for an ideal free-space case. The multipath impact on localization performance is investigated in Sec.~\ref{sec:mp_loc_ac}. Finally, Sec.~\ref{sec:numerical} presents the localization performance results and Sec.~\ref{sec:conclusions} concludes the work. } \paragraph*{Notation} Lower case and capital letters in bold denote vectors and matrices, respectively. The superscripts $\left[\cdot \right]^{\scriptscriptstyle \text{T}}$, $\left[\cdot \right]^{*}$ and $\left[\cdot \right]^{\scriptscriptstyle \text{H}}$ indicate the transpose, the conjugate and the Hermitian operators. $\lVert \cdot \rVert_2$ is the Euclidean norm, $\mathbf{A} \succeq \mathbf{B}$ indicates that the matrix $\mathbf{A} - \mathbf{B}$ is non-negative definite, and $\text{diag}\left(\cdot\right)$ represents the diagonal operator. The superscripts $(\cdot)^\text{t}$ and $(\cdot)^\text{r}$ refer to quantities related to the transmitting and receiving array, respectively, while the superscript $(\cdot)^{\text{tr}}$ refers to elements that can be related to both the {\ac{Tx}} and the {\ac{Rx}}. $(\cdot)^\text{FS}$ indicates the free-space scenario. $\mathcal{F}\left(\cdot\right)$ denotes the Fourier transform operation, $\mathcal{U}\left(a,b\right)$ a uniform distribution in the interval $\left[a, b\right]$, and $\mathcal{CN}\left(\mu,\sigma^2\right)$ a circularly symmetric complex Gaussian distribution with mean $\mu$ and variance $\sigma^2$. {The notation for frequently used symbols is listed as follows.
\begin{description}[\IEEEsetlabelwidth{Very long label}\IEEEusemathlabelsep] \item[$N_{\text{tx}}, N_{\text{rx}}$ ] Number of Tx-Rx array antennas \item[$L$ ] Number of \acp{MPC} \item[$A^\text{t}, A^\text{r}$ ] Area of the Tx-Rx array \item[$\mathbf{p}^\tx,\bm{\vartheta}^\tx $ ] Tx centroid position and orientation \item[$\mathbf{p}^\rx,\bm{\vartheta}^\text{r}$] Rx centroid position and orientation \item[$d$] Distance between Tx-Rx centroids \item[$S=A^\text{r}/d^2$] Ratio between the Rx array area and the squared inter-array distance \item[$\mathbf{p}_i^\text{t}, \mathbf{p}_m^\text{r}$] Tx/Rx antenna position \item[$d_\text{ant}$] Inter-antenna spacing \item[$\mathbf{d}\left(\bm{\theta} \right)$] Direction cosine \item[$\bm{\kappa}$] Multipath parameters vector \item[$\bm{\theta}_1$] Direct path wave direction \item[$\bm{\theta}_l$] $l$th path wave direction \item[$\bm{\theta}_0$] Steering direction \item[$\tau_{im1}, \tau_{iml}$] Propagation delay relative to the direct and $l$th path between the $i$th Tx-$m$th Rx antenna \item[$\tau_1, \tau_l$] Propagation delay relative to the direct and $l$th path between centroids \item[$a_1, \alpha_l$ ] Direct path amplitude and $l$th complex channel coefficient \item[$\tau_i^{\tx} (\bm{\theta}_l^{\text{t}},\bm{\vartheta}^\tx)$ ] Inter-antenna delay between the $i$th Tx antenna and the relative array centroid \item[$\tau_m^{\rx} (\bm{\theta}_l^{\text{r}},\bm{\vartheta}^\text{r})$ ] Inter-antenna delay between the $m$th Rx antenna and the relative array centroid \item[$\fc, W, \beta$ ] Transmitted signal carrier frequency, bandwidth, baseband effective bandwidth \item[$ T_\text{obs}$] Observation interval \item[$ E_\text{tot}, E$ ] Total and normalized energy at each antenna element \item[$N_0$] Single-side noise power spectral density \item[$N_\text{F}$] Receiver noise figure \item[$\mathsf{SNR}_1$ ] SNR relative to the direct path \item[$S_i(f), \mathbf{s}(f)$] Equivalent low-pass signal at the $i$th Tx antenna and 
transmitted signals vector in the frequency domain \item[$P_i(f)$] Equivalent low-pass unitary-energy signal at the $i$th Tx antenna in the frequency domain \item[$R_m(f), \mathbf{r}(f)$] Received signal at the $m$th Rx antenna and received signals vector in the frequency domain \item[$X_m(f), \mathbf{x}(f)$] Useful Rx signal at the $m$th Rx antenna and useful Rx signals vector in the frequency domain \item[$N_m(f), \mathbf{n}(f)$] Noise component at the $m$th Rx antenna and noise vector in the frequency domain \item[$\omega_i, \mathbf{B}(f, \bm{\theta}_0)$] Beamforming weight and matrix \item[$\mu_i^\text{t}(\bm{\theta}_0), \tau_i^\text{t}(\bm{\theta}_0), \upsilon_i$] Beamforming phase, TDL and random weight \item[$\delta_i^\text{t}, \Delta \tau_i^\text{t}$] Beamforming phase and TDL errors \item[$\tilde{\omega}_i, \mathbf{Q}(f)$] Beamforming weight and matrix with errors \item[$\epsilon^\text{s}$] Time synchronization error \item[$\bm{\psi}$] Estimation parameter vector \item[$\mathbf{J}_{\bm{\psi}}, \mathbf{J}_{\bm{\psi}}^{\text{d}}, \mathbf{J}_{\bm{\psi}}^{\text{p}}$] Bayesian FIM, FIM relative to data, \textit{a-priori} FIM \item[$\mathsf{CRB}\left(\mathbf{q} \right)$] CRB on position and orientation \item[$\mathsf{CRB}_0$] Single-antenna CRB on ranging error \end{description} } \section{Antenna Array Geometric Configuration} \label{sec:SystemModel} \subsection{Geometric Relationships} We consider a {3D} localization scenario, such as the one reported in Fig.~\ref{fig:sys2}, consisting of a single \ac{AP}, acting as a reference receiving node, equipped with an $N_{\text{rx}}$-antenna array, and a transmitting mobile terminal equipped with an $N_{\text{tx}}$-antenna array.
{The localization process aims at directly inferring:\footnote{ {As previously stated, we consider the {\ac{Tx}} position and orientation with respect to the relative centroid (see \eqref{eq:ch3_coord}-\eqref{eq:Rot_matrix} in the following) as we adopt a 1-step approach in which the {\ac{Tx}} position and orientation are directly inferred from the received signals. Thus, we estimate neither the \ac{DOA} (i.e., the angle between the array centroids) nor the direct path \ac{TOA}.} } \begin{itemize} \item the position of the {\ac{Tx}} centroid $\mathbf{p}^\tx~=~\left[x^{\text{t}}_0, y^{\text{t}}_0, z^{\text{t}}_0\right]^{\scriptscriptstyle \text{T}}=\left[x, y, z\right]^{\scriptscriptstyle \text{T}}$; \item the orientation of the {\ac{Tx}} $\bm{\vartheta}^\tx=\left[\vartheta^\tx, \varphi^\tx \right]^{\scriptscriptstyle \text{T}}$ \end{itemize}} \noindent when the {\ac{Rx}} centroid position $\mathbf{p}^\rx = \left[x^{\text{r}}_0, y^{\text{r}}_0, z^{\text{r}}_0\right]^{\scriptscriptstyle \text{T}}=\left[0, 0, 0\right]^{\scriptscriptstyle \text{T}}$ and orientation $\bm{\vartheta}^\text{r}=\left[\vartheta^\text{r}, \varphi^\text{r} \right]^{\scriptscriptstyle \text{T}}$ are known.\footnote{Without loss of generality, the {\ac{Rx}} is assumed to be located at the origin of the coordinate system.} With reference to Fig.~\ref{fig:sys2}, $\mathbf{p}^\tx_i\left(\bm{\vartheta}^\text{t} \right)= \left[x^{\tx}_{i},\,\, y^{\tx}_{i},\,\, z^{\tx}_{i} \right]^{\scriptscriptstyle \text{T}}$ indicates the position of the $i$th transmitting antenna relative to the {\ac{Tx}} geometric center, which depends on the {\ac{Tx}} orientation, and $\mathbf{p}^\rx_m(\bm{\vartheta}^\text{r})=\left[x^{\rx}_{m},\, y^{\rx}_{m},\, z^{\rx}_{m} \right]^{\scriptscriptstyle \text{T}}$ the position of the $m$th receiving antenna relative to the {\ac{Rx}} geometric center.
Considering spherical coordinates, we have { \begin{align}\label{eq:ch3_coord} &\mathbf{p}_{i/m}^{\text{tr}}(\bm{\vartheta}^{\text{tr}}) \!=\! {\rho}_{i/m}^{\text{tr}}\,\mathbf{R}\left(\bm{\vartheta}^{\text{tr}} \right)\, \mathbf{d}^{\scriptscriptstyle \text{T}}\left(\bm{\theta}_{i/m}^{\text{tr}} \right) \end{align} where the direction cosine is expressed as \begin{equation} \mathbf{d}\left(\bm{\theta} \right)=\left[\sin(\theta)\cos(\phi), \, \sin(\theta)\sin(\phi), \, \cos(\theta)\right]\, \end{equation} and ${\rho}_{i/m}^\text{tr}=\lVert \mathbf{p}_{i/m}^{\text{tr}}(\bm{\vartheta}^{\text{tr}})-\mathbf{p}^{\text{tr}} \rVert_2$ and $\bm{\theta}_{i/m}^{\text{tr}}=\left[\theta_{i/m}^{\text{tr}},\phi_{i/m}^{\text{tr}} \right]^{\scriptscriptstyle \text{T}}$ are the distance and the pair of angles between the considered array antenna and the corresponding array centroid.}\footnote{Note that the elevation angle is indicated throughout the text with $\theta$ and assumes values in the interval $\left[0, \pi \right)$. In contrast, the azimuth angle is denoted with $\phi$ and ranges in $\left[0, 2\,\pi \right)$.} \begin{figure}[t!] \psfrag{tx}[c][c][0.8]{Transmitter \qquad\quad} \psfrag{rx}[c][c][0.8]{Receiver \quad\quad} \psfrag{pr}[c][c][0.75]{$\mathbf{p}^\rx$} \psfrag{prm}[c][c][0.75]{$\,\, \mathbf{p}^\rx_m$} \psfrag{pt}[c][c][0.75]{$\mathbf{p}^\tx$} \psfrag{pti}[c][c][0.75]{$\mathbf{p}^\tx_i$} \psfrag{thetarx}[c][c][0.8]{$\theta^{\text{r}}_{m}$} \psfrag{phirx}[c][c][0.8]{$\phi^{\text{r}}_{m}$} \psfrag{rrx}[c][c][0.8]{$\rho^{\text{r}}_{m}$} \psfrag{toa}[c][c][0.8]{$\tau$} \psfrag{toaim}[c][c][0.8]{\,\,\,$\tau_{im}$} \psfrag{thetatx}[c][c][0.8]{$\theta^{\text{t}}_{i}$} \psfrag{phitx}[c][c][0.8]{$\phi^{\text{t}}_{i}$} \psfrag{rtx}[c][c][0.8]{$\rho^{\text{t}}_{i}$} \psfrag{Gtx}[c][c][0.65]{$\quad\quad\quad \quad \quad \!
\text{Tx centroid}$} \psfrag{Grx}[c][c][0.65]{$\quad\quad\quad \quad \quad \text{Rx centroid}$} \psfrag{tau}[c][c][0.8]{$\tau^{(1)}$} \psfrag{tauim}[c][c][0.8]{$\tau_{im}^{(1)}$} \psfrag{x}[c][c][0.55]{$x'$} \psfrag{y}[c][c][0.55]{$y'$} \psfrag{z}[c][c][0.55]{$z'$} \psfrag{x1}[c][c][0.55]{$x''$} \psfrag{y1}[c][c][0.55]{$y''$} \psfrag{z1}[c][c][0.55]{$z''$} \psfrag{x2}[c][c][0.55]{$x$} \psfrag{y2}[c][c][0.55]{$y$} \psfrag{z2}[c][c][0.55]{$z$} \psfrag{D}[c][c][0.7]{$D$} \psfrag{appr}[c][c][0.7]{\qquad\qquad\qquad $D \ll c\tau$} \psfrag{pc}[c][c][0.8]{$\mathbf{p}_\text{0}$} \psfrag{pant}[c][c][0.8]{$\mathbf{p}_\text{ant}$} \psfrag{phi}[c][c][0.8]{$\phi$} \psfrag{th}[c][c][0.8]{$\theta$} \psfrag{ro}[c][c][0.8]{$\rho$} \centerline{\includegraphics[width=0.35\textwidth]{Figures/Scenario5.eps}} \caption{Multi-antenna system configuration.} \label{fig:sys2} \end{figure} The rotation matrix $\mathbf{R}(\bm{\vartheta}^{\text{tr}})$ is given by \begin{equation}\label{eq:Rot_matrix} \mathbf{R}(\bm{\vartheta}^{\text{tr}})=\mathbf{R}_z(\varphi^{\text{tr}})\, \mathbf{R}_x(\vartheta^{\text{tr}}) \end{equation} where $\mathbf{R}_z(\varphi^{\text{tr}})$ and $ \mathbf{R}_x(\vartheta^{\text{tr}})$ define the counter-clockwise rotation around the $z$-axis and the clockwise rotation around the $x$-axis, respectively. Finally, $\bm{\theta}_1=\left[ \theta_1, \phi_1 \right]^{\scriptscriptstyle \text{T}}$ designates the angle of incidence between the array centroids (direct path) and $\bm{\theta}_0=\left[\theta_0, \phi_0\right]^{\scriptscriptstyle \text{T}}$ represents the intended pointing direction of the steering process, when applied. The diameter $D$ of the transmitting and receiving arrays is assumed to be much smaller than the inter-array distance $d=\left\lVert \mathbf{p}^\rx-\mathbf{p}^\tx \right\rVert_2$, i.e., $D \ll d$. Note that this hypothesis holds especially at \ac{mm-wave}, where the array dimensions are very small thanks to the reduced wavelength.
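For concreteness, the mapping \eqref{eq:ch3_coord}--\eqref{eq:Rot_matrix} from spherical coordinates and array orientation to antenna positions can be sketched in a few lines of Python. This is only an illustrative aid: the explicit sign convention of the counter-clockwise/clockwise rotations is an assumption of the sketch, not stated in the text.

```python
import numpy as np

def direction_cosine(theta, phi):
    # d(theta) = [sin(t)cos(p), sin(t)sin(p), cos(t)], as in Eq. (2)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def rotation(vartheta, varphi):
    # R = Rz(varphi) Rx(vartheta), as in Eq. (3); counter-clockwise about z,
    # clockwise about x (assumed sign convention)
    cz, sz = np.cos(varphi), np.sin(varphi)
    cx, sx = np.cos(vartheta), np.sin(vartheta)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, sx], [0.0, -sx, cx]])
    return Rz @ Rx

def antenna_position(rho, theta, phi, vartheta, varphi):
    # Eq. (1): p = rho * R(orientation) @ d(theta)^T
    return rho * rotation(vartheta, varphi) @ direction_cosine(theta, phi)
```

With a zero orientation the rotation reduces to the identity, and each antenna position is simply $\rho\,\mathbf{d}(\bm{\theta})$.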
Moreover, the arrays are assumed to be sufficiently far from the surrounding scatterers, so that the angles of incidence of both the direct path and the \acp{MPC} are identical at each antenna element. We take $L$ \acp{MPC} into consideration as nuisance parameters in the localization process, and the first path is always assumed to experience a \ac{LOS} propagation condition. As far as the \acp{MPC} parameters are concerned, we follow the same notation introduced in \cite{han2016performance}. In particular, let $\bm{\theta}_l^{\text{t}}=\left[ \theta_l^{\text{t}},\phi_l^{\text{t}} \right]^{\scriptscriptstyle \text{T}}=\left[ \theta_1+\Delta \theta_l^{\text{t}}, \phi_1+\Delta \phi_l^{\text{t}}\right]^{\scriptscriptstyle \text{T}}$ and $\bm{\theta}_l^{\text{r}}=\left[ \theta_l^{\text{r}},\phi_l^{\text{r}} \right]^{\scriptscriptstyle \text{T}}=\left[ \theta_1+\Delta \theta_l^{\text{r}}, \phi_1+\Delta \phi_l^{\text{r}} \right]^{\scriptscriptstyle \text{T}}$, with $l=1,2,\ldots,L$, indicate the angles of departure from the transmitting array and of incidence at the {\ac{Rx}} side of the $l\,$th path, respectively. The angular biases $\left[\Delta \theta_l^{\text{t}}, \Delta \phi_l^{\text{t}} \right]^{\scriptscriptstyle \text{T}}$ and $\left[\Delta \theta_l^{\text{r}}, \Delta \phi_l^{\text{r}} \right]^{\scriptscriptstyle \text{T}}$ are the displacements with respect to the direct path at the {\ac{Tx}} and {\ac{Rx}} sides. Obviously, $\left[\Delta \theta^{\text{t}}_1,\Delta \phi^{\text{t}}_1 \right]^{\scriptscriptstyle \text{T}}=\left[ \Delta \theta^{\text{r}}_1,\Delta \phi^{\text{r}}_1 \right]^{\scriptscriptstyle \text{T}}=\left[ 0,0 \right]^{\scriptscriptstyle \text{T}}$ when the direct path is considered.
Let $\tau_1 \triangleq {\left\lVert \mathbf{p}^\rx - \mathbf{p}^\tx\right\rVert_{2}}/{c}={d}/{c}$ and $\tau_{im1} \triangleq {\left\lVert \mathbf{p}^\rx_m - \mathbf{p}^\tx_i\right\rVert_{2}}/{c}$ denote the propagation delays related to the direct path between the transmitting and receiving centroids and between the $i$th transmitting and $m$th receiving antennas, respectively, where $c$ is the speed of light. Considering the multipath, the $l$th propagation delay between the array centroids is defined as $\tau_l=\tau_1+ \Delta \tau_l$, where $\Delta \tau_l$ is the non-negative delay bias of the $l$th path, with $\Delta \tau_1= 0$ \cite{han2016performance}. According to the geometric assumptions previously described, the \ac{TOA} and the amplitude between each pair of transmitting and receiving antennas can be expressed using the following approximations \cite{mallat2009crbs,han2016performance} \begin{align}\label{eq:tauapprox} &1) \,\, \tau_{iml} \approx \tau_l+\tau_i^{\tx} (\bm{\theta}_l^{\text{t}},\bm{\vartheta}^\tx)-\tau_m^{\rx} (\bm{\theta}_l^{\text{r}},\bm{\vartheta}^\text{r}) \quad 2) \,\, a_{iml}\approx a_l \end{align} where $a_{iml}$ is the amplitude of the $l$th path between the $m$th receiving and the $i$th transmitting antenna, and $\tau_m^{\rx}(\bm{\theta}_l^{\text{r}},\bm{\vartheta}^\text{r})$ and $\tau_i^{\tx}(\bm{\theta}_l^{\text{t}},\bm{\vartheta}^\tx)$ are, respectively, the receiving and transmitting inter-antenna propagation delays defined as \begin{align} \label{eq:ritardi} &\tau_{i/m}^\text{tr}(\bm{\theta}_l^\text{tr},\bm{\vartheta}^\text{tr})= \frac{1}{c}\,\mathbf{d}\left(\bm{\theta}_l^\text{tr}\right) \, \mathbf{p}_{i/m}^\text{tr} \left(\bm{\vartheta}^\text{tr} \right) \end{align} \begin{figure}[t!]
\psfrag{TX}[lc][lc][0.8]{Transmitter} \psfrag{RX}[lc][lc][0.8]{Receiver} \psfrag{theta1rx}[lc][lc][0.8]{{$\theta_1^\text{r}$}} \psfrag{theta1tx}[lc][lc][0.8]{{$\theta_1^\text{t}$}} \psfrag{phi1rx}[lc][lc][0.8]{{$\phi_1^\text{r}$}} \psfrag{phi1tx}[lc][lc][0.8]{{$\phi_1^\text{t}$}} \psfrag{orient}[lc][lc][0.8]{$\vartheta$} \psfrag{orienp}[lc][lc][0.8]{$\varphi$\quad} \psfrag{tauim}[lc][lc][0.8]{$\tau_{im1}$} \psfrag{taum}[lc][lc][0.8]{$\tau_{m}^\text{r}$} \psfrag{taui}[lc][lc][0.8]{$\tau_{i}^\text{t}$} \psfrag{theta}[lc][lc][0.8]{$\theta$ \quad} \psfrag{phi}[lc][lc][0.8]{$\phi$} \psfrag{x}[lc][lc][0.7]{$x$} \psfrag{y}[lc][lc][0.7]{$y$} \psfrag{z}[lc][lc][0.7]{$z$} \psfrag{tau}[lc][lc][0.7]{$\tau_1$} \psfrag{tau2}[lc][lc][0.7]{$\tau_2$} \psfrag{tau2mez}[lc][c][0.7]{$\tau_2 / 2$} \psfrag{i}[lc][lc][0.6]{$i$th} \psfrag{m}[lc][lc][0.6]{$m$th} \psfrag{ptx}[lc][c][0.6]{$\mathbf{p}^\tx$} \psfrag{prx}[lc][c][0.6]{$\mathbf{p}^\rx$} \psfrag{a}[lc][lc][0.6]{(a)} \psfrag{b}[lc][lc][0.6]{(b)} \psfrag{c}[lc][lc][0.6]{(c)} \psfrag{dant}[lc][lc][0.7]{$d_\text{ant}$} \psfrag{m}[lc][lc][0.7]{$m$th} \psfrag{mx}[lc][lc][0.7]{$m_x\,d_\text{ant}$} \psfrag{mz}[lc][lc][0.7]{$m_z\,d_\text{ant}$} \psfrag{x}[lc][lc][0.7]{$x$} \psfrag{z}[lc][lc][0.7]{$z$} \psfrag{a1}[lc][lc][0.7]{$\frac{\sqrt{N}}{2}$} \psfrag{b1}[lc][lc][0.7]{$-\frac{\sqrt{N}}{2}$\quad } \centerline{\includegraphics[width=0.5\textwidth]{./Figures/planar_geometry3.eps}} \caption{Array geometric configuration.} \label{fig:fig1} \end{figure} Fig.~\ref{fig:fig1} reports a graphical explanation of the considered system and delays. As can be seen, the approximation in \eqref{eq:tauapprox} makes it possible to write the \ac{TOA} as the sum of the inter-antenna delays and the delay between the array centroids.
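As a sanity check of this far-field decomposition, the following Python sketch compares the exact antenna-to-antenna delay with its first-order plane-wave expansion when $D \ll d$. It is illustrative only: the geometry, the numerical values, and the sign convention of the propagation direction are assumptions of the example, not the paper's exact parameterization.

```python
import numpy as np

c = 3e8                              # speed of light [m/s]
rng = np.random.default_rng(0)

d = 10.0                             # inter-centroid distance [m]
D = 0.1                              # array diameter [m], D << d
p_tx = np.array([0.0, d, 0.0])       # Tx centroid
p_rx = np.zeros(3)                   # Rx centroid at the origin
u = (p_rx - p_tx) / d                # unit propagation direction (Tx -> Rx)

# antenna offsets from the respective centroids, within diameter D
p_i = (rng.random(3) - 0.5) * D      # i-th Tx antenna offset
p_m = (rng.random(3) - 0.5) * D      # m-th Rx antenna offset

tau_1 = d / c                        # centroid-to-centroid delay
tau_exact = np.linalg.norm((p_rx + p_m) - (p_tx + p_i)) / c
# plane-wave (far-field) expansion: ||v + Delta|| ~ d + u . Delta
tau_approx = tau_1 + (u @ p_m - u @ p_i) / c

rel_err = abs(tau_exact - tau_approx) / tau_exact
```

For $D/d = 10^{-2}$ the residual is of second order in $D/d$, confirming that the centroid delay plus the inter-antenna projections captures the exact \ac{TOA} to high accuracy.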
\begin{figure*}[t] \begin{minipage}{0.95\textwidth} \psfrag{label0}[c][c][0.8]{$S(f)$} \psfrag{label1}[c][c][0.8]{$\tau_1^\text{t}(\bm{\theta}_0)+\Delta \tau_1^\text{t}$} \psfrag{label2}[c][c][0.8]{ {$\mu_1^\text{t}(\bm{\theta}_0)+\delta_1^\text{t}$}} \psfrag{label2r}[c][c][0.8]{$ {\upsilon_1}$} \psfrag{label3}[c][c][0.8]{$\tau_i^{\tx}(\bm{\theta}_0)+\Delta \tau_i^\text{t}$} \psfrag{label4}[c][c][0.8]{ {$\mu_i^\text{t}(\bm{\theta}_0)+\delta_i^\text{t}$}} \psfrag{label4r}[c][c][0.8]{$ {\upsilon_i}$} \psfrag{label5}[c][c][0.8]{$\tau_{N_{\text{tx}}}^\text{t}(\bm{\theta}_0)+\Delta \tau_{N_{\text{tx}}}^\text{t}$} \psfrag{label6}[c][c][0.8]{ {$\mu_{N_{\text{tx}}}^\text{t}(\bm{\theta}_0)+\delta_{N_{\text{tx}}}^\text{t}$}} \psfrag{label6r}[c][c][0.8]{$ {\upsilon_{N_{\text{tx}}}}$} \psfrag{label7}[c][c][0.8]{$S_{1}(f)$} \psfrag{label8}[c][c][0.8]{$S_{i}(f)$} \psfrag{label9}[c][c][0.8]{$S_{N_{\text{tx}}}(f)$} \psfrag{TDL}[c][c][0.65]{$\mathsf{TDL}$} \psfrag{timed}[c][c][0.8]{(b) Timed Array} \psfrag{phas}[c][c][0.8]{(a) Phased Array} \psfrag{mimo}[c][c][0.8]{(c) MIMO} \psfrag{rand}[c][c][0.8]{(d) {Random Weighting}} \centerline{\includegraphics[width=0.85\textwidth]{./Figures/MultiAnt_Scheme_4.eps}} \caption{From left to right: Phased, timed, MIMO and {{random weighting}} array schemes.} \label{fig:musch} \end{minipage} \end{figure*} \subsection{ {Special Case: Planar Array Geometry}} \label{sec:planar} Planar array configurations appear to be the most suitable when considering the integration of massive arrays in portable devices or in small spaces. {For this reason, in addition to the general analysis valid for any geometric configuration (i.e., any antenna spatial deployment and array orientation),} some compact specialized equations will be derived in the following sections for square arrays of area $A^\text{t}\!=\!d^2_\text{ant} {N_{\text{tx}}}$ ($A^\text{r}\!=\!d^2_\text{ant} {N_{\text{rx}}}$), with the antennas equally spaced by $d_\text{ant}$.
Both arrays are considered to lie on the $XZ$-plane, located one in front of the other, with $\mathbf{p}^\rx\!=\!\left[0,\, 0,\, 0 \right]^{\scriptscriptstyle \text{T}}$ and $\mathbf{p}^\tx=\left[0,\, y,\, 0 \right]^{\scriptscriptstyle \text{T}}$ with $y>0$, so that $d\!=\!y$ and thus $\tau_1=y/c$. In this case, the antenna coordinates in \eqref{eq:ch3_coord} become \begin{align}\label{eq:ch3_coord_planar} \!\!\!\!\!\mathbf{p}_m^\text{tr} \left(\bm{\vartheta}^\text{tr} \right)\!=\! \left[x_m^\text{tr},\, 0,\, z_m^\text{tr} \right]^{\scriptscriptstyle \text{T}}\!\!=\!\mathbf{R}\left(\bm{\vartheta}^\text{tr}\right) \! \left[m_x\, d_\text{ant},\, 0,\, m_z\, d_\text{ant} \right]^{\scriptscriptstyle \text{T}} \end{align} where $m_x=m_z=-\frac{\sqrt{N}-1 }{2},-\frac{\sqrt{N}-1}{2}+1, \ldots, \frac{\sqrt{N}-1}{2}$ are the antenna indices along the $x$- and $z$-axes, respectively, and $N$ indicates the number of antennas. We assume for now a free-space propagation condition so that $\bm{\theta}_1^\text{t}\!=\!\bm{\theta}_1^\text{r}\!\!=\!\!\bm{\theta}_1\!\!= {\left[\theta_1,\phi_1 \right]^{\scriptscriptstyle \text{T}}} =\!\!\left[\frac{\pi}{2},-\frac{\pi}{2} \right]^{\scriptscriptstyle \text{T}}$ and $\mathbf{d}(\bm{\theta}_1)=\left[0, \, -1, \, 0\right]$.
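As a numerical illustration of this planar setup, the sketch below builds the square grid of \eqref{eq:ch3_coord_planar} and evaluates the inter-antenna delays of \eqref{eq:ritardi} with $\mathbf{d}(\bm{\theta}_1)=[0,-1,0]$. The rotation sign convention and the parameter values are assumptions of the sketch.

```python
import numpy as np

c = 3e8                       # speed of light [m/s]
N, d_ant = 16, 0.0025         # 4x4 array, half-wavelength spacing at 60 GHz
n = int(round(np.sqrt(N)))
idx = np.arange(n) - (n - 1) / 2          # m_x, m_z in {-(sqrt(N)-1)/2, ...}
mx, mz = np.meshgrid(idx, idx)
mx, mz = mx.ravel(), mz.ravel()

def inter_antenna_delays(vartheta, varphi):
    # Eq. (6) with d(theta_1) = [0, -1, 0]; rotation R = Rz(ccw) Rx(cw)
    # (assumed sign convention) applied to the grid of Eq. (7)
    cz, sz = np.cos(varphi), np.sin(varphi)
    cx, sx = np.cos(vartheta), np.sin(vartheta)
    R = (np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
         @ np.array([[1.0, 0.0, 0.0], [0.0, cx, sx], [0.0, -sx, cx]]))
    p = R @ np.stack([mx * d_ant, np.zeros(N), mz * d_ant])   # antenna grid
    return np.array([0.0, -1.0, 0.0]) @ p / c                 # delays [s]

tau_zero = inter_antenna_delays(0.0, 0.0)   # zero orientation
```

With zero orientation the array lies in the $XZ$-plane through the centroid, so all inter-antenna delays vanish and the wavefront impinges simultaneously on every element.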
Consequently, \eqref{eq:ritardi} can be specialized as \begin{align}\label{eq:ritardi_planar} &\tau_m^{\rx}(\bm{\theta}_1,\bm{\vartheta}^\text{r})= -\frac{d_\text{ant}}{c}\,\left(m_x\, \sin\left(\varphi^\text{r}\right)+m_z \, \cos\left(\varphi^\text{r} \right)\, \sin\left(\vartheta^\text{r} \right) \right) \nonumber \\ &\tau_i^{\tx}(\bm{\theta}_1,\bm{\vartheta}^\tx)= -\frac{d_\text{ant}}{c}\,\left(i_x\, \sin\left(\varphi^\text{t}\right)+i_z \, \cos\left(\varphi^\text{t} \right)\, \sin\left(\vartheta^\text{t} \right) \right) \end{align} Note that, {in the special case in which} the {\ac{Rx}} and {\ac{Tx}} orientations are $\bm{\vartheta}^\text{r}=\bm{\vartheta}^\text{t}=\left[0, 0 \right]^{\scriptscriptstyle \text{T}}$, the inter-antenna delays are zero, i.e., $\tau_m^{\rx}(\bm{\theta}_1,\bm{\vartheta}^\text{r})=\tau_i^{\tx}(\bm{\theta}_1,\bm{\vartheta}^\tx)=0$ $\forall m,i$, as the antennas are aligned with the array centroids, so that the incident wave impinges simultaneously on all the antennas. \section{Antenna Array Schemes and Signal Model} \label{sec:signalmodel} In this section, different types of antenna array schemes are analyzed starting from a unified signal model, with the purpose of highlighting their beamforming and diversity gain properties. Specifically, the four array structures reported in Fig.~\ref{fig:musch} will be analysed from a signal processing point of view, focusing on how the different signaling schemes translate into different localization capabilities.
{Table \ref{tab:comparison_array} reports a comparison in terms of array complexity, capabilities, and cost.} \begin{table*}\caption{{Array schemes comparison.}} \label{tab:comparison_array} \begin{minipage}{0.95\textwidth} \begin{center}% { \begin{tabular}{|l l l c c|} \hline &&&& \\ & Signal Design & Array Implementation Complexity & Cost & Beamforming capabilities \\ &&&& \\ \hline &&&& \\ Timed & Low: same signal for all antenna branches & High: {TDLs} needed when $W \approx \fc$ & High & Yes \\ &&&& \\ Phased & Low: same signal for all antenna branches & Medium: only {PSs} & Medium & Yes \\ &&&& \\ MIMO & High: a different signal for each antenna branch & High: an RF chain for each branch & High & No \\ &&&& \\ Random & Low: same signal for all antenna branches & Low: only {PSs} & Low&No\\ & & & & \\ \hline \end{tabular} } \end{center} \end{minipage} \end{table*} \subsection{Transmitted Signal Model} \label{sec:ch3_txmodel} The transmitted signal at the $i$th transmitting antenna is denoted with $g_i(t)=\,\Re\left\{ s_i (t) \, e^{j\, 2\, \pi \fc t}\right\}$, where $s_i (t) $ represents the equivalent low-pass signal and $\fc$ the carrier frequency. We consider a constraint on the total transmitted energy $E_{\text{tot}}$, which is uniformly allocated among the antennas; thus $E \!\!=\!\! E_{\text{tot}} / N_{\text{tx}} \!\!=\!\! \int_{\Tob} \lvert s_i(t) \rvert^2 dt$, $i=1,2,\ldots, N_{\text{tx}}$, represents the normalized energy at each antenna element. We introduce the Fourier transform of $s_i(t)$ as $S_i(f)=\mathcal{F}\left\{ s_i(t) \right\}$, with $\mathcal{F}\left\{\cdot \right\}$ denoting the Fourier transform operation in a suitable observation interval $\Tob$ containing the signal support. For further convenience, the vector $\mathbf{s}(f)=\left[S_1(f),\, \ldots,\, S_{N_{\text{tx}}}(f) \right]^{\scriptscriptstyle \text{T}}$ contains all the baseband {transmitted} signals.
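The uniform energy allocation $E=E_{\text{tot}}/N_{\text{tx}}$ can be verified with a short numerical example; the Gaussian pulse shape and the values below are arbitrary assumptions of this sketch, not tied to the simulations of Sec.~\ref{sec:numerical}:

```python
import numpy as np

E_tot, N_tx = 1.0, 64
E = E_tot / N_tx                       # energy allocated to each antenna

# unit-energy baseband pulse p(t) on an observation interval T_obs
T_obs, n = 1e-6, 4096
t = np.linspace(0.0, T_obs, n)
dt = t[1] - t[0]
p = np.exp(-0.5 * ((t - T_obs / 2) / (T_obs / 20)) ** 2)   # Gaussian shape
p /= np.sqrt(np.sum(np.abs(p) ** 2) * dt)                  # enforce unit energy
s = np.sqrt(E) * p                                         # s_i(t) = sqrt(E) p(t)

energy_per_antenna = np.sum(np.abs(s) ** 2) * dt           # ~ E_tot / N_tx
```

By construction, scaling the unit-energy pulse by $\sqrt{E}$ makes the per-antenna energy $E$ and the total radiated energy $N_{\text{tx}} E = E_{\text{tot}}$.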
In the following, the signal model for each array configuration will be further detailed with reference to Fig.~\ref{fig:musch}. \subsubsection{Timed and phased arrays} \label{sec:ch3_txtimed} In multi-antenna systems, beamforming is obtained by applying a progressive time delay at each array element so that the emitted signals are coherently summed towards the intended steering direction. {Considering the signal bandwidth $W$, when the condition $W \ll \fc$ holds,} this process can be well approximated using only \acp{PS} (phased arrays). On the contrary, when $W \approx \fc$, phase shifts are no longer sufficient to align all the signals. As a consequence, to avoid distortion and beamsteering\footnote{The terms beamsteering and beamforming are used as synonyms.} degradation (squinting effect), timed arrays consisting of \acp{PS} and \acp{TDL} must be introduced. {The following analysis considers both array structures in order to preserve the generality of the model. Nevertheless, in Sec.~\ref{sec:numerical}, arrays operating at \ac{mm-wave} frequencies with $W \ll \fc$ (narrowband) will be adopted in simulations.\footnote{{As expected, since $W \ll \fc$, the localization performance of timed and phased arrays coincides.}} In \cite{guerra2015position}, different fractional bandwidths have been taken into account in the results.} {Moreover, differently from \cite{alkhateeb2014channel_c,garcia2016location_c}, where multiple beams are generated, here we consider a single-beam scenario in order to maximize the \ac{SNR} in the desired steering direction and to reduce the processing time.} Given these array schemes, the transmitted signal is the same for all transmitting antennas, i.e., ${s}_i(t)={s}(t)=\sqrt{E}\,\, p(t) \,\,\, \forall i=1,\, \ldots,\, N_{\text{tx}}$, with $p(t)$ being the unit-energy normalized version of $s(t)$, and beamforming weights are applied to each branch of the array to focus the power flux in a precise direction in space.
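The squinting effect can be visualized with a small numerical experiment. The sketch below is an illustration under assumed parameters and a simple uniform-linear-array convention (not the paper's exact model): it compares phase-only steering, whose phases are fixed at the carrier $\fc$, with true-time-delay steering at a frequency offset comparable to $\fc$.

```python
import numpy as np

c, fc = 3e8, 60e9                     # assumed mm-wave carrier [Hz]
N_tx = 16
lam = c / fc
x = (np.arange(N_tx) - (N_tx - 1) / 2) * lam / 2   # half-wavelength ULA

def array_gain(f, phi, phi0, timed):
    # |coherent sum| of the N_tx branches observed from azimuth phi at
    # baseband frequency f, when steering towards phi0.
    # timed=False: phase shifters only (phases matched at the carrier fc);
    # timed=True : phase shifters + TDLs, aligning the phases at fc + f.
    tau0 = x * np.sin(phi0) / c       # steering delays
    tau = x * np.sin(phi) / c         # propagation delays towards phi
    w = np.exp(1j * 2 * np.pi * fc * tau0)            # PS contribution
    if timed:
        w = w * np.exp(1j * 2 * np.pi * f * tau0)     # TDL contribution
    return np.abs(np.sum(w * np.exp(-1j * 2 * np.pi * (fc + f) * tau)))

phi0 = np.deg2rad(60.0)
f = 0.3 * fc                          # wideband case, W comparable to fc
g_timed = array_gain(f, phi0, phi0, timed=True)    # full gain N_tx
g_phased = array_gain(f, phi0, phi0, timed=False)  # degraded by squint
```

At the steering direction, the timed array keeps the full coherent gain $N_{\text{tx}}$ at every frequency, while the phase-only array retains it only near $f=0$ and loses it at large frequency offsets.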
Specifically, when no quantization errors are present in the weights, the ideal beamforming matrix can be defined as \begin{align}\label{eq:beamformvect} &\mathbf{B}\left(f, \bm{\theta}_0 \right)=\text{diag}\left(\omega_1,\, \omega_2,\,\ldots,\, \omega_i,\, \ldots,\, \omega_{N_{\text{tx}}} \right). \end{align} \newcounter{MYtempeqncnt0} \begin{figure*}[t!] \normalsize \setcounter{MYtempeqncnt0}{\value{equation}} \begin{align}\label{eq:rxsignal} &\mathbf{r}(f)=\sum_{l=1}^{L} \mathbf{a}^{\text{r}}(f, \bm{\theta}_l^{\text{r}},\bm{\vartheta}^\text{r})\, \mathbf{c}(f,\tau_l)\, \mathbf{A}^{\text{t}}(f, \bm{\theta}_l^{\text{t}}, \bm{\vartheta}^\tx)\, \mathbf{Q}(f)\, \mathbf{B}(f, \bm{\theta}_0)\, \mathbf{s}(f)+ \mathbf{n}(f) =\mathbf{x}(f)+\mathbf{n}(f) \end{align} \hrulefill \vspace*{4pt} \end{figure*} The $i$th beamforming weight is $\omega_i \!\!=\!\! b_i(f)\, b_i^\text{c}$, where we have defined $b_i(f) \!\!=\!\! e^{j 2\, \pi\, f \, \tau^{\text{t}}_{i}(\bm{\theta}_0)}$ and {$b_i^\text{c}\!\!=\!\! e^{j \mu_{i}^{\text{t}}(\bm{\theta}_0)}$}, with {$\mu_i^\text{t}(\bm{\theta}_0) \!\!=\!\! 2\, \pi\, \fc\, \tau^{\text{t}}_{i}(\bm{\theta}_0)$} and $\tau^{\text{t}}_{i}(\bm{\theta}_0)$ being the transmitting steering phase and delay related to the $i$th \ac{PS} and \ac{TDL} of the array, respectively. The main difference between phased and timed arrays lies in the way the beamsteering process is performed: in the former, only \acp{PS} are present (i.e., $\tau^{\text{t}}_{i}(\bm{\theta}_0)\!\!=\!\!0$ $\forall i=1,\,\ldots,\, N_{\text{tx}}$, refer to Fig.~\ref{fig:musch}-(a)), while in the latter \acp{TDL} and \acp{PS} are both employed to counteract the beamsquinting effect caused by a larger $W/\fc$ ratio (see Fig.~\ref{fig:musch}-(b)). Nevertheless, some technological issues could induce errors in the beamforming vector.
Firstly, when digitally controlled \acp{PS} are used in place of their high-resolution analog counterparts, the presence of quantization errors has to be accounted for \cite{guidi2016personal}. {As shown in \cite{kaouach2011wideband,clemente2013wideband}, where some massive array prototypes working in the X- and V-bands have been proposed, \acp{PS} can be realized by simply adopting switches, or by rotating patch antennas. Therefore, continuous phase shifts ranging from $0^\circ$ to $360^\circ$ are not realizable in practice, and the quantization errors generated by the consequent discretization of the phases should be taken into account when considering real massive arrays.} Secondly, {time synchronization} between the {\ac{Tx}} and the {\ac{Rx}} is required to estimate the position. There are several techniques to accomplish this task \cite{dardari2009ranging}, with two-way ranging being one of the most used. Unfortunately, due to several factors such as clock drift, a residual {time synchronization} error is always present, and it is accounted for by the term $\epsilon^\text{s}$ in our model. In the presence of such non-perfect weights and {time synchronization} error, a matrix accounting for all the non-idealities is introduced \begin{align}\label{eq:errorvect} \!\!\!\!&\mathbf{Q}(f)= e^{-j\,2\, \pi\, \left(f+\fc\right) \epsilon^\text{s}}\,\text{diag}\left(\varsigma_1,\, \varsigma_2,\, \ldots,\, \varsigma_i,\, \ldots,\, \varsigma_{N_{\text{tx}}}\right) \end{align} where $\varsigma_i$ takes into account the $i\,$th beamforming weight quantization error, i.e., $\varsigma_i=e^{j \left(2\, \pi\, f \, \Delta \tau^{\text{t}}_i+\delta_i^{\text{t}} \right)}$ with $\delta_i^\tx$ being the phase error and $\Delta \tau_i^{\tx}$ the \ac{TDL} error. For further convenience, let us indicate with {$\tilde{\omega}_i=\tilde{b}_i(f)\, \tilde{b}_i^{\text{c}}$ where} $\tilde{b}_i(f) \!\!=\!\!
e^{j 2\, \pi\, f \,( \tau^{\text{t}}_{i}(\bm{\theta}_0)+ \Delta \tau^{\text{t}}_i)}$ and {$\tilde{b}_i^{\text{c}}\!\!=\!\! e^{j (\mu_{i}^{\text{t}}(\bm{\theta}_0)+\delta_i^{\text{t}} )}$} the quantized weights. After the transmitting beamforming process, the signal at each antenna element can be written as $\mathbf{Q}(f)\, \mathbf{B}(f, \bm{\theta}_0)\, \mathbf{s}(f)$. \subsubsection{MIMO arrays} \label{sec:ch3_txmimo} In contrast to timed and phased arrays, which perform accurate beamforming, \ac{MIMO} arrays take advantage of {the diversity gain provided} by multiple {different} waveforms \cite{li2007mimo,fishler2006spatial} (see Fig.~\ref{fig:musch}-(c)).\footnote{ {Note that here we refer to \ac{MIMO} as done in radar literature rather than in communications.}} To enable the {\ac{Rx}} to discriminate the signal components coming from each transmitting antenna, orthogonal waveforms are typically adopted \cite{hassanien2010phased,fishler2006spatial,he2010target,haimovich2008mimo}. As an example, in \cite{fishler2006spatial} a class of signals (i.e., frequency-spread signals) is shown to maintain orthogonality for time delays and Doppler frequency shifts. This comes at the expense of a larger bandwidth or symbol duration and of a higher complexity. In \ac{MIMO} arrays, the normalized baseband transmitted signals are indicated with $P_i(f)=\mathcal{F}\left\{ p_i(t) \right\}=\frac{1}{\sqrt{E}}\, \mathcal{F}\left\{s_i (t)\right\}$, where $\int_{\Tob} \lvert p_i(t) \rvert^2 dt=1, i=1,2, \ldots, N_{\text{tx}}$. We consider orthogonal waveforms, such that the correlation function is \begin{align}\label{eq:orth} R_p\left(\Delta \tau_{ij}^{(l,k)} \right) & = \! \int_{W} \, P_{i}(f)\,P_{j}^{*}(f)\, e^{-j 2 \pi f \Delta \tau_{ij}^{(l,k)}}\, df \nonumber \\ &=\begin{cases} & \!\! 0 \qquad \,\,\,\,\,\, \text{$i\neq j$} \\ &\! \!
\neq 0 \qquad \text{$i=j$} \end{cases} \quad {\forall l,k=1,\ldots,L} \end{align} {where $\Delta \tau_{ij}^{(l,k)}=\tau_{iml}-\tau_{jmk}$ with $m=1,\ldots, N_{\text{rx}}$ and $i,j=1,\ldots, N_{\text{tx}}$}. The use of orthogonal waveforms permits the diversity gain to be increased, as will be detailed in the next sections, but it entails a greater bandwidth demand and a more complex {\ac{Tx}} structure. In \ac{MIMO}, the matrix in \eqref{eq:beamformvect} is an identity matrix $\mathbf{B}\left(f, \bm{\theta}_0 \right)=\mathbf{B}=\mathbf{I}_{N_{\text{tx}}}$. In the presence of the {time synchronization} error, \eqref{eq:errorvect} becomes $\mathbf{Q}(f)= e^{-j\,2\, \pi\, \left(f+\fc\right) \epsilon^\text{s}}\,\mathbf{I}_{N_{\text{tx}}}$. \subsubsection{{Random Weighting}} \label{sec:ch3_txrandom} To avoid the complexity of \ac{MIMO}, we propose a strategy relying on the same structure as phased arrays, i.e., with only \acp{PS} at each antenna branch (see Fig.~\ref{fig:musch}-(d)), with the fundamental difference that the value assigned to each \ac{PS} is randomly chosen. The beamforming matrix of \eqref{eq:beamformvect} becomes \begin{equation}\label{eq:randbeamvect} \mathbf{B}\left(f,\bm{\theta}_0 \right)=\mathbf{B}=\text{diag}\left(e^{j\, {\upsilon_1}}, e^{j\, {\upsilon_2}}, \ldots,\, e^{j\, {\upsilon_i}}, \ldots,\, e^{j\, {\upsilon_{N_{\text{tx}}}}} \right) \end{equation} with {$\upsilon_i \sim \mathcal{U}\left( 0, 2\pi \right)$}. Note that in this configuration the matrix in \eqref{eq:randbeamvect} depends neither on the frequency nor on the steering direction, thus resulting in an array pattern with a random shape \cite{guerra2015position}. In the simplest implementation, {{random weighting}} could be realized using switches as discrete \acp{PS} randomly changing their status \cite{clemente2013wideband}. An important aspect is that, for both \ac{MIMO} and {{random weighting}}, the rank of $\mathbf{B}$ is maximum and equal to $N_{\text{tx}}$.
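A minimal sketch of the {random weighting} matrix of \eqref{eq:randbeamvect} (with an illustrative $N_{\text{tx}}$), verifying the unit-modulus diagonal structure and the full-rank property noted above:

```python
import numpy as np

rng = np.random.default_rng(0)
N_tx = 16                    # illustrative number of Tx antennas

# Random-weighting beamforming matrix: diagonal with unit-modulus entries,
# phases upsilon_i ~ U(0, 2*pi); independent of frequency and steering
upsilon = rng.uniform(0.0, 2.0 * np.pi, N_tx)
B = np.diag(np.exp(1j * upsilon))

# Full rank N_tx, as for the MIMO identity matrix
rank = np.linalg.matrix_rank(B)
```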
\subsection{Received Signal Model} \label{sec:ch3_rxsignal} In this section, a general framework for the received signal model is illustrated. The received signals are collected in a vector $\mathbf{r}(f) \!\!=\!\!\left[R_1(f),\, \hdots,\, R_m(f),\, \hdots,\, R_{N_{\text{rx}}}(f) \right]^{\scriptscriptstyle \text{T}}$, where $R_m(f)=\mathcal{F}\left\{r_m (t) \right\}$ is evaluated in $\Tob$ and $r_m (t)$ is the equivalent low-pass received signal at the $m$th receiving antenna. {Specifically, the received signal can be written as in \eqref{eq:rxsignal}. The receiving and transmitting direction matrices for the inter-antenna delays and {\ac{Tx}} orientation are given by \begin{align}\label{eq:atar} \! \!\! \!&\mathbf{a}^{\text{r}}(f, \bm{\theta}_l^{\text{r}}, \bm{\vartheta}^\text{r})=\left[e^{j\, \gamma_1^\text{r}},\,\hdots,\,e^{j\, \gamma_m^\text{r}},\,\hdots,\,e^{j\, \gamma_{N_{\text{rx}}}^\text{r}} \right]^{\scriptscriptstyle \text{T}} \\ \! \!\! \!&\mathbf{A}^{\text{t}}(f, \bm{\theta}_l^{\text{t}},\bm{\vartheta}^\tx) \! =\! \text{diag} \left(e^{-j\, \gamma_1^\text{t}},\,\hdots,\,e^{-j\, \gamma_i^\text{t}},\,\hdots,\,e^{-j\, \gamma_{N_{\text{tx}}}^\text{t}}\right) \label{eq:atar2} \end{align} where $\gamma_{i/m}^{\text{tr}}=2\, \pi\, \left( f+\fc\right)\, \tau_{i/m}^\text{tr} \left(\bm{\theta}_l^\text{tr}, \bm{\vartheta}^{\text{tr}} \right)$;} while $\mathbf{c}(f,\tau_l) \!\!=\!\! c_l\, \mathbf{1}_{1 \times N_{\text{tx}}}$ is the $1 \times N_{\text{tx}}$ channel vector whose generic element is $c_l\!\!=\!\!a_l\,e^{-j\, 2\, \pi \, (f+\fc) \, \tau_l}\!\!=\!\!{\alpha}_l\,e^{-j\, 2\, \pi \, f \, \tau_l} $. Specifically, the dominant \ac{LOS} component related to the direct path (i.e., $l \!\!=\!\! 1$) is considered deterministic while, for $l > 1$, $ \alpha_{l} \sim \mathcal{CN} \left(0, \sigma_l^{ 2} \right) $ is a circularly symmetric Gaussian \ac{RV} statistically modelling the $l$th \ac{MPC} \cite{molisch2007wireless}.
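The channel model above can be sketched as follows, with a deterministic \ac{LOS} coefficient and circularly symmetric complex Gaussian \acp{MPC}; all delays, amplitudes and variances are hypothetical illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illustrative values (not from the paper)
fc = 60e9                                     # carrier frequency [Hz]
tau = np.array([50e-9, 63e-9, 71e-9, 90e-9])  # path delays, l = 1 is the LOS
a1 = 0.8                                      # deterministic LOS amplitude
sigma = np.array([0.3, 0.2, 0.1])             # std dev of the MPCs (l > 1)

# alpha_1 = a1 * exp(-j 2 pi fc tau_1) is deterministic; for l > 1,
# alpha_l ~ CN(0, sigma_l^2) is a circularly symmetric complex Gaussian RV
alpha_los = a1 * np.exp(-1j * 2 * np.pi * fc * tau[0])
alpha_mpc = (rng.normal(size=3) + 1j * rng.normal(size=3)) * sigma / np.sqrt(2.0)
alpha = np.concatenate(([alpha_los], alpha_mpc))

def c_l(f):
    """Generic channel elements c_l = alpha_l * exp(-j 2 pi f tau_l)."""
    return alpha * np.exp(-1j * 2 * np.pi * f * tau)
```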
Finally, $\mathbf{x}(f)\!\!=\!\!\left[{X}_1(f),\, \ldots,\, {X}_m(f),\, \ldots,\, {X}_{N_{\text{rx}}}(f) \right]^{\scriptscriptstyle \text{T}}$ collects the useful received signals and $\mathbf{n}(f)\!\!=\!\!\left[{N}_1(f),\, \ldots,\, {N}_m(f),\, \ldots,\, {N}_{N_{\text{rx}}}(f) \right]^{\scriptscriptstyle \text{T}}$ is the noise vector with $N_{m}(f)=\mathcal{F}\left\{n_{m}(t) \right\}$, with $n_{m}(t) \sim \mathcal{CN} \left(0, N_0\right)$ being circularly symmetric, zero-mean, complex Gaussian noise. For further convenience, define $\nu_t= {E_{\text{tot}}}/N_0=\nu \, N_{\text{tx}}$, with $\nu= E/N_0$. The (total) \ac{SNR} at each receiving antenna element is $\mathsf{SNR}_{\text{t}}= N_{\text{tx}} \mathsf{SNR}_1$, where $\mathsf{SNR}_1=\left(a_1\right)^2\,\nu$ represents the \ac{SNR} component related to the direct path between a generic pair of Tx-Rx antenna elements. \section{Position and Orientation Error Bound} \label{sec:posbound} \subsection{Unknown Parameters} The aim of the system is to estimate the position $\mathbf{p}^\tx$ of the {\ac{Tx}} and its orientation $\bm{\vartheta}^\tx$ starting from the set of received waveforms $\mathbf{r}(f)$. In this context, \acp{MPC} and the residual {time synchronization} error represent nuisance parameters when evaluating the ultimate performance of the estimator.
Thus, the unknown parameter vector is defined as \begin{align}\label{eq:trueparametervector} &\bm{\psi}=\left[\mathbf{q}^{\scriptscriptstyle \text{T}}, \,\bm{\kappa}^{\scriptscriptstyle \text{T}},\, \epsilon^\text{s} \right]^{\scriptscriptstyle \text{T}} \end{align} where {the parameters of interest related to localization and orientation} are collected in $\mathbf{q}=\left[\left(\mathbf{p}^\text{t}\right)^{\scriptscriptstyle \text{T}}, \, \left(\bm{\vartheta}^\text{t}\right)^{\scriptscriptstyle \text{T}}\right]^{\scriptscriptstyle \text{T}}$, and the multipath parameters in {$\bm{\kappa}=\left[\bm{\kappa}_1^{\scriptscriptstyle \text{T}},\bm{\kappa}_2^{\scriptscriptstyle \text{T}}, \ldots, \bm{\kappa}_l^{\scriptscriptstyle \text{T}}, \ldots, \bm{\kappa}_L^{\scriptscriptstyle \text{T}} \right]^{\scriptscriptstyle \text{T}}$}, with \begin{align}\label{eq:multipathparam} \bm{\kappa}_l=\begin{cases} \left[a_1 \right]^{\scriptscriptstyle \text{T}}\qquad\quad\,\,\, \text{if $l=1$} \\ \left[\alpha^{\Re}_l, \,\, \alpha^{\Im}_l \right]^{\scriptscriptstyle \text{T}}\quad \text{if $l>1$}. \end{cases} \end{align} The terms $\alpha^{\Re}_l=\Re\left\{\alpha_l \right\}$ and $\alpha^{\Im}_l=\Im\left\{\alpha_l \right\}$ indicate the real and imaginary parts of the complex channel coefficient, respectively \cite{godrich2009analysis,witrisal2012performance}. The {time synchronization} error is modeled as an independent zero-mean Gaussian \ac{RV} with variance $\sigma_{\epsilon}^2$. Note that the nuisance parameters $\bm{\psi_\text{r}}=\left[\bm{\kappa}_2^{\scriptscriptstyle \text{T}},\, \ldots,\, \bm{\kappa}_L^{\scriptscriptstyle \text{T}} ,\, \epsilon^\text{s} \right]^{\scriptscriptstyle \text{T}} $ are described statistically (\textit{a-priori} information available) whereas $\bm{\psi_\text{nr}}=\left[\mathbf{q}^{\scriptscriptstyle \text{T}}, \,a_1\right]^{\scriptscriptstyle \text{T}} $ are treated as deterministic (no \textit{a-priori} information available).
{In the following, we distinguish between two different cases depending on the orientation awareness. Specifically, the \textit{orientation-unaware} case refers to the situation in which the {\ac{Tx}} orientation is not known at the \ac{Rx}. Conversely, in the \textit{orientation-aware} case the orientation is exactly known at the {\ac{Rx}} side, and it can be removed from the list of unknown parameters in \eqref{eq:trueparametervector}.} {Moreover, we suppose that an initial search is conducted by the {\ac{Tx}} in order to coarsely estimate its own position and orientation with respect to the {\ac{Rx}}. Consequently, the beamforming weights can be communicated to the {\ac{Rx}} by exploiting the communication link dedicated to data exchange.} \subsection{\ac{CRB} General Expression} The performance of any unbiased estimator $\widehat{\bm{\psi}}=\widehat{\bm{\psi}}\left(\mathbf{r}\left( f\right) \right)$ can be bounded by the hybrid \ac{CRB} defined as \cite{van2004detection} \begin{equation} \mathbb{E}_{\mathbf{r}, \bm{\psi}_\text{r}} \left\{\left[\widehat{\bm{\psi}}-\bm{\psi} \right] \left[\widehat{\bm{\psi}}-\bm{\psi}\right]^{\scriptscriptstyle \text{T}} \right\} \succeq \mathbf{J}_{\bm{\psi}}^{-1}=\mathsf{CRB}\left({\bm{\psi}} \right) \end{equation} where $\mathbf{J}_{\bm{\psi}}$ is the Bayesian \ac{FIM} defined as \begin{align}\label{eq:FIM} &\!\!\mathbf{J}_{\bm{\psi}} \triangleq - \mathbb{E}_{\mathbf{r}, \bm{\psi}_\text{r}} \left\{\nabla_{\bm{\psi}\bm{\psi}}^2 \, \ln f\left(\mathbf{r}, \bm{\psi}_\text{r} \right) \right\} =\mathbf{J}^{\text{d}}_{\bm{\psi}} + \mathbf{J}^{\text{p}}_{\bm{\psi}} \, \nonumber \\ &\!\!\!\!=\!\left[\begin{array}{c:cc} \mathbf{J}_{\mathbf{q}\mathbf{q}}^\text{d}&\mathbf{J}_{\mathbf{q}\bm{\kappa}}^\text{d}&\mathbf{J}_{\mathbf{q}\epsilon^\text{s}}^\text{d} \\ \hdashline \mathbf{J}_{\bm{\kappa} \mathbf{q}}^\text{d}&\mathbf{J}_{\bm{\kappa} \bm{\kappa}}^\text{d}+\mathbf{J}_{\bm{\kappa} \bm{\kappa}}^\text{p}&\mathbf{J}_{\bm{\kappa}\epsilon^\text{s}}^\text{d} \\ \mathbf{J}_{\epsilon^\text{s} \mathbf{q}}^\text{d}&\mathbf{J}_{\epsilon^\text{s} \bm{\kappa}}^\text{d}&\mathbf{J}_{\epsilon^\text{s} \epsilon^\text{s}}^\text{d}+\mathbf{J}_{\epsilon^\text{s} \epsilon^\text{s}}^\text{p} \end{array}\right]\! \!=\!\!\left[\begin{array}{ll} \mathbf{A} & \mathbf{C} \\ \mathbf{C}^{\scriptscriptstyle \text{H}} & \mathbf{D} \end{array}\right] . \end{align} The symbol $\nabla_{\bm{\psi}\bm{\psi}}^2=\left( {\partial^2}/{\partial \bm{\psi} \partial \bm{\psi}} \right)$ denotes the second partial derivatives with respect to the elements in $\bm{\psi}$ and \begin{align}\label{eq:jtheta} &\mathbf{J}^{\text{d}}_{\bm{\psi}} =- \mathbb{E}_{\mathbf{r}, \bm{\psi}_\text{r}} \left\{\nabla_{\bm{\psi}\bm{\psi}}^2\, \ln f\left(\mathbf{r} \lvert \bm{\psi}_\text{r} \right) \right\} \quad \nonumber \\ &\mathbf{J}^{\text{p}}_{\bm{\psi}}=- \mathbb{E}_{\bm{\psi}_\text{r}} \left\{\nabla_{\bm{\psi}\bm{\psi}}^2\, \ln f\left(\bm{\psi}_\text{r} \right) \right\} \end{align} are the \ac{FIM} related to the data and the \ac{FIM} containing the \textit{a-priori} statistical information on the parameters, respectively. Since the observations at each receiving antenna element are independent, the log-likelihood function $\ln f\left(\mathbf{r} \lvert \bm{\psi}_\text{r} \right)$ can be written as \begin{equation} \ln f\left(\mathbf{r} \lvert \bm{\psi}_\text{r} \right) \propto -\frac{1}{N_0} \sum_{m=1}^{\Nrx} \int_{W} \lvert R_{m}(f)-X_m(f) \rvert^2 \, df. \end{equation} Moreover, based on the statistical information on $\bm{\psi}_\text{r}$, it is possible to derive the \textit{a-priori} probability density function of the parameters $\bm{\psi}_\text{r}$, whose expression is reported in Appendix~\ref{appA}. All \ac{FIM} elements are reported in detail in Appendices~\ref{appA} and~\ref{appB}.
Finally, by using the Schur complement, the \ac{CRB} expression related to the localization and orientation estimation error can be easily derived as \begin{equation}\label{eq:crb11} \mathsf{CRB}\left(\mathbf{q} \right)=\left(\mathbf{A}- \mathbf{C}\,\,\mathbf{D}^{-1}\,\,\mathbf{C}^{\scriptscriptstyle \text{H}} \right)^{-1} \, . \end{equation} Equation \eqref{eq:crb11} is a general bound valid for different set-ups (\ac{MIMO}, timed, phased and {random weighting} arrays) and accounts for beamforming weight quantization effects, {time synchronization} mismatch and multipath. Specialized expressions can be derived from \eqref{eq:crb11} for specific cases to gain insight into the key parameters affecting the performance, as will be done in {Sec.~\ref{sec:idealscenario}}. { \section{On the CRB tightness in massive array regime} \label{sec:crbtight} It is well known that the \ac{CRB} is a meaningful metric when the global ambiguities are negligible \cite{van2004detection}. Such a condition is satisfied when operating at high \ac{SNR} (asymptotic \ac{SNR} regime) but, unfortunately, the required high \acp{SNR} cannot in general be obtained, especially at high frequencies. \\ \indent Therefore, in the following, we demonstrate that the global ambiguities can be made negligible without imposing the \ac{SNR} to be very large by letting the antenna array be massive (\textit{massive array regime}). In particular, we aim to show that, under random \ac{Rx} array orientations, the number of geometric configurations in which the ambiguities are not negligible vanishes as the number of receiving antennas increases.
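The Schur-complement step of \eqref{eq:crb11} can be sketched numerically; here the FIM blocks are random positive-definite placeholders (not the actual FIM elements of the appendices), and the result is checked against the corresponding block of the full inverse:

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder FIM: random symmetric positive-definite matrix partitioned as
# [[A, C], [C^T, D]] with a q-block of size 5 (x, y, z, theta, phi)
n_q, n_nuis = 5, 7
M = rng.normal(size=(n_q + n_nuis, n_q + n_nuis))
J = M @ M.T + (n_q + n_nuis) * np.eye(n_q + n_nuis)   # well conditioned

A = J[:n_q, :n_q]
C = J[:n_q, n_q:]
D = J[n_q:, n_q:]

# CRB(q) via the Schur complement of the nuisance block
crb_q = np.linalg.inv(A - C @ np.linalg.inv(D) @ C.T)

# Equivalent: top-left q x q block of the full inverse J^{-1}
crb_full = np.linalg.inv(J)[:n_q, :n_q]
```

This is why the Schur complement is convenient: it yields the $\mathbf{q}$-block of $\mathbf{J}_{\bm{\psi}}^{-1}$ without inverting the full matrix.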
\\ \indent To this purpose, the \ac{AF}, widely used in radar systems, is a powerful tool to investigate the presence of ambiguities; it can be derived from the \ac{ML} criterion by discarding the thermal noise component \cite{san2007mimo_c}.\\ Let us define the normalized \ac{AF} as \begin{align}\label{eq:AFnorm} \mathsf{AF}\left(\mathbf{p},\tilde{\mathbf{p}} \right)& =\Bigg\lvert \, \frac{T_\text{obs}}{N_{\text{tx}}\, N_{\text{rx}}} \, \int_{W} \mathbf{x}^{\scriptscriptstyle \text{H}}(f,\mathbf{p}) \mathbf{x}(f,\tilde{\mathbf{p}} ) \, df\, \Bigg\rvert^2 \end{align} where $\mathbf{p}$ is the true \ac{Tx} position, $\tilde{\mathbf{p}}$ is a test position and $\mathbf{x}$ is the useful signal vector reported in \eqref{eq:rxsignal}. Asymptotically for $N_{\text{rx}} \rightarrow \infty$ (\textit{massive array regime}), by the weak law of large numbers \cite{van2000asymptotic_c}, we can write \begin{align}\label{eq:AFeq} \mathsf{AF}\left(\mathbf{p},\tilde{\mathbf{p}} \right) \xrightarrow[]{\,P \,} \Bigg\lvert \, \frac{T_\text{obs}}{N_{\text{tx}}\, N_{\text{rx}}} \, \int_{W} \mathbb{E}\left[ \,\mathbf{x}^{\scriptscriptstyle \text{H}}(f,\mathbf{p}) \mathbf{x}(f,\tilde{\mathbf{p}} )\right] \, df\, \Bigg\rvert^2 \end{align} where the operator $\xrightarrow[]{P}$ indicates convergence in probability. \\ In the following, we will consider the free-space and multipath cases separately, in order to show how the sidelobe level behaves in the massive array regime. The analysis in the non-massive array regime is considered in Sec. VIII. \subsection{Free-space Scenario} \label{sec:freespace_AF} Here we focus our attention on the free-space scenario (i.e., $l=k=1$).
In this case, the expectation term in \eqref{eq:AFeq} becomes \begin{align}\label{eq:ris_par} & \frac{1}{N_{\text{tx}}\, N_{\text{rx}}} \mathbb{E}\left[ \, \mathbf{x}^{\scriptscriptstyle \text{H}}(f,\mathbf{p}) \mathbf{x}(f,\tilde{\mathbf{p}} ) \right] \propto \frac{1}{N_{\text{tx}} \, N_{\text{rx}}}\, \mathbb{E} \left[\mathbf{H}(\mathbf{p}, \tilde{\mathbf{p}}) \right] \end{align} where $\mathbf{H}(\mathbf{p}, \tilde{\mathbf{p}})$ is an $N_{\text{tx}} \times N_{\text{tx}}$ matrix whose generic element is given by \begin{align} \left[\mathbf{H}(\mathbf{p}, \tilde{\mathbf{p}})\right]_{i,j} &= \lvert a_1 \rvert^2 \, e^{-j\, 2\, \pi\, (f+\fc)\Delta \tau_1(\mathbf{p}, \tilde{\mathbf{p}})}\, \tilde{\omega}_i\, \tilde{\omega}_j^* \nonumber \\ &\times \, e^{j \Psi_{ij}^{(1,1)}(\mathbf{p}, \tilde{\mathbf{p}})}\, \sum_{m=1}^{N_{\text{rx}}} \, \mathbb{E}\left[ e^{j \Psi_{m}^{(1,1)}(\mathbf{p}, \tilde{\mathbf{p}}, \bm{\vartheta}^\text{r})} \right] \, \nonumber \\ & = \begin{cases} &\lvert a_1 \rvert^2 N_{\text{rx}} \, e^{j \Psi_{ij}^{(1,1)}(\mathbf{p}, \tilde{\mathbf{p}})} \quad \text{$\mathbf{p}=\tilde{\mathbf{p}}$} \\ & 0 \qquad\qquad\qquad\qquad\quad \text{otherwise} \end{cases} \end{align} where we have defined $\Delta \tau_1(\mathbf{p}, \tilde{\mathbf{p}})= \tau_1(\mathbf{p})-\tau_1(\tilde{\mathbf{p}})$, $\Psi_{ij}^{(1,1)}(\mathbf{p}, \tilde{\mathbf{p}})= \gamma_i^\text{t}(\mathbf{p}, \bm{\vartheta}^\text{t})-\gamma_j^\text{t}(\tilde{\mathbf{p}}, \bm{\vartheta}^\text{t})$ and $\Psi_{m}^{(1,1)}(\mathbf{p}, \tilde{\mathbf{p}}, \bm{\vartheta}^\text{r})= -\gamma_m^\text{r}(\mathbf{p}, \bm{\vartheta}^\text{r})+\gamma_m^\text{r}(\tilde{\mathbf{p}}, \bm{\vartheta}^\text{r}) $ which depends on the \ac{Rx} array orientation. Note that $\Psi_{m}^{(1,1)}(\mathbf{p}, \tilde{\mathbf{p}}, \bm{\vartheta}^\text{r})=0$ for $\mathbf{p}=\tilde{\mathbf{p}}$ regardless of the \ac{Rx} orientation.
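The vanishing of the off-peak inner sum can be checked with a small Monte Carlo sketch, modeling the inter-antenna phase terms as uniform \acp{RV}; the array sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def off_peak_level(n_rx):
    """|(1/n_rx) * sum_m exp(j Psi_m)| with Psi_m ~ U[0, 2*pi).
    Models the inner sum of [H]_{i,j} for p != p_tilde under random
    Rx orientations; by the law of large numbers it shrinks with n_rx."""
    psi = rng.uniform(0.0, 2.0 * np.pi, n_rx)
    return np.abs(np.mean(np.exp(1j * psi)))

level_small = off_peak_level(16)      # small array: sizeable residual
level_large = off_peak_level(65536)   # massive array: residual ~ 1/sqrt(n_rx)
```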
On the other hand, when $\mathbf{p}\neq \tilde{\mathbf{p}} $, in the presence of a large number of antenna elements ($N_{\text{rx}} \rightarrow \infty$) and considering random \ac{Rx} orientations, the inter-antenna phase terms $\Psi_{m}^{(1,1)}(\mathbf{p}, \tilde{\mathbf{p}}, \bm{\vartheta}^\text{r})$ can be modeled as independent \acp{RV} uniformly distributed in $\left[ 0,2\pi\right)$. In fact, different geometric configurations allow all the angles to be spanned, especially when large arrays are considered\footnote{{The goodness of the fit with a uniform distribution has been validated through simulations.}}. \\ \indent This means that the percentage of geometrical configurations of the \ac{Rx} for which the ambiguities are not negligible (i.e., for which $\mathsf{AF}(\mathbf{p}, \tilde{\mathbf{p}})$ does not vanish when $\mathbf{p} \neq \tilde{\mathbf{p}}$) vanishes as $N_{\text{rx}}$ increases. \\ \indent In other words, there are two ways to operate in the non-ambiguity region during the CRB evaluation: the first is to increase the SNR (high-SNR regime) while keeping the number of antennas fixed, whereas the second fixes the SNR (even at moderate values) and lets the number of antennas grow large. \subsection{Multipath Scenario} \label{sec:MP_AF} This section aims at investigating whether the \ac{CRB} still remains a meaningful metric in the presence of multipath.
To this purpose, we consider the normalized \ac{AF}, highlighting the multipath contribution, as \begin{align}\label{eq:afnormmultipath} \mathsf{AF}\left(\mathbf{p},\tilde{\mathbf{p}} \right)& =\Bigg\lvert \, \frac{T_\text{obs}}{N_{\text{tx}}\, N_{\text{rx}}} \, \int_{W} \mathbf{x}^{\scriptscriptstyle \text{H}}(f,\mathbf{p}) \mathbf{x}(f,\tilde{\mathbf{p}}) \, df\, \Bigg\rvert^2 \nonumber \\ &=\Bigg\lvert \, \frac{T_\text{obs}}{N_{\text{tx}}\, N_{\text{rx}}} \, \int_{W} \left( \mathbf{x}_1(f,\mathbf{p})+\mathbf{x}_{l>1}(f,\mathbf{p}) \right)^{\scriptscriptstyle \text{H}} \nonumber \\ & \times \left( \mathbf{x}_1(f,\tilde{\mathbf{p}})+\mathbf{x}_{l>1}(f,\tilde{\mathbf{p}}) \right) \, df\, \Bigg\rvert^2 \nonumber \\ &=\Bigg\lvert \, \int_W \frac{f_\text{AWGN}\left(\mathbf{p},\tilde{\mathbf{p}} \right)}{N_{\text{tx}}\, N_{\text{rx}}} + \frac{f_\text{MP}\left(\mathbf{p},\tilde{\mathbf{p}} \right)}{N_{\text{tx}} N_{\text{rx}}} df \, \Bigg\rvert^2 \, \end{align} where $\mathbf{x}_1(f,\mathbf{p})$ and $\mathbf{x}_{l>1}(f,{\mathbf{p}})$ indicate the expected received (noise-free) signal due to the direct path and to multipath, respectively. Given the expression in \eqref{eq:afnormmultipath}, the following asymptotic analysis aims at verifying that the number of configurations in which the multipath impacts the \ac{AF} shape is negligible compared to the number of configurations in which it has no effect at all, provided that the number of {\ac{Rx}} antennas goes to infinity and that random array orientations are considered.
More precisely, recalling the weak law of large numbers, we have \begin{equation}\label{eq:problem} \frac{f_\text{MP}\left(\mathbf{p},\tilde{\mathbf{p}} \right)}{N_{\text{tx}} N_{\text{rx}}} \xrightarrow[]{\,\,\, P \,\,\, } \frac{1}{N_{\text{tx}}\, N_{\text{rx}}} \mathbb{E}\left[f_\text{MP}\left(\mathbf{p},\tilde{\mathbf{p}} \right) \right]\, \end{equation} where we aim at verifying that the right-hand side of \eqref{eq:problem} is $0$ for $\mathbf{p} \neq \tilde{\mathbf{p}}$, meaning that the \ac{AF} sidelobes due to multipath disappear when $N_{\text{rx}}$ is large and random orientations are considered.\\ The expectation argument in \eqref{eq:problem} is given by \begin{align}\label{eq:mpexp} \mathbb{E}\left[f_\text{MP}\left(\mathbf{p},\tilde{\mathbf{p}} \right) \right] &= \mathbb{E} \left[\mathbf{x}_{l>1}^\text{H} (f,\mathbf{p}) \mathbf{x}_1(f,\tilde{\mathbf{p}})\right] \nonumber \\ &+ \mathbb{E}\left[\mathbf{x}_1^\text{H}(f,\mathbf{p}) \mathbf{x}_{l>1} (f,\tilde{\mathbf{p}})\right] \nonumber \\ & + \mathbb{E}\left[\mathbf{x}_{l>1}^\text{H}(f,\mathbf{p}) \mathbf{x}_{l>1} (f,\tilde{\mathbf{p}})\right].
\end{align} Treating separately the terms in \eqref{eq:mpexp}, we have \begin{align} &\mathbb{E}\left[ \mathbf{x}_1^\text{H}(f,\mathbf{p}) \mathbf{x}_{l>1} (f,\tilde{\mathbf{p}}) \right]= \nonumber \\ &= \sum_{mij} \sum_{k=2}^{L} \alpha_1^* \,\alpha_k \, S_i(f)S_j^*(f)\, \tilde{\omega}_i\, \tilde{\omega}_j^*\,e^{-j 2\, \pi\, f \, \tau_k}\, e^{-j \,\Psi_{ij}^{(1,k)}\left( \mathbf{p}, \tilde{\mathbf{p}} \right)}\nonumber \\ & \times \mathbb{E}\left[ e^{j \Psi_m^{(1,k)}(\mathbf{p}, \tilde{\mathbf{p}})} \right] = 0 \qquad \forall \tilde{\mathbf{p}} \end{align} where $\sum_{mij}=\sum_{m=1}^{N_{\text{rx}}}\sum_{i=1}^{N_{\text{tx}}}\sum_{j=1}^{N_{\text{tx}}}$, $\Psi_m^{(1,k)}(\mathbf{p}, \tilde{\mathbf{p}})=-\gamma_m(\bm{\theta}_1, \bm{\vartheta}^\text{r}) + \gamma_m(\bm{\theta}_k, \bm{\vartheta}^\text{r}) $, $\Psi_{ij}^{(1,k)}=-\gamma_i(\bm{\theta}_1(\mathbf{p}))+\gamma_j(\bm{\theta}_1(\tilde{\mathbf{p}}))$, and $\mathbb{E}\left[ e^{-j \left( 2\, \pi\, f \, \tau_k+\, \Psi_m^{(1,k)}(\mathbf{p}, \tilde{\mathbf{p}})\right)} \right] = 0 $ as the phases are assumed uniformly distributed between $0$ and $2 \, \pi$. Similar considerations are valid for $\mathbb{E} \left[\mathbf{x}_{l>1}^\text{H} (f,\mathbf{p}) \mathbf{x}_1(f,\tilde{\mathbf{p}})\right]$. \\ Finally, consider the last term in \eqref{eq:mpexp}, i.e. 
\begin{align}\label{eq:mp_terms} &\mathbb{E}\left[ \mathbf{x}_{l>1}^\text{H}(f,\mathbf{p}) \mathbf{x}_{l>1} (f,\tilde{\mathbf{p}}) \right]= \nonumber \\ &= \sum_{mij}\sum_{l=2}^{L}\sum_{k=2}^{L}\, S_i(f)\, S_j^*(f)\,\tilde{\omega}_i\, \tilde{\omega}_j^*\, e^{-j\, \Psi_{ij}^{(l,k)} \left( \mathbf{p}, \tilde{\mathbf{p}} \right)} \nonumber \\ &\times \alpha_l\, \alpha_k^*\,e^{-j\, 2\, \pi\, f \, \Delta \tau_{lk}} \, \mathbb{E}\left[e^{-j\, \Psi_m^{(l,k)} \left( \mathbf{p}, \tilde{\mathbf{p}} \right)} \right] \end{align} where $\Delta \tau_{lk}=\tau_l-\tau_k$, $\Psi_m^{(l,k)} \left( \mathbf{p}, \tilde{\mathbf{p}} \right)=\gamma_m^\text{r}(\bm{\theta}_l, \mathbf{p}, \bm{\vartheta}^\text{r})- \gamma_m^\text{r}(\bm{\theta}_k, \tilde{\mathbf{p}}, \bm{\vartheta}^\text{r})$, $\Psi_{ij}^{(l,k)} \left( \mathbf{p}, \tilde{\mathbf{p}} \right)=\gamma_i^\text{t}(\bm{\theta}_l, \mathbf{p})- \gamma_j^\text{t}(\bm{\theta}_k, \tilde{\mathbf{p}})$.\\ In this case, since it holds \begin{align}\label{eq:effect} & \mathbb{E}\left[ \,e^{-j\, \Psi_m^{(l,k)} \left( \mathbf{p}, \tilde{\mathbf{p}} \right)}\right] \nonumber \\ & =\begin{cases} &0 \quad \text{if $l \neq k, \quad \forall \tilde{\mathbf{p}}$} \\ &1 \quad \text{if $l = k, \quad \mathbf{p} = \tilde{\mathbf{p}} $ }\\ &\mathbb{E}\left[ e^{-j\, \Psi_m^{(l,l)} \left( \mathbf{p}, \tilde{\mathbf{p}} \right)} \right] =0 \quad \text{if $l = k, \quad \mathbf{p} \neq \tilde{\mathbf{p}} $\, , } \end{cases} \end{align} it follows that \eqref{eq:mp_terms} is equal to $0$ for $\mathbf{p} \neq \tilde{\mathbf{p}} $, i.e., in all those cases in which a global ambiguity can arise.\\ \indent The obtained result shows that the global ambiguities due to the multipath are, on average, negligible. Nevertheless, the effect of multipath still remains at the true peak of the \ac{AF}, i.e., for $\mathbf{p}=\tilde{\mathbf{p}}$, as reported in \eqref{eq:effect} for $l=k$.
Consequently, even if we can state that the \ac{CRB} is a valid metric in establishing the ultimate performance provided that $N_{\text{rx}}$ is sufficiently large, the effect of multipath on the localization accuracy needs to be investigated. Specifically, Sec.~\ref{sec:mp_loc_ac} analyzes the effect of multipath from a localization accuracy point-of-view. } \section{{ Free-Space Localization Bound }} \label{sec:idealscenario} Here we provide an example of how the general expression \eqref{eq:crb11} can be simplified in the absence of beamforming weight errors and \acp{MPC}. Specifically, in free-space conditions, \eqref{eq:FIM} reduces to \begin{equation} \label{eq:FIMetaThetaAWGN} \mathbf{J}^{\text{d}}_{\bm{\psi}}=\mathbf{J}_{\bm{\psi}}=\left[\begin{array}{cc} \mathbf{J}_{\mathbf{q}\mathbf{q}}& \mathbf{J}_{\mathbf{q}a_1} \\ \mathbf{J}_{a_1 \mathbf{q}}& {J}_{a_1 a_1} \, \end{array}\right]=\left[\begin{array}{cc} \mathbf{J}_{\mathbf{q}\mathbf{q}}^{\text{ {FS}}} & {\mathbf{0}} \\ {\mathbf{0}} & {J}_{a_1 a_1}\, \end{array}\right] \end{equation} where its elements are reported in Appendix~\ref{appB} and where the superscript $^\text{d}$ is omitted since in this case all the parameters to be estimated are deterministic. {For the reader's convenience, we report here the expression of the \ac{FIM} related to the localization parameters, that is: \begin{align}\label{eq:Jqq_main} J_{q_b\, q_a}=& 8\, \pi^2\, \nu \, a_1^2 \, \sum_{mij} \Re \left\{\tilde{b}_{ij}^\text{c}\,\xi_{ij}^{(1,1)}\, \chi_{ij}^{(1,1)}(2)\, \right\} \nonumber \\ &\times \nabla_{q_a}\left( \tau_{im1}\right) \nabla_{q_b}\left( \tau_{jm1} \right) \end{align} where {$q_{a/b}$ are two elements in the set $\left\{x,\, y,\, z,\, \vartheta^\tx ,\, \varphi^\tx \right\}$, and} \begin{align}\label{eq:chiij2} &\chi_{ij}^{(1,1)}(2)\!\!=\!\!\int_W\!\!\!\tilde{b}_{ij}(f)\left(f+\fc \right)^2\!
e^{-j\, 2\, \pi\, f\, \Delta \tau_{ij}^{(1,1)}}P_i(f)\, P_j^*(f) \,df \end{align} with $\Delta \tau_{ij}^{(1,1)}=\tau_{im1}-\tau_{jm1}$, $\xi_{ij}^{(1,1)}=e^{-j\, 2\, \pi\, \fc\, \Delta \tau_{ij}^{(1,1)}}$, $\tilde{b}_{ij}(f)=\tilde{b}_i(f)\, \tilde{b}_j^*(f)$, and $\tilde{b}_{ij}^\text{c}=\tilde{b}_i^\text{c}\, \left(\tilde{b}_j^\text{c}\right)^*$. In \eqref{eq:Jqq_main}, the derivatives translate the \ac{TOA} and \ac{DOA} into position and orientation information. In particular, for the position we have \begin{align}\label{eq:der_toa_q0} & \nabla_p\left(\tau_{im1} \right)= \frac{1}{c}\, \left\{ c\,\nabla_p\left( \tau_1\right)+\nabla_p\left( \bm{\theta}_1\right) \left[\mathbf{p}_i^\text{t} \left(\bm{\vartheta}^\text{t}\right)-\mathbf{p}_m^\text{r}\left(\bm{\vartheta}^\text{r} \right) \right] \right\}. \end{align} The term $\nabla_p\left(\tau_1 \right)$ expresses the dependence of the position on the direct-path \ac{TOA}; while \begin{align}\label{eq:etaq1} \nabla_p\left(\bm{\theta}_1 \right)=& \nabla_p\left({\theta}_1 \right)\, \cos(\theta_1) \left[\begin{array}{c} \cos(\phi_1) \\ \sin(\phi_1) \\ - \tan(\phi_1) \end{array}\right]^{\scriptscriptstyle \text{T}} \nonumber \\ &+ \nabla_p\left({\phi}_1 \right)\, \sin(\theta_1) \left[\begin{array}{c} -\sin(\phi_1) \\ \cos(\phi_1) \\ 0 \end{array}\right]^{\scriptscriptstyle \text{T}} \end{align} includes the dependence of the position on the \ac{DOA} information. Finally, as far as the orientation information is concerned, we have \begin{align}\label{eq:nablaorie} & \nabla_{\bm{\vartheta}^\text{t}}( {\tau_{im1}})= \nabla_{\bm{\vartheta}^\text{t}}( {\tau_{i}^\text{t}\left(\bm{\theta}_1, \bm{\vartheta}^\text{t} \right)})= \frac{1}{c}\, \mathbf{d}\left( \bm{\theta}_1 \right)\, \nabla_{\bm{\vartheta}^\text{t}}\left(\mathbf{p}_i^\text{t}\left( \bm{\vartheta}^\text{t}\right) \right).
\end{align} By further analyzing \eqref{eq:Jqq_main}, one can notice the dependence of the \ac{FIM} on the beamforming weights through the coefficients $\tilde{b}_{ij}^\text{c}$ and $\tilde{b}_{ij}(f)$.\\ Given the \ac{FIM} in \eqref{eq:Jqq_main} and starting from \eqref{eq:FIMetaThetaAWGN}, it can easily be found that for beamforming and \ac{MIMO} it is } \begin{equation}\label{eq:crbawgn} \mathsf{CRB}^\text{FS}\left(\mathbf{q} \right)=\!\left(\mathbf{J}_{\mathbf{q}\mathbf{q}}^\text{FS}\right)^{-1}= \left(\breve{\mathbf{J}}_{\mathbf{q}\mathbf{q}}^\text{FS}\,\, \mathbf{G} \right)^{-1} \end{equation} where {we have separated} the effect of the signal design $\breve{\mathbf{J}}_{\mathbf{q}\mathbf{q}}^\text{FS}$, i.e., that related to \eqref{eq:chiij2}, from that of the geometry $\mathbf{G}$, i.e., that related to \eqref{eq:der_toa_q0}-\eqref{eq:nablaorie}. Specifically, for timed arrays, we have \begin{align}\label{eq:crbawgn_elem} &\breve{\mathbf{J}}_{\mathbf{q}\mathbf{q}}^\text{FS}={8 \pi^2 \,\mathsf{SNR}_1} \left({\beta^2}+\fc^2 \right),\,\,\mathbf{G}=\sum_{mij} \nabla_{\mathbf{q}\mathbf{q}}\left(\tau_{im1} ,\tau_{jm1} \right) \end{align} where {$\nabla_{\mathbf{q}\mathbf{q}}\left(\tau_{im1} ,\tau_{jm1} \right)$ is a $5 \times 5$ matrix whose entries are given by $\nabla_{{q}_a}\left(\tau_{im1}\right) \nabla_{{q}_b}\left(\tau_{jm1} \right)$}, and $\beta$ is the baseband effective bandwidth of $p(t)$, defined as \begin{align} &\beta=\left(\int_{W}\, f^2\, \lvert P(f) \rvert^2\,df \right)^{\frac{1}{2}}\,.
\end{align} \noindent Similarly, for \ac{MIMO} arrays, it is possible to find \begin{align} &\breve{\mathbf{J}}_{\mathbf{q}\mathbf{q}}^\text{FS}= {8 \pi^2 \,\mathsf{SNR}_1} \left(\beta^2_i+\fc^2 \right), \,\, \mathbf{G}=\sum_{mi} \nabla_{\mathbf{q}\mathbf{q}}\left(\tau_{im1} ,\tau_{im1} \right) \end{align} where $\sum_{mi}=\sum_{m=1}^{N_{\text{rx}}}\sum_{i=1}^{N_{\text{tx}}}$ and $\beta^2_i=\frac{\beta^2}{N_{\text{tx}}}$ is the squared baseband effective bandwidth of $p_i(t)$. The matrix $\mathbf{G}$ provides, through derivatives, the relationship between the \ac{TOA} at each Tx-Rx antenna element pair and the {\ac{Tx}} position and orientation. To aid the comprehension of \eqref{eq:crbawgn}-\eqref{eq:crbawgn_elem}, the next sections discuss two particular cases of planar \ac{MIMO} and timed arrays considering a fixed {\ac{Tx} and \ac{Rx}} orientation {(i.e., $\bm{\vartheta}^\tx=\bm{\vartheta}^\text{r}=\left[0, 0 \right]^{\scriptscriptstyle \text{T}}$)}. Note that the overall \ac{CRB} analysis is still valid for any orientation. In Secs.~\ref{sec:MIMOplanarArray} and \ref{sec:TIMEDplanarArray}, we choose a specific case just to provide some insight into how the number of transmitting and receiving antennas can impact the performance.
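As a numerical check of the effective-bandwidth definition, for a flat unit-energy spectrum over a (hypothetical) bandwidth $W$ one gets $\beta^2=W^2/12$:

```python
import numpy as np

W = 1e9                                   # assumed two-sided bandwidth [Hz]
f = np.linspace(-W / 2, W / 2, 200001)
df = f[1] - f[0]
P2 = np.full_like(f, 1.0 / W)             # flat, unit-energy spectrum |P(f)|^2

# beta^2 = int_W f^2 |P(f)|^2 df; for a flat spectrum this equals W^2 / 12
beta2 = np.sum(f**2 * P2) * df

# For MIMO, each of the N_tx orthogonal waveforms carries beta_i^2 = beta^2 / N_tx
N_tx = 16
beta2_i = beta2 / N_tx
```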
{In Appendix~\ref{appC}, the matrix $\mathbf{G}$ is evaluated considering this specific array geometry.} \subsubsection{ {Special Case: Planar MIMO Array}} \label{sec:MIMOplanarArray} For the planar geometric configuration and in the \textit{orientation-unaware} case, the diagonal elements of the position and orientation \ac{CRB} matrix, derived starting from \eqref{eq:crbawgn}-\eqref{eq:crbawgn_elem} and from \eqref{eq:Gresults1}, are given by \begin{align}\label{eq:crbMIMOpeo} &\mathsf{CRB}\left(x \right)=\mathsf{CRB}\left(z \right)=\mathsf{CRB}_0\,\frac{12}{S\, \left(N_{\text{rx}}-1\right)} \nonumber \\ &\mathsf{CRB}\left(y \right)=\frac{\mathsf{CRB}_0}{N_{\text{rx}}} \nonumber \\ &\mathsf{CRB}\left(\vartheta^\tx \right)\!=\!\mathsf{CRB}\left(\varphi^\tx \right) \!=\!\mathsf{CRB}_0\,\frac{12\,\left(N_{\text{tx}}+N_{\text{rx}}-2\right)}{A_\text{rx} \left(N_{\text{tx}}-1\right)\! \left(N_{\text{rx}}-1\right)} \end{align} where $\mathsf{CRB}_0={c^2}/{\left(8 \pi^2 \, \mathsf{SNR}_{\text{t}}\, \left(\beta^2_i+\fc^2\right)\right)}$ is the \ac{CRB} of the ranging error one would obtain using a single antenna, and $S={A^\text{r}}/{y^2}$ represents the ratio between the {\ac{Rx}} array area and the squared {\ac{Tx}-\ac{Rx}} distance. Note that $\mathsf{CRB}_0$ depends on the carrier frequency $\fc$, on the shape of the pulse through $\beta_i^2$ and on the received \ac{SNR}, while it does not depend on the number of transmitting antennas. The analytical derivation is reported in Appendix~\ref{appC}. From \eqref{eq:crbMIMOpeo}, it is possible to observe that the \ac{CRB} of the estimation error in the $y$-coordinate is inversely proportional to the number of receiving antenna elements, which accounts for the number of independent measurements available at the {\ac{Rx}}.
Regarding the other two coordinates, a key parameter for the estimation accuracy is $S$, which is related to the ratio between the dimension of the {\ac{Rx}} array and the distance between the arrays: as this ratio becomes smaller (i.e., as the distance between the arrays becomes larger with respect to the array size), the positioning accuracy degrades. From \eqref{eq:crbMIMOpeo} it is also possible to notice that the accuracy in estimating the orientation depends on both the transmitting and the receiving antennas. Specifically, both $N_{\text{tx}}$ and $N_{\text{rx}}$ must be greater than one for orientation estimation to be possible, whereas for positioning the only constraint is that the number of receiving elements must be larger than $1$. Moreover, the non-zero off-diagonal elements reveal a correlation between the errors in the estimation of position and orientation. Specifically, we have \begin{align} \mathsf{CRB}\left(z, \vartheta^\tx \right)&=\mathsf{CRB}\left(\vartheta^\tx, z \right)=\mathsf{CRB}\left(x, \varphi^\tx \right) =\mathsf{CRB}\left(\varphi^\tx, x \right) \nonumber \\ &=\mathsf{CRB}_0 \frac{12}{S\, y\, \left(1-N_{\text{rx}}\right)}. \end{align} Conversely, in the \textit{orientation-aware} case, it can be found that \begin{align}\label{eq:CRBmimoknown} &\mathsf{CRB}\left(x \right)=\mathsf{CRB}\left(z\right)= \mathsf{CRB}_0\,\frac{12}{S\, \left(N_{\text{tx}}+N_{\text{rx}}-2\right)} \nonumber \\ & \mathsf{CRB}\left(y\right)=\frac{\mathsf{CRB}_0}{N_{\text{rx}}}. \end{align} Note that, when passing from a condition of \textit{orientation-unawareness} to one of \textit{orientation-awareness}, the positioning accuracy improves, thanks to the additional information provided. In fact, the \ac{CRB} on the $x$ and $z$ coordinates now also depends on the number of transmitting antennas.
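The gain from orientation awareness can be quantified by comparing $\mathsf{CRB}(x)$ in \eqref{eq:crbMIMOpeo} and \eqref{eq:CRBmimoknown}; a minimal check (illustrative values, in units of $\mathsf{CRB}_0$):

```python
def crb_x_unaware(c0, n_rx, S):
    # CRB(x) from eq. (crbMIMOpeo): orientation-unaware planar MIMO
    return c0 * 12.0 / (S * (n_rx - 1))

def crb_x_aware(c0, n_tx, n_rx, S):
    # CRB(x) from eq. (CRBmimoknown): orientation-aware planar MIMO
    return c0 * 12.0 / (S * (n_tx + n_rx - 2))
```

Orientation awareness strictly tightens the bound whenever $N_{\text{tx}}>1$, and the two expressions coincide for $N_{\text{tx}}=1$.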
\subsubsection{ {Special Case: Planar Timed Array}} \label{sec:TIMEDplanarArray} Unlike \ac{MIMO}, in the \textit{orientation-unaware} case the equivalent \ac{FIM} for position and orientation is singular, meaning that it is not possible to jointly localize and determine the orientation using beamforming strategies. {Nevertheless, when multiple beams are generated \cite{alkhateeb2014channel_c,garcia2016location_c}, such singularity can be resolved, thus allowing the localization process, but at the price of an increased scanning time if the beams are generated sequentially in time, or of a decreased \ac{SNR} if they are formed simultaneously. The investigation of this trade-off is beyond the scope of this paper.} If the {\ac{Tx}} orientation is a known parameter (\textit{orientation-aware} case) and it is discarded from the vector of parameters to be estimated, the elements of the position \ac{CRB} matrix follow from \eqref{eq:Gresults2} as \begin{align}\label{eq:crbP} &\mathsf{CRB}\left(x \right)=\mathsf{CRB}\left(z \right)=\mathsf{CRB}_0\,\frac{12}{S}\frac{1}{N_{\text{tx}}\,(N_{\text{rx}}-1)} \nonumber \\ & \mathsf{CRB}\left(y \right)=\frac{\mathsf{CRB}_0}{N_{\text{tx}}\,N_{\text{rx}}}. \end{align} From \eqref{eq:crbP}, note that the \ac{CRB} of the estimation error in the $y$-coordinate is inversely proportional to $N_{\text{tx}}$ and $N_{\text{rx}}$: in fact, the $N_{\text{tx}}$ term accounts for the \ac{SNR} enhancement due to the beamforming process, while the $N_{\text{rx}}$ term accounts for the number of independent measurements available at the {\ac{Rx}} (receiver diversity). Note that when $N_{\text{rx}}=1$, localization along the $x$ and $z$ axes is not possible (only ranging in the $y$ direction), as for \ac{MIMO}. Refer to Appendix~\ref{appC} for more details on the derivation of \eqref{eq:crbP}.
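A numeric comparison of \eqref{eq:crbP} with the \ac{MIMO} orientation-aware bound \eqref{eq:CRBmimoknown} makes the beamforming \ac{SNR} gain on the $y$-coordinate explicit, as well as the impossibility of localizing along $x$ and $z$ when $N_{\text{rx}}=1$ (Python sketch, illustrative values in units of $\mathsf{CRB}_0$):

```python
def crb_timed_aware(c0, n_tx, n_rx, S):
    # eq. (crbP): planar timed array, orientation-aware case
    crb_x = c0 * 12.0 / (S * n_tx * (n_rx - 1))   # CRB(x) = CRB(z)
    crb_y = c0 / (n_tx * n_rx)                    # CRB(y)
    return crb_x, crb_y

def crb_mimo_aware_y(c0, n_rx):
    # eq. (CRBmimoknown): for MIMO, CRB(y) does not scale with N_tx
    return c0 / n_rx
```

The timed-array $\mathsf{CRB}(y)$ is a factor $N_{\text{tx}}$ below the \ac{MIMO} one, reflecting the beamforming gain; for $N_{\text{rx}}=1$ the $x$/$z$ bound diverges (a division by zero in the sketch), i.e., only ranging is possible.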
{ \section{Multipath effect on localization accuracy} \label{sec:mp_loc_ac} Having verified in Sec.~\ref{sec:MP_AF} that the \ac{CRB} is a meaningful metric in different propagation conditions, we now investigate the impact of \acp{MPC} on the localization performance for the considered scenario. In \cite{shen2010fundamental}, it is demonstrated that only the information related to the \textit{first-contiguous cluster}, i.e., the set of \acp{MPC} overlapping the first path, is relevant from a localization perspective in the asymptotic \ac{SNR} regime. Here we show that, in the asymptotic massive-antenna regime, all the \acp{MPC} can be made negligible, including those belonging to the first-contiguous cluster. \\ The \ac{FIM} in the presence of multipath can be written as follows \begin{equation}\label{eqFIMe2} \mathbf{J}_{\bm{\psi}}=\left[\begin{array}{cc} \mathbf{J}_{\mathbf{q}\mathbf{q}} & \mathbf{J}_{\mathbf{q}\bm{\kappa}} \\ \mathbf{J}_{\bm{\kappa}\mathbf{q}}& {\mathbf{J}}_{\bm{\kappa}\bm{\kappa}} \end{array}\right] \end{equation} where ${\mathbf{J}}_{\bm{\kappa}\bm{\kappa}}$ also contains the \textit{a-priori} information on the \acp{MPC} statistics reported in Appendix A. Consequently, the \ac{CRB} for the multipath scenario can be formulated as \begin{equation}\label{eq:CRB_MP} \mathsf{CRB}\left(\mathbf{q} \right)=\left({\mathbf{J}}_{\mathbf{q}\mathbf{q}}-\mathbf{J}_{\mathbf{q}\bm{\kappa}}\, {\mathbf{J}}_{\bm{\kappa}\bm{\kappa}}^{-1} \, \mathbf{J}_{\bm{\kappa}\mathbf{q}} \right)^{-1} \end{equation} where all multipath information is gathered in $\mathbf{J}_{\mathbf{q}\bm{\kappa}}\, {\mathbf{J}}_{\bm{\kappa}\bm{\kappa}}^{-1} \, \mathbf{J}_{\bm{\kappa}\mathbf{q}}$.
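Because the subtracted term $\mathbf{J}_{\mathbf{q}\bm{\kappa}}\,{\mathbf{J}}_{\bm{\kappa}\bm{\kappa}}^{-1}\,\mathbf{J}_{\bm{\kappa}\mathbf{q}}$ is positive semi-definite, nuisance multipath parameters can only loosen the bound. A toy Schur-complement computation (two position parameters, one nuisance parameter, arbitrary made-up numbers) illustrates \eqref{eq:CRB_MP}:

```python
def inv2(m):
    # Inverse of a 2x2 matrix [[a, b], [c, d]]
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Toy FIM blocks (arbitrary values, positive definite by construction)
J_qq = [[4.0, 1.0], [1.0, 3.0]]   # position information, free space
J_qk = [1.0, 0.5]                 # coupling between position and nuisance term
J_kk = 2.0                        # nuisance information (incl. a-priori part)

# Equivalent FIM for q: Schur complement J_qq - J_qk J_kk^{-1} J_kq
schur = [[J_qq[i][j] - J_qk[i] * J_qk[j] / J_kk for j in range(2)]
         for i in range(2)]
crb_mp = inv2(schur)   # CRB with multipath nuisance parameters, eq. (CRB_MP)
crb_fs = inv2(J_qq)    # CRB without the nuisance coupling
```

The diagonal of `crb_mp` dominates that of `crb_fs` entry-wise, consistently with the information-reduction interpretation given above.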
\\ Considering the average over different geometric configurations (e.g., the average over different \ac{Rx} orientations) and for large values of $N_{\text{rx}}$, it is possible to show that the number of configurations in which the multipath impacts the localization performance is negligible compared to the number of configurations in which it does not influence the accuracy, regardless of the chosen array architecture. \\ Considering \eqref{eq:CRB_MP}, by the weak law of large numbers (i.e., for $N_{\text{rx}} \rightarrow \infty$), it holds that \begin{equation}\label{eq:lim_prob} \frac{1}{N_{\text{rx}} N_{\text{tx}}} \mathbf{J}_{\mathbf{q} \bm{\kappa}} \xrightarrow[]{\,\,\, P \,\,\, } \frac{1}{N_{\text{rx}} N_{\text{tx}}} \, \mathbb{E} \left[\mathbf{J}_{\mathbf{q} \bm{\kappa}}\right]\, \end{equation} and we aim to demonstrate that \begin{equation}\label{eq:lim_prob2} \frac{1}{N_{\text{rx}} N_{\text{tx}}} \, \mathbb{E} \left[\mathbf{J}_{\mathbf{q} \bm{\kappa}}\right]=0 \,. \end{equation} In the presence of a large number of antenna elements and considering random \ac{Rx} orientations, the inter-antenna phase terms can be modeled as \acp{RV} uniformly distributed in $\left[ 0,2\pi\right)$.
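The fact that uniformly distributed phases average to zero, i.e., $\mathbb{E}[e^{-j\phi}]=0$ for $\phi \sim \mathcal{U}[0,2\pi)$, is what drives the expectations to vanish; it is easily verified by Monte Carlo (Python sketch, arbitrary sample size and seed):

```python
import cmath
import math
import random

rng = random.Random(0)   # fixed seed for reproducibility
N = 10_000               # stand-in for many inter-antenna phase terms

# Average of complex exponentials with phases uniform in [0, 2*pi)
mean = sum(cmath.exp(-1j * rng.uniform(0.0, 2.0 * math.pi))
           for _ in range(N)) / N
# |mean| shrinks as ~1/sqrt(N), consistent with E[e^{-j phi}] = 0
```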
Under this assumption, we have \begin{align}\label{eq:Jqkappa_exp} &\mathbb{E}\left[ J_{q\,a_1 } \right]= J_{q\,a_1}=0 \nonumber \\ &\mathbb{E}\left[ J_{ q \, \alpha_k^\Re} \right] = -4\, \pi\, a_1 \,\nu {\sum_{mij}} \Im \left\{\tilde{b}_{ij}^\text{c}\,\mathbb{E}\left[ \, \xi_{ij}^{(k,1)} \chi_{ij}^{(k,1)}(1)\right] \right\} \nonumber \\ &\qquad\qquad \times \nabla_{q}\left( \tau_{jm1}\right) =0 \nonumber \\ &\mathbb{E}\left[ J_{ q \, \alpha_k^\Im} \right]= 4\, \pi\, a_1 \,\nu {\sum_{mij}} \Re \left\{\tilde{b}_{ij}^\text{c}\,\mathbb{E}\left[ \, \xi_{ij}^{(k,1)} \chi_{ij}^{(k,1)}(1)\right] \right\} \nonumber\\ &\qquad\qquad \times \nabla_{q}\left( \tau_{im1}\right) =0 \end{align} where \begin{align}\label{eq:exp_ext} &\mathbb{E}\left[\xi_{ij}^{(k,1)} \chi_{ij}^{(k,1)}(1)\right] \propto \mathbb{E}[ e^{-j\, 2\, \pi\, (f+\fc)\,(\Delta \tau_m^\text{r}(\bm{\theta}_1, \bm{\theta}_k))}]\, =0 \end{align} with $\Delta \tau_m^\text{r}(\bm{\theta}_1, \bm{\theta}_k)=\tau_m^\text{r}(\bm{\theta}_1, \bm{\vartheta}^\text{r})-\tau_{m}^\text{r}(\bm{\theta}_k, \bm{\vartheta}^\text{r})$. Following similar considerations, it is straightforward to prove that the expectation of the $\mathbf{J}_{\bm{\kappa} \mathbf{q}}$ elements is zero. \\ The result in \eqref{eq:lim_prob} leads to the important conclusion that letting the antenna array be massive, i.e., with large $N_{\text{rx}}$, makes the set of geometric configurations significantly impacted by \acp{MPC} negligible, and the performance converges to that of the free-space case. As a consequence, for massive antenna arrays the \ac{CRB} converges to the \ac{CRB} averaged over the {\ac{Rx}} orientations. } \section{Numerical Results} \label{sec:numerical} In this section, numerical results are reported considering different array schemes, multipath conditions, and system non-idealities.
Four array structures are analyzed: timed arrays equipped with \acp{TDL} and \acp{PS}, phased and {random weighting} arrays using only \acp{PS}, and finally the \ac{MIMO} array, in which orthogonal waveforms are transmitted and neither \acp{PS} nor \acp{TDL} are present. As far as the spatial deployment of the antennas is concerned, planar arrays are considered, as they represent the most conventional structure to be integrated into \acp{AP} and mobiles. {Differently from Secs.~\ref{sec:MIMOplanarArray} and~\ref{sec:TIMEDplanarArray}, here we consider results averaged over the {\ac{Rx}} orientations unless otherwise indicated; it will thus be possible to appreciate the impact of the array rotation angle on the localization performance}. {In the following figures, we indicate with Q the presence of quantization errors and with S the presence of a residual {time synchronization} error.} { Moreover, we designate with: \begin{itemize} \item \textit{Fixed orientation}: the array configuration with the {\ac{Tx}} and the {\ac{Rx}} parallel to each other (i.e., $\bm{\vartheta}^\tx=\bm{\vartheta}^\text{r}=\left[0,\, 0 \right]^{\scriptscriptstyle T}$), as described in Sec.~\ref{sec:idealscenario}; \item \textit{Averaged orientation}: the geometric configuration in which, for each Monte Carlo iteration, a different 3D {\ac{Rx}} array orientation is generated, and the \ac{CRB} is computed as the average over all the partial \ac{CRB} results computed at each cycle. \end{itemize} Finally, we recall that: \begin{itemize} \item \textit{Orientation-aware} indicates the case in which the {\ac{Tx}} orientation is known at the {\ac{Rx}} side and, thus, is not included in the vector of parameters to be estimated; \item \textit{Orientation-unaware}: the case in which the {\ac{Tx}} orientation is unknown at the {\ac{Rx}} side and has to be estimated together with the position. In the next figures, we will denote the \textit{orientation-unaware} case with O.
\end{itemize} } \subsection{System Configuration} \label{sec:ch3_sysconfig} We consider a scenario with a single \ac{AP} equipped with a massive array, whose centroid is placed at $\mathbf{p}^\rx=[0,0,0]^{\scriptscriptstyle \text{T}}$, and a transmitting array antenna whose centroid is located at $\mathbf{p}^\tx=[0,5,0]^{\scriptscriptstyle \text{T}}$ ($d = 5\,$m). As in the mathematical model, the {\ac{Rx}} has perfect knowledge of the {\ac{Tx}} steering direction, and the results are obtained for $\fc \!\!=\!\!60\,$GHz and $W\!\!=\!\!1\,$GHz (the signal duration is $\tp=1.6\,$ns) in free-space and multipath conditions. \ac{RRC} transmitted pulses centered at frequency $\fc=60\,$GHz with a roll-off factor {of $0.6$} are adopted, compliant with the \ac{FCC} mask at $60$ GHz \cite{FCC60r}. A receiver noise figure of $N_\text{F}=4\,$dB and a fixed transmitted power of $P_\text{t}=10\,$mW {are considered}, if not otherwise indicated. The performance is evaluated in terms of \ac{PEB} and \ac{OEB} averaged over $N_\text{cycle}=500$ Monte Carlo iterations. For each cycle, a different {3D} {\ac{Rx}} array orientation{, i.e., $\bm{\vartheta}^\text{r}=\left[{\vartheta}^\text{r},\,{\varphi}^\text{r} \right]^{\scriptscriptstyle \text{T}}$,} and multipath scenario are generated. Specifically, the receiving (transmitting) antennas are spaced $d_\text{ant}=\lambda_L /2$ apart, where $\lambda_{L}={c}/{f_L}$ and $f_L=\fc-W/2$. When present, the \acp{PS} quantization errors are $\delta_i^\tx \sim \mathcal{U} \left(-\pi/4, \pi/4\right)$ while the \acp{TDL} errors are $\Delta \tau_i^{\tx}\sim \mathcal{U} \left(0, d_\text{ant}/c \right)$. The standard deviation of the {time synchronization} error is set to $\sigma_{\epsilon}=1\,\text{ns}$. When operating at \ac{mm-wave} frequencies, the path arrival time distributions can be described by a Poisson process and the inter-arrival times by an exponential probability density function \cite{gustafson2014mm}.
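The antenna spacing and pulse duration quoted above follow directly from the stated carrier frequency, bandwidth, and roll-off factor. A quick back-of-the-envelope computation (assuming the \ac{RRC} pulse duration is $(1+\text{roll-off})/W$, which reproduces the quoted $1.6\,$ns):

```python
c = 3e8            # speed of light [m/s]
fc = 60e9          # carrier frequency [Hz]
W = 1e9            # bandwidth [Hz]
roll_off = 0.6     # RRC roll-off factor

f_L = fc - W / 2           # lowest in-band frequency
lam_L = c / f_L            # corresponding wavelength
d_ant = lam_L / 2          # antenna spacing lambda_L / 2 (about 2.52 mm)
tp = (1 + roll_off) / W    # pulse duration (about 1.6 ns)
```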
The path arrival rate is set to $4\,$[1/ns], while the path azimuth and elevation \ac{AOA} are modeled as uniformly distributed in $(0, 2\pi]$ and $(0, \pi]$, respectively. {Note that these values are also in line with those found in \cite{guerra2016delay_c,guidi2017eucap_c}, where a \ac{mm-wave} measurement campaign using massive arrays for radar-based mapping purposes has been described.} { Before analyzing the \ac{MIMO} and beamforming localization performance, it is necessary to ensure that the comparison based on the \ac{CRB} can be considered fair in terms of \ac{SNR} working regimes when operating with non-massive arrays. To this purpose, a threshold in terms of \ac{SNR} is derived in order to understand whether ambiguities are significant and, hence, whether the \ac{CRB} can still be used as a performance metric for comparison. In this perspective, we still consider the \ac{AF} as a tool to investigate the performance of the \ac{MLE} as a function of the probability of ambiguity, i.e., of secondary lobes higher than the main lobe. In fact, the \ac{AF} main lobe determines the \ac{MSE} behaviour in the high-\ac{SNR} region, which is described by the \ac{CRB}, while the \ac{AF} sidelobes might generate ambiguities in the case of large noise in the low-\ac{SNR} region, which are not taken into account by the \ac{CRB}. The details of the aforementioned analysis are reported in Appendix D.\\ In our numerical results, for each tested configuration we verified that the \ac{SNR} level is above a threshold calculated to guarantee that the probability of ambiguity is less than $10^{-2}$. } \subsection{Results} \label{sec:ch3_results} The results of this section have been obtained as a function of $N_{\text{tx}}$ and $N_{\text{rx}}$; the array structure (i.e., timed, phased, random, \ac{MIMO}); the presence and absence of array beamforming and of a residual {time synchronization} error; and the multipath overlapping effect.
\subsubsection{Free space scenario} \label{sec:ch3_awgnresults} \paragraph{Results as a function of $N_{\text{tx}}$} \label{sec:Ntxresults} \begin{figure}[t!] \psfrag{PEB}[c][c][0.8]{$\mathsf{PEB}$ [m]} \psfrag{Ntx}[c][c][0.8]{$N_{\text{tx}}$} \psfrag{data111111111111111111111111}[lc][lc][0.7]{Phased - Q - Averaged Orien.} \psfrag{data2}[lc][lc][0.7]{Phased - Fixed Orien.} \psfrag{data3}[lc][lc][0.7]{Phased - Q - Fixed Orien.} \psfrag{data4}[lc][lc][0.7]{MIMO - Fixed Orien.} \psfrag{data5}[lc][lc][0.7]{MIMO - Averaged Orien. } \psfrag{data6}[lc][lc][0.7]{Timed - Averaged Orien. } \psfrag{data7}[lc][lc][0.7]{Phased - Averaged Orien. } \psfrag{data8}[lc][lc][0.7]{Timed - Fixed Orien.} \psfrag{data9}[lc][lc][0.7]{Timed - Q - Averaged Orien.} \psfrag{data10}[lc][lc][0.7]{Timed - Q - Fixed Orien.} \centerline{\includegraphics[width=0.45\textwidth]{Figures/new/PEB_Ntx_OrienFixedAveraged2.eps}} \caption{$\mathsf{PEB}$ \textit{vs.} $N_{\text{tx}}$, $N_{\text{rx}}=25$ and orientation aware.} \label{fig:peb_awgn_ntx} \end{figure} Figure \ref{fig:peb_awgn_ntx} reports the \ac{PEB} performance as a function of $N_{\text{tx}}$ {and of the {\ac{Rx}} orientation} in free-space. MIMO, timed and phased arrays {(with and without quantization errors)} are compared in the \textit{orientation-aware} case when the number of receiving antennas is kept fixed to $N_{\text{rx}}=25$. 
\begin{figure} \psfrag{data2}[lc][lc][0.7]{Timed} \psfrag{data3}[lc][lc][0.7]{Phased} \psfrag{data11111111}[lc][lc][0.7]{Phased - Q } \psfrag{data4}[lc][lc][0.7]{MIMO} \psfrag{data5}[lc][lc][0.7]{MIMO - O} \psfrag{data6}[lc][lc][0.7]{Random} \psfrag{data7}[lc][lc][0.7]{Timed - Q} \psfrag{Nrx}[lc][lc][0.8]{$N_{\text{rx}}$} \psfrag{PEB}[lc][lc][0.8]{$\mathsf{PEB}$ [m]} \psfrag{OEB}[lc][lc][0.8]{$\mathsf{OEB}$ [deg]} \centerline{\includegraphics[width=0.45\textwidth]{./Figures/new/PEB_Nrx_OrienFixed2.eps}} \caption{$\mathsf{PEB}$ \textit{vs.} $N_{\text{rx}}$, $N_{\text{tx}}=25$ and fixed receiver orientations.} \label{fig:ccompideal1} \end{figure} It can be observed that \ac{MIMO} arrays, relying on different transmitted waveforms, outperform timed and phased arrays {in the averaged-orientation case; whereas, for fixed {\ac{Rx}} orientation, as described in} Sec.~\ref{sec:idealscenario}, arrays {performing beamforming} exhibit better performance. {This is due to the fact that} beamforming strategies (e.g., those adopted in timed and phased arrays) fail to preserve the same accuracy for every geometric configuration (i.e., for every {\ac{Rx}} orientation). Conversely, thanks to the diversity gain characterizing \ac{MIMO} arrays, {\ac{Rx}} orientations have a less significant effect on the positioning accuracy. {As far as beamforming arrays are concerned, it can be observed that, as expected, the timed and phased array results coincide since $W/\fc \ll 1$. In fact, phased arrays are the best candidate for narrowband systems, where there is no need to compensate for delays to perform the beamsteering operation. We refer the reader to our previous work \cite{guerra2015position} to better appreciate the impact of the fractional bandwidth on the timed/phased array performance.} \begin{figure}[t!]
\psfrag{PEB}[lc][lc][0.8]{$\mathsf{PEB}$ [m]} \psfrag{Nrx}[lc][lc][0.8]{$N_{\text{rx}}$} \psfrag{data2}[lc][lc][0.7]{MIMO} \psfrag{data3}[lc][lc][0.7]{Timed} \psfrag{data4}[lc][lc][0.7]{Phased} \psfrag{data5}[lc][lc][0.7]{Timed - Q} \psfrag{data6}[lc][lc][0.7]{Random} \psfrag{data7}[lc][lc][0.7]{MIMO - O} \psfrag{data8}[lc][lc][0.7]{Timed - S} \psfrag{data9}[lc][lc][0.7]{MIMO - S} \psfrag{data10}[lc][lc][0.7]{MIMO - S - O} \psfrag{data1111111111}[lc][lc][0.7]{Phased - Q} \centerline{\includegraphics[width=0.45\textwidth]{Figures/new/PEB_Nrx_OrienAverage2.eps}} \caption{$\mathsf{PEB}$ \textit{vs.} $N_{\text{rx}}$, $N_{\text{tx}}=25$ and averaged receiver orientation.} \label{fig:peb_awgn_nrx} \end{figure} \begin{figure}[t!] \psfrag{OEB}[c][c][0.8]{$\mathsf{OEB}$ [deg.]} \psfrag{Nrx}[c][c][0.8]{$N_{\text{rx}}$} \psfrag{data2}[lc][lc][0.7]{MIMO - O - Averaged Orien.} \psfrag{data3}[lc][lc][0.7]{MIMO - O - Fixed Orien.} \psfrag{data11111111111111111111111}[lc][lc][0.75]{MIMO - S - O - Averaged Orien.} \centerline{\includegraphics[width=0.45\textwidth]{Figures/new/OEB_Nrx_OrienFixedAveraged2.eps}} \caption{ $\mathsf{OEB}$ \textit{vs.} $N_{\text{rx}}$, $N_{\text{tx}}=25$.} \label{fig:oeb_awgn_nrx} \end{figure} Another important outcome from Fig.~\ref{fig:peb_awgn_ntx} is that {array quantization errors, once characterized, slightly affect the localization performance. This implies that we can rely on simpler array structures (i.e., using switches instead of multipliers) without severely affecting the performance.} Finally, with $N_{\text{tx}} \ge 25$, the performance improvement becomes less important if a sufficiently high $N_{\text{rx}} $ is considered. This implies that $N_{\text{tx}}$ can be relaxed to shrink the array dimensions and to enable the integration on mobiles \cite{hong2014study}. Consequently, in the following, the number of transmitting antennas will be fixed to $N_{\text{tx}}=25$. 
\paragraph{Results as a function of $N_{\text{rx}}$} \label{sec:Nrxresults} In Fig.~\ref{fig:ccompideal1}, the \ac{PEB} performance is reported for both the \textit{orientation-aware} and \textit{-unaware} cases as a function of $N_{\text{rx}}$ in free-space propagation conditions, with $N_{\text{tx}}=25$ and fixed {\ac{Tx}} and {\ac{Rx}} orientations $\bm{\vartheta}^\text{t}=\bm{\vartheta}^\text{r}=\left[0,0 \right]^{\scriptscriptstyle \text{T}}$. The results have been obtained using the analytic expressions \eqref{eq:crbMIMOpeo}, \eqref{eq:CRBmimoknown} and \eqref{eq:crbP}, and reveal that arrays performing beamforming outperform \ac{MIMO} for the particular steering and geometric configuration chosen, as already observed in Fig.~\ref{fig:peb_awgn_ntx}. We ascribe this effect to an increased \ac{SNR} in the considered direction. Nevertheless, with {arrays operating {single-beam} beamforming}, orientation estimation is not always possible, and consequently the \ac{FIM} turns out to be singular. Matrix singularity or ill-conditioning are, here, synonymous with the impossibility of estimating the position/orientation given the collected measurements. Figure \ref{fig:peb_awgn_nrx} shows the average \ac{PEB} performance when the {\ac{Rx}} orientation randomly changes at each Monte Carlo iteration. For this analysis, we also consider {random weighting}, quantization errors, as well as the {time synchronization} mismatch between the {\ac{Tx}} and the {\ac{Rx}}. The positioning performance is shown for the \textit{orientation-aware} case if not otherwise indicated. \begin{figure}[t!]
\psfrag{x}[c][c][0.75]{$x$ [m]} \psfrag{y}[c][c][0.75]{$y$ [m]} \centerline{\includegraphics[width=0.4\textwidth]{Figures/Free_Space/Grid/phased_grid_0507.eps}}\centerline{\includegraphics[width=0.4\textwidth]{Figures/Free_Space/Grid/mimo_grid_0507.eps}} \caption{Phased (top) and MIMO (bottom) array $\mathsf{PEB}$, $N_{\text{rx}}=100$, $N_{\text{tx}}=25$ and averaged receiver orientations.} \label{fig:peb_awgn_grid_phasedmimo} \end{figure} As in Figs.~\ref{fig:peb_awgn_ntx}-\ref{fig:ccompideal1}, the performance of timed and phased arrays coincides due to the narrow fractional bandwidth {(i.e., $W/\fc\approx 0.016$)}. {Differently from Fig.~\ref{fig:ccompideal1}, in Fig.~\ref{fig:peb_awgn_nrx} \ac{MIMO} achieves a higher positioning accuracy than arrays employing beamforming strategies because the results are averaged over different {\ac{Rx}} orientations. In fact, with \ac{MIMO}, a reduction in the received \ac{SNR} is experienced, but the number of independent measurements is maximized (i.e., $N_{\text{tx}}N_{\text{rx}}$).} As far as {random weighting} arrays are concerned, they share the structural simplicity of phased arrays, but they neither perform beamforming nor achieve the diversity gain of \ac{MIMO}; thus, their positioning accuracy is degraded with respect to the other structures. Nevertheless, if the localization accuracy required by the application of interest is not too stringent, they could be an interesting option to guarantee both a sub-centimeter positioning accuracy {(e.g., for $N_{\text{rx}}=50$ and $N_{\text{tx}}=25$, $\mathsf{PEB} \approx 7\,$mm)} and an easy implementation in future mobiles and \acp{AP} operating at \ac{mm-wave} frequencies. Note that when the {\ac{Tx}} orientation is one of the parameters to be estimated (\textit{orientation-unaware} case), only \ac{MIMO} results in a non-singular \ac{FIM}.
Obviously, in this case, given the reduced information available at the {\ac{Rx}} side, the positioning accuracy worsens with respect to the \textit{orientation-aware} case. In all cases, the residual {time synchronization} error degrades the localization performance. \begin{figure}[t!] \psfrag{PEB}[lc][lc][0.8]{$\mathsf{PEB}$ [m]} \psfrag{Nrx}[lc][lc][0.8]{$N_{\text{rx}}$} \psfrag{OEB}[lc][lc][0.8]{$\mathsf{OEB}$ [deg]} \psfrag{data1}[lc][lc][0.7]{\quad $L=1$} \psfrag{data2}[lc][lc][0.7]{\quad $L=2$} \psfrag{data666666666}[lc][lc][0.7]{\quad $L=5$} \centerline{\includegraphics[width=0.4\textwidth]{./Figures/Multipath/Orient_Average/PEB_average_1311.eps}} \psfrag{data1}[lc][lc][0.7]{\quad $L=1$} \psfrag{data2}[lc][lc][0.7]{\quad $L=2$} \psfrag{data33333333}[lc][lc][0.7]{\quad $L=5$} \centerline{\includegraphics[width=0.4\textwidth]{./Figures/Multipath/Orient_Average/OEB_average_1311.eps}} \caption{\ac{PEB} and \ac{OEB} \textit{vs.} $N_{\text{rx}}$ in multipath propagation scenario, $N_{\text{tx}}=25$, $W=1\,$GHz, averaged receiver orientation. Diamond marked lines refer to phased array, circle marked lines to \ac{MIMO}, and square marked lines to \ac{MIMO} orientation unaware.} \label{fig:peb_oeb_MP} \end{figure} {Figure \ref{fig:oeb_awgn_nrx} reports the \ac{OEB} as a function of $N_{\text{rx}}$. In this case, only the performance of \ac{MIMO} is shown because of the singularity problem arising in timed, phased and {random weighting} arrays. An interesting result is that, in this case, the {time synchronization} error does not impact the orientation accuracy.} \noindent \paragraph{Grid results} \label{sec:Gridresults} Figure~\ref{fig:peb_awgn_grid_phasedmimo} reports the \ac{PEB} results for the \textit{orientation-aware} case when the mobile moves on a grid of points spaced $0.5\,$m apart, considering phased (Fig.~\ref{fig:peb_awgn_grid_phasedmimo}-top) and \ac{MIMO} (Fig.~\ref{fig:peb_awgn_grid_phasedmimo}-bottom) arrays, respectively.
We considered a 3D indoor scenario of $10\times 10 \times 3\,$m$^3$ where the mobile and the \ac{AP} centroids are at the same height. The {\ac{Rx}}, equipped with $N_{\text{rx}}=100$ antennas, is placed at $\mathbf{p}^\rx=\left[ 0,0,0\right]^{\scriptscriptstyle \text{T}}$ with an orientation changing at each Monte Carlo iteration. On the other side, the mobile array is equipped with $N_{\text{tx}}=25$ antennas, and its orientation and steering direction are fixed to $\bm{\vartheta}^\tx=\left[0,0 \right]^{\scriptscriptstyle \text{T}}$ and to the broadside direction, respectively. Grid results confirm that the localization performance of \ac{MIMO} arrays does not depend on the {\ac{Rx}} orientation or on the mobile position in space, but only on the distance between the {\ac{Tx}} and the {\ac{Rx}}. Indeed, when comparing Fig.~\ref{fig:peb_awgn_grid_phasedmimo}-top and Fig.~\ref{fig:peb_awgn_grid_phasedmimo}-bottom, it can be seen that, if the mobile steering is fixed, the localization accuracy is higher in a privileged direction in space, corresponding to the best geometric configuration. \subsubsection{Multipath scenario} \label{sec:ch3_MPresults} \paragraph{Results as a function of $N_{\text{rx}}$} \label{sec:Nrxresults_MP} {Figure~\ref{fig:peb_oeb_MP} investigates the multipath effect by analysing the \ac{PEB} and \ac{OEB} averaged over different {\ac{Rx}} orientations as a function of the number of \acp{MPC} for phased and \ac{MIMO} arrays.} We consider the statistical multipath parameters described in Sec.~\ref{sec:ch3_sysconfig}. As foreseen by the asymptotic analysis in Sec.~\ref{sec:mp_loc_ac}, when increasing $N_{\text{rx}}$, the effect of the \acp{MPC} becomes negligible and the performance tends to coincide with that obtained in free space. Moreover, it is worth remarking that phased arrays are more sensitive to multipath than \ac{MIMO} arrays, especially when the number of receiving antennas is small.
{In fact, for phased arrays, at least $N_{\text{tx}}=25$ and $N_{\text{rx}}=25$ antennas are necessary to make the \acp{MPC} fully resolvable. {In any case, it is interesting to note that, to make the impact of \acp{MPC} negligible, the antenna arrays are not required to be significantly massive.}} \vspace{-0.3cm} \section{Conclusion} \label{sec:conclusions} In this paper, we have considered a new scenario in which single-anchor localization exploiting \ac{mm-wave} massive arrays has been put forth for next-generation 5G applications. The theoretical localization performance has been evaluated by deriving the position and orientation \ac{CRB} for different array configurations. Phase quantization, the residual {time synchronization} mismatch, and multipath have been considered as nuisance parameters. The comparison between \ac{MIMO} and beamforming has been carried out by analyzing the impact of the number of antenna elements on the performance. {From {analytical and} simulation results, the main conclusions and achievements of this paper can be summarized as follows: { \begin{itemize} \item {We show through an asymptotic analysis (i.e., $N_{\text{rx}} \rightarrow \infty$) that the considered \ac{CRB} is a tight bound regardless of the propagation condition and the array configuration considered.} \item Beamforming with steering capability is desirable to maximize the \ac{SNR} towards a specific direction, as happens for phased and timed arrays; but it reduces the diversity gain (typical of \ac{MIMO}) exploitable for retrieving positioning information, especially in \textit{orientation-unaware} situations. \item Quantization errors only slightly impact the localization performance of both timed and phased arrays. Consequently, the array design requirements can be relaxed in favor of lower complexity and cost. \item For both \ac{MIMO} and beamforming, a {time synchronization} mismatch between the {\ac{Tx}} and the {\ac{Rx}} significantly degrades the positioning performance.
From this point of view, it is important to minimize the clock mismatch between the arrays. Conversely, in the \textit{orientation-unaware} case, the {time synchronization} error does not affect the orientation estimation performance; \item The adoption of multiple antennas makes the positioning insensitive to multipath for most geometric configurations. This is true even when the number of antennas is not extremely high (i.e., $N_{\text{tx}}, N_{\text{rx}} >20$); {we also demonstrated this point through an asymptotic analysis.} \item Finally, {random weighting} turns out to be a good low-complexity strategy to achieve centimeter-level accuracy. \end{itemize} }} \appendices \section{}\label{appA} Considering the sub-set of random parameters {$ \bm{\psi}_\text{r} =\left[\bm{\kappa}_2^{\scriptscriptstyle \text{T}},\, \ldots,\, \bm{\kappa}_L^{\scriptscriptstyle \text{T}} ,\, \epsilon^\text{s} \right]^{\scriptscriptstyle \text{T}} =\left[\bm{\alpha}^{\Re\,\scriptscriptstyle \text{T}},\, \bm{\alpha}^{\Im\,\scriptscriptstyle \text{T}} ,\, \epsilon^\text{s} \right]^{\scriptscriptstyle \text{T}}$}, and treating them as independent \acp{RV}, we can write \begin{align} &\ln \left(f(\bm{\psi}_\text{r})\right)=\ln\left(f(\bm{\alpha}^{\Re \,\scriptscriptstyle \text{T}})\right)+\ln\left(f(\bm{\alpha}^{\Im \,\scriptscriptstyle \text{T}})\right)+\ln\left(f(\epsilon^\text{s})\right). \end{align} Therefore, the elements of the \textit{a-priori} \ac{FIM} are \begin{align} &J_{\epsilon^\text{s} \epsilon^\text{s}}^{\text{p}}=\frac{1}{\sigma_\epsilon^2}, \quad J_{\alpha^{\Re}_k \alpha^{\Re}_l}^{\text{p}}=J_{\alpha^{\Im}_k \alpha^{\Im}_l}^{\text{p}}=\begin{cases} & \frac{1}{\sigma_l^2}\,\,\,\text{if $l = k$}\\ &0\quad\,\text{if $l \neq k$} \end{cases}. \end{align} \vspace{-0.3cm} \section{}\label{appB} \newcounter{MYtempeqncnt3} \begin{figure*}[t!]
\normalsize \setcounter{MYtempeqncnt3}{\value{equation}} \begin{align}\label{eq:Gmatrix_part} &{\nabla_{\mathbf{q}\mathbf{q}}\!\left( \tau_{im1}, \tau_{jm1} \right)}\!\!=\!\!\frac{d_\text{ant}^2}{(c\,y)^2} \left[\begin{array}{lllll} \! \left(m_x-i_x\right)\left(m_x-j_x\right)\!&\frac{y}{d_\text{ant}}(m_x-i_x)\!&\!(m_x-i_x)(j_z-m_z)\!& y\,(i_x-m_x)\,j_z\hdots \!&\!y\,(i_x-m_x)\,j_x \\ \hdots \!&\!\frac{y^2}{d_\text{ant}^2}\!&\!\frac{y}{d_\text{ant}} (j_z-m_z)\!& \! -\frac{y^2}{d_\text{ant}} j_z\!&\! -\frac{y^2}{d_\text{ant}} j_x \\ \hdots &\hdots &\left(i_z-m_z\right)\left(j_z-m_z\right)& \! y\,(m_z-i_z)\,j_z&\!y\,(m_z-i_z)\,j_x\\ \hdots &\hdots &\hdots & \! {y^2}i_z j_z\!&\!{y^2}i_z j_x \\ \hdots &\hdots &\hdots &\hdots &{y^2}i_x j_x\\ \end{array} \right] \end{align} \hrulefill \begin{align} \label{eq:Gresults1} &\mathsf{CRB}\left(\mathbf{q} \right)\!\!=\!\! \frac{c^2}{8 \pi^2\,N_{\text{tx}}\, \mathsf{SNR}_1 \left(\beta^2_i+\fc^2\right)} \frac{1}{S} \left[\begin{array}{lllll} \! \frac{12}{(N_{\text{rx}} -1)}\!&0\!&\!0\!& 0\!&\! \frac{12}{ y (1-N_{\text{rx}})}\\ 0\!&\!\frac{S}{N_{\text{rx}}}\!&\!0\!& \! 0\!&\!0 \\ 0&0&\frac{12}{(N_{\text{rx}}-1)}& \! \frac{12}{ y\, (1-N_{\text{rx}})}&\!0\\ 0&0&\frac{12}{ y\, (1-N_{\text{rx}})}& \! \frac{12 \,(N_{\text{rx}} +N_{\text{tx}}-2)}{y^2 \,(N_{\text{tx}} - 1)\,(N_{\text{rx}} - 1)}\!&\!0 \\ \frac{12}{ y (1-N_{\text{rx}})}&0&0&0&\frac{12 \,(N_{\text{rx}} +N_{\text{tx}}-2)}{y^2 \,(N_{\text{tx}} - 1)\,(N_{\text{rx}} - 1)}\\ \end{array} \right]\\ \label{eq:Gresults2} &\mathsf{CRB}\left(\mathbf{p} {^\text{t}} \right)\!\!=\!\! \frac{c^2}{8 \pi^2\, N_{\text{tx}}\,\mathsf{SNR}_1 \left(\beta^2+\fc^2\right)}\frac{1}{S} \, \text{diag}\left( \frac{12}{N_{\text{tx}}(N_{\text{rx}}-1)},\frac{S}{N_{\text{tx}}\,N_{\text{rx}}}, \frac{12}{N_{\text{tx}} (N_{\text{rx}}-1)} \right) \end{align} \hrulefill \vspace*{4pt} \end{figure*} In this Appendix we derive the elements of the data \ac{FIM} reported in \eqref{eq:jtheta}. 
To accomplish this task, we introduce the following quantities \begin{align} &\chi_{ij}^{(l,k)}(p)\!\!=\!\!\int_W\!\!\!\tilde{b}_{ij}(f)\left(f+\fc \right)^p\! e^{-j\, 2\, \pi\, f\, \Delta \tau_{ij}^{(l,k)}}P_i(f)\, P_j^*(f)df \nonumber \\ &{R^p_{ij}(\Delta \tau)}=\int_W\,\tilde{b}_{ij}(f)\, e^{-j\, 2\, \pi\, f\, \Delta \tau }\,P_i(f)\, P_j^*(f)\, df \nonumber \\ & R^{\ddot{p}}_{ij}(\Delta \tau)=\int_W\,\tilde{b}_{ij}(f)\,f^2\, e^{-j\, 2\, \pi\, f\, \Delta \tau }\,P_i(f)\, P_j^*(f)\, df \end{align} where $\Delta \tau_{ij}^{(l,k)}=\tau_{iml}-\tau_{jmk}$, $\tilde{b}_{ij}(f)=\tilde{b}_i(f)\, \tilde{b}_j^*(f)$, and $\tilde{b}_{ij}^\text{c}=\tilde{b}_i^\text{c}\, \left(\tilde{b}_j^\text{c}\right)^*$. {The elements of $\mathbf{J}_{\mathbf{q}\mathbf{q}}^{\text{d}}$ can be expressed as in \eqref{eq:Jqq_main}.} The elements of $\mathbf{J}_{\bm{\kappa}\bm{\kappa}}^{\text{d}}$ are \begin{align*} &J_{ a_1 \, a_1}\!=\!2\,\nu \sum_{mij}\Re \left\{ \tilde{b}_{ij}^\text{c} \, \xi_{ij}^{(1,1)}\, R^p_{ij} \left( \Delta \tau_{ij}^{(1,1)} \right) \right\} \end{align*} \begin{align*} &J_{\alpha_k^\Re\, a_1}\!=\!J_{a_1\,\alpha_k^\Re}^{\scriptscriptstyle \text{H}}= 2\,\nu \sum_{mij}\,\Re\left\{ \tilde{b}_{ij}^\text{c} \, \xi_{ij}^{(1,k)}\, R^p_{ij} \left( \Delta \tau_{ij}^{(1,k)} \right) \right\} \end{align*} \begin{align*} &J_{\alpha_k^\Im \, a_1}\!=\!J_{a_1 \,\alpha_k^\Im}^{\scriptscriptstyle \text{H}}=2\,\nu \sum_{mij}\,\Im \left\{ \tilde{b}_{ij}^\text{c} \, \xi_{ij}^{(1,k)}\, R^p_{ij}\left( \Delta \tau_{ij}^{(1,k)}\right) \right\} \end{align*} \begin{align*} &J_{ \alpha_k^\Re \, \alpha_l^\Re}\!=\!J_{ \alpha_k^\Im \, \alpha_l^\Im}\!=\!2\,\nu \sum_{mij}\Re \left\{ \,\tilde{b}_{ij}^\text{c} \, \xi_{ij}^{(l,k)} \, R^p_{ij}\left( \Delta \tau_{ij}^{(l,k)} \right) \right\} \end{align*} \begin{align*} &J_{ \alpha_k^\Im \, \alpha_l^\Re}\!=\!J_{ \alpha_l^\Re\,\alpha_k^\Im }^{\scriptscriptstyle \text{H}}=2\,\nu \sum_{mij}\Im \left\{\,\tilde{b}_{ij}^\text{c} \, \xi_{ij}^{(l,k)} \, R^p_{ij}\left( 
\Delta \tau_{ij}^{(l,k)} \right) \right\} \end{align*} where $\xi_{ij}^{(1,k)}=e^{-j\, 2\, \pi\,\fc \left( \tau_{im1}+ \epsilon^\text{s}+\tau_{m}^\text{r}({\bm{\theta}_k})-\tau_{j}^\text{t}({\bm{\theta}_k}) \right)}$ and $\xi_{ij}^{(l,k)}=e^{-j\, 2\, \pi\, \fc \left(-\tau_m^\text{r}({\bm{\theta}_l})+\tau_m^\text{r}({\bm{\theta}_k})+\tau_i^\text{t}({\bm{\theta}_l})-\tau_j^\text{t}({\bm{\theta}_k}) \right)}$. \noindent The elements of $\mathbf{J}_{\mathbf{q}\bm{\kappa}}^{\text{d}}$ are \begin{align}\label{eq:Jqkappa} &J_{q\,a_1 }= 4\, \pi\, a_1 \,\nu \sum_{mij}\Im\left\{\,\tilde{b}_{ij}^\text{c} \,\xi_{ij}^{(1,1)}\, \chi_{ij}^{(1,1)}(1) \right\}\nabla_{q}\left( \tau_{im1}\right)=0 \nonumber \\ &J_{q \, \alpha_k^\Re}=- 4\, \pi\, a_1 \,\nu \sum_{mij}\Im \left\{\,\tilde{b}_{ij}^\text{c} \, \xi_{ij}^{(k,1)} \chi_{ij}^{(k,1)}(1) \right\}\nabla_{q}\left( \tau_{jm1}\right) \nonumber \\ &J_{q \, \alpha_k^\Im}= 4\, \pi\, a_1 \,\nu \sum_{mij}\Re \left\{\, \tilde{b}_{ij}^\text{c} \,\xi_{ij}^{(k,1)} \chi_{ij}^{(k,1)}(1) \right\}\nabla_{q}\left( \tau_{im1}\right) \end{align} where $\xi_{ij}^{(k,1)}=e^{-j\, 2\, \pi\, \fc\, \left( -\tau_{jm1}-\epsilon^\text{s}-\tau_{m}^\text{r}({\bm{\theta}_k})+\tau_{i}^\text{t}({\bm{\theta}_k}) \right)}$. Note that $\mathbf{J}_{\bm{\kappa}\mathbf{q}}^{\text{d}}=\mathbf{J}_{\mathbf{q}\bm{\kappa}}^{\text{d}\, \scriptscriptstyle \text{H}}$. \noindent Now, if we consider the presence of a residual {time synchronization} error, the \ac{FIM} $\mathbf{J}_{\epsilon^\text{s}\, \epsilon^\text{s}}^\text{d}$ is \begin{align*} J_{\epsilon^\text{s}\,\epsilon^\text{s} }&= 8\, \pi^2 \, \nu \,\Re\left\{\sum_{mij}\, \tilde{b}_{ij}^\text{c} \left[a_1^2 \, \xi_{ij}^{(1,1)}\, \chi_{ij}^{(1,1)}\left(2 \right) +\right. \right. \nonumber \\ &\left.\left. + \sum_{k=2}^{L} \sigma_k^2\, R^{\ddot{p}}_{ij}\left(\Delta \tau_{ij}^{(k,k)} \right)\, e^{-j\, 2\, \pi\, \fc \, \left(\tau_i^\text{t} ({\bm{\theta}_k})-\tau_j^\text{t} ({\bm{\theta}_k}) \right)} \right] \right\}.
\end{align*} The elements of $\mathbf{J}_{\bm{\kappa} \epsilon^\text{s}}^\text{d}$ are \begin{align*} &J_{ a_1 \, \epsilon^\text{s}}= 4 \, \pi\, a_1\, \nu\, \sum_{mij}\,\,\Im\left\{\tilde{b}_{ij}^\text{c} \, \xi_{ij}^{(1,1)}\,\chi_{ij}^{(1,1)}\left(1 \right) \right\} \nonumber \\ &J_{ \alpha_k^\Re \,\epsilon^\text{s} }= 4 \, \pi\, a_1\, \nu\, \sum_{mij}\,\Im\left\{ \tilde{b}_{ij}^\text{c}\, \xi_{ij}^{(1,k)}\,\chi_{ij}^{(1,k)} \left(1 \right) \right\} \nonumber \\ &J_{\alpha_k^\Im \,\epsilon^\text{s} }= 4\, \pi\, a_1 \, \nu\,\sum_{mij}\,\Re\left\{\tilde{b}_{ij}^\text{c}\, \xi_{ij}^{(1,k)}\, \chi_{ij}^{(1,k)}\left(1 \right) \right\}. \end{align*} As before, the elements of $\mathbf{J}_{\epsilon^\text{s}\, \bm{\kappa}}^\text{d}$ can be found as $\mathbf{J}_{\epsilon^\text{s}\, \bm{\kappa}}^\text{d}=\mathbf{J}_{\bm{\kappa} \epsilon^\text{s}}^{\text{d}\, \scriptscriptstyle \text{H}}$. \noindent Finally, the elements of $\mathbf{J}_{\mathbf{q}\epsilon^\text{s}}^{\text{d}}$ are \begin{align*} &J_{q \epsilon^\text{s}}= 8 \, \pi^2 \, \nu\, a_1^2\, \sum_{mij}\Re\left\{\,\tilde{b}_{ij}^\text{c}\, \xi_{ij}^{(1,1)}\, \chi_{ij}^{(1,1)}(2) \right\} \nabla_{q} \left( \tau_{jm1}\right) \end{align*} and $\mathbf{J}_{\epsilon^\text{s}\mathbf{q}}^{\text{d}}=\mathbf{J}_{\mathbf{q}\epsilon^\text{s}}^{\text{d}\, \scriptscriptstyle \text{H}}$. \vspace{-0.3cm} \section{}\label{appC} In this Appendix, we specialize the expression of the symmetric matrix $\mathbf{G}$ reported in \eqref{eq:crbawgn}-\eqref{eq:crbawgn_elem}. {To this end, we make explicit the geometric relationship between the \ac{TOA} of each TX-RX antenna pair and the considered localization (position or orientation) parameter, i.e., $\nabla_{q_a \, q_b}\!\left(\tau_{iml},\tau_{jml}\right)=\nabla_{q_a}\!\left(\tau_{iml}\right)\!\nabla_{q_b}\!\left(\tau_{jml}\right)$}.
For the particular antenna configuration described in Sec.~\ref{sec:planar}, in which the array antennas are spaced by $d_\text{ant}$, and considering {$\bm{\vartheta}^\tx=\left[0, 0 \right]^{\scriptscriptstyle \text{T}}$}, we can {compute a simplified version of \eqref{eq:der_toa_q0}-\eqref{eq:nablaorie}. Specifically, it is possible to obtain: \begin{align}\label{eq:der_toa_p_planar} &\nabla_p\left( \tau_{im1} \right)=\frac{1}{c} \left[c\,\nabla_p\left( \tau_{1} \right)+d_\text{ant} \left((i_x-m_x) \nabla_p(\phi_1) \right. \right. \nonumber \\ &\left. \left. \qquad\qquad + (m_z-i_z) \nabla_p(\theta_1) \right) \right] \\ \label{eq:der_toa_o_planar} &\nabla_{\vartheta^\text{t}}\left( \tau_{im1} \right)=-\frac{d_\text{ant}}{c}\, i_z \, , \,\,\,\nabla_{\varphi^\text{t}}\left( \tau_{im1} \right)=-\frac{d_\text{ant}}{c}\, i_x \end{align} } with $m_x=m_z=-\frac{\sqrt{N_{\text{rx}}}-1 }{2},-\frac{\sqrt{N_{\text{rx}}}-1}{2}+1, \ldots, \frac{\sqrt{N_{\text{rx}}}-1}{2}$ and $i_x=i_z=j_x=j_z=-\frac{\sqrt{N_{\text{tx}}}-1 }{2},-\frac{\sqrt{N_{\text{tx}}}-1}{2}+1, \ldots, \frac{\sqrt{N_{\text{tx}}}-1}{2}$. {From (\ref{eq:der_toa_p_planar})-(\ref{eq:der_toa_o_planar}), it is straightforward to derive (\ref{eq:Gmatrix_part}). Then,} by considering the summations {present in $\mathbf{G}$}, it is possible to obtain the \ac{CRB} matrices for \ac{MIMO} and timed arrays, respectively, as in \eqref{eq:Gresults1}-\eqref{eq:Gresults2}, where $S=A^\text{r}/y^2$. {\section{}\label{appD} In this Appendix, we consider the \ac{AF} as defined in \eqref{eq:AFnorm}. \\ \indent The \ac{AF} for the position coordinates exhibits a main peak at the true \ac{Tx} position and secondary sidelobe peaks at ``wrong'' positions. An ambiguity problem arises when one of these sidelobes exceeds or becomes comparable to the main lobe due to noise.
Consequently, to determine whether ambiguities are negligible in the non-massive array case, we have derived a threshold on the noise standard deviation in order to keep the ambiguity probability fixed at a desired low value. \\ \indent By comparing the threshold obtained with the value used in the numerical results, we can demonstrate that we operate in a high-\ac{SNR} regime where the \ac{CRB} is tight even if a non-massive array is adopted. \\ \begin{table}[t!] \caption{{MIMO \textit{vs.} beamforming comparison}} \label{tab:tab1} \begin{center} { \begin{tabular}{|l|l|l|l|l|} \hline MIMO/Timed & $N_{\text{rx}}$ & $\gamma$ [dB] & $\sigma_\text{thr}$ [mV] & $\sigma_\text{sim}$ [mV] \\ \hline MIMO & 4 & $-36.9$ & $0.062$ & $0.022$ \\ MIMO & 36 & $-32.1$ & $0.187$ & $0.022$ \\ MIMO & 100 & $-29.9$ & $0.313$ & $0.022$ \\ Phased & 4 & $-33.5$ & $0.136$ & $0.022$ \\ Phased & 36 & $-28.7$ & $0.406$ & $0.022$ \\ Phased & 100 & $-26.5$ & $0.677$ & $0.022$ \\ \hline \end{tabular} } \end{center} \end{table} To this end, we define the ambiguity probability as \cite{van2004detection} \begin{align} &{\text{P}_\text{A}= \frac{1}{2}\, \text{erfc}\left( \frac{\gamma}{\sqrt{4\,\sigma^2}} \right)} \end{align} where $\sigma$ is the noise standard deviation and $\gamma$ is the gap between the main lobe of the \ac{AF} and the highest secondary sidelobe. Then, given a gap $\gamma$ and a target ambiguity probability $\text{P}_\text{A}^*$, it is possible to compute the noise threshold as \begin{align} &{\sigma_\text{thr}=\frac{\gamma}{{2}}\, \frac{1}{\text{erfc}^{-1}\left({2\, \text{P}_\text{A}^*} \right)} }\,. \end{align} In Table~\ref{tab:tab1}, we report the obtained simulation results. We have considered the \ac{Tx} moving in a grid of points spaced $0.2\,$m apart in a cube of dimension $8\times8\times8\,$m$^3$. The target ambiguity probability has been fixed at $\text{P}_\text{A}^*=10^{-2}$.
The gap $\gamma$ has been set to the minimum side-lobe level considering the three spatial coordinates (that is, to the worst-case scenario). $\sigma_\text{sim}$ represents the noise standard deviation used in the numerical results of the paper.\\ \indent As one can notice, in all the tested configurations the noise standard deviation used in the numerical results is always much lower than the threshold $\sigma_\text{thr}$ above which the ambiguity effect is no longer negligible. \\ \indent The proposed method is a useful tool to test whether a specific scenario can be considered to be in the asymptotic regime and, hence, whether the \ac{CRB} is a meaningful metric. }
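As a minimal sketch of the threshold computation above, the following Python snippet evaluates $\text{P}_\text{A}$ and $\sigma_\text{thr}$; the helper \texttt{erfc\_inv} (built from the standard normal quantile, since the standard library has no inverse complementary error function) and the example value of $\gamma$ are illustrative and not taken from the paper.

```python
from math import erfc, sqrt
from statistics import NormalDist

def ambiguity_probability(gamma, sigma):
    # P_A = (1/2) * erfc( gamma / sqrt(4 * sigma^2) )
    return 0.5 * erfc(gamma / sqrt(4.0 * sigma**2))

def erfc_inv(y):
    # erfc(x) = 2 * Q(x * sqrt(2)), hence x = Phi^{-1}(1 - y/2) / sqrt(2)
    return NormalDist().inv_cdf(1.0 - y / 2.0) / sqrt(2.0)

def noise_threshold(gamma, p_target):
    # sigma_thr = (gamma / 2) / erfc^{-1}(2 * P_A^*)
    return (gamma / 2.0) / erfc_inv(2.0 * p_target)

# Illustrative gap (linear scale, not a value from Table 1)
gamma = 0.01
sigma_thr = noise_threshold(gamma, 1e-2)  # keeps P_A at the 1e-2 target
```

Plugging $\sigma_\text{thr}$ back into the ambiguity probability recovers the target $\text{P}_\text{A}^*$, which serves as a consistency check on the two formulas.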
\section{Introduction} A significant challenge in next-generation wireless networks is that of supporting a very large number of unattended, machine-type devices. In addition to dramatically increasing network densities, these devices pose a unique challenge because they utilize network resources in a fundamentally different way compared to traditional human-operated devices. Indeed, these devices are envisioned to sporadically transmit very short messages as opposed to sustaining long connections and transmitting large amounts of data. Existing network access protocols become very inefficient under such traffic; thus, novel physical and data-link layer processes must be developed to accommodate these users. One popular solution to this challenge of machine-type communication (MTC) is the paradigm of unsourced random access (URA)~\cite{polyanskiy2017perspective,vem2017user}. URA is able to support an arbitrarily large number of connected users as long as only a small subset of those users is active at any given point in time. This capability is enabled by the fact that there is no fine multi-user coordination under URA; rather, each device employs the exact same codebook when communicating with the central base station. At the receiver, a list of transmitted messages is recovered without regard to the identities of the senders. This strategy allows the network to avoid the overhead associated with user scheduling, which is especially important in MTC wherein this cost cannot be amortized over long payloads. 
Over the past several years, significant research has been dedicated to finding low-complexity schemes for the URA \textit{uplink} channel when the base station is equipped with a single antenna (see, e.g., \cite{calderbank2018chirrup, amalladinne2019coded, fengler2019sparcs, pradhan2019polar, amalladinne2020unsourced, ebert2021codeddemixing, han2021sparse, ahmadi2021random, nassaji2022unsourced, andreev2020polar}) and when the base station is equipped with an array of antennas (see \cite{shyianov2020massive, maxime2021tensor, fengler2021mimo, fengler2022pilotbased, gkagkos2022fasura}). However, all of these schemes operate under the same premise: each active user has no way of knowing whether its message was correctly recovered. If the base station fails to decode a message, that message is lost forever. While such behavior is tolerable for many MTC applications, this situation is unacceptable for applications that demand a higher level of reliability. This points to the need for an adequate feedback mechanism tailored to URA systems. Independent of URA, hybrid automatic repeat request (HARQ) and its variants have found considerable success as pragmatic means for providing active users with enhanced connectivity. This class of access schemes relies on timely feedback from the base station regarding whether messages have been correctly recovered. To implement such a scheme under URA, one possibility is for the base station to send an acknowledgement (ACK) to every user whose message was successfully decoded. Therein, when an active user receives an ACK, it is finished with its transmission. Conversely, when an active user does not receive an ACK, it transmits additional parity symbols to the base station, which then attempts to decode the message by leveraging the original transmission in addition to the extra parity bits. Optionally, this feedback-retransmission process may repeat itself several times. 
This type of strategy is very effective and has been widely adopted as HARQ and its variants play a crucial role in many modern wireless networking standards. We view the ability to provide timely feedback as a key step in implementing HARQ in URA settings. The application of HARQ to URA has only recently been considered. In \cite{popovski2022}, Kal{\o}r et al.\ seek to acknowledge $K$ users by broadcasting a common feedback signal to all $N$ users in the network. Though their scheme assumes that every user possesses a unique identifier, it may be adapted to URA by using a portion of the user's message as the identifier. They point out that, naively, one may simply transmit the concatenation of the $K$ identifiers; however, this requires the transmission of $K \log_2 (N)$ bits, which seems unacceptably large for typical values of $N$. The required number of bits may be reduced by enumerating all $\binom{N}{K}$ subsets and transmitting the index of the subset corresponding to the current set of decoded users; this requires only $\left\lceil \log_2 \binom{N}{K} \right\rceil$ bits. A principal result of \cite{popovski2022} is that, if a small number of false positives is allowed, the number of bits can be further reduced to $K \lceil \log_2 \left( 1/\epsilon_{\mathrm{fp}} \right) \rceil$, where $\epsilon_{\mathrm{fp}}$ is the rate of false positives introduced by the scheme, by solving a set of linear equations over a Galois field. In \cite{yang2016csack}, the challenge of providing feedback is considered from a compressed sensing (CS) perspective. Therein, a $K$-sparse vector of length $N$ is constructed where the non-zero entries indicate the $K$ out of $N$ users to be acknowledged. This vector is compressed and transmitted across the channel, and then the active users employ regularized approximate message passing to reconstruct the sent vector and determine whether their message was correctly decoded.
To apply this scheme to URA, the users' messages would have to be used as proxies for their identifiers. Though elegant, this scheme requires many more channel uses than the scheme proposed in \cite{popovski2022}. As a side remark, we note that the problem of providing minimal-length feedback to active users was also considered by Kang and Yu in \cite{yu2021minimumfeedback}; yet, this work pertains to collision-limited scheduling of active users. Though one could think of providing feedback in terms of scheduling the active users into two bins based on whether their messages were successfully decoded, the results of \cite{yu2021minimumfeedback} cannot be directly applied to the URA problem because they rely on random binning. A key distinction stems from the fact that in scheduling, it does not matter which users are assigned to which bins, as long as the total number of users in a single bin does not exceed a threshold. This is obviously not the case for feedback. Existing results rely on broadcasting a common message to all devices in the network. Under the paradigm of sending a common message as feedback, the proposed solutions are indeed highly effective. However, each user is ultimately only concerned about whether its own message was decoded successfully. Furthermore, the base station is often equipped with multiple antennas and, in such cases, the URA feedback scheme should exploit the additional degrees of freedom afforded by the antenna array.
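As a rough numerical illustration of the common-message feedback lengths discussed above, the following Python snippet compares the three bit counts; the values of $N$, $K$, and $\epsilon_{\mathrm{fp}}$ are illustrative choices, not taken from the cited works.

```python
from math import ceil, comb, log2

def naive_bits(N, K):
    # Concatenate the K identifiers: K * log2(N) bits
    return K * ceil(log2(N))

def subset_index_bits(N, K):
    # Transmit the index of the decoded subset among C(N, K) subsets
    return ceil(log2(comb(N, K)))

def hashed_bits(K, eps_fp):
    # Allow a false-positive rate eps_fp: K * ceil(log2(1/eps_fp)) bits
    return K * ceil(log2(1.0 / eps_fp))

N, K, eps_fp = 2**20, 50, 1e-3  # illustrative network size, decoded users, FP rate
print(naive_bits(N, K))          # 1000
print(hashed_bits(K, eps_fp))    # 500
print(subset_index_bits(N, K))   # between the two, independent of eps_fp
```

Even for these modest parameters, the subset-index encoding undercuts the naive concatenation, and tolerating a small false-positive rate reduces the count further, consistent with the discussion above.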
A combination of the channel gains and a hash of the recovered data can therefore be leveraged to construct an effective feedback mechanism, without explicitly using the identities of the senders. Specifically, the users' channels are first exploited to send feedback \textit{in the direction of the origin of each} message. Then, because multiple devices may have nearby channel realizations in the signal space, the users' hashes are exploited to further separate sources. This ensures that pertinent feedback can be delivered directly to every transmitter, albeit in an anonymous fashion. The scheme presented in this paper leverages hashes and channel gains within the beamforming step and may be adapted to any number of antennas at the base station by adjusting the required length of the hash. In Section~\ref{sec:system_performance}, it is shown that the required number of channel uses grows linearly with the number of users to acknowledge, thus matching the order-wise performance of the scheme presented in \cite{popovski2022} without sending a common message to all users in the network. \begin{figure} \centering \include{Figures/systemModel} \caption{HashBeam is a novel downlink beamforming scheme for the URA broadcast channel that leverages decoded users' channels in conjunction with hashes of their uplink messages to beamform acknowledgements to decoded users, despite not knowing those users' identities. In this figure, the user equipments (UEs) with solid lines are the ones to be acknowledged and $\vec{{a}}_i$ represents the hash of the $i$th user's data. } \label{fig:generic_system_model} \end{figure} \subsection{Notation} Matrices are denoted by capital letters such as $\ensuremath{\mathbf{A}}$, and column vectors are denoted by lower-case bold letters such as $\ensuremath{\vec{x}}$. For vectors $\ensuremath{\mathbf{u}}$ and $\vec{{v}}$, the standard inner product is denoted as $\langle \ensuremath{\mathbf{u}}, \vec{{v}} \rangle$.
The Hermitian of $\ensuremath{\mathbf{A}}$ is denoted by $\ensuremath{\mathbf{A}}^{\mathrm{H}}$ and the Khatri-Rao product of $\ensuremath{\mathbf{A}}$ and $\ensuremath{\mathbf{B}}$ is denoted as $\ensuremath{\mathbf{A}} \ast \ensuremath{\mathbf{B}}$. For vectors $\ensuremath{\mathbf{u}}$ and $\vec{{v}}$, the Kronecker product of $\ensuremath{\mathbf{u}}$ and $\vec{{v}}$ is denoted as $\ensuremath{\mathbf{u}} \otimes \vec{{v}}$. For any positive integer $N$, $[N] = \{1, 2, \ldots, N\} \subset \mathbb{Z}_{+}$. Finally, $\mathrm{Pr}\left(\mathcal{X}\right)$ refers to the probability of event $\mathcal{X}$ and $\mathbb{E}[X]$ refers to the expected value of random variable $X$. \section{System Model} \label{sec:system_model} Consider a URA scenario in which there are $N$ users, out of which $K_a$, $K_a \ll N$, users are active at a given time instant. Each user is equipped with a single antenna and the central base station is equipped with $M$ antennas. For simplicity of notation, each active user is assigned a unique but arbitrary label $i \in [K_a]$. We emphasize that these labels are for notational purposes only and do not reveal any information about the true identities of these users. Within a frame, all active users simultaneously transmit their short uplink messages to the central base station according to some uplink URA scheme (see, e.g., \cite{han2021sparse, gkagkos2022fasura}). In this article, the specifics of the URA scheme are unimportant and can therefore be abstracted away. The focus is on the ability to provide feedback using downlink transmissions. Still, we assume that a portion of the users are decoded correctly and we let $\mathcal{D} \subseteq [K_a]$ denote this set of successfully decoded users, where $|\mathcal{D}| = K$. Without loss of generality, we assume that $\mathcal{D} = [K]$ throughout this paper. 
Furthermore, we assume that in the process of message decoding or as a result of this process, channel estimates for these same users are available at the base station. Moreover, to keep the discussion simple, we assume that the channel estimates obtained by the base station are perfect. Note that once a message is decoded correctly, a hash can be computed based on this recovered message. The channels between devices and the base station are independent quasi-static Rayleigh fading channels with coherence times at least as long as the duration of the communication cycle. This implies that devices remain stationary for the duration of the transmission-feedback process and that channel reciprocity holds. While the assumption of device stationarity is not always valid, it does hold for many classes of machine-type devices, especially when the end-to-end system latency is very small. We denote the channel between user~$i$ and the base station by $\ensuremath{\mathbf{h}}_i \in \mathbb{C}^M$, where $\left( \ensuremath{\mathbf{h}}_i \right)_j \sim \mathcal{CN} \left( 0, 1 \right)$. \subsection{Proposed Scheme} Though the base station does not know the identities of the decoded users, it does have channel estimates and it can compute hashes associated with every successfully recovered message. Under the aforementioned assumptions of stationarity and channel coherence, the base station can use the channels of the active users as proxies for their identities and provide message-specific feedback by beamforming ACKs \textit{in the direction of the origin} of each recovered message. This problem is closely related to that of downlink beamforming, which has been well-studied over the past decades. However, the task of providing feedback through downlink beamforming is exacerbated by the fact that the users' channels are often not statistically well-separated, especially when $M$ is less than $K$. 
To handle this challenge, the hashes of the users' data are used in conjunction with their channels to create downlink beamforming vectors. Thus, the base station is able to provide individualized feedback to successfully decoded users without explicitly knowing their true identities. Specifically, let the base station and all active devices have access to a hash function $f: \{0, 1\}^B \rightarrow \mathbb{C}^{L}$, which computes a length $L$ random hash based on $B$ bits of a user's message $\ensuremath{\mathbf{m}}$. Let the $i$th user's hash be denoted by $\vec{{a}}_i$ and let $\ensuremath{\mathbf{A}}$ denote the concatenation of all $K$ hashes: \begin{equation} \ensuremath{\mathbf{A}} = \begin{bmatrix} \vrule & \vrule & & \vrule \\ \vec{{a}}_1 & \vec{{a}}_2 & \ldots & \vec{{a}}_K \\ \vrule & \vrule & & \vrule \end{bmatrix} \in \mathbb{C}^{L \times K} . \end{equation} Throughout this paper, we assume that the entries of $\ensuremath{\mathbf{A}}$ are of the form $\ensuremath{\mathbf{A}}_{i, k} = \alpha\exp\{j\phi_{i,k}\}$ and $\phi_{i,k} \sim \operatorname{Uniform}[0, 2\pi)$. Furthermore, let $\ensuremath{\mathbf{H}}$ denote the concatenation of all $K$ users' channels, \begin{equation} \ensuremath{\mathbf{H}} = \begin{bmatrix} \vrule & \vrule & & \vrule \\ \ensuremath{\mathbf{h}}_1 & \ensuremath{\mathbf{h}}_2 & \ldots & \ensuremath{\mathbf{h}}_K \\ \vrule & \vrule & & \vrule \end{bmatrix} \in \mathbb{C}^{M \times K} . \end{equation} Throughout this article, we assume that the channel estimates are exact. Yet, the techniques we present below can be extended to the more general setting by utilizing estimate $\hat{\ensuremath{\mathbf{H}}}$ instead of $\ensuremath{\mathbf{H}}$. 
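The construction described in this subsection can be previewed with a short NumPy sketch (illustrative dimensions; noiseless case). It builds the hash matrix $\ensuremath{\mathbf{A}}$ and channel matrix $\ensuremath{\mathbf{H}}$, forms their column-wise Kronecker product, applies the pseudo-inverse-type beamformer derived below with the regularizer omitted, and verifies that each decoded user's correlation statistic equals one:

```python
import numpy as np

rng = np.random.default_rng(0)
M, L, K = 4, 3, 8   # antennas, hash length, decoded users (K <= L*M)
alpha = 1.0          # hash magnitude

# Random-phase hashes A (L x K) and Rayleigh channels H (M x K)
A = alpha * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(L, K)))
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2.0)

# Khatri-Rao (column-wise Kronecker) product: s_k = kron(a_k, h_k)
S = np.stack([np.kron(A[:, k], H[:, k]) for k in range(K)], axis=1)  # (L*M, K)

# LMMSE-style beamformer in the noiseless limit (alpha^2 sigma^2 I term dropped)
W = S @ np.linalg.inv(S.conj().T @ S)  # (L*M, K)
v = W @ np.ones(K)                     # superposition of all K ACK beams

# Each decoded user correlates its received signal with its own hash;
# without noise this collapses to theta_i = <s_i, v> = 1 for every i in D.
theta = S.conj().T @ v
```

The final assertion-style check, \texttt{theta} being the all-ones vector, mirrors the noiseless analysis of the test statistic later in the paper.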
In our proposed scheme, the base station begins by taking the Khatri-Rao (column-wise Kronecker) product of $\ensuremath{\mathbf{A}}$ and $\ensuremath{\mathbf{H}}$ to obtain \begin{equation} \ensuremath{\mathbf{S}} = \ensuremath{\mathbf{A}} \ast \ensuremath{\mathbf{H}} \in \mathbb{C}^{LM \times K}. \end{equation} Eventually, the base station employs a beamforming vector $\mathbf{w}_i$ to acknowledge recovery of message~$i \in \mathcal{D}$. Thus, the signal transmitted from the base station is of the form \begin{equation} \label{equation:DownlinkInput} \mathbf{v} = \ensuremath{\mathbf{W}} \ensuremath{\vec{x}}_d \end{equation} where $\ensuremath{\vec{x}}_d^T = [ 1 \ 1 \ \cdots \ 1 ] \in \mathbb{R}^K$ and $\ensuremath{\mathbf{W}}$ is an $LM \times K$ matrix whose $i$th column is given by $\mathbf{w}_i$. The matrix $\mathbf{W}$ is designed using uplink-downlink duality as explained below. The dual uplink channel is modelled as \begin{equation} \ensuremath{\vec{y}} = \ensuremath{\mathbf{S}} \ensuremath{\vec{x}}_u + \ensuremath{\vec{z}}_u \in \mathbb{C}^{LM}, \end{equation} where $\ensuremath{\vec{x}}_u$ represents the aggregate uplink input and $\ensuremath{\vec{z}}_u$ is a vector of circularly-symmetric additive white Gaussian noise (AWGN) with zero mean and covariance $\alpha^2\sigma^2\ensuremath{\mathbf{I}}$. For this uplink dual, a good estimate for $\ensuremath{\vec{x}}_u$ given $\ensuremath{\vec{y}}$ in the mean-square error sense is given by the linear minimum mean square error (LMMSE) estimate $\hat{\ensuremath{\vec{x}}}_{u} = \ensuremath{\mathbf{W}}_{\mathrm{lmmse}}^\mathrm{H}\ensuremath{\vec{y}}$, where \begin{equation} \label{eq:lmmsebeamforming} \ensuremath{\mathbf{W}}_{\mathrm{lmmse}}^\mathrm{H} = \left(\alpha^2\sigma^2\ensuremath{\mathbf{I}} + \ensuremath{\mathbf{S}}^\mathrm{H}\ensuremath{\mathbf{S}}\right)^{-1}\ensuremath{\mathbf{S}}^\mathrm{H} \in \mathbb{C}^{K \times LM}.
\end{equation} Under the proper power allocation scheme, column $i$ of $\ensuremath{\mathbf{W}}_{\mathrm{lmmse}}$ is the optimal downlink beamforming vector for user $i$ in a traditional communication scenario. Inspired by this result, we set $\ensuremath{\mathbf{W}} = \ensuremath{\mathbf{W}}_{\mathrm{lmmse}}$. Thus, \eqref{equation:DownlinkInput} becomes \begin{equation} \label{eq:finalbeamformingvector} \begin{split} \vec{{v}} &= \ensuremath{\mathbf{W}}_{\mathrm{lmmse}}\ensuremath{\vec{x}}_d \in \mathbb{C}^{LM} \\ \end{split} \end{equation} and this signal is transmitted from the base station to all of the users using $M$ antennas over $L$ channel uses. Fig.~\ref{fig:beamforming_diagram} graphically depicts this process. \begin{figure*}[t] \centering \include{Figures/beamforming} \caption{This figure illustrates the downlink beamforming process employed by HashBeam. The uplink URA scheme provides channel estimates and hashes for the uplink message associated with each decoded user. The Khatri-Rao product of the hashes and the channels is computed, and downlink beamforming vectors are computed via uplink-downlink duality. The sum of all beamforming vectors is then transmitted by the base station. } \label{fig:beamforming_diagram} \end{figure*} The signal received by user $i$ is given by \begin{equation} \ensuremath{\mathbf{r}}_i = \begin{bmatrix} \langle \ensuremath{\mathbf{h}}_i, \vec{{v}}_1 \rangle + \ensuremath{\vec{z}}_{i, 1}, & \ldots, & \langle \ensuremath{\mathbf{h}}_i, \vec{{v}}_{L} \rangle + \ensuremath{\vec{z}}_{i, L} \end{bmatrix} \in \mathbb{C}^{L}, \end{equation} where $\vec{{v}}_j \triangleq \vec{{v}}\left((j-1)M+1:jM\right)$ and $\ensuremath{\vec{z}}_{i, j} \sim \mathcal{CN}\left(0, \sigma^2\right)$ is the $j$th element of the $i$th user's noise vector. User $i$ will then correlate its received signal with its hash $\vec{{a}}_i$ to create statistic \begin{equation} \theta_i = \langle \vec{{a}}_i, \ensuremath{\mathbf{r}}_i \rangle \in \mathbb{C}. 
\end{equation} The receiver then performs a binary hypothesis test to determine whether an ACK was received. When $\theta_i \in \mathcal{R}_0$, the user fails to reject the null hypothesis $\mathcal{H}_0$ and assumes that no ACK has been received. Conversely, if $\theta_i \in \mathcal{R}_1$, the user rejects the null hypothesis in favor of $\mathcal{H}_1$ and decodes an ACK. We note that, in this scheme, each active user can only determine whether its own message was successfully decoded; this is what we refer to as individualized feedback. \subsection{Proposed Hypothesis Test} To develop a good hypothesis test, we must first understand the nature of the test statistic $\theta_i$. We begin by expressing $\theta_i$ as follows: \begin{equation} \begin{split} \theta_i &= \langle \vec{{a}}_i, \ensuremath{\mathbf{r}}_i \rangle \\ &= \sum_{j = 1}^L \left( \vec{{a}}_{i, j}^* \langle \ensuremath{\mathbf{h}}_i, \vec{{v}}_j \rangle + \vec{{a}}_{i,j}^* \ensuremath{\vec{z}}_{i,j} \right) \\ &= \sum_{j = 1}^L \left( \langle \vec{{a}}_{i, j}\ensuremath{\mathbf{h}}_i, \vec{{v}}_j \rangle + \vec{{a}}_{i,j}^* \ensuremath{\vec{z}}_{i,j} \right) \\ &= \langle \ensuremath{\mathbf{s}}_i, \vec{{v}} \rangle + \langle \vec{{a}}_i, \ensuremath{\vec{z}}_i \rangle. \\ \end{split} \label{eqn:thetai} \end{equation} To get some insight into the distribution of $\theta_i$, we first consider the case when $\ensuremath{\mathbf{S}}^{\mathrm{H}} \ensuremath{\mathbf{S}}$ is diagonal with the diagonal entries given by $\| \ensuremath{\mathbf{s}}_j \|_2^2, j \in \mathcal{D}$. Furthermore, we also neglect the $\alpha^2\sigma^2\ensuremath{\mathbf{I}}$ term within the computation of $\ensuremath{\mathbf{W}}_{\mathrm{lmmse}}$. In this case, it can be seen that the first term in \eqref{eqn:thetai} reduces to $\sum_{m \in \mathcal{D}} \frac{\langle \ensuremath{\mathbf{s}}_{i},\ensuremath{\mathbf{s}}_m \rangle} {\langle \ensuremath{\mathbf{s}}_m, \ensuremath{\mathbf{s}}_m \rangle}$.
Hence, under the aforementioned simplifications, \begin{equation} \theta_{i} = \begin{cases} 1 + \langle \vec{{a}}_i, \ensuremath{\vec{z}}_i \rangle, & i \in \mathcal{D} \\ \sum_{m \in \mathcal{D}} \frac{\langle \ensuremath{\mathbf{s}}_{i},\ensuremath{\mathbf{s}}_m \rangle} {\langle \ensuremath{\mathbf{s}}_m, \ensuremath{\mathbf{s}}_m \rangle} + \langle \vec{{a}}_i, \ensuremath{\vec{z}}_i \rangle, & i \in [K_a] \setminus \mathcal{D}. \end{cases} \end{equation} Thus, for the correctly decoded users, i.e., for $i \in \mathcal{D}$, $\theta_i \sim \mathcal{CN}(1, L\alpha^2\sigma^2)$ whereas, for the users who are not correctly decoded, i.e., $i \in [K_a] \setminus \mathcal{D}$, $\theta_i$ is a zero-mean complex random variable. Since the number of decoded messages, their hashes, and the corresponding channel gains are random, it is not necessarily the case that $\langle \ensuremath{\mathbf{s}}_i, \ensuremath{\mathbf{s}}_j \rangle = 0$. Thus, in reality, $\theta_i$ is a random variable whose distribution may not be complex Gaussian. The model intricacies associated with the distribution of $\theta_i$ make finding the optimal decision region quite challenging. In this article, we circumvent this issue by approximating $\theta_i$ as complex Gaussian for both successfully and unsuccessfully decoded users. Let $\theta_i$ have mean $\mu_0$ and variance $\sigma^2_0$ when $i \in [K_a] \setminus \mathcal{D}$ and mean $\mu_1$ and variance $\sigma^2_1$ when $i \in \mathcal{D}$. Since, in general, $\sigma^2_0 \neq \sigma^2_1$, a quadratic discriminant may be computed using a Neyman-Pearson test to separate $\mathcal{R}_0$ and $\mathcal{R}_1$. The distribution parameters $\mu_0$, $\sigma^2_0$, $\mu_1$, and $\sigma^2_1$ can be approximated through sampling. \section{System Performance} \label{sec:system_performance} The predominant performance metrics for this downlink beamforming scheme are the probability of false alarm $P_{\mathrm{FA}}$ and the probability of a miss $P_{\mathrm{MD}}$.
A false alarm occurs when user $i$'s uplink message is not successfully decoded (i.e., $i \in [K_a] \setminus \mathcal{D}$) yet user $i$ decodes an ACK because $\theta_i \in \mathcal{R}_1$. Conversely, a miss occurs when user $i$'s message is successfully decoded (i.e., $i \in \mathcal{D}$) but user $i$ does not declare an ACK because $\theta_i \in \mathcal{R}_0$. Within the context of HARQ, the cost of a miss is relatively low, as it only results in an unnecessary transmission of additional parity symbols. False alarms, on the other hand, are much more costly, as they result in a user's message being lost forever. Herein, we are interested in the regime where both $P_{\mathrm{FA}}$ and $P_{\mathrm{MD}}$ are less than $0.05$. For a fixed number of antennas $M$, an interesting question to ask is how the hash length $L$ must scale as a function of the number of successfully decoded users $K$. Clearly, we would like to minimize $L$, as this quantity is equal to the required number of channel uses for downlink beamforming. We begin by analyzing the noiseless case, i.e., $\sigma^2 = 0$. A straightforward lower bound for the proposed scheme in the noiseless setting is $L \geq \frac{K}{M}$; this bound ensures that the matrix product $\ensuremath{\mathbf{S}}^\mathrm{H}\ensuremath{\mathbf{S}}$ in \eqref{eq:lmmsebeamforming} has full rank (almost surely) and is therefore invertible. Numerical simulations show that the proposed scheme achieves this lower bound for all values of $K$ and $M$ considered; thus $L = \lceil\frac{K}{M}\rceil$ suffices in the noiseless case. We note that this lower bound is inherent to the proposed scheme and not to the general problem of downlink beamforming. We also consider the performance of this scheme in the presence of AWGN.
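The dimension-counting argument behind the bound $L \geq K/M$ can be checked numerically. In the sketch below, a generic random complex matrix stands in for $\ensuremath{\mathbf{S}}$ (the actual hash-and-channel construction is not reproduced); with $L = \lceil K/M \rceil$ the matrix has at least $K$ rows, so the $K\times K$ Gram matrix is full rank almost surely.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 10, 37                  # antennas, successfully decoded users
L = -(-K // M)                 # ceil(K / M) = 4 channel uses
# generic random complex stand-in for the (L*M) x K matrix S
S = (rng.standard_normal((L * M, K))
     + 1j * rng.standard_normal((L * M, K)))
gram = S.conj().T @ S          # S^H S, the matrix inverted by the beamformer
print(np.linalg.matrix_rank(gram) == K)  # full rank, hence invertible
```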
In such circumstances, we define the signal-to-noise ratio ($\operatorname{SNR}$) to be \begin{equation} \label{eq:snr} \begin{split} \operatorname{SNR} &= \frac{\mathbb{E}\left[ \sum_{j=1}^L |\langle \ensuremath{\mathbf{h}}, \vec{{v}}_j \rangle|^2\right]}{\mathbb{E}\left[\|\ensuremath{\vec{z}}\|_2^2\right]} = \frac{\mathbb{E}\left[ \sum_{j=1}^L |\langle \ensuremath{\mathbf{h}}, \vec{{v}}_j \rangle|^2\right]}{L\sigma^2} . \end{split} \end{equation} For fixed $\sigma^2$, one may obtain the desired $\operatorname{SNR}$ by adjusting $\alpha$, which is inversely proportional to $\operatorname{SNR}$. Fig.~\ref{fig:LvsKfixedMSNR} plots the required hash length $L$ as a function of $K$ to achieve $P_{\mathrm{MD}} \leq 0.05$ and $P_{\mathrm{FA}} \leq 0.05$ when $M = 10$ for various SNRs. In addition, Fig.~\ref{fig:LvsSNR} shows the required $L$ as a function of $K$ to achieve the target error rates when $\operatorname{SNR} = 10$~dB for various numbers of antennas $M$. These results highlight that the proposed scheme may be easily adapted to any combination of number of antennas and $\operatorname{SNR}$ by simply adjusting the length of the hash. From Fig.~\ref{fig:LvsKfixedMSNR}, Fig.~\ref{fig:LvsSNR}, and our noiseless results, it is also clear that the required hash length $L$, and by extension, the required number of channel uses, scales as $\mathcal{O}\left(K\right)$; thus our practical scheme scales on the same order as the theoretical results presented in \cite{popovski2022}. We emphasize that this scaling is independent of the total number of users $N$ and the number of active users $K_a$. \begin{figure}[t] \centering \include{Figures/LvsKfixedMSNR} \caption{Required hash length $L$ to obtain a $P_{\mathrm{MD}} \leq 0.05$ and $P_{\mathrm{FA}} \leq 0.05$ as a function of the number of successfully decoded users.
Here, results are presented for the case of $M = 10$ antennas at the base station.} \label{fig:LvsKfixedMSNR} \end{figure} \section{Conclusion} In this article, a novel downlink beamforming scheme named \emph{HashBeam} is presented that allows the base station to provide individualized feedback to successfully decoded users, despite not knowing the identities of those users. To accomplish this task, the base station leverages knowledge of the decoded users' channels as well as hashes of their uplink messages to beamform feedback directly to the decoded users. The proposed scheme may be easily adapted to any number of antennas at the base station and to various $\operatorname{SNR}$s by simply adjusting the length of the hash, which scales as $\mathcal{O}\left(K\right)$. \begin{figure} \centering \include{Figures/LvsKMfixedSNR} \caption{Required hash length $L$ to obtain a $P_{\mathrm{MD}} \leq 0.05$ and $P_{\mathrm{FA}} \leq 0.05$ as a function of the number of successfully decoded users. Here, results are presented for the case when $\operatorname{SNR} = 10$~dB.} \label{fig:LvsSNR} \end{figure} \bibliographystyle{IEEEbib}
\section{Introduction} A large part of the facility location literature deals with \emph{desirable} facilities that people like to have nearby, such as service centers, police departments, fire stations, and warehouses. However, there also exist facilities that are \emph{undesirable} and \emph{obnoxious}, such as nuclear reactors, garbage dumps, chemical plants, military installations, and high-security penal institutions. A standard goal in location theory is to spread out such obnoxious facilities and to avoid their accumulation and concentration in a small region; see for instance Erkut \& Neuman \cite{ErkNeu1989} and Cappanera \cite{Cappanera1999} for comprehensive surveys on this topic. In this paper, we investigate the location of obnoxious facilities in a metric space whose topology is determined by a graph. Formally, let $G=(V,E)$ be an undirected connected graph, where every edge is rectifiable and has unit length. Let $P(G)$ denote the continuum set of points on all the edges in $E$ together with all the vertices in $V$. For two points $p,q\in P(G)$, we denote by $d(p,q)$ the length of a shortest path connecting $p$ and $q$ in the graph. A subset $S\subset P(G)$ is said to be \emph{$\delta$-dispersed} for some positive real number $\delta$, if any two points $p,q\in S$ with $p\ne q$ are at distance $d(p,q)\ge\delta$ from each other. Our goal is to compute for a given graph $G=(V,E)$ and a given positive real number $\delta$ a maximum cardinality subset $S\subset P(G)$ that is $\delta$-dispersed. Such a set $S$ is called an \emph{optimal} $\delta$-dispersed set, and $|S|$ is called the \emph{$\delta$-dispersion number} $\disp{\delta}{G}$ of the graph $G$. \subsection*{Known and related results.} Obnoxious facility location goes back to the seminal articles of Goldman \& Dearing \cite{GolDea1975} from 1975 and Church \& Garfinkel \cite{ChuGar1978} from 1978.
The area actually covers a wide variety of problem variants and models; some models specify a geometric setting, while other models use a graph-theoretic setting. For example, Abravaya \& Segal \cite{AbrSeg2010} consider a purely geometric variant of obnoxious facility location, where a maximum cardinality set of obnoxious facilities has to be placed in a rectangular region, such that their pairwise distance as well as the distance to a fixed set of demand sites is above a given threshold. As another example we mention the graph-theoretic model of Tamir \cite{Tamir1991}, where every edge $e\in E$ of the underlying graph $G=(V,E)$ is rectifiable and has a given edge-dependent length $\ell(e)$. Tamir discusses the complexity and approximability of various optimization problems with various objective functions. One consequence of \cite{Tamir1991} is that if the graph $G$ is a tree, then the value $\disp{\delta}{G}$ can be computed in polynomial time. Segal \cite{Segal2003} locates a single obnoxious facility on a network under various objective functions, such as maximizing the smallest distance from the facility to the clients on the network or maximizing the total sum of the distances between facility and clients. Megiddo \& Tamir \cite{MegTam1983} consider the covering problem that is dual to the $\delta$-dispersion packing problem: Given a graph $G=(V,E)$ with rectifiable unit-length edges, find a minimum cardinality subset $S\subset P(G)$ such that every point in $P(G)$ is at distance at most $\delta$ from one of the facilities in $S$. Among many other results \cite{MegTam1983} shows that this covering problem is NP-hard for $\delta=2$. Finally, we mention the work of Gawrychowski, Krasnopolsky, Mozes \& Weimann \cite{GawKraMozWei2017} who study the problem variant where the points in the dispersed set $S$ must be vertices of the graph $G$. 
They show that for a given tree $G$ and a given integer $k$, one can compute in linear time the largest possible value $\delta$ for which there exists a $\delta$-dispersed set $S$ of size $|S|=k$. \subsection*{Our results.} We provide a complete picture of the complexity of computing the $\delta$-dispersion number for connected graphs $G=(V,E)$ and positive rational numbers $\delta$. \begin{itemize} \item If $\delta=1/b$ for some integer $b$, then the $\delta$-dispersion number of $G$ can be written down without really looking at the structure of the graph: If $G$ is a tree then $\disp{\delta}{G}=b|E|+1$, and if $G$ is not a tree then $\disp{\delta}{G}=b|E|$. \item If $\delta=2/b$ for some integer $b$, then $\disp{\delta}{G}$ can be computed in polynomial time. The algorithm uses the Edmonds-Gallai decomposition of $G$ and reformulates the problem as a submodular optimization problem. \item If $\delta=a/b$ for integers $a$ and $b$ with $a\ge3$ and $\gcd(a,b)=1$, then the computation of $\disp{\delta}{G}$ is an NP-hard problem. \end{itemize} The rest of the paper is organized as follows. Section~\ref{sec:prel} summarizes the basic notations and states several technical observations. Section~\ref{sec:np} presents the NP-hardness results. The reductions are essentially based on routine methods, but need to resolve certain number-theoretic issues. Our technical main contribution is the polynomial time algorithm for the case $\delta=2$ as developed in Section~\ref{sec:delta=2}; this result is heavily based on tools from matching theory. Section~\ref{sec:polynomial} summarizes the polynomially solvable special cases and provides additional structural insights. \section{Notation and technical preliminaries} \label{sec:prel} All graphs in this paper are undirected and connected, and all edges have unit length. 
Throughout the paper we use the word \emph{vertex} in the graph-theoretic sense, and we use the word \emph{point} to denote the elements of the geometric structure $P(G)$. For a graph $G=(V,E)$ and a subset $V'\subseteq V$, we denote by $G[V']$ the subgraph induced by $V'$. For an integer $c\ge1$, the \emph{$c$-subdivision} of $G$ is the graph that results from $G$ by subdividing every edge in $E$ by $c-1$ new vertices into $c$ new edges. For an edge $e=\{u,v\}$ and a real number $\lambda$ with $0\le\lambda\le1$, we denote by $p(u,v,\lambda)$ the point on $e$ that has distance $\lambda$ from vertex $u$. Note that $p(u,v,0)=u$ and $p(u,v,1)=v$, and note that point $p(u,v,\lambda)$ coincides with point $p(v,u,1-\lambda)$; hence we will sometimes assume without loss of generality that $\lambda\le1/2$. \begin{lemma} \label{le:scaling} Let $G$ be a graph, let $c\ge1$ be an integer, and let $G'$ be the $c$-subdivision of $G$. Then for every $\delta>0$, the $\delta$-dispersed sets in $G$ are in one-to-one correspondence with the $(c\cdot\delta)$-dispersed sets in $G'$. In particular, $\disp{\delta}{G}=\disp{(c\cdot\delta)}{G'}$. \end{lemma} \proof Every point $p(u,v,\lambda)$ in $P(G)$ translates into a corresponding point in $P(G')$ that lies on the subdivided edge between $u$ and $v$ and is at distance $c\cdot\lambda$ from vertex $u$. \qed \medskip Lemma~\ref{le:scaling} has many useful consequences, as for instance the following: \begin{lemma} \label{le:monotonicity} Let $\delta>0$ and let $c\ge1$ be an integer. \begin{itemize} \item If the problem of computing the $\delta$-dispersion number is NP-hard, then also the problem of computing the $(c\cdot\delta)$-dispersion number is NP-hard. \item If the problem of computing the $(c\cdot\delta)$-dispersion number is polynomially solvable, then also the problem of computing the $\delta$-dispersion number is polynomially solvable. 
\end{itemize} \end{lemma} \proof By Lemma~\ref{le:scaling} the $c$-subdivision of a graph yields a polynomial time reduction from computing $\delta$-dispersions to computing $(c\cdot\delta)$-dispersions. \qed \medskip For integers $\ell$ and $k$, the rational number $\ell/k$ is called \emph{$k$-simple}. A set $S\subseteq P(G)$ is $k$-simple, if for every point $p(u,v,\lambda)$ in $S$ the number $\lambda$ is $k$-simple. \begin{lemma} \label{le:halfintegral} Let $\delta=a/b$ with integers $a$ and $b$, and let $G=(V,E)$ be a graph. Then there exists an optimal $\delta$-dispersed set $S^*$ that is $2b$-simple. \end{lemma} \proof We first handle the cases with $b=1$, so that $\delta$ is integer. Consider an optimal $\delta$-dispersed set $S$ for graph $G$. Note that for every vertex $u$, at most one point $p(u,v,\lambda)$ with $v\in V$ and $0\le\lambda<1/2$ is in $S$. For every point $p=p(u,v,\lambda)$ with $0\le\lambda\le1/2$ in $S$, we put a corresponding point $p^*$ into set $S^*$: If $0\le\lambda<1/2$ then $p^*=p(u,v,0)$, and if $\lambda=1/2$ then $p^*=p(u,v,1/2)$. As all points in the resulting set $S^*$ are either vertices or midpoints of edges, we get that $S^*$ is $2$-simple. We claim that $S^*$ is still $\delta$-dispersed: Consider two distinct points $p^*$ and $q^*$ in $S^*$. Note that $d(p,p^*)<1/2$ and $d(q,q^*)<1/2$ by construction. \begin{itemize} \item If $p^*$ and $q^*$ both are vertices in $V$, then the distance $d(p^*,q^*)$ is integer. By the triangle inequality $d(p,q)\le d(p,p^*)+d(p^*,q^*)+d(q^*,q)$. As the left hand side in this inequality is at least the integer $\delta$ and as its right hand side is strictly smaller than the integer $d(p^*,q^*)+1$, we conclude $d(p^*,q^*)\ge\delta$. \item If $p^*$ and $q^*$ both are midpoints of edges, then $p=p^*$ and $q=q^*$ yields $d(p^*,q^*)\ge\delta$. \item If $p^*$ is a vertex and $q^*$ is the midpoint of some edge, then $d(p^*,q^*)=D+1/2$ for some integer $D$. 
The triangle inequality together with $p=p^*$ yields $\delta\le d(p,q)=d(p^*,q)\le d(p^*,q^*)+d(q^*,q)<D+1$. This implies $D\ge\delta$, so that $d(p^*,q^*)\ge\delta+1/2$. \end{itemize} Since $S$ and $S^*$ have the same cardinality, we conclude that $S^*$ is an optimal $\delta$-dispersed set that is $2$-simple, exactly as desired. In the cases where $\delta=a/b$ for some integer $b\ge2$, we consider the $b$-subdivision $G'$ of $G$. By the above discussion, $G'$ possesses an optimal $a$-dispersed set $S'$ that is $2$-simple. Then Lemma~\ref{le:scaling} translates $S'$ into an optimal $\delta$-dispersed set $S$ for $G$ that is $2b$-simple. \qed \section{NP-completeness results} \label{sec:np} In this section we present our NP-hardness proofs for computing the $\delta$-dispersion number. All proofs are done through polynomial time reductions from the following NP-hard variant of the independent set problem; see Garey \& Johnson \cite{GarJoh1979}. \begin{quote} Problem: Independent Set in Cubic Graphs ({{\sc Cubic-Ind-Set}}) \\[1.0ex] Instance: An undirected, connected graph $H=(V_H,E_H)$ in which every vertex is adjacent to exactly three other vertices; an integer bound $k$. \\[1.0ex] Question: Does $H$ contain an independent set $I$ with $|I|\ge k$ vertices? \end{quote} Throughout this section we consider a fixed rational number $\delta=a/b$, where $a$ and $b$ are positive integers that satisfy $\gcd(a,b)=1$ and $a\ge3$. Section~\ref{ssec:np.1} handles the cases with odd numerators $a\ge3$, and Section~\ref{ssec:np.2} handles the cases with even numerators $a\ge4$. It is instructive to verify that our arguments do not work for the cases with $a=1$ and $a=2$, as our gadgets and our arguments break down at various places. \subsection{NP-hard cases with odd numerator} \label{ssec:np.1} Throughout this section we consider a fixed rational number $\delta=a/b$ where $\gcd(a,b)=1$ and where $a\ge3$ is an odd integer.
For the NP-hardness proof, we first determine four positive integers $x_1,y_1,x_2,y_2$ that satisfy the following equations \eqref{eq:Bezout.1} and \eqref{eq:Bezout.2}. \begin{eqnarray} 2b\cdot x_1 - 2a\cdot y_1 &=& a-1 \label{eq:Bezout.1} \\[0.5ex] b\cdot x_2 - a\cdot y_2 &=& 1 \label{eq:Bezout.2} \end{eqnarray} Note that the value $a-1$ on the right-hand side of equation \eqref{eq:Bezout.1} is even, and hence is divisible by the greatest common divisor $\gcd(2b,2a)=2$ of the coefficients in the left-hand side. With this, B\'ezout's lemma yields the existence of positive integers $x_1$ and $y_1$ that satisfy \eqref{eq:Bezout.1}. B\'ezout's lemma also yields the existence of positive integers $x_2$ and $y_2$ in equation \eqref{eq:Bezout.2}, as the coefficients in the left-hand side are relatively prime. Our reduction now starts from an arbitrary instance $H=(V_H,E_H)$ and $k$ of {{\sc Cubic-Ind-Set}}, and constructs a corresponding dispersion instance $G=(V_G,E_G)$ from it. \begin{itemize} \item For every vertex $v\in V_H$, we create a corresponding vertex $v^*$ in $V_G$. \item For every edge $e=\{u,v\}\in E_H$, we create a corresponding vertex $e^*$ in $V_G$. \item For every edge $e=\{u,v\}\in E_H$, we create (i) a path with $x_1$ edges that connects vertex $u^*$ to vertex $e^*$, (ii) another path with $x_1$ edges that connects $v^*$ to $e^*$, and (iii) a cycle $C(e)$ with $x_2$ edges that runs through vertex $e^*$. \end{itemize} This completes the description of the graph $G=(V_G,E_G)$; see Figure~\ref{fig:gadget} for an illustration. We claim that graph $H$ contains an independent set of size $k$, if and only if $\disp{(a/b)}{G}\ge k+(2y_1+y_2)|E_H|$.
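The coefficients $x_1,y_1,x_2,y_2$ of equations \eqref{eq:Bezout.1} and \eqref{eq:Bezout.2} are easy to compute explicitly. The following sketch (an illustration alongside the proof, not part of the reduction itself) uses the extended Euclidean algorithm and then shifts the solution into the positive range.

```python
def ext_gcd(p, q):
    """Extended Euclid: returns (g, s, t) with s*p + t*q == g == gcd(p, q)."""
    if q == 0:
        return p, 1, 0
    g, s, t = ext_gcd(q, p % q)
    return g, t, s - (p // q) * t

def positive_solution(b, a, rhs):
    """Positive integers (x, y) with b*x - a*y == rhs, assuming gcd(a, b) == 1."""
    _, s, t = ext_gcd(b, a)               # s*b + t*a == 1
    x, y = s * rhs, -t * rhs              # b*x - a*y == rhs
    # all solutions are (x + k*a, y + k*b); pick k so that x, y >= 1
    k = max((a - x) // a, (b - y) // b)
    return x + k * a, y + k * b

def coefficients(a, b):
    """x1, y1, x2, y2 for equations (eq:Bezout.1), (eq:Bezout.2); a odd."""
    # dividing (eq:Bezout.1) by gcd(2b, 2a) = 2 gives b*x1 - a*y1 = (a-1)/2
    x1, y1 = positive_solution(b, a, (a - 1) // 2)
    x2, y2 = positive_solution(b, a, 1)
    return x1, y1, x2, y2
```

For instance, $\delta=5/2$ gives $(x_1,y_1)=(6,2)$, matching $2\cdot 2\cdot 6-2\cdot 5\cdot 2=4=a-1$.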
\begin{figure}[tb] \begin{center} \unitlength=1.05mm \begin{picture}(120,50)(0,0) \linethickness{0.40mm} \put( 0,10){\line(1,0){120}} \multiput(10,10)(10,0){11}{\circle*{2.2}} \put( 0,10){\circle*{3.5}} \put( 60,10){\circle*{3.5}} \put(120,10){\circle*{3.5}} \put( 0,16){\makebox(0,0)[cc]{\LARGE\boldmath $u^*$}} \put(120,16){\makebox(0,0)[cc]{\LARGE\boldmath $v^*$}} \put( 60, 4){\makebox(0,0)[cc]{\LARGE\boldmath $e^*$}} \put( 28, 2){\makebox(0,0)[cc]{\Large$\underbrace{\hspace{10em}}_{\text{path on $x_1$ edges}}$}} \put( 92, 2){\makebox(0,0)[cc]{\Large$\underbrace{\hspace{10em}}_{\text{path on $x_1$ edges}}$}} \put( 60,10){\line( 1,1){23}} \put( 60,10){\line(-1,1){23}} \put( 40,40){\line( 1,0){40}} \put( 40,40){\line(-1,-3){2.5}} \put( 80,40){\line( 1,-3){2.5}} \multiput(60,10)( 7.5,7.5){4}{\circle*{2.2}} \multiput(60,10)(-7.5,7.5){4}{\circle*{2.2}} \multiput(40,40)( 10, 0){5}{\circle*{2.2}} \put( 88,34){\makebox(0,0)[lc]{\small cycle $C(e)$ on $x_2$ edges}} \end{picture} \end{center} \caption{The edge $e=\{u,v\}$ in the instance of {{\sc Cubic-Ind-Set}} translates into three vertices $u^*$, $e^*$, $v^*$ in the dispersion instance, together with two paths and one cycle.} \label{fig:gadget} \bigskip \end{figure} \begin{lemma} \label{le:hard.1} If graph $H$ contains an independent set of size $k$, then the $(a/b)$-dispersion number of graph $G$ is at least $k+(2y_1+y_2)|E_H|$. \end{lemma} \proof Let $I$ be an independent set of size $k$ in graph $H=(V_H,E_H)$. We construct from $I$ a $\delta$-dispersed set $S\subset P(G)$ as follows. Let $u\in V_H$ be a vertex, and let $e_1,e_2,e_3$ be the three edges in $E_H$ that are incident to $u$. \begin{itemize} \item If $u\in I$, then we put point $u^*$ into $S$. On each of the three paths that connect vertex $u^*$ respectively to vertex $e_i^*$ ($i=1,2,3$), we select $y_1$ further points for $S$. 
The first selected point is at distance $\delta$ from $u^*$, and every further selected point is at distance $\delta=a/b$ from the preceding selected point. By equation \eqref{eq:Bezout.1}, on each of the three paths the distance from the final selected point to point $e_i^*$ ($i=1,2,3$) then equals $(a-1)/(2b)$. \item If $u\notin I$, then on each of the three paths between $u^*$ and $e_i^*$ ($i=1,2,3$) we select $y_1$ points for $S$. The first selected point is at distance $\delta/2=a/(2b)$ from $u^*$, and every further selected point is at distance $\delta$ from the preceding selected point. By equation \eqref{eq:Bezout.1}, the distance from the final selected point to point $e^*$ then equals $(2a-1)/(2b)$. \end{itemize} Furthermore, for every edge $e\in E_H$ we select $y_2$ points from the cycle $C(e)$ for $S$: \begin{itemize} \item We start in point $e^*$ and traverse $C(e)$ in clockwise direction. The first selected point is at distance $(a+1)/(2b)$ from point $e^*$, and every further selected point is at distance $\delta$ from the preceding selected point. By equation \eqref{eq:Bezout.2}, the distance from the final selected point to point $e^*$ then equals $(a+1)/(2b)$. \end{itemize} This completes the construction of set $S$. Now let us count the points in $S$. First, there are the $k$ points $u^*\in S$ for which $u\in I$. Furthermore, for every edge $e=\{u,v\}\in E_H$ there are $2y_1$ points in $S$ that lie on the two paths from $u^*$ to $e^*$ and from $e^*$ to $v^*$. Finally, for every edge $e\in E_H$ there are $y_2$ points that lie on the cycle $C(e)$. Altogether, this yields the desired size $k+(2y_1+y_2)|E_H|$ for $S$. It remains to verify that the point set $S$ is $\delta$-dispersed. By construction, the points selected from each path are at distance at least $\delta$ from each other, and the same holds for the points selected from each cycle. 
If vertex $u^*$ is in $S$, then all selected points on the three incident paths are at distance at least $\delta$ from $u^*$. If vertex $u^*$ is not in $S$, then the first selected point on every path is at distance $\delta/2$ from $u^*$, so that these points are pairwise at distance at least $\delta$ from each other. Hence the only potential trouble could arise in the neighborhood of point $e^*$, where paths and cycles are glued together. Every selected point on $C(e)$ is at distance at least $(a+1)/(2b)$ from point $e^*$. Every selected point on some path from $u^*$ to $e^*$ is at distance at least $(a-1)/(2b)$ from $e^*$ if $u\in I$ and is at distance at least $(2a-1)/(2b)$ if $u\notin I$. Since for any edge $e=\{u,v\}\in E_H$ at most one of the end vertices $u$ and $v$ is in $I$, at most one selected point can be at distance $(a-1)/(2b)$ from $e^*$, and all other points are at distance at least $(a+1)/(2b)$ from $e^*$. Hence $S$ is indeed $\delta$-dispersed. \qed \begin{lemma} \label{le:hard.2} If the $(a/b)$-dispersion number of graph $G$ is at least $k+(2y_1+y_2)|E_H|$, then graph $H$ contains an independent set of size $k$. \end{lemma} \proof Let $S$ be an $(a/b)$-dispersed set of size $k+(2y_1+y_2)|E_H|$. By Lemma~\ref{le:halfintegral} we assume that for every point $p(u,v,\lambda)$ in $S$, the denominator of the rational number $\lambda$ is $2b$. For an edge $e=\{u,v\}\in E_H$, let us consider its corresponding path $\pi$ on $x_1$ edges that connects vertex $u^*$ to vertex $e^*$. Suppose that there is some point $p$ in $S\cap\pi$ with $d(p,e^*)\le(a-2)/(2b)$. Then by Equation \eqref{eq:Bezout.2}, set $S$ will contain at most $y_2-1$ points from the cycle $C(e)$. In this case we restructure $S$ as follows: We remove point $p$ together with the at most $y_2-1$ points on cycle $C(e)$ from $S$, and instead insert $y_2$ points into $S$ that are $\delta$-dispersed on $C(e)$ and that all are at distance at least $(a+1)/(2b)$ from $e^*$. 
As this restructuring does not decrease the size of $S$, we will from now on assume without loss of generality that $d(p,e^*)\ge(a-1)/(2b)$ holds for every point $p\in S\cap\pi$. Now let us take a closer look at the points in $S\cap\pi$. Equation \eqref{eq:Bezout.1} can be rewritten into $x_1=y_1\delta+(a-1)/(2b)$, which yields $|S\cap\pi|\le y_1+1$. \begin{itemize} \item In the equality case $|S\cap\pi|=y_1+1$, we must have $u^*\in S$ and also the point on $\pi$ at distance $(a-1)/(2b)$ from $e^*$ must be in $S$. \item In case $|S\cap\pi|\le y_1$, there is ample space for picking $y_1$ points from $\pi$ that are $\delta$-dispersed and that are at distance at least $\delta/2$ from $u^*$ and at distance at least $\delta/2$ from $e^*$. Hence we will from now on assume $|S\cap\pi|=y_1$ in these cases. \end{itemize} Now let us count: Set $S$ contains exactly $y_1$ interior points from every path $\pi$, and altogether there are $2|E_H|$ such paths. Set $S$ contains exactly $y_2$ points from every cycle $C(e)$, and altogether there are $|E_H|$ such cycles. Since $|S|\ge k+(2y_1+y_2)|E_H|$, this means that $S$ must contain at least $k$ further points on vertices $u^*$ with $u\in V_H$. The corresponding subset of $V_H$ is called $I$. Finally, we claim that this set $I$ with $|I|\ge k$ forms an independent set in graph $H$. Suppose for the sake of contradiction that there is an edge $e=\{u,v\}\in E_H$ with $u\in I$ and $v\in I$. Consider the two paths that connect $u^*$ to $e^*$ and $v^*$ to $e^*$. By the above discussion, $S$ then contains two points at distance $(a-1)/(2b)$ from $e^*$. As these two points are then at distance at most $(a-1)/b<\delta$ from each other, we arrive at the desired contradiction. \qed \bigskip The statements in Lemmas~\ref{le:hard.1} and~\ref{le:hard.2} yield the following theorem. \begin{theorem} \label{th:npc-odd} Let $a$ and $b$ be positive integers with $\gcd(a,b)=1$ and odd $a\ge3$.
Then it is NP-hard to compute the $(a/b)$-dispersion number of a graph $G$. \end{theorem} \subsection{NP-hard cases with even numerator} \label{ssec:np.2} In this section we consider a fixed rational number $\delta=a/b$ where $\gcd(a,b)=1$ and where $a\ge4$ is an even integer. The NP-hardness argument is essentially a minor variation of the argument in Section~\ref{ssec:np.1} for the cases with odd numerators. Therefore, we will only explain the modifications, and leave all further details to the reader. The NP-hardness proof in Section~\ref{ssec:np.1} is centered around the four positive integers $x_1,y_1,x_2,y_2$ introduced in equations \eqref{eq:Bezout.1} and \eqref{eq:Bezout.2}. We perform the same reduction from {{\sc Cubic-Ind-Set}} as in Section~\ref{ssec:np.1} but with positive integers $x_1,y_1,x_2,y_2$ that satisfy the following equations \eqref{eq:Bezout.1b} and \eqref{eq:Bezout.2b}. \begin{eqnarray} 2b\cdot x_1 - 2a\cdot y_1 &=& a-2 \label{eq:Bezout.1b} \\[0.5ex] b\cdot x_2 - a\cdot y_2 &=& 2 \label{eq:Bezout.2b} \end{eqnarray} In \eqref{eq:Bezout.1b}, the right-hand side $a-2$ is even and divisible by the greatest common divisor of the coefficients in the left-hand side. In \eqref{eq:Bezout.2b}, the coefficients in the left-hand side are relatively prime. Therefore B\'ezout's lemma can be applied to both equations. The graph $G=(V_G,E_G)$ is defined as before, with a vertex $v^*$ for every $v\in V_H$ and a vertex $e^*$ for every $e\in E_H$, with paths on $x_1$ edges and cycles $C(e)$ on $x_2$ edges. The arguments in Lemmas~\ref{le:hard.1} and~\ref{le:hard.2} can easily be adapted and yield the following theorem. \begin{theorem} \label{th:npc-even} Let $a$ and $b$ be positive integers with $\gcd(a,b)=1$ and even $a\ge4$. Then it is NP-hard to compute the $(a/b)$-dispersion number of a graph $G$.
\end{theorem} \subsection{Containment in NP} \label{ssec:np.4} In this section we consider the decision version of $\delta$-dispersion: \emph{``For a given graph $G=(V,E)$, a positive real $\delta$, and a bound $k$, decide whether $\disp{\delta}{G}\ge k$.''} Our NP-certificate specifies the following partial information on a $\delta$-dispersed set $S$ in a graph $G=(V,E)$: \begin{itemize} \item The certificate specifies the set $W:=V\cap S$. \item For every edge $e\in E$, the certificate specifies the number $n_e$ of facilities that are located in the interior of $e$. \end{itemize} As every edge accommodates at most $1/\delta+1$ points from $S$, the encoding length of our certificate is polynomially bounded in the instance size. For verifying the certificate, we introduce for every vertex $u$ and for every incident edge $e=\{u,v\}\in E$ with $n_e>0$ a corresponding real variable $x(u,e)$, which models the distance between vertex $u$ and the closest point from $S$ in the interior of edge $e$. Finally, we introduce the following linear constraints: \begin{itemize} \item The non-negativity constraints $x(u,e)\ge0$. \item For every edge $e=\{u,v\}\in E$, the inequality $x(u,e)+(n_e-1)\delta+x(v,e)\le1$. \item For all $u,v\in W$ with $u\ne v$, the inequality $d(u,v)\ge\delta$. \item For all $w\in W$ and $e=\{u,v\}\in E$, the inequality $x(u,e)+d(u,w)\ge\delta$. \item For all $e=\{u,v\}\in E$ and $e'=\{u',v'\}\in E$, the inequality $x(u,e)+d(u,u')+x(u',e')\ge\delta$. \end{itemize} These inequalities enforce that on every edge the variables properly work together, and that the underlying point set indeed is $\delta$-dispersed. For verifying the certificate, we simply check in polynomial time whether the resulting linear program has a feasible solution, and whether $|W|+\sum_{e\in E}n_e\ge k$ holds. \begin{theorem} \label{th:np-containment} The decision version of $\delta$-dispersion lies in NP, even if the value $\delta$ is given as part of the input.
\qed \end{theorem} \section{The polynomial time result for \texorpdfstring{$\delta=2$}{delta=2}} \label{sec:delta=2} This section derives a polynomial time algorithm for computing the $2$-dispersion number of a graph. This algorithm is heavily based on tools from matching theory, as for instance developed in the book by Lov\'asz \& Plummer \cite{LovPlu1986}. As usual, the size of a maximum cardinality matching in graph $G$ is denoted by $\nu(G)$. \begin{lemma} \label{le:stefan.0} Every graph $G=(V,E)$ satisfies~ $\disp{2}{G}\ge\nu(G)$. \end{lemma} \proof The midpoints of the edges in every matching form a $2$-dispersed set. \qed \bigskip A $2$-dispersed set is in \emph{canonical} form, if it entirely consists of vertices and of midpoints of edges. Recall that by Lemma~\ref{le:halfintegral} every graph $G=(V,E)$ possesses an optimal $2$-dispersed set in canonical form. Throughout this section, we will consider $2$-dispersed (but not necessarily optimal) sets $S^*$ in canonical form; we always let $V^*$ denote the set of vertices in $S^*$, and we let $E^*$ denote the set of edges whose midpoints are in $S^*$. Finally, $N^*\subseteq V$ denotes the set of vertices in $V-V^*$ that have a neighbor in $V^*$. As $S^*$ is $2$-dispersed, the vertex set $V^*$ forms an independent set in $G$, and the edge set $E^*$ forms a matching in $G$. Furthermore, the vertex set $N^*$ separates the vertices in $V^*$ from the edges in $E^*$; in particular, no edge in $E^*$ covers any vertex in $N^*$. We start with two technical lemmas that will be useful in later arguments. \begin{lemma} \label{le:stefan.1} Let $G=(V,E)$ be a graph with a perfect matching, and let $S^*$ be some $2$-dispersed set in canonical form in $G$. Then $|S^*|\le\nu(G)$. \end{lemma} \proof Let $M\subseteq E$ denote a perfect matching in $G$, and for every vertex $v\in V$ let $e(v)$ denote its incident edge in matching $M$. Consider the vertex set $V^*$ and the edge set $E^*$ that correspond to set $S^*$. 
Then $E^*$ together with the edges $e(v)$ with $v\in V^*$ forms another matching $M'$ of cardinality $|E^*|+|V^*|=|S^*|$ in $G$. Now $|S^*|=|M'|\le\nu(G)$ yields the desired inequality. \qed \bigskip A graph $G$ is \emph{factor-critical} \cite{LovPlu1986}, if for every vertex $x\in V$ there exists a matching that covers all vertices except $x$. A \emph{near-perfect} matching in a graph covers all vertices in $V$ except one. Note that the statement in the following lemma cannot be extended to graphs that consist of a single vertex. \begin{lemma} \label{le:stefan.2} Every $2$-dispersed set $S^*$ in a factor-critical graph $G=(V,E)$ with $|V|\ge3$ satisfies $|S^*|\le\nu(G)$. \end{lemma} \proof Without loss of generality we assume that $S^*$ is in canonical form, and we let $V^*$ and $E^*$ denote the underlying vertex set and edge set, respectively. If $V^*$ is empty, we have $|S^*|=|E^*|\le\nu(G)$ since $E^*$ is a matching. If $V^*$ is non-empty, then also $N^*$ is non-empty (here we use the condition $|V|\ge3$) and we pick some vertex $x\in N^*$. We consider a near-perfect matching $M$ that covers all vertices except $x$, and we let $e(v)$ denote the edge incident to $v\in V$ in matching $M$. Then $E^*$ together with the edges $e(v)$ with $v\in V^*$ forms another matching $M'$ of cardinality $|E^*|+|V^*|=|S^*|$ in $G$. The claim follows from $|S^*|=|M'|\le\nu(G)$. \qed \begin{figure}[tb] \centerline{\includegraphics[height=8.0cm]{disperse1.eps}} \caption{An illustration for the Edmonds-Gallai structure theorem. A maximum matching is shown with fat edges, and the non-matching edges are dashed.} \label{fig:EG} \end{figure} \bigskip The following theorem goes back to Edmonds \cite{Edmonds1965} and Gallai \cite{Gallai1963,Gallai1964}; see also Lov\'asz \& Plummer \cite{LovPlu1986}. Figure~\ref{fig:EG} gives an illustration. \begin{theorem} \label{th:EG} (Edmonds-Gallai structure theorem) Let $G=(V,E)$ be a graph. 
The following decomposition of $V$ into three sets $X,Y,Z$ can be computed in polynomial time. \begin{eqnarray*} X &=& \{v\in V\mid \text{ there exists a maximum matching that misses $v$}\} \\[0.5ex] Y &=& \{v\in V\mid \text{ $v\notin X$ and $v$ is adjacent to some vertex in $X$}\} \\[0.5ex] Z &=& V-(X\cup Y) \end{eqnarray*} The Edmonds-Gallai decomposition has the following properties: \begin{itemize} \item Set $X$ is the union of the odd-sized components of $G-Y$; every such odd-sized component is factor-critical. Set $Z$ is the union of the even-sized components of $G-Y$. \item Every maximum matching in $G$ induces a perfect matching on every (even-sized) component of $Z$ and a near-perfect matching on every (odd-sized) component of $X$. Furthermore, the matching matches the vertices in $Y$ to vertices that belong to $|Y|$ different components of $X$. \qed \end{itemize} \end{theorem} We further subdivide the set $X$ in the Edmonds-Gallai decomposition into two parts: Set $X_1$ contains the vertices of $X$ that belong to components of size~$1$, and set $X_{\ge3}$ contains the vertices that belong to (odd-sized) components of size at least~$3$. The \emph{vicinity} $\text{vic}(v)$ of a vertex $v\in V$ consists of vertex $v$ itself and of the midpoints of all edges incident to $v$. \begin{lemma} \label{le:properties} There exists an optimal $2$-dispersed set $S^*$ in canonical form (with underlying edge set $E^*$) that additionally satisfies the following three properties. \begin{quote} \begin{itemize} \item[P1.] In every component of $X_{\ge3}$, the set $E^*$ induces a near-perfect matching. \item[P2.] For every vertex $y\in Y$, the set $\text{vic}(y)\cap S^*$ is either empty or consists of the midpoint of some edge between $X$ and $Y$. \item[P3.] In every component of $Z$, the set $E^*$ induces a perfect matching. 
\end{itemize} \end{quote} \end{lemma} \proof We start from an arbitrary optimal $2$-dispersed set $S^*$ (in canonical form, with corresponding sets $V^*$ and $E^*$) and transform it in two steps into an optimal $2$-dispersed set of the desired form. In the first transformation step, we exploit a matching $M$ between sets $Y$ and $X$ that matches every vertex $y\in Y$ to some vertex $M(y)$, so that for $y_1\ne y_2$ the vertices $M(y_1)$ and $M(y_2)$ belong to different components of $X$; see Theorem~\ref{th:EG}. A vertex $y\in Y$ is called \emph{blocked} if it is adjacent to some $x\in X_1\cap S^*$. Since for a blocked vertex $y$ the set $\text{vic}(y)\cap S^*$ is empty (so that property P2 already holds for $y$), we do not touch blocked vertices for the moment. We transform $S^*$ in the following way. \begin{itemize} \item For every non-blocked vertex $y\in Y$, the set $\text{vic}(y)\cap S^*$ contains at most one point, as any two points in $\text{vic}(y)$ are at distance at most~$1$ from each other. We remove this point from $S^*$, and we insert instead the midpoint of the edge between $y$ and $M(y)$ into $S^*$. These operations cannot decrease the size of $S^*$. \item Every (odd-sized) component $C$ of $X_{\ge3}$ contains at most one vertex $M(y)$ with $y\in Y$. We compute a near-perfect matching $M_C$ for $C$ that misses this vertex $M(y)$ (and if no such vertex is in $C$, matching $M_C$ misses an arbitrary vertex of $C$). We remove all points in $C$ from $S^*$, and we insert instead the midpoints of the edges in $M_C$. As by Lemma~\ref{le:stefan.2} we remove at most $\nu(C)$ points and as we insert exactly $\nu(C)$ points, these operations will not decrease the size of $S^*$. \end{itemize} The resulting set $S^*$ is of course again in canonical form, and it is also easy to see that $S^*$ is still $2$-dispersed. Furthermore, $S^*$ now satisfies properties P1 and P2. In the second transformation step, we note that the current $S^*$ contains neither vertices from $Y$ nor midpoints of edges between $Y$ and $Z$.
For every (even-sized) component $C$ of $Z$, we compute a perfect matching $M_C$. We remove all points in $C$ from $S^*$, and we insert instead the midpoints of the edges in $M_C$. As by Lemma~\ref{le:stefan.1} we remove at most $\nu(C)$ points and as we insert exactly $\nu(C)$ points, these operations will not decrease the size of $S^*$. The resulting set $S^*$ is $2$-dispersed and satisfies properties P1, P2, and P3. \qed \bigskip The optimal $2$-dispersed sets in Lemma~\ref{le:properties} are strongly structured and fairly easy to understand: The perfect matchings in set $Z$ contribute exactly $|Z|/2$ points to $S^*$. Every (odd-sized) component $C$ in $X_{\ge3}$ contributes exactly $(|C|-1)/2$ points to $S^*$. The only remaining open decisions concern the points in $X_1$ and the midpoints of the edges $\{y,M(y)\}$ for $y\in Y$. So let us consider the set $T:=S^*\cap X_1$, and let $\Gamma(T)\subseteq Y$ denote the set of vertices in $Y$ that are adjacent to some vertex in $T$. Then every vertex $y$ in $Y-\Gamma(T)$ contributes the midpoint of $\{y,M(y)\}$ to $S^*$, and every vertex $x\in T$ contributes itself to $S^*$. Hence the remaining optimization problem boils down to finding a subset $T\subseteq X_1$ that maximizes the function value $f(T):=|Y-\Gamma(T)|+|T|$, which is equivalent to minimizing the function value \begin{equation} \label{eq:submodular} g(T):=|\Gamma(T)|-|T|. \end{equation} The set function $g(T)$ in \eqref{eq:submodular} is a \emph{submodular} function, as it satisfies $g(A)+g(B)\ge g(A\cup B)+g(A\cap B)$ for all $A,B\subseteq X_1$; see for instance Gr\"otschel, Lov\'asz \& Schrijver \cite{GroLovSch1988}. Therefore, the minimum value of $g(T)$ can be determined in polynomial time by the ellipsoid method \cite{GroLovSch1988}, or by Cunningham's combinatorial algorithm \cite{Cunningham1985}.
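The submodular inequality for $g$ can be sanity-checked numerically on toy instances. The following Python sketch uses a made-up bipartite adjacency (the instance \texttt{adj} and the function names are ours, not from the paper), verifies submodularity exhaustively, and brute-forces a minimizer; the exhaustive search is of course exponential, unlike the polynomial methods cited above.

```python
from itertools import combinations

# Toy instance (made up for illustration): neighborhoods of the
# X_1-vertices inside Y, i.e. the map x -> Gamma({x}).
adj = {"x1": {"y1"}, "x2": {"y1", "y2"}, "x3": {"y2", "y3"}}

def g(T):
    """g(T) = |Gamma(T)| - |T|, as in equation (eq:submodular)."""
    gamma = set().union(*(adj[x] for x in T)) if T else set()
    return len(gamma) - len(T)

subsets = [frozenset(c) for r in range(len(adj) + 1)
           for c in combinations(adj, r)]

# Submodularity: g(A) + g(B) >= g(A u B) + g(A n B) for all A, B.
assert all(g(A) + g(B) >= g(A | B) + g(A & B)
           for A in subsets for B in subsets)

# Exhaustive minimization (exponential; only for cross-checking the
# polynomial-time submodular/min-cut approaches on small instances).
best = min(subsets, key=g)
print(sorted(best), g(best))
```

On this instance the minimum value of $g$ is $0$, attained already by the empty set.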
We also describe another way of minimizing the function $g(T)$ in polynomial time, which avoids the heavy machinery of submodular optimization and formulates the problem as a minimum $s$-$t$-cut computation in a weighted directed auxiliary graph. The auxiliary graph is defined as follows. \begin{itemize} \item Its vertex set contains a source $s$ and a sink $t$, together with all the vertices in $X_1$ and all the vertices in $Y$. \item For every $x\in X_1$, there is an arc $(s,x)$ of weight $w(s,x)=1$ from the source to $x$. For every $y\in Y$, there is an arc $(y,t)$ of weight $w(y,t)=1$ from $y$ to the sink. Whenever the vertices $x\in X_1$ and $y\in Y$ are adjacent in the original graph $G$, the auxiliary graph contains the arc $(x,y)$ of weight $w(x,y)=+\infty$. \end{itemize} Now let us consider some $s$-$t$-cut of finite weight, which is induced by some vertex set $U$ in the auxiliary graph with $s\in U$ and $t\notin U$. As all arcs from set $X_1$ to set $Y$ have infinite weight, whenever $U$ contains some vertex $x\in X_1$, it must also contain all the neighbors of $x$ in $Y$. By setting $T:=X_1\cap U$, we get that the value of the cut is at least $|X_1-T|+|\Gamma(T)|=|X_1|+g(T)$, with equality for $U=\{s\}\cup T\cup\Gamma(T)$; hence a minimizer for \eqref{eq:submodular} can be read off a minimum cut in the auxiliary graph. We finally summarize all our insights and formulate the main result of this section. \begin{theorem} \label{th:delta=2} The $2$-dispersion number of a graph $G$ can be computed in polynomial time. \qed \end{theorem} \section{The polynomially solvable cases} \label{sec:polynomial} Theorem~\ref{th:delta=2} and Lemma~\ref{le:monotonicity} together imply that for every rational number $\delta=a/b$ with numerator $a\le2$, the $\delta$-dispersion number of a graph can be computed in polynomial time. We now present some results that provide additional structural insights into these cases.
The cases where the numerator is $a=1$ are structurally trivial, and the value of the corresponding $\delta$-dispersion number can be written down with the sole knowledge of $|V|$ and $|E|$. \begin{lemma} \label{le:delta=1} Let $\delta=1/b$ for some integer $b$, and let $G=(V,E)$ be a connected graph. \begin{itemize} \item If $G$ is a tree, then $\disp{\delta}{G}=b|E|+1$. \item If $G$ is not a tree, then $\disp{\delta}{G}=b|E|$. \end{itemize} \end{lemma} \proof If $G$ is a tree, we use a $\delta$-dispersed set $S$ that contains all vertices in $V$ and that for every edge $e=\{u,v\}$ contains all points $p(u,v,i/b)$ with $i=1,\ldots,b-1$. Clearly $|S|=b|E|+1$. If $G$ is not a tree, set $S$ contains for every edge $e=\{u,v\}$ all the points $p(u,v,(2i-1)/(2b))$ with $i=1,\ldots,b$. Clearly $|S|=b|E|$. It remains to show that there are no $\delta$-dispersed sets of larger cardinality. If $G$ is a tree, we root it at an arbitrary vertex. We partition $P(G)$ into $|E|+1$ regions: one region consists of the root, and for every edge there is a region consisting of its interior points together with its end vertex farther away from the root. A $\delta$-dispersed set contains at most $b$ points from every edge-region and at most one point from the root region. If $G$ is not a tree, we similarly partition $P(G)$ into $|E|$ regions: Every region either consists of the interior points of some edge, or of the interior points of an edge together with one of its incident vertices. A $\delta$-dispersed set contains at most $b$ points from every such region. \qed \bigskip The following lemma derives an explicit (and very simple) connection between the $2$-dispersion number and the $(2/b)$-dispersion number (with odd denominator $b$) of a graph. The lemma also directly implies that for every odd $b$, the computation of $(2/b)$-dispersion numbers is polynomial time equivalent to the computation of $2$-dispersion numbers.
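As a quick illustration of Lemma~\ref{le:delta=1}, the $\delta$-dispersion number for $\delta=1/b$ can indeed be read off from $|V|$ and $|E|$ alone. The following sketch (the function name is ours) transcribes the two cases for a connected graph:

```python
def disp_one_over_b(num_vertices, num_edges, b):
    """Dispersion number for delta = 1/b on a *connected* graph,
    following Lemma [le:delta=1]: b|E|+1 for trees, b|E| otherwise."""
    is_tree = (num_edges == num_vertices - 1)  # valid since G is connected
    return b * num_edges + (1 if is_tree else 0)

# A path on 4 vertices (a tree with 3 edges), b = 2:
print(disp_one_over_b(4, 3, 2))  # 7 = 2*3 + 1
# A triangle (3 edges, not a tree), b = 2:
print(disp_one_over_b(3, 3, 2))  # 6 = 2*3
```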
\begin{lemma} \label{le:numerator=2} Let $G=(V,E)$ be a graph, let $z\ge1$ be an integer, and let $\delta=2/(2z+1)$. Then the dispersion numbers satisfy~ $\disp{\delta}{G}=\disp{2}{G}+z|E|$. \end{lemma} \proof We first show that $\disp{\delta}{G}\ge\disp{2}{G}+z|E|$. Indeed, let $S_2$ denote an optimal $2$-dispersed set for $G$. By Lemma~\ref{le:halfintegral} we assume that $S_2$ is in canonical form and hence consists entirely of vertices and of midpoints of edges. We partition the edge set $E$ into three parts: Part $E_1$ contains the edges for which one end vertex is in $S_2$. Part $E_{1/2}$ contains the edges whose midpoint lies in $S_2$. Part $E_0$ contains the remaining edges (which hence are disjoint from $S_2$). We construct a point set $S_{\delta}\subset P(G)$ as follows: \begin{itemize} \item For every edge $\{u,v\}\in E_1$ with $u\in S_2$, we put point $u$ together with the $z$ points $p(u,v,i\delta)$ with $i=1,\ldots,z$ into $S_{\delta}$. \item For every edge $\{u,v\}\in E_{1/2}$, we put the $z+1$ points $p(u,v,(4i-3)\delta/4)$ with $i=1,\ldots,z+1$ into $S_{\delta}$. \item For every $\{u,v\}\in E_0$, we put the $z$ points $p(u,v,(4i-1)\delta/4)$ with $i=1,\ldots,z$ into $S_{\delta}$. \end{itemize} It is easily verified that the resulting set $S_{\delta}$ is $\delta$-dispersed and contains $|S_2|+z|E|$ points. Next, we show that $\disp{\delta}{G}\le\disp{2}{G}+z|E|$. Let $S_{\delta}$ denote an optimal $\delta$-dispersed set for $G$. By Lemma~\ref{le:halfintegral} we assume that for every point $p(u,v,\lambda)$ in $S_{\delta}$, the denominator of the rational number $\lambda$ is $2(2z+1)$. Our first goal is to bring the points in $S_{\delta}$ into a particularly simple constellation. \begin{itemize} \item As long as there exist edges $e=\{u,v\}\in E$ with $u,v\in S_{\delta}$, we remove all points on $e$ from $S_{\delta}$ and replace them by the $z+1$ points $p(u,v,(4i-3)\delta/4)$ with $i=1,\ldots,z+1$.
\item Next, for every edge $e=\{u,v\}\in E$ with $u\in S_{\delta}$ and $v\notin S_{\delta}$, we remove all points on $e$ from $S_{\delta}$ and replace them by the $z+1$ points $p(u,v,i\delta)$ with $i=0,1,\ldots,z$ (note that $p(u,v,0)=u$). \item Finally, for every edge $e=\{u,v\}\in E$ with $u,v\notin S_{\delta}$ we remove all points on $e$ from $S_{\delta}$ and replace them by the $z$ points $p(u,v,(4i-1)\delta/4)$ with $i=1,\ldots,z$. \end{itemize} It can be seen that these transformations do not decrease the cardinality of $S_{\delta}$, and that the resulting set is still $\delta$-dispersed. Finally, we construct the following set $S_2$ from $S_{\delta}$: First, $S_2$ contains all points in $V\cap S_{\delta}$. Second, whenever $S_{\delta}$ contains $z+1$ points from the interior of some edge $e\in E$, we put the midpoint of $e$ into $S_2$. It can be shown that the resulting set $S_2$ is $2$-dispersed and has the desired cardinality. \qed
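Combining Lemmas~\ref{le:stefan.0}, \ref{le:stefan.1} and \ref{le:numerator=2}: for a graph with a perfect matching we have $\disp{2}{G}=\nu(G)$, and every $\delta=2/(2z+1)$ adds $z|E|$ on top. A small sanity check (the helper name is ours; the values are for the $4$-cycle $C_4$):

```python
def disp_odd_denominator(disp2, num_edges, z):
    """Lemma [le:numerator=2]: disp_{2/(2z+1)}(G) = disp_2(G) + z|E|."""
    return disp2 + z * num_edges

# The 4-cycle C4 has a perfect matching, so disp_2(C4) = nu(C4) = 2
# by Lemmas [le:stefan.0] and [le:stefan.1].  With |E| = 4:
print(disp_odd_denominator(2, 4, 1))  # delta = 2/3: 2 + 1*4 = 6
print(disp_odd_denominator(2, 4, 2))  # delta = 2/5: 2 + 2*4 = 10
```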
\section{Introduction}\label{sec-intro} \setcounter{equation}{0} If an ensemble of particles is immersed in an incompressible fluid, then the dynamics of the system can be modeled by the kinetic Cucker--Smale model coupled with the incompressible Navier--Stokes equations. Throughout the paper, $\nabla$ and $\Delta$ without indices denote the gradient and Laplacian with respect to the spatial variable $x$, respectively. Suppose that the particles are transported by the fluid velocity. Then the coupled system reads as \begin{equation} \label{eq-cs-ns} \begin{dcases} f_t + \bu \cdot \nabla_{x} f+ \nabla_{v} \cdot (L[f]f+(\bu-v)f)=0,\\ \bu_t+ \bu \cdot \nabla \bu +\nabla P=\Delta \bu +\int_{\bbr^3}(v-\bu)fdv, \\ \nabla \cdot \bu=0, \end{dcases} \end{equation} subject to the initial data \begin{equation} \label{eq-sys-inidata} f|_{t=0}=f_0, \quad \bu|_{t=0}=\bu_0, \end{equation} with $\bu_0$ satisfying the compatibility condition $\nabla \cdot \bu_0=0$. Here $f(t,x, v)$ is the particle distribution function in phase space $(x, v)\in \bbr^3\times \bbr^3$ at time $t$, and $\bu$ and $P$ represent the fluid velocity and pressure, respectively. $L[f]$ is given by \[ L[f](t, x, v)=\int_{\bbr^{3}}\int_{\bbr^3}\varphi(|x-y|)f(t, y, v^*)(v^*-v)dy dv^*, \] where $\varphi(\cdot) \in C_{b}^{1}$ is a positive non-increasing function denoting the interaction kernel. Without loss of generality, we postulate that \[ \max\{|\varphi|, |\varphi'|\} \le 1 \] in the sequel. Detailed background concerning the system \eqref{eq-cs-ns} can be found in \cite{Bae2012}\cite{bae2014global}. We remark that the transport term in $\eqref{eq-cs-ns}_1$ is $\bu \cdot \nabla_{x} f$, instead of $v \cdot \nabla_{x} f$ as in the usual kinetic model. This is because we suppose that the particles are transported by the fluid. Such a transport term also appeared in the previous literature \cite{Constantin2007Global}\cite{Lin2007cpam}\cite{Lin2008on}, where some micro-macro models arising from polymeric fluids are investigated.
The first equation in \eqref{eq-cs-ns} is the kinetic Cucker--Smale model with a drag term. Let us now review some background on it. In 2007, Cucker--Smale \cite{Cucker2007} introduced a system of ODEs, termed the Cucker--Smale model, to describe flocking behaviors in multi-agent systems. Then Ha--Liu \cite{Ha2009} contributed a complete analysis of the Cucker--Smale model using the Lyapunov functional approach, and further derived the kinetic Cucker--Smale model by taking the mean-field limit of the particle model. The results on the particle model in \cite{Ha2009} were extended to measure-valued solutions of the kinetic Cucker--Smale model in \cite{Carrillo2010}. See also \cite{Canizo2011} for an elegant analysis employing the optimal transport theory. As for weak and strong solutions in weighted Sobolev spaces, Jin \cite{Jin2018} recently established the well-posedness by developing a unified framework. The method of weighted energy estimates in \cite{Jin2018} is also used to deal with the kinetic equation $\eqref{eq-cs-ns}_1$ in this paper. For classical solutions to the kinetic Cucker--Smale model, we refer to \cite{Ha2008}. Taking stochastic factors into account in the modeling, an additional diffusive term appears in the kinetic Cucker--Smale model. This kind of kinetic equation is of Fokker--Planck type, allowing for the existence of an equilibrium. Duan et al. \cite{duan2010kinetic} analyzed the stability and convergence rate of classical solutions towards an equilibrium under small initial perturbations in Sobolev spaces, using the micro-macro decomposition. By now, research on the Cucker--Smale model has been carried out at the particle, kinetic, and hydrodynamic levels of description. The hydrodynamic limits of some kinetic equations were analyzed in \cite{Figalli2019}\cite{karper2015hydrodynamic}, based on the relative entropy method.
For results on the hydrodynamic Cucker--Smale model, we refer to \cite{Ha2014}\cite{ha2015emergent}\cite{jin2015well}\cite{Jin}. The interested reader can also consult the review papers \cite{carrillo2010particle}\cite{choi2017emergent} for the state of the art in this research topic. The remaining two equations in \eqref{eq-cs-ns} are the three dimensional incompressible Navier--Stokes equations with a coupling term. Since the seminal work of Leray \cite{Leray1934} in 1934, there has been extensive research on the Navier--Stokes equations. Among these works, we only mention results related to this paper. In fact, the incompressible Navier--Stokes equations have an important scaling property, i.e., if $\bu(t,x)$ is a solution on the whole space, then $\bu_{\lambda}(t,x):=\lambda\bu(\lambda^2t,\lambda x)$ is also a solution. A space whose norm is invariant under this scaling is referred to as a critical space. Much work has been carried out in this setting, mainly based on two types of methods: one relies on energy type estimates, and the other on fixed point arguments. Well-posedness in $\dot{H}^{\frac12}(\bbr^3)$ is due to Fujita--Kato \cite{Fujita1964}. Existence results in critical Besov spaces can be found in Cannone \cite{Cannone1997A} and Planchon \cite{Planchon1998Asymptotic}. In this direction, the best result to date was contributed by Koch--Tataru \cite{koch2001}, where small global-in-time solutions in $BMO^{-1}$ were constructed. For a modern treatment in the setting of the critical $L^3(\bbr^3)$ space, we refer to the excellent book \cite{robinson2016the}. In this paper, we generalize the $L^3$ approach to the incompressible Navier--Stokes equations with a coupling term. Even though global weak solutions have been known since \cite{Leray1934}, their large time behaviors were investigated much later.
Kato first made progress in \cite{Kato1984StrongLp}, laying the foundations for decay and existence questions for solutions to the incompressible Navier--Stokes equations. For weak solutions with integrable initial data, Schonbek \cite{Schonbek1985L} obtained an algebraic decay rate for the Cauchy problem, using the Fourier splitting method. Later, this method was extended to various other diffusive equations, possibly with an external force. In this paper, we will generalize it to the coupled kinetic-fluid model. Coupled kinetic-fluid models were first introduced by Williams \cite{Williams1985Combustion} in the framework of combustion theory, and have received much attention since then, due to their applications in biotechnology, medicine, waste-water recycling, and mineral processing \cite{Carrillo2006stability}. Most previous results concentrated on weak solutions in spatially periodic domains, or on classical solutions under small smooth perturbations around an equilibrium. Global existence and large time behavior of weak solutions to the Vlasov equation coupled with the unsteady Stokes system were investigated by Hamdache \cite{hamdache1998global}, in a bounded domain with reflection boundary conditions. Later, Boudin et al. \cite{boudin2009global} extended Hamdache's work and studied global existence of weak solutions to the Vlasov--Navier--Stokes equations in a spatially periodic domain. Such coupled kinetic-fluid equations also appear in the setting of flocking particles immersed in fluids. For the kinetic Cucker--Smale model coupled with the Stokes equations, the incompressible Navier--Stokes equations, and the isentropic compressible Navier--Stokes equations, existence and large time behaviors of weak or strong solutions in spatially periodic domains have been analyzed in \cite{Bae2012}\cite{bae2014asymptotic}\cite{bae2014global}\cite{bae2016global}.
We refer the reader to \cite{carrillo2011global}\cite{goudon2010navier}\cite{li2017strong} for global classical solutions to coupled kinetic-fluid models in the small perturbation regime. Recently, the author has started a program to study the Cauchy problem for the kinetic Cucker--Smale model coupled with fluid equations. Local existence and a blowup criterion for strong solutions to the kinetic Cucker--Smale model coupled with the isentropic compressible Navier--Stokes equations were obtained in Jin \cite{jin2019local}, where weighted Sobolev spaces were introduced to overcome difficulties arising from the unboundedness of the domain and the coupling term. It was shown that the integrability in time of the spatial $W^{1,\infty}$-norm of $\bu(t,x)$ controls blowup of strong solutions. Along this line, Jin \cite{Jin2021} investigated global existence of strong solutions to the kinetic Cucker--Smale model coupled with the Stokes equations, by using the maximal regularity estimate on the Stokes equations. In this paper, we further study the kinetic Cucker--Smale model coupled with the three dimensional incompressible Navier--Stokes equations. It is remarked that two aspects differ from most previous results: first, the regularity of the initial data is minimal in terms of existence of strong solutions to the incompressible Navier--Stokes equations; we control the quantity governing the extension of local strong solutions by invoking the maximal regularity estimate on the Stokes equations, instead of using high order energy estimates, which would require initial data lying in high order Sobolev spaces. Second, large time behaviors of strong solutions are analyzed in the whole space, instead of a bounded or spatially periodic domain, so the Poincar{\'e} inequality is not applicable; we circumvent this difficulty by a careful use of the Fourier splitting method.
Motivated by our previous studies \cite{jin2019local}\cite{Jin2021}, we introduce the following weighted Sobolev space to overcome the difficulty induced by the unboundedness of the domain. \begin{multline*} H_{\omega}^1(\bbr^3 \times \bbr^3):=\bigg\{h(x,v):\ h \in L_{\omega}^2(\bbr^3 \times \bbr^3), \\ \nabla_{x} h\in L_{\omega}^2(\bbr^3 \times \bbr^3),\ \nabla_{v}h \in L_{\omega}^2(\bbr^3 \times \bbr^3)\bigg\}, \end{multline*} \[ \|h\|_{H_{\omega}^1}^2:=\|h\|_{L_{\omega}^2}^2+\|\nabla_{x}h\|_{L_{\omega}^2}^2+\|\nabla_{v}h\|_{L_{\omega}^2}^2, \] where \[ \|h\|_{L_{\omega}^2}:=\left(\int_{\bbr^3}\int_{\bbr^3}h^2(x,v)\omega(x)dx dv\right)^{\frac12}, \quad \omega(x):=(1+|x|^2)^{\alpha},\quad \alpha>\frac32, \] and similarly for $\|\nabla_{x}h\|_{L_{\omega}^2}$ and $\|\nabla_{v}h\|_{L_{\omega}^2}$. The following simplified notations for homogeneous Sobolev spaces are used in this paper. \[ D^1(\bbr^3):=\left\{u\in L^{6}(\bbr^3):\ \nabla u \in L^2(\bbr^3)\right\}, \] \[ D^2(\bbr^3):=\left\{u\in L_{loc}^1(\bbr^3):\ \nabla^2u \in L^2(\bbr^3)\right\}, \] \[ D^{2,p}(\bbr^3):=\left\{u\in L_{loc}^1(\bbr^3):\ \nabla^2u \in L^p(\bbr^3)\right\}, \quad 1\le p\le \infty. \] Next we give the definition of strong solutions to \eqref{eq-cs-ns}-\eqref{eq-sys-inidata}. \begin{definition} \label{def-stro} Let $0<T< \infty$.
$(f(t,x,v), \bu(t,x),\nabla P(t,x))$ is said to be a strong solution to \eqref{eq-cs-ns}-\eqref{eq-sys-inidata} in $[0,T]$ if \[ \begin{aligned} &f(t,x,v)\in C([0,T];H_{\omega}^1(\bbr^3 \times \bbr^3)),\\ &\bu(t,x)\in C([0,T];H^{1}(\bbr^3))\cap L^2(0,T;D^{2}(\bbr^3)),\\ &\bu_t(t,x)\in L^2(0,T;L^{2}(\bbr^3)),\\ &\nabla P(t,x)\in L^2(0,T;L^{2}(\bbr^3)), \end{aligned} \] and \[ \begin{gathered} \int_0^{\infty}\int_{\bbr^6}f \phi_t dx dv dt+\int_0^{\infty}\int_{\bbr^6}f \bu \cdot \nabla_{x}\phi dx dv dt\\+\int_0^{\infty}\int_{\bbr^6}\Big(L[f]+\bu-v \Big)\cdot \nabla_{v}\phi f dx dv dt +\int_{\bbr^6}f_0 \phi(0) dx dv=0, \end{gathered} \] for all $\phi(t,x,v)\in C_0^{\infty}([0,T)\times \bbr^3 \times \bbr^3)$; \[ \begin{gathered} \int_0^{\infty}\int_{\bbr^3}\bu\cdot \boldsymbol{\psi}_t dx dt-\int_0^{\infty}\int_{\bbr^3}\nabla \bu : \nabla \boldsymbol{\psi} dx dt -\int_0^{\infty}\int_{\bbr^3} \bu \cdot \nabla \bu \cdot \boldsymbol{\psi} dx dt \\+\int_0^{\infty}\int_{\bbr^6}(v-\bu) \cdot \boldsymbol{\psi}f dx dv dt +\int_{\bbr^3}\bu_0 \cdot\boldsymbol{\psi}(0) dx=0, \end{gathered} \] for all $\boldsymbol{\psi}(t,x)\in C_{0,\sigma}^{\infty}([0,T)\times \bbr^3)$, where \[ C_{0,\sigma}^{\infty}([0,T)\times \bbr^3):=\bigg\{\boldsymbol{\psi}(t,x): \ \boldsymbol{\psi}(t,x)\in C_0^{\infty}([0,T)\times \bbr^3), \ \nabla \cdot \boldsymbol{\psi}=0 \bigg\}. \] \end{definition} We define the particle density as \[ \rho(t,x):=\int_{\bbr^3}f(t,x,v)dv, \] with the initial value $\rho_0(x):=\rho(0,x)$. The energy of the system \eqref{eq-cs-ns} is defined as \[ E(t):=\frac12 \|\bu(t)\|_{L^2}^2 +\frac12 \int_{\bbr^3}\int_{\bbr^3}f(t,x,v)|v|^2dxdv, \] with the initial energy $E_0:=E(0)$. If the initial fluid velocity $\bu_0(x)\in H^1(\bbr^3)$, it follows from the Sobolev embedding that \[ \|\bu_0\|_{L^3}\le C \|\bu_0\|_{H^1},\quad \text{for some constant $C>0$.} \] Denote by $B(R_0)$ the ball centered at the origin with radius $R_0$.
The bound of $v$-support of $f(t,x, v)$ at the time $t$ is defined as \[ R(t):=\sup \Big\{|v|: \ v \in \text{supp$f(t,x,\cdot)$ for a.e. $x \in \bbr^3$}\Big \}. \] If $f_0(x,v) \in H_{\omega}^1(\bbr^3 \times \bbr^3)\cap L^{\infty}(\bbr^3 \times \bbr^3)$ and $\text{supp}_{v}f_0(x,\cdot)\subseteq B(R_0)$ for a.e. $x \in \bbr^3$, then we have \[ \|\rho_0\|_{L^1}=\int_{\bbr^3}\int_{\bbr^3}f_0(x,v)dxdv\le C R_0^{\frac32} \|f_0\|_{L_{\omega}^2}, \] and \[ \|\rho_0\|_{L^{\infty}}=\left\|\int_{\bbr^3}f_0(x,v)dv \right\|_{L^{\infty}}\le C R_0^3 \|f_0\|_{L^{\infty}}, \] for some constant $C>0$. In terms of the above notations, the results of this paper are summarized as follows. \begin{theorem} \label{thm-exist} Let $0<R_0<\infty$. Assume that $f_0(x,v)\ge 0$, $f_0(x,v) \in H_{\omega}^1(\bbr^3 \times \bbr^3)\cap L^{\infty}(\bbr^3 \times \bbr^3)$, and $\bu_0(x) \in H^{1}(\bbr^3)$, with $\nabla \cdot \bu_0=0$ and the $v$-support of $f_0(x,v)$ satisfying \[ \text{supp}_{v}f_0(x,\cdot)\subseteq B(R_0) \quad \text{for a.e. $x \in \bbr^3$}. \] If there is a sufficiently small constant $\varepsilon_0$ such that \[ C\|\bu_0\|_{L^3}^6+(1+\|\rho_0\|_{L^{\infty}})E_0 \le \varepsilon_0 \] for some constant $C>0$ independent of the initial data, then the Cauchy problem \eqref{eq-cs-ns}-\eqref{eq-sys-inidata} admits a unique global-in-time strong solution in the sense of Definition \ref{def-stro}. Moreover, it holds that \[ \begin{aligned} &(1)\ \|f(t)\|_{H_{\omega}^1}\le \|f_0\|_{H_{\omega}^1}\exp\Big(C(1+t)\Big) \quad \text{and} \quad R(t)\le R_0+C,\\&\quad\, \text{where $C:=C(R_0, E_0, \|\rho_0\|_{L^{\infty}}, \|\bu_0\|_{H^1},\|\bu_0\|_{L^3})$};\\ &(2)\ \|\bu(t)\|_{H^{1}}\le C\bigg(\|\nabla \bu_0\|_{L^2}+\Big(1+\|\rho_0\|_{L^{\infty}}^{\frac12}\Big)E_0^{\frac12}\bigg)\exp\left(C(\varepsilon_0+\varepsilon_0^{\frac12})\right),\\ &\quad\, \text{for some constant $C>0$ independent of the initial data};\\ &(3)\ \lim_{t \to \infty}\int_{\bbr^6} |v-\bu|^2 f dxdv =0. 
\end{aligned} \] \end{theorem} If, in addition to the conditions in Theorem \ref{thm-exist}, $\bu_0(x)$ is integrable, then a quantitative decay rate for the system can be obtained. \begin{theorem} \label{thm-beha} Assume that the initial data satisfy the conditions in Theorem \ref{thm-exist}. If additionally $\bu_0(x)\in L^1(\bbr^3)$, then there holds \[ \begin{aligned} &(1)\ E(t)\le C\Big(1+t\Big)^{-\frac32} \quad \text{and} \quad \|\bu(t)\|_{L^{2}}\le C\Big(1+t\Big)^{-\frac34};\\ &(2)\ \int_{\bbr^6} |v-\bu|^2 f dxdv \le C\Big(1+t\Big)^{-\frac32}, \end{aligned} \] where $C:=C(E_0, \|\rho_0\|_{L^\infty}, \|\bu_0\|_{L^1})$. \end{theorem} \begin{remark}\label{rem-thm} The transport term in $\eqref{eq-cs-ns}_1$ is taken as $\bu \cdot \nabla_xf$ in order to derive a uniform-in-time bound on the particle density $\rho(t,x)$, which plays a crucial role in the analysis of global existence and large time behaviors of strong solutions. As for the usual transport term $v \cdot \nabla_xf$, particle concentration in phase space seems unavoidable. Whether or not Theorem \ref{thm-exist} holds in this setting is still unknown, due to the lack of a uniform-in-time bound on the particle density. \end{remark} \begin{remark} The regularity of $\bu_0$ is optimal in terms of existence of strong solutions to the incompressible Navier--Stokes equations. The small global strong solution is obtained in the critical $L^3(\bbr^3)$ space, based on an energy-type estimate. As is well known, the best critical space to date for existence of small global solutions to the three dimensional incompressible Navier--Stokes equations is $BMO^{-1}$. Whether the same type of results holds for small $\bu_0$ in $BMO^{-1}$ is a problem deserving further investigation. \end{remark} \begin{remark} Compared with decay results for the three dimensional incompressible Navier--Stokes equations, the decay rates in Theorem \ref{thm-beha} are optimal.
Since the smallness assumption is only used to derive global strong solutions, and plays no role in the analysis of large time behaviors, we conjecture that Theorem \ref{thm-beha} also holds for large weak solutions to the system \eqref{eq-cs-ns}-\eqref{eq-sys-inidata}. \end{remark} Based on our analysis of local strong solutions to the coupled system, in order to extend local solutions globally it suffices to prove that $\int_0^T \|\bu(t)\|_{W^{1,\infty}} dt<\infty$ for all $T>0$. This estimate can be obtained by interpolation, once we have an estimate on $\|\nabla^2\bu\|_{L^s(0,T;L^q)}$, $1<s<2$, $3<q<6$. In fact, the estimate of $\|\nabla^2\bu\|_{L^s(0,T;L^q)}$ can be obtained by employing the maximal regularity estimate on the Stokes equations. Nevertheless, the incompressible Navier--Stokes equations contain the convective term $\bu\cdot\nabla\bu$. To deal with the convective term, we need an estimate on $\sup_{0\le t\le T}\|\nabla\bu(t)\|_{L^2}$. However, such an estimate for the three dimensional incompressible Navier--Stokes equations is still open for large initial data. Thus, so far, this estimate can only be hoped for in the small initial data regime. In this paper, we generalize the small global existence result in the critical $L^3(\bbr^3)$ space for the incompressible Navier--Stokes equations to the coupled system \eqref{eq-cs-ns}-\eqref{eq-sys-inidata}. Using a splitting approach, we decompose $\bu=\bar\bu +\bw$, with $\bar\bu$ and $\bw$ determined by the following equations: \begin{equation*} \label{eq-ns-baru} \begin{dcases} \bar\bu_t+ \bu \cdot \nabla \bar\bu =\Delta \bar\bu, \\ \bar\bu|_{t=0}=\bu_0, \end{dcases} \end{equation*} and \begin{equation*} \label{eq-ns-w} \begin{dcases} \bw_t+ \bu \cdot \nabla \bw +\nabla P=\Delta \bw +\int_{\bbr^3}(v-\bu)fdv, \\ \bw|_{t=0}=0, \end{dcases} \end{equation*} respectively.
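For the reader's convenience, we record the formal computation behind the standard $L^3$-type estimate on $\bar\bu$ (a sketch only; integrability and differentiability issues at points where $\bar\bu$ vanishes are ignored). Testing the equation for $\bar\bu$ with $3|\bar\bu|\bar\bu$, the convective term vanishes, since \[ \int_{\bbr^3}\bu \cdot \nabla \bar\bu\cdot 3|\bar\bu|\bar\bu\, dx=\int_{\bbr^3}\bu \cdot \nabla |\bar\bu|^3 dx=0 \] by the incompressibility condition $\nabla\cdot\bu=0$, and we arrive at the identity \[ \frac{d}{dt}\|\bar\bu\|_{L^3}^3+3\int_{\bbr^3}|\bar\bu|\,|\nabla \bar\bu|^2dx+\frac43\int_{\bbr^3}\Big|\nabla |\bar\bu|^{\frac32}\Big|^2dx=0. \]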
In the process of the $L^3$-type energy estimate, the estimate on $\bar\bu$ is standard under the incompressibility condition $\nabla\cdot\bu=0$. The difficulty mainly arises from the term $\int_{\bbr^3}3|\bw|\bw\cdot\nabla P dx$, where the pressure $P$ is determined by the following equation: \[ \Delta P=-\nabla\cdot(\bu\cdot\nabla\bu)+\nabla\cdot\bh, \quad \text{with $\bh:=\int_{\bbr^3}(v-\bu)f dv$.} \] The appearance of the additional term $\nabla\cdot\bh$ prevents us from using integration by parts to estimate $\int_{\bbr^3}3|\bw|\bw\cdot\nabla P dx$. We overcome this difficulty by decomposing $P=P_1+P_2$, with $P_1$ and $P_2$ governed by \[ \Delta P_1=-\nabla\cdot(\bu\cdot\nabla\bu)\quad\text{and}\quad \Delta P_2=\nabla\cdot\bh, \] respectively. Then it holds that \[ \int_{\bbr^3}3|\bw|\bw\cdot\nabla P dx=\int_{\bbr^3}3|\bw|\bw\cdot\nabla P_1 dx+\int_{\bbr^3}3|\bw|\bw\cdot\nabla P_2 dx. \] The first term on the right-hand side of the above equation is dealt with as in the incompressible Navier--Stokes equations, i.e., using integration by parts and the inequality \[ \|P_1\|_{L^p}\le C\|\bu\|_{L^{2p}}^{2}\quad \text{for some constant $C>0$ and $1<p<\infty$}. \] The second term is estimated directly, using H\"older's inequality and the fact that \[ \|\nabla P_2\|_{L^p}\le C\|\bh\|_{L^{p}}\quad \text{for some constant $C>0$ and $1<p<\infty$}. \] Thanks to the uniform-in-time bound on the particle density $\rho(t,x)$ and the elementary energy estimate on the system \eqref{eq-cs-ns}-\eqref{eq-sys-inidata}, the quantity $\int_0^T \|\bh(t)\|_{L^p}^2dt$ for $1\le p\le 2$ can be controlled by $(1+\|\rho_0\|_{L^{\infty}})E_0$ for all $T>0$. Then we can obtain the Serrin condition $\bu\in L^3(0,T;L^9)$ for all $T>0$ under some smallness assumption on the initial data, which leads to an estimate on $\sup_{0\le t\le T}\|\nabla\bu(t)\|_{L^2}$ for all $T>0$.
In the analysis of large time behaviors of the coupled system, we use the Fourier splitting method. However, we cannot directly apply this method to the fluid equations, since \[ \frac12\frac{d}{dt}\|\bu\|_{L^2}^2 +\|\nabla\bu\|_{L^2}^2=\int_{\bbr^6}(v-\bu)\cdot\bu f dxdv, \] and we do not have enough decay estimates on the coupling term $\int_{\bbr^6}(v-\bu)\cdot\bu f dxdv$. Combining with the kinetic equation, we deduce that \[ \frac{d}{dt}E(t) +\|\nabla\bu\|_{L^2}^2+\int_{\bbr^6}|v-\bu|^2 f dxdv \le 0. \] Our main observation is that \[ E(t)\le (1+\|\rho_0\|_{L^{\infty}})\left(\|\bu\|_{L^2}^2+\int_{\bbr^6}|v-\bu|^2 f dxdv\right), \] under the uniform-in-time bound on the particle density $\rho(t,x)$. Then we use the Fourier splitting method to derive the inequality \[ \frac{d}{dt}E(t) +\frac{c^2}{(t+c^2)(1+\|\rho_0\|_{L^{\infty}})}E(t) \le \frac{c^2}{t+c^2}\int_{|\xi|\le c(t+c^2)^{-\frac12}} |\hat\bu(t,\xi)|^2d\xi, \] where the positive constant $c$ satisfies $c^2=3(1+\|\rho_0\|_{L^{\infty}})$. However, due to the appearance of the coupling term, we are still faced with some difficulties in estimating the low-frequency part of $\bu(t,x)$. Fortunately, these difficulties can be overcome by a subtle bootstrap argument. It follows from the elementary energy estimate that $E(t)\le E_0$ for all $t\ge 0$. Therefore, we can make the bootstrap assumption that \[ E(t)<K(1+t)^{-\frac98} \quad \text{in $[0,T)$} \] for some suitably large constants $K$ and $T$. Under this assumption, the coupling term can be controlled by using a time-weighted energy estimate on the coupled system \eqref{eq-cs-ns}-\eqref{eq-sys-inidata}. Then we can improve the estimate on the low-frequency part of $\bu(t,x)$, which leads to a better decay rate on $E(t)$, i.e., \[ E(t)\le CK(1+t)^{-\frac32}+CK^2(1+t)^{-\frac52} \quad \text{in $[0,T)$} \] for some constant $C>0$, depending only on $E_0$, $\|\rho_0\|_{L^\infty}$ and $\|\bu_0\|_{L^1}$; see details in Sect. \ref{sec-glob-exst}.
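The decay mechanism in the above Fourier splitting inequality can be illustrated on a scalar model ODE: a damping of size $O((1+t)^{-1})$ balanced against a forcing with faster polynomial decay produces algebraic decay of $E$. The following sketch uses illustrative exponents and constants (not those of the actual proof) and integrates $E'=-\frac32E/(1+t)+(1+t)^{-3}$, for which $(1+t)^{3/2}E(t)$ remains bounded, i.e. $E(t)\le C(1+t)^{-3/2}$:

```python
# Toy model of the Fourier splitting mechanism (illustrative constants only):
#   E'(t) = -(3/2) E(t)/(1+t) + (1+t)**(-3).
# Multiplying by the weight (1+t)^{3/2} gives
#   d/dt [ (1+t)^{3/2} E ] = (1+t)^{-3/2},
# so the weighted quantity stays bounded and E(t) <= C (1+t)^{-3/2}.
def integrate(E0=1.0, T=200.0, dt=1e-3):
    """Forward Euler for the model ODE; returns (E at final time, final time)."""
    E, t = E0, 0.0
    while t < T:
        E += dt * (-1.5 * E / (1.0 + t) + (1.0 + t) ** -3)
        t += dt
    return E, t

E, t = integrate()
# Exactly, (1+t)^{3/2} E = E0 + 2*(1 - (1+t)^{-1/2}) <= E0 + 2.
print((1.0 + t) ** 1.5 * E)
```

The printed weighted quantity approaches $E_0+2$ from below, confirming the $(1+t)^{-3/2}$ decay of the model.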
Taking $K$ suitably large, we can prove by a continuity argument that the supremum of all the above $T$ is $\infty$. Thus we can improve the bootstrap assumption and show that \[ E(t)\le C(1+t)^{-\frac32} \quad \text{for all $t\ge 0$}, \] where $C:=C(E_0, \|\rho_0\|_{L^\infty}, \|\bu_0\|_{L^1})$. Then the decay rates of $\|\bu(t)\|_{L^2}$ and $\int_{\bbr^6} |v-\bu|^2 f dxdv$ follow easily. The proof is subtle and robust. We believe that the idea can also be used in some other coupled models. The rest of the paper is organized as follows. In Sect. \ref{sec-preli}, we present some preliminary results used in the subsequent analysis. In Sect. \ref{set-loc-exist}, we construct local strong solutions to the coupled model by iteration on the linearized system. In Sect. \ref{sec-apriori}, we derive some a priori estimates on classical solutions, which are used to extend local strong solutions. Sect. \ref{sec-glob-exst} is devoted to the proof of our theorems. \vskip 0.3cm \noindent\textbf{Notation}. Throughout the paper, $c, C$ represent generic positive constants that may depend on $\varphi$, $\varphi'$, and the initial data. We write $C(\star)$ to emphasize that $C$ depends on $\star$. Both $C$ and $C(\star)$ may take different values in different expressions. \section{Preliminaries}\label{sec-preli} \setcounter{equation}{0} In this section, we present some preliminaries that will be used in the following analysis. The first result concerns the well-posedness and estimates on strong solutions to the kinetic Cucker--Smale model with a coupling term, provided that the fluid velocity is given in some function space. It will be used in the construction of local strong solutions to the system \eqref{eq-cs-ns}-\eqref{eq-sys-inidata}. The second result is the maximal regularity estimate on the Stokes equations.
This estimate is crucially used to prove $\int_0^T \|\bu(t)\|_{W^{1,\infty}} dt<\infty$ for all $T>0$, in order to extend local strong solutions globally in time. \subsection{The kinetic Cucker--Smale model with a coupling term} When an ensemble of particles is immersed in a fluid, due to the influence of the ambient fluid, the alignment term $L[f]f$ is replaced by $L[f]f+(\bu-v)f$, so that the total mass and momentum of the system are conserved. In this paper, we suppose that particles are transported by the fluid velocity $\bu$. Then the transport term in the kinetic equation becomes $\bu\cdot\nabla_xf$. Given $\bu(t,x)\in C([0,T];H^{1}(\bbr^3))\cap L^2(0,T;D^{2}(\bbr^3)) \cap L^s(0,T;D^{2,q}(\bbr^3))$, $1<s<2$, $3<q<6$, with $\nabla\cdot\bu=0$, it follows from the Sobolev embedding that $\bu(t,x)\in L^1(0,T;W^{1,\infty}(\bbr^3))$. Consider \begin{equation} \label{eq-kine-cs} \begin{dcases} f_t + \bu \cdot \nabla_{x} f+ \nabla_{v} \cdot (L[f]f+(\bu-v)f)=0,\\ f|_{t=0}=f_0(x,v), \end{dcases} \end{equation} in $[0,T]\times \bbr^3 \times \bbr^3$. Define \[ a(t,x):=\int_{\bbr^{6}} \varphi(|x-y|)f(t, y,v^*) dy dv^*, \] \[ \mathbf{b} (t,x):=\int_{\bbr^{6}} \varphi(|x-y|)f(t, y,v^*) v^* dy dv^*. \] Recall that the bound of the $v$-support of $f(t,x, v)$ at time $t$ is defined as \[ R(t):=\sup \Big\{|v|: \ v \in \text{supp$f(t,x,\cdot)$ for a.e. $x \in \bbr^3$}\Big \}. \] Following the line of proof of Proposition 2.1 in \cite{jin2019local}, we have the following results. \begin{proposition}\label{prop-kine-cs-wp} Let $0<R_0, T<\infty$. Assume $f_0(x,v) \ge 0$, $f_0(x,v) \in H_{\omega}^1(\bbr^3 \times \bbr^3)$, and $\text{supp}_{v}f_0(x,\cdot)\subseteq B(R_0)$ for a.e. $x \in \bbr^3$. Given $\bu(t,x)\in C([0,T];H^{1}(\bbr^3))\cap L^2(0,T;D^{2}(\bbr^3)) \cap L^s(0,T;D^{2,q}(\bbr^3))$, $1<s<2$, $3<q<6$, with $\nabla\cdot\bu=0$, there exists a unique non-negative strong solution $f(t, x, v)\in C([0,T];H_{\omega}^1(\bbr^3 \times \bbr^3))$ to \eqref{eq-kine-cs}.
Moreover, \[ \begin{aligned} (1)& \ R(t)\le R_0+\int_0^t(\|\mathbf{b}(\tau)\|_{L^{\infty}}+\|\bu(\tau)\|_{L^{\infty}})d\tau,\quad 0\le t \le T ;\\ (2)& \ \|f(t)\|_{H_{\omega}^1}\le \|f_0\|_{H_{\omega}^1}\exp \left( C \int_0^t \Big(1+R(\tau)+\|\bu(\tau)\|_{W^{1,\infty}} \Big) d \tau \right), \quad 0\le t \le T, \end{aligned} \] where $C:=C(\|\varphi\|_{C^1}, \|f_0\|_{L^1})$. \end{proposition} From $f_0(x,v) \in H_{\omega}^1(\bbr^3 \times \bbr^3)$, we deduce that \begin{equation} \label{eq-ini-f-lonenm} \|f_0\|_{L^1}=\int_{\bbr^6}f_0(x,v)\omega^{\frac12}(x)\omega^{-\frac12}(x)dx dv \le C(R_0)\|f_0\|_{L^2_{\omega}}. \end{equation} Integrating $\eqref{eq-kine-cs}_1$ over $[0,t]\times \bbr^3 \times \bbr^3$, $0<t\le T$, gives \begin{equation} \label{eq-kin-conser-mass} \|f(t)\|_{L^1}=\|f_0\|_{L^1}. \end{equation} Without loss of generality, we suppose $\|f_0\|_{L^1}=1$. Using the Cauchy--Schwarz inequality, we have \begin{equation}\label{eq-estb} \begin{aligned} |\mathbf{b}(t,x)| \le& \left(\int_{\bbr^{6}} f(t, y,v^*) dy dv^* \right)^{\frac12}\left(\int_{\bbr^{6}} f(t, y, v^*) |v^*|^2 dy dv^* \right)^{\frac12}\\ \le& \left(\int_{\bbr^{6}} f(t, y, v^*) |v^*|^2 dy dv^* \right)^{\frac12}, \quad 0\le t \le T. \end{aligned} \end{equation} Multiplying $\eqref{eq-kine-cs}_1$ by $|v|^2$, we obtain \begin{equation} \label{eq-kin-cs-ener} \frac{\partial}{\partial t} (f |v|^2)+ \bu \cdot \nabla_{x}(f |v|^2)+\nabla_{v} \cdot \Big(L[f]f |v|^2+(\bu-v)f |v|^2\Big)=2fL[f]\cdot v+2f v \cdot (\bu-v). \end{equation} Integrating \eqref{eq-kin-cs-ener} over $\bbr^3 \times \bbr^3$ leads to \[ \begin{aligned} \frac{d}{dt}\int_{\bbr^6}f|v|^2 dx dv=&-\int_{\bbr^{12}}\varphi(|x-y|)f(t,x,v)f(t,y,v^*)|v^*-v|^2dy dv^* dx dv\\ &-2\int_{\bbr^6}f|v|^2 dx dv+2\int_{\bbr^3}\int_{\bbr^3}f v dv \cdot \bu dx\\ \le&-2\int_{\bbr^6}f|v|^2 dx dv+2\|\bu(t)\|_{L^{\infty}}\|f\|_{L^1}^{\frac12}\Bigg(\int_{\bbr^6}f|v|^2 dx dv\Bigg)^{\frac12}.
\end{aligned} \] Solving the above Gronwall-type inequality yields \begin{equation}\label{eq-cs-twmonieq} \Bigg(\int_{\bbr^6}f(t, x, v)|v|^2 dx dv\Bigg)^{\frac12}\le R_0+\int_0^t \|\bu(\tau)\|_{L^{\infty}}d\tau, \quad 0\le t \le T. \end{equation} Combining \eqref{eq-estb} and \eqref{eq-cs-twmonieq} with Proposition \ref{prop-kine-cs-wp} (1), we can further estimate $R(t)$ as follows: \begin{equation}\label{eq-cs-estvbd} R(t)\le \left(R_0+\int_0^t \|\bu(\tau)\|_{L^{\infty}}d\tau\right)(1+t),\quad 0\le t \le T. \end{equation} \subsection{Maximal regularity estimate on the Stokes equations} Our analysis of the incompressible Navier--Stokes equations is based on the global-in-time maximal $L^p-L^q$ estimates for the Stokes equations. By global-in-time maximal $L^p-L^q$ estimates, we mean that for given $\bg(t,x)\in L^p(0,T;L^q(\bbr^3))$, there exists a unique solution $(\bu,\nabla P)$ to the Stokes equations \begin{equation} \label{eq-s} \begin{dcases} \bu_t+\nabla P=\Delta \bu +\bg, \\ \nabla \cdot \bu=0,\\ \bu|_{t=0}=0, \end{dcases} \end{equation} satisfying \[ \|\bu_t\|_{L^p(0,T;L^q)}+\|\nabla^2\bu\|_{L^p(0,T;L^q)}+\|\nabla P\|_{L^p(0,T;L^q)}\le C\|\bg\|_{L^p(0,T;L^q)}, \] for some constant $C>0$ independent of $T$ and $\bg$. Pioneering results on maximal $L^p-L^p$ estimates are due to Solonnikov \cite{Solonnikov1977Estimates}. Later, by proving boundedness of the imaginary powers of the Stokes operator, and using the Dore--Venni Theorem \cite{Dore1987On}, Giga--Sohr \cite{giga1991abstract} extended these estimates to the mixed $L^p-L^q$ setting, with the above constant $C$ improved to be independent of $T$, i.e., global in time. For a modern approach to the maximal regularity estimates from the point of view of evolution equations, we refer the reader to \cite{Pruss2016Moving}.
In terms of the regularity of $\bu(t,x)$, the space of initial data $\bu_0(x)$, as the trace space of $\bu(t,x)$, can be characterized by the real interpolation space \[ \Big(W^{2,q}, L^q\Big)_{\frac1s,s}=B_{q,s}^{2-\frac2s}. \] For general initial data in the Besov space $B_{q,s}^{2-\frac2s}$, it is proved in \cite{giga1991abstract} that \[ \|\bu_t\|_{L^s(0,T;L^q)}+\|\nabla^2\bu\|_{L^s(0,T;L^q)}+\|\nabla P\|_{L^s(0,T;L^q)}\le C\bigg(\|\bu_0\|_{B_{q,s}^{2-\frac2s}}+\|\bg\|_{L^s(0,T;L^q)}\bigg). \] In this paper, we suppose \begin{equation} \label{eq-sq-rela} 1<s<2, \quad 3<q<6, \quad \text{and} \quad \frac2s+\frac3q>\frac52. \end{equation} It follows from the Sobolev embedding that \begin{equation} \label{eq-s-inidaemb} \|\bu_0\|_{B_{q,s}^{2-\frac2s}}\le C\|\bu_0\|_{H^1}, \quad \text{for some constant $C>0$.} \end{equation} Combining Theorem 2.8 in \cite{giga1991abstract} with \eqref{eq-s-inidaemb}, we have the following proposition. \begin{proposition}\label{prop-s} Let $0<T\le\infty$, $1<s<2$, $3<q<6$, $\frac2s+\frac3q>\frac52$. Given $\bg(t,x)\in L^s(0,T;L^q(\bbr^3))$, $\bu_0(x)\in H^1(\bbr^3)$, there exists a unique solution $(\bu,\nabla P)$ to the Stokes equations \begin{equation} \label{eq-s-nzroini} \begin{dcases} \bu_t+\nabla P=\Delta \bu +\bg, \\ \nabla \cdot \bu=0,\\ \bu|_{t=0}=\bu_0, \end{dcases} \end{equation} such that for some constant $C>0$ independent of $T$ and $\bg$, \[ \|\bu_t\|_{L^s(0,T;L^q)}+\|\nabla^2\bu\|_{L^s(0,T;L^q)}+\|\nabla P\|_{L^s(0,T;L^q)}\le C\Big(\|\bu_0\|_{H^1}+\|\bg\|_{L^s(0,T;L^q)}\Big). \] If $\bg(t,x)\in L^2(0,T;L^2(\bbr^3))$, it holds similarly that \[ \|\bu_t\|_{L^2(0,T;L^2)}+\|\nabla^2\bu\|_{L^2(0,T;L^2)}+\|\nabla P\|_{L^2(0,T;L^2)}\le C\Big(\|\bu_0\|_{H^1}+\|\bg\|_{L^2(0,T;L^2)}\Big), \] for some constant $C>0$ independent of $T$ and $\bg$.
\end{proposition} \section{Construction of Local Strong Solutions}\label{set-loc-exist} \setcounter{equation}{0} This section is devoted to the construction of local strong solutions to the coupled system. Based on our previous results for the kinetic Cucker--Smale model, we first linearize the incompressible Navier--Stokes equations and construct approximate solutions by iteration. Then we show that there exists some $T_*>0$, depending only on initial data and model parameters, such that the approximate solutions are uniformly bounded in strong solution spaces on $[0, T_*]$. Next, it is proved that the approximate solution sequence is a Cauchy sequence in a lower order regularity function space on some short time interval. Combining this with the uniform bound on the approximate solutions, we further demonstrate that the limit is the desired local strong solution, by standard functional-analytic arguments. The result in this section is summarized as follows. \begin{proposition} \label{prop-loc-exist} Let $0<R_0<\infty$. Assume that $f_0(x, v)\ge 0$, $f_0(x, v) \in H_{\omega}^1(\bbr^3 \times \bbr^3)$, and $\bu_0(x) \in H^{1}(\bbr^3)$, with $\nabla \cdot \bu_0=0$ and the $v$-support of $f_0(x, v)$ satisfying \[ \text{supp}_{v}f_0(x,\cdot)\subseteq B(R_0) \quad \text{for a.e. $x \in \bbr^3$}. \] Then there exists some $T_0>0$, depending only on $R_0$, $\|\bu_0\|_{H^1}$, $\|f_0\|_{H_{\omega}^1}$, such that the Cauchy problem \eqref{eq-cs-ns}-\eqref{eq-sys-inidata} admits a unique strong solution on $[0,T_0]$, satisfying \[ \begin{aligned} (1) \ &\sup_{0\le t\le T_0}\|f(t)\|_{H_{\omega}^1}\le 2\|f_0\|_{H_{\omega}^1}, \ \text{and $R(t)\le 2R_0$ for $t\in [0, T_0]$};\\ (2)\ &\sup_{0\le t\le T_0}\|\bu(t)\|_{H^1}+\|\bu_t\|_{L^2(0,T_0;L^2)}+\|\nabla^2\bu\|_{L^2(0,T_0;L^2)}+ \|\nabla^2\bu\|_{L^s(0,T_0;L^q)}\le 2K_0 \|\bu_0\|_{H^1}\\ &\text{for some constant $K_0>0$.} \end{aligned} \] \end{proposition} Next we use the results in Sect.
\ref{sec-preli} to complete the proof of Proposition \ref{prop-loc-exist}. \vskip 0.3cm \noindent \textit{Proof of Proposition \ref{prop-loc-exist}}. We first construct approximate solutions by linearizing the incompressible Navier--Stokes equations. Given $\bu^n(t, x) \in C([0,T];H^{1}(\bbr^3))\cap L^2(0,T;D^{2}(\bbr^3))\cap L^s(0,T;D^{2,q}(\bbr^3))$, $T>0$, with $\nabla \cdot \bu^{n}=0$ and $\bu^n|_{t=0}=\bu_0$, $(f^{n+1},\bu^{n+1},\nabla P^{n+1})$ is determined by \begin{equation} \label{eq-cs-s-appro} \begin{dcases} f^{n+1}_t + \bu^n \cdot \nabla_{x} f^{n+1}+ \nabla_{v} \cdot (L[f^{n+1}]f^{n+1}+(\bu^{n}-v)f^{n+1})=0,\\ \bu^{n+1}_t+\bu^n \cdot \nabla \bu^{n+1}+ \nabla P^{n+1}=\Delta \bu^{n+1}+\int_{\bbr^3}(v-\bu^{n+1})f^{n+1}dv,\\ \nabla \cdot \bu^{n+1}=0, \end{dcases} \end{equation} subject to the initial data \begin{equation} \label{eq-cs-s-approcau} f^{n+1}|_{t=0}=f_0, \quad \bu^{n+1}|_{t=0}=\bu_0, \end{equation} with $\bu_0$ satisfying the compatibility condition $\nabla \cdot \bu_0=0$. From Proposition \ref{prop-kine-cs-wp}, we know that $f^{n+1}$ is well-defined. Substituting $f^{n+1}$ into $\eqref{eq-cs-s-appro}_2$, $(\bu^{n+1},\nabla P^{n+1})$ can be obtained by solving the linear equations in a routine way. Thus $(f^{n+1},\bu^{n+1},\nabla P^{n+1})$ is obtained, and repeating the above procedure gives rise to an approximate solution sequence. In the iteration procedure, $\bu^0$ is set by \begin{equation} \label{eq-cs-s-approini} \begin{dcases} \bu^0_t=\Delta\bu^0,\\ \bu^0|_{t=0}=\bu_0. \end{dcases} \end{equation} It is easy to see that \begin{equation}\label{eq-induini} \sup_{0\le t\le \infty}\|\bu^0(t)\|_{H^1}+\|\bu^0_t\|_{L^2(0,\infty;L^2)}+\|\nabla^2\bu^0\|_{L^2(0,\infty;L^2)}+ \|\nabla^2\bu^0\|_{L^s(0,\infty;L^q)}\le 2K_0 \|\bu_0\|_{H^1} \end{equation} for some constant $K_0>0$, where we have applied Proposition \ref{prop-s} to \eqref{eq-cs-s-approini}. Then the proof of Proposition \ref{prop-loc-exist} is divided into the following three steps.
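The structure of the above iteration, namely solving a linear problem driven by the previous iterate and then showing that the gaps between successive iterates contract on a short time interval, can be illustrated on a toy scalar ODE. The sketch below (all numerical parameters are hypothetical) solves $u'=-u+\sin u$ by iterating the linear problems $(u^{n+1})'=-u^{n+1}+\sin u^{n}$ with forward Euler:

```python
import math

def iterate_once(u_prev, dt):
    """One linearized solve: (u^{n+1})' = -u^{n+1} + sin(u^n), forward Euler."""
    u = [u_prev[0]]
    for k in range(len(u_prev) - 1):
        u.append(u[-1] + dt * (-u[-1] + math.sin(u_prev[k])))
    return u

dt, steps = 1e-3, 1000            # toy time interval [0, 1]
u = [1.0] * (steps + 1)           # zeroth iterate: the constant initial guess
diffs = []
for n in range(6):                # a few Picard iterations
    u_new = iterate_once(u, dt)
    diffs.append(max(abs(a - b) for a, b in zip(u_new, u)))
    u = u_new
print(diffs)                      # sup-norm gaps shrink geometrically
```

The geometric decay of the gaps is the analogue of the contraction established in Step 2 below; summing the gaps shows that the iterates form a Cauchy sequence.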
\vskip 0.3cm \noindent \textbf{Step 1. Uniform bound on approximate solutions} \vskip 0.3cm We prove by induction that there exists $T_* \in (0,T]$, to be determined later, such that \begin{equation}\label{eq-indu-assup} \sup_{0\le t\le T_*}\|\bu^n(t)\|_{H^1}+\|\bu^n_t\|_{L^2(0,T_*;L^2)}+\|\nabla^2\bu^n\|_{L^2(0,T_*;L^2)}+ \|\nabla^2\bu^n\|_{L^s(0,T_*;L^q)}\le 2K_0 \|\bu_0\|_{H^1} \end{equation} for some constant $K_0>0$. Assuming that \eqref{eq-indu-assup} holds for some $n\in\bbn$, it suffices to prove that it holds with $n$ replaced by $n+1$. Denote by $R^{n+1}(t)$ the bound of the $v$-support of $f^{n+1}(t,x,v)$. We deduce from \eqref{eq-cs-estvbd} and Proposition \ref{prop-kine-cs-wp} (2) that \begin{equation}\label{eq-appro-f-norm} \begin{aligned} &\sup_{0\le t \le T_1}R^{n+1}(t)\le \left(R_0+\int_0^{T_1} \|\bu^n(t)\|_{L^{\infty}}dt\right)(1+T_1),\\ &\sup_{0\le t \le T_1}\|f^{n+1}(t)\|_{H^1_{\omega}}\le \|f_0\|_{H_{\omega}^1}\exp \left( C \int_0^{T_1} \Big(1+R^{n+1}(t)+\|\bu^n(t)\|_{W^{1,\infty}} \Big) dt \right). \end{aligned} \end{equation} Take $T_1:=T_1(R_0, \|\bu_0\|_{H^1})$ suitably small such that \begin{equation}\label{eq-appro-f-normtwo} \begin{aligned} &\sup_{0\le t \le T_1}R^{n+1}(t)\le 2R_0,\\ &\sup_{0\le t \le T_1}\|f^{n+1}(t)\|_{H^1_{\omega}}\le 2\|f_0\|_{H_{\omega}^1}. \end{aligned} \end{equation} We take the inner product of $\eqref{eq-cs-s-appro}_2$ with $2\bu^{n+1}$ to obtain \begin{equation}\label{eq-appro-usqua} \begin{aligned} \frac{d}{dt}\|\bu^{n+1}\|_{L^2}^2 +2\|\nabla \bu^{n+1}\|_{L^2}^2 =&2\int_{\bbr^3}\int_{\bbr^3}(v-\bu^{n+1}) \cdot \bu^{n+1}f^{n+1}dv dx\\ \le&2\int_{\bbr^3}\int_{\bbr^3}v \cdot \bu^{n+1}f^{n+1}dv dx.
\end{aligned} \end{equation} Taking the inner product of $\eqref{eq-cs-s-appro}_2$ with $-\Delta\bu^{n+1}$, we have \begin{equation}\label{eq-appro-dif-usqua} \begin{aligned} &\frac{d}{dt}\|\nabla\bu^{n+1}\|_{L^2}^2 +\|\Delta \bu^{n+1}\|_{L^2}^2\\ =&\int_{\bbr^3}\bu^n\cdot\nabla\bu^{n+1}\cdot\Delta \bu^{n+1} dx- \int_{\bbr^3}\int_{\bbr^3}(v-\bu^{n+1}) \cdot \Delta\bu^{n+1}f^{n+1}dv dx\\ \le&C\|\bu^n\|_{L^{\infty}}^2 \|\nabla\bu^{n+1}\|_{L^2}^2+ C\left\|\int_{\bbr^3} (v-\bu^{n+1})f^{n+1} dv\right\|_{L^2}^2+ \frac14\|\Delta \bu^{n+1}\|_{L^2}^2. \end{aligned} \end{equation} Taking the dot product of $\eqref{eq-cs-s-appro}_2$ with $2\bu^{n+1}_t$ and integrating the resulting equation over $\bbr^3$, we deduce that \begin{equation}\label{eq-appro-dif-gradusqua} \begin{aligned} &\frac{d}{dt}\|\nabla \bu^{n+1}\|_{L^2}^2 +2\|\bu^{n+1}_t\|_{L^2}^2\\ =&-2\int_{\bbr^3}\bu^n \cdot \nabla \bu^{n+1} \cdot \bu^{n+1}_t dx + 2\int_{\bbr^3}\int_{\bbr^3}(v-\bu^{n+1}) \cdot \bu^{n+1}_t f^{n+1} dv dx\\ \le& 2\|\bu^{n}\|_{L^{\infty}} \|\nabla \bu^{n+1}\|_{L^2} \|\bu^{n+1}_t\|_{L^2}+ 2\left\|\int_{\bbr^3} (v-\bu^{n+1})f^{n+1} dv\right\|_{L^2} \|\bu^{n+1}_t\|_{L^2}\\ \le&C\|\bu^{n}\|_{L^{\infty}}^2\|\nabla \bu^{n+1}\|_{L^2}^2+ C\left\|\int_{\bbr^3} (v-\bu^{n+1})f^{n+1} dv\right\|_{L^2}^2+ \|\bu_t^{n+1}\|_{L^2}^2, \end{aligned} \end{equation} Summing \eqref{eq-appro-usqua}, \eqref{eq-appro-dif-usqua} and \eqref{eq-appro-dif-gradusqua} together yields \begin{equation}\label{eq-appro-dif-uhonesqua} \begin{aligned} &\frac{d}{dt}\|\bu^{n+1}\|_{H^1}^2 +2\|\nabla\bu^{n+1}\|_{L^2}^2+\frac34\|\Delta \bu^{n+1}\|_{L^2}^2 +\|\bu_t^{n+1}\|_{L^2}^2\\ \le&C\|\bu^{n}\|_{L^{\infty}}^2\|\nabla \bu^{n+1}\|_{L^2}^2+ C\left\|\int_{\bbr^3} (v-\bu^{n+1})f^{n+1} dv\right\|_{L^2}^2\\ &+ 2\int_{\bbr^3}\int_{\bbr^3}v \cdot \bu^{n+1}f^{n+1}dv dx. \end{aligned} \end{equation} Take $T_2\le T_1$. 
It holds for $t\in [0, T_2]$ that \begin{equation}\label{eq-coultwo} \begin{aligned} &C\left\|\int_{\bbr^3} (v-\bu^{n+1})f^{n+1} dv\right\|_{L^2}^2\\ \le&C(R_0)\|f^{n+1}\|_{L_{\omega}^2}^2+ C(R_0)\|f^{n+1}\|_{L_{\omega}^2}^2\|\nabla \bu^{n+1}\|_{L^2}\|\nabla^2 \bu^{n+1}\|_{L^2}\\ \le&C(R_0)\|f^{n+1}\|_{L_{\omega}^2}^2+C(R_0)\|f^{n+1}\|_{L_{\omega}^2}^4\|\nabla \bu^{n+1}\|_{L^2}^2 +\frac14\|\Delta \bu^{n+1}\|_{L^2}^2, \end{aligned} \end{equation} and \begin{equation}\label{eq-couint} \begin{aligned} 2\int_{\bbr^3}\int_{\bbr^3}v \cdot \bu^{n+1}f^{n+1}dv dx \le& C(R_0)\|f^{n+1}\|_{L_{\omega}^2}\|\bu^{n+1}\|_{L^2}\\ \le&C(R_0)\|f^{n+1}\|_{L_{\omega}^2}^2+ \|\bu^{n+1}\|_{L^2}^2, \end{aligned} \end{equation} where we have used the Sobolev inequality \[ \|\bu^{n+1}\|_{L^{\infty}}\le C \|\nabla \bu^{n+1}\|_{L^2}^{\frac12}\|\nabla^2 \bu^{n+1}\|_{L^2}^{\frac12} \] and the elliptic estimate \[ \|\nabla^2 \bu^{n+1}\|_{L^2}\le C\|\Delta \bu^{n+1}\|_{L^2}. \] Substituting \eqref{eq-coultwo} and \eqref{eq-couint} into \eqref{eq-appro-dif-uhonesqua} leads to \begin{equation}\label{eq-appro-gron-uhonesqua} \begin{aligned} &\frac{d}{dt}\|\bu^{n+1}\|_{H^1}^2 +\frac12\|\Delta \bu^{n+1}\|_{L^2}^2 +\|\bu_t^{n+1}\|_{L^2}^2\\ \le&\Big(C+C\|\bu^{n}\|_{L^{\infty}}^2+C(R_0)\|f^{n+1}\|_{L_{\omega}^2}^4\Big) \|\bu^{n+1}\|_{H^1}^2+ C(R_0)\|f^{n+1}\|_{L_{\omega}^2}^2 \end{aligned} \end{equation} for $t\in [0, T_2]$. 
Solving the above Gronwall's inequality gives \begin{equation}\label{eq-appro-uhonesqua} \begin{aligned} &\sup_{0\le t\le T_2}\|\bu^{n+1}(t)\|_{H^1}^2 +\frac12\int_0^{T_2}\|\Delta \bu^{n+1}\|_{L^2}^2 dt + \int_0^{T_2}\|\bu_t^{n+1}\|_{L^2}^2dt\\ \le&\left(\|\bu_0\|_{H^1}^2+\int_0^{T_2}C(R_0)\|f^{n+1}\|_{L_{\omega}^2}^2dt\right)\exp\left(\int_0^{T_2}\Big(C+C\|\bu^{n}\|_{L^{\infty}}^2 +C(R_0)\|f^{n+1}\|_{L_{\omega}^2}^4\Big)dt\right) \end{aligned} \end{equation} Take $T_2:=T_2(R_0, \|\bu_0\|_{H^1}, \|f_0\|_{H_{\omega}^1})$ suitably small such that \begin{equation}\label{eq-u-nabtu-ut} \sup_{0\le t\le T_2}\|\bu^{n+1}(t)\|_{H^1}+\|\bu^{n+1}_t\|_{L^2(0,T_2;L^2)}+\|\nabla^2\bu^{n+1}\|_{L^2(0,T_2;L^2)}\le K_0 \|\bu_0\|_{H^1}. \end{equation} Take $T_3\le T_2$. Applying Proposition \ref{prop-s} to $\eqref{eq-cs-s-appro}_2-\eqref{eq-cs-s-appro}_3$ results in \begin{equation}\label{eq-appro-ulsq} \begin{aligned} \|\nabla^2\bu^{n+1}\|_{L^s(0,T_3;L^q)} \le&C\Bigg(\|\bu_0\|_{H^1}+\|\bu^n\cdot\nabla\bu^{n+1}\|_{L^s(0,T_3;L^q)}\\ &\quad+\left\|\int_{\bbr^3} (v-\bu^{n+1})f^{n+1} dv\right\|_{L^s(0,T_3;L^q)}\Bigg). \end{aligned} \end{equation} The last two terms in the right-hand side of \eqref{eq-appro-ulsq} are estimated as follows. 
\begin{equation}\label{eq-appro-ulsq-p1} \begin{aligned} &C\|\bu^n\cdot\nabla\bu^{n+1}\|_{L^s(0,T_3;L^q)}\\ \le&C\left(\int_0^{T_3}\|\bu^{n}\|_{L^{\infty}}^s\|\nabla\bu^{n+1}\|_{L^2}^{s(1-\theta)}\|\nabla^2\bu^{n+1}\|_{L^q}^{s\theta} dt\right)^{\frac1s}\\ \le&C\left(\int_0^{T_3}\|\bu^{n}\|_{L^{\infty}}^{\frac{s}{1-\theta}}\|\nabla\bu^{n+1}\|_{L^2}^{s} dt\right)^{\frac1s}+\frac12 \|\nabla^2\bu^{n+1}\|_{L^s(0,T_3;L^q)}\\ \le&C \sup_{0\le t\le T_3}\|\nabla\bu^{n+1}(t)\|_{L^2} \left(\int_0^{T_3}\|\bu^{n}\|_{L^{\infty}}^{\frac{s}{1-\theta}} dt\right)^{\frac1s}+\frac12 \|\nabla^2\bu^{n+1}\|_{L^s(0,T_3;L^q)}, \end{aligned} \end{equation} where we have used the following Gagliardo--Nirenberg inequality in $\bbr^3$, \begin{equation}\label{eq-gn-ineq} \|\nabla\bu^{n+1}\|_{L^q}\le C\|\nabla\bu^{n+1}\|_{L^2}^{1-\theta}\|\nabla^2\bu^{n+1}\|_{L^q}^{\theta} \quad \text{with $1-\frac3q=-\frac{1-\theta}{2}+\theta\left(2-\frac3q\right)$}. \end{equation} Using \eqref{eq-appro-f-normtwo} and the Sobolev inequality, we proceed to estimate the last term in the right-hand side of \eqref{eq-appro-ulsq}. \begin{equation}\label{eq-appro-ulsq-p2} \begin{aligned} &C\left\|\int_{\bbr^3} (v-\bu^{n+1})f^{n+1} dv\right\|_{L^s(0,T_3;L^q)}\\ \le& C(R_0)\sup_{0\le t\le T_3}\|f^{n+1}(t)\|_{H_{\omega}^1}T_3^{\frac1s}+ C(R_0)\sup_{0\le t\le T_3}\|f^{n+1}(t)\|_{H_{\omega}^1} \left(\int_0^{T_3}\|\bu^{n+1}\|_{L^{\infty}}^{s} dt\right)^{\frac1s}. 
\end{aligned} \end{equation} Substituting \eqref{eq-appro-ulsq-p1} and \eqref{eq-appro-ulsq-p2} into \eqref{eq-appro-ulsq} gives \begin{equation}\label{eq-appro-ulsq-subst} \begin{aligned} &\|\nabla^2\bu^{n+1}\|_{L^s(0,T_3;L^q)}\\ \le&C\|\bu_0\|_{H^1}+C \sup_{0\le t\le T_3}\|\nabla\bu^{n+1}(t)\|_{L^2} \left(\int_0^{T_3}\|\bu^{n}\|_{L^{\infty}}^{\frac{s}{1-\theta}} dt\right)^{\frac1s} \\&+ C(R_0)\sup_{0\le t\le T_3}\|f^{n+1}(t)\|_{H_{\omega}^1}T_3^{\frac1s}+ C(R_0)\sup_{0\le t\le T_3}\|f^{n+1}(t)\|_{H_{\omega}^1} \left(\int_0^{T_3}\|\bu^{n+1}\|_{L^{\infty}}^{s} dt\right)^{\frac1s}. \end{aligned} \end{equation} It is easy to see from \eqref{eq-sq-rela} and \eqref{eq-gn-ineq} that \[ \frac{s}{1-\theta}=\frac{(5q-6)s}{2q}<2 \quad \text{and} \quad 1<s<2. \] Using \eqref{eq-indu-assup}, \eqref{eq-appro-f-normtwo} and \eqref{eq-u-nabtu-ut}, we take $T_3:=T_3(R_0, \|\bu_0\|_{H^1}, \|f_0\|_{H_{\omega}^1})$ suitably small such that \begin{equation}\label{eq-appro-ulsqk} \|\nabla^2\bu^{n+1}\|_{L^s(0,T_3;L^q)}\le K_0\|\bu_0\|_{H^1}. \end{equation} Define $T_*:=\min\{T_1, T_2, T_3\}$. Adding \eqref{eq-appro-ulsqk} to \eqref{eq-u-nabtu-ut} results in \begin{equation}\label{eq-indu-assupnex} \sup_{0\le t\le T_*}\|\bu^{n+1}(t)\|_{H^1}+\|\bu^{n+1}_t\|_{L^2(0,T_*;L^2)}+\|\nabla^2\bu^{n+1}\|_{L^2(0,T_*;L^2)}+ \|\nabla^2\bu^{n+1}\|_{L^s(0,T_*;L^q)}\le 2K_0 \|\bu_0\|_{H^1}. \end{equation} From \eqref{eq-induini}, we know that \eqref{eq-indu-assup} holds for $n=0$. Thus, we prove by induction that \eqref{eq-indu-assup} holds for all $n \in \bbn$. \vskip 3mm \noindent \textbf{Step 2. Convergence of approximate solutions} \vskip 3mm Define \[ \overline{f}^{n+1}:=f^{n+1}-f^n,\quad \overline{\bu}^{n+1}:=\bu^{n+1}-\bu^n,\quad \overline{P}^{n+1}:=P^{n+1}-P^n.
\] It follows from \eqref{eq-cs-s-appro}-\eqref{eq-cs-s-approcau} that \begin{equation} \label{eq-cs-s-appro-dif} \begin{dcases} \overline{f}^{n+1}_t +\bu^n \cdot\nabla_{x} \overline{f}^{n+1} +\nabla_{v}\cdot\Big[L[f^{n+1}]\overline{f}^{n+1}+(\bu^n-v)\overline{f}^{n+1}\Big]\\ \qquad+\overline{\bu}^{n}\cdot\nabla_{x}f^n+\nabla_{v}\cdot\Big[L[\overline{f}^{n+1}]f^n +f^n \overline{\bu}^n\Big]=0,\\ \overline{\bu}^{n+1}_t +\bu^n\cdot \nabla \overline{\bu}^{n+1} +\overline{\bu}^n \cdot \nabla \bu^n+\nabla\overline{P}^{n+1}\\ \qquad=\Delta \overline{\bu}^{n+1} -\int_{\bbr^3}f^{n+1}\overline{\bu}^{n+1}dv +\int_{\bbr^3}(v-\bu^{n})\overline{f}^{n+1} dv,\\ \nabla\cdot\overline{\bu}^{n+1}=0, \end{dcases} \end{equation} with \begin{equation} \label{eq-cs-s-approini-dif} \overline{f}^{n+1}|_{t=0}=0, \quad \overline{\bu}^{n+1}|_{t=0}=0. \end{equation} Taking the inner product of $\eqref{eq-cs-s-appro-dif}_2$ with $\overline{\bu}^{n+1}$, we deduce that for $t\in [0, T_*]$, \begin{equation}\label{eq-app-udif-squa} \begin{aligned} &\frac12\frac{d}{dt}\|\overline{\bu}^{n+1}\|_{L^2}^2 +\|\nabla\overline{\bu}^{n+1}\|_{L^2}^2\\ \le& -\int_{\bbr^3} \overline{\bu}^n \cdot \nabla \bu^n\cdot \overline{\bu}^{n+1}dx+ \int_{\bbr^3}\int_{\bbr^3}\overline{f}^{n+1}(v-\bu^{n})dv\cdot\overline{\bu}^{n+1}dx\\ \le&C\|\nabla\overline\bu^{n}\|_{L^2}\|\nabla\bu^{n}\|_{L^3} \|\overline{\bu}^{n+1}\|_{L^2}+C\left\|\int_{\bbr^3}\overline{f}^{n+1}v dv\right\|_{L^{\frac65}} \|\nabla\overline{\bu}^{n+1}\|_{L^2}\\ &+C\left\|\int_{\bbr^3}\overline{f}^{n+1} dv\right\|_{L^{\frac65}}\|\bu^{n}\|_{L^{\infty}}\|\nabla\overline{\bu}^{n+1}\|_{L^2}\\ \le&C\|\nabla\bu^{n}\|_{L^3}^2\|\overline{\bu}^{n+1}\|_{L^2}^2 +C(R_0)\Big(1+\|\bu^{n}\|_{L^{\infty}}^2\Big)\left\|\overline{f}^{n+1}\right\|_{L^{\frac65}}^2\\ &+\frac12 \|\nabla\overline{\bu}^{n+1}\|_{L^2}^2 +\frac{1}{24} \|\nabla\overline{\bu}^{n}\|_{L^2}^2, \end{aligned} \end{equation} that is, \begin{equation}\label{eq-app-udif-squashort} \begin{gathered}
\frac{d}{dt}\|\overline{\bu}^{n+1}\|_{L^2}^2 +\|\nabla\overline{\bu}^{n+1}\|_{L^2}^2 \le C\|\nabla\bu^{n}\|_{L^3}^2\|\overline{\bu}^{n+1}\|_{L^2}^2 \\ +C(R_0)\Big(1+\|\bu^{n}\|_{L^{\infty}}^2\Big)\left\|\overline{f}^{n+1}\right\|_{L^{\frac65}}^2 +\frac{1}{12} \|\nabla\overline{\bu}^{n}\|_{L^2}^2. \end{gathered} \end{equation} Multiplying $\eqref{eq-cs-s-appro-dif}_1$ by $\frac65 \left|\overline{f}^{n+1}\right|^{\frac15}\text{sgn}\overline{f}^{n+1}$ leads to \begin{equation}\label{eq-app-kcswt-dif-frac} \begin{aligned} &\frac{\partial}{\partial t}\left|\overline{f}^{n+1}\right|^{\frac65} +\bu^n \cdot\nabla_{x} \left|\overline{f}^{n+1}\right|^{\frac65} +\nabla_{v}\cdot\bigg[L[f^{n+1}]\left|\overline{f}^{n+1}\right|^{\frac65} +(\bu^n-v)\left|\overline{f}^{n+1}\right|^{\frac65}\bigg]\\ =&-\frac15 \nabla_v\cdot L[f^{n+1}]\left|\overline{f}^{n+1}\right|^{\frac65} +\frac35\left|\overline{f}^{n+1}\right|^{\frac65}\\ &-\frac65 \left|\overline{f}^{n+1}\right|^{\frac15}\text{sgn}\overline{f}^{n+1}\overline{\bu}^{n}\cdot\nabla_x f^n\\ &-\frac65 \left|\overline{f}^{n+1}\right|^{\frac15}\text{sgn}\overline{f}^{n+1}\nabla_{v} \cdot \bigg(L[\overline{f}^{n+1}]f^n +\overline{\bu}^n f^n\bigg). \end{aligned} \end{equation} Integrating \eqref{eq-app-kcswt-dif-frac} over $\bbr^3\times \bbr^3$ gives \begin{equation}\label{eq-app-kcswt-dif-frac-gron} \begin{aligned} &\frac{d}{d t}\left\|\overline{f}^{n+1}\right\|_{L^{\frac65}}^{\frac65}\\ =&\int_{\bbr^3}\int_{\bbr^3}\Bigg(-\frac15 \nabla_v\cdot L[f^{n+1}]\left|\overline{f}^{n+1}\right|^{\frac65} +\frac35\left|\overline{f}^{n+1}\right|^{\frac65}\Bigg)dx dv \\ &-\int_{\bbr^3}\int_{\bbr^3}\frac65 \left|\overline{f}^{n+1}\right|^{\frac15}\text{sgn}\overline{f}^{n+1}\overline{\bu}^{n}\cdot\nabla_x f^ndx dv\\ &-\int_{\bbr^3}\int_{\bbr^3} \frac65 \left|\overline{f}^{n+1}\right|^{\frac15}\text{sgn}\overline{f}^{n+1}\nabla_{v} \cdot \bigg(L[\overline{f}^{n+1}]f^n +\overline{\bu}^n f^n\bigg) dx dv\\ =:&\sum_{i=1}^3N_i. 
\end{aligned} \end{equation} For $t\in [0, T_*]$, we estimate each $N_i$ $(i=1,2,3)$ as follows. \begin{equation*} \begin{aligned} N_1:=&\int_{\bbr^3}\int_{\bbr^3}\Bigg(-\frac15 \nabla_v\cdot L[f^{n+1}]\left|\overline{f}^{n+1}\right|^{\frac65} +\frac35\left|\overline{f}^{n+1}\right|^{\frac65}\Bigg)dx dv\\ \le& C \left\|\overline{f}^{n+1}\right\|_{L^{\frac65}}^{\frac65};\\ N_2:=& -\int_{\bbr^3}\int_{\bbr^3}\frac65 \left|\overline{f}^{n+1}\right|^{\frac15}\text{sgn}\overline{f}^{n+1}\overline{\bu}^{n}\cdot\nabla_x f^ndx dv\\ \le& C\left\|\overline{f}^{n+1}\right\|_{L^{\frac65}}^{\frac15} \left\|\nabla_x f^{n}\right\|_{L_x^{\frac32}L_v^{\frac65}} \|\nabla\overline{\bu}^n\|_{L^2}\\ \le& C(R_0)\|\nabla_x f^{n}\|_{L_{\omega}^2} \left\|\overline{f}^{n+1}\right\|_{L^{\frac65}}^{\frac15} \|\nabla\overline{\bu}^n\|_{L^2}; \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} N_3:=& -\int_{\bbr^3}\int_{\bbr^3} \frac65 \left|\overline{f}^{n+1}\right|^{\frac15}\text{sgn}\overline{f}^{n+1}\nabla_{v} \cdot \bigg(L[\overline{f}^{n+1}]f^n +\overline{\bu}^n f^n\bigg) dx dv\\ \le& C\left\|\overline{f}^{n+1}\right\|_{L^{\frac65}}^{\frac15} \left\|\overline{f}^{n+1}\right\|_{L^{1}} \left\|{f}^{n}\right\|_{L^{\frac65}}\\ &+C(R_0)\left\|\overline{f}^{n+1}\right\|_{L^{\frac65}}^{\frac15} \left\|\overline{f}^{n+1}\right\|_{L^{1}} \left\|\nabla_v{f}^{n}\right\|_{L^{\frac65}}\\ &+ C\left\|\overline{f}^{n+1}\right\|_{L^{\frac65}}^{\frac15} \left\|\nabla_v f^{n}\right\|_{L_x^{\frac32}L_v^{\frac65}} \|\nabla\overline{\bu}^n\|_{L^2}\\ \le& C(R_0)\|f^{n}\|_{H_{\omega}^1} \left\|\overline{f}^{n+1}\right\|_{L^{\frac65}}^{\frac15} \left\|\overline{f}^{n+1}\right\|_{L^{1}}+ C(R_0)\|\nabla_v f^{n}\|_{L_{\omega}^2} \left\|\overline{f}^{n+1}\right\|_{L^{\frac65}}^{\frac15} \|\nabla\overline{\bu}^n\|_{L^2}. \end{aligned} \end{equation*} In the estimate of $N_2$ and $N_3$, we have used the following inequalities. 
\begin{equation*} \begin{aligned} \left\|f^n\right\|_{L^{\frac65}}\le& C(R_0)\Bigg(\int_{\bbr^3}\int_{\bbr^3}|f^n|^2(1+|x|^2)^{\frac{2\alpha}{3}} dx dv \Bigg)^{\frac12}\|(1+|x|^2)^{-\frac{\alpha}{3}}\|_{L^3}\\ \le& C(R_0) \|f^n\|_{L_{\omega}^2}; \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} \left\|\nabla_x f^{n}\right\|_{L_x^{\frac32}L_v^{\frac65}} \le& C(R_0)\Bigg(\int_{\bbr^3}\int_{\bbr^3}|\nabla_{x}f^n|^2(1+|x|^2)^{\frac{\alpha}{3}} dx dv \Bigg)^{\frac12} \|(1+|x|^2)^{-\frac{\alpha}{6}}\|_{L^6}\\ \le& C(R_0) \|\nabla_{x}f^n\|_{L_{\omega}^2}. \end{aligned} \end{equation*} $\displaystyle\left\|\nabla_v{f}^{n}\right\|_{L^{\frac65}}$ and $\displaystyle\left\|\nabla_v f^{n}\right\|_{L_x^{\frac32}L_v^{\frac65}}$ are estimated similarly as above. Substituting the estimates on $N_i$ $(i=1,2,3)$ into \eqref{eq-app-kcswt-dif-frac-gron}, we deduce that for $t\in [0, T_*]$, \begin{equation}\label{eq-app-kcswt-dif-frac-gronshort} \begin{aligned} \frac{d}{d t}\left\|\overline{f}^{n+1}\right\|_{L^{\frac65}}^{2} \le&C(R_0)\bigg(1 + \|f^n\|_{H_{\omega}^1}^2 \bigg)\left\|\overline{f}^{n+1}\right\|_{L^{\frac65}}^{2} +\left\|\overline{f}^{n+1}\right\|_{L^1}^2 +\frac{1}{12} \|\nabla\overline{\bu}^n\|_{L^2}^2. \end{aligned} \end{equation} Similarly, we have for $t\in [0, T_*]$, \begin{equation}\label{eq-app-kcswt-dif-lonesqu-gron} \begin{aligned} \frac{d}{d t}\left\|\overline{f}^{n+1}\right\|_{L^{1}}^{2} \le&C(R_0)\bigg(1 + \|f^n\|_{H_{\omega}^1}^2 \bigg)\left\|\overline{f}^{n+1}\right\|_{L^{1}}^{2} +\frac{1}{12} \|\nabla\overline{\bu}^n\|_{L^2}^2. \end{aligned} \end{equation} Define \[ F^{n+1}(t):=\|\overline{\bu}^{n+1}\|_{L^2}^2 +\left\|\overline{f}^{n+1}\right\|_{L^{\frac65}}^2 +\left\|\overline{f}^{n+1}\right\|_{L^1}^2. 
\] Combining \eqref{eq-app-udif-squashort}, \eqref{eq-app-kcswt-dif-frac-gronshort}, and \eqref{eq-app-kcswt-dif-lonesqu-gron}, we deduce that for $t\in [0, T_*]$, \begin{equation}\label{eq-app-dif-gron} \begin{aligned} &\frac{d}{dt}F^{n+1} +\|\nabla\overline{\bu}^{n+1}\|_{L^2}^2\\ \le& C(R_0)\bigg(1 +\|\bu^n\|_{L^{\infty}}^2 +\|f^n\|_{H_{\omega}^1}^2 +\|\nabla{\bu}^{n}\|_{L^3}^2 \bigg) F^{n+1} +\frac14 \|\nabla\overline{\bu}^n\|_{L^2}^2. \end{aligned} \end{equation} Solving the above Gronwall inequality in $[0, T_0]$ for $0<T_0\le T_*$, we obtain \begin{equation}\label{eq-app-dif-est} \begin{aligned} \sup_{0\le t\le T_0}F^{n+1}(t) +\int_0^{T_0}\|\nabla\overline{\bu}^{n+1}(t)\|_{L^2}^2 dt \le \frac{A(T_0)}{4} \int_0^{T_0}\|\nabla\overline{\bu}^n(t)\|_{L^2}^2 dt, \end{aligned} \end{equation} where $A(T_0)$ is given by \[ A(T_0):=\exp\left(\int_0^{T_0} C(R_0)\bigg(1 +\|\bu^n\|_{L^{\infty}}^2 +\|f^n\|_{H_{\omega}^1}^2 +\|\nabla{\bu}^{n}\|_{L^3}^2 \bigg) dt \right). \] Using the uniform bounds on the approximate solutions, we take $T_0:=T_0(R_0, \|\bu_0\|_{H^1}, \|f_0\|_{H_{\omega}^1})$ suitably small, so that \[ A(T_0) \le 2. \] Thus, we have \begin{equation}\label{eq-app-dif-est-tzro} \sup_{0\le t\le T_0}F^{n+1}(t) +\int_0^{T_0}\|\nabla\overline{\bu}^{n+1}(t)\|_{L^2}^2 dt \le \frac{1}{2} \int_0^{T_0}\|\nabla\overline{\bu}^n(t)\|_{L^2}^2 dt. \end{equation} Summing \eqref{eq-app-dif-est-tzro} over all $n\in \bbn$ yields \begin{equation}\label{eq-app-dif-est-sum} \sum_{n=2}^{\infty}\sup_{0\le t\le T_0}F^n(t) +\frac{1}{2}\sum_{n=2}^{\infty} \int_0^{T_0}\|\nabla\overline{\bu}^n(t)\|_{L^2}^2 dt \le \frac{1}{2}\int_0^{T_0}\|\nabla\overline{\bu}^1(t)\|_{L^2}^2 dt.
\end{equation} We deduce from \eqref{eq-app-dif-est-sum} that there exists $(f, \bu)$ such that \begin{equation}\label{eq-app-conver-lowoder} \begin{aligned} &f^n \to f, \quad\text{in $C([0, T_0]; L^1)$, as $n \to \infty$};\\ &\bu^n \to \bu, \quad \text{in $C([0, T_0]; L^2)$, as $n \to \infty$};\\ &\bu^n \to \bu, \quad \text{in $L^2(0, T_0; D^1)$, as $n \to \infty$}. \end{aligned} \end{equation} From \eqref{eq-app-conver-lowoder}, it is easy to show that $(f, \bu)$ satisfies \eqref{eq-cs-ns} in the sense of distributions. \vskip 3mm \noindent \textbf{Step 3. Continuity in the strong solution space} \vskip 3mm Combining the uniform bound estimates \eqref{eq-indu-assup} and \eqref{eq-appro-f-normtwo} with \eqref{eq-app-conver-lowoder}, we deduce that \begin{equation}\label{eq-app-conver-wek} \begin{aligned} f^n &\rightharpoonup f, \quad\text{weakly-$\star$ in $L^{\infty}(0, T_0; H^1_{\omega})$, as $n \to \infty$};\\ \bu^n &\rightharpoonup \bu, \quad \text{weakly in $L^{2}(0, T_0; H^2)$, as $n \to \infty$};\\ \bu^n_t &\rightharpoonup \bu_t, \quad \text{weakly in $L^2(0, T_0; L^2)$, as $n \to \infty$};\\ \nabla^2\bu^n &\rightharpoonup \nabla^2\bu, \quad \text{weakly in $L^s(0, T_0; L^{q})$, as $n \to \infty$}. \end{aligned} \end{equation} It follows from \eqref{eq-app-conver-wek} that $\bu_t \in L^2(0, T_0; L^2)$ and $\bu \in L^2(0, T_0; H^{2})$. Using the regularity of $\bu$, we can show that $\bu\in C([0, T_0]; H^1)$. Thus, \[ \bu \in C([0, T_0]; H^1)\cap L^2(0, T_0; D^{2})\cap L^s(0, T_0; D^{2,q}). \] Employing Proposition \ref{prop-kine-cs-wp}, we infer that \begin{equation}\label{eq-conti-fwtsobo} f \in C([0, T_0]; H^1_{\omega}). \end{equation} Therefore, $(f,\bu)$ is the desired strong solution in the sense of Definition \ref{def-stro}. The uniqueness of strong solutions can be proved in the same way as in the derivation of \eqref{eq-app-dif-gron}. This completes the proof.
$\hfill \square$ \section{A Priori Estimates}\label{sec-apriori} \setcounter{equation}{0} In this section, we derive some a priori estimates on classical solutions to the coupled model, which are used to extend the local strong solutions. \begin{lemma}\label{lm-cs-s-emapriori} Let $0<R_0<\infty$. Assume that $f_0(x,v)\ge 0$, $f_0(x,v) \in H_{\omega}^1(\bbr^3 \times \bbr^3)\cap L^{\infty}(\bbr^3 \times \bbr^3)$, and $\bu_0(x) \in H^{1}(\bbr^3)$, with $\nabla \cdot \bu_0=0$ and the $v$-support of $f_0(x,v)$ satisfying \[ \text{supp}_{v}f_0(x,\cdot)\subseteq B(R_0) \quad \text{for a.e. $x \in \bbr^3$}. \] If $(f, \bu)$ is a classical solution in $[0, T]$, then it holds that \[ \begin{aligned} (1)&\ f\ge0 \ \text{and} \ \sup_{0\le t\le T}\|f(t)\|_{L^{\infty}}\le \|f_0\|_{L^{\infty}}\exp\big(CT\big),\quad \text{where $C:=C\Big(\|\varphi\|_{C^0}, \|f_0\|_{L^1}\Big)$};\\ (2)&\ \sup_{0\le t\le T}\|\rho(t)\|_{L^{p}}=\|\rho_0\|_{L^p}\le C\|f_0\|_{L^{\infty}}^{1-\frac1p}R_0^{3-\frac3p},\quad 1\le p\le\infty, \\ &\ \text{for some constant $C>0$};\\ (3)&\ E(T)+\frac12\int_0^T \int_{\bbr^6}\int_{\bbr^6}\varphi(|x-y|)f(t,y,v^*)f(t,x,v)|v^*-v|^2dy dv^* dx dv dt\\ & \qquad + \int_0^T \|\nabla \bu(t) \|_{L^2}^2 d t +\int_0^T \int_{\bbr^6} f(t,x,v)|\bu-v|^2 dx dv dt=E_0. \end{aligned} \] \end{lemma} \begin{proof} (1)\ Denote by $(X(t;x_0,v_0),V(t;x_0,v_0))$ the characteristic curve of $\eqref{eq-cs-ns}_1$ emanating from $(x_0,v_0)$. It satisfies \begin{equation} \label{eq-charac} \begin{dcases} \frac{d X}{d t}=\bu(t,X), \\ \frac{d V}{d t}=\int_{\bbr^{6}} \varphi(|X-y|)f(t, y,v^*)(v^*-V)dy dv^*+\bu(t,X)-V, \end{dcases} \end{equation} with initial data \begin{equation} \label{eq-charac-ini} X(0;x_0,v_0)=x_0, \qquad V(0;x_0,v_0)=v_0. \end{equation} Recall that \[ a(t,x)=\int_{\bbr^{6}} \varphi(|x-y|)f(t, y,v^*) dy dv^*, \] \[ \mathbf{b} (t,x)=\int_{\bbr^{6}} \varphi(|x-y|)f(t, y, v^*)v^* dy dv^*.
\] Solving the equation \eqref{eq-kine-cs} by the method of characteristics gives \begin{equation} \label{eq-kin-cs-positive} f(t,X(t;x_0,v_0),V(t;x_0,v_0))=f_0(x_0,v_0)\exp \left(3 \int_0^t \Big[1+a(\tau,X(\tau))\Big] d \tau \right)\ge 0. \end{equation} From \eqref{eq-kin-conser-mass}, \eqref{eq-kin-cs-positive} and the initial condition $f_0(x, v)\in L^{\infty}(\bbr^3\times\bbr^3)$, we deduce that \begin{equation} \label{eq-kin-cs-linfty} \sup_{0\le t\le T}\|f(t)\|_{L^{\infty}}\le \|f_0\|_{L^{\infty}}\exp\big(CT\big),\quad \text{where $C:=C\Big(\|\varphi\|_{C^0}, \|f_0\|_{L^1}\Big)$}. \end{equation} (2)\ Integrating $\eqref{eq-cs-ns}_1$ with respect to $v$ over $\bbr^3$ gives \begin{equation} \label{eq-parden-mascon} \rho_t+\bu\cdot\nabla\rho=0. \end{equation} Multiplying \eqref{eq-parden-mascon} by $p\rho^{p-1}$, $1\le p<\infty$, and integrating the resulting equation over $[0,t]\times\bbr^3$, we obtain that for $t\in [0,T]$ \begin{equation} \label{eq-parden-pnorm} \|\rho(t)\|_{L^p}=\|\rho_0\|_{L^p}\le \|\rho_0\|_{L^1}^{\frac1p}\|\rho_0\|_{L^\infty}^{1-\frac1p}\le C\|f_0\|_{L^{\infty}}^{1-\frac1p}R_0^{3-\frac3p} \end{equation} for some constant $C>0$, where we have used the divergence-free condition $\nabla\cdot\bu=0$.
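Explicitly, the cancellation used here is the standard one: since $\nabla\cdot\bu=0$, we have $\bu\cdot\nabla\rho^{p}=\nabla\cdot\big(\bu\,\rho^{p}\big)$, and hence
\[
\frac{d}{dt}\int_{\bbr^3}\rho^{p} dx
=-\int_{\bbr^3}\bu\cdot\nabla\rho^{p} dx
=-\int_{\bbr^3}\nabla\cdot\big(\bu\,\rho^{p}\big) dx
=0,
\]
so that $\|\rho(t)\|_{L^p}$ is conserved for each fixed $1\le p<\infty$.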
Since \begin{equation} \label{eq-parden-inftynorm} \|\rho(t)\|_{L^\infty}=\lim_{p\to\infty}\|\rho(t)\|_{L^p}=\lim_{p\to\infty}\|\rho_0\|_{L^p}=\|\rho_0\|_{L^\infty} \le C\|f_0\|_{L^{\infty}}R_0^{3}, \end{equation} we combine \eqref{eq-parden-pnorm} with \eqref{eq-parden-inftynorm} to obtain that for all $1\le p\le\infty$, \[ \sup_{0\le t\le T}\|\rho(t)\|_{L^{p}}=\|\rho_0\|_{L^p}\le C\|f_0\|_{L^{\infty}}^{1-\frac1p}R_0^{3-\frac3p},\quad \text{for some constant $C>0$.} \] (3)\ Multiplying $\eqref{eq-cs-ns}_1$ by $\frac12 |v|^2$, and integrating the resulting equation over $\bbr^3\times\bbr^3$, we have \begin{equation}\label{eq-cs-ener-dt} \begin{gathered} \frac{d}{dt}\int_{\bbr^6}\frac12 f |v|^2dx dv +\frac12 \int_{\bbr^6}\int_{\bbr^6}\varphi(|x-y|)f(t,y,v^*)f(t,x,v)|v^*-v|^2dy dv^* dx dv \\ =\int_{\bbr^6}fv\cdot(\bu-v)dx dv. \end{gathered} \end{equation} Taking the dot product of $\eqref{eq-cs-ns}_2$ with $\bu$, and integrating the resulting equation over $\bbr^3$, we deduce that \begin{equation}\label{eq-s-ener-dt} \frac12 \frac{d}{dt}\int_{\bbr^3} |\bu|^2 dx +\|\nabla \bu\|_{L^2}^2 =\int_{\bbr^6}f\bu\cdot(v-\bu)dx dv. \end{equation} Adding \eqref{eq-cs-ener-dt} to \eqref{eq-s-ener-dt}, and integrating the resulting equation over $[0,T]$, we arrive at Lemma \ref{lm-cs-s-emapriori}(3). This completes the proof. \end{proof} The following lemma is devoted to estimating $\|\bu\|_{L^3(0,T;L^9)}$. It plays a crucial role in the estimate of $\sup_{0\le t\le T}\|\nabla\bu(t)\|_{L^2}$, which is indispensable in applying Proposition \ref{prop-s} to the incompressible Navier--Stokes equations. \begin{lemma}\label{lm-u-serr-cond} Under the conditions in Theorem \ref{thm-exist}, if $(f, \bu)$ is a classical solution to \eqref{eq-cs-ns}-\eqref{eq-sys-inidata} in $[0, T]$, then it holds that \[ \|\bu\|_{L^3(0,T;L^9)}^3 \le C\Big(\|\bu_0\|_{L^3}^3+ \|\bu_0\|_{L^3}^6+(1+\|\rho_0\|_{L^{\infty}})E_0 \Big), \] for some constant $C>0$ independent of the initial data. 
\end{lemma} \begin{proof} Using the splitting approach, we decompose $\bu=\bar\bu +\bw$, with $\bar\bu$ and $\bw$ determined by the following equations: \begin{equation} \label{eq-ns-baru} \begin{dcases} \bar\bu_t+ \bu \cdot \nabla \bar\bu =\Delta \bar\bu, \\ \bar\bu|_{t=0}=\bu_0, \end{dcases} \end{equation} and \begin{equation} \label{eq-ns-w} \begin{dcases} \bw_t+ \bu \cdot \nabla \bw +\nabla P=\Delta \bw +\int_{\bbr^3}(v-\bu)fdv, \\ \bw|_{t=0}=0, \end{dcases} \end{equation} respectively. Taking the dot product of \eqref{eq-ns-baru} with $3|\bar\bu|\bar\bu$, and integrating the resulting equation over $\bbr^3$, we obtain \begin{equation}\label{eq-baru-ener-dt} \frac{d}{dt} \|\bar\bu\|_{L^3}^3 +3\int_{\bbr^3} |\bar\bu||\nabla \bar\bu|^2 dx\le 0, \end{equation} where we have used the fact that \[ \begin{aligned} -\int_{\bbr^3} |\bar\bu|\bar\bu \cdot\Delta\bar\bu dx=&\int_{\bbr^3} \nabla\Big(|\bar\bu|\bar\bu\Big):\nabla\bar\bu dx \ge \int_{\bbr^3} |\bar\bu||\nabla \bar\bu|^2 dx, \end{aligned} \] and \[ \begin{aligned} 3\int_{\bbr^3} \bu \cdot \nabla \bar\bu \cdot \bar\bu |\bar\bu|dx=&\int_{\bbr^3} \bu \cdot \nabla |\bar\bu|^3 dx =0. \end{aligned} \] Integrating \eqref{eq-baru-ener-dt} over $[0, T]$ gives \begin{equation}\label{eq-baru-ener} \sup_{0\le t\le T} \|\bar\bu(t)\|_{L^3}^3 +3\int_0^T\int_{\bbr^3} |\bar\bu||\nabla \bar\bu|^2 dx dt \le \|\bar\bu_0\|_{L^3}^3. \end{equation} For convenience of notation, we define $I(\bar\bu)$ as \[ I(\bar\bu):=\int_{\bbr^3} |\bar\bu||\nabla \bar\bu|^2 dx, \] and similarly for $I(\bw)$ below. The function $\bh(t,x)$ is defined as \[ \bh(t,x):= \int_{\bbr^3} (v-\bu)f dv.
\] Taking the dot product of \eqref{eq-ns-w} with $3|\bw|\bw$, and integrating the resulting equation over $\bbr^3$, we deduce that for $t\in [0, T]$, \begin{equation}\label{eq-w-ener-dt} \begin{aligned} \frac{d}{dt} \|\bw\|_{L^3}^3 +3I(\bw) \le& -3\int_{\bbr^3} |\bw| \bw \cdot\nabla P dx +3\int_{\bbr^3} |\bw| \bw \cdot \bh dx, \end{aligned} \end{equation} where the pressure $P$ is determined by the following equation: \[ \Delta P=-\nabla\cdot(\bu\cdot\nabla\bu)+\nabla\cdot\bh. \] However, the presence of the additional term $\nabla\cdot\bh$ prevents us from using integration by parts directly to estimate the term $\int_{\bbr^3} |\bw| \bw \cdot\nabla P dx$. We overcome this difficulty by decomposing $P=P_1+P_2$, with $P_1$ and $P_2$ governed by \[ \Delta P_1=-\nabla\cdot(\bu\cdot\nabla\bu)\quad\text{and}\quad \Delta P_2=\nabla\cdot\bh, \] respectively. Then it follows from the Calder\'on--Zygmund Theorem that \[ \|P_1\|_{L^{\frac94}}\le C\|\bu\|_{L^{\frac92}}^{2}\quad \text{and}\quad \|\nabla P_2\|_{L^{\frac32}}\le C\|\bh\|_{L^{\frac32}}, \] for some constant $C>0$. We estimate the right-hand side of \eqref{eq-w-ener-dt} as follows.
\begin{equation}\label{eq-w-ener-dt-rh} \begin{aligned} & -3\int_{\bbr^3} |\bw| \bw \cdot\nabla P dx +3\int_{\bbr^3} |\bw| \bw \cdot \bh dx\\ \le&-3\int_{\bbr^3} |\bw| \bw \cdot(\nabla P_1+\nabla P_2) dx +3\|\bw\|_{L^6}^2 \|\bh\|_{L^{\frac32}}\\ \le&3 \int_{\bbr^3} \nabla\cdot\Big(|\bw| \bw\Big) P_1 dx -3\int_{\bbr^3} |\bw| \bw \cdot \nabla P_2 dx +3\|\bw\|_{L^6}^2 \|\bh\|_{L^{\frac32}}\\ \le&C \int_{\bbr^3} |\bw| |\nabla \bw| |P_1| dx +C\|\bw\|_{L^6}^2 \|\nabla P_2\|_{L^{\frac32}} +3\|\bw\|_{L^6}^2 \|\bh\|_{L^{\frac32}}\\ \le&C \left\| |\bw|^{\frac12} |\nabla \bw| \right\|_{L^2} \|\bw\|_{L^9}^{\frac12} \|P_1\|_{L^{\frac94}} +C\|\bw\|_{L^6}^2 \|\bh\|_{L^{\frac32}}\\ \le&CI^{\frac23}(\bw) \|\bu\|_{L^{\frac92}}^{2} +C\|\bw\|_{L^6}^2 \|\bh\|_{L^{\frac32}}\\ \le&CI^{\frac23}(\bw) \bigg(\|\bar\bu\|_{L^{\frac92}}^{2} +\|\bw\|_{L^{\frac92}}^{2}\bigg) +C\|\bw\|_{L^6}^2 \|\bh\|_{L^{\frac32}}\\ \le&CI^{\frac23}(\bw) \|\bar\bu\|_{L^3}\|\bar\bu\|_{L^9} +CI^{\frac23}(\bw) \|\bw\|_{L^3}\|\bw\|_{L^9} +C\|\bw\|_{L^3}\|\bw\|_{L^9}^3 +\frac12\|\bh\|_{L^{\frac32}}^2\\ \le& CI^{\frac23}(\bw) \|\bar\bu\|_{L^3} I^{\frac13}(\bar\bu) +C\|\bw\|_{L^3} I(\bw) +\frac12\|\bh\|_{L^{\frac32}}^2\\ \le& I(\bw) +K_1\|\bw\|_{L^3}^3 I(\bw)+C\|\bar\bu\|_{L^3}^3 I(\bar\bu) +\Big(1+\|\rho_0\|_{L^{\infty}}\Big)\int_{\bbr^6}|v-\bu|^2f dx dv, \end{aligned} \end{equation} for some positive constants $C$ and $K_1$. In the above estimate, we have used the Sobolev inequality \begin{equation}\label{eq-w-sobo} \|\bw\|_{L^9}^3 \le C I(\bw) \quad\text{for some constant $C>0$,} \end{equation} similarly for $\|\bar\bu\|_{L^9}^3$, and the fact that \[ \begin{aligned} \frac12\|\bh\|_{L^{\frac32}}^2\le \|\bh\|_{L^{1}}^2 +\|\bh\|_{L^{2}}^2 \le\Big(1+\|\rho_0\|_{L^{\infty}}\Big)\int_{\bbr^6}|v-\bu|^2f dx dv. 
\end{aligned} \] Substituting \eqref{eq-w-ener-dt-rh} into \eqref{eq-w-ener-dt} and integrating the resulting equation over $[0,t]$, $t\in (0,T]$, we have by Lemma \ref{lm-cs-s-emapriori}(3) and \eqref{eq-baru-ener} that \begin{equation}\label{eq-w-ener} \begin{aligned} &\sup_{0\le \tau\le t} \|\bw(\tau)\|_{L^3}^3 +2\int_0^t I(\bw) d\tau \\ \le& K_1\int_0^t \|\bw\|_{L^3}^3 I(\bw) d\tau +C\sup_{0\le \tau\le t} \|\bar\bu(\tau)\|_{L^3}^3\int_0^t I(\bar\bu) d\tau\\ &+\Big(1+\|\rho_0\|_{L^{\infty}}\Big)\int_0^t \int_{\bbr^6}|v-\bu|^2f dx dv d\tau\\ \le&K_1\int_0^t \|\bw\|_{L^3}^3 I(\bw) d\tau + C\|\bu_0\|_{L^3}^6+(1+\|\rho_0\|_{L^{\infty}})E_0, \end{aligned} \end{equation} for some constants $K_1, C>0$, independent of the initial data. Since $\|\bw(0)\|_{L^3}^3=0$, there must exist $t^*\in (0, T]$ such that \begin{equation}\label{eq-w-ener-smal} \|\bw(t)\|_{L^3}^3 <2\e_0 \quad\text{for $t\in [0, t^*)$,} \end{equation} where $\e_0$ satisfies $2K_1\e_0=1$. If \[ C\|\bu_0\|_{L^3}^6+(1+\|\rho_0\|_{L^{\infty}})E_0\le\e_0, \] it follows from \eqref{eq-w-ener} and \eqref{eq-w-ener-smal} that \begin{equation}\label{eq-w-ener-est} \begin{aligned} &\sup_{0\le \tau\le t^*} \|\bw(\tau)\|_{L^3}^3 +\int_0^{t^*} I(\bw) d\tau \le C\|\bu_0\|_{L^3}^6+(1+\|\rho_0\|_{L^{\infty}})E_0\le\e_0. \end{aligned} \end{equation} By a continuity argument, we can show that $t^*=T$. Thus, it holds that \begin{equation}\label{eq-w-ener-esteqv} \begin{aligned} &\sup_{0\le t\le T} \|\bw(t)\|_{L^3}^3 +\int_0^{T} I(\bw) dt \le C\|\bu_0\|_{L^3}^6+(1+\|\rho_0\|_{L^{\infty}})E_0.
\end{aligned} \end{equation} Combining this with \eqref{eq-baru-ener} and \eqref{eq-w-sobo}, we deduce that \begin{equation}\label{eq-u-seri-cond} \begin{aligned} \|\bu\|_{L^3(0,T; L^9)}^3\le&C\Big(\|\bar\bu\|_{L^3(0,T; L^9)}^3+\|\bw\|_{L^3(0,T; L^9)}^3\Big)\\ \le& C\int_0^{T} I(\bar\bu) dt+C\int_0^{T} I(\bw) dt\\ \le& C\Big(\|\bu_0\|_{L^3}^3+ \|\bu_0\|_{L^3}^6+(1+\|\rho_0\|_{L^{\infty}})E_0 \Big), \end{aligned} \end{equation} for some constant $C>0$ independent of the initial data. \end{proof} In fact, $\bu\in {L^3(0,T; L^9)}$ satisfies the Serrin condition, which can be used to derive the estimate on $\sup_{0\le t\le T}\|\nabla \bu\|_{L^2}$. \begin{lemma}\label{lm-cs-s-graduinftapriori} Under the conditions in Theorem \ref{thm-exist}, if $(f, \bu)$ is a classical solution to \eqref{eq-cs-ns}-\eqref{eq-sys-inidata} in $[0, T]$, then it holds that \[ \sup_{0\le t\le T}\|\nabla \bu(t)\|_{L^2}^2 +\int_0^T \|\nabla^2\bu\|_{L^2}^2dt \le C\Big(\|\nabla \bu_0\|_{L^2}^2+\|\rho_0\|_{L^{\infty}}E_0\Big)\exp\left(C(\varepsilon_0+\varepsilon_0^{\frac12})\right), \] for some constant $C>0$ independent of the initial data. \end{lemma} \begin{proof} Taking the dot product of $\eqref{eq-cs-ns}_2$ with $-\Delta \bu$, and integrating the resulting equation over $\bbr^3$, we obtain \begin{equation}\label{eq-aprigu} \begin{aligned} &\frac{d}{dt}\|\nabla \bu \|_{L^2}^{2}+ K_2\|\nabla^2 \bu \|_{L^2}^{2}\\ \le&\int_{\bbr^3}\bu \cdot\nabla \bu\cdot\Delta \bu dx -\int_{\bbr^3}\int_{\bbr^3} (v-\bu ) f dv\cdot\Delta \bu dx\\ \le&\|\bu \|_{L^{9}} \|\nabla \bu \|_{L^{\frac{18}{7}}}\|\Delta \bu \|_{L^2}+\|\bh \|_{L^2}\|\Delta \bu \|_{L^2}\\ \le&C \|\bu \|_{L^9} \|\nabla \bu \|_{L^2}^{\frac23} \|\nabla^2 \bu \|_{L^2}^{\frac43} +C\|\bh \|_{L^2}\|\nabla^2 \bu \|_{L^2}\\ \le&\frac {K_2}{2} \|\nabla^2 \bu \|_{L^2}^2 +C\|\bu \|_{L^9}^{3}\|\nabla \bu \|_{L^2}^{2} +C\|\bh \|_{L^2}^2, \end{aligned} \end{equation} for some positive constants $K_2$ and $C$.
Here we have used the Gagliardo--Nirenberg inequality \begin{equation}\label{eq-gn-ulinfty} \|\nabla\bu \|_{L^{\frac{18}{7}}} \le C \|\nabla\bu \|_{L^2}^{\frac23} \|\nabla^2 \bu \|_{L^2}^{\frac13} \quad \text{in $\bbr^3$,} \end{equation} and the elliptic estimate \begin{equation}\label{eq-gtuelli} K_2\|\nabla^2 \bu \|_{L^2}^2 \le \|\Delta \bu \|_{L^2}^2. \end{equation} Using Lemma \ref{lm-u-serr-cond} and the fact that \[ \int_0^T \|\bh\|_{L^2}^2 dt \le \|\rho_0\|_{L^{\infty}}E_0, \] we solve the above Gronwall inequality \eqref{eq-aprigu} to obtain \begin{equation}\label{eq-gultwo} \sup_{0\le t\le T}\|\nabla \bu(t)\|_{L^2}^2+ \int_0^T \|\nabla^2 \bu \|_{L^2}^{2} dt \le C\Big(\|\nabla \bu_0\|_{L^2}^2+\|\rho_0\|_{L^{\infty}}E_0\Big)\exp\left(C(\varepsilon_0+\varepsilon_0^{\frac12})\right), \end{equation} for some constant $C>0$ independent of the initial data. \end{proof} Having obtained the estimate on $\sup_{0\le t\le T}\|\nabla \bu(t)\|_{L^2}$, we then use Proposition \ref{prop-s} to derive the estimate on $\|\nabla^2\bu\|_{L^s(0,T; L^q)}$, which leads to the estimate on $\int_0^T \|\bu(t)\|_{W^{1,\infty}} dt$ by interpolation. \begin{lemma}\label{lm-uwone-est} Under the conditions in Theorem \ref{thm-exist}, if $(f, \bu)$ is a classical solution to \eqref{eq-cs-ns}-\eqref{eq-sys-inidata} in $[0, T]$, then it holds that \[ \begin{aligned} &(1)\ R(t)\le R_0+C,\quad \text{for $t\in [0, T]$};\\ &(2)\ \|\nabla^2\bu\|_{L^s(0,T; L^q)}\le C+CT^{\frac1s};\\ &(3)\ \int_0^T \|\bu(t)\|_{W^{1,\infty}} dt \le C(1+T), \end{aligned} \] where $C:=C(R_0, E_0, \|\rho_0\|_{L^{\infty}}, \|\bu_0\|_{H^1},\|\bu_0\|_{L^3})$. \end{lemma} \begin{proof} (1)\ Solving the characteristic equation \eqref{eq-charac} yields \begin{equation}\label{eq-charac-v} \begin{aligned} V(t) =&v_0 \exp\left(-\int_0^t \Big[1+a(s,X(s))\Big]ds\right)\\ &+\int_0^t \Big[\mathbf{b}(\tau,X(\tau))+\bu(\tau,X(\tau))\Big]\exp\left(-\int_{\tau}^t \Big[1+a(s,X(s))\Big]ds\right) d\tau.
\end{aligned} \end{equation} Using Lemma \ref{lm-cs-s-emapriori}(3) and the Cauchy inequality, we have \begin{equation}\label{eq-b-bound} \sup_{0\le t\le T}\|\mathbf{b}(t)\|_{L^{\infty}}\le \left(\int_{\bbr^{6}} f |v|^2 dx dv \right)^{\frac12}\le E_0^{\frac12}. \end{equation} From \eqref{eq-charac-v}, we deduce that for $t\in [0, T]$, \begin{equation}\label{eq-v-bound} \begin{aligned} R(t) \le&R_0+\int_0^t \|\bu(\tau)\|_{L^{\infty}}^2 d\tau\\ &+\int_0^t \Big(1+\|\mathbf{b}(\tau)\|_{L^{\infty}}\Big)\exp\left(-\int_{\tau}^t \Big[1+a(s,X(s))\Big]ds\right) d\tau\\ \le& R_0+C\int_0^t \|\nabla\bu(\tau)\|_{L^2} \|\nabla^2\bu(\tau)\|_{L^2} d\tau\\ &+\bigg(1+\sup_{0\le \tau\le t}\|\mathbf{b}(\tau)\|_{L^{\infty}}\bigg) \int_0^t \Big[1+a(\tau,X(\tau))\Big]\exp\left(-\int_{\tau}^t \Big[1+a(s,X(s))\Big]ds\right) d\tau\\ \le&R_0+1+ E_0^{\frac12}+ C \int_0^t \|\nabla\bu(\tau)\|_{L^2}^2 d\tau +C\int_0^t \|\nabla^2\bu(\tau)\|_{L^2}^2 d\tau\\ \le& R_0 +C(R_0, E_0, \|\rho_0\|_{L^{\infty}}, \|\bu_0\|_{H^1},\|\bu_0\|_{L^3}), \end{aligned} \end{equation} where we have used Lemma \ref{lm-cs-s-emapriori}(3), Lemma \ref{lm-cs-s-graduinftapriori}, \eqref{eq-b-bound} and the Sobolev inequality \[ \|\bu(\tau)\|_{L^{\infty}}^2\le C\|\nabla\bu(\tau)\|_{L^2} \|\nabla^2\bu(\tau)\|_{L^2} \quad\text{in $\bbr^3$, for some constant $C>0$.} \] (2)\ Applying Proposition \ref{prop-s} to $\eqref{eq-cs-ns}_2-\eqref{eq-cs-ns}_3$, we obtain \begin{equation}\label{eq-ulsq-est} \begin{aligned} \|\nabla^2\bu\|_{L^s(0,T;L^q)} \le C\Big(\|\bu_0\|_{H^1}+\|\bu\cdot\nabla\bu\|_{L^s(0,T;L^q)}+\|\bh\|_{L^s(0,T;L^q)}\Big). \end{aligned} \end{equation} We estimate $\|\bu\cdot\nabla\bu\|_{L^s(0,T;L^q)}$ and $\|\bh\|_{L^s(0,T;L^q)}$ as follows. 
It is easy to see that \begin{equation}\label{eq-conv-lq} \begin{aligned} \|\bu\cdot\nabla\bu\|_{L^q}\le& \|\bu\|_{L^{\infty}} \|\nabla\bu \|_{L^q}\\ \le& C\|\nabla\bu \|_{L^2}^{1-\theta_1} \|\nabla^2\bu \|_{L^q}^{\theta_1} \|\nabla\bu \|_{L^2}^{1-\theta_2} \|\nabla^2\bu \|_{L^q}^{\theta_2}\\ \le&C\|\nabla\bu \|_{L^2}^{2-(\theta_1+\theta_2)} \|\nabla^2\bu \|_{L^q}^{\theta_1+\theta_2}, \end{aligned} \end{equation} where we have used the following Gagliardo--Nirenberg inequalities in $\bbr^3$, \[ \|\bu\|_{L^{\infty}}\le C\|\nabla\bu \|_{L^2}^{1-\theta_1} \|\nabla^2\bu \|_{L^q}^{\theta_1}\quad \text{and}\quad \|\nabla\bu \|_{L^q}\le C\|\nabla\bu \|_{L^2}^{1-\theta_2} \|\nabla^2\bu \|_{L^q}^{\theta_2}, \] for some constant $C>0$, with \[ -\frac{1-\theta_1}{2}+ \theta_1\left(2-\frac3q\right)=0, \quad -\frac{1-\theta_2}{2}+ \theta_2\left(2-\frac3q\right)=1-\frac3q. \] Thus, \begin{equation}\label{eq-conv-lsq} \begin{aligned} C\|\bu\cdot\nabla\bu\|_{L^s(0, T;L^q)}\le C\left(\int_0^T \|\nabla\bu\|_{L^{2}}^{rs}dt\right)^{\frac1s}+ \frac12 \|\nabla^2\bu\|_{L^s(0,T;L^q)}, \end{aligned} \end{equation} where $$r:=\frac{2-(\theta_1+\theta_2)}{1-(\theta_1+\theta_2)}=6-\frac6q.$$ Using Lemma \ref{lm-cs-s-emapriori}(2), Lemma \ref{lm-cs-s-graduinftapriori} and \eqref{eq-v-bound}, we have \begin{equation}\label{eq-h-lq} \begin{aligned} \|\bh\|_{L^q}=&\left\|\int_{\bbr^3}(v-\bu)f dv\right\|_{L^q} \\ \le& C\|\rho\|_{L^q}+ C\|\rho\|_{L^{\frac{6q}{6-q}}} \|\nabla\bu\|_{L^2}\\ \le& C(R_0, E_0, \|\rho_0\|_{L^{\infty}}, \|\bu_0\|_{H^1},\|\bu_0\|_{L^3}).
\end{aligned} \end{equation} Substituting \eqref{eq-conv-lsq} into \eqref{eq-ulsq-est}, we use Lemma \ref{lm-cs-s-graduinftapriori} and \eqref{eq-h-lq} to obtain \begin{equation}\label{eq-ulsq-est-sim} \begin{aligned} \|\nabla^2\bu\|_{L^s(0,T;L^q)} \le& C\|\bu_0\|_{H^1}+C\left(\int_0^T \|\nabla\bu\|_{L^{2}}^{rs}dt\right)^{\frac1s}+C\|\bh\|_{L^s(0,T;L^q)}\\ \le& C+CT^{\frac1s}, \end{aligned} \end{equation} where $C:=C(R_0, E_0, \|\rho_0\|_{L^{\infty}}, \|\bu_0\|_{H^1},\|\bu_0\|_{L^3}).$ \noindent (3)\ From the following Gagliardo--Nirenberg inequality in $\bbr^3$, \begin{equation}\label{eq-sobogu-linfty} \|\nabla\bu \|_{L^{\infty}} \le C \|\nabla \bu \|_{L^2}^{1-\theta_3} \|\nabla^2 \bu \|_{L^q}^{\theta_3} \quad \text{with $-\frac{1-\theta_3}{2}+ \theta_3\left(2- \frac3q \right)=1$,} \end{equation} we deduce that \begin{equation}\label{eq-grauone-infty} \begin{aligned} \int_0^T \|\nabla\bu \|_{L^{\infty}} dt\le &C \int_0^T \|\nabla \bu \|_{L^2}^{1-\theta_3} \|\nabla^2 \bu \|_{L^q}^{\theta_3} dt\\ \le& C \int_0^T \|\nabla \bu \|_{L^2} dt+ C \int_0^T \|\nabla^2 \bu \|_{L^q} dt\\ \le& C\left(\int_0^T \|\nabla\bu\|_{L^{2}}^{2}dt\right)^{\frac12}T^{\frac12}+ C\left(\int_0^T \|\nabla^2\bu\|_{L^{q}}^{s}dt\right)^{\frac1s}T^{1-{\frac1s}}\\ \le& C(1+T), \end{aligned} \end{equation} where $C:=C(R_0, E_0, \|\rho_0\|_{L^{\infty}}, \|\bu_0\|_{H^1},\|\bu_0\|_{L^3})$ and we have used Lemma \ref{lm-cs-s-emapriori}(3), along with \eqref{eq-ulsq-est-sim}. Similarly, \[ \int_0^T \|\bu \|_{L^{\infty}} dt\le \left(\int_0^T \|\bu \|_{L^{\infty}}^{2}dt\right)^{\frac12}T^{\frac12}\le C(1+T), \] where $C:=C(R_0, E_0, \|\rho_0\|_{L^{\infty}}, \|\bu_0\|_{H^1},\|\bu_0\|_{L^3})$. Combining this with \eqref{eq-grauone-infty}, we deduce that \[ \int_0^T \|\bu(t)\|_{W^{1,\infty}} dt \le C(1+T) \quad \text{for $C:=C(R_0, E_0, \|\rho_0\|_{L^{\infty}}, \|\bu_0\|_{H^1},\|\bu_0\|_{L^3})$.} \] This completes the proof.
\end{proof} \section{Proof of the Theorems}\label{sec-glob-exst} \setcounter{equation}{0} Combining the local existence result with the a priori estimates on classical solutions to the coupled system, we prove global existence of strong solutions by a continuity argument. \vskip 3mm \noindent \textit{Proof of Theorem \ref{thm-exist}.} From Proposition \ref{prop-loc-exist}, it follows that there exists some $T_0>0$ such that \eqref{eq-cs-ns}-\eqref{eq-sys-inidata} admits a unique strong solution in $[0, T_0]$. Taking the supremum over all such $T_0$, we define $T^*$ as the lifespan of the strong solution. Next, we demonstrate that $T^*=\infty$ by contradiction. Suppose not, i.e., $T^*<\infty$. We mollify the initial data by convolving with the standard mollifier, and then pass to the limit in the resulting approximate classical solutions. Under the conditions in Theorem \ref{thm-exist}, it follows from Proposition \ref{prop-kine-cs-wp} and Lemma \ref{lm-uwone-est} that the local strong solutions satisfy \begin{equation}\label{eq-rf-est} \begin{aligned} &\|f(t)\|_{H_{\omega}^1}\le \|f_0\|_{H_{\omega}^1}\exp\Big(C(1+t)\Big) \quad \text{for $t\in [0, T^*)$};\\ &R(t)\le R_0+C\quad \text{for $t\in [0, T^*)$}, \end{aligned} \end{equation} where $C:=C(R_0, E_0, \|\rho_0\|_{L^{\infty}}, \|\bu_0\|_{H^1},\|\bu_0\|_{L^3})$. Combining Lemmas \ref{lm-cs-s-emapriori} and \ref{lm-cs-s-graduinftapriori}, we deduce that for $t\in [0, T^*)$, \[ \|\bu(t)\|_{H^{1}}^2\le E_0 + C\Big(\|\nabla \bu_0\|_{L^2}^2+\|\rho_0\|_{L^{\infty}}E_0\Big)\exp\left(C(\varepsilon_0+\varepsilon_0^{\frac12})\right). \] Thus, we have \begin{equation}\label{eq-uhone-est} \|\bu(t)\|_{H^{1}}\le C\bigg(\|\nabla \bu_0\|_{L^2}+\Big(1+\|\rho_0\|_{L^{\infty}}^{\frac12}\Big)E_0^{\frac12}\bigg)\exp\left(C(\varepsilon_0+\varepsilon_0^{\frac12})\right) \end{equation} for $t\in [0, T^*)$ and some constant $C>0$.
By the continuity of $f(t)$ and $\bu(t)$, we can define \begin{equation}\label{eq-rfulim} \begin{aligned} &f(T^*):= \lim_{t \to T^*-}f(t) \quad \text{in $H_{\omega}^1(\bbr^3\times \bbr^3)$};\\ &\bu(T^*):=\lim_{t \to T^*-}\bu(t) \quad \text{in $H^1(\bbr^3)$}. \end{aligned} \end{equation} From $\eqref{eq-rf-est}_2$, we know \[ R(T^*)\le R_0+C(R_0, E_0, \|\rho_0\|_{L^{\infty}}, \|\bu_0\|_{H^1},\|\bu_0\|_{L^3}). \] Thus, we can take $\Big(f(T^*), \bu(T^*)\Big)$ as an initial datum, and use Proposition \ref{prop-loc-exist} to continue the local strong solution past $T^*$, which contradicts the definition of $T^*$. Therefore, $T^*=\infty$, i.e., the Cauchy problem \eqref{eq-cs-ns}-\eqref{eq-sys-inidata} admits a unique global-in-time strong solution. Using the equations $\eqref{eq-cs-ns}_1$ and $\eqref{eq-cs-ns}_2$, we deduce that \begin{equation}\label{eq-ali-uv-dt} \begin{aligned} &\frac{d}{dt}\int_{\bbr^6} |v-\bu|^2 f dxdv\\ =&\int_{\bbr^6} |v-\bu|^2 f_t dxdv -2\int_{\bbr^6} (v-\bu)\cdot\bu_t f dxdv\\ =&-\int_{\bbr^6}\Big[\bu \cdot \nabla_{x} f+ \nabla_{v} \cdot (L[f]f+(\bu-v)f)\Big] |v-\bu|^2 dxdv\\ &-2\int_{\bbr^6} \bigg(-\bu\cdot\nabla\bu-\nabla P+\Delta\bu+\int_{\bbr^3} (v-\bu) f dv\bigg)\cdot(v-\bu) f dxdv\\ =&\int_{\bbr^6} 2L[f]\cdot (v-\bu) f dxdv- 2\int_{\bbr^6} |v-\bu|^2 f dxdv\\ &+\int_{\bbr^6} 2(v-\bu)\cdot\nabla P f dxdv-2\int_{\bbr^6}\Delta\bu\cdot (v-\bu) f dxdv -2\int_{\bbr^3} |\bh|^2 dx\\ \le& \int_{\bbr^6}\int_{\bbr^6}\varphi(|x-y|)f(t,y,v^*)f(t,x,v)\Big(|v^*-v|^2 + |v-\bu|^2\Big)dy dv^* dx dv \\ &- 2\int_{\bbr^6} |v-\bu|^2 f dxdv-2\int_{\bbr^3} \bh\cdot\nabla P dx-2\int_{\bbr^3}\Delta\bu\cdot \bh dx -2\int_{\bbr^3} |\bh|^2 dx\\ \le&- \int_{\bbr^6} |v-\bu|^2 f dxdv +\|\nabla P\|_{L^2}^2 +\|\Delta\bu\|_{L^2}^2\\ & +\int_{\bbr^6}\int_{\bbr^6}\varphi(|x-y|)f(t,y,v^*)f(t,x,v)|v^*-v|^2dy dv^* dx dv.
\end{aligned} \end{equation} It follows from Proposition \ref{prop-s} that \begin{equation}\label{eq-dudp-ltwo} \begin{aligned} &\int_0^T \|\nabla P\|_{L^2}^2 +\|\Delta\bu\|_{L^2}^2 dt\\ \le& C\left(\|\bu_0\|_{H^1}^2 +\int_0^T \|\bu\cdot\nabla\bu\|_{L^2}^2 dt +\int_0^T \|\bh\|_{L^2}^2 dt\right)\\ \le&C\Bigg(\|\bu_0\|_{H^1}^2 +\sup_{0\le t\le T} \|\nabla\bu\|_{L^2}^2 \int_0^T \|\bu\|_{L^{\infty}}^2 dt +\|\rho_0\|_{L^{\infty}} \int_0^T \int_{\bbr^6} |v-\bu|^2 f dxdv dt\Bigg)\\ \le&C(R_0, E_0, \|\rho_0\|_{L^{\infty}}, \|\bu_0\|_{H^1},\|\bu_0\|_{L^3}), \end{aligned} \end{equation} for $T\in (0,\infty)$, where we have used Lemma \ref{lm-cs-s-emapriori}(3) and Lemma \ref{lm-cs-s-graduinftapriori}. Define \[ D(t):= \int_{\bbr^6}\int_{\bbr^6}\varphi(|x-y|)f(t,y,v^*)f(t,x,v)|v^*-v|^2dy dv^* dx dv +\|\nabla P\|_{L^2}^2 +\|\Delta\bu\|_{L^2}^2. \] Combining Lemma \ref{lm-cs-s-emapriori}(3) with \eqref{eq-dudp-ltwo}, we infer that for all $T\in (0, \infty)$, \begin{equation}\label{eq-dissi-est} \begin{aligned} \int_0^T D(t) dt \le C(R_0, E_0, \|\rho_0\|_{L^{\infty}}, \|\bu_0\|_{H^1},\|\bu_0\|_{L^3}). \end{aligned} \end{equation} Solving the Gronwall inequality \eqref{eq-ali-uv-dt} yields \begin{equation}\label{eq-ali-uv} \begin{aligned} \int_{\bbr^6} |v-\bu|^2 f dxdv\le& \int_{\bbr^6} |v-\bu_0|^2 f_0 dxdv e^{-t} +\int_0^t e^{-(t-s)} D(s) ds\\ \le&C e^{-t} + e^{-\frac{t}{2}} \int_0^{\frac{t}{2}} D(s) ds+ \int_{\frac{t}{2}}^t D(s) ds. \end{aligned} \end{equation} It follows from \eqref{eq-dissi-est} and \eqref{eq-ali-uv} that \[ \lim_{t \to \infty}\int_{\bbr^6} |v-\bu|^2 f dxdv =0. \] Thus the proof of Theorem \ref{thm-exist} is completed. $\hfill \square$ If, in addition, $\bu_0\in L^1$, then a quantitative decay rate of $E(t)$ can be obtained by means of the Fourier splitting method.
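Before turning to the proof, we sketch the Fourier splitting mechanism on the model case of the heat equation $\bu_t=\Delta\bu$ (a standard computation, included only for orientation). In this case $|\hat\bu(t,\xi)|\le|\hat\bu_0(\xi)|\le\|\bu_0\|_{L^1}$, and splitting frequency space at $|\xi|=c(t+c^2)^{-\frac12}$ with $c^2=3$ gives
\[
\frac{d}{dt}\Big((t+c^2)^{3}\|\bu\|_{L^2}^2\Big)
\le 2c^2(t+c^2)^{2}\int_{|\xi|\le c(t+c^2)^{-\frac12}}|\hat\bu(t,\xi)|^2 d\xi
\le C\|\bu_0\|_{L^1}^2 (t+c^2)^{\frac12},
\]
so that $\|\bu(t)\|_{L^2}^2\le C\big(\|\bu_0\|_{L^2}^2+\|\bu_0\|_{L^1}^2\big)(1+t)^{-\frac32}$ after integration in time. The proof below runs the same scheme on $E(t)$, with the low-frequency part of $\hat\bu$ controlled via a bootstrap argument.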
\vskip 3mm \noindent \textit{Proof of Theorem \ref{thm-beha}.} Adding \eqref{eq-cs-ener-dt} to \eqref{eq-s-ener-dt} yields \begin{equation}\label{eq-elener-dt} \begin{aligned} \frac{d}{dt}E(t) +\|\nabla\bu\|_{L^2}^2+\int_{\bbr^6}|v-\bu|^2 f dxdv \le 0. \end{aligned} \end{equation} Note that \begin{equation}\label{eq-du-fsplit} \begin{aligned} \|\nabla\bu\|_{L^2}^2=& \int_{\bbr^3} |\hat\bu(\xi)|^2 |\xi|^2 d\xi\\ \ge&\int_{|\xi|\ge c(t+c^2)^{-\frac12}} |\hat\bu(t,\xi)|^2 |\xi|^2 d\xi\\ \ge& \frac{c^2}{t+c^2} \int_{|\xi|\ge c(t+c^2)^{-\frac12}} |\hat\bu(t,\xi)|^2d\xi\\ =& \frac{c^2}{t+c^2} \int_{\bbr^3} |\bu(t,x)|^2 dx- \frac{c^2}{t+c^2} \int_{|\xi|\le c(t+c^2)^{-\frac12}} |\hat\bu(t,\xi)|^2d\xi, \end{aligned} \end{equation} where $c>0$ is a constant to be determined later. Here we have used the Plancherel Theorem \[ \|\hat\bu(t)\|_{L^2}^2=\|\bu(t)\|_{L^2}^2. \] Substituting \eqref{eq-du-fsplit} into \eqref{eq-elener-dt} results in \begin{equation}\label{eq-elener-fs-dt} \begin{aligned} \frac{d}{dt}E(t) +\frac{c^2}{t+c^2}\left(\|\bu\|_{L^2}^2+\int_{\bbr^6}|v-\bu|^2 f dxdv \right)\le \frac{c^2}{t+c^2} \int_{|\xi|\le c(t+c^2)^{-\frac12}} |\hat\bu(t,\xi)|^2d\xi. \end{aligned} \end{equation} We observe that \begin{equation}\label{eq-ener-eqv} \begin{aligned} &\|\bu\|_{L^2}^2+\int_{\bbr^6}|v|^2 f dxdv\\ \le& \|\bu\|_{L^2}^2+ 2\int_{\bbr^6}|v-\bu|^2 f dxdv +2\int_{\bbr^6}|\bu|^2 f dxdv\\ \le& \|\bu\|_{L^2}^2+ 2\int_{\bbr^6}|v-\bu|^2 f dxdv +2\|\rho_0\|_{L^{\infty}} \|\bu\|_{L^2}^2\\ \le&2 (1+\|\rho_0\|_{L^{\infty}}) \left(\|\bu\|_{L^2}^2+\int_{\bbr^6}|v-\bu|^2 f dxdv \right), \end{aligned} \end{equation} where we have used Lemma \ref{lm-cs-s-emapriori}(2). Substituting \eqref{eq-ener-eqv} into \eqref{eq-elener-fs-dt} gives \begin{equation}\label{eq-elener-fsn-dt} \begin{aligned} \frac{d}{dt}E(t) +\frac{c^2}{(t+c^2)(1+\|\rho_0\|_{L^{\infty}})}E(t) \le \frac{c^2}{t+c^2}\int_{|\xi|\le c(t+c^2)^{-\frac12}} |\hat\bu(t,\xi)|^2d\xi.
\end{aligned} \end{equation} Take $c^2=3(1+\|\rho_0\|_{L^{\infty}})$. Then \eqref{eq-elener-fsn-dt} becomes \begin{equation}\label{eq-elener-fsngro-dt} \begin{aligned} \frac{d}{dt}E(t) +\frac{3}{t+c^2}E(t) \le \frac{c^2}{t+c^2}\int_{|\xi|\le c(t+c^2)^{-\frac12}} |\hat\bu(t,\xi)|^2d\xi. \end{aligned} \end{equation} Applying the Fourier transform to $\eqref{eq-cs-ns}_2$ leads to \begin{equation}\label{eq-flueq-ftrans} \begin{aligned} \hat\bu_t+\widehat{\bu\cdot\nabla\bu} +\widehat{\nabla P}=-|\xi|^2\hat\bu +\hat\bh. \end{aligned} \end{equation} It follows from \eqref{eq-flueq-ftrans} that \begin{equation}\label{eq-flueq-ftrans-uest} \begin{aligned} |\hat\bu(t,\xi)|\le& |\hat\bu_0(\xi)| e^{-t|\xi|^2}\\ &+\int_0^t e^{-(t-s)|\xi|^2} \Big(|\widehat{\bu\cdot\nabla\bu}|(s,\xi) +|\widehat{\nabla P}|(s,\xi)+|\hat\bh|(s,\xi) \Big) ds. \end{aligned} \end{equation} Using properties of the Fourier transform and Young's inequality for convolutions, we have \begin{equation}\label{eq-flueq-ftrans-convest} \begin{aligned} |\widehat{\bu\cdot\nabla\bu}|(s,\xi)= |\widehat{\nabla\cdot(\bu\otimes\bu)}|(s,\xi)\le|\xi|\cdot |\widehat{\bu\otimes\bu}|(s,\xi)\le C |\xi| \cdot \|\bu(s)\|_{L^2}^2, \end{aligned} \end{equation} for some constant $C>0$. Here $\bu\cdot\nabla\bu=\nabla\cdot(\bu\otimes\bu)$ is due to the fact that $\nabla\cdot\bu=0$. Taking the divergence of $\eqref{eq-cs-ns}_2$ yields \begin{equation}\label{eq-lapl-p} \begin{aligned} \Delta P=-\nabla\cdot(\bu\cdot\nabla\bu)+\nabla\cdot\bh. \end{aligned} \end{equation} From \eqref{eq-flueq-ftrans-convest} and \eqref{eq-lapl-p}, we deduce that \begin{equation}\label{eq-dp-ftrans-est} \begin{aligned} |\widehat{\nabla P}|(s,\xi)\le& C\Big(|\widehat{\bu\cdot\nabla\bu}|(s,\xi)+ |\hat{\bh}|(s,\xi)\Big)\\ \le& C |\xi|\cdot \|\bu(s)\|_{L^2}^2+ C|\hat{\bh}|(s,\xi), \end{aligned} \end{equation} for some constant $C>0$.
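In detail, \eqref{eq-dp-ftrans-est} follows by taking the Fourier transform of \eqref{eq-lapl-p}:
\[
-|\xi|^2\hat P(s,\xi)=-i\xi\cdot\widehat{\bu\cdot\nabla\bu}(s,\xi)+i\xi\cdot\hat\bh(s,\xi),
\quad\text{so that}\quad
|\widehat{\nabla P}|(s,\xi)=|\xi|\,|\hat P|(s,\xi)\le |\widehat{\bu\cdot\nabla\bu}|(s,\xi)+|\hat{\bh}|(s,\xi),
\]
and the first term on the right-hand side is bounded by \eqref{eq-flueq-ftrans-convest}.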
Substituting \eqref{eq-flueq-ftrans-convest} and \eqref{eq-dp-ftrans-est} into \eqref{eq-flueq-ftrans-uest}, we obtain \begin{equation}\label{eq-flueq-ftrans-uestsim} \begin{aligned} |\hat\bu(t,\xi)|\le& |\hat\bu_0(\xi)| e^{-t|\xi|^2} +C \int_0^t e^{-(t-s)|\xi|^2} \Big( |\xi|\cdot \|\bu(s)\|_{L^2}^2+ |\hat{\bh}|(s,\xi) \Big) ds\\ \le&\|\bu_0\|_{L^1} +C\int_0^t |\xi|\cdot \|\bu(s)\|_{L^2}^2 ds +C\int_0^t \|\bh(s)\|_{L^1}ds, \end{aligned} \end{equation} for some constant $C>0$, where we have used the Hausdorff--Young inequality \[ \|\hat\bu_0\|_{L^{\infty}}\le \|\bu_0\|_{L^1}, \] and similarly for $|\hat{\bh}|(s,\xi)$. From Lemma \ref{lm-cs-s-emapriori}(3), it is easy to see that \[ E(t)\le E_0 \quad \text{for all $t\in [0, \infty)$}. \] Thus, for some $T>0$, there exists a sufficiently large constant $K>1$ such that \begin{equation}\label{eq-ener-bootassum} \begin{aligned} E(t)<K\Big(1+t\Big)^{-\frac98} \quad \text{for $t\in [0, T]$}. \end{aligned} \end{equation} Denote by $T_m$ the supremum of all $T$ for which \eqref{eq-ener-bootassum} holds. We prove by contradiction that $T_m=\infty$. Suppose $T_m<\infty$. Then it holds that \begin{equation}\label{eq-ener-bootassum-tm} \begin{aligned} E\Big(T_m\Big)=K\Big(1+T_m \Big)^{-\frac98}. \end{aligned} \end{equation} It is easy to see that \begin{equation}\label{eq-u-bootassum} \begin{aligned} \|\bu(t)\|_{L^2}^2\le 2E(t)< 2K\Big(1+t\Big)^{-\frac98} \quad \text{for $t\in [0, T_m)$}. \end{aligned} \end{equation} Multiplying \eqref{eq-elener-dt} by $\Big(1+t\Big)^{\frac{17}{16}}$ yields \begin{equation}\label{eq-tweig-ener} \begin{aligned} \frac{d}{dt} \left(\Big(1+t\Big)^{\frac{17}{16}}E(t)\right) +\Big(1+t\Big)^{\frac{17}{16}}\int_{\bbr^6}|v-\bu|^2 f dxdv \le \frac{17}{16}\Big(1+t\Big)^{\frac{1}{16}}E(t).
\end{aligned} \end{equation} Integrating \eqref{eq-tweig-ener} over $[0, T_m]$, we have \begin{equation}\label{eq-tweig-coupest} \begin{aligned} \int_0^{T_m}\Big(1+t\Big)^{\frac{17}{16}}\int_{\bbr^6}|v-\bu|^2 f dxdvdt \le E_0+CK \end{aligned} \end{equation} for some constant $C>0$, where we have used the fact that \[ E(t)<K\Big(1+t\Big)^{-\frac98} \quad \text{for $t\in [0, T_m)$}. \] From \eqref{eq-tweig-coupest}, we infer that \begin{equation}\label{eq-coupest} \begin{aligned} C\int_0^{T_m}\|\bh(s)\|_{L^1} ds \le& C\int_0^{T_m}\Big(1+s\Big)^{-\frac{17}{32}}\Big(1+s\Big)^{\frac{17}{32}}\|\bh(s)\|_{L^1} ds \\ \le&C\left(\int_0^{T_m}\Big(1+s\Big)^{-\frac{17}{16}}ds\right)^{\frac12} \left(\int_0^{T_m}\Big(1+s\Big)^{\frac{17}{16}}\|\bh(s)\|_{L^1}^2 ds\right)^{\frac12}\\ \le&C\left( \int_0^{T_m}\Big(1+s\Big)^{\frac{17}{16}}\int_{\bbr^6}|v-\bu|^2 f dxdvds\right)^{\frac12}\\ \le&C(E_0) \Big(1+K\Big)^{\frac12}, \end{aligned} \end{equation} where we have used the Cauchy--Schwarz inequality together with the bound $\|\bh(s)\|_{L^1}^2\le C\int_{\bbr^6}|v-\bu|^2 f dxdv$. Substituting \eqref{eq-u-bootassum} and \eqref{eq-coupest} into \eqref{eq-flueq-ftrans-uestsim}, we deduce that \begin{equation}\label{eq-ftrans-usquest} \begin{aligned} |\hat\bu(t,\xi)|^2 \le C+CK +CK^2 |\xi|^2 \quad \text{for $t\in [0, T_m]$}, \end{aligned} \end{equation} where $C:=C(\|\bu_0\|_{L^1}, E_0)$. It follows from \eqref{eq-ftrans-usquest} that \begin{equation}\label{eq-ftrans-usquestlow} \begin{aligned} \int_{|\xi|\le c(t+c^2)^{-\frac12}} |\hat\bu(t,\xi)|^2d\xi \le CK\Big(1+t\Big)^{-\frac{3}{2}}+ CK^2 \Big(1+t\Big)^{-\frac{5}{2}} \quad \text{for $t\in [0, T_m]$}, \end{aligned} \end{equation} where $C:=C(\|\rho_0\|_{L^{\infty}}, \|\bu_0\|_{L^1}, E_0)$. Combining \eqref{eq-elener-fsngro-dt} with \eqref{eq-ftrans-usquestlow}, we have \begin{equation}\label{eq-ener-decay} \begin{aligned} E(t) \le CK\Big(1+t\Big)^{-\frac{3}{2}}+ CK^2 \Big(1+t\Big)^{-\frac{5}{2}} \quad \text{for $t\in [0, T_m]$}, \end{aligned} \end{equation} where $C:=C(\|\rho_0\|_{L^{\infty}}, \|\bu_0\|_{L^1}, E_0)$. Since \[ E\Big(T_m\Big)=K\Big(1+T_m \Big)^{-\frac98}\le E_0, \] we have \begin{equation}\label{eq-tm-est} 1+T_m\ge \left(\frac{K}{E_0}\right)^{\frac89}.
\end{equation} From \eqref{eq-ener-decay} and \eqref{eq-tm-est}, we infer that \begin{equation}\label{eq-ener-decay-impr} \begin{aligned} E\Big(T_m\Big) \le& \bigg[C\Big(1+T_m \Big)^{-\frac38}+CK \Big(1+T_m \Big)^{-\frac{11}{8}} \bigg] K\Big(1+T_m \Big)^{-\frac98}\\ \le& CK^{-\frac29} K\Big(1+T_m \Big)^{-\frac98}, \end{aligned} \end{equation} for some constant $C:=C(\|\rho_0\|_{L^{\infty}}, \|\bu_0\|_{L^1}, E_0)$. Take $K$ suitably large such that \[ CK^{-\frac29}=\frac12. \] Then we have $E\Big(T_m\Big) \le \frac K2 \Big(1+T_m \Big)^{-\frac98}$, which contradicts \eqref{eq-ener-bootassum-tm}. Therefore, $T_m=\infty$, and it follows from \eqref{eq-ener-decay} that \begin{equation}\label{eq-ener-decay-op} \begin{aligned} E(t) \le C\Big(1+t\Big)^{-\frac{3}{2}} \end{aligned} \end{equation} for some constant $C:=C(\|\rho_0\|_{L^{\infty}}, \|\bu_0\|_{L^1}, E_0)$. It is easy to deduce from \eqref{eq-ener-decay-op} that \[ \|\bu(t)\|_{L^2} \le C\Big(1+t\Big)^{-\frac{3}{4}} \] and \begin{equation*}\label{eq-ener-decay-final} \begin{aligned} \int_{\bbr^6} |v-\bu|^2 f dxdv\le&2 \int_{\bbr^6} |v|^2 f dxdv+ 2\int_{\bbr^6} |\bu|^2 f dxdv\\ \le& 2 \int_{\bbr^6} |v|^2 f dxdv+ 2\|\rho_0\|_{L^{\infty}} \|\bu(t)\|_{L^2}^2\\ \le& CE(t) \le C\Big(1+t\Big)^{-\frac32}, \end{aligned} \end{equation*} for some constant $C:=C(\|\rho_0\|_{L^{\infty}}, \|\bu_0\|_{L^1}, E_0)$, where we have used Lemma \ref{lm-cs-s-emapriori}(2). This completes the proof of Theorem \ref{thm-beha}. $\hfill \square$ \vskip 3mm \noindent\textbf{Acknowledgements}. Chunyin Jin is supported by NSFC Grant No. 12001530.
\section{Conclusion}\label{sec:conclusion} In this paper we consider the design, implementation and evaluation of virtual network migration on GENI as a working example of a future SDN-enabled wide-area infrastructure. VN migration adds network agility to the repertoire of network functions and provides a mechanism that enables the deployment of important policies for resource management, energy conservation and attack defense. We show how agility can be enabled on top of GENI's slicing approach, enumerate and address challenges in the design of efficient VN migration mechanisms, and develop and deploy an implementation of a controller architecture that achieves our VN migration objectives. We perform a set of experiments that help us understand the implications of various design decisions and network parameters on VN migration performance. Our work also exposes some limitations of the current design of GENI. \section{Mitigating GENI Limitations}\label{sec:suggestion} When we started our work, our goal was to design, implement, and evaluate an efficient VN migration mechanism in GENI as an example of a future SDN-enabled wide-area network. While it is possible to deploy virtual networks on GENI and use a proper remote scheduling implementation to enable live migration, we observe that some GENI limitations complicate the design. These constraints are not only particular to our VN migration research, but may also apply to other types of experimentation. We summarize the features that are not well supported by GENI. This will aid in future GENI development and also in informing the designs of GENI-inspired SDN-enabled wide-area infrastructure. \subsection{Interaction with the Substrate Network} GENI deploys virtualization architectures to share physical resources for simultaneous experiments. It provides multiple models of virtualization to cater for different levels of performance and isolation requirements.
However, experimenters are only free to select from the provided virtualization models, and do not have the privilege to modify the model or build an alternative one. In particular, we have the following constraints if our experimentation explores the interaction between the virtualized architectures and the substrate networks. \paragraph{Little knowledge about substrate networks} In the current GENI context, we only have access to limited information about the substrate network, such as the geographical information about GENI aggregates and the VM load on each physical machine. Without sufficient real-time information about the physical substrate, it is difficult or impossible to implement an algorithm that interacts with the substrate network. For example, some VN migration research may require real-time statistics about the substrate network to determine when to trigger the migration and where to migrate a VN. More generally, to support experimentation where the placement of the virtual topology is affected by the performance of the substrate network, we expect GENI to expose more network statistics such as link utilization, throughput, and latency. \paragraph{Difficulty in debugging} The virtualization techniques are deployed on GENI to support simultaneous experiments on limited physical infrastructure. However, a virtualized architecture not only implies a trade-off between performance and isolation, but also makes debugging challenging. The virtualization architecture may bring unexpected problems, and the limited access to the physical substrate further increases the difficulty in debugging. In our VN migration research, we had a hard time finding the cause of the duplicated packets when shared VLANs are used. We can only debug by observing the traffic in the virtual topology and inferring what is happening in the physical substrate. We expect GENI to develop efficient debugging tools to make the debugging process easier.
Besides, it is impossible to debug without a deep understanding of the mechanisms (e.g., how shared VLANs work). Most GENI tutorials only introduce how to use their features. It would be helpful if GENI could include more of the architectural design of its features in the tutorials. \paragraph{No control of substrate networks} We have flexibility to assign bandwidth to our virtual links in the reservation stage, but we cannot adjust parameters for the substrate network. Therefore, it is difficult to evaluate an algorithm with bandwidth constraints. This constraint makes it difficult to observe how dynamics in the physical substrate, such as changes in bandwidth or latency, can affect the performance of a virtualized architecture. \subsection{Multi-domain Network Research} In GENI, a slice is a unit that contains all computing and networking resources for an experiment. The common usage of GENI is to reserve all resources for an experiment within a slice and isolate different slices. One possible design for a multi-domain network experiment is to place all domains within the same slice and build different administrative controllers to handle different domains. The disadvantage is obvious: there is no isolation among domains, and we are unable to add more domains to dynamically scale up the networks. Alternatively, we can place one domain on one slice with isolated administration. To enable inter-slice communication, we need to use shared VLANs to connect slices, which complicates the virtual topology and makes it difficult to scale up. Neither of the two designs is an ideal solution for experiments that involve multiple domains. \subsection{Dynamic Resource Reservation} The GENI platform requires experimenters to reserve all resources on GENI slices before running their experiments. Most GENI resources do not provide the flexibility to partially modify a reservation. In our work, we take advantage of the shared VLAN feature to make resource reservation more dynamic.
This resource reservation method requires the experimenters to consider which virtual links in the first slice should be converted to shared VLANs at the beginning, when they design their experiments. Each design is particular to a specific topology: whenever we need a new virtual topology, we need to reconsider the shared VLANs. The restriction in resource reservation makes it difficult to scale up an experiment. \section{Migration Controller Architecture}\label{sec:implementation} The migration controller stands at the center of our migration architecture and is responsible for the migration process. It clones the flow tables from the old switches to the new switches, schedules the migration sequences and switches traffic between VNs. We implement our migration controller on GENI using the POX controller platform \cite{pox-controller}. The migration controller runs on the POX controller while other client applications keep operating normally. The controller architecture is shown in Figure \ref{fig:advanced-controller}. \begin{figure} \vskip -5pt \centering \includegraphics[width=0.5\textwidth]{./figure/advanced_controller.png} \vskip -5pt \caption{Migration controller architecture} \vskip -10pt \label{fig:advanced-controller} \end{figure} \textbf{Mapping Module}: specifies how to map the switches in the old VN to the switches in the new VN. It also includes mapping of the virtual network interfaces in the old switches and in the new switches. When reserving resources on GENI, we cannot specify virtual network interfaces in the request Rspec file, and the GENI aggregate arbitrarily assigns virtual network interfaces to VMs. We need to query the virtual network interface corresponding to a certain IP address and store that information in the Mapping Module.
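As an illustration, the state kept by the Mapping Module can be sketched as a small Python class. This is a hypothetical reconstruction for exposition only; the class and method names are ours, not those of the actual POX module.

```python
class MappingModule:
    """Illustrative store for old-VN -> new-VN switch and port mappings."""

    def __init__(self):
        self.dpid_map = {}    # old switch dpid -> new switch dpid
        self.port_map = {}    # (old dpid, old port) -> new port
        self.ip_to_port = {}  # (dpid, interface IP) -> port number

    def learn_interface(self, dpid, ip, port):
        # GENI assigns virtual network interfaces arbitrarily at
        # reservation time, so we record the port behind each IP address.
        self.ip_to_port[(dpid, ip)] = port

    def add_switch_pair(self, old_dpid, new_dpid):
        self.dpid_map[old_dpid] = new_dpid

    def add_port_pair(self, old_dpid, old_port, new_port):
        self.port_map[(old_dpid, old_port)] = new_port

    def translate(self, old_dpid, old_port):
        """Return the new-VN counterpart of an (old switch, old port) pair."""
        return self.dpid_map[old_dpid], self.port_map[(old_dpid, old_port)]
```

Modules that clone rules or rewrite events would consult `translate` to map old-VN identifiers to their new-VN counterparts.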
\textbf{Flow Table Manager}: When a request for VN migration is initiated, the Flow Table Manager polls switches that are affected by migration, translates flow tables from the old VN based on the mapping information stored in the Mapping Module, and installs the flows into the new switches. \textbf{Scheduler}: calculates the sequence of rule installation based on our traffic redirection algorithm to minimize the packet loss. \textbf{Traffic Redirector}: After all flows are successfully installed, the Flow Table Manager notifies the Traffic Redirector to generate traffic redirection commands. The Traffic Redirector retrieves the sequence of rule installation from the Scheduler and redirects the traffic from the old VN to the new VN. \textbf{VN Presenter}: intercepts events from switches, translates them based on mapping information from the Mapping Module, and presents a consistent VN topology to client applications. This module hides the entire migration process from clients. \textbf{Status Monitor}: collects dynamic network statistics and decides where and when to migrate based on the VN placement algorithm. Our focus is on migration mechanisms; thus we have not implemented the Status Monitor. \textbf{Migration API}: Migration APIs are similar to OpenFlow controller APIs so that client applications can adapt to the new APIs easily. The migration APIs allow client SDN applications to configure migration parameters such as migration destinations and triggering requirements. The client SDN applications should use the migration API to retrieve virtual switch information, the connections to virtual switches, and events from virtual switches to get a consistent view of the VN. \section{Introduction} Virtualization is well-recognized as a technique to share physical resources, providing the appearance of dedicated resources and isolation from others sharing the same physical resources.
Virtual networks run over a physical network substrate, with an allocation of physical network resources (e.g., routers, switches, links, paths, or portions thereof) to the virtual network. A virtual network (VN) thus contains a collection of virtual nodes and virtual links assigned to a subset of the underlying physical resources. A virtual link spans one or more physical links in the substrate, and a substrate node can host multiple virtual nodes. Network virtualization allows significant flexibility in network operation. Most important are the flexibility in the VN's {\em placement} (the specific mapping of VNs elements to substrate resources \cite{VNmappingsurvey}) and VN {\em agility} (the ability to remap the VN to a different set of substrate resources over time). Our interest in this paper is on enabling VN agility through {\em VN migration} mechanisms. This refers to the process of remapping some or all of a VN's logical topology to a new set of physical resources. VN migration research considers both {\em policy}, when and why a VN is migrated, and {\em mechanism}, how a VN is migrated. Research into VN migration policy is motivated by specific objectives. These have included: efficient utilization of dynamic resources \cite{fan2006dynamic,fajjari2011vnr}, recovery from failure \cite{tang2008efficient,gillani2012fine}, defending against attacks \cite{gillaniagile}, and reducing energy consumption \cite{MiucciVN}. Our focus in this paper is on VN migration mechanisms. The development of such mechanisms can be quite challenging because of the desire to make them {\it transparent} or {\it seamless} -- informally, with minimal impact on running applications. A further challenge is that a VN migration mechanism is highly dependent on the technology deployed in the substrate. There is, therefore, no generic mechanism that can be used universally. Previous research has developed mechanisms for VN migration in different environments. 
In the data center context, Ghorbani et al. develop migration methods within their LIME architecture that provably meet a transparency definition based on valid behaviors in a migration-free setting~\cite{ghorbani2014transparent}. In the wide-area context, Lo et al.~\cite{lo2014virtual} develop a tool for VN migration in PlanetLab~\cite{chun2003planetlab}, a well-known shared infrastructure used for network experimentation. Their PL-VNM tool implements a migration schedule heuristic that minimizes packet loss under ideal conditions, but in practice cannot ensure zero loss, due in part to coarse timing control in PlanetLab. In this paper we focus on developing VN migration mechanisms for GENI, a recently developed infrastructure for sharing wide-area network resources~\cite{berman2014geni}. A key technology included in GENI is software-defined networking (SDN) where the packet-processing rules in switches are installed and modified from a logically-centralized controller~\cite{mckeown2008openflow}. SDNs offer a number of advantages including ease of network management and the opportunity for increased innovation. Our focus on GENI is motivated by the fact that SDN-enabled wide-area networks are likely to become an important building block of future networking and GENI represents a fully-functional instantiation of this technology. As such, techniques developed for GENI will have wider applicability. Further, because SDN technology is at a stage where its future can be influenced, lessons we learn about the capability of such technology in supporting network agility can have significant value for future developments. \begin{figure}[h] \vskip -10pt \centering \includegraphics[width=0.5\textwidth]{./figure/migration_steps.png} \vskip -10pt \caption{VN migration process from VN1 to VN2: In step 1, set up VN1 and connect virtual switches in VN1 to the SDN controller. In step 2, set up VN2 and connect virtual switches in VN2 to the SDN controller.
In step 3, the migration controller clones flow tables from VN1 to VN2 based on the mapping. In step 4, connect VN2 with the hosts and disconnect VN1.} \vskip -15pt \label{fig:migration-steps} \end{figure} Our work focuses on migrating an entire VN from the initial placement to the final placement, without moving the hosts~\footnote{ Note that this model for hosts differs from the data center context where hosts are migrated with the VN. In shared wide-area infrastructure, we assume the hosts are customer-premise equipment and remain in place when the VN moves.}. Figure \ref{fig:migration-steps} illustrates the migration steps assuming an SDN-enabled infrastructure. A migration controller interacts with the SDN controller to initialize and schedule the migration process. Prior to migration, virtual switches on VN1 are controlled by the client application running on the SDN controller, and VN1 is used to deliver traffic (Step 1). When the migration starts, VN2 is set up (Step 2) and flow tables on the virtual switches in VN1 are cloned to the virtual switches in VN2 based on the mapping (Step 3). The migration controller issues commands to reconnect hosts from VN1 to VN2 in Step 4 and to disconnect VN1. This paper addresses several challenges in realizing the basic VN migration steps above.
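The four steps in Figure \ref{fig:migration-steps} can be summarized as a controller-side skeleton. The sketch below is a simplified simulation under assumed data structures (`SimController` is a stand-in for the SDN controller, and rules are plain dictionaries); the real mechanism issues OpenFlow commands, and rule translation is omitted here.

```python
class SimController:
    """Stand-in for the SDN controller; real code would speak OpenFlow."""

    def __init__(self):
        self.flows = {}  # switch name -> list of installed rules

    def connect(self, sw):
        self.flows.setdefault(sw, [])

    def dump_flows(self, sw):
        return list(self.flows.get(sw, []))

    def install_flow(self, sw, rule):
        self.flows.setdefault(sw, []).append(rule)


def migrate_vn(ctrl, vn2_switches, mapping, host_attachment):
    """Steps 2-4 of the migration process (Step 1, VN1 running under the
    controller, is assumed).  `mapping` maps old-VN to new-VN switches."""
    # Step 2: bring VN2's switches under the same controller.
    for sw in vn2_switches:
        ctrl.connect(sw)
    # Step 3: clone VN1's flow tables onto the mapped VN2 switches.
    # (Port/datapath translation is omitted in this sketch.)
    for old_sw, new_sw in mapping.items():
        for rule in ctrl.dump_flows(old_sw):
            ctrl.install_flow(new_sw, rule)
    # Step 4: repoint each host from its VN1 switch to the VN2 counterpart;
    # VN1 can then be disconnected.
    for host, old_sw in host_attachment.items():
        host_attachment[host] = mapping[old_sw]
    return host_attachment
```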
In addressing these challenges we make the following contributions: (1) We develop approaches that enable VN agility in the SDN-enabled GENI infrastructure; (2) Develop and evaluate options for dealing with the dynamic allocation of resources inherent in migration, where the initial mapping to physical resources is known but the future mappings necessitated by migration may not be; (3) Propose an approach for managing the hosts that connect to the VN and will remain in place when the VN migrates; (4) Develop techniques to mitigate the disruption caused by live VN migration, to minimize packet loss observed by the application in the data plane and to maintain the topological view in the control plane (as observed by the application running on the SDN controller). We carefully manage process steps and flow table migration sequences to achieve this; (5) Evaluate, using an implementation running on GENI, the performance of live VN migration as a function of design decisions and network parameters; and (6) Expose some limitations of the GENI infrastructure and propose approaches to their mitigation. The remainder of the paper is structured as follows. In Section \ref{sec:background} we develop a framework for enabling VN agility within the context of the GENI substrate technology. We highlight a decision regarding VN migration related to the allocation of resources in or across GENI slices. Section \ref{sec:migration-challenges} describes the challenges associated with VN migration on GENI in meeting the goals of efficiency and transparency, and proposes our solutions. We develop a controller architecture and describe the deployment of our VN migration mechanism within GENI in Section \ref{sec:implementation}. In Section \ref{sec:performance-evaluation} we present results from experiments conducted using our prototype with the aim of evaluating its performance.
Section \ref{sec:suggestion} discusses GENI limitations that were exposed by our research and approaches to address them. Related Work is covered in Section \ref{sec:related-work}. We conclude the paper in Section VIII. \section{Dealing with VN Migration Challenges}\label{sec:migration-challenges} After VN agility is enabled in GENI as discussed in the previous section, we are still faced with several challenges as we strive to meet the goals of efficiency and transparency. In this section we investigate three distinct challenges and propose mechanisms to deal with them. The challenges are: (1) how to manage inter-slice communication that connects the hosts to both VNs temporarily during the migration; (2) how to minimize packet loss by scheduling the flow table migration sequence; and (3) how to provide a seamless interface to SDN applications during and after migration. The first challenge is specific to a sliced platform like GENI; the other two challenges can be generalized to other SDN environments. In the section following this one we use the solutions for each challenge to inform the design of a migration controller architecture. \subsection{Inter-slice Connection}\label{inter-slice-connection-section} We described two VN-to-Slice allocation options in Section \ref{vn-to-slice-section}. In the first option, all hosts, the old VN, and the new VN are located within the same slice. We will not discuss the first design in detail since it follows the common usage of the GENI testbed. We will focus on the second design, where the old VN, the new VN, and the hosts are assigned to three different slices. The challenge in the second design is to direct traffic from one slice to another given the current GENI constraints which do not support virtual links between slices. It should be noted that the dynamic tunnel implemented in LIME \cite{ghorbani2014transparent} does not apply in our case.
The tunnel uses the control plane for data links and cannot guarantee performance such as bandwidth and latency. Moreover, the control plane is a shared channel on GENI and should not be used to send a large amount of data. \subsubsection{Broadcasting problem in a virtualization environment} To enable inter-slice communication, it may seem natural to use a shared VLAN to connect a host slice to VN slices. The traffic is broadcast within the same VLAN, no matter in which slice a switch is located. The connection/disconnection of the VNs with the hosts is controlled by turning up/down the network interfaces on virtual switches. Figure \ref{fig:wo-gw} presents an example of this approach. The topology includes three hosts, the old VN (VN1), the new VN (VN2), and the controller slice. In our virtual topology, each host connects to both VN1 and VN2 with a shared VLAN. When host1 sends data to host2, the data will be broadcast to both OVS1 and OVS1'. When VN1 is in use, the network interfaces of OVS1 are up and the network interfaces of OVS1' are down. After the migration, we redirect traffic from VN1 to VN2 by turning down the interfaces of OVS1 and turning up the interfaces of OVS1'. \begin{figure}[h] \vskip -10pt \centering \includegraphics[width=0.4\textwidth]{./figure/wo_gw_topo.png} \vskip -5pt \caption{An example of using shared VLAN to connect the host and the VNs} \vskip -10pt \label{fig:wo-gw} \end{figure} Unfortunately, this approach can violate the correctness of the migration in a virtualized environment. GENI uses Xen \cite{barham2003xen} as a virtual machine monitor to allow multiple virtual machines to share the same hardware resources. Xen only allows a privileged virtual machine called domain 0 to access the physical Network Interface Card (NIC). Domain 0 communicates with other virtual machines through a set of back-end interfaces.
All the packets destined to a virtual machine will first be transferred to domain 0 and then forwarded to the virtual machine. The packets stored in domain 0 are not dropped when the network interfaces in the virtual machine are turned down. When the virtual network interface goes up again, these buffered packets will be copied from domain 0 memory to the receiver virtual machine's memory. \begin{figure}[h] \vskip -10pt \centering \includegraphics[width=0.4\textwidth]{./figure/broadcast_problem.png} \vskip -10pt \caption{An example of a VN and its substrate} \vskip -10pt \label{fig:broadcast-problem} \end{figure} We illustrate why this small number of buffered packets can be a problem through a one-node VN example as shown in Figure \ref{fig:broadcast-problem}. In our virtual topology, host1 connects two switches with shared VLAN1 and host2 connects two switches with shared VLAN2. In the actual substrate network, there is a rack switch residing in the shared VLAN to broadcast packets to all switches in the same VLAN. Before migration, we connect VN1 with the hosts by turning up the network interfaces eth1 and eth2 and disconnect VN2 by turning down the network interfaces eth1' and eth2'. The data from host1 to host2 is broadcast by Rack{\_}SW1 to eth1 and eth2, and then broadcast by Rack{\_}SW2 to eth2' and host2. Although we turn down the virtual network interfaces eth1' and eth2', a small number of packets are still stored in the Xen domain 0. During the migration, we switch from VN1 to VN2 by turning up eth1' and eth2' and turning down eth1 and eth2. Previously buffered packets in domain 0 are transferred through eth1' and eth2' to the virtual machine that hosts SW2. These packets have the same matching fields (e.g., same source and destination IP) but request different actions (e.g., send through different ports). In the standard SDN implementation of the learning switch, this is considered an error.
The switch will install rules to drop all packets for several seconds, resulting in a much longer disconnection time than the normal migration process. In the worst case, when conflicting rules are installed on the OpenFlow switch, the switch may stop forwarding packets, which requires manual configuration to recover. \subsubsection{Mitigating the broadcasting problem -- gateway design} To avoid the broadcasting problem, we propose a gateway design which establishes additional SDN switches as `gateways' to switch traffic from the old VN to the new VN. The gateways are layer 2 devices that sit between hosts and VNs, hiding changes in VNs from end hosts. Figure \ref{fig:gw-topo} presents an example of the gateway design that enables migration within the same aggregate. Each host is connected with a gateway, and each gateway uses two different shared VLANs to connect to the two VNs. The gateway switch is responsible for forwarding packets from hosts to a certain VN. In the process of VN migration, after all flow tables in VN1 are cloned to VN2, the migration controller sends SDN commands to the gateway switches to update their flow tables, redirecting traffic from VN1 to VN2. \begin{figure} \vskip -10pt \centering \includegraphics[width=0.4\textwidth]{./figure/gw_topo.png} \vskip -10pt \caption{Gateway design} \label{fig:gw-topo} \end{figure} The gateway design can be extended to enable migration across substrates. As mentioned earlier, we use GENI aggregates to represent substrates in different locations. Virtual components in different aggregates are connected with special links called stitched links. Unfortunately, a stitched link cannot be part of a shared VLAN. We use additional nodes to serve as a bridge to connect stitched links and shared VLANs. A cross-aggregates example is shown in Figure \ref{fig:gw-topo-stitch}.
The hosts, VN1, and VN2 are located in three different aggregates. GENI does not provide inter-slice stitched links to connect gateway switches in the host slice with SDN switches in two VNs directly. To connect gateways with VN1, we put three more additional nodes in the host slice. These three nodes are in the same aggregate as VN1 and we use the stitched link to connect them with the gateway switches. Then we use shared VLANs to connect those three nodes to the virtual switches in VN1. Those three additional nodes serve as a bridge to connect the host slice with VN1. We do the same to connect the hosts with VN2. \begin{figure} \centering \includegraphics[width=0.45\textwidth, height=55mm]{./figure/gw-topo-stitch.png} \caption{Topology to enable cross-aggregate migration} \vskip -20pt \label{fig:gw-topo-stitch} \end{figure} \subsection{Minimizing Packet Loss}\label{scheduling-section} In our migration mechanism, packet loss may occur when the migration controller issues commands to gateway switches to disconnect the old VN and reconnect the new VN. In a traditional network without SDN features, unicast Reverse Path Forwarding (uRPF) \cite{dalal1978reverse} in strict mode drops traffic received on an interface that is not used to forward the return traffic. We illustrate why VN migration always introduces packet loss in symmetric routing through a two-node topology. In Figure \ref{fig:scheduling-problem}, there are two hosts and two VNs, each VN containing two virtual nodes. Each host connects to both VN1 and VN2 through a gateway switch. We define f\textsubscript{1,2} as the traffic flow from host1 to host2, and f\textsubscript{2,1} as the traffic flow from host2 to host1. We migrate the virtual network from VN1 to VN2. Before migration, GW1 directs f\textsubscript{1,2} from in-port 1 to out-port 2, directs f\textsubscript{2,1} from in-port 2 to out-port 1, and drops any traffic from in-port 3 to disconnect VN2.
The same applies for GW2 to control traffic from/to host2. When the migration begins, our migration controller issues commands to GW1 and GW2 and updates their flow tables to redirect traffic from VN1 to VN2. We assume GW1 finishes its update at time t\textsubscript{1,2} and GW2 finishes at time t\textsubscript{2,1}. We define d\textsubscript{1} as the latency from GW1 to GW2 and d\textsubscript{2} as the latency from GW2 to GW1. The data rate of f\textsubscript{1,2} is r\textsubscript{1} and the data rate of f\textsubscript{2,1} is r\textsubscript{2}. We therefore calculate the amount of dropped data c\textsubscript{1,2} for f\textsubscript{1,2} and c\textsubscript{2,1} for f\textsubscript{2,1} as follows: \[ c_{1,2}=\begin{cases} \big((t_{2,1}-t_{1,2})-d_1\big)\times r_1, &\text{if } t_{2,1}-t_{1,2}\geq d_1\ \\ \big((t_{1,2}-t_{2,1})+d_1\big)\times r_1, &\text{otherwise} \end{cases} \] \[ c_{2,1}=\begin{cases} \big((t_{1,2}-t_{2,1})-d_2\big)\times r_2, &\text{if } t_{1,2}-t_{2,1}\geq d_2\ \\ \big((t_{2,1}-t_{1,2})+d_2\big)\times r_2, &\text{otherwise} \end{cases} \] Since the latencies satisfy $d_1, d_2 \ge 0$, the zero-loss conditions $t_{2,1}-t_{1,2} = d_1$ and $t_{1,2}-t_{2,1} = d_2$ cannot both be satisfied. At least one of c\textsubscript{1,2} and c\textsubscript{2,1} is larger than 0, which means additional packet loss is unavoidable in this setting. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{./figure/scheduling_problem.png} \caption{The topology of two-node VN on GENI} \vskip -20pt \label{fig:scheduling-problem} \end{figure} \subsubsection{Flow Migration Sequence} SDN shows promise for enabling lossless migration with an optimized sequence of rule installation. We propose a scheduling sequence to remove the additional packet drop introduced by VN migration. Algorithm \ref{alg:traffic-redirection} shows pseudocode for the traffic redirection process. We install rules to let traffic coming from the new VN go through the gateway switches.
Then we update the rules on the gateway switches to direct traffic from the hosts to the new VN. Finally, we insert drop rules to disconnect the old VN. By following this sequence, we avoid dropping packets buffered in the old VN. \begin{algorithm} \caption{Traffic Redirection Algorithm}\label{alg:traffic-redirection} \begin{algorithmic}[1] \For{$gateway \in gatewayList$} \State $Ports_h \gets $Ports on gateway that point to hosts \For{$Port \in Ports_h$} \State install new rule $r$ where $r.inPort$ = $PortToVN2$ and $r.outPort$ = $Port$ \EndFor \EndFor \For{$gateway \in gatewayList$} \For{$Port \in Ports_h$} \State update rule $r$ set $r.outPort$ = $PortToVN2$ where $r.outPort$ = $PortToVN1$ and $r.inPort$ = $Port$ \EndFor \EndFor \For{$gateway \in gatewayList$} \For{$Port \in Ports_h$} \State update rule $r$ set $r.action$ = $dropPkt$ where $r.outPort$ = $PortToVN1$ and $r.inPort$ = $Port$ \EndFor \EndFor \end{algorithmic} \end{algorithm} Algorithm \ref{alg:traffic-redirection} also applies to partial VN migration, where only part of the old VN is remapped to different physical machines. In partial VN migration, the traffic redirection occurs at the neighboring nodes of the migrated part instead of at the gateways. In this case, all neighboring nodes of the partial network are treated as gateway switches, and the same algorithm can be applied to minimize the packet loss. \subsubsection{Remote Scheduling Methods} We have two implementations for issuing the migration commands that disconnect the old VN and connect the new VN. The first option is to control VN connectivity by turning network interfaces up/down through SSH sessions. We refer to this type of scheduling as SSH scheduling. With this method, there is a lag between the time when a command is issued and the time when it is actually executed on the remote node. In addition, GENI requires SSH key-based authentication before executing commands on a node, which can lead to an even longer lag.
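The scheduling lag matters because of the loss model derived above. The counts c\textsubscript{1,2} and c\textsubscript{2,1} can be evaluated with a small helper; this is an illustrative sketch of the timing model only (the function name and the example numbers are ours), not part of the migration controller.

```python
# Illustrative evaluation of the dropped-traffic model: c_{1,2} and c_{2,1}
# as functions of the rule-update times t12, t21, the one-way latencies
# d1 (GW1 -> GW2) and d2 (GW2 -> GW1), and the flow rates r1, r2.
def dropped(t12, t21, d1, d2, r1, r2):
    """Return (c12, c21), the amounts of data dropped from f_{1,2} and f_{2,1}."""
    if t21 - t12 >= d1:
        c12 = ((t21 - t12) - d1) * r1
    else:
        c12 = ((t12 - t21) + d1) * r1
    if t12 - t21 >= d2:
        c21 = ((t12 - t21) - d2) * r2
    else:
        c21 = ((t21 - t12) + d2) * r2
    return c12, c21

# Simultaneous updates (t12 == t21) still lose d1*r1 + d2*r2 in total:
print(dropped(t12=5.0, t21=5.0, d1=0.25, d2=0.5, r1=8.0, r2=4.0))   # (2.0, 2.0)
# Delaying GW2's update by exactly d1 zeroes c12 but inflates c21,
# so at least one of the two counts stays positive:
print(dropped(t12=5.0, t21=5.25, d1=0.25, d2=0.5, r1=8.0, r2=4.0))  # (0.0, 3.0)
```

The second call illustrates the impossibility argument: any schedule trades loss on one direction for loss on the other.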
The second implementation, called OpenFlow-message scheduling, redirects traffic by installing flows on the gateways via OpenFlow messages from the migration controller. This method does not support complicated operations such as executing a monitoring script, but we expect it to be faster than SSH scheduling because it avoids the authentication overhead. \subsection{Seamless Migration} Our migration mechanism should preserve the illusion of a single, consistent VN for the client SDN applications during and after the migration. We discuss possible inconsistencies in the migration process and our solutions. \subsubsection{Topology Changes} As mentioned in Section \ref{sec:migration-challenges}, both the old and the new VN are connected during the migration. To present the client SDN applications with a consistent view of a single VN, our migration controller intercepts all OpenFlow events, changes the datapath ID of each event based on the mapping between the old and the new VN, and passes the modified events to the client SDN applications. No events about topology changes in VN2 are passed to the client SDN applications. \subsubsection{Switch State Changes} A switch maintains information about its ports and flow tables. Ideally, a switch should present the same switch information, including datapath ID, serial number, and ports, in both the old and the new VN. Unfortunately, GENI does not allow users to assign virtual network interfaces to a virtual switch, and port numbers are assigned randomly during the reservation stage, so the virtual switch is very likely to have a different port configuration in the new VN. Since the flow tables contain port information, our migration controller rewrites the flow tables based on the port mapping when it clones them from the old switches to the new switches.
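The cloning-with-remapping step can be sketched in a few lines. The dict-based rule format and the port numbers below are hypothetical stand-ins for the actual OpenFlow structures our controller manipulates.

```python
# Sketch of cloning a flow table to the new virtual switch while rewriting
# port numbers. The rule representation is illustrative only; the real
# controller rewrites OpenFlow flow-mod messages.
def clone_flow_table(rules, port_map):
    """Rewrite in_port/out_port of each rule via the old->new port mapping."""
    cloned = []
    for rule in rules:
        new_rule = dict(rule)  # keep other match fields, actions, priorities
        new_rule["in_port"] = port_map[rule["in_port"]]
        new_rule["out_port"] = port_map[rule["out_port"]]
        cloned.append(new_rule)
    return cloned

old_rules = [{"in_port": 1, "out_port": 2, "priority": 10},
             {"in_port": 2, "out_port": 1, "priority": 10}]
# On the new switch the same interfaces came up with different numbers.
port_map = {1: 3, 2: 7}
new_rules = clone_flow_table(old_rules, port_map)
# new_rules[0] == {"in_port": 3, "out_port": 7, "priority": 10}
```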
\section{Performance Evaluation}\label{sec:performance-evaluation} In this section, we evaluate the performance of our migration mechanism in terms of migration time, packet loss during migration, latency, and controller overhead. More evaluation results can be found in the accompanying technical report \cite{}. \subsection{Migration Time} \subsubsection{SSH vs. OpenFlow-messaging} We evaluate whether OpenFlow-message scheduling outperforms SSH scheduling in terms of the delay between command issue and command execution. We use our migration controller to issue a command that turns a network interface down/up over an SSH session, repeated 50 times, and then repeat the same experiment with OpenFlow-message scheduling. Figure \ref{fig:of-vs-ssh} shows the CDF of the time difference between the moment when migration commands are issued by the controller and the moment when the migration finishes, for SSH scheduling and OpenFlow-message scheduling. All OpenFlow-message scheduling runs complete within 0.1s, whereas about 50\% of the SSH scheduling runs take 1s or longer. This confirms our earlier assessment that OpenFlow-message scheduling is much faster and has lower variance than SSH scheduling. \begin{figure} \vskip -10pt \centering \includegraphics[width=0.49\textwidth, height=50mm]{./figure/of-vs-ssh.png} \vskip -5pt \caption{CDF of the time difference between command issue and execution} \vskip -15pt \label{fig:of-vs-ssh} \end{figure} \subsubsection{Migration Duration} Figure \ref{fig:mt-flowsize} shows how the migration time changes as the flow table size grows, with 95\% confidence bands. The migration time is negligible when the flow table is small: migration finishes in less than 1s when the flow table holds fewer than 1,000 rules. The migration duration increases roughly linearly with the number of rules per switch and can reach 7s with 10,000 rules.
The migration time depends on the number of rules and the number of switches but is independent of the topology. \begin{figure} \vskip -10pt \centering \includegraphics[width=0.4\textwidth, height=50mm]{./figure/new/mt-flowsize.png} \vskip -5pt \caption{Migration time as flow table size per switch grows} \vskip -15pt \label{fig:mt-flowsize} \end{figure} \subsection{Packet Loss During Migration} \subsubsection{Moving a Complete VN within a Substrate} We build a prototype of the basic migration controller on the POX controller platform \cite{pox-controller} and evaluate its performance through experiments on the topology illustrated in Figure \ref{fig:gw-topo}, with three hosts and six virtual switches. All virtual switches are Open vSwitches \cite{pfaff2009extending}. We use iperf to generate UDP traffic for 10 seconds between all pairs of hosts and migrate the VN from its initial position to its final position at time t=5s. We vary the data sending rate to check that our migration controller also works well at relatively high data rates. We perform three sets of experiments: (a) a baseline experiment where no migration occurs, (b) migration with symmetric routing, where the traffic redirection commands are issued at the same time by the controller, and (c) migration with asymmetric routing, where the traffic redirection commands are issued in an optimized sequence. We repeat the experiments 30 times for each data rate and measure the migration time and the data loss rate.
\begin{figure} \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{./figure/h1-h2-detail-figure.png} \caption{From host1 to host3} \label{fig:h1-h3-drop} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{./figure/h2-h1-detail-figure.png} \caption{From host3 to host1} \vskip -5pt \label{fig:h3-h1-drop} \end{subfigure} \caption{Average packet loss as percentage of total packets (baseline, symmetric routing, optimized scheduling) at different data rates for the flow between host1 and host3} \vskip -15pt \label{fig:avg-loss} \end{figure} We only present results for the forward and reverse flows between host1 and host3 due to space constraints. In Figure \ref{fig:avg-loss}, the percentage of average packet loss on the y-axis is based on the measurement of UDP traffic for 10s, and the x-axis shows the baseline experiment, symmetric routing, and asymmetric routing at different data sending rates. For both the forward and the reverse flows, the packet loss rate with asymmetric routing is almost the same as that in the migration-free setting. This demonstrates that asymmetric routing prevents hosts from experiencing a significant increase in packet loss during migration. \begin{figure} \vskip -5pt \centering \includegraphics[width=0.4\textwidth, height=50mm]{./figure/new/loss-rtt.png} \caption{The performance of symmetric routing and asymmetric routing for different RTTs} \vskip -15pt \label{fig:loss-rtt} \end{figure} \subsubsection{Impact of RTT} During the migration, packet loss occurs when packets buffered in the old VN are dropped by the gateway switches because of the traffic redirection. As shown in Figure \ref{fig:loss-rtt}, symmetric routing performs much worse than asymmetric routing, especially when the RTT is large.
The average packet drop rate for symmetric routing increases linearly with the RTT, while the packet drop rate for asymmetric routing stays very close to zero for all RTT values. \begin{figure} \centering \includegraphics[width=0.4\textwidth, height=50mm]{./figure/new/ctrl-throughput.png} \caption{Controller performance for different switch numbers} \vskip -15pt \label{fig:ctrl-throughput} \end{figure} \subsection{Control-Plane Overhead} Our migration controller intercepts the events, modifies the datapath ID based on the mapping between the old and the new virtual switches, and passes the new events to the client application. These operations cause overhead at the controller. To evaluate the controller performance, we use the cbench program \cite{sherwood2010cbench}, which creates and sends a large number of OpenFlow messages to the controller. Figure \ref{fig:ctrl-throughput} shows the performance of the unmodified POX controller and of our migration controller for one to 64 switches. The y-axis shows the number of flow requests the controller can handle per second. Our migration controller processes roughly 3\% fewer flows per second than the unmodified controller. \section{Related Work}\label{sec:related-work} Several VN embedding solutions suggest reconfiguration or remapping of the VN \cite{fan2006dynamic,fajjari2011vnr,tang2008efficient,gillani2012fine,gillaniagile}. However, all of those works use simulation to demonstrate the effectiveness of their solutions. Moving such experiments to a real infrastructure remains a challenging task for network researchers in the absence of an effective migration mechanism. There has been some work addressing the challenges of VN migration in a real infrastructure.
Prior work \cite{lo2014virtual} proposes an orchestration mechanism for VN migration on PlanetLab, building on the technique of \cite{wang2008virtual} for moving a single virtual router without disrupting the traffic traversing the virtual network. Other work \cite{pisa2010openflow,ghorbani2014transparent} shows how to migrate VNs within software-defined networks. Pisa et al. \cite{pisa2010openflow} consider the basic migration scenario for a virtual network in both a traditional network and a software defined network. Ghorbani et al. \cite{ghorbani2014transparent} move an entire SDN network together with its hosts in the data center context; their approach concentrates on low-level configuration, including packet-in events, traffic statistics, and rule timeouts, to handle correctness violations \cite{ghorbani2014towards}. \section{Enabling Full VN Agility in GENI}\label{sec:background} GENI is a relatively mature shared infrastructure and, as such, provides support for sharing, isolation, resource allocation, and multiple physical substrate ``owners''. GENI, however, was not designed specifically to support our desired transparent and efficient VN agility. Our work, therefore, involves the development and evaluation of options to support VN agility within GENI. This section explores some critical aspects of providing agility support. Because GENI uses its own unique terminology, Table \ref{tb:geni-concepts} summarizes how the general VN terminology maps to GENI terms. \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \begin{footnotesize} \caption{GENI Context vs.
Virtual Components} \label{tb:geni-concepts} \centering \footnotesize \begin{tabular}{|c|c|} \hline Component & GENI Context\\ \hline\hline Substrate networks & GENI testbed\\ \hline Virtual Network(s) & GENI slice\\ \hline Physical location & GENI aggregate\\ \hline Virtual links within a VN & LANs\\ \hline Virtual links between VNs & Shared VLAN\\ \hline Virtual links connecting different physical locations & Stitched links\\ \hline Mapping between VN and physical substrate & RSpec file\\ \hline \end{tabular} \end{footnotesize} \vskip -10pt \end{table} \subsection{Allocating VNs to Slice(s)}\label{vn-to-slice-section} GENI is a ``sliced'' platform that allows concurrent experiments on shared infrastructure. A GENI slice is a unit that contains all the resources for an experiment, including computing resources and network links. This is already a form of network virtualization, used primarily to isolate experiments in GENI. In a real-world GENI-like substrate, slicing would be used to isolate commercial network providers sharing the same physical substrate. Slices exist in one or more aggregates; each aggregate is an independent collection of physical resources, often operated by a single entity (e.g., a specific university). Figure \ref{fig:geni-arch} shows two slices on the GENI infrastructure. Slice 1 has three virtual nodes in Aggregate A, while Slice 2 has six virtual nodes across Aggregate A and Aggregate B, connected with a stitched link through the backbone network. Each slice is an isolated environment to which virtual nodes and virtual links can be added. Each virtual node in a slice can be accessed by the user who created the slice, using the corresponding SSH key. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{./figure/new/geni-arch.png} \vskip -10pt \caption{An example of GENI architecture} \vskip -15pt \label{fig:geni-arch} \end{figure} Slices are meant to be deployed for the long term and are thus not agile.
To enable agility, VNs need to be deployed within slices as an additional layer of virtualization. We consider two options for mapping VNs to slices with an eye to migration. The first option is to build all VNs (original and future) and the hosts for migration within the same slice. This approach follows the common usage model for GENI of including all resources for an experiment within a single slice. However, this option has three disadvantages: 1) There is no clear isolation between the different VNs. 2) Most GENI resources cannot be modified after the reservation. Once resources are reserved on a slice, no partial modification (e.g., adding a virtual link or a virtual node) is allowed. In the case of migration, this restriction requires us to reserve all resources for hosts and VNs, including those that will be migrated to in the future, at the outset. 3) When a VN or a host fails, we need to rebuild the whole virtual topology. Alternatively, it is possible to allocate a single VN per slice, starting with the original VN and later allocating a VN to migrate to. Deploying a VN on one slice is straightforward; the challenge in deploying and migrating a VN across two slices stems from the difficulty of enabling inter-slice communication during migration. We cannot create a virtual link that directly connects virtual components in different slices. Instead, we can set up a VLAN, a broadcast domain at the data link layer, to connect virtual components in different slices: all virtual components in the same VLAN receive the broadcast packets even when they are in separate slices. Compared with deploying all VNs on one slice, this second design provides clear separation between VNs and gives more flexibility in resource reservation: we can reserve one VN first and create another VN when needed. However, it complicates the virtual topology during migration.
We will discuss shared VLANs and migration further in Section \ref{sec:migration-challenges}. \subsection{Mapping Virtual Switches to Physical Machines} GENI uses a Resource Specification (RSpec) document to describe a requested virtual topology and its physical substrate, including the IDs of the virtual machines (VMs), the virtual network interface configuration, the VLAN tags assigned to links, and the corresponding hardware information. The RSpec is submitted through the GENI aggregate manager API to aggregates in order to reserve resources. The requested virtual topology is translated into a request RSpec file by RSpec generation tools, and GENI aggregates automatically allocate resources to the requested VN based on this file. While the aggregates' automatic assignment of resources meets the requirements of most experiments, VN migration research may require the flexibility of mapping virtual nodes to specific physical resources. Although RSpec generation tools do not directly support resource assignment, we are able to map a virtual node to a specific machine by manually modifying the request RSpec. The Omni tool \cite{omni} provides commands to obtain information about all reserved and available resources at a specific aggregate, including machine IDs, machine status, hardware types, and OS types. We can locate the component name of a specific physical machine in the resource information returned by Omni and copy its component ID into the request RSpec file to achieve a fixed mapping. \subsection{Assigning VNs to Substrates} In VN migration, it might be necessary to migrate between different physical substrates, or aggregates in GENI terminology. A GENI aggregate comprises a set of resources under common control, including storage resources, computing nodes, and OpenFlow switches \cite{berman2014geni}. Experimenters can reserve multiple aggregates within the same slice and connect them with stitched links (see Figure \ref{fig:geni-arch}).
It is also possible to allocate each VN to a different aggregate and connect them using both shared VLANs and stitched links. We show how to use shared VLANs and stitched links together in Section \ref{sec:migration-challenges}.
\section{Introduction} Besides the Kadomtsev-Petviashvili and the Zakharov-Kuznetsov equations, the zero-energy Novikov-Veselov equation \begin{equation}\tag{NV}\label{NV} \partial_tu+(\partial ^3 + \overline{\partial}^3)u +3(\partial (u\overline{\partial}^{-1}\partial u)+\overline{\partial}(u\partial^{-1}\overline{\partial}u))=0 \end{equation} is another two-dimensional generalization of the famous Korteweg-de Vries equation~(KdV). Here \[ \partial = \frac{\partial}{\partial z}= \frac{1}{2}(\frac{\partial}{\partial x}-i\frac{\partial}{\partial y}) \qquad \mbox{and} \qquad \overline{\partial} = \frac{\partial}{\partial \overline{z}}= \frac{1}{2}(\frac{\partial}{\partial x}+i\frac{\partial}{\partial y}) \] denote the Wirtinger derivatives. (NV) was introduced in 1984/86 by S. P. Novikov and A. P. Veselov~\cites{NV84A, NV84B, NV86} in their study of the two-dimensional Schrödinger operator. These authors considered the unknown function $u: \mathbb T^2 \times I \to \mathbb R$ to be a \emph{periodic} and \emph{real valued} potential. Originally the equation was written down in the form \begin{equation}\label{NVorg} \partial_tu = (\partial ^3 + \overline{\partial}^3)u + \partial (u w) + \overline{\partial} (u \overline{w}), \qquad \overline{\partial}w=3\partial u, \end{equation} see equation (14) in~\cite{NV86}. If $\partial$ and $\overline{\partial}$ can be inverted in a well-defined way, this gives the following equation, which for complex valued functions differs slightly from~\eqref{NV}: \[ \partial_tu=(\partial ^3 + \overline{\partial}^3)u +3(\partial (u\overline{\partial}^{-1}\partial u)+\overline{\partial}(u\partial^{-1}\overline{\partial}\overline{u})). \] After time reversal this coincides with~\eqref{NV}, if $u$ is real. The investigation of (NV) in the nonperiodic case via the inverse scattering method was initiated by Boiti, Leon, Manna, and Pempinelli~\cites{BLMP86,BLMP87} and continued later on by Tsai~\cite{T93}.
Here the authors consider the potential $u:\mathbb R^2 \times I \to \mathbb C$ to be a small, rapidly decreasing and, in general, \emph{complex valued} function. The latter assumption is also made by Bogdanov~\cite{Bog}, who changed the equation to \[ \partial_tu + (\partial ^3 + \overline{\partial}^3)u + \partial (u w_1) + \overline{\partial} (u w_2)=0, \qquad \overline{\partial}w_1=3\partial u,\quad \partial w_2 = 3 \overline{\partial}u, \] to which our form~\eqref{NV} corresponds. (It turns out in our analysis that~\eqref{NV}, with a $u$ as the last factor in the last term instead of the $\overline{u}$, is by far better behaved.) Bogdanov found the related equation \[ \partial_tv+(\partial ^3 + \overline{\partial}^3)v +3(\partial (v\overline{\partial}^{-1}\partial |v|^2)+\overline{\partial}(v\partial^{-1}\overline{\partial}|v|^2)+v\partial^{-1}\overline{\partial}(\overline{v}\overline{\partial}v)+v\overline{\partial}^{-1}\partial(\overline{v}\partial v))=0, \] which he called the ``modified VN equation'' (mNV), since the Miura-type transformation $$ \mathcal{M}: v \mapsto \mathcal{M}(v)=|v|^2-i\partial v $$ maps a solution $v$ of (mNV) with $\partial v = \overline{\partial v}$ onto a solution $u:= \mathcal{M}(v)$ of (NV). This discovery led Bogdanov to the conclusion that ``from the mathematical point of view [\,\ldots] the VN equation is the natural two dimensional generalization of the KdV equation.''~\cite{Bog}*{p.~219}. (NV) is said to be completely integrable by the inverse scattering method. The precise meaning of this statement is the subject of a lively discussion, see e.g.~\cites{CMS, CMMPSS, LMSSI, LMSSII, Perry, MP}. As for (KdV), smooth and --- in case of $\mathbb R^2$ being their domain --- rapidly decreasing solutions of (NV) satisfy a whole sequence of conservation laws: Integration of the equation over $\mathbb R^2$ or $\mathbb T^2$ gives that $$\int u(x,y,t)dxdy=\mbox{const.
},$$ which is referred to as the conservation of the mean and plays a role in our considerations concerning the periodic case. At the level of $L^2$ we have for solutions of~\eqref{NV} that $$\int u(x,y,t)\overline{\partial}^{-1}\partial u(x,y,t)dxdy=\mbox{const.}$$ Unfortunately, this functional is not definite and does not give any a priori bound for the $L^2$-norm. A recursion formula for the higher order conservation laws is provided in~\cite{CMMPSS}*{Section~2.3}. Among them there is the ``energy'' $$E(u(t))= \int \partial u(x,y,t)\partial\overline{\partial}^{-1}\partial u(x,y,t)+ \mbox{lower order terms }dxdy= \mbox{const. },$$ which is not definite, either. It turns out that in the whole sequence of conserved quantities there is none giving a useful a priori bound on any $H^s$-norm. In fact, such a bound in combination with the existing local well-posedness theory (see below) would lead to a general global well-posedness result, eventually at a high level of regularity. But this is impossible as illustrated by an instructive example of Taimanov and Tsarev (see~\cite{TT}*{Theorem~4}). They found a rational solution of (NV) defined on the whole plane, decaying at infinity as $|(x,y)|^{-3}$ and developing a singularity in finite time. As long as it exists, this solution (at fixed time $t \ge 0$) belongs to $\bigcap_{s \ge 0}H^s(\mathbb R^2)$. On $\mathbb R^2$ the Novikov-Veselov equation is invariant under the scaling transformation $u \mapsto u_{\lambda}$ where, for $\lambda >0$, $$u_{\lambda}(x,y,t)=\lambda^2u(\lambda x, \lambda y, \lambda^3t).$$ Let $u_{0,\lambda}(x,y)=u_{\lambda}(x,y,0)$. Then $\|u_{0,\lambda}\|_{\dot{H}^{-1}}$ is independent of $\lambda$, and thus $s_c=-1$ becomes the critical Sobolev regularity, below which we do not expect any well-posedness result for the Cauchy problem. In fact, $C^2$-ill-posedness in $\dot{H}^s(\mathbb R^2)$ for $s<-1$ has been shown by Angelopoulos in~\cite{Angel}*{Theorem~17}. 
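For the reader's convenience we include the standard computation behind this criticality claim (our addition). Since $\widehat{u_{0,\lambda}}(\xi,\eta)=\widehat{u_0}(\xi/\lambda,\eta/\lambda)$, the substitution $(\zeta_1,\zeta_2)=(\xi/\lambda,\eta/\lambda)$ gives

```latex
\|u_{0,\lambda}\|_{\dot{H}^s}^2
  = \int_{\mathbb R^2} |(\xi,\eta)|^{2s}\,
      \bigl|\widehat{u_0}(\tfrac{\xi}{\lambda},\tfrac{\eta}{\lambda})\bigr|^2 \,d\xi\, d\eta
  = \lambda^{2s+2} \int_{\mathbb R^2} |(\zeta_1,\zeta_2)|^{2s}\,
      \bigl|\widehat{u_0}(\zeta_1,\zeta_2)\bigr|^2 \,d\zeta_1\, d\zeta_2
  = \lambda^{2s+2}\,\|u_0\|_{\dot{H}^s}^2,
```

so the $\dot{H}^s$-norm of the rescaled datum is independent of $\lambda$ precisely when $s=-1$.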
The question of well-posedness of the Cauchy problem for (NV) has been tackled so far with two different approaches. The first is the inverse scattering method, which has the great advantage of leading to some global existence theorems and to a solution formula. To the best of our knowledge, the most advanced results in this direction are those of Perry~\cite{Perry}*{Theorem~1.6} and of Music and Perry~\cite{MP}*{Theorem 1.2}, who built on earlier works~\cites{M, GM} of Music and of Grinevich and Manakov. The data are assumed to belong to some weighted Sobolev space of fairly high regularity and to lie in the image of the Miura map, or to satisfy a certain (sub-)criticality condition, see Definition 1.1 in~\cite{MP}. Unfortunately, uniqueness and hence continuous dependence remain open in this approach. On the other hand, the Fourier restriction norm method introduced by Bourgain in~\cites{B1, B2} and further developed in~\cites{KPV96a, KPV96b, GTV} has been applied to treat the Cauchy problem (nonperiodic case) for (NV) and (mNV): In~\cite{Angel} Angelopoulos proved local well-posedness for (NV) with data in $H^s(\mathbb R^2)$, provided that $s>\frac{1}{2}$, and for (mNV) with data in $H^s(\mathbb R^2)$, $s>1$. His result on (mNV) was substantially improved upon by Schottdorf in~\cite{Schott}, who could admit $s \ge 0$ and obtain a global result for small data in the critical case $s=0$. To treat the endpoint case he used the $U^p$- and $V^p$-spaces introduced by Koch and Tataru~\cites{KT05, KT07, HHK}.
In~\cites{KMI, KMII} Kazeykina and Muñoz generalized the $s>\frac{1}{2}$ result mentioned above to the more general ``nonzero energy NV equation'' $$ \partial_tu+(\partial ^3 + \overline{\partial}^3)u +3(\partial (u\overline{\partial}^{-1}\partial u)+\overline{\partial}(u\partial^{-1}\overline{\partial}u)) + E(\overline{\partial}^{-1}\partial^2 u + \partial^{-1}\overline{\partial}^2u) =0, $$ for a fixed parameter $E \in \mathbb{R}$, which is much harder to analyze. All these LWP results rely exclusively on a global smoothing effect of solutions to the linear part of the equation, expressed in terms of (eventually bilinear) Strichartz-type estimates with derivative gain. Such a smoothing effect does not exist in the periodic case. In the sequel we will follow this second approach. Additionally, we will take the structure of the nonlinearity into account, which will allow us to push down the lower bound on $s$ in the nonperiodic case substantially and to reach something below $L^2(\mathbb T^2)$ for data of mean zero in the periodic case. We emphasize that our arguments do not cover the case of nonzero energy, see also the open question (3) in the last section.\\ {\bf{Acknowledgement.}} The authors wish to thank Karin Halupczok for valuable hints concerning the number-theoretic aspects of the periodic case. They also want to thank the anonymous referees for valuable hints. \section{General arguments and main results} We consider the initial value problem $u(x,y,0)=u_0(x,y)$ for equation~\eqref{NV}, where either \begin{itemize} \item the data $u_0$ and the solution $u(t)$ at time $t$ belong to some classical Sobolev space $H^s(\mathbb R^2)$ of functions defined on the whole plane (Cauchy problem, nonperiodic case), or \item $u_0$ and $u(t)$ are elements of $H^s_0(\mathbb T^2)$, the Sobolev space of (in both directions) periodic functions on $\mathbb R^2$ of mean zero, i.e.~we assume $$\int_{\mathbb T^2}u_0(x,y)dxdy=0,$$ which is preserved under the evolution of (NV).
\end{itemize} In contrast to the majority of the more recent literature we follow Bogdanov and consider data and solution to be complex valued. In the end the uniqueness part of our results will show that solutions with real valued data remain real valued. In both cases considered here the operators $\overline{\partial}^{-1}\partial$ and $\partial^{-1}\overline{\partial}$ are well-defined as bounded Fourier multipliers from $H^s$ to $H^s$. To be more explicit, let us write the Fourier transform in the space variables as $$\mathcal{F}_{xy}f(\xi, \eta)= c \int e^{-ix\xi-iy\eta}f(x,y)dxdy,$$ where the integral is taken over $\mathbb R^2$ or over $\mathbb T^2$, respectively. Then we have $$\overline{\partial}^{-1}\partial=\mathcal{F}_{xy}^{-1}\frac{i\xi+\eta}{i \xi -\eta}\mathcal{F}_{xy}=\mathcal{F}_{xy}^{-1}\frac{\xi^2-\eta^2-2i\xi\eta}{\xi^2 +\eta^2}\mathcal{F}_{xy}=:\frac{\partial_x^2-\partial_y^2}{\Delta}-i\frac{2\partial_x\partial_y}{\Delta}$$ and $$\partial^{-1}\overline{\partial}=\mathcal{F}_{xy}^{-1}\frac{i\xi-\eta}{i \xi +\eta}\mathcal{F}_{xy}=\mathcal{F}_{xy}^{-1}\frac{\xi^2-\eta^2+2i\xi\eta}{\xi^2 +\eta^2}\mathcal{F}_{xy}=:\frac{\partial_x^2-\partial_y^2}{\Delta}+i\frac{2\partial_x\partial_y}{\Delta}.$$ Since $\partial^3 + \overline{\partial}^3=\frac{1}{4}(\partial_x^3-3\partial_x\partial_y^2)$ we can rewrite equation~\eqref{NV} in real cartesian coordinates as $$\partial_tu + \frac{1}{4}(\partial_x^3-3\partial_x\partial_y^2)u +3N(u)=0, $$ where \begin{equation}\label{N} N(u)=\partial_x(u\frac{\partial_x^2-\partial_y^2}{\Delta}u)-\partial_y(u\frac{2\partial_x\partial_y}{\Delta}u). \end{equation} Since constant factors in front of the nonlinearity do not play any role in the local analysis ahead, we may, after rescaling the time variable, consider the equation \begin{equation}\label{NVreal} \partial_tu + (\partial_x^3-3\partial_x\partial_y^2)u = N(u) \end{equation} with $N(u)$ as in~\eqref{N} and initial condition \begin{equation}\label{IC} u(x,y,0)=u_0(x,y).
\end{equation} Solutions of the linear part of this equation with initial value $u_0$ will be denoted by $U_{\varphi}(t)u_0=e^{-it\varphi(D)}u_0$ with the phase function $\varphi(\xi, \eta)=\xi^3-3\xi\eta^2$, which determines the weight in the Bourgain spaces adequate for our problem. For the \emph{nonperiodic} case we define $$X_{s,b}:=\{f \in \mathcal{S} '(\mathbb R^3): \|f\|_{X_{s,b}}< \infty\}$$ with \begin{align*} \|f\|^2_{X_{s,b}}&:=\|\langle \tau - \varphi(\xi,\eta)\rangle^b\langle (\xi,\eta) \rangle^s\widehat{f}\|^2_{L^2_{\tau \xi \eta}} \\&=\int_{\mathbb R^3}\langle \tau - \varphi(\xi,\eta)\rangle^{2b}\langle (\xi,\eta) \rangle^{2s}|\widehat{f}(\xi,\eta,\tau)|^2 d\tau d\xi d\eta , \end{align*} where, for $x \in \mathbb R^n$, $\langle x \rangle = (1+|x|^2)^{\frac12}$ and $\widehat{f}$ denotes the Fourier transform with respect to all variables including time. The corresponding time restriction norm is denoted by $$\|f\|_{X_{s,b}^{\delta}}:=\inf\{\|\widetilde{f}\|_{X_{s,b}}:\widetilde{f} \in X_{s,b}, \widetilde{f}\big|_{\mathbb R^2 \times (- \delta, \delta)}=f \},$$ defining our solution space, which is embedded continuously in $C([-\delta,\delta], H^s(\mathbb R^2))$, if $b > \frac12$. Similarly, for the \emph{periodic} case we set $$\dot{X}_{s,b}:=\{f \in \mathcal{S} '(\mathbb R^3): f \mbox{ is periodic in space and } \|f\|_{\dot{X}_{s,b}}< \infty\},$$ where now $$\|f\|^2_{\dot{X}_{s,b}}:= \int_{\mathbb R} \sum_{(\xi, \eta) \in \mathbb Z^2\setminus \{(0,0)\}}\langle \tau - \varphi(\xi,\eta)\rangle^{2b}\langle (\xi,\eta) \rangle^{2s}|\widehat{f}(\xi,\eta,\tau)|^2 d\tau.$$ The restriction norm spaces here are denoted by $\dot{X}^{\delta}_{s,b}$. We will have to choose the parameter $b=\frac12$, at the cost of losing the embedding into a space of continuous functions.
In order to recover the continuity of the solution in the periodic case we will also prove estimates in the function spaces defined by $$\|f\|^2_{\dot{Y}^s}:=\sum_{(\xi, \eta) \in \mathbb Z^2\setminus \{(0,0)\}}\left(\int_{\mathbb R}\langle \tau - \varphi(\xi,\eta)\rangle^{-1}\langle (\xi,\eta) \rangle^{s}|\widehat{f}(\xi,\eta,\tau)| d\tau \right)^2,$$ similar to those introduced in~\cite{GTV}. Now we are able to give a precise statement of our results. Concerning the nonperiodic case we have: \begin{theorem}\label{NVcont} Let $s>-\frac34$ and $u_0 \in H^s(\mathbb R^2)$. Then there exist $b > \frac12$ and $\delta=\delta (\|u_0\|_{H^s})>0$, such that there is a unique solution $u \in X_{s,b}^{\delta}$ of~\eqref{NVreal},~\eqref{IC}. Moreover, for every $R>0$ the solution operator $$S_R: H^s(\mathbb R^2) \supset B_R(0) \to X_{s,b}^{\delta(R)}, \quad u_0 \mapsto S_R(u_0):=u$$ is Lipschitz continuous. \end{theorem} Similarly, for the periodic case we will prove: \begin{theorem}\label{NVper} Let $s>-\frac15$ and $u_0 \in H_0^s(\mathbb T^2)$. Then there exist $\delta=\delta (\|u_0\|_{H^s})>0$ and a unique solution $u \in \dot{X}_{s,\frac12}^{\delta}\cap C([-\delta,\delta], H_0^s(\mathbb T^2))$ of~\eqref{NVreal},~\eqref{IC}. For every $R>0$ the solution operator $$S_R: H_0^s(\mathbb T^2) \supset B_R(0) \to \dot{X}_{s,\frac12}^{\delta(R)}, \quad u_0 \mapsto S_R(u_0):=u$$ is Lipschitz continuous. 
\end{theorem} \section{Symmetrization and the resonance function} We write the nonlinearity~\eqref{N} as $N(u)=\frac12 B(u,u)$ with the bilinear operator $$B(u,v)=\partial_x \left( \Big(\frac{\partial_x^2-\partial_y^2}{\Delta}u\Big)v + u\Big(\frac{\partial_x^2-\partial_y^2}{\Delta}v\Big)\right) - \partial_y \left( \Big(\frac{2\partial_x\partial_y}{\Delta}u\Big)v + u\Big(\frac{2\partial_x\partial_y}{\Delta}v\Big)\right).$$ Then the partial Fourier transform of $B(u,v)$ with respect to the space variables becomes (ignoring constants and the time dependence) \begin{align*} \mathcal{F}_{xy}B(u,v)(\xi,\eta)= \xi \int_* \left(\frac{\xi_1^2-\eta_1^2}{\xi_1^2+\eta_1^2}+\frac{\xi_2^2-\eta_2^2}{\xi_2^2+\eta_2^2} \right)\mathcal{F}_{xy}u(\xi_1,\eta_1)\mathcal{F}_{xy}v(\xi_2,\eta_2)d\xi_1d\eta_1 \\ - \eta \int_* \left(\frac{2\xi_1\eta_1}{\xi_1^2+\eta_1^2}+\frac{2\xi_2\eta_2}{\xi_2^2+\eta_2^2} \right)\mathcal{F}_{xy}u(\xi_1,\eta_1)\mathcal{F}_{xy}v(\xi_2,\eta_2)d\xi_1d\eta_1, \end{align*} where $\displaystyle \int_*$ denotes integration under the convolution constraint $(\xi,\eta)=(\xi_1,\eta_1)+(\xi_2,\eta_2)$. For the complete multiplier in this expression an elementary calculation shows that \begin{align*} m(\xi_1,\xi_2,\eta_1,\eta_2):= \xi \left(\frac{\xi_1^2-\eta_1^2}{\xi_1^2+\eta_1^2}+\frac{\xi_2^2-\eta_2^2}{\xi_2^2+\eta_2^2} \right) -\eta \left(\frac{2\xi_1\eta_1}{\xi_1^2+\eta_1^2}+\frac{2\xi_2\eta_2}{\xi_2^2+\eta_2^2} \right) \\ = \frac{2(\xi_1\xi_2+\eta_1\eta_2)}{(\xi_1^2+\eta_1^2)(\xi_2^2+\eta_2^2)}\big(\xi(\xi_1\xi_2-\eta_1\eta_2)-\eta(\xi_1\eta_2+\xi_2\eta_1) \big). \end{align*} We wish to show estimates of the type $$\|B(u,v)\|_{X_{s,b'}} \lesssim \|u\|_{X_{s,b}}\|v\|_{X_{s,b}}$$ with $s$ as low as possible and $b'=-\frac12+2\varepsilon$, $b=\frac12+\varepsilon$, where $\varepsilon >0$ ($\varepsilon=0$ in the periodic case). 
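The ``elementary calculation'' behind the factorization of the multiplier $m$ need not be done by hand; the following sympy snippet (ours, purely a verification aid) checks the identity under the convolution constraint $(\xi,\eta)=(\xi_1,\eta_1)+(\xi_2,\eta_2)$:

```python
import sympy as sp

x1, y1, x2, y2 = sp.symbols('xi1 eta1 xi2 eta2')
xi, eta = x1 + x2, y1 + y2  # convolution constraint

# complete multiplier of the symmetrized nonlinearity
m = (xi * ((x1**2 - y1**2)/(x1**2 + y1**2) + (x2**2 - y2**2)/(x2**2 + y2**2))
     - eta * (2*x1*y1/(x1**2 + y1**2) + 2*x2*y2/(x2**2 + y2**2)))

# claimed factorized form
rhs = (2*(x1*x2 + y1*y2) / ((x1**2 + y1**2)*(x2**2 + y2**2))
       * (xi*(x1*x2 - y1*y2) - eta*(x1*y2 + x2*y1)))

# the difference is identically zero as a rational function
assert sp.cancel(m - rhs) == 0
```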
Choosing $f$, $g$ such that $\|f\|_{L^2_{\xi\eta\tau}}=\|u\|_{X_{s,b}}$ and $\|g\|_{L^2_{\xi\eta\tau}}=\|v\|_{X_{s,b}}$ the previous inequality turns into $$\|\langle \tau-\varphi(\xi,\eta) \rangle^{b'} \langle(\xi,\eta) \rangle^s I_{f,g}\|_{L^2_{\xi\eta\tau}} \lesssim \|f\|_{L^2_{\xi\eta\tau}}\|g\|_{L^2_{\xi\eta\tau}}$$ with \begin{align*} I_{f,g}(\xi,\eta,\tau):= \int_*m(\xi_1,\xi_2,\eta_1,\eta_2)\langle(\xi_1,\eta_1) \rangle^{-s}\langle \tau_1-\varphi(\xi_1,\eta_1) \rangle^{-b}f(\xi_1,\eta_1,\tau_1) \times \ldots \\ \ldots \langle(\xi_2,\eta_2) \rangle^{-s}\langle \tau_2-\varphi(\xi_2,\eta_2) \rangle^{-b}g(\xi_2,\eta_2,\tau_2)d\xi_1 d\eta_1 d \tau_1, \end{align*} where now $(\xi,\eta,\tau)=(\xi_1,\eta_1,\tau_1)+(\xi_2,\eta_2,\tau_2)$. $f$ and $g$ are assumed to be nonnegative and $\displaystyle \int_* \ldots d\xi_1 d\eta_1$ may denote integration with respect to the Lebesgue measure on $\mathbb R^2$ as well as alternatively the counting measure on $\mathbb Z^2 \setminus \{(0,0)\}$. Now the resonance function, i.e.~the quantity controlled by $$\max\{|\tau-\varphi(\xi,\eta)|, |\tau_1-\varphi(\xi_1,\eta_1)|, |\tau_2-\varphi(\xi_2,\eta_2)|\},$$ for our nonlinearity, is given by $$r(\xi_1,\xi_2,\eta_1,\eta_2):=\varphi(\xi,\eta)-\varphi(\xi_1,\eta_1)-\varphi(\xi_2,\eta_2)=3\big(\xi(\xi_1\xi_2-\eta_1\eta_2)-\eta(\xi_1\eta_2+\xi_2\eta_1)\big).$$ Again we leave the elementary verification of the last equality to the reader. 
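The omitted verification can equally be delegated to a computer algebra system; this check (ours, purely supplementary) expands both sides with $\varphi(\xi,\eta)=\xi^3-3\xi\eta^2$ and the convolution constraint built in:

```python
import sympy as sp

x1, y1, x2, y2 = sp.symbols('xi1 eta1 xi2 eta2')
phi = lambda a, b: a**3 - 3*a*b**2  # phase function
xi, eta = x1 + x2, y1 + y2          # convolution constraint

# resonance function r = phi(xi, eta) - phi(xi1, eta1) - phi(xi2, eta2)
r = phi(xi, eta) - phi(x1, y1) - phi(x2, y2)
assert sp.expand(r - 3*(xi*(x1*x2 - y1*y2) - eta*(x1*y2 + x2*y1))) == 0
```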
Comparing the expressions for $m$ and $r$ we arrive at $$m(\xi_1,\xi_2,\eta_1,\eta_2)= \frac23 \frac{\xi_1\xi_2+\eta_1\eta_2}{(\xi_1^2+\eta_1^2)(\xi_2^2+\eta_2^2)} r(\xi_1,\xi_2,\eta_1,\eta_2),$$ which gives, for $\theta \in (0,1)$, the inequality \begin{align}\label{resonance} |m(\xi_1,\xi_2,\eta_1,\eta_2)|\le \frac{|r(\xi_1,\xi_2,\eta_1,\eta_2)|^{\theta}|r(\xi_1,\xi_2,\eta_1,\eta_2)|^{1-\theta}}{|(\xi_1,\eta_1)||(\xi_2,\eta_2)|} \\ \nonumber \lesssim |(\xi,\eta)|^{1-\theta}|(\xi_1,\eta_1)|^{-\theta}|(\xi_2,\eta_2)|^{-\theta}|r(\xi_1,\xi_2,\eta_1,\eta_2)|^{\theta}, \end{align} the latter since $|r(\xi_1,\xi_2,\eta_1,\eta_2)|\le 3|(\xi,\eta)||(\xi_1,\eta_1)||(\xi_2,\eta_2)|$; indeed, writing $z = \xi + i\eta$ and $z_j = \xi_j + i\eta_j$, we have $\varphi(\xi,\eta)=\Re (z^3)$, so that under the convolution constraint $z=z_1+z_2$ the resonance function becomes $r=\Re (z^3-z_1^3-z_2^3)=3\Re (z z_1 z_2)$. This will be used especially with $\theta = -b' \approx \frac12$. \section{The nonperiodic case} In addition to the structure of the nonlinearity discussed above we will make use of smoothing estimates of Strichartz-type for the unitary group $(U_{\varphi}(t))_{t \in \mathbb R}$. Here and below $I^{\sigma}=\mathcal{F}_{xy}^{-1}|(\xi,\eta)|^{\sigma}\mathcal{F}_{xy}$ represents the Riesz potential operator of order $- \sigma$ with respect to the space variables. \begin{lemma} For $u_0 \in L^2(\mathbb R^2)$ let $U_{\varphi}u_0$ denote the solution of $$\partial_t u+ (\partial_x^3-3\partial_x\partial_y^2)u=0 \quad \mbox{with} \quad u(0)=u_0.$$ Then the following estimates hold true: \begin{itemize} \item If $p>3$ and $\displaystyle \frac{3}{p}+\frac{2}{q}=1$: \begin{equation}\label{Str0} \|U_{\varphi}u_0\|_{L_t^pL^q_{xy}} \lesssim \|u_0\|_{L^2_{xy}}, \end{equation} \item if $p>2$ and $\displaystyle \frac{2}{p}+\frac{2}{q}=1$: \begin{equation}\label{Str} \|I^{\frac{1}{p}}U_{\varphi}u_0\|_{L_t^pL^q_{xy}} \lesssim \|u_0\|_{L^2_{xy}}. \end{equation} \end{itemize} \end{lemma} \emph{Citation and proof:}~\eqref{Str0} follows from~\eqref{Str} by a Sobolev embedding.
To prove~\eqref{Str} one starts with the estimate $$\|IU_{\varphi}(t)u_0\|_{L^{\infty}_{xy}} \lesssim |t|^{-1}\|u_0\|_{L^1_{xy}},$$ which is part 2 of Theorem 5.6 in~\cite{BKS}. For a dyadic piece of the data $P_{\Delta l} u = \mathcal{F}_{xy}^{-1}\chi_{\{|(\xi,\eta)| \sim 2^l\}}\mathcal{F}_{xy}u$ this reads $$\|U_{\varphi}(t)P_{\Delta l}u_0\|_{L^{\infty}_{xy}} \lesssim |t|^{-1}2^{-l}\|u_0\|_{L^1_{xy}}.$$ Now the standard proof of the Strichartz estimates using Riesz-Thorin interpolation, the Hardy-Littlewood-Sobolev inequality and the $TT^*$-argument applies. Since one has to deal with a gain of derivatives we refer to~\cite{GV}*{Section~3} for more details. \hfill $\Box$ \quad \\ We remark that the endpoints $p=3$ in~\eqref{Str0} and $p=2$ in~\eqref{Str} are excluded. Considering the results of Montgomery-Smith \cite{MS} and Tao \cite{T} we strongly believe the latter endpoint estimate to fail. By the transfer principle~\cite{GTV}*{Lemma~2.3} we obtain corresponding $X_{s,b}$-estimates. A soft argument dealing with low frequencies allows us to infer that \begin{equation}\label{StrX} \|u\|_{L_t^pL^q_{xy}} \lesssim \|u\|_{X_{-\frac{1}{p}, b}} \end{equation} if $p>2$, $\displaystyle \frac{2}{p}+\frac{2}{q}=1$, and $\displaystyle b> \frac{1}{2}$. Now we are prepared to prove the central bilinear estimate of this section, which (inserted into the framework of Bourgain's method) leads to Theorem~\ref{NVcont}. \begin{prop} Let $ s>-\frac34$ and $ b' \le - \frac38$ as well as $ b'<s+\frac14$. Then for all $ b> \frac{1}{2}$ the estimate \begin{equation}\label{keycont} \|B(u,v)\|_{X_{s,b'}} \lesssim \|u\|_{X_{s,b}}\|v\|_{X_{s,b}} \end{equation} holds true. \end{prop} \begin{proof} Without loss of generality we assume $s \le - \frac58$ so that $s\le -1-b'$.
The proof consists of a case-by-case discussion, essentially depending on which of the weights \begin{equation}\label{modulations} \langle \tau-\varphi(\xi,\eta) \rangle, \quad \langle \tau_1-\varphi(\xi_1,\eta_1) \rangle, \quad\langle \tau_2-\varphi(\xi_2,\eta_2) \rangle \end{equation} is the largest and thus controls the resonance function. We start with a trivial low frequency issue.\\ \quad \\ \emph{Case 0:} $|(\xi_1,\eta_1)|\le 1$ and $|(\xi_2,\eta_2)|\le 1$. In this case the multiplier $m$ is bounded, so that the left hand side of~\eqref{keycont} can be estimated by \[ \|uv\|_{L^2_{xyt}}\le \|u\|_{L^4_{xyt}}\|v\|_{L^4_{xyt}}\lesssim \|u\|_{X_{0,b}}\|v\|_{X_{0,b}}\lesssim \|u\|_{X_{s,b}}\|v\|_{X_{s,b}}, \] where we have used~\eqref{StrX} and the support restriction of $\widehat{u}$ and $\widehat{v}$ to $\{|(\xi,\eta)| \le 1\}$.\\ \quad \\ \emph{Case 1:} $\langle \tau-\varphi(\xi,\eta) \rangle$ is maximal.\\ \emph{Subcase 1.1:} $|(\xi_1,\eta_1)|\le 1 \le |(\xi_2,\eta_2)|$. In this case we have $|(\xi,\eta)| \sim |(\xi_2,\eta_2)|$, which reduces the consideration to the case $s=0$. We use~\eqref{resonance} with $\theta = \frac38$ to obtain $$\|B(u,v)\|_{X_{0,b'}} \lesssim \|I^{-\theta}u\|_{L^4_{xyt}}\|I^{\frac14}v\|_{L^4_{xyt}},$$ where by~\eqref{StrX} the second factor is bounded by $\|v\|_{X_{0,b}}$. For the first factor we use a Sobolev embedding and the fact that $\widehat{u}$ is restricted to $\{|(\xi,\eta)| \le 1\}$ to see that $$\|I^{-\theta}u\|_{L^4_{xyt}} \lesssim \|u\|_{L_t^4L^2_{xy}} \lesssim \|u\|_{X_{0,b}},$$ where in the last step a time embedding was applied.\\ \emph{Subcase 1.2:} $|(\xi_2,\eta_2)|\le 1 \le |(\xi_1,\eta_1)|$ needs no discussion by symmetry.\\ \emph{Subcase 1.3:} $|(\xi_1,\eta_1)|\ge 1$ and $|(\xi_2,\eta_2)| \ge 1$.
We use~\eqref{resonance} with $\theta = -b'$ and without loss of generality $s+1+b'\le0$ to infer that the contribution of this case is bounded by $$\|(I^{b'}u)(I^{b'}v)\|_{L^2_{xyt}}\le \|I^{b'}u\|_{L^4_{xyt}}\|I^{b'}v\|_{L^4_{xyt}}\lesssim \|u\|_{X_{s,b}}\|v\|_{X_{s,b}},$$ the latter by~\eqref{StrX} and the assumption $b'-\frac14 < s$. \quad \\ \emph{Case 2:} $\langle \tau_1-\varphi(\xi_1,\eta_1) \rangle$ is maximal. \\ \emph{Subcase 2.1:} $|(\xi_1,\eta_1)|\le 1 \le |(\xi_2,\eta_2)|$. Because of $|(\xi,\eta)| \sim |(\xi_2,\eta_2)|$ we may consider $s=0$ only. We write $\Lambda^b=\mathcal{F}^{-1}\langle \tau-\varphi(\xi,\eta) \rangle^b \mathcal{F}$ and use~\eqref{resonance} with $\theta = \frac38$ to see that the contribution of this region is bounded by \begin{align*} \|I^{\frac14}((I^{-\frac{3}{8}}\Lambda^{\frac{3}{8}+b+b'}u)I^{1-\frac34-\frac14}v)\|_{X_{0,-b}} \lesssim \|(I^{-\frac{3}{8}}\Lambda^{\frac{3}{8}+b+b'}u)v\|_{L^{\frac43}_{xyt}} \\ \lesssim \|I^{-\frac{3}{8}}\Lambda^{\frac{3}{8}+b+b'}u\|_{L^2_tL^4_{xy}}\|v\|_{L^4_tL^2_{xy}} \lesssim \|\Lambda^bu\|_{L^2_{xyt}}\|v\|_{X_{0,\frac14}}\le\|u\|_{X_{0,b}}\|v\|_{X_{0,b}}. \end{align*} Here we have used the dual version of the $L^4$-Strichartz-type estimate, H\"older's inequality and Sobolev-type embeddings in space (first factor) and time (second factor).\\ \emph{Subcase 2.2:} $|(\xi_2,\eta_2)|\le 1 \le |(\xi_1,\eta_1)|$. Considering again $s=0$ and choosing $\theta = \frac38$ in~\eqref{resonance} we get the bound $$\|\Lambda^{\frac{3}{8}+b+b'}u\|_{L^2_{xyt}}\|I^{-\frac38}v\|_{L^4_{xyt}} \lesssim \|u\|_{X_{0,b}}\|v\|_{X_{0,b}}.$$ \emph{Subcase 2.3:} $|(\xi_1,\eta_1)|\ge 1$ and $|(\xi_2,\eta_2)| \ge 1$. Here we choose $\theta = -b'$ in~\eqref{resonance}, recall that $s+1+b'\le0$ and obtain the bound \begin{equation}\label{bd} \|(I^{b'}\Lambda^bu)(I^{b'}v)\|_{X_{0,-b}}. \end{equation} Now there are two possibilities:\\ \emph{2.3.1:} $|(\xi_1,\eta_1)| \lesssim |(\xi,\eta)|$.
We use the dual version of the $L^4$-Strichartz-type estimate, H\"older, and the estimate itself for the second factor to get $$\eqref{bd} \lesssim \|(I^{b'-\frac14}\Lambda^bu)(I^{b'}v)\|_{L^{\frac43}_{xyt}}\lesssim \|I^{b'-\frac14}\Lambda^{b}u\|_{L^2_{xyt}}\|I^{b'}v\|_{L^4_{xyt}}\lesssim \|u\|_{X_{s,b}}\|v\|_{X_{s,b}}.$$ \emph{2.3.2:} $|(\xi_1,\eta_1)| \lesssim |(\xi_2,\eta_2)|$. We start with a time embedding, apply H\"older's inequality, a Sobolev embedding in space and the almost endpoint version of the Strichartz-type estimate to obtain \begin{align*} \eqref{bd} & \lesssim \|(I^{b'-\frac14+}\Lambda^bu)(I^{b'+\frac14-}v)\|_{L_t^{1+}L^2_{xy}} \\ & \lesssim \|I^{b'-\frac14+}\Lambda^{b}u\|_{L_t^2L^{2+}_{xy}}\|I^{b'+\frac14-}v\|_{L_t^{2+}L^{\infty-}_{xy}}\lesssim \|u\|_{X_{s,b}}\|v\|_{X_{s,b}}. \end{align*} The third case, where $\langle \tau_2-\varphi(\xi_2,\eta_2) \rangle$ is maximal, needs no consideration by symmetry. \end{proof} \section{The periodic case} To prove a bilinear Strichartz-type estimate for the periodic problem, we rely on the following number theoretic result due to W.~M.~Schmidt: \begin{theorem*}[Schmidt] Call $n(\mathfrak{C}, N)$ the number of integral points on the curve $\mathfrak{C} = \set{(x, f(x)) \mid x \in \mathbb{R}}$ in an arbitrary square of side length $N \ge 1$. Then, if $f''$ exists and is weakly monotonic, the estimate \begin{equation}\label{generalNTestimate} n(\mathfrak{C}, N) \le c(\varepsilon) N^{\gamma + \varepsilon} \end{equation} holds true for $\gamma = \frac{3}{5}$ with a constant $c(\varepsilon)$ independent of the particular curve. \end{theorem*} \noindent See \cite{Schmidt}*{Theorem~1}. We will apply this estimate to \begin{enumerate}[(i)] \item classical hyperbolas described by \[ a(x^2 - y^2) + 2bxy = c \qquad (c \not= 0) \] and to \item cubic hyperbola-like curves of the form \[ (x + a)(x^2 - y^2) = 2(y + b)xy, \] \end{enumerate} where $a$, $b$ and $c$ are parameters.
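For concreteness, the counting function $n(\mathfrak{C}, N)$ can be evaluated by brute force for small instances of the type (i) curves; the following script (ours, purely illustrative and not part of any proof) counts integral points of $a(x^2-y^2)+2bxy=c$ in the square $[-N,N]^2$. For these sample hyperbolas the counts stay far below the trivial $O(N)$ bound, in line with Schmidt's exponent $\gamma=\frac35$:

```python
def hyperbola_count(a, b, c, N):
    """Number of integral points (x, y) with a*(x^2 - y^2) + 2*b*x*y == c and |x|, |y| <= N."""
    return sum(1 for x in range(-N, N + 1)
                 for y in range(-N, N + 1)
                 if a * (x * x - y * y) + 2 * b * x * y == c)

# x^2 - y^2 = 1 factors as (x - y)(x + y) = 1, leaving only (1, 0) and (-1, 0):
print(hyperbola_count(1, 0, 1, 50))  # 2
# x^2 - y^2 = 3 is solved exactly by the four points (+-2, +-1):
print(hyperbola_count(1, 0, 3, 50))  # 4
```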
Schmidt's Theorem applies to these curves, unless they degenerate (partially) into straight lines. It is possible that sharper estimates with lower exponents $\gamma$ hold true for the curves in (i) and (ii). Thus we decided to state and prove several subsequent estimates depending on the exponent $\gamma \in [0, 1)$, assuming~\eqref{generalNTestimate} to be applicable. Next we define the bilinear projection operator $Q$ by \[ \widehat{Q(u, v)}(\xi, \eta, \tau) = \sum_* (1 - \delta_{\xi,0}\delta_{\xi_1,0}) \hat{u}(\xi_1, \eta_1, \tau_1) \hat{v}(\xi_2, \eta_2, \tau_2), \] where $\sum_*$ indicates summation under the constraint introduced by the convolution $(\xi, \eta, \tau) = (\xi_1, \eta_1, \tau_1) + (\xi_2, \eta_2, \tau_2)$. $Q$ acts only on the first space variable. \begin{prop} Let $\gamma \in [0,1)$ be such that~\eqref{generalNTestimate} holds for the nondegenerate curves of type (i) and (ii). For $B_R \subset \mathbb R^2$ a disc with radius $R > 0$ and arbitrary center and $u_0, v_0 \in L^2_{xy}$, where $\supp{\hat{u}_0} \subset B_R$, one has \begin{equation}\label{bilinOrig} \|Q(U_\varphi u_0, U_\varphi v_0)\|_{L^2_{xyt}} \lesssim R^{\frac{\gamma}{2}+} \|u_0\|_{L^2_{xy}} \|v_0\|_{L^2_{xy}}. \end{equation} \end{prop} \begin{remark*} Without the projector $Q$ the best possible estimate is \[ \|U_\varphi u_0 U_\varphi v_0\|_{L^2_{xyt}} \lesssim R^{\frac{1}{2}} \|u_0\|_{L^2_{xy}} \|v_0\|_{L^2_{xy}}, \] which can be seen by the example $\hat{u}_0(\xi,\eta) = \hat{v}_0(\xi,\eta) = \delta_{\xi, 0}\chi_{[-R,R]}(\eta)$. But~\eqref{bilinOrig} will work in our application to the nonlinearity, since the bilinear Fourier multiplier $m$ introduced at the beginning of Section 3 vanishes if $\xi = \xi_1 = \xi_2 = 0$.
\end{remark*} \begin{proof} We split \begin{align*} \mathcal{F}_{xyt}Q(U_\varphi u_0, U_\varphi v_0)(\xi, \eta, \tau) &= \sum_* (1 - \delta_{\xi,0}\delta_{\xi_1,0}) \delta_{\tau,\varphi(\xi_1, \eta_1) + \varphi(\xi_2, \eta_2)} \hat{u}_0(\xi_1, \eta_1) \hat{v}_0(\xi_2, \eta_2) \\\nonumber &= \RN{1} + \RN{2}, \end{align*} where for $\RN{1}$ we assume that $\tau - \frac{\xi^3}{4} + \frac{3}{4}\xi\eta^2 \not= 0$. This term can be estimated by Cauchy-Schwarz, \begin{align}\label{afterFTCS} \|\RN{1}\|_{L^2_{\xi\eta\tau}}^2 &\lesssim \sum_{(\xi, \eta,\tau) \in \mathbb Z^3} \Sigma_1(\xi, \eta, \tau) \sum_* \delta_{\tau, \varphi(\xi_1, \eta_1) + \varphi(\xi_2, \eta_2)} |\hat{u}_0(\xi_1, \eta_1) \hat{v}_0(\xi_2, \eta_2)|^2, \end{align} noting that $\delta_{\tau, \varphi(\xi_1, \eta_1) + \varphi(\xi_2, \eta_2)}^2 = \delta_{\tau, \varphi(\xi_1, \eta_1) + \varphi(\xi_2, \eta_2)}$ and $\hat{u}_0 = \chi_{R} \hat{u}_0$, where we define \[ \Sigma_1(\xi, \eta, \tau) = \sum_* \delta_{\tau, \varphi(\xi_1, \eta_1) + \varphi(\xi_2, \eta_2)} \chi_R(\xi_1, \eta_1). \] If we are now able to prove an estimate of the type $\Sigma_1(\xi, \eta, \tau) \lesssim R^{\gamma+}$ we can further bound \begin{align*} \eqref{afterFTCS}&\lesssim R^{\gamma+} \sum_{(\xi, \eta) \in \mathbb Z^2} \sum_* \left(\sum_{\tau \in \mathbb Z} \delta_{\tau, \varphi(\xi_1, \eta_1) + \varphi(\xi_2, \eta_2)}\right) |\hat{u}_0(\xi_1, \eta_1) \hat{v}_0(\xi_2, \eta_2)|^2\\ &\le R^{\gamma+} \|u_0\|_{L^2_{xy}}^2 \|v_0\|_{L^2_{xy}}^2, \end{align*} which is the asserted bound for the contribution of $\RN{1}$. In order to bound $\Sigma_1(\xi, \eta, \tau)$ we use the substitution $\xi_1 = x + \frac{\xi}{2}$ and $\eta_1 = y + \frac{\eta}{2}$. A lengthy but elementary calculation shows that then \[ \tau - \varphi(\xi_1, \eta_1) - \varphi(\xi_2, \eta_2) = \tau - \frac{1}{4}\xi^3 + \frac{3}{4} \xi \eta^2 - 3\xi(x^2 - y^2) + 6\eta xy =: K(\xi, \eta, \tau, x, y).
\] One immediately identifies this to be a curve of type (i) in the variables $x$ and $y$; $(\xi, \eta)$ and $\tau$ only play the role of parameters. The sum to be estimated now reads \[ \Sigma_1(\xi, \eta, \tau) = \sum_{(x,y)\in\mathbb Z^2} \delta_{0, K(\xi, \eta, \tau, x, y)} \chi_{2R}(2x + \xi, 2y + \eta), \] where, because of the substitution, we have had to double the radius of the disc. Now the general result~\eqref{generalNTestimate} about curves is applicable, since this sum merely counts the integral points within some disc of radius $\lesssim R$ on the hyperbola $\{K = 0\}$. Hence, as desired, $\Sigma_1(\xi, \eta, \tau) \lesssim R^{\gamma+}$ and this completes the proof for $\RN{1}$. The second contribution is, with $(x, y) = (\xi - 2\xi_1, \eta - 2\eta_1)$, \begin{align*} \RN{2} &= \delta_{\tau, \frac{\xi^3}{4} - \frac{3}{4}\xi\eta^2} \sum_* \delta_{(x + 2\xi_1)(x^2 - y^2), 2(y + 2\eta_1) xy} (1 - \delta_{\xi,0}\delta_{\xi_1,0}) \hat{u}_0(\xi_1, \eta_1) \hat{v}_0(\xi_2, \eta_2)\\ & =: \delta_{\tau, \frac{\xi^3}{4} - \frac{3}{4}\xi\eta^2} \cdot \Sigma_2(\xi, \eta). \end{align*} To estimate $\|\RN{2}\|_{L^2_{\xi\eta\tau}} = \|\Sigma_2\|_{L^2_{\xi\eta}}$ we decompose $\mathbb{R}^2 = \bigcup_{\alpha \in \mathbb{Z}^2} Q_\alpha$, where $Q_\alpha$ are disjoint squares of side length $2R$, so that \begin{equation} \|\RN{2}\|_{L^2_{\xi\eta\tau}}^2 = \sum_{\alpha \in \mathbb{Z}^2} \|\chi_{Q_\alpha} \Sigma_2\|_{L^2_{\xi\eta}}^2. \end{equation} Next we estimate $\|\chi_{Q_\alpha} \Sigma_2\|_{L^2_{\xi\eta}}$ for fixed $\alpha \in \mathbb{Z}^2$ by duality. For that purpose let $\psi \in L^2_{\xi\eta}$ with $\|\psi\|_{L^2_{\xi\eta}} \le 1$.
Then \begin{align}\label{dualityEst} \langle \psi, \chi_{Q_\alpha} \Sigma_2 \rangle_{L^2_{\xi\eta}} &= \sum_{(\xi,\eta) \in \mathbb Z^2} \psi(\xi,\eta) \chi_{Q_\alpha}(\xi,\eta) \sum_* \hat{u}_0(\xi_1, \eta_1) \hat{v}_0(\xi_2, \eta_2) \times \ldots \\ \nonumber &\qquad\ldots (1 - \delta_{\xi,0}\delta_{\xi_1,0}) \delta_{(x + 2\xi_1)(x^2 - y^2), 2(y + 2\eta_1) xy} \\ \nonumber &= \sum_{(\xi_1, \eta_1) \in \mathbb Z^2} \hat{u}_0(\xi_1, \eta_1) \sum_{(\xi,\eta) \in \mathbb{Z}^2} \chi_{Q_\alpha}(\xi,\eta) \psi(\xi,\eta) \hat{v}_0(\xi_2, \eta_2) \times\ldots \\ \nonumber & \qquad\ldots (1 - \delta_{\xi,0}\delta_{\xi_1,0}) \delta_{(x + 2\xi_1)(x^2 - y^2), 2(y + 2\eta_1) xy}. \end{align} An application of the Cauchy-Schwarz inequality to the inner sum gives \begin{align}\label{squareContrib} & \sum_{(\xi,\eta) \in \mathbb{Z}^2} \chi_{Q_\alpha}(\xi,\eta) \psi(\xi,\eta) (1 - \delta_{\xi,0}\delta_{\xi_1,0}) \delta_{(x + 2\xi_1)(x^2 - y^2), 2(y + 2\eta_1) xy} \hat{v}_0(\xi_2, \eta_2) \\ \nonumber & \qquad \le \Sigma_3(\xi_1, \eta_1)^\frac{1}{2} \cdot \left( \sum_{(\xi,\eta) \in \mathbb{Z}^2} |\psi(\xi,\eta)|^2 |\hat{v}_0(\xi - \xi_1, \eta - \eta_1)|^2 \right)^\frac{1}{2}, \end{align} where we have shortened the first factor to \begin{equation}\label{type2count} \Sigma_3(\xi_1, \eta_1) := \sum_{(\xi,\eta) \in \mathbb{Z}^2} (1 - \delta_{\xi,0}\delta_{\xi_1,0}) \delta_{(x + 2\xi_1)(x^2 - y^2), 2(y + 2\eta_1) xy} \chi_{Q_\alpha}(\xi,\eta), \end{equation} the variables $(\xi_1, \eta_1)$ now appearing as parameters. Here again we must argue for an estimate of the type $\Sigma_3(\xi_1, \eta_1) \lesssim R^{\gamma+}$, similarly to the above. In general, there are three kinds of solutions of the equation defining the type (ii) curve appearing in this sum: \begin{enumerate}[(i)] \item If $\xi_1 = 0$ and $x = 0$, then an arbitrary pair $(\eta_1, y) \in \mathbb Z^2$ will complete a solution to $(x + 2\xi_1)(x^2 - y^2) = 2(y + 2\eta_1) xy$.
But since $x = \xi - 2\xi_1$, these solutions have $\xi = 0$, so the factor $(1 - \delta_{\xi,0}\delta_{\xi_1,0})$ causes them to be disregarded in the count. \item If $\xi_1\eta_1 \not= 0$, then $(x, y) = (-\frac{2}{3}\xi_1, -\frac{2}{9}\frac{\xi_1^2}{\eta_1})$ also gives a solution on the curve. But since this is just a single point -- $(\xi_1, \eta_1)$ being fixed -- it contributes at most $1$ to the sum. \item Lastly, if $3x + 2\xi_1 \not= 0$, then \[ y_\pm = \frac{\pm\sqrt{x^2(4\xi_1^2 + 4\eta_1^2 + 8\xi_1 x + 3x^2)} - 2\eta_1x}{3x+2\xi_1} \] gives a whole family of solutions depending on $x$. To guarantee that Schmidt's Theorem yields the required bound, we must check that if such a curve degenerates into a straight line, the line has irrational slope. Assuming $y_\pm$ does indeed describe a straight line, we may calculate its slope as $\lim_{x \to \infty} \frac{y_\pm}{x} = \frac{\pm 1}{\sqrt{3}}$, which is irrational. In all other cases Schmidt's theorem delivers the required bound $\Sigma_3(\xi_1, \eta_1) \lesssim R^{\gamma+}$. \end{enumerate} Inserting this into~\eqref{squareContrib}, then into~\eqref{dualityEst} and applying Cauchy-Schwarz to the outer sum over $(\xi_1,\eta_1) \in \mathbb{Z}^2$ we arrive at \[ \langle \psi, \chi_{Q_\alpha} \Sigma_2 \rangle_{L^2_{\xi\eta}} \lesssim R^{\frac{\gamma}{2}+} \|\hat{u}_0\|_{L^2_{\xi\eta}} \|\hat{v}_0\|_{L^2_{\xi\eta}}.
\] Since in the above calculation we have $(\xi_1, \eta_1) \in B_R$ and $(\xi,\eta) \in Q_\alpha$, the variables $(\xi_2, \eta_2) = (\xi, \eta) - (\xi_1, \eta_1)$ are confined to a square $\tilde{Q}_\alpha$ of side length $4R$ containing $Q_\alpha - B_R$, so that in fact we can rely on the stronger estimate \[ \langle \psi, \chi_{Q_\alpha} \Sigma_2 \rangle_{L^2_{\xi\eta}} \lesssim R^{\frac{\gamma}{2}+} \|\hat{u}_0\|_{L^2_{\xi\eta}} \|\chi_{\tilde{Q}_\alpha} \hat{v}_0\|_{L^2_{\xi\eta}} \] and consequently on \[ \|\chi_{Q_\alpha} \Sigma_2 \|_{L^2_{\xi\eta}}^2 \lesssim R^{\gamma+} \|\hat{u}_0\|_{L^2_{\xi\eta}}^2 \|\chi_{\tilde{Q}_\alpha} \hat{v}_0\|_{L^2_{\xi\eta}}^2. \] Since the $\tilde{Q}_\alpha$ can be chosen in such a way that their union covers $\mathbb{R}^2$ exactly four times, we can sum over $\alpha \in \mathbb{Z}^2$ to obtain \[ \|\RN{2}\|_{L^2_{\xi\eta\tau}}^2 \lesssim \sum_{\alpha \in \mathbb{Z}^2} R^{\gamma+} \|\hat{u}_0\|_{L^2_{\xi\eta}}^2 \|\chi_{\tilde{Q}_\alpha} \hat{v}_0\|_{L^2_{\xi\eta}}^2 \lesssim R^{\gamma+} \|\hat{u}_0\|_{L^2_{\xi\eta}}^2 \|\hat{v}_0\|_{L^2_{\xi\eta}}^2, \] which by Plancherel gives the desired bound. \end{proof} In order to make use of this estimate we first apply the transfer principle \[ \|Q(u,v)\|_{L^2_{xyt}} \lesssim \|u\|_{X_{\frac{\gamma}{2}+,b}} \|v\|_{X_{0,b}}, \] which holds for any $b > \frac{1}{2}$, and also interpolate this with the trivial bound \[ \|Q(u,v)\|_{L^2_{xyt}} \le \|u\|_{L^4_t L^\infty_{xy}} \|v\|_{L^4_t L^2_{xy}} \lesssim \|u\|_{X_{1+, \frac{1}{4}}} \|v\|_{X_{0, \frac{1}{4}}}, \] in order to arrive at \begin{equation}\label{bilinXsb} \|Q(u,v)\|_{L^2_{xyt}} \lesssim \|u\|_{X_{\frac{\gamma}{2}+,\frac{1}{2}-}} \|v\|_{X_{0,\frac{1}{2}-}}. \end{equation} Dualizing we obtain \begin{equation}\label{dualBilinXsb} \|Q(u,v)\|_{X_{0,-\frac{1}{2}+}} \lesssim \|u\|_{L^2_{xyt}} \|v\|_{X_{\frac{\gamma}{2}+,\frac{1}{2}-}}.
\end{equation} One additional estimate is needed, which we prove with a second dyadic decomposition. \begin{lemma} Assume that~\eqref{generalNTestimate} holds with a certain $\gamma \in [0,1)$ for the nondegenerate curves of type (i) and (ii). Then \begin{equation}\label{transposedBilin} \|Q(u,v)\|_{L^2_t H^{-\frac{\gamma}{2}-}} \lesssim \|u\|_{X_{0,\frac{1}{2}-}} \|v\|_{X_{0,\frac{1}{2}-}}. \end{equation} \end{lemma} \begin{proof} With $\widehat{Q_0 u}(\xi,\eta,\tau) = \delta_{\xi, 0} \hat{u}(\xi,\eta,\tau)$ we can write \[ Q(u, v) = ((I-Q_0)u)v + (Q_0 u)(I-Q_0)v, \] and the Fourier transform of both contributions vanishes if $\xi_1 = \xi_2 = \xi = 0$, so that~\eqref{bilinXsb} applies to both of them. We give the argument only for the first, which we write as $wv$ with $w = (I - Q_0)u$. Using a dyadic decomposition in the space variables only with Littlewood-Paley projections $P_{\Delta l} = \mathcal{F}_{xy}^{-1}\chi_{\set{|(\xi, \eta)| \sim 2^l}}\mathcal{F}_{xy}$, $l \ge 1$, and $P_{\Delta 0} = \mathcal{F}_{xy}^{-1}\chi_{\set{|(\xi, \eta)| \le 1}}\mathcal{F}_{xy}$ we obtain \[ \|wv\|_{L^2_t H^{-\frac{\gamma}{2}-}} \le \sum_{l \ge 0} 2^{-l(\frac{\gamma}{2} + \varepsilon)} \|P_{\Delta l}(wv)\|_{L^2_{xyt}}. \] Now for a fixed $l \in \mathbb N_0$ we write \begin{equation}\label{squareDecomp} \|P_{\Delta l}(wv)\|^2_{L^2_{xyt}} = \sum_{\alpha, \beta \in \mathbb Z^2} \langle P_{\Delta l}(P_{Q_\alpha^l}(w) \cdot v), P_{\Delta l}(P_{Q_\beta^l}(w) \cdot v) \rangle, \end{equation} where we have introduced a second dyadic decomposition with squares $Q_\alpha^l$ of side length $2^l$, centered at $\alpha 2^l$ with $\alpha \in \mathbb Z^2$. Double-sized squares with the same centers will be denoted $\tilde{Q}_\alpha^l$.
Hence if $(\xi_1, \eta_1) \in Q_\alpha^l$ and $|(\xi, \eta)| \le 2^l$ then we must have $(\xi_2, \eta_2) = (\xi, \eta) - (\xi_1, \eta_1) \in \tilde{Q}_{-\alpha}^l$, so we can estimate \begin{align*} \eqref{squareDecomp} &= \sum_{\alpha, \beta \in \mathbb Z^2} \langle P_{Q_\alpha^l}(w) \cdot P_{\tilde{Q}_{-\alpha}^l}(v), P_{Q_\beta^l}(w) \cdot P_{\tilde{Q}_{-\beta}^l} (v) \rangle \\ &\le \sum_{\alpha, \beta \in \mathbb Z^2} \langle P_{\tilde{Q}_\alpha^l} (w) \cdot P_{\tilde{Q}_{\beta}^l} (\overline{v}), P_{\tilde{Q}_\beta^l}(w) \cdot P_{\tilde{Q}_{\alpha}^l}(\overline{v}) \rangle \\ &\le \sum_{\alpha, \beta \in \mathbb Z^2} \|P_{\tilde{Q}_\alpha^l} (w) \cdot P_{\tilde{Q}_{-\beta}^l} (v)\|_{L^2_{xyt}} \|P_{\tilde{Q}_\beta^l} (w) \cdot P_{\tilde{Q}_{-\alpha}^l} (v)\|_{L^2_{xyt}} \\ &\le \sum_{\alpha, \beta \in \mathbb Z^2} \|P_{\tilde{Q}_\alpha^l} (w) \cdot P_{\tilde{Q}_{-\beta}^l} (v)\|_{L^2_{xyt}}^2 \\ &\lesssim 2^{l(\gamma + \frac{\varepsilon}{2})} \sum_{\alpha, \beta \in \mathbb Z^2} \|P_{\tilde{Q}_\alpha^l} w\|_{X_{0,\frac{1}{2}-}}^2 \| P_{\tilde{Q}_{-\beta}^l} v\|_{X_{0,\frac{1}{2}-}}^2 \\ &\lesssim 2^{l(\gamma + \frac{\varepsilon}{2})} \|w\|_{X_{0,\frac{1}{2}-}}^2 \|v\|_{X_{0,\frac{1}{2}-}}^2 \lesssim 2^{l(\gamma + \frac{\varepsilon}{2})} \|u\|_{X_{0,\frac{1}{2}-}}^2 \|v\|_{X_{0,\frac{1}{2}-}}^2. \end{align*} Here we have used Cauchy-Schwarz twice, the $X_{s,b}$-estimate~\eqref{bilinXsb} and the almost orthogonality of the sequences $(P_{\tilde{Q}_\alpha^l} w)_{\alpha \in \mathbb Z^2}$ and $(P_{\tilde{Q}_{-\beta}^l} v)_{\beta \in \mathbb Z^2}$. Altogether \begin{align*} \|wv\|_{L^2_t H^{-\frac{\gamma}{2}-}} &\lesssim \sum_{l \ge 0} 2^{-l(\frac{\gamma}{2} + \varepsilon)} 2^{l(\frac{\gamma}{2} + \frac{\varepsilon}{2})} \|u\|_{X_{0,\frac{1}{2}-}} \|v\|_{X_{0,\frac{1}{2}-}} \\ &\lesssim \|u\|_{X_{0,\frac{1}{2}-}} \|v\|_{X_{0,\frac{1}{2}-}} \end{align*} as desired.
\end{proof} Now we are prepared to show the proposition that, when inserted into the general framework of Bourgain's $X_{s,b}$-spaces, will result in a well-posedness theorem. \begin{prop}\label{generalPerEstimate} Let $\gamma \in [0, 1)$ be such that~\eqref{generalNTestimate} holds for nondegenerate curves of type (i) and (ii), and let $s > \frac{\gamma - 1}{2}$. Then for all $u, v \in \dot{X}_{s, \frac{1}{2}}$ with support in $\mathbb R^2 \times [-\delta, \delta]$ there exists an $\varepsilon > 0$ such that \begin{align*} &\|B(u,v)\|_{\dot{X}_{s,-\frac{1}{2}}} \lesssim \delta^\varepsilon \|u\|_{\dot{X}_{s,\frac{1}{2}}} \|v\|_{\dot{X}_{s,\frac{1}{2}}} &&\quad\text{and}\\ &\|B(u,v)\|_{\dot{Y}^s} \lesssim \delta^\varepsilon \|u\|_{\dot{X}_{s,\frac{1}{2}}} \|v\|_{\dot{X}_{s,\frac{1}{2}}} &&\quad\text{hold.} \end{align*} \end{prop} \begin{proof} Since our data is of mean zero we may use $|(\xi_i, \eta_i)| \sim \langle (\xi_i, \eta_i) \rangle$ for $i \in \{1,2\}$, and will do so freely without further mention. We also assume $s < 0$, because $\gamma < 1$. As in the nonperiodic case the proof is split into cases where a single one of the modulations \eqref{modulations} is maximal. \quad \\ \emph{Case 1:} $\langle \tau - \varphi(\xi, \eta) \rangle$ is maximal. Without loss of generality we may assume that $|(\xi_1, \eta_1)| \gtrsim |(\xi_2, \eta_2)|$. Making use of~\eqref{resonance} with $\theta = \frac{1}{2}$ we can estimate \begin{align*} \|B(u, v)\|_{\dot{X}_{s, -\frac{1}{2}}} &\lesssim \|Q(I^{-\frac{1}{2}}u,I^{-\frac{1}{2}}v)\|_{\dot{X}_{\frac{1}{2}+s,0}} \lesssim \|Q(I^s u, I^{-\frac{1}{2}} v)\|_{L^2_{xyt}}\\ &\lesssim \|u\|_{\dot{X}_{s,\frac{1}{2}-}} \|v\|_{\dot{X}_{\frac{\gamma - 1}{2}+,\frac{1}{2}-}} \lesssim \delta^\varepsilon \|u\|_{\dot{X}_{s,\frac{1}{2}}} \|v\|_{\dot{X}_{s,\frac{1}{2}}}. \end{align*} In the penultimate step we used our bilinear estimate~\eqref{bilinXsb}. The last step depends on the support condition on $u$ and $v$.
\quad \\ \emph{Case 2:} $\langle \tau_1 - \varphi(\xi_1, \eta_1) \rangle$ is maximal. Again we begin this case by using~\eqref{resonance} with $\theta = \frac{1}{2}$, though now we must use the modulation on the first factor to eliminate the resonance function: \begin{equation}\label{startingEst} \|B(u, v)\|_{\dot{X}_{s, -\frac{1}{2}}} \lesssim \|B(u, v)\|_{\dot{X}_{s, -\frac{1}{2}+}} \lesssim \|Q(I^{-\frac{1}{2}}\Lambda^{\frac{1}{2}}u, I^{-\frac{1}{2}}v)\|_{\dot{X}_{\frac{1}{2}+s,-\frac{1}{2}+}}. \end{equation} The first bound may seem trivial and unnecessary, but we will come back to it in bounding the $\dot{Y}^s$-norm. Depending on which factor the derivatives on the product fall, we must distinguish two cases:\\ \emph{Subcase 2.1:} $|(\xi_1, \eta_1)| \gtrsim |(\xi_2, \eta_2)|$. Here the derivatives can only fall on the first factor, so we use~\eqref{dualBilinXsb} putting $u$ into $L^2_{xyt}$ and lastly using the support condition again: \[ \eqref{startingEst} \lesssim \|Q(I^{s}\Lambda^{\frac{1}{2}}u, I^{-\frac{1}{2}}v)\|_{\dot{X}_{0,-\frac{1}{2}+}} \lesssim \|u\|_{\dot{X}_{s,\frac{1}{2}}} \|v\|_{\dot{X}_{\frac{\gamma - 1}{2}+,\frac{1}{2}-}} \lesssim \delta^\varepsilon \|u\|_{\dot{X}_{s,\frac{1}{2}}} \|v\|_{\dot{X}_{s,\frac{1}{2}}}. \] \emph{Subcase 2.2:} $|(\xi_1, \eta_1)| \lesssim |(\xi_2, \eta_2)|$. This time we use the dual of~\eqref{transposedBilin} putting the first factor in $L^2_t H^{\frac{\gamma}{2}+}$: \[ \eqref{startingEst} \lesssim \|Q(I^{-\frac{1}{2}}\Lambda^{\frac{1}{2}}u, I^{s}v)\|_{\dot{X}_{0,-\frac{1}{2}+}} \lesssim \delta^\varepsilon \|u\|_{\dot{X}_{s,\frac{1}{2}}} \|v\|_{\dot{X}_{s,\frac{1}{2}}}. \] The case where $\langle \tau_2 - \varphi(\xi_2, \eta_2) \rangle$ is maximal need not be considered by symmetry. Next we can deal with the $\dot{Y}^s$-norm estimate.
Here again we consider two cases, where either the modulation of the product or of the first factor is maximal.\\ \quad \\ \emph{Case 1:} $\langle \tau - \varphi(\xi, \eta) \rangle$ is maximal. Using $\theta = 1-$ in~\eqref{resonance} we can make nearly complete use of the modulation. Discarding the derivative gain (and remainder of the modulation) on the product and after applying Cauchy-Schwarz twice we arrive at the desired bound \begin{align*} \|B(u,v)\|_{\dot{Y}^s} &\lesssim \|I^{s+}\Lambda^{0-} Q(I^{-1+}u, I^{-1+}v)\|_{L^2_{\xi\eta}L^1_\tau} \lesssim \|Q(I^{-1+}u, I^{-1+}v)\|_{L^2_{\xi\eta}L^1_\tau}\\ &\lesssim \|u\|_{\dot{X}_{s,\frac{1}{2}-}} \|v\|_{\dot{X}_{s,\frac{1}{2}-}} \lesssim \delta^\varepsilon \|u\|_{\dot{X}_{s,\frac{1}{2}}} \|v\|_{\dot{X}_{s,\frac{1}{2}}}. \end{align*} \emph{Case 2:} $\langle \tau_1 - \varphi(\xi_1, \eta_1) \rangle$ is maximal. Here we may use just over half of the modulation to apply Cauchy-Schwarz in the $\tau$ variable. This results in the same situation as after the first inequality in~\eqref{startingEst}. The case where $\langle \tau_2 - \varphi(\xi_2, \eta_2) \rangle$ is maximal again need not be considered. \end{proof} Thus the quality of our well-posedness result depends entirely on the exponent in the number theoretic estimate~\eqref{generalNTestimate} that we use. The previously mentioned result due to Schmidt~\cite{Schmidt}*{Theorem~1} gives \begin{kor} In Proposition~\ref{generalPerEstimate} one can choose $\gamma = \frac{3}{5}$ and thus Theorem~\ref{NVper} holds. \end{kor} \section{Open questions} Unfortunately there are several questions that we cannot answer; they are immediately connected with our results here: \begin{itemize} \item[(1)] Optimality in the nonperiodic case: Is the Cauchy problem for (NV) locally well-posed in $H^s(\mathbb R^2)$ for $s \in [-1,-\frac34]$?
For KdV on the real line this gap was closed by the celebrated global $H^{-1}(\mathbb R)$-result of Killip and Vișan~\cite{KV}, but they had to go beyond iterative methods because KdV in $H^s(\mathbb R)$ is ill-posed for $s<-\frac34$ in the $C^0$-uniform sense by~\cite{KPV01}*{Theorem~1.4}. The problem with (NV) is possibly on a much lower level, since our attempt to prove $C^2$-illposedness below $H^{-\frac34}(\mathbb R^2)$ failed. Schottdorf's $L^2(\mathbb R^2)$-result for (mNV) in combination with the Miura-type map suggests, in a sense, that one should be able to take the step down to $H^{-1}(\mathbb R^2)$ by the contraction mapping principle. \item[(2)] Optimality in the periodic case: Is the initial value problem for (NV) locally well-posed in $H^s_0(\mathbb T^2)$ for $s \in [-\frac12, -\frac15]$? In our proof we inserted the estimate $$\# (\mathbb Z^2 \cap H \cap Q_N) \le c N^{\frac35+}$$ for the number of lattice points on a nondegenerate curve $H$ of type (i) or (ii) in a square $Q_N$ of size $N$. This estimate due to Schmidt~\cite{Schmidt} has the advantage of being independent of the shape of the curves. There are some estimates in the number theoretic literature with smaller exponents (e.g.~\cites{BP, Huxley}), which are valid for general sufficiently smooth curves, but it seems to be quite cumbersome to check whether they give the necessary \emph{uniform} bounds. Moreover, to get anything better than $\lesssim N^{\frac{4}{15}}$ seems to rely on specific properties of the family of curves in our considerations. Observe that an estimate $\lesssim N^{0+}$ for the number of lattice points would imply LWP in $H^s_0(\mathbb T^2)$ for $s > - \frac12$. Below $- \frac12$ there is $C^2$-illposedness by Bourgain's counterexample for KdV in the periodic case, see~\cite{Bmeasures}. \item[(3)] Can our result in the periodic case (valid for data of mean zero) be generalized to data of arbitrary mean?
For KdV the reduction of the general to the mean zero case~\cite{B1}*{p.~219} is trivial in the sense that it leaves the $L^4$-estimate and the resonance function unchanged. For (NV) this reduction produces the additional linear term $$3\phi_0(\partial^2\overline{\partial}^{-1}+\overline{\partial}^2\partial^{-1})u, \qquad \mbox{where}\qquad \phi_0=\frac{1}{4\pi^2}\int_{\mathbb T^2}u_0(x,y)dxdy,$$ which changes the phase function into $$\widetilde{\varphi}(\xi,\eta)= \varphi (\xi,\eta)(1+\frac{3\phi_0}{\xi^2+\eta^2}).$$ With $E=3\phi_0$ this is precisely the situation of the ``nonzero energy'' (NV) analyzed in~\cites{KMI, KMII} in the nonperiodic case. The resonance function is then disturbed by the additional term and the exact cancellation of the Fourier multiplier is destroyed. \end{itemize}
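The lattice-point counts entering question (2) are easy to experiment with numerically. The brute-force sketch below uses the cubic curve $x^3+y^3=m$ as a stand-in family (our own toy choice, not the specific curves arising from the resonance analysis) and counts integer points in the square $[1,N]^2$:

```python
# Toy illustration: count lattice points on the level sets x^3 + y^3 = m
# inside the square [1, N]^2, and find the worst case over m.
# (The curve family here is our own stand-in example.)

def count_points(m, N):
    """Number of integer points (x, y) with 1 <= x, y <= N and x^3 + y^3 = m."""
    return sum(1 for x in range(1, N + 1) for y in range(1, N + 1)
               if x**3 + y**3 == m)

def worst_case(N):
    """Maximum number of lattice points on any level set x^3 + y^3 = m in [1, N]^2."""
    sums = {}
    for x in range(1, N + 1):
        for y in range(1, N + 1):
            s = x**3 + y**3
            sums[s] = sums.get(s, 0) + 1
    return max(sums.values())

# 1729 = 1^3 + 12^3 = 9^3 + 10^3 (the taxicab number) gives four points for N = 12,
# which is of the same order as N^(3/5) ~ 4.4 in this tiny range.
print(count_points(1729, 12), worst_case(12))
```

Of course, such experiments say nothing about the uniformity in the shape of the curve that the proof actually requires; they only illustrate the quantity being bounded.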
\section{Introduction} \label{sec1} In the present work we investigate how ``large'' the set of differences of primes is. In 1905, Maillet conjectured in \cite{mai} that the set of differences of primes has the ``largest'' possible form, containing all even numbers. \begin{conjecture} \label{conjecture1} Every even number is the difference of two primes. \end{conjecture} Actually, before Maillet's conjecture, there were two stronger forms of the conjecture. One was due to Kronecker \cite{kro} in 1901: \begin{conjecture} \label{conjecture2} Every even number can be expressed in infinitely many ways as the difference of two primes. \end{conjecture} The other was formulated by Polignac \cite{pol} in 1849 and has the most general form: \begin{conjecture} \label{conjecture3} Every even number can be written in infinitely many ways as the difference of two consecutive primes. \end{conjecture} It is easy to see that the twin prime conjecture concerns the lower bound for the set of differences of primes in Conjecture \ref{conjecture2} or \ref{conjecture3}. Recently, based on the GPY sieve method \cite{gol}, Zhang \cite{zha} made a breakthrough and proved that there exists an even number not more than $7\times10^7$ which can be expressed in infinitely many ways as the difference of two primes. Soon after, Maynard \cite{may} and Tao reduced the bound on such an even number to 600. The best known bound now is 246; see \cite{pol}. Assuming that the primes have level of distribution $\theta$ for every $\theta<1$, the best known bound is 12, proved by Maynard \cite{may}. Here, for some given $\theta>0$, we say the primes have `level of distribution $\theta$' if, for any $W>0$, we have \begin{align} \sum_{q\le x^\theta}\max_{(a,q)=1}\Big|\pi(x;q,a)-\frac{\pi(x)}{\phi(q)}\Big|\ll_W\frac{x}{(\log x)^W}. \end{align} Another important aspect of these conjectures is how ``large'' the set of differences of primes is.
In combinatorial number theory, as well as in dynamics, there are various notions of ``large'' sets of integers. Some familiar notions are those of sets of positive (upper) density, syndetic sets, thick sets, return-time sets, sets of recurrence, Bohr sets, Nil$_d$ Bohr$_0$-sets, piecewise-Bohr sets and strongly piecewise-Bohr sets. We will give some basic definitions and elementary considerations of these notions in section \ref{sec2}. Let $D$ denote the set of even numbers that can be expressed in infinitely many ways as the difference of two primes. Based on the recent breakthroughs on the twin prime conjecture, Pintz proved in \cite{pin} that \begin{theorem} There exists an ineffective constant $C'$ such that every interval of type $[M, M+C']$ contains at least one even number that can be expressed in infinitely many ways as the difference of two primes; that is to say, $D$ is a syndetic set. \end{theorem} Somewhat later, using a different method, Granville, Kane, Koukoulopoulos and Lemke Oliver obtained the same result in \cite{gra}. \begin{definition} If $S$ is a non-empty subset of $\mathbb{N}$, define the difference set $\Delta(S)$ by \begin{align} \Delta(S)=(S-S)\cap\mathbb{N}=\{b-a:a\in S, b\in S, b> a\}. \end{align} If $A$ is a subset of $\mathbb{N}$, $A$ is a $\Delta_{r}^*$-set if $A\cap\Delta(S)\neq\emptyset$ for every subset $S$ of $\mathbb{N}$ with $|S|=r$; $A$ is a $\Delta^*$-set if $A\cap\Delta(S)\neq\emptyset$ for every infinite subset $S$ of $\mathbb{N}$. \end{definition} In this paper, we prove that $D$ is a ``larger'' set than a syndetic set. \begin{theorem} \label{theoremhw} Let $D$=\{$d$: $d$ can be expressed in infinitely many ways as the difference of two primes \}; then $D$ is a $\Delta_{r}^*$-set for $r\ge721$. If we assume that the primes have level of distribution $\theta$ for every $\theta<1$, then $D$ is a $\Delta_{r}^*$-set for $r\ge19$.
\end{theorem} Actually, we obtain the following inequality for the lower bound on $r$ in Theorem \ref{theoremhw}: \begin{align} r\ge C\prod_{p\le C}\big(1-\frac1p\big)^{-1}, \end{align} where $C$ is the lower bound on the length of an admissible $k$-tuple of integers in the Zhang-Maynard-Tao theorem; see section \ref{sec3} below. It is proved by Green and Tao in \cite{gre} that the primes contain arbitrarily long arithmetic progressions, but we know little about the distribution of the common differences of these arithmetic progressions. Let $\mathbb{P}$ denote the set of all primes and \begin{align} C_d(\mathbb{P})=\{n\in\mathbb{N}:\mathbb{P}\cap(\mathbb{P}-n) \cap(\mathbb{P}-2n)\cap\cdots\cap(\mathbb{P}-dn)\neq\emptyset\}. \end{align} Huang, Shao and Ye asked the following question in their work \cite{hua}: \begin{question} \label{que} Is $C_d(\mathbb{P})$ a Nil$_d$ Bohr$_0$-set~? \end{question} Taking $d=1$, we can see that Question \ref{que} actually asks whether $D$ is a Bohr$_0$-set. From Theorem \ref{theoremhw}, we have the following corollary for this question: \begin{corollary} \label{cor} Let $D$=\{$d$: $d$ can be expressed in infinitely many ways as the difference of two primes \}; then $D$ is a strongly piecewise-$\text{Bohr}_0$ set. \end{corollary} \section{Some notations} \label{sec2} We begin with basic definitions and elementary considerations for some of these notions. A set $S\subset \mathbb{N}$ is called \emph{syndetic} if for some finite subset $F\subset\mathbb{N}$ \begin{align} \bigcup_{n\in F}\Big(S-n\Big)=\mathbb{N}, \end{align} where $S-n=\{m\in\mathbb{N}: m+n\in S\}$. In other words, $S$ is a syndetic set if it has bounded gaps, which means that there is an integer $k$ such that $\{a, a+1, a+2, \cdots, a+k\}\cap S\neq\emptyset$ for any $a\in\mathbb{N}$. A set $A\subset \mathbb{N}$ is called a \emph{thick} set if it contains arbitrarily long intervals.
That is, for every $m\in\mathbb{N}$, there is some $n\in\mathbb{N}$ such that $\{n, n+1, n+2, \cdots, n+m\}\subset A$. Thus if $A$ is a thick set, it must contain a subset of the form \begin{align} \bigcup_{m=1}^\infty\Big\{a_m, a_m+1, a_m+2, \cdots, a_m+m-1\Big\} \end{align} for some sequence of integers $a_m\rightarrow\infty$. It is easy to see that a set $S$ is syndetic $\Leftrightarrow\mathbb{N}\backslash S$ is not thick $\Leftrightarrow S\cap A\ne\emptyset$ for any thick set $A$. Syndetic sets and thick sets are fundamental concepts in ergodic theory; for details, one may see Furstenberg \cite{fur}. \begin{definition} A subset $A\subset\mathbb{N}$ is a Bohr set if there exists a trigonometric polynomial $\psi(t)=\sum_{k=1}^mc_ke^{i\lambda_kt}$, with the $\lambda_k$ real numbers, such that the set \begin{align} A'=\{n\in\mathbb{N}:\text{Re}\psi(n)>0\} \end{align} is non-empty and $A\supset A'$. When $\psi(0)>0$ we say $A$ is a $\text{Bohr}_0$ set. \end{definition} As a consequence of the almost periodicity of trigonometric polynomials we can see that a Bohr set is syndetic. We may also define Bohr sets and $\text{Bohr}_0$ sets in an alternative way: a subset $A\subset\mathbb{N}$ is a Bohr set if there exist $m\in\mathbb{N}$, $\alpha\in\mathbb{T}^m$, and an open set $U\subset\mathbb{T}^m$ such that $\{n\in \mathbb{N}:n\alpha\in U\}$ is contained in $A$; the set $A$ is a $\text{Bohr}_0$ set if additionally $0\in U$. Bohr sets are fundamentally abelian in nature. Nowadays it has become apparent that higher order non-abelian Fourier analysis plays an important role both in combinatorial number theory and ergodic theory. Related to this, a higher-order version of Bohr$_0$ sets, namely Nil$_d$ Bohr$_0$-sets, was introduced in \cite{hos}.
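Returning briefly to the difference-set notation $\Delta(S)$ defined in the introduction, a small computational illustration (the set $S$ below is our own toy example):

```python
def difference_set(S):
    """Delta(S) = {b - a : a, b in S, b > a}."""
    return {b - a for a in S for b in S if b > a}

# For S = {1, 3, 7, 10} we get Delta(S) = {2, 3, 4, 6, 7, 9};
# a Delta_4^*-set must, by definition, intersect Delta(S) for *every*
# 4-element set S, so in particular this 6-element set of differences.
print(sorted(difference_set({1, 3, 7, 10})))
```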
\begin{definition} A subset $A\subset\mathbb{N}$ is a Nil$_d$ Bohr$_0$-set if there exist a $d$-step nilsystem $(X, \mu, T)$, $x_0\in X$, and an open set $U\subset X$ containing $x_0$ such that \begin{align} \{n\in\mathbb{N}: T^nx_0\in U\} \end{align} is contained in $A$. \end{definition} Bergelson, Furstenberg and Weiss introduced the notion of piecewise-Bohr sets in \cite{ber}. They defined a set $A$ to be a \emph{piecewise-Bohr} set if $A=S\cap Q$, where $S$ is a Bohr set and $Q$ is a thick set. This notion of piecewise-Bohr set is very simple but weak; a piecewise-Bohr set defined in this manner is not even necessarily syndetic. Host and Kra then introduced a stronger notion of piecewise-Bohr set, called \emph{strongly piecewise-Bohr} in \cite{hos}. \begin{definition} The set $A\subset\mathbb{N}$ is said to be strongly piecewise-Bohr if for every sequence ($J_k: k\ge1$) of intervals whose lengths $|J_k|$ tend to $\infty$, there exists a sequence ($I_j: j\ge1$) of intervals satisfying the following. \begin{itemize} \item[(i)\ \ ] For each $j\ge1$, there exists some $k=k(j)$ such that the interval $I_j$ is contained in $J_k$. \item[(ii)~]The lengths $|I_j|$ tend to infinity. \item[(iii)] There exists a Bohr set $B$ such that $B\cap I_j\subset A$ for every $j\ge1$. \end{itemize} \end{definition} Similarly we may define \emph{strongly piecewise-$\text{Bohr}_0$} sets. With this definition, both strongly piecewise-Bohr sets and strongly piecewise-$\text{Bohr}_0$ sets are syndetic. \section{Zhang-Maynard-Tao's theorem} \label{sec3} Let $k$ be a positive integer; we say a given $k$-tuple of integers $H=\{h_1, h_2, \cdots, h_k\}$ is \emph{admissible} if \begin{align} \Big|\Big\{n~\text{mod}~p: \prod_{i=1}^k(n+h_i)\equiv0~\text{mod}~p\Big\}\Big|<p, ~\text{for every prime}~p. \end{align} In other words, $H$ is admissible if and only if, for any prime $p$, the $h_i$'s never occupy all of the residue classes modulo $p$.
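This condition can be tested directly. A minimal computational sketch (function names are ours), which checks only the primes $p\le k$:

```python
def primes_upto(k):
    """All primes p <= k by trial division (k is small here)."""
    return [p for p in range(2, k + 1)
            if all(p % d for d in range(2, int(p**0.5) + 1))]

def is_admissible(H):
    """True iff H avoids at least one residue class modulo every prime p <= len(H)."""
    k = len(H)
    for p in primes_upto(k):
        if len({h % p for h in H}) == p:   # all residue classes occupied
            return False
    return True

# {0, 2, 6} is admissible (the pattern of prime triplets like 5, 7, 11),
# while {0, 2, 4} is not: modulo 3 it occupies all residue classes.
print(is_admissible([0, 2, 6]), is_admissible([0, 2, 4]))
```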
This is immediately true for all primes $p>k$; so to test this condition for a $k$-tuple of integers $H$ we need only examine the small primes $p\le k$. We observe that both Zhang's work and the work of Maynard and Tao follow from a result of the following form. \begin{theorem} \label{theoremz} Let $H=\{h_1, h_2, \cdots, h_k\}$ be a $k$-tuple of integers. If $H$ is admissible and $k\ge C$ for some given constant $C>0$, then there are infinitely many integers $n$ such that at least two of the numbers $n+h_1, n+h_2, \cdots, n+h_k$ are prime. \end{theorem} Zhang's Theorem 1 in \cite{zha} shows that $C=3.5\times10^6$ is available in Theorem \ref{theoremz}, from which he obtained that there are infinitely many pairs of primes with difference not more than $7\times10^7$. To obtain the Maynard-Tao theorem, they proved that the much smaller value $C=105$ can be used in Theorem \ref{theoremz}. In \cite{pol} it is proved that $C=50$ is available. If we assume that the primes have level of distribution $\theta$ for every $\theta<1$, Maynard proved that $C$ may take the value 5 in the theorem, which improves the bound on the limit inferior of the difference of primes to 12. \section{Proof of main results} In this section we prove Theorem \ref{theoremhw} and Corollary \ref{cor}. We begin with the observation that if $A$ is a subset of $\mathbb{N}$ which has enough elements, then $A$ contains an admissible $k$-tuple of integers $H=\{h_1, \cdots, h_k\}$. To find an admissible $k$-tuple of integers, we only need to consider the primes $\mathbb{P}_k=\{p: p\le k\}$. For any prime $p_1\in\mathbb{P}_k$, we have that \begin{align} |A|=\sum_{a~mod~p_1}\sum_{{g\in A}\atop{g\equiv a~mod~p_1}}1, \end{align} so there exists an integer $b_{p_1}$ such that \begin{align} \label{mod} |\{g\in A: g\equiv b_{p_1}~\text{mod}~p_1\}|\le |A|/p_1. \end{align} Let \begin{align} A_1=A\setminus \{g\in A: g\equiv b_{p_1}~\text{mod}~p_1\}.
\end{align} It is easy to see that \begin{align} |A_1|\ge|A|\Big(1-\frac1{p_1}\Big). \end{align} Then for another prime $p_2\in\mathbb{P}_k$ with $p_2\neq p_1$, we can also choose a $b_{p_2}$ so that \begin{align} |\{g\in A_1: g\equiv b_{p_2}~\text{mod}~p_2\}|\le |A_1|/p_2. \end{align} In the same way as for $A$ we obtain a set \begin{align} A_2=A_1\setminus \{g\in A_1: g\equiv b_{p_2}~\text{mod}~p_2\} \end{align} with \begin{align} |A_2|\ge|A_1|\Big(1-\frac1{p_2}\Big). \end{align} Repeating this process one prime at a time, with $p$ varying over the elements of $\mathbb{P}_k$, we eventually obtain a set \begin{align} \label{2} A_{\pi(k)}=A\setminus\bigcup_{p\in\mathbb{P}_k}\Big\{g: g\equiv b_p \mod p\Big\} \end{align} after $\pi(k)$ steps, and the cardinality of this set is \begin{align} \label{3} |A_{\pi(k)}|\ge|A|\prod\limits_{p\in\mathbb{P}_k}\Big(1-\frac1p\Big). \end{align} Here $\pi(k)$ denotes the number of primes not more than $k$. From (\ref{2}) we have that, for any $p\le k$, the elements of $A_{\pi(k)}$ never occupy all of the residue classes modulo $p$. Thus if \begin{align} \label{gek} |A_{\pi(k)}|\ge k, \end{align} any $k$-tuple of integers $H\subset A_{\pi(k)}$ is admissible. By (\ref{3}), to meet the condition (\ref{gek}), we just need to ensure that $A$ is large enough that \begin{align} |A|\ge k\prod\limits_{p\le k}\Big(1-\frac1p\Big)^{-1}. \end{align} Thus any large enough subset $A$ of $\mathbb{N}$, with at least $k\prod_{p\le k}\big(1-\frac1p\big)^{-1}$ elements, contains an admissible $k$-tuple of integers $H=\{h_1, \cdots, h_k\}$. Now Theorem \ref{theoremz} tells us that, taking the integer $k\ge C$, there are infinitely many integers $n$ such that at least two of the numbers $n+h_1, n+h_2, \cdots, n+h_k$ are prime.
So there must be some integers $h_i,~h_j\in H\subset A$, $h_i>h_j$, such that $h_i-h_j$ can be expressed in infinitely many ways as the difference of two primes, that is to say \begin{align} \Delta(A)\cap D\neq\emptyset. \end{align} From the discussion above we may draw the following conclusion. For any subset $A$ of $\mathbb{N}$ with at least $C\prod_{p\le C}\big(1-\frac1p\big)^{-1}$ elements, we have that \begin{align} \Delta(A)\cap D\neq\emptyset. \end{align} That is to say, $D$ is a $\Delta_r^*$-set for any $r$ with \begin{align} \label{rge} r\ge C\prod_{p\le C}\big(1-\frac1p\big)^{-1}. \end{align} Here the constant $C$ is given in Theorem \ref{theoremz}. Unconditionally, the smallest possible value of $C$ that we can take now is 50; see \cite{pol}. So, from (\ref{rge}), we have $r\ge720.96$, and $D$ is a $\Delta_{721}^*$-set. If we assume that the primes have level of distribution $\theta$ for every $\theta<1$, Maynard proved in \cite{may} that $C=5$ is available. Thus under this condition, we have $r\ge18.75$ in (\ref{rge}), and then $D$ is a $\Delta_{19}^*$-set. To prove the corollary, we need the following lemma. \begin{lemma} \label{lem} Every $\Delta^*$-set is a strongly piecewise-Bohr$_0$ set. \end{lemma} This lemma is Theorem 2.8 in Host and Kra's work \cite{hos}. We have proved above that $D$ is a $\Delta_r^*$-set, so it is obviously a $\Delta^*$-set. Now it is easy to see that Corollary \ref{cor} is a direct consequence of Lemma \ref{lem}.
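The residue-class removal used in the proof above is completely constructive. A minimal sketch (helper names ours) that extracts an admissible $k$-tuple from any sufficiently large set $A$:

```python
def extract_admissible(A, k):
    """Greedily delete, for each prime p <= k, one least-popular residue class
    mod p from A; any k elements of what remains form an admissible tuple."""
    primes = [p for p in range(2, k + 1)
              if all(p % d for d in range(2, int(p**0.5) + 1))]
    A = set(A)
    for p in primes:
        counts = {r: 0 for r in range(p)}
        for g in A:
            counts[g % p] += 1
        b = min(counts, key=counts.get)        # a class of size <= |A| / p
        A = {g for g in A if g % p != b}
    return sorted(A)[:k]                       # any k survivors are admissible

# With |A| >= k * prod_{p <= k} (1 - 1/p)^{-1}, at least k elements survive.
H = extract_admissible(range(1, 100), 5)
print(H)
```

By construction every surviving element avoids the deleted class modulo each $p\le k$, so the returned tuple occupies fewer than $p$ residue classes for every such prime.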
\section{Introduction} The purpose of this contribution is to show some transformations that simplify Hamiltonians such that they may be treated in an analytic form. We will study various problems, starting with the ion-laser interaction, where we have shown that there exists a time dependent transformation that linearizes the Hamiltonian with no approximations \cite{Moya-1}. We follow with the interaction of a slow atom with a quantized field, the main result of the paper. In this case the atom is affected by the mode shape of the field \cite{Larson}, and we show that the interaction may be simplified as we pass from a three-body system to a two-body effective interaction, again with no approximations. Finally we analyze the case of the master equation (ME) that describes losses for an anharmonic oscillator \cite{Milburn}; one can transform such a ME to obtain a simpler equation where all the superoperators commute, allowing its simple integration \cite{Moya-2}. \section{Ion-laser interaction} We consider the Hamiltonian of a single ion (with unit mass) trapped in a harmonic potential, in interaction with laser light in the (optical) rotating wave approximation \cite{expl}, \begin{equation} \hat{H} = \frac{1}{2} \left[ \hat{p}^2 + \nu^2(t) \hat{x}^2 \right] + \hbar\omega_{21}\hat{A}_{22} +\hbar\lambda(t)[ E^{(-)}(\hat{x},t)\hat{A}_{12}+ H.c.], \label{1} \end{equation} where the $\hat{A}_{ab}$ are the electronic (two-level) flip operators for the $|b\rangle \rightarrow |a\rangle$ transition, $\omega_{21}$ is the transition frequency, $\nu(t)$ is the trap (time dependent) frequency, $\lambda$ is the electronic coupling matrix element, and $E^{(-)}(\hat{x},t)$ is the negative part of the classical electric field of the driving field. The operators $\hat{x}$ and $\hat{p}$ are the position and momentum of the centre of mass of the ion.
We assume the ion is driven by a laser field $ E^{(-)}(\hat{x},t)$ \begin{equation} E^{(-)}(\hat{x},t)= E_{0}e^{-i(k\hat{x}-\omega t)}. \label{2} \end{equation} We want to solve the Schr\"odinger equation \begin{equation} i\hbar \frac{\partial |\xi(t)\rangle}{\partial t}= \hat{H}|\xi(t)\rangle. \end{equation} In order to do this, we make the transformation $|\phi\rangle = \hat{T}(t) |\xi\rangle$, with \cite{fer} \begin{equation} \hat{T}(t)=e^{i\frac{\ln\{\rho(t)\sqrt{\nu_0}\}}{2\hbar}(\hat{x}\hat{p}+\hat{p}\hat{x})} e^{-i\frac{\dot{\rho}(t)}{2\hbar\rho(t)}\hat{x}^2} \label{trans1} \end{equation} with $\rho(t)$ a function that obeys the Ermakov equation \begin{equation} \ddot{\rho}+\nu^{2}(t)\rho=\frac{1}{\rho^3}. \label{erma} \end{equation} With this choice we obtain the equation for $|\phi\rangle$ \begin{equation} i\hbar\frac{\partial |\phi(t)\rangle}{\partial t}= \hat{\cal H}|\phi(t)\rangle, \label{eq6} \end{equation} with the transformed Hamiltonian given by \begin{equation} \hat{\cal H} = \frac{1}{2\nu_0\rho^2} \left( \hat{p}^2 + \nu^2_0 \hat{x}^2 \right) + \hbar\omega_{21}\hat{A}_{22} +\hbar\Omega(t) [e^{-i(k\hat{x}\rho(t)\sqrt{\nu_0}-\omega t)}\hat{A}_{12}+ H.c.], \label{3} \end{equation} with $\Omega=\lambda E_0$. We consider that $\omega_{21}=\omega+\delta$ where $\delta$ is the so-called detuning. We transform to a frame rotating at $\omega$ by means of the transformation $\hat{T}_{\omega}=e^{-i\omega t \hat{A}_{22}}$ to obtain the Hamiltonian $\hat{\cal H_{\omega}} =\hat{T}_{\omega}\hat{\cal H} \hat{T}_{\omega}^{\dagger}$ ($|\phi\rangle\rightarrow|\phi_{\omega}\rangle$) \begin{equation} \hat{\cal H_{\omega}} =\hbar\tilde{\omega}(t)\left(\hat{n}+\frac{1}{2}\right)+\hbar\delta\hat{A}_{22}+ \hbar\Omega(t) [e^{-i(\hat{a} +\hat{a}^{\dagger})\eta(t)}\hat{A}_{12}+ H.c.].
\label{4} \end{equation} where $\hat{n}=\hat{a}^{\dagger}\hat{a}$ with \begin{equation} \hat{a}=\sqrt{\frac{\nu_0}{2\hbar}}\hat{x}+i\frac{\hat{p}}{\sqrt{2\hbar\nu_0}}, \qquad \hat{a}^{\dagger}=\sqrt{\frac{\nu_0}{2\hbar}}\hat{x}-i\frac{\hat{p}}{\sqrt{2\hbar\nu_0}} \end{equation} the annihilation and creation operators respectively. $\tilde{\omega}(t)= 1/\rho^2$ is the characteristic frequency of the time dependent harmonic oscillator. The time dependent Lamb-Dicke parameter is written as $\eta(t)=\eta_0\rho(t)\sqrt{\nu_0}$ with $\eta_0=k\sqrt{\frac{\hbar}{2\nu_0}}$, where $k$ is the magnitude of the wave vector of the driving field. We will consider now the resonant interaction ($\delta=0$). Passing to a frame rotating at the frequency $\tilde{\omega}(t)$ we may get rid of the harmonic oscillator term in (\ref{4}) to end up with the (time dependent) interaction Hamiltonian \begin{equation} \hat{\cal H}(t) =\hbar\Omega(t) [e^{-i(\hat{a}e^{-i\int\tilde{\omega}(t')dt'} +\hat{a}^{\dagger}e^{i\int\tilde{\omega}(t')dt'})\eta(t)}\hat{A}_{12}+ H.c.]. \label{timedep} \end{equation} \subsection{Linearizing the system} Finally we make the transformation $|\psi\rangle=\hat{R}(t)|\phi\rangle$ with (see \cite{Moya} for the time independent case) \begin{equation} \hat{R}(t)=e^{\frac{\pi}{4}(\hat{A}_{21}-\hat{A}_{12})}e^{-i\frac{\eta(t)}{2}(\hat{a}+\hat{a}^{\dagger})(\hat{A}_{22}-\hat{A}_{11})} \label{trans2} \end{equation} to obtain \begin{equation} i\hbar\frac{\partial |\psi\rangle}{\partial t} =\hbar\left\{\tilde{\omega}(t)\hat{n}+ \Omega (\hat{A}_{22}-\hat{A}_{11}) + \left(\frac{\delta}{2}+i[\hat{a}\beta(t)-\hat{a}^{\dagger}\beta^*(t)]\right)(\hat{A}_{12}+\hat{A}_{21}) \right\}|\psi\rangle, \label{final} \end{equation} with $\beta(t)=\frac{\eta(t)\tilde{\omega}}{2}-i\dot{\eta}(t)/2$, and we have disregarded the term $\tilde{\omega}/2$ as it would add an overall phase.
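The role of the Ermakov equation (\ref{erma}) can be illustrated numerically: for any solution of the classical oscillator $\ddot{x}+\nu^2(t)x=0$ together with any solution $\rho$ of (\ref{erma}), the Ermakov-Lewis invariant $I=\frac{1}{2}[(\rho\dot{x}-\dot{\rho}x)^2+(x/\rho)^2]$ is conserved. A small sketch (the frequency profile $\nu(t)$ is an arbitrary choice of ours):

```python
import math

def nu(t):                      # arbitrary smooth, slowly varying trap frequency
    return 1.0 + 0.1 * math.sin(0.3 * t)

def deriv(state, t):
    """Coupled oscillator + Ermakov system: x'' = -nu^2 x, r'' = -nu^2 r + 1/r^3."""
    x, vx, r, vr = state
    n2 = nu(t) ** 2
    return [vx, -n2 * x, vr, -n2 * r + 1.0 / r**3]

def rk4_step(state, t, dt):
    k1 = deriv(state, t)
    k2 = deriv([s + dt/2 * k for s, k in zip(state, k1)], t + dt/2)
    k3 = deriv([s + dt/2 * k for s, k in zip(state, k2)], t + dt/2)
    k4 = deriv([s + dt * k for s, k in zip(state, k3)], t + dt)
    return [s + dt/6 * (a + 2*b + 2*c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def invariant(state):
    x, vx, r, vr = state
    return 0.5 * ((r * vx - vr * x) ** 2 + (x / r) ** 2)

state, t, dt = [1.0, 0.0, 1.0, 0.0], 0.0, 1e-3
I0 = invariant(state)
for _ in range(10000):          # integrate to t = 10
    state = rk4_step(state, t, dt)
    t += dt
print(abs(invariant(state) - I0))   # conserved up to integrator error
```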
A method to solve JC-like interactions with time dependent parameters has been published by Shen {\it et al.} \cite{shen}. \subsection{Many ions} We generalize the transformation that allows one to linearize the ion-laser Hamiltonian \cite{Moya} in an exact form to two different interactions, namely, the case of many ions in interaction with laser fields \cite{Molmer}, and an ion vibrating in two dimensions interacting with a laser field. This linearization has been shown to be important, for instance, in the implementation of fast gates in ion-laser interactions \cite{Plenio}. This is possible, as more regimes may be studied with such a linearization: the high intensity, low intensity and middle intensity regimes. In particular we will show that in the case of many ions, the transformation produces a term which corresponds to a dipole-dipole interaction. Ions in a linear trap interacting with a laser field may be described by the Hamiltonian \cite{Molmer} \begin{equation} {H}_M = \nu a^{\dagger}a + \frac{\delta}{2}\sum_{j}\sigma_{zj} + \sum_{j} \Omega_j(\sigma_{+j}e^{i\eta_j(a^{\dagger}+a)}+ H.c.) \label{many} \end{equation} where $\nu$ is the frequency of the vibration, $a^{\dagger}$ and $a$ are the creation and annihilation operators of the quantized oscillator, $\delta$ is the detuning between the transition frequency of the internal states of the ion, $\omega_{eg}$, and the laser frequency $\omega_L$, and $\Omega_j$ is the resonant Rabi frequency of the $j$'th ion in the laser field. The exponentials account for the position dependence of the laser field and the recoil of the ions upon absorption of a photon. The positions of the ions $x_j$ are replaced by ladder operators, $kx_j=\eta_j(a^{\dagger}+a)$, where the Lamb-Dicke parameter $\eta_j$ represents the ratio between the ionic excursions within the vibrational ground state wavefunction and the wavelength of the exciting radiation.
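The mechanism behind these linearizations — the $\sigma_z$-conditioned displacement cancels the exponentials $e^{\pm i\eta(\hat{a}^{\dagger}+\hat{a})}$ exactly, not perturbatively — can be verified in a truncated Fock space. The sketch below (helper names and parameter values are ours) conjugates only the drive term $\sigma_{+}e^{i\eta(\hat{a}^{\dagger}+\hat{a})}+\mathrm{H.c.}$, since the free oscillator term is what generates the additional linear terms:

```python
import numpy as np

N, eta = 8, 0.37                       # Fock cutoff and an arbitrary Lamb-Dicke value
a = np.diag(np.sqrt(np.arange(1, N)), 1)
D = eta * (a + a.conj().T)             # Hermitian, so exact matrix exponentials below
sz = np.diag([1.0, -1.0])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])
IN = np.eye(N)

def expm_herm(H, factor):              # exp(factor * H) for Hermitian H via eigh
    w, V = np.linalg.eigh(H)
    return (V * np.exp(factor * w)) @ V.conj().T

eiD = expm_herm(D, 1j)                 # e^{i eta (a + a^dagger)}
U2 = expm_herm(np.kron(sz, D), -0.5j)  # sigma_z-conditioned displacement
c = np.cos(np.pi / 4)
R1 = np.kron(np.array([[c, c], [-c, c]]), IN)   # e^{(pi/4)(s+ - s-)} x identity
R = R1 @ U2

H_drive = np.kron(sp, eiD) + np.kron(sp, eiD).conj().T
H_lin = R @ H_drive @ R.conj().T       # should be sigma_z x identity, exactly

print(np.allclose(H_lin, np.kron(sz, IN)))
```

The cancellation is exact even in the truncated space because every factor is a function of the same Hermitian matrix $D$.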
We can linearize the Hamiltonian (\ref{many}) via the transformation \cite{Moya} \begin{equation} T_{M}=\prod_{j}e^{\frac{\pi}{4}(\sigma_{+j}-\sigma_{-j})}e^{-i\eta_j\sigma_{zj}(a^{\dagger}+a)/2} \end{equation} which gives the Hamiltonian \begin{eqnarray} \nonumber \mathcal{H}_M &=& THT^{\dagger} \\ &=& \nu a^{\dagger}a - \frac{\delta}{2}\sum_{j}\sigma_{xj} +\sum_{j}\Omega_j\sigma_{zj}+i\sum_{j}\frac{\eta_j\nu(a-a^{\dagger})}{2}\sigma_{xj} + \sum_{j,k}\frac{\eta_j\eta_k\nu}{4}\sigma_{xj}\sigma_{xk}. \end{eqnarray} Here $\sigma_{xj}=\sigma_{+j}+\sigma_{-j}$. Note that the transformation, besides linearizing the Hamiltonian, produces an ion-ion (dipole-dipole) interaction. \subsection{Two-dimensional vibration} An ion vibrating in two dimensions has the Hamiltonian \begin{equation} H_{2d}=\nu_x a_x^{\dagger}a_x+\nu_y a_y^{\dagger}a_y + \frac{\delta}{2}\sigma_z +\Omega(\sigma_+e^{i\eta_x( a_x^{\dagger}+a_x)}e^{i\eta_y( a_y^{\dagger}+a_y)}+ H.c.) \label{2d} \end{equation} With the transformation \begin{equation} T_{2d}=e^{\frac{\pi}{4}(\sigma_{+}-\sigma_{-})}e^{-i[\eta_x(a_x^{\dagger}+a_x) +\eta_y(a_y^{\dagger}+a_y)]\sigma_{z}/2} \end{equation} we can cast the Hamiltonian (\ref{2d}) into the linearized Hamiltonian \begin{equation} \mathcal{H}_{2d}=\nu_x a_x^{\dagger}a_x+\nu_y a_y^{\dagger}a_y - \frac{\delta}{2}\sigma_x +\Omega\sigma_z+\frac{i}{2}\sigma_x[\eta_x\nu_x(a_x-a_x^{\dagger})+\eta_y\nu_y(a_y-a_y^{\dagger})] \end{equation} where we have disregarded a constant term. The Hamiltonian above looks like a two-mode quantized-field--atom interaction. \section{Slow atom interacting with a quantized field} Here we show how a three-body problem may be reduced to a two-body problem via a transformation; we treat the problem of a slow atom interacting with a quantized field. Because of the slowness of the atom, the field mode-shape affects the interaction. We can write down the Hamiltonian describing a single two-level atom passing through an electromagnetic field confined to a cavity.
In addition to the Jaynes-Cummings Hamiltonian, we have to add the energy of the free atom and the spatial variation it feels from the cavity; the Hamiltonian reads \begin{equation} H=\frac{p^2}{2}+\omega\hat{n}+\frac{\omega_0}{2}\sigma_{z}+g(x)(\hat{a}\sigma_{+} +\hat{a}^{\dagger}\sigma_{-}). \label{slow} \end{equation} On resonance, we can pass to the interaction picture Hamiltonian \begin{equation} H_I=\frac{p^2}{2}+g(x)(\hat{a}\sigma_{+}+\hat{a}^{\dagger}\sigma_{-}). \end{equation} We use the $2\times 2$ notation for the Pauli spin matrices and write the interaction Hamiltonian as (see \cite{Moya-2}) \begin{equation} \hat{H}_I= \frac{p^2}{2}+g(x)\hat{T}^{\dagger} \left( \begin{array}{cc} 0 & \sqrt{\hat{n}+1} \\ \sqrt{\hat{n}+1}& 0 \end{array} \right) \hat{T} \label{INT} \end{equation} where a non-unitary transformation $\hat{T}$ has been used. We define $\hat{T}$ as \begin{equation}\hat{T}=\left( \begin{array}{cc} 1 & 0 \\ 0 & \hat{V} \end{array} \right). \end{equation} Note that $\hat{T}\hat{T}^{\dagger}=1$ but $\hat{T}^{\dagger}\hat{T}=1-\rho_{g,v}$ with \begin{equation}\rho_{g,v}=\left( \begin{array}{cc} 0 & 0 \\ 0 & |0\rangle\langle 0| \end{array} \right). \end{equation} We can use the definitions above to rewrite (\ref{INT}) as \begin{equation} \hat{H}_I=(\hat{T}^{\dagger}\hat{T}+\rho_{g,v}) \frac{p^2}{2}(\hat{T}^{\dagger}\hat{T}+\rho_{g,v})+g(x)\hat{T}^{\dagger} \left( \begin{array}{cc} 0 & \sqrt{\hat{n}+1} \\ \sqrt{\hat{n}+1}& 0 \end{array} \right) \hat{T} \end{equation} By noting that $\hat{T}\rho_{g,v}=0$ we rewrite the above equation as \begin{equation} \hat{H}_I=\hat{T}^{\dagger} \frac{p^2}{2}\hat{T} +\frac{p^2}{2}\rho_{g,v}+g(x)\hat{T}^{\dagger} \left( \begin{array}{cc} 0 & \sqrt{\hat{n}+1} \\ \sqrt{\hat{n}+1}& 0 \end{array} \right) \hat{T} \end{equation} where we have used that $\rho_{g,v}^2=\rho_{g,v}$.
Finally we factorize the transformation operators in the Hamiltonian above to obtain \begin{equation} \hat{H}_I=\hat{T}^{\dagger}( \frac{p^2}{2} +g(x)\sigma_{x}\sqrt{\hat{n}+1})\hat{T} +\frac{p^2}{2}\rho_{g,v} \end{equation} Note that $[\hat{T}^{\dagger}( \frac{p^2}{2} +g(x)\sigma_{x}\sqrt{\hat{n}+1})\hat{T},\frac{p^2}{2}\rho_{g,v}]=0$ so that the evolution operator for the Hamiltonian above is given by \begin{equation} \hat{U}_I(t)=e^{-i\hat{T}^{\dagger}( \frac{p^2}{2} +g(x)\sigma_{x}\sqrt{\hat{n}+1})\hat{T}t}e^{-i \frac{p^2}{2}\rho_{g,v}t} \end{equation} To obtain the first exponential we can expand in a Taylor series, and we note that the powers of the argument are simply \begin{equation} [\hat{T}^{\dagger}( \frac{p^2}{2} +g(x)\sigma_{x}\sqrt{\hat{n}+1})\hat{T}]^k=\hat{T}^{\dagger}( \frac{p^2}{2} +g(x)\sigma_{x}\sqrt{\hat{n}+1})^k\hat{T} , \qquad k\ge 1 \end{equation} such that \begin{equation} e^{-i\hat{T}^{\dagger}( \frac{p^2}{2} +g(x)\sigma_{x}\sqrt{\hat{n}+1})\hat{T}t}= \hat{T}^{\dagger}e^{-i( \frac{p^2}{2} +g(x)\sigma_{x}\sqrt{\hat{n}+1})t}\hat{T} +\rho_{g,v}. \label{evol} \end{equation} Note that the evolution operator in (\ref{evol}) is effectively the interaction of two systems, as it is written in a form in which the field operators commute, unlike in the Hamiltonian (\ref{slow}), where the operators involved do not commute. \section{Master Equations} Now we turn our attention to the superoperator solution of master equations for several quantum optical systems, namely a dissipative cavity field with a Kerr medium \cite{Milburn}, a master equation describing phase sensitive processes \cite{Scully}, and parametric down conversion \cite{Walls}. Usually these equations are solved by transforming them to Fokker-Planck equations \cite{Risken}, which are partial differential equations for quasiprobability distribution functions, typically the Glauber-Sudarshan $P$-function and the Husimi $Q$-function.
Another usual approach to system-environment problems is through the use of Langevin equations, that is, stochastic differential equations that are equivalent to the Fokker-Planck equation \cite{Gardiner}. These approaches usually make it difficult to apply the solutions to an arbitrary initial field, in contrast with the superoperator techniques, where the application to an initial wave function is direct. We have used this feature previously, exploiting it to obtain reconstruction mechanisms that allowed us to obtain information on the state of the quantized electromagnetic field via quasiprobability distribution functions. \subsection{Kerr medium} Before applying superoperator methods to the solution of the master equation, let us show how such an equation may be cast into a Fokker-Planck equation. In order to do this one writes the density matrix in terms of the Glauber-Sudarshan $P$-function, $\hat{\rho}=1/\pi\int P(\alpha)|\alpha\rangle\langle\alpha|d^2\alpha$. Noting that the creation and annihilation operators have the following relations with the coherent state density matrix \cite{Louissel} \begin{equation} \hat{a}^{\dagger} |\alpha\rangle\langle \alpha| =\left(\frac{\partial}{\partial \alpha} + \alpha^*\right)|\alpha\rangle\langle \alpha|, \label{alfaparcial} \end{equation} \begin{equation} |\alpha\rangle\langle \alpha|\hat{a} =\left(\frac{\partial}{\partial \alpha^*} + \alpha \right)|\alpha\rangle\langle \alpha|, \label{alfaparcialc} \end{equation} we can obtain the following correspondence \begin{equation} \hat{a} \hat{\rho}\rightarrow \alpha P(\alpha), \qquad \hat{a}^{\dagger} \hat{\rho} \rightarrow \left(\alpha^* - \frac{\partial}{\partial \alpha} \right)P(\alpha), \end{equation} and \begin{equation} \hat{\rho}\hat{a}^{\dagger}\rightarrow \alpha^* P(\alpha), \qquad \hat{\rho}\hat{a} \rightarrow \left(\alpha - \frac{\partial}{\partial \alpha^*} \right)P(\alpha).
\end{equation} In this form, whenever a creation or annihilation operator occurs in the master equation, we can translate it into a corresponding operation on the Glauber-Sudarshan $P$-function. The equation that results is a Fokker-Planck equation \cite{Risken} \begin{equation} \frac{\partial P(\alpha,t)}{\partial t}=\left[\gamma \left( \frac{\partial }{\partial \alpha} \alpha + \frac{\partial }{\partial \alpha^*} \alpha^* \right)+2\gamma \bar{n} \frac{\partial^2 }{\partial \alpha \partial \alpha^*} \right]P(\alpha,t). \end{equation} This equation is equivalent to the stochastic differential equation \cite{Gardiner} \begin{equation} \frac{d \alpha}{d t}=-\gamma \alpha + \sqrt{2\gamma \bar{n}} \xi(t), \label{Lange} \end{equation} and the corresponding complex-conjugate equation. The quantity $\xi(t)$ is a white-noise fluctuating force with the following correlation properties \begin{equation} \langle \xi(t) \rangle=0,\qquad \langle \xi(t) \xi^*(t')\rangle=\delta(t-t'),\qquad \langle \xi(t) \xi(t')\rangle=\langle \xi^*(t) \xi^*(t')\rangle=0. \end{equation} Equation (\ref{Lange}) is usually called a Langevin equation, and also the "Stratonovich form" of the Fokker-Planck equation (see for instance \cite{Strat,Dutralibro}). The Fokker-Planck equation can also be obtained from the Kramers-Moyal expansion \cite{Kramers,Moyal}, an expansion that does not stop after second-order derivatives (see for instance \cite{Puri}). The master equation for a Kerr medium in the Markov approximation and interaction picture has the form \cite{Milburn} \begin{equation} \frac{d\hat{\rho}}{dt}=-i\chi[\hat{n}^2,\hat{\rho}]+2\gamma\hat{a}\hat{\rho}\hat{a}^{\dagger} -\gamma \hat{a}^{\dagger}\hat{a}\hat{\rho} -\gamma \hat{\rho}\hat{a}^{\dagger}\hat{a}. \label{kerr} \end{equation} Milburn and Holmes \cite{Milburn} solved this equation by transforming it into a partial differential equation for the $Q$-function, for an initial coherent state.
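Before passing to the superoperator treatment, the correspondence between the Fokker-Planck equation and its Langevin form (\ref{Lange}) can be illustrated by direct simulation: for the damped cavity the drift on $\alpha$ is $-\gamma\alpha$ and the noise intensity is $2\gamma\bar{n}$, so the ensemble average $\langle|\alpha|^{2}\rangle$ relaxes to the thermal value $\bar{n}$. The Euler-Maruyama sketch below is our own illustration (all parameter values and the function name are ours, not taken from the text):

```python
import math
import random

def simulate_mean_photon_number(gamma=1.0, nbar=0.5, dt=0.01,
                                steps=400, trajectories=1500, seed=1):
    """Euler-Maruyama integration of
        d(alpha) = -gamma * alpha * dt + sqrt(2 * gamma * nbar) * dW,
    where dW is complex white noise normalized so that <|dW|^2> = dt,
    i.e. <xi(t) xi*(t')> = delta(t - t').  Returns the ensemble average
    of |alpha|^2 at the final time."""
    rng = random.Random(seed)
    amp = math.sqrt(2.0 * gamma * nbar * dt)   # noise amplitude per step
    sigma = math.sqrt(0.5)                     # each quadrature has variance 1/2
    total = 0.0
    for _ in range(trajectories):
        alpha = 1.0 + 0.0j                     # arbitrary initial amplitude
        for _ in range(steps):
            dw = complex(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
            alpha += -gamma * alpha * dt + amp * dw
        total += abs(alpha) ** 2
    return total / trajectories
```

With the parameters shown, the initial amplitude has decayed after a few damping times and the final ensemble average comes out close to $\bar{n}=0.5$, up to a sampling error of order $\bar{n}/\sqrt{N_{\rm traj}}$.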
We can take a different approach to the solution, again using superoperators. If we define \begin{equation} \hat{Y}\hat{\rho}=-i\chi[\hat{n}^2,\hat{\rho}], \end{equation} we rewrite (\ref{kerr}) as \begin{equation} \frac{d\hat{\rho}}{dt}=(\hat{Y}+\hat{J}+\hat{L})\hat{\rho}, \end{equation} where the superoperators $\hat{J}$ and $\hat{L}$ are defined as \begin{equation} \hat{J}\hat{\rho}=2\gamma\hat{a}\hat{\rho}\hat{a}^{\dagger}, \qquad \hat{L}\hat{\rho}=-\gamma \hat{a}^{\dagger}\hat{a}\hat{\rho} -\gamma \hat{\rho}\hat{a}^{\dagger}\hat{a}. \end{equation} Now we use the transformation \begin{equation} \hat{\tilde{\rho}}=\exp[-(\hat{Y}+\hat{L})t]\hat{\rho} \end{equation} to obtain \begin{equation} \frac{d\hat{\tilde{\rho}}}{dt}=\exp[-i\chi \hat{R}t- 2\gamma t]\hat{J}\hat{\tilde{\rho}}, \label{Eq-milb} \end{equation} with \begin{equation} \hat{R}\hat{\tilde{\rho}}=2(\hat{n}\hat{\tilde{\rho}} - \hat{\tilde{\rho}}\hat{n}). \end{equation} In arriving at equation (\ref{Eq-milb}) we have used the formula $e^{y\hat{A}}\hat{B}e^{-y\hat{A}}=\hat{B}+y[\hat{A},\hat{B}]+\frac{y^2}{2!}[\hat{A},[\hat{A},\hat{B}]] +\dots $ for $y$ a parameter and $\hat{A}$ and $\hat{B}$ operators. We have also used the commutation relation \begin{equation} [\hat{Y},\hat{J}]\hat{\rho} =i\chi\hat{R}\hat{J}\hat{\rho}. \end{equation} Now it is easy to show that $\hat{R}$ and $\hat{J}$ commute, so that we can finally write the solution to equation (\ref{kerr}) as \begin{equation} \hat{\rho}(t)=e^{\hat{Y}t}e^{\hat{L}t}\exp\left[\frac{e^{(-i\chi \hat{R}- 2\gamma)t}-1}{-i\chi \hat{R}- 2\gamma }\hat{J}\right]\hat{\rho}(0).
\end{equation} Note that the above solution may easily be applied to any initial density matrix: \begin{eqnarray} \nonumber \hat{\rho}(t)&=&\sum_{k,n,m=0}^{\infty} \hat{\rho}_{n+k,m+k}(0)e^{-i\chi t(n^2-m^2)-\gamma t(n+m)}\sqrt{\frac{(n+k)!(m+k)!}{n!m!}} \\ &&\left[\frac{1-e^{-2i\chi t(n-m)-2\gamma t}}{2i\chi (n-m)+2\gamma }\right]^k \frac{(2\gamma)^k}{k!}|n\rangle \langle m|, \end{eqnarray} where $\hat{\rho}_{n,m}(0)$ are the matrix elements of the initial density matrix in the Fock basis. \section{Conclusions} We have shown the importance of looking for transformations that simplify Hamiltonians before trying to solve a given system. In particular, we have shown how a $3$-body system can be reduced to a $2$-body system in the case of a slow atom interacting with a quantized field. We have given the most complete solutions for the ion-laser interaction in several cases: the time-dependent case, several ions, and ions vibrating in two dimensions.
\section{Introduction} The goal of this paper is to give a self-contained proof of an estimate for solutions of the nonlinear inequality \be\label{e1} \dot{g}(t)\leq -\gamma(t)g(t)+\alpha(t,g(t))+\beta(t),\ t\geq 0;\ g(0)=g_0;\ \dot{g}:=\frac{dg}{dt}, \ee and to demonstrate some of its many possible applications. Denote $\R_+:=[0,\infty)$. It is not assumed a priori that solutions $g(t)$ to inequality \eqref{e1} are defined on all of $\R_+$, that is, that these solutions exist globally. We give sufficient conditions for the global existence of $g(t)$. Moreover, under these conditions a bound on $g(t)$ is given, see estimate \eqref{e5} in Theorem 1. This bound yields the relation $\lim_{t\to \infty}g(t)=0$ if $\lim_{t\to \infty}\mu(t)=\infty$ in \eqref{e5}. Let us formulate our assumptions. \noindent {\it Assumption A).} We assume that the function $g(t)\geq 0$ is defined on some interval $[0,T)$, has a bounded derivative $\dot{g}(t):=\lim_{s\to +0}\frac{g(t+s)-g(t)}{s}$ from the right at any point of this interval, and satisfies inequality \eqref{e1} at all $t$ at which $g(t)$ is defined. The functions $\gamma(t)$ and $\beta(t)$ are continuous, non-negative, and defined on all of $\R_+$. The function $\alpha(t,g)\ge 0$ is continuous on $\R_+\times \R_+$, nondecreasing with respect to $g$, and locally Lipschitz with respect to $g$. This means that $\alpha(t,g)\ge \alpha(t,h)$ if $g\ge h$, and \be\label{e2} |\alpha(t,g)-\alpha(t,h)|\leq L(T,M)|g-h|, \ee if $t\in[0,T]$, $|g|\leq M$ and $|h|\leq M$, $M=const>0$, where $L(T,M)>0$ is a constant independent of $g$, $h$, and $t$. \noindent {\it Assumption B).} There exists a $C^1(\R_+)$ function $\mu(t)>0$ such that \be\label{e3} \alpha\left(t,\frac{1}{\mu(t)}\right)+\beta(t)\leq \frac{1}{\mu(t)}\left(\gamma(t)-\frac{\dot{\mu}(t)}{\mu(t)}\right),\quad \forall t\ge 0, \ee \be\label{e4} \mu(0)g(0)< 1.
\ee If $\mu(0)g(0)\le 1$, then the strict inequality sign $< \frac 1 {\mu(t)}$ in formula \eqref{e5} of Theorem 1 is replaced by $\le \frac 1 {\mu(t)}$. Our results are formulated in Theorems 1 and 2 and {\it Propositions 1,2}. {\it Proposition 1} is related to Example 1, and {\it Proposition 2} is related to Example 2, see below. \begin{thm}\label{thm1} If Assumptions A) and B) hold, then any solution $g(t)\ge 0$ to inequality \eqref{e1} exists on all of $\R_+$, i.e., $T=\infty$, and satisfies the following estimate: \be\label{e5}0\leq g(t)<\frac{1}{\mu(t)}\quad \forall t\in \R_+. \ee If $ \mu(0)g(0)\le 1$, then $0\leq g(t)\le \frac{1}{\mu(t)}\quad \forall t\in \R_+.$ \end{thm} \begin{rem}\label{rem1} If $\lim_{t\to \infty} \mu(t)=\infty$, then $\lim_{t\to \infty}g(t)=0$. \end{rem} Let us explain how one applies estimate \eqref{e5} in various problems (see also papers \cite{R593}, \cite{R558}, and the monograph \cite{R499} for other applications of differential inequalities which are particular cases of inequality \eqref{e1}). \noindent {\it Example 1.} Consider the problem \be\label{e6} \dot{u}=A(t)u+B(t)u,\quad u(0):=u_0, \ee where $A(t)$ is a linear bounded operator in a Hilbert space $H$ and $B(t)$ is a bounded linear operator such that $$\int_0^\infty \|B(t)\|dt:=C<\infty.$$ Assume that \be\label{e7} \text{Re}(A(t)u,u)\leq 0\quad \forall u\in H,\ \forall t\geq 0.\ee Operators satisfying inequality \eqref{e7} are called {\it dissipative}. They arise in many applications, for example in the study of passive linear and nonlinear networks (e.g., see \cite{R129}, and \cite{R118}, Chapter 3). One may also consider some classes of unbounded linear operators, using the scheme developed in the proofs of {\it Propositions 1,2}. For example, in {\it Proposition 1} the operator $A(t)$ can be the generator of a $C_0$ semigroup $T(t)$ such that $\sup_{t\ge 0}\|T(t)\|\le m$, where $m>0$ is a constant.
Let $A(t)$ be a closed, densely defined (in $H$), dissipative linear operator, with domain of definition $D(A(t))$ independent of $t$, and let $I$ be the identity operator in $H$. Assume that the Cauchy problem $$\dot{U}(t)=A(t)U(t),\quad U(0)=I,$$ for the operator-valued function $U(t)$ has a unique global solution and $$\sup_{t\ge 0}\|U(t)\|\le m,$$ where $m>0$ is a constant. Then such an unbounded operator $A(t)$ can be used in {\it Example 1}. \noindent {\it Proposition 1.} {\it If condition \eqref{e7} holds and $C:=\int_0^\infty \|B(t)\|dt<\infty$, then the solution to problem \eqref{e6} exists on $\R_+$, is unique, and satisfies the following inequality: \be\label{e8} \sup_{t\geq 0}\|u(t)\|\leq e^C\|u_0\|. \ee } Inequality \eqref{e8} implies Lyapunov stability of the zero solution to equation \eqref{e6}. Recall that the zero solution to equation \eqref{e6} is called Lyapunov stable if for any $\epsilon>0$, however small, one can find a $\delta=\delta(\epsilon)>0$, such that if $\|u_0\|\le \delta$, then the solution to the Cauchy problem \eqref{e6} satisfies the estimate $\sup_{t\ge 0}\|u(t)\|\le \epsilon$. If, in addition, $\lim_{t\to \infty}\|u(t)\|=0$, then the zero solution to equation \eqref{e6} is called asymptotically stable in the Lyapunov sense. \noindent {\it Example 2.} Consider an abstract nonlinear evolution problem \be\label{e9} \dot{u}=A(t)u+F(t,u)+b(t),\quad u(0)=u_0,\ee where $u(t)$ is a function with values in a Hilbert space $H$, $A(t)$ is a linear bounded operator in $H$ which satisfies the inequality \be\label{e10} \text{Re}(Au,u)\leq -\gamma(t)\|u\|^2,\quad t\geq 0;\qquad \gamma=\frac{r}{(1+t)^{\nu}}, \ee where $r>0$ and $\nu>0$ are constants, $F(t,u)$ is a nonlinear map in $H$, and the following estimates hold: \be\label{e11} \|F(t,u)\|\leq \alpha(t,g),\quad g:=g(t):=\|u(t)\|; \quad \|b(t)\|\leq \beta(t), \ee where $\beta(t)\ge 0$ and $\alpha(t,g)\ge 0$ satisfy the conditions in {\it Assumption A)}.
Let us assume that \be\label{e12} \alpha(t,g)\leq c_0g^p,\quad p>1;\quad \beta(t)\leq \frac{c_1}{(1+t)^{\omega}}, \ee where $c_0$, $p$, $\omega$ and $c_1$ are positive constants.\\ \noindent {\it Proposition 2.} {\it If conditions \eqref{e9}-\eqref{e12} hold, and inequalities \eqref{e20},\eqref{e21} and \eqref{e23} are satisfied (see these inequalities in the proof of {\it Proposition 2}), then the solution to the evolution problem \eqref{e9} exists on all of $\R_+$ and satisfies the following estimate: \be\label{e13} 0\leq \|u(t)\|\leq \frac{1}{\lambda(1+t)^q},\qquad \forall t\geq 0, \ee where $\lambda$ and $q$ are some positive constants whose choice is specified by inequalities \eqref{e20},\eqref{e21} and \eqref{e23}.} The choice of $\lambda $ and $q$ is motivated and explained in the proof of {\it Proposition 2} (see inequalities \eqref{e20}, \eqref{e21} and \eqref{e23} in Section 2). Inequality \eqref{e13} implies asymptotic stability of the solution to problem \eqref{e9} in the sense of Lyapunov and, additionally, gives a rate of convergence of $\|u(t)\|$ to zero as $t\to \infty$. The results in {\it Examples 1,2} can be obtained in Banach spaces, but we do not go into detail here. Proofs of Theorem 1 and {\it Propositions 1} and {\it 2} are given in Section 2. Theorem 2, which is a discrete analog of Theorem 1, is formulated and proved in Section 3. \section{Proofs} \noindent {\it Proof of Proposition 1.} Local existence of the solution $u(t)$ to problem \eqref{e6} is known (see, e.g., \cite{DK}). Uniqueness of this solution follows from the linearity of the problem and from estimate \eqref{e8}. Let us prove this estimate. Multiply \eqref{e6} by $u(t)$, let $g(t):=\|u(t)\|$, take the real part, use \eqref{e7}, and get $$\frac{1}{2}\frac{d g^2(t)}{dt}\leq \text{Re}(B(t)u(t),u(t))\leq \|B(t)\|g^2(t).$$ This implies $g^2(t)\leq g^2(0)e^{2C}$, so \eqref{e8} follows. {\it Proposition 1} is proved.
\hfill $\Box$ \noindent {\it Proof of Proposition 2.} The local existence and uniqueness of the solution $u(t)$ to problem \eqref{e9} follow from {\it Assumption A)} (see, e.g., \cite{DK}). The existence of $u(t)$ for all $t\ge 0$, that is, the global existence of $u(t)$, follows from estimate \eqref{e13} (see, e.g., \cite{R499}, pp.167-168). Let us derive estimate \eqref{e13}. Multiply \eqref{e9} by $u(t)$, let $g(t):=\|u(t)\|$, take the real part, use \eqref{e10}-\eqref{e12}, and get \be\label{e14} g\dot{g}\leq -\gamma(t)g^2(t)+\alpha(t,g(t))g(t)+\beta(t)g(t),\ t\geq 0. \ee Since $g\ge 0$, one obtains inequality \eqref{e1} from this inequality. First, however, we would like to explain in detail the meaning of the derivative $\dot{g}$ in our proof. By $\dot{g}$ the right derivative is understood: $$\dot{g}(t):=\lim_{s\to +0}\frac{g(t+s)-g(t)}{s}.$$ If $g(t)=\|u(t)\|$ and $u(t)$ is continuously differentiable, then $\psi(t):=g^2(t)=(u(t),u(t))$ is continuously differentiable, and its derivative at a point $t$ at which $g(t)>0$ can be computed by the formula $$\dot{g}= Re (\dot{u}(t),u^0(t)),$$ where $u^0(t):=\frac {u(t)}{\|u(t)\|}$. Thus, the function $g(t)=\sqrt{\psi(t)}$ is continuously differentiable at any point at which $g(t)\neq 0$. At a point $t$ at which $g(t)=0$, the vector $u^0(t)$ is not defined, and the derivative of $g(t)$ does not exist in the usual sense, but the right derivative of $g(t)$ still exists and can be calculated explicitly (using $u(t)=0$): \bee\begin{split} \dot{g}(t)&=\lim_{s\to +0}\frac{\|u(t+s)\|-\|u(t)\|}{s}=\lim_{s\to +0}\frac{\|u(t)+s\dot{u}(t)+o(s)\| }{s}\\ &=\lim_{s\to 0}\|\dot{u}(t)+o(1)\|=\|\dot{u}(t)\|. \end{split}\eee If $u(t)$ is continuously differentiable at some point $t$, and $u(t)\neq 0$, then $$\dot{g}=\frac{d}{dt}\|u(t)\|\leq \|\dot{u}(t)\|.$$ Indeed, \bee 2g(t)\dot{g}(t)=(\dot{u}(t),u(t))+(u(t),\dot{u}(t))\leq 2\|\dot{u}\|\|u\|=2\|\dot{u}(t)\|g(t). \eee If $g(t)\neq 0$, then the above inequality implies $\dot{g}(t)\leq \|\dot{u}(t)\|$, as claimed.
One can also derive this inequality from the formula $\dot{g}= Re (\dot{u}(t),u^0(t))$, since $|Re (\dot{u}(t),u^0(t))|\leq \|\dot{u}(t)\|$. If $g(t)>0$, then from \eqref{e14} one obtains \be\label{e15} \dot{g}(t)\leq -\gamma(t)g(t)+\alpha(t,g(t))+\beta(t),\quad t\ge 0. \ee If $g(t)=0$ on an open set, then inequality \eqref{e15} holds on this set also, because $\dot{g}=0$ on this set while the right-hand side of \eqref{e15} is non-negative at $g=0$. If $g(t)=0$ at some point $t=t_0$, then \eqref{e15} holds at $t=t_0$ because, as we have proved above, $\dot{g}(t_0)=\|\dot{u}(t_0)\|=\|F(t_0,0)+b(t_0)\|$, while the right-hand side of \eqref{e15} at $g=0$ equals $\alpha(t_0,0)+\beta(t_0)\ge \|F(t_0,0)+b(t_0)\|$ by \eqref{e11}. If assumptions \eqref{e12} hold, then inequality \eqref{e15} can be rewritten as \be\label{e16} \dot{g}\leq -\frac{r}{(1+t)^\nu}g+c_0g^p+\frac{c_1}{(1+t)^\omega},\quad p>1. \ee Let us look for $\mu(t)$ of the form \be\label{e17} \mu(t)=\lambda(1+t)^q,\quad q=const>0,\quad \lambda=const>0. \ee Inequality \eqref{e3} takes the form \be\label{e18} \frac{c_0}{[\lambda(1+t)^q]^p} +\frac{c_1}{(1+t)^\omega}\leq \frac{1}{\lambda(1+t)^q}\left( \frac{r}{(1+t)^\nu}-\frac{q}{1+t}\right),\quad t>0,\ee or \be\label{e19} \frac{c_0}{\lambda^{p-1}(1+t)^{q(p-1)} }+\frac{c_1\lambda}{(1+t)^{\omega-q}}+\frac{q}{1+t}\leq \frac{r}{(1+t)^\nu},\quad t>0. \ee Assume that the following inequalities \eqref{e20}-\eqref{e21} hold: \be\label{e20} q(p-1)\geq \nu,\quad \omega -q\geq \nu,\quad 1\geq \nu, \ee and \be\label{e21} \frac{c_0}{\lambda^{p-1}}+c_1\lambda+q\leq r. \ee Then inequality \eqref{e19} holds, and \thmref{thm1} yields \be\label{e22} g(t)=\|u(t)\|<\frac{1}{\lambda(1+t)^q},\quad \forall t\geq 0, \ee provided that \be\label{e23} \|u_0\|<\frac{1}{\lambda}. \ee Note that for any $\|u_0\|$ inequality \eqref{e23} holds if $\lambda$ is sufficiently small. For a fixed $\lambda$, however small, inequality \eqref{e21} holds if $r$ is sufficiently large.
{\it Proposition 2} is proved.\hfill $\Box$ The proof of {\it Proposition 2} provides a flexible general scheme for obtaining estimates of the behavior of the solution to evolution problem \eqref{e9} for $t\to \infty$.\\ \noindent {\it Proof of \thmref{thm1}.} Let \be\label{e24} g(t)=\frac{v(t)}{a(t)},\quad a(t):=e^{\int_0^t\gamma(s)ds}, \ee \be\label{e25} \eta(t):=\frac{a(t)}{\mu(t)},\quad \eta(0)=\frac{1}{\mu(0)}>g(0). \ee Then inequality \eqref{e1} reduces to \be\label{e26} \dot{v}(t)\leq a(t)\alpha\left(t,\frac{v(t)}{a(t)}\right)+a(t)\beta(t),\quad t\geq 0; \quad v(0)=g(0). \ee One has \be\label{e27} \dot{\eta}(t)=\frac{\gamma(t)a(t)}{\mu(t)}-\frac{\dot{\mu}(t)a(t)}{\mu^2(t)} =\frac{a(t)}{\mu(t)}\left(\gamma(t)-\frac{\dot{\mu}(t)}{\mu(t)}\right). \ee From \eqref{e3}, \eqref{e24}-\eqref{e27}, one gets \be\label{e28} v(0)<\eta(0),\quad \dot{v}(0)\leq \dot{\eta}(0). \ee Therefore there exists a $T>0$ such that \be\label{e29} 0\leq v(t)<\eta(t),\quad \forall t\in[0,T). \ee Let us prove that $T=\infty$. First, note that if inequality \eqref{e29} holds for $t\in[0,T)$, or, equivalently, if \be\label{e30} 0\leq g(t)<\frac 1 {\mu(t)}, \qquad \forall t\in[0,T),\ee then \be\label{e31}\dot{v}(t)\leq \dot{\eta}(t),\qquad \forall t\in[0,T).\ee One can pass to the limit $t\to T-0$ in this inequality and get \be\label{e32} \dot{v}(T)\leq \dot{\eta}(T). \ee Indeed, from inequality \eqref{e30} it follows that $$\alpha\left(t,\frac {v}{a}\right)+\beta=\alpha(t,g)+\beta\leq \alpha(t,\frac 1 {\mu})+\beta,$$ because $\alpha(t,g)\le \alpha(t,\frac 1 {\mu})$. Furthermore, from inequality \eqref{e3} one derives: $$\alpha\left(t,\frac 1 {\mu}\right)+\beta\leq\frac 1 {\mu(t)}\left(\gamma(t)-\frac{\dot{\mu}(t)}{\mu(t)}\right).$$ Consequently, from inequalities \eqref{e26}-\eqref{e27} one obtains $$ \dot{v}(t)\leq \frac{a(t)}{\mu(t)}\left(\gamma(t)-\frac{\dot{\mu}(t)}{\mu(t)}\right) =\dot{\eta}(t),\qquad t\in[0,T), $$ and inequality \eqref{e31} is proved. Let $t\to T-0$ in \eqref{e31}. 
The function $\eta(t)$ is defined for all $t\in \R_+$ and $\dot{\eta}(t)$ is continuous on $\R_+$. Thus, the limit $$\lim_{t\to T-0}\dot{\eta}(t)=\dot{\eta}(T)$$ exists. By $\dot{v}(T)$ in inequality \eqref{e32} one may understand $\limsup_{t\to T-0}\dot{v}(t)$, which does exist because $\dot{v}(t)$ is bounded for all $t<T$ by a constant independent of $t\in [0,T]$, due to estimate \eqref{e31}. To prove that $T=\infty$, we prove that the "upper" solution $w(t)$ to inequality \eqref{e26} exists for all $t\in \R_+$. Define $w(t)$ as the solution to the problem \be\label{e33} \dot{w}(t)=a(t)\alpha\left(t,\frac{w(t)}{a(t)}\right)+a(t)\beta(t),\quad w(0)=v_0. \ee The unique solution to problem \eqref{e33} exists locally, on $[0,T)$, because $\alpha(t,g)$ is assumed locally Lipschitz. On the interval $[0,T)$ one obtains the inequality $$ 0\le v(t)\leq w(t), \qquad t\in[0,T),$$ by the standard comparison lemma (see, e.g., \cite{R499}, p.99, or \cite{H}). Thus, the inequality \be\label{e34} 0\le v(t)\leq w(t)\leq \eta(t),\qquad t\in[0,T), \ee holds. One derives the desired conclusion $T=\infty$ from the following claim: \noindent {\it Proposition 3.} {\it The solution $w(t)$ to problem \eqref{e33} exists on every interval $[0,T]$ on which it is a priori bounded by a constant depending only on $T$.} We prove this claim later. Assuming that this claim is established, one concludes that $T=\infty$. Let us finish the proof of \thmref{thm1} using {\it Proposition 3}. Since $\eta(t)$ is bounded on any interval $[0,T]$ (by a constant depending only on $T$), one concludes from {\it Proposition 3} that $w(t)$ (and, therefore, $v(t)$) exists on all of $\R_+$. If $v(t)\leq \eta(t)$ $\forall t\in \R_+$, then inequality \eqref{e5} holds (see \eqref{e24} and \eqref{e25}), and \thmref{thm1} is proved. Let us prove {\it Proposition 3}.
\noindent {\it Proof of Proposition 3.} We prove a more general statement, namely {\it Proposition 4}, from which {\it Proposition 3} follows.\\ \noindent {\it Proposition 4.} {\it Assume that \be\label{e35} \dot{u}=f(t,u),\quad u(0)=u_0, \ee where $f(t,u)$ is an operator in a Banach space $X$, locally Lipschitz with respect to $u$ for every $t$, i.e., $\|f(t,u)-f(t,v)\|\leq L(t,M)\|u-v\|$, $\forall u,v\in \{w\ :\ \|w\|\leq M\}$. The unique solution to problem \eqref{e35} exists for all $t\ge 0$ if and only if \be\label{e36} \|u(t)\|\leq c(t), \quad t\geq 0, \ee where $c(t)$ is a continuous function defined for all $t\geq 0$, and inequality \eqref{e36} holds for all $t$ for which $u(t)$ exists.} \noindent {\it Proof of Proposition 4.} The necessity of condition \eqref{e36} is obvious: one may take $c(t)=\|u(t)\|$. To prove its sufficiency, recall a known local existence theorem, see, e.g., \cite{DK}. {\it Proposition 5.} {\it If $\|f(t,u)\|\leq M_1$ and $\|f(t,u)-f(t,v)\|\leq L\|u-v\|$, $\forall t\in [t_0,t_0+T_1],$ $\|u-u_0\|\leq R,$ $u_0=u(t_0)$, then there exists a $\dl>0$, $\dl=\min(\frac{R}{M_1}, \frac{1}{L}, T_1-T)$, such that for every $\tau_0\in [t_0,T]$, $T<T_1$, there exists a unique solution to equation \eqref{e35} in the interval $(\tau_0-\dl,\tau_0+\dl)$ and $\|u(t)-u(t_0)\|\leq R.$ } Using {\it Proposition 5}, let us prove the sufficiency of assumption \eqref{e36} for the global existence of $u(t)$, i.e., for the existence of $u(t)$ for all $t\ge t_0$. Assume that condition \eqref{e36} holds and the solution to problem \eqref{e35} exists on $[t_0,T)$ but does not exist on $[t_0,T_1)$ for any $T_1>T$. Let us derive a contradiction from this assumption. {\it Proposition 5} guarantees the existence and uniqueness of the solution to problem \eqref{e35} with $t_0=T$ and the initial value $u_0=u(T-0).$ The value $u(T-0)$ exists if inequality \eqref{e36} holds, as we prove below.
The solution $u(t)$ exists on the interval $[T-\dl,T+\dl]$ and, by the uniqueness theorem, coincides with the solution $u(t)$ of problem \eqref{e35} on the interval $(T-\dl,T)$. Therefore, the solution to \eqref{e35} can be uniquely extended to the interval $[0,T+\dl)$, contrary to the assumption that it does not exist on the interval $[0,T_1)$ for any $T_1>T$. This contradiction proves that $T=\infty$, i.e., the solution to problem \eqref{e35} exists for all $t\geq t_0$ if estimate \eqref{e36} holds and $c(t)$ is defined and continuous $\forall t\ge t_0$. Let us now prove the existence of the limit $$\lim_{t\to T-0}u(t):=u(T-0).$$ Let $t_n\to T$, $t_n<T$. Then \bee \|u(t_n)-u(t_{n+m})\|\leq \int_{t_n}^{t_{n+m}}\|f(s,u(s))\|ds\leq (t_{n+m}-t_n)M_1\to 0\text{ as }n\to\infty. \eee Therefore, by the Cauchy criterion, the limit $$\lim_{n\to \infty}u(t_n)=u(T-0)$$ exists, and it does not depend on the choice of the sequence $t_n$. Estimate \eqref{e36} guarantees the existence of the constant $M_1$. {\it Proposition 4} is proved. \hfill $\Box$ Therefore {\it Proposition 3} is also proved and, consequently, the statement of \thmref{thm1} corresponding to assumption \eqref{e4} is proved. In our case $t_0=0$, but one may replace the initial moment $t_0=0$ in \eqref{e1} by an arbitrary $t_0\in \R_+$. Finally, if $g(0)\leq \frac 1 {\mu(0)}$, then one proves the inequality $$0\le g(t)\le \frac 1 {\mu(t)}, \qquad \forall t\in \R_+,$$ using an argument similar to the one above. This argument is left to the reader. \thmref{thm1} is proved.\hfill $\Box$ \section{Discrete version of \thmref{thm1}} \begin{thm}\label{thm2} Assume that $g_n\geq 0$, $\alpha(n,g_n)\geq 0,$ \be\label{e37} g_{n+1}\leq (1-h_n\gamma_n)g_n+h_n\alpha(n,g_n)+h_n\beta_n,\quad h_n>0,\ 0<h_n\gamma_n<1,\ee and $\alpha(n,g_n)\geq \alpha(n,q_n)$ if $g_n\geq q_n$.
If there exists a sequence $\mu_n> 0$ such that \be\label{e38} \alpha(n,\frac{1}{\mu_n}) +\beta_n\leq \frac{1}{\mu_n}(\gamma_n-\frac{\mu_{n+1}-\mu_n}{h_n\mu_n}), \ee and \be\label{e39} g_0\leq \frac{1}{\mu_0}, \ee then \be\label{e40} 0\leq g_n\leq \frac{1}{\mu_n}\qquad \forall n\geq 0. \ee \end{thm} \begin{proof} For $n=0$ inequality \eqref{e40} holds because of \eqref{e39}. Assume that it holds for all $n\leq m$ and let us check that it then holds for $n=m+1$. If this is done, \thmref{thm2} is proved. Using the inductive assumption, one gets: \bee g_{m+1}\leq (1-h_m\gamma_m)\frac{1}{\mu_m}+h_m\alpha(m,\frac{1}{\mu_m})+h_m\beta_m. \eee This and inequality \eqref{e38} imply: \bee\begin{split} g_{m+1}&\leq (1-h_m\gamma_m)\frac{1}{\mu_m}+h_m\frac{1}{\mu_m}(\gamma_m-\frac{\mu_{m+1}- \mu_m}{h_m\mu_m})\\ &=\frac{\mu_mh_m-\mu_mh^2_m\gamma_m+h^2_m\gamma_m\mu_m-h_m\mu_{m+1}+h_m\mu_m }{\mu^2_mh_m}\\ &=\frac{2\mu_mh_m-h_m\mu_{m+1}}{\mu_m^2h_m}=\frac{2\mu_m-\mu_{m+1}}{\mu^2_m}= \frac{1}{\mu_{m+1}}+\frac{2\mu_m-\mu_{m+1}}{\mu^2_m}-\frac{1}{\mu_{m+1}}. \end{split}\eee The proof is completed if one checks that \bee \frac{2\mu_m-\mu_{m+1}}{\mu^2_m}\leq \frac{1}{\mu_{m+1}}, \eee or, equivalently, that \bee 2\mu_m\mu_{m+1}-\mu^2_{m+1}-\mu^2_m\leq 0. \eee The last inequality is obvious, since it can be written as $$-(\mu_m-\mu_{m+1})^2\le 0.$$ \thmref{thm2} is proved. \end{proof} \thmref{thm2} was formulated in \cite{R593} and proved in \cite{R558}. For completeness we have included a proof, which differs only slightly from the one in \cite{R558}. \newpage
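As a numerical illustration of the discrete theorem (with hypothetical parameter values chosen by us only to satisfy the hypotheses), take $h_n=h=0.1$, $\gamma_n=2$, $\beta_n=0$, $\alpha(n,g)=g^2$ and $\mu_n=1+hn$. Then $h_n\gamma_n=0.2\in(0,1)$, condition \eqref{e38} reduces to $1/\mu_n^2\le(1/\mu_n)(2-1/\mu_n)$, which holds since $\mu_n\ge 1$, and \eqref{e39} holds for $g_0\le 1$. Running the recursion \eqref{e37} with equality confirms the bound \eqref{e40}:

```python
def discrete_bound_holds(g0=0.9, h=0.1, gamma=2.0, n_steps=500):
    """Iterate g_{n+1} = (1 - h*gamma) g_n + h * g_n**2  (i.e. beta_n = 0,
    alpha(n, g) = g^2) and check g_n <= 1/mu_n with mu_n = 1 + h*n,
    as the discrete theorem guarantees when its hypotheses hold."""
    g = g0
    for n in range(n_steps + 1):
        mu = 1.0 + h * n
        if g > 1.0 / mu:
            return False          # bound violated
        g = (1.0 - h * gamma) * g + h * g * g
    return True
```

With $g_0=0.9\le 1/\mu_0$ the bound holds at every step, while an initial value violating \eqref{e39}, e.g. $g_0=1.5$, fails the check immediately.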
\section{Introduction} The \noun{$k$-Dominating Set} problem asks, for a graph $G=(V,E)$ and a positive integer $k$ given as inputs, whether there is a vertex-subset $S\subseteq V$ of size at most $k$ such that every vertex in $V\setminus S$ is adjacent to some vertex in $S$. Such a vertex-subset is called a \emph{dominating set} of $G$. This problem is known to be NP-hard even in very restricted graph classes, such as the class of planar graphs with maximum degree~$3$~\cite{GareyJohnson1979}. In the world of parameterized complexity, this is one of the most important hard problems: the problem parameterized by $k$ is the canonical $W\left[2\right]$-hard problem~\cite{DowneyFellows1999}. The problem remains $W\left[2\right]$-hard even in many restricted classes of graphs --- for example, it is $W\left[2\right]$-hard in classes of graphs with bounded average degree~\cite{GolovachVillanger2008}. This latter fact implies that it is unlikely that the problem has a fixed-parameter-tractable (FPT) algorithm on graphs with a bounded average degree, that is, an algorithm that runs in time $f(k)\cdot n^{c}$ for \emph{some} computable function $f(k)$ independent of the input size $n$, and some constant $c$ independent of $k$.\footnote{To know more about the notions of FPT and $W$-hardness and to see why it is considered unlikely that a $W\left[2\right]$-hard problem will have an FPT algorithm, see~\cite{DowneyFellows1999}.} The problem has an FPT algorithm on certain restricted families of graphs, such as planar graphs~\cite{FominThilikos2006}, graphs of bounded genus~\cite{EllisFanFellows2004}, $K_{h}$-topological-minor-free graphs, and graphs of bounded degeneracy~\cite{AlonGutner2007}; this last being, to the best of our knowledge, the most general graph class previously known to have an FPT algorithm for this problem.
In the current paper, we show that the problem has an FPT algorithm in a class of graphs that encompasses, and is strictly larger than, all such previously known classes --- namely, the class of $K_{i,j}$-free graphs. Closely related to the notion of an FPT algorithm is the concept of a \emph{kernel} for a parameterized problem. For the \noun{$k$-Dominating Set} problem parameterized by $k$, a kernelization algorithm is a polynomial-time algorithm that takes $(G,k)$ as input and outputs a graph $G'$ and a nonnegative integer $k'$ such that the size of $G'$ is bounded by some function $g(k)$ of $k$ alone, $k'\le h(k)$ for some function $h(k)$, and $G$ has a dominating set of size at most $k$ if and only if $G'$ has a dominating set of size at most $k'$. The resulting instance $G'$ is called a kernel for the problem. A parameterized problem has a kernelization algorithm if and only if it has an FPT algorithm~\cite{DowneyFellows1999}, and so it is unlikely that the \noun{$k$-Dominating Set} problem on general graphs or on graphs having a bounded average degree has a kernelization algorithm. For the same reason, this problem has a kernelization algorithm when restricted to those graph classes for which it has an FPT algorithm. But the size of the kernel obtained from such an algorithm could be exponential in $k$, and finding if the kernel size can be made smaller --- in particular, whether it can be made polynomial in $k$ --- is an important problem in parameterized complexity. Proving polynomial bounds on the size of the kernel for different parameterized problems has been a significant practical aspect in the study of the parameterized complexity of NP-hard problems, and many positive results are known. See~\cite{GuoNiedermeier2007} for a survey of kernelization results. 
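On small instances, any kernelization can be sanity-checked against a brute-force decision procedure for the \noun{$k$-Dominating Set} problem. The helper below is our own testing aid (the function name and the adjacency-dict graph representation are our conventions), not part of any kernelization algorithm:

```python
from itertools import combinations

def has_dominating_set(adj, k):
    """Brute-force test: does the graph given by adj (dict mapping each
    vertex to the set of its neighbors) have a dominating set of size
    at most k?  Exponential time; intended only for tiny test graphs."""
    vertices = list(adj)
    for size in range(min(k, len(vertices)) + 1):
        for S in combinations(vertices, size):
            chosen = set(S)
            # every vertex must be chosen or adjacent to a chosen vertex
            if all(v in chosen or adj[v] & chosen for v in vertices):
                return True
    return False
```

For example, the $5$-cycle has no dominating set of size $1$ (a single vertex covers only itself and two neighbors) but does have one of size $2$.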
For the \noun{$k$-Dominating Set} problem, the first polynomial kernel result was obtained by Alber et al.~\cite{AlberFellowsNiedermeier2004} in 2004: they showed that the problem has a \emph{linear} kernel of at most $335k$ vertices in planar graphs. This bound for planar graphs was later improved by Chen et al.~\cite{ChenFernauKanjXia2007} to $67k$. Fomin and Thilikos~\cite{FominThilikos2004} showed in 2004 that the same reduction rules as used by Alber et al.\ give a linear kernel (linear in $k+g$) for the problem in graphs of genus $g$. The next advances in kernelizing this problem were made by Alon and Gutner in 2008~\cite{AlonGutner2008}. They showed that the problem has a linear kernel in $K_{3,h}$-topological-minor-free graph classes (which include, for example, planar graphs), and a polynomial kernel (where the exponent depends on $h$) for $K_{h}$-topological-minor-free graph classes, the latter being the most general class of graphs for which the problem had previously been shown to have a polynomial kernel. In the meantime, the same authors had shown in 2007 that the problem is FPT in (the strictly larger class of) graphs of bounded degeneracy~\cite{AlonGutner2007}, but had left open the question whether the problem has a polynomial kernel in such graph classes. In this paper, we answer this question in the affirmative, and show, in fact, that strictly larger classes of graphs --- the $K_{i,j}$-free graph classes --- have polynomial kernels for this problem.
See Table~\ref{tab:KnownResults} for a summary of some FPT and kernelization results for the \noun{$k$-Dominating Set} problem on various classes of graphs. \begin{table} \begin{centering} \begin{tabular}{|c|@{\hspace{2mm}}@{\hspace{2mm}}l|@{\hspace{2mm}}@{\hspace{2mm}}l@{\hspace{2mm}}|}\hline \hline \emph{Graph Class} & \emph{FPT Algorithm Running Time} & \emph{Kernel Size}\tabularnewline \hline & & \tabularnewline Planar & $O(k^{4}+2^{15.13\sqrt{k}}k+n^{3})$~\cite{FominThilikos2006} & $O(k)$~\cite{AlberFellowsNiedermeier2004,ChenFernauKanjXia2007} \tabularnewline & & \tabularnewline Genus-$g$ & $O((24g^{2}+24g+1)^{k}n^{2})$~\cite{EllisFanFellows2004} & $O(k+g)$~\cite{FominThilikos2004} \tabularnewline & & \tabularnewline $K_{h}$-minor-free & $2^{O(\sqrt{k})}n^{c}$~\cite{DemaineFominHajiaghayiThilikos2005:2}, $(O(\log h))^{hk/2}\cdot n$~\cite{AlonGutner2007} & $O(k^{c})$~\cite{AlonGutner2008} \tabularnewline & & \tabularnewline $K_{h}$-topological-minor-free & $(O(h))^{hk}\cdot n$~\cite{AlonGutner2007} & $O(k^{c})$~\cite{AlonGutner2008}\tabularnewline & & \tabularnewline $d$-degenerate & $k^{O(dk)}n$~\cite{AlonGutner2007} & $k^{O(dk)}$~\cite{AlonGutner2007}, $O((d+2)^{2(d+2)}\cdot k^{2(d+1)^{2}})^{\dagger}$\tabularnewline & & \tabularnewline $K_{i,j}$-free & $O(n^{i+O(1)})^{\dagger}$ & $O((j+1)^{2(i+1)}k^{2i^{2}})^{\dagger}$\tabularnewline & & \tabularnewline \hline \hline \end{tabular} \par\end{centering} \caption{\label{tab:KnownResults}Some FPT and kernelization results for $k$\noun{-Dominating Set.} Results proved in this paper are marked with a \noun{$\dagger$.}} \end{table} \paragraph{Our Results.} We show that for any fixed $i,j\ge1$, the \noun{$k$-Dominating Set} problem has a polynomial kernel on graphs that do not have~$K_{i,j}$ (a complete bipartite graph with the two parts having~$i$ and~$j$ vertices) as a subgraph.
For input graph~$G$ and parameter~$k$, the size of the kernel is bounded by~$k^{c}$ where~$c$ is a constant that depends only on~$i$ and~$j$. A graph~$G$ is said to be $d$\emph{-degenerate} if every subgraph of~$G$ has a vertex of degree at most~$d$. Since a $d$-degenerate graph does not have~$K_{d+1,d+1}$ as a subgraph, it follows that the \noun{$k$-Dominating Set} problem has a polynomial kernel on graphs of bounded degeneracy. This settles a question posed by Alon and Gutner in~\cite{AlonGutner2008}. We also provide a slightly simpler and smaller kernel for the version where we want the \noun{$k$-Dominating Set} to be independent as well. Note that except for $d$-degenerate graphs, all the other graph classes in \mbox{Table~\ref{tab:KnownResults}} are minor-closed. This seems to be indicative of the state of the art --- the only other previous FPT or kernelization result for the \noun{$k$-Dominating Set} problem on a non-minor-closed class of graphs that we know of is the $O(k^{3})$ kernel and the resulting FPT algorithm for graphs that exclude triangles and $4$-cycles~\cite{RamanSaurabh2008}. In fact, this result can be modified to obtain similar bounds on graphs with no $4$-cycles (allowing triangles). Since a $4$-cycle is just $K_{2,2}$, this result follows from the main result of this paper by setting $i=j=2$. Since a $K_{h}$-topological-minor-free graph has bounded degeneracy~\cite[Proposition 3.1]{AlonGutner2008} (for a constant $h$), the class of $K_{i,j}$-free graphs is more general than the class of $K_{h}$-topological-minor-free graphs. Thus we extend the class of graphs for which the \noun{$k$-Dominating Set} problem is known to have (1)~FPT algorithms and (2)~polynomial kernels, to the class of $K_{i,j}$-free graphs. 
\paragraph{Organization of the rest of the paper.} In Section~\ref{sec:kij-free kernel}, we develop our main algorithm that runs in $O(n^{i+O(1)})$ time and constructs a kernel of size $O((j+1)^{2\left(i+1\right)}k^{2i^{2}})$ for $k$\noun{-Dominating Set} on $K_{i,j}$-free graphs, for fixed $j\ge i\ge2$. As a corollary we obtain, in Section~\ref{sec:kernel_d_degenerate}, a polynomial kernel for $d$-degenerate graphs, with running time $O(n^{O(d)})$ and kernel size $O((d+2)^{2(d+2)}k^{2(d+1)^{2}})$. In Section~\ref{sub:faster_d_degenerate} we describe an improvement to the above algorithm that applies to $d$-degenerate input graphs which yields a kernel of the same size as above and runs in time $O(2^{d}dn^{2})$. In Section~\ref{sec:indep_dom_set_ij} we describe a modification of the algorithm in Section~\ref{sec:kij-free kernel} that constructs a polynomial kernel for the $k$-\noun{Independent Dominating Set} problem on $K_{i,j}$-free graphs. This kernel has $O(jk^{i})$ vertices, and so implies a kernel of size $O((d+1)k^{d+1})$ for this problem on $d$-degenerate graphs. In Section~\ref{sec:Conclusion} we state our conclusions and list some open problems. \paragraph{Notation.} All the graphs in this paper are finite, undirected and simple. In general we follow the graph terminology from~\cite{Diestel2000}. We let $V(G)$ and $E(G)$ denote, respectively, the vertex and edge sets of a graph $G$. The \emph{open-neighborhood} of a vertex $v$ in a graph $G$, denoted $N(v)$, is the set of all vertices that are adjacent to $v$ in $G$. A \emph{$k$-dominating set} of graph $G$ is a vertex-subset $S$ of size at most $k$ such that for each $u\in V(G)\setminus S$ there exists $v\in S$ such that $\{u,v\}\in E(G)$. Given a graph $G$ and $A,B\subseteq V(G)$, we say that $A$ dominates $B$ if every vertex in $B\setminus A$ is adjacent in $G$ to some vertex in $A$. 
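To make the domination conventions above concrete, here is a small Python sketch; the adjacency-dictionary representation and the function names are ours, and the brute-force search is for illustration only (the problem is $W[2]$-hard in general, so no efficient exact search is expected).

```python
from itertools import combinations

def dominates(adj, A, B):
    """Return True if A dominates B, i.e. every vertex of B not in A
    has a neighbor in A.  adj maps each vertex to its neighbor set."""
    A = set(A)
    return all(adj[u] & A for u in set(B) - A)

def has_k_dominating_set(adj, k):
    """Brute-force test for a dominating set of size at most k
    (exponential time; for illustration only)."""
    V = list(adj)
    return any(dominates(adj, S, V)
               for r in range(k + 1)
               for S in combinations(V, r))

# On a 4-cycle, one vertex dominates only itself and its two neighbors,
# while two opposite vertices dominate everything.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
assert not has_k_dominating_set(C4, 1)
assert has_k_dominating_set(C4, 2)
```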
\section{\label{sec:kij-free kernel} A Polynomial Kernel for $K_{i,j}$-free Graphs} In this section we consider the parameterized $k$\noun{-Dominating Set} problem on graphs that do not have $K_{i,j}$ as a subgraph, for fixed $j\ge i\ge1$. It is easy to see that the problem has a linear kernel when $i=1$ and $j\ge1$, so we consider the cases $j\ge i\ge2$. We solve a more general problem, namely the \noun{rwb-Dominating Set} problem, defined as follows: Given a graph $G$ whose vertex set $V$ is partitioned into $R_{G},W_{G}$, and $B_{G}$ (colored red, white, and black, respectively) and a non-negative integer parameter $k$, is there a subset $S\subseteq V$ of size at most $k$ such that $R_{G}\subseteq S$ and $S$ dominates $B_{G}$? We call such an $S$ an \emph{rwb-dominating} set of $G$, and such a graph an \emph{rwb-graph}. Intuitively, the vertices colored red are those that the reduction rules commit to the $k$-dominating set $D$ that we are trying to construct. In particular, if there is a $k$-dominating set in the graph, there will be one that contains all the red vertices. White vertices are those that have already been dominated. Clearly all neighbors of red vertices are white, but our reduction rules color some vertices white even if they have no red neighbors (at that point). These are vertices that will be dominated by one of some constant number of vertices identified by the reduction rules. See reduction rule 2 for more details. Black vertices are those that are yet to be dominated. It is easy to see that if we start with a general graph $G$ and color all its vertices black to obtain an rwb-graph $G'$, then $G$ has a dominating set of size at most $k$ if and only if $G'$ has an rwb-dominating set of size at most $k$. We first describe an algorithm that takes as input an rwb-graph $G$ on $n$ vertices and a positive integer $k$, and runs in $O(n^{i+O(1)})$ time. 
The algorithm either finds that $G$ does not have any rwb-dominating set of size at most $k$, or it constructs an instance $(G',k')$ on $O((j+1)^{i+1}k^{i^{2}})$ vertices such that~$G$ has an rwb-dominating set of size at most $k$ if and only if $G'$ has an rwb-dominating set of size at most $k'$. The algorithm applies a sequence of reduction rules in a specified order. The input and output of each reduction rule are rwb-graphs. \begin{definition} \label{def:reduced_graph}We say that graph $G$ is \bem{reduced} with respect to a reduction rule if an application of the rule to $G$ does not change $G$. \end{definition} Each reduction rule satisfies the following correctness condition and preserves the invariants stated below: \begin{definition} \label{def:rule_correctness_ij}\emph{(Correctness)} A reduction rule $R$ is said to be \bem{correct} if the following condition holds: if $(G',k')$ is the instance obtained from $(G,k)$ by one application of rule $R$ then $G'$ has an rwb-dominating set $D'$ of size $k'$ if and only if $G$ has an rwb-dominating set $D$ of size $k$. % \footnote{\label{fn:k-no-change}Note, however, that none of our reduction rules changes the value of $k$, and so $k'=k$ for every one of these rules.% } \end{definition} \label{rem:reduction_rule_properties_ij}\textbf{Invariants}: \begin{enumerate} \item \label{rem:no_new_kij}None of the reduction rules introduces a $K_{i,j}$ into a graph. \item \label{rem:red_nbrs_white_ij}In the rwb-graphs constructed by the algorithm, red vertices have all white neighbors. \item \label{rem:rules_no_regress_ij}Let $R$ be any reduction rule, and let $R'$ be a rule that precedes $R$ in the given order. If $G$ is a graph that is reduced with respect to $R'$ and $G'$ is a graph obtained by applying $R$ to $G$, then $G'$ is reduced with respect to $R'$. 
\end{enumerate} \subsection{\label{sub:rules_ij}The reduction rules and the kernelization algorithm} The kernelization algorithm assumes that the input graph is an rwb-graph. It applies the following rules exhaustively in the {\em given order}. Each rule is repeatedly applied till it causes no changes to the graph and then the next rule is applied. To make it easier to present the reduction rules and the arguments of their correctness, we use a couple of notational conventions in this section. For each rule below, $G$ denotes the graph on which the rule is applied, and $G'$ the graph that results. Further, $D$ and $D'$ are as in Definition~\ref{def:rule_correctness_ij}: $D$ is an rwb-dominating set of size $k$ of $G$, and $D'$ an rwb-dominating set of $G'$ of size $k'$. $^{\ref{fn:k-no-change}}$ \begin{rrule}\label{rul:one_ij}Color all isolated black vertices of $G$ red. \end{rrule} Rule~\ref{rul:one_ij} is correct as the only way to dominate isolated black vertices is by picking them in the proposed rwb-dominating set. \begin{rrule}\label{rul:two_ij} For $p=1,2,\ldots,i-2$, in this order, apply Rule $2.p$ repeatedly till it no longer causes any changes in the graph. \textbf{Rule}~$\mathbf{2.p}$ Let $b=jk$ if $p=1$, and $b=jk^{p}+k^{p-1}+k^{p-2}+\cdots+k$ if $2\le p\le i-2$. If a set of $(i-p)$ vertices $U=\{u_{1},u_{2},\ldots,u_{i-p}\}$, none of which is red, has more than $b$ common black neighbors, then let $B$ be this set of neighbors. \begin{enumerate} \item Color all the vertices in $B$ white. \item Add to the graph $(i-p)$ new (gadget) vertices $X=\{x_{1},x_{2},\ldots,x_{i-p}\}$ and all the edges $\{u,x\};u\in U,x\in X$, as in Figure~\ref{fig:Rule_2_ij}. \item Color all the vertices in $X$ black. 
\end{enumerate} \end{rrule} \begin{figure} \includegraphics[bb=-350bp -5bp 770bp 466bp,scale=0.22]{rule2_ij} \caption{\label{fig:Rule_2_ij}Rule~\ref{rul:two_ij}} \end{figure} \begin{numclaim}\label{cla:rule2p_forces_one_ij}Consider the application of Rule~$2.p,1\le p\le i-2$. If $U$ is a set of vertices of $G$ that satisfies the condition in Rule $2.p$, then at least one vertex in $U$ must be in any subset of $V(G)$ of size at most $k$ that dominates $B$. \end{numclaim} \begin{proof} We give a proof when $p=1$. The proof for larger values of $p$ is along similar lines by reducing it to the case for smaller values of $p$ as in the proof of Claim~\ref{cla:forced_red_single_ij} below. When $p=1$, suppose that there is a rwb-dominating set $D$ of $G$ of size at most $k$ that does not contain any vertex of $U$. Since $U$ has more than $b=jk$ common black neighbors, there is a vertex in $D$ that dominates at least $j+1$ common black neighbors of $U$ (possibly including itself). That vertex along with $U$ forms a $K_{i,j}$ in $G$, contradicting either the property of the input graph or the first invariant for the rules. \qed \end{proof} \begin{lemma} \label{lem:rule2_correct_ij} Rule~$2.p$ is correct for $1\le p\le i-2$. \end{lemma} \begin{proof} If $G$ has an rwb-dominating set $D$ of size $k$, then $D\cap U\ne\emptyset$ by Claim~\ref{cla:rule2p_forces_one_ij}. So $D':=D$ is an rwb-dominating set of $G'$, since $D\cap U$ dominates $X$. For the other direction, assume that $D'$ exists. If $D'\cap U=\emptyset$ then since $D'$ dominates $X$ and $X$ is independent, $X\subseteq D'$, and so set $D:=D'\setminus X\cup U$. If $D'\cap X=\emptyset$ then since $D'$ dominates $X$, $D'\cap U\ne\emptyset$, and so set $D:=D'$. 
If $D'\cap U\ne\emptyset$ and $D'\cap X\ne\emptyset$ then pick an arbitrary vertex $b\in B$ and set $D:=D'\setminus X\cup\{b\}$.\qed \end{proof} \begin{rrule}\label{rul:three_ij}If a black or white vertex $u$ has more than $jk^{i-1}+k^{i-2}+\cdots+k^{2}+k$ black neighbors, then color $u$ red and color all the black neighbors of $u$ white. \end{rrule} \begin{numclaim}\label{cla:forced_red_single_ij} Let $G$ be reduced with respect to Rule~\ref{rul:one_ij} and Rules~$2.1$ to~$2.(i-2)$. If a black or white vertex $u$ of $G$ has more than $h=jk^{i-1}+k^{i-2}+\cdots+k^{2}+k$ black neighbors (let this set of neighbors be $B$), then $u$ must be in any subset of $V(G)$ of size at most $k$ that dominates $B$. \end{numclaim} \begin{proof} Let $S\subseteq V(G)$ be a set of size at most $k$ that dominates $B$. If $S$ does not contain $u$, then there is a $v\in S$ that dominates at least $(h/k)+1$ of the vertices in $B$. The vertex $v$ is not red (because of the second invariant), and $u,v$ have $h/k>jk^{i-2}+k^{i-3}+\cdots+1$ common black neighbors, a contradiction to the fact that $G$ is reduced with respect to Rule~$2.(i-2)$. \qed \end{proof} This proves the correctness of Rule~\ref{rul:three_ij} on graphs reduced with respect to Rule~\ref{rul:one_ij} and Rules~$2.1$ to~$2.(i-2)$. \begin{rrule}\label{rul:four_ij}If a white vertex \emph{$u$} is adjacent to at most one black vertex, then delete $u$ and apply Rule~\ref{rul:one_ij}. \end{rrule} It is easy to see that Rule~\ref{rul:four_ij} is correct, since if $u$ has no black neighbor in $G$ then $u$ has no role in dominating $B_{G}$; if $u$ has a single black neighbor $v$ then we can replace $u$ with $v$ in $D'$. \begin{rrule}\label{rul:five_ij}If there is a white vertex $u$ and a white or black vertex $v$ such that $N(u)\cap B_{G}\subseteq N(v)\cap B_{G}$, then delete $u$ and apply Rule~\ref{rul:one_ij}. \end{rrule} The correctness of this rule follows from the fact that if $D$ chooses $u$, then we can choose $v$ in $D'$. 
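The deletions performed by Rules~4 and~5 can be sketched in a few lines of Python; the adjacency-dictionary representation, the color map with values \texttt{"r"}/\texttt{"w"}/\texttt{"b"}, and the eager Rule~1 follow-up after each deletion are our own conventions, not prescribed by the paper.

```python
def black_nbrs(adj, color, u):
    """The black neighbors of u."""
    return {v for v in adj[u] if color[v] == "b"}

def apply_rules_4_and_5(adj, color):
    """One exhaustive pass of Rules 4 and 5.
    Rule 4: delete a white vertex with at most one black neighbor.
    Rule 5: delete a white vertex u if some other white or black vertex v
    has a black neighborhood containing that of u.
    After each deletion, Rule 1 recolors isolated black vertices red.
    Returns True if any vertex was deleted."""
    changed = False
    for u in [v for v in adj if color[v] == "w"]:
        Bu = black_nbrs(adj, color, u)
        if len(Bu) <= 1 or any(
                v != u and color[v] in "wb" and Bu <= black_nbrs(adj, color, v)
                for v in adj):
            for v in adj[u]:
                adj[v].discard(u)
            del adj[u], color[u]
            changed = True
            for v in adj:                      # Rule 1 follow-up
                if color[v] == "b" and not adj[v]:
                    color[v] = "r"
    return changed

# Two black vertices; white vertex 10 has one black neighbor (Rule 4),
# and whites 11, 12 have identical black neighborhoods (Rule 5 deletes one).
adj = {1: {10, 11, 12}, 2: {11, 12}, 10: {1}, 11: {1, 2}, 12: {1, 2}}
color = {1: "b", 2: "b", 10: "w", 11: "w", 12: "w"}
apply_rules_4_and_5(adj, color)
assert set(adj) == {1, 2, 12}
```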
\begin{rrule}\label{rul:six_ij}If $|R_{G}|>k$ or $|B_{G}|>jk^{i}+k^{i-1}+k^{i-2}+\cdots+k^{2}$ then output {}``NO''. \end{rrule} The correctness of the rule when $|R_{G}|>k$ is obvious as the proposed dominating set we construct should contain all of $R_{G}$. Note that in a graph $G$ reduced with respect to Rules~1 to 5, no white or black vertex has more than $jk^{i-1}+k^{i-2}+\cdots+k$ black neighbors, or else Rule~\ref{rul:three_ij} would have applied, contradicting the third invariant. Hence $k$ of these vertices can dominate at most $jk^{i}+k^{i-1}+k^{i-2}+\cdots+k^{2}$ black vertices, so if $|B_{G}|>jk^{i}+k^{i-1}+k^{i-2}+\cdots+k^{2}$, the algorithm is correct in saying {}``NO''. \subsection{Algorithm correctness and kernel size\label{sub:kernel_algorithm_correctness_ij}} The following claim, which gives the correctness of the kernelization algorithm, follows from the correctness of the reduction rules. \begin{numclaim}\label{cla:alg_preserve_k_ij}Let $G$ be the input rwb-graph and $H$ the rwb-graph constructed by the algorithm after applying all the reduction rules. Then $G$ has an rwb-dominating set of size at most $k$ if and only if there is an rwb-dominating set of size at most $k$ in $H$. \end{numclaim} Now we move on to prove a polynomial bound on the size of the reduced instance. \begin{lemma} \label{lem:algorithm_correctness_ij} Starting with a $K_{i,j}$-free rwb-graph $G$ as input, if the kernelization algorithm says {}``NO'' then $G$ does not have an rwb-dominating set of size at most $k$. Otherwise, if the algorithm outputs the rwb-graph $H$, then $|V(H)|=O((j+1)^{i+1}k^{i^{2}})$. \end{lemma} \begin{proof} The correctness of Rule~\ref{rul:six_ij} establishes the claim if the algorithm says {}``NO''. Now suppose the algorithm outputs $H$ without saying {}``NO''. The same rule establishes that $|R_{H}|\le k$ and $b=|B_{H}|\le jk^{i}+k^{i-1}+\cdots+k\le(j+1)k^{i}$. Now we bound $|W_{H}|$. 
Note that no two white vertices have identical black neighborhoods, or else Rule~\ref{rul:five_ij} would have applied. Also each white vertex has at least two black neighbors, or else Rule~\ref{rul:four_ij} would have applied. Hence the number of white vertices that have fewer than $i$ black neighbors is at most ${b \choose 2}+{b \choose 3}+\cdots+{b \choose i-1}\le2b^{i-1}$. No set of $i$ black vertices has more than $(j-1)$ common white neighbors, or else these form a $K_{i,j}$. Hence the number of white vertices that have $i$ or more black neighbors is at most ${b \choose i}(j-1)\le(j-1)b^{i}$. The bound in the lemma follows.\qed \end{proof} The algorithm can be implemented in $O(n^{i+O(1)})$ time, as the main Rule~$2$ can be applied by running through all subsets of $V(G)$ of size $i-p$, for $p$ ranging from~$1$ to $i-2$. Thus, we have \begin{lemma} \label{lem:kij_poly_kernel_rwb} For any fixed $j\ge i\ge1$, the \noun{rwb-Dominating Set} problem (with parameter $k$) on $K_{i,j}$-free graphs has a polynomial kernel with $O((j+1)^{i+1}k^{i^{2}})$ vertices. \end{lemma} To obtain a polynomial kernel for the \noun{$k$-Dominating Set} problem on $K_{i,j}$-free graphs, we first color all the vertices black and use Lemma~\ref{lem:kij_poly_kernel_rwb} on this \noun{rwb-Dominating Set} problem instance. To transform the reduced colored instance $H$ to an instance of (the uncolored) \noun{$k$-Dominating Set}, we can start by deleting all the red vertices, since they have no black neighbors. But we need to capture the fact that the white vertices need not be dominated. This can be done by, for example, adding, for every vertex $x$ in $W_{H}$ of the reduced graph $H$, a new vertex $v_{x}$ adjacent to $x$, and attaching $k+|W_{H}|+1$ separate pendant vertices to each of the vertices $v_{x}$. 
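The gadget construction just described can be sketched as follows; integer vertex names and the helper's name are our own assumptions, made so that fresh gadget vertices can be generated by counting upward.

```python
from itertools import count

def uncolor(adj, color, k):
    """Turn a reduced rwb-instance (H, k) into an uncolored instance of
    k'-Dominating Set, following the construction described in the text.
    Modifies adj in place and returns the new parameter k' = k + |W_H|.
    Assumes integer vertex names."""
    # Red vertices have no black neighbors in the reduced graph; delete them.
    for r in [v for v in adj if color[v] == "r"]:
        for u in adj[r]:
            adj[u].discard(r)
        del adj[r]
    whites = [v for v in adj if color[v] == "w"]
    kp = k + len(whites)
    fresh = count(max(adj, default=0) + 1)
    for x in whites:
        vx = next(fresh)                  # v_x, adjacent to x
        adj[vx] = {x}
        adj[x].add(vx)
        for _ in range(kp + 1):           # k + |W_H| + 1 pendants force v_x
            p = next(fresh)               # into any small dominating set
            adj[p] = {vx}
            adj[vx].add(p)
    return kp

# Path r-w-b with k = 1: the red endpoint is deleted, the white vertex
# gets a gadget neighbor v_x with k + |W_H| + 1 = 3 pendants, and k' = 2.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
color = {0: "r", 1: "w", 2: "b"}
assert uncolor(adj, color, 1) == 2
assert 0 not in adj and len(adj) == 6
```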
It is easy to see that the new graph does not have a $K_{i,j}$, $j\geq i\geq2$, if $H$ does not have one and that $H$ has at most $k$ black or white vertices dominating $B_{H}$ if and only if the resulting (uncolored) graph has a dominating set of size at most $|W_{H}|+k$. Thus after reducing to the uncolored version, $k$ becomes $k+|W_{H}|$ and the number of vertices increases by $(k+|W_{H}|+2)\cdot|W_{H}|$. Hence by Lemma~\ref{lem:kij_poly_kernel_rwb}, we have \begin{theorem} \label{thm:kij_poly_kernel} For any fixed $j\ge i\ge1$, the \noun{$k$-Dominating Set} problem on $K_{i,j}$-free graphs has a polynomial kernel with $O((j+1)^{2(i+1)}k^{2i^{2}})$ vertices. \end{theorem} \section{\label{sec:kernel_d_degenerate}A Polynomial Kernel for $d$-degenerate Graphs} A $d$-degenerate graph does not contain $K_{d+1,d+1}$ as a subgraph, and so the kernelization algorithm of the previous section can be applied to a $d$-degenerate graph, setting $i=j=d+1$. The algorithm runs in time $O((d+1)^{2}n^{d+O(1)})$ and constructs a kernel with~$O((d+2)^{2(d+2)}\cdot k^{2(d+1)^{2}})$ vertices. Since a $d$-degenerate graph on $v$ vertices has at most $dv$ edges, we have: \begin{corollary} \label{cor:d_degenerate_poly_kernel}The \noun{$k$-Dominating Set} problem on $d$-degenerate graphs has a kernel on~$O((d+2)^{2(d+2)}\cdot k^{2(d+1)^{2}})$ vertices and edges. \end{corollary} Corollary~\ref{cor:d_degenerate_poly_kernel} settles an open problem posed by Alon and Gutner in~\cite{AlonGutner2008}. \subsection{Improving the running time\label{sub:faster_d_degenerate}} We describe a modification of our algorithm to $d$-degenerate graphs that makes use of the following well known property of $d$-degenerate graphs, to reduce the running time to $O(2^{d}\cdot dn^{2})$; the bound on the kernel size remains the same. \begin{fact}\label{fac:degenerate_ordering}\cite[Theorem 2.10]{FranceschiniLuccioPagli2006} Let $G$ be a $d$-degenerate graph on $n$ vertices. 
Then one can compute, in $O(dn)$ time, an ordering $v_{1},v_{2},\ldots,v_{n}$ of the vertices of $G$ such that for $1\le i\le n$, $v_{i}$ has at most $d$ neighbors in the subgraph of $G$ induced on $\{v_{i+1},\ldots,v_{n}\}$.\end{fact} The modification to the algorithm pertains to the way rules~$2.1$ to~$2.(d-1)$ are implemented: the rest of the algorithm remains the same. In implementing Rule~$2.p,1\le p\le(d-1)$, instead of checking each $(d-p+1)$-subset of vertices in the graph to see if it satisfies the condition in the rule, we make use of Fact~\ref{fac:degenerate_ordering} to quickly find such a set of vertices, if it exists. Let $G$ be the graph instance on $n$ vertices on which Rule $2.p$ is to be applied. First we delete, temporarily, all the red vertices in $G$. We then find an ordering $v_{1},v_{2},\ldots,v_{n}$ of the kind described in the above fact, of all the remaining vertices in $G$. Let $U$ and $B$ be as defined in the rule. The first vertex $v_{l}$ in $U\cup B$ that appears in the ordering has to be from $B$: a vertex of $U$ appearing first would have all of its more than $d$ common black neighbors in $B$ later in the ordering, contradicting the property of the ordering. The vertex $v_{l}$ then has, among its at most $d$ later neighbors, the set $U$ of size $d-p+1$, whose common neighborhood contains $B$. We use this fact to look for such a pair $(U,B)$ and exhaustively apply Rule~$2.p$ to $G$. See Algorithm~\ref{alg:faster_d_degenerate} for pseudocode of the algorithm. We then add back the red vertices that we deleted prior to this step, along with all their edges to the rest of the graph. 
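An ordering with the property in Fact~\ref{fac:degenerate_ordering} can be obtained by repeatedly removing a vertex of minimum degree. The sketch below uses a heap with lazy deletion and so runs in $O((n+m)\log n)$ rather than the linear time cited above; the representation and function name are our own.

```python
import heapq

def degeneracy_ordering(adj):
    """Return (order, d): an ordering in which every vertex has at most d
    neighbors appearing later, where d is the degeneracy of the graph.
    Uses lazy deletion from a min-heap of (degree, vertex) pairs."""
    deg = {v: len(adj[v]) for v in adj}
    heap = [(deg[v], v) for v in adj]
    heapq.heapify(heap)
    removed, order, d = set(), [], 0
    while heap:
        dv, v = heapq.heappop(heap)
        if v in removed or dv != deg[v]:
            continue                      # stale heap entry
        removed.add(v)
        order.append(v)
        d = max(d, dv)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))
    return order, d

# A 4-cycle has degeneracy 2: every vertex has at most 2 later neighbors.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
order, d = degeneracy_ordering(C4)
pos = {v: i for i, v in enumerate(order)}
assert d == 2
assert all(sum(pos[u] > pos[v] for u in C4[v]) <= d for v in C4)
```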
\begin{algorithm} \texttt{\small for $l:=1$ to $n$}{\small \par} \texttt{\small do}{\small \par} \texttt{\small ~~if $v_{l}$ is black and its degree in $G[v_{l+1},\ldots,v_{n}]$ is at least $d-p+1$}{\small \par} \texttt{\small ~~then}{\small \par} \texttt{\small ~~~~Find the neighborhood $N$ of $v_{l}$ in $G[v_{l+1},\ldots,v_{n}]$}{\small \par} \texttt{\small ~~~~for each $(d-p+1)$-subset $S$ of $N$}{\small \par} \texttt{\small ~~~~do}{\small \par} \texttt{\small ~~~~~~if $S$ has more than $(d+1)k^{p}+k^{p-1}+\cdots+k$}{\small \par} \texttt{\small ~~~~~~common black neighbors in $G$}{\small \par} \texttt{\small ~~~~~~then}{\small \par} \texttt{\small ~~~~~~~~Apply the three steps of Rule $2.p$, taking $S$ as $U$}{\small \par} \texttt{\small ~~~~~~endif}{\small \par} \texttt{\small ~~~~done}{\small \par} \texttt{\small ~~endif}{\small \par} \texttt{\small done}{\small \par} \caption{\label{alg:faster_d_degenerate}Faster implementation of Rule~$2.p$ in $d$-degenerate graphs.} \end{algorithm} As $|N|\le d$, the inner \emph{for} loop is executed at most ${d \choose p-1}$ times for each iteration of the outer loop. Each of the individual steps in the algorithm can be done in $O(dn)$ time, and so Rule~$2.p$ can be applied in $O(dn\sum_{l=1}^{n}{d \choose p-1})$ time. All the rules $2.p$ can therefore be applied in~$O(dn\sum_{l=1}^{n}\sum_{p=1}^{d-1}{d \choose p-1})=O(2^{d}\cdot dn^{2})$ time. Thus we have: \begin{theorem} \label{thm:kij_degenerate_poly_kernel}For any fixed $d\ge1$, the \noun{$k$-Dominating Set} problem on $d$-degenerate graphs has a kernel on $O((d+2)^{2(d+2)}\cdot k^{2(d+1)^{2}})$ vertices and edges, and this kernel can be found in $O(2^{d}\cdot dn^{2})$ time for an input graph on $n$ vertices. 
\end{theorem} \section{A polynomial kernel for Independent Dominating Set on $K_{i,j}$-free graphs} \label{sec:indep_dom_set_ij} The $k$\noun{-Independent Dominating Set} problem asks, for a graph $G$ and a positive integer $k$ given as inputs, whether $G$ has a dominating set $S$ of size at most $k$ such that $S$ is an independent set (i.e.\ no two vertices in $S$ are adjacent). This problem is known to be NP-hard for general graphs~\cite{GareyJohnson1979}, and the problem parameterized by $k$ is $W[2]$-complete~\cite{DowneyFellows1999}. Using a modified version of the set of reduction rules in Section~\ref{sec:kij-free kernel} we show that the $k$-\noun{Independent Dominating Set} problem has a polynomial kernel in $K_{i,j}$-free graphs for $j\ge i\ge1$. For $i=1,j\ge1$ we can easily obtain trivial kernels as before, and for $i=2,j\ge2$ a simplified version of the following algorithm gives a kernel of size $O(j^{3}k^{4})$. \subsection{The reduction rules} Rule~$1$ is the same as for the \noun{Dominating Set} kernel for $K_{i,j}$-free graphs (Section~\ref{sub:rules_ij}). Rules~$2.1$ to $2.(i-2)$ and Rule~$3$ are modified to make use of the fact that we are looking for a dominating set that is independent. A vertex $u$ that is made white will never be part of the independent dominating set $D$ that the algorithm seeks to construct, since $u$ is adjacent to some vertex $v\in D$. So a vertex can be deleted as soon as it is made white. Also, Rules~$1$, $2.1,\ldots,2.(i-2)$, and~$3$ are the only rules; Rules~$4$ and~$5$ from that section do not apply, for the same reason as above. The modified rules ensure that no vertex is colored white, and so they work on \emph{rb-graphs}: graphs whose vertex set is partitioned into red and black vertices. 
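For concreteness, a quick Python sketch of the property being sought (the representation and the function name are ours):

```python
def is_independent_dominating(adj, S, k):
    """Check that S is an independent dominating set of size at most k:
    no two vertices of S are adjacent, and S dominates every vertex."""
    S = set(S)
    independent = all(adj[u].isdisjoint(S - {u}) for u in S)
    dominating = all(u in S or adj[u] & S for u in adj)
    return len(S) <= k and independent and dominating

# In the star K_{1,3}, both the center alone and the three leaves together
# are independent dominating sets; center plus a leaf is not independent.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
assert is_independent_dominating(star, {0}, 1)
assert is_independent_dominating(star, {1, 2, 3}, 3)
assert not is_independent_dominating(star, {0, 1}, 2)
```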
Using these modified rules, the bounds on $|R_{H}|$ and $|B_{H}|$ in the proof of Lemma~\ref{lem:algorithm_correctness_ij}, and the fact that there are no white vertices, we have \begin{theorem} \label{thm:kij_ind_dom_poly_kernel}For any fixed $j\ge i\ge1$, the \noun{$k$-Independent Dominating Set} problem on $K_{i,j}$-free graphs has a polynomial kernel with $O(jk^{i})$ vertices. \end{theorem} For $d$-degenerate graphs, we have $i=j=d+1$, and therefore we have: \begin{corollary} \label{cor:degenerate_ind_dom_poly_kernel}For any fixed $d\ge1$, the \noun{$k$-Independent Dominating Set} problem on $d$-degenerate graphs has a polynomial kernel with $O((d+1)k^{(d+1)})$ vertices. \end{corollary} \section{\label{sec:Conclusion}Conclusions and Future Work} In this paper, we presented a polynomial kernel for the $k$\noun{-Dominating Set} problem on graphs that do not have $K_{i,j}$ as a subgraph, for any fixed $j\ge i\ge1$. We used this to show that the $k$\noun{-Dominating Set} problem has a polynomial kernel of size $O((d+2)^{2(d+2)}\cdot k^{2(d+1)^{2}})$ on graphs of bounded degeneracy, thereby settling an open problem from~\cite{AlonGutner2008}. Our algorithm also yielded a slightly simpler and smaller kernel for the $k$\noun{-Independent Dominating Set} problem on $K_{i,j}$-free and $d$-degenerate graphs. These algorithms are based on simple reduction rules that look at the common neighborhoods of sets of vertices. Dom et al.~\cite{DomLokshtanovSaurabh2008} have shown, by extending the kernel lower-bound techniques of Bodlaender et al.~\cite{BodlaenderDowneyFellowsHermelin2008}, that the $k$\noun{-Dominating Set} problem on $d$-degenerate graphs does not have a kernel of size polynomial in \emph{both} $d$ and $k$ unless the Polynomial Hierarchy collapses to the third level. This shows that the kernel size that we have obtained for this class of graphs cannot be significantly improved. Many interesting classes of graphs are of bounded degeneracy. 
These include all nontrivial minor-closed families of graphs such as planar graphs, graphs of bounded genus, graphs of bounded treewidth, and graphs excluding a fixed minor, and some non-minor-closed families such as graphs of bounded degree. Graphs of degeneracy $d$ are $K_{d+1,d+1}$-free. Since any $K_{i,j};j\ge i\ge2$ contains a~$4$-cycle, every graph of girth~$5$ is $K_{i,j}$-free. From~\cite[Theorem 1]{Sachs1963}, there exist graphs of girth~$5$ and arbitrarily large degeneracy. Hence $K_{i,j}$-free graphs are strictly more general than graphs of bounded degeneracy. To the best of our knowledge, $K_{i,j}$-free graphs form the largest class of graphs for which FPT algorithms and polynomial kernels are known for the dominating set problem variants discussed in this paper. One interesting direction of future work is to try to demonstrate kernels of size $f\left(d\right)\cdot k^{c}$ for the $k$\noun{-Dominating Set} problem on $d$-degenerate graphs, where $c$ is independent of $d$. Note that the result of Dom et al.\ mentioned above does \emph{not} suggest that such kernels are unlikely. Another challenge is to improve the running times of the kernelization algorithms: to remove the exponential dependence on $d$ of the running time for $d$-degenerate graphs, and to get a running time of the form $O(n^{c})$ for $K_{i,j}$-free graphs where $c$ is independent of $i$ and $j$. \paragraph{Acknowledgments.} We thank Aravind Natarajan for pointing out the connection between $K_{i,j}$-free and $d$-degenerate graphs, Saket Saurabh and the other authors of~\cite{DomLokshtanovSaurabh2008} for sharing with us the lower-bound result mentioned in their paper, and Shai Gutner for his comments on an earlier draft of this paper. \bibliographystyle{acm}
\section{Introduction}\label{S:introduction} In this article we obtain new information about nodal (i.e. zero) sets of high frequency eigenfunctions and eigenvalue spacings for semi-classical Schr\"odinger operators that are small radial perturbations of the isotropic Harmonic Oscillator: \begin{equation}\label{E:operator-def} \Op_\hbar(\varepsilon):= \HO_\hbar + \varepsilon \hbar V(\abs{x}^2),\quad \HO_\hbar:=-\frac{\hbar^2}{2}\Delta_{{\mathbb R}^d} + \frac{\abs{x}^2}{2},\qquad d\geq 2. \end{equation} See \eqref{E:potential-assumptions} for the assumptions we place on $V.$ Although much is known about nodal sets of eigenfunctions of the Laplacian on a compact manifold, comparatively little has been proved about nodal sets of eigenfunctions of Schr\"odinger operators $-\tfrac{\hbar^2}{2}\Delta+V(x)$, even on ${\mathbb R}^d.$ When $V(x)\ensuremath{\rightarrow} +\infty$ as $\abs{x}\ensuremath{\rightarrow} \infty,$ such operators have a discrete spectrum and a complete eigenbasis for $L^2({\mathbb R}^d,dx).$ Fixing an energy $E>0,$ and letting $\hbar\ensuremath{\rightarrow} 0$, any energy $E$ eigenfunction $\psi_{\hbar, E}$ of $-\tfrac{\hbar^2}{2}\Delta+V(x)$ is rapidly oscillating (with frequency $\hbar^{-1}$) in the classically allowed region \[\mathcal A_E:=\setst{x\in {\mathbb R}^d}{V(x)\leq E}\] and exponentially decaying in the classically forbidden region \[\mathcal F_E:=\setst{x\in {\mathbb R}^d}{V(x)>E}.\] The nodal set of $\psi_{\hbar, E}$ undergoes a qualitative change as it crosses from $\mathcal A_E$ to $\mathcal F_E.$ This transition is illustrated in Figure 1. In $\mathcal A_E,$ the eigenfunction $\psi_{\hbar,E}$ behaves much like an eigenfunction of the Laplacian. 
For instance, if $V$ is real analytic, then Jin \cite{jin2017semiclassical} proved that for any bounded open $B\subseteq \mathcal A_E$ there exist $c,C>0$ such that \begin{equation}\label{E:allowed-zeros} c\hbar^{-1}\vol(B)\leq \mathcal H^{d-1}\lr{\set{\psi_{\hbar,E}=0}\cap B}\leq C\hbar^{-1}\vol(B). \end{equation} \begin{figure} \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{Nodal8_V2.jpg} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{Nodal5.jpg} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{Nodal4.jpg} \end{subfigure} \caption{Nodal sets of energy $E$ eigenfunctions of $\HO_\hbar$ have qualitatively different behavior in $\mathcal A_E$ and $\mathcal F_E.$ On the left, the nodal set is a black line and $\mathcal A_E$ is a disk with only a quarter shown. This figure was made by Eric Heller \cite{Heller-gallery}. In the middle and right, the boundary between the white and black regions is the nodal set. The allowed region in the middle is the disk and on the right is the left half-plane. The figures in the middle and on the right are reproduced from Bies-Heller \cite{bies2002nodal}.} \end{figure} Here and throughout, $\mathcal H^k$ denotes $k-$dimensional Hausdorff measure. The same estimates were proved for compact real analytic Riemannian manifolds by Donnelly-Fefferman \cite{donnelly1988nodal} for eigenfunctions of the Laplacian. Except when $d=1,$ when $\psi_{\hbar, E}$ has no zeros in $\mathcal F_E,$ much less is known about the nodal set of $\psi_{\hbar, E}$ in $\mathcal F_E.$ Jin established that his upper bound in \eqref{E:allowed-zeros} continues to hold in the forbidden region. Aside from this, we are aware of only a few strands of prior work on the subject. 
The oldest are the articles of Hoffmann-Ostenhof \cite{hoffmann1988asymptotics2, hoffmann1988asymptotics} and Hoffmann-Ostenhof-Swetina \cite{hoffmann1986continuity, hoffmann1987asymptotics} that study nodal sets for potentials that vanish at infinity. They show that the nodal set of an eigenfunction on the sphere at infinity looks locally like the nodal set of a Hermite polynomial. There is also the paper of Canzani-Toth \cite{canzani2016nodal} about the persistence of forbidden hypersurfaces in nodal sets of Schr\"odinger eigenfunctions on a compact manifold and the articles of B\'erard-Helfer \cite{berard2014number, berard2015nodal, berard2017some} on nodal domains for eigenfunctions of the harmonic oscillator and similar operators (mainly in the allowed region). Finally, we mention the articles of Hanin-Zelditch-Zhou \cite{hanin2014nodal, hanin2017scaling}, which study the typical size of the nodal set in $\mathcal F_E$ and near the caustic $\ensuremath{\partial} \mathcal A_E=\set{\abs{x}^2=2E}$ for \textit{random} fixed energy eigenfunctions of $\HO_\hbar$. We also refer the reader to the interesting heuristic physics paper of Bies-Heller \cite{bies2002nodal}. In particular, in \cite{hanin2014nodal} it is shown that for every bounded $B\subseteq \mathcal F_E$ there exists $C>0$ depending only on the minimum and maximum distance from a point in $B$ to $\mathcal A_E$ so that \begin{equation}\label{E:avg-fobidden-zeros} \mathbb E\left[\mathcal H^{d-1}\lr{\set{\psi_{\hbar, E}=0}\cap B}\right]= C\hbar^{-1/2} \vol(B)\lr{1+O(\hbar)}. \end{equation} While the typical nodal density for $\psi_{\hbar, E}$ in $\mathcal F_E$ is therefore $\hbar^{-1/2}$, there are no matching deterministic upper and lower bounds. 
Indeed, for every bounded open $B\subseteq \mathcal F_E$ \begin{align*} \inf_{ \psi_{\hbar, E}\in \ker\lr{\HO_\hbar- E}} \mathcal H^{d-1}\lr{\set{\psi_{\hbar, E}=0}\cap B}&=0\\ \sup_{ \psi_{\hbar, E}\in \ker\lr{\HO_\hbar- E}} \mathcal H^{d-1}\lr{\set{\psi_{\hbar,E}=0}\cap B}&=C\hbar^{-1}\vol(B). \end{align*} The infimum is attained when $\psi_{\hbar,E}$ is the unique radial eigenfunction of $\HO_\hbar$ with given energy $E,$ which has no nodal set whatsoever in $\mathcal F_E,$ and the supremum is attained when $\psi_{\hbar,E}$ is any of the purely angular eigenfunctions, which are eigenfunctions of the Laplacian on $S^{d-1}$ of frequency $\approx \hbar^{-1}.$ The difference in the exponents in the various estimates above raises the question of what happens to the nodal sets of eigenfunctions for other Schr\"odinger operators. We take up this question in the present article for the small radial perturbations $\Op_\hbar(\varepsilon)$ \eqref{E:operator-def} of the harmonic oscillator. We are concerned primarily with the behavior of nodal sets on the sphere at infinity for eigenfunctions of $\Op_\hbar(\varepsilon)$ with approximately the same energy. Our main results in this direction are Theorems \ref{T:Zeros} and \ref{T:Zeros2}, which establish upper and lower bounds on the size of the nodal set of both eigenfunctions and certain quasi-modes near a fixed energy $E$. Since $\Op_\hbar(\varepsilon)$ is rotationally symmetric for all $\varepsilon,$ its eigenfunctions can be obtained by separating variables (see \eqref{E:perturbed-spectrum} and \eqref{E:perturbed-eigenfunctions}). 
The radial parts of these separation-of-variables eigenfunctions are deformations in $\varepsilon$ of the Laguerre functions \eqref{E:Laguerre}, while the angular parts are the eigenfunctions of the Laplacian on the round sphere $S^{d-1}.$ At a fixed energy $E$, all such products have the same rate of growth at infinity when $\varepsilon=0$ (see \eqref{E:HO-Efns} in \S \ref{S:HO-spec-theory}), and hence spherical harmonics of many different angular momenta may contribute to the nodal set of eigenfunctions at infinity. However, for $\varepsilon\neq 0,$ the energies $E_{\ell,n}^V(\varepsilon),$ defined in \eqref{E:perturbed-spectrum}, for different angular momenta $\ell$ will no longer be the same (Theorem \ref{T:Main}). Hence, since the rate of growth at infinity of the radial eigenfunctions is an increasing function of $E_{\ell,n}^V(\varepsilon)$ (Proposition \ref{prop:growth}), we see that the nodal sets at infinity of energy $\approx E$ eigenfunctions and quasimodes for $\Op_\hbar(\varepsilon)$ depend on the level spacings of the perturbed energies $E_{\ell,n}^V(\varepsilon)$ for various angular momenta $\ell$. We obtain precise information on these level spacings, for what we call slowly varying potentials $V$, in Theorem \ref{T:Main}, which is our main technical result. \section{Statement of Results} \noindent Theorem \ref{T:Main} concerns the eigenvalue spacings for $\Op_\hbar(\varepsilon).$ It holds for $V$ that satisfy \begin{equation}\label{E:potential-assumptions} V\in C^\infty({\mathbb R}_+, {\mathbb R}),\quad \limsup_{|x|\ensuremath{\rightarrow} \infty} |x|^{\eta}V(|x|^2) \leq C,\quad V(0)=V'(0)=0 \end{equation} for some $\eta, C>0$ and are slowly varying in the sense of Definition \ref{D:slowly-varying} below. The last assumption in \eqref{E:potential-assumptions} is only a matter of convenience since $V(0)$ (resp. $V'(0)$) can be absorbed as shifts (resp. scalings) of the spectrum of $P_\hbar(0)=\HO_\hbar$.
Since $\Op_\hbar(\varepsilon)$ is rotationally symmetric for all $\varepsilon$, its spectrum can be decomposed as a union (with multiplicity): \begin{equation*} \Spec\lr{\Op_\hbar(\varepsilon)}= \bigcup_{\ell\geq 0} \Spec\lr{\Op_{\hbar,\ell}(\varepsilon)}, \end{equation*} where $\Op_{\hbar, \ell}(\varepsilon):= \Op_{\hbar}(\varepsilon)\big|_{L_\ell^2}$ is the restriction of $\Op_{\hbar}(\varepsilon)$ to functions with fixed angular momentum: \[L_\ell^2 = L^2\lr{{\mathbb R}_+,\, r^{d-1}dr}\widehat{\otimes} \ker\lr{\Delta_{S^{d-1}}+\ell \lr{\ell + d-2}},\qquad L^2({\mathbb R}^d, dx)= \bigoplus_{\ell \geq 0} ~L_\ell^2.\] In the previous line, $\Delta_{S^{d-1}}$ is the Laplacian for the round metric on $S^{d-1},$ whose spectrum is $\set{-\ell\lr{\ell + d-2}}_{\ell \geq 0}.$ The spectrum \begin{equation}\label{E:perturbed-spectrum} \Spec\lr{\Op_{\hbar, \ell}(\varepsilon)}=\set{E_{\ell, n}^V(\varepsilon)}_{\substack{n\geq \ell,\, n\equiv \ell \text{ (mod 2)}}},\qquad E_{\ell, n}^V(0)=\hbar\lr{n+\frac{d}{2}} \end{equation} of the radial operator for each angular momentum $\ell$ is simple for small $\varepsilon$ since it is an analytic perturbation of the simple spectrum \[\Spec\lr{\Op_{\hbar, \ell}(0)}=\set{\hbar\lr{n+d/2}}_{\substack{n\geq \ell,\,\, n\equiv \ell \text{ (mod 2)}}}.\] Let us fix $E>0$ and define \begin{equation}\label{E:hbar-n} \hbar_n:=\frac{E}{n+\frac{d}{2}},\qquad n\in \mathbb N. \end{equation} At $\varepsilon=0,$ the spectra of the radial oscillators $\Op_{\hbar_n, \ell}(0)$ overlap and contain the same energy \[E=E_{\ell, n}^V(0)\] for all $\ell\leq n$ congruent to $n$ modulo $2.$ However, for small $\varepsilon$ and generic $V$, we expect the spectra of $\Op_{\hbar_n, \ell}(\varepsilon)$ to be disjoint for different values of $\ell.$ Although we do not have a proof of this fact, Theorem \ref{T:Main} implies that the eigenspaces of $\Op_{\hbar_n, \ell}(\varepsilon)$ will have bounded multiplicity uniformly in $n,\ell$ (see \eqref{E:diff-est}).
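To fix ideas, we record a standard count (included here only for orientation; it is not used in the proofs): summing the angular multiplicities over the admissible $\ell$ recovers the full degeneracy of the unperturbed eigenvalue $E$,

```latex
\dim\ker\lr{\HO_{\hbar_n}-E}
  \;=\;\sum_{\substack{0\leq \ell\leq n\\ \ell\equiv n\text{ (mod 2)}}}
     \dim\ker\lr{\Delta_{S^{d-1}}+\ell\lr{\ell+d-2}}
  \;=\;\binom{n+d-1}{d-1}.
```

For example, when $d=2$ the eigenvalue $E=\hbar_n(n+1)$ has multiplicity $n+1$. It is precisely this degeneracy that the perturbation splits across the different angular momenta $\ell$.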
Theorem \ref{T:Main} concerns the relative positions of the perturbations $E_{\ell, n}^V(\varepsilon)$ of $E$ as a function of $\ell.$ \begin{definition}\label{D:slowly-varying} Let $E,\delta>0.$ We say a potential $V\in C^\infty({\mathbb R}_+, {\mathbb R})$ is $\delta$-slowly varying in the allowed region for energy $2E$ if it satisfies \eqref{E:potential-assumptions}, the condition $\norm{V}_{L^\infty}\leq 1,$ and \begin{equation}\label{E:slowly-varying} \frac{\delta^2}{2}\leq \abs{V''(0)}\leq \delta^2\qquad \text{and}\qquad \sup_{r\in [0,\sqrt{4E} ]}\abs{\frac{V^{(k)}(r)}{k!}}\leq\delta^k \end{equation} for all $k\geq 3.$ \end{definition} \begin{theorem}\label{T:Main} There exist constants $C_1,C_2>0$ with the following property. Suppose $E>0$, $\delta\in \lr{0, (C_1E)^{-1}}$, and $V$ is $\delta$-slowly varying in the allowed region for energy $2E$. Then, for all \begin{equation}\label{E:n-ep-constraints} n\text{ s.t. }\hbar_n<1,\qquad \varepsilon\in [0, 1/5],\qquad \ell\leq n,\, \ell\equiv n \text{ (mod 2)} \end{equation} we have \begin{equation}\label{E:E-expansion} E_{\ell, n}^V(\varepsilon) = E +\varepsilon \hbar_nV''(0) \lr{\frac{E}{2}-\frac{\hbar_n d}{4}}^2 \left[3+\frac{\ell^2}{n^2} \lr{-1+ S(\ell, n,\varepsilon)} + T(n,\varepsilon) \right] + O\lr{\hbar_n^\infty}. \end{equation} The terms in \eqref{E:E-expansion} satisfy the following estimates for every $n,\ell,\varepsilon$ satisfying \eqref{E:n-ep-constraints}: \begin{align} \label{ST-est}\max \set{\abs{S(\ell, n,\varepsilon)}, \abs{T(n,\varepsilon)}} \leq C_2 \cdot \max\set{\delta,\varepsilon}. \end{align} \end{theorem} \begin{remark} Here and throughout, a quantity $A_n$ is $O\lr{ \hbar_n^{\infty} }$ if, for each $\gamma\geq1$, there exists a constant $C_{\gamma}$ such that \begin{align*} |A_{n}|\leq C_{\gamma}\hbar_n^{\gamma} \end{align*} for all $n\geq1$.
\end{remark} \noindent Theorem \ref{T:Main} shows that $E_{\ell, n}^V(\varepsilon)-E$ is essentially a monotone function of $\ell$ if $\delta$ and $\varepsilon$ are sufficiently small. More precisely, if $\max\{\delta,\varepsilon\} < (2C_1)^{-1}$ then \begin{equation}\label{E:diff-est} \ell'>\frac{\ell}{1-2C_2\max\set{\delta,\varepsilon}} \qquad \Rightarrow \qquad \text{sgn}(V''(0)) \lr{E_{\ell, n}^V(\varepsilon) - E_{\ell',n}^V(\varepsilon)}>0. \end{equation} \subsection{Nodal Sets of Eigenfunctions for $\Op_\hbar(\varepsilon)$} In this section, we state our results on nodal sets. We define $U_{\ell,n}(\varepsilon)$ to be the span of the eigenfunctions of $\Op_{\hbar_n}(\varepsilon)$ of energy $E_{\ell,n}^V(\varepsilon)$. The vector spaces $U_{\ell,n}(\varepsilon)$ have dimension bounded independently of $n,\ell$ (see \eqref{E:diff-est}). Writing $x\in \mathbb{R}^d\mapsto (r,\omega)$ for the polar decomposition, $U_{\ell,n}(\varepsilon)$ is spanned by functions of the form \begin{align} v_{\ell,n,m}(\varepsilon,x) = \psi_{\ell,n}(\varepsilon,r)Y_{m}^\ell(\omega), \qquad 1\leq m \leq D_{d,\ell}, \label{E:perturbed-eigenfunctions} \end{align} where $D_{d,\ell}=\dim \ker(\Delta_{S^{d-1}}+\ell(\ell+d-2))$ and the spherical harmonics $Y_m^{\ell}(\omega)$ are an ONB for the $-\ell(\ell+d-2)$ eigenspace of the Laplacian on $S^{d-1}$: \begin{equation}\label{E:spherical-harmonics} \ker(\Delta_{S^{d-1}}+\ell(\ell+d-2))=\text{Span}\set{Y_m^{\ell},\,\, m=1,\ldots, D_{d,\ell}}. \end{equation} The function $\psi_{\ell,n}(\varepsilon,r)$ is the unique, tempered, $L^2(r^{d-1}dr)$-normalized solution to the eigenfunction equation \begin{equation} \Op_{\hbar_n,\ell}(\varepsilon)\psi_{\ell,n}=E_{\ell,n}^V(\varepsilon)\psi_{\ell,n}. \end{equation} The energy $E_{\ell,n}^{V}(\varepsilon)$ controls the rate of growth of $v_{\ell,n,m}(\varepsilon,x)$ for large $|x|$ in the following sense.
\begin{proposition} \label{prop:growth} Fix $\eta>0$, and let $V\in C^{\infty}(\mathbb{R}_{+};\mathbb{R})$ be such that \begin{align} \label{E:V-assumption} \limsup_{r\to\infty} r^{\eta}V(r^2) < \infty. \end{align} Then, there exists a finite constant $C^V_{\ell,n,\varepsilon}\neq 0$ such that \begin{align*} \lim_{r\to\infty}\frac{d^j}{dr^j}\lr{r^{-N}e^{\frac{r^2}{2\hbar_n}}\psi_{\ell,n}(\varepsilon,r)} = C^V_{\ell,n,\varepsilon}\cdot \delta_{0,j}, \end{align*} where $N = \frac{1}{\hbar_n}E_{\ell,n}^{V}(\varepsilon) - \frac{d}{2}.$ \end{proposition} Proposition \ref{prop:growth} is essentially a classical result (see \S\S 3.1-3.4 in \cite{erdelyi2010asymptotic}). We give a brief derivation in \S \ref{S:growth}. Our next result concerns the nodal sets of eigenfunctions of $\Op_{\hbar_n}(\varepsilon)$ whose eigenvalue $E_{\ell,n}^V(\varepsilon)$ nearly extremizes the distance to $E=E_{\ell,n}^V(0)$ and hence, by Theorem \ref{T:Main}, is close to \[E_{0,n}^V(\varepsilon)=E+\lr{\frac{E}{2}-\frac{\hbar_n d}{4}}^2\varepsilon \hbar_n V''(0)(3+O\lr{\max\set{\varepsilon,\delta}})\] when $\hbar_n$, $\varepsilon$ and $\delta$ are small. Define \[I_{\varepsilon,\gamma,\hbar_n}=[E_{0,n}^V(\varepsilon) -\hbar_n^{1+2\gamma}, E_{0,n}^V(\varepsilon)+\hbar_n^{1+2\gamma}],\] for some $0\leq \gamma \leq 1,$ and consider the span of the corresponding eigenfunctions \begin{align*} V_{\varepsilon,\gamma,\hbar_n} = \text{span}\{U_{\ell,n}(\varepsilon):E_{\ell,n}^V(\varepsilon)\in I_{\varepsilon, \gamma, \hbar_n}\}. \end{align*} \noindent The following concerns the nodal sets of functions in $V_{\varepsilon,\gamma,\hbar_n}$.
\begin{theorem}\label{T:Zeros} Under the assumptions of Theorem \ref{T:Main}, there exist $\varepsilon^*,\delta^*, \hbar^*>0$ such that for every $\delta<\delta^*$, $\varepsilon\in[0,\varepsilon^*],$ $\hbar_n<\hbar^*$, and $\gamma\in [0,1],$ we have \[\inf_{v\in V_{\varepsilon,\gamma,\hbar_n} }\limsup_{R\ensuremath{\rightarrow} \infty}\mathcal H^{d-2}\lr{\set{v =0}\cap S_R^{d-1}}=0, \] and there exist absolute constants $c,C>0$ such that \[c\hbar_n^{-1+\gamma}\leq \sup_{v\in V_{\varepsilon,\gamma,\hbar_n} }\limsup_{R\ensuremath{\rightarrow} \infty}\mathcal H^{d-2}\lr{\set{v =0}\cap S_R^{d-1}}\leq C \hbar_n^{-1+\gamma}.\] Here $S_{R}^{d-1}$ is the $(d-1)$-dimensional sphere of radius $R$ centred at the origin, and nodal sets on $S_R^{d-1}$ are measured after rescaling to the unit sphere: $\mathcal{H}^{d-2}\lr{\set{v=0}\cap S_R^{d-1}}$ denotes the $(d-2)$-dimensional Hausdorff measure of $\set{\omega\in S^{d-1}: v(R\omega)=0}$; this normalized measure is also written $\mathcal H_R^{d-2}$ below. \end{theorem} In the case where the energies $E_{\ell,n}^V(\varepsilon)$ are distinct, we can conclude additional properties of the nodal sets. \begin{theorem}\label{T:Zeros2} Under the assumptions of Theorem \ref{T:Main}, and the additional assumption that the energies $E_{\ell,n}^V(\varepsilon)$ are distinct, we have the following: For each $v\in V_{\varepsilon,\gamma,\hbar_n}$, \[\lim_{R\ensuremath{\rightarrow}\infty} \mathcal H_R^{d-2}\lr{\set{v(x) =0}\cap S_R^{d-1}}\] exists, and there exist absolute constants $c, C>0$ such that for every $v\in V_{\varepsilon,\gamma,\hbar_n}$ in the complement of a co-dimension $1$ subspace, we have \begin{align*} \lim_{R\ensuremath{\rightarrow}\infty} \mathcal{H}_R^{d-2}\lr{\set{v(x) =0}\cap S_R^{d-1}} = 0 & \qquad\emph{ if } ~~V''(0) >0 , \\ c\hbar_n^{-1+\gamma}\leq \lim_{R\ensuremath{\rightarrow}\infty} \mathcal{H}_R^{d-2}\lr{\set{v(x) =0}\cap S_R^{d-1}} \leq C\hbar_n^{-1+\gamma} &\qquad \emph{ if } ~~V''(0) < 0. \end{align*} \end{theorem} \subsection{Acknowledgements} We would like to thank Michael Taylor for pointing us to his excellent notes \cite{Taylor-dispert} on analytic perturbation theory.
We are also grateful to Vincent Genest for several useful conversations about special functions and for directing us to the linearization formulas in \cite{suslov2008hahn}. \section{Proof of Theorems \ref{T:Zeros} and \ref{T:Zeros2}} We now explain how to derive Theorems \ref{T:Zeros} and \ref{T:Zeros2} from Theorem \ref{T:Main} and Proposition \ref{prop:growth}. We then prove Proposition \ref{prop:growth} in \S \ref{S:growth} below and Theorem \ref{T:Main} in \S \ref{S:main-proof}. Our derivation relies on several well-known properties of the spherical harmonics $Y_{m}^{\ell}(\omega)$. The first is an estimate on the measure of the total nodal set: Since for each $1\leq m \leq D_{d,\ell}$, $Y_{m}^{\ell}(\omega)$ is an eigenfunction on the round sphere $S^{d-1}$ with eigenvalue $-\ell(\ell+d-2)$, the Donnelly-Fefferman bounds \cite{donnelly1988nodal} show that there exist constants $c,C>0$ such that, for any coefficients $a_m$ that are not all zero, \begin{equation}\label{E:sh-nodal-meas} c\ell \leq \mathcal H^{d-2}\lr{\left\{\omega\in \mathbb{S}^{d-1}:\sum_{1\leq m\leq D_{d,\ell}}a_{m}Y_m^\ell(\omega) =0\right\}}\leq C\ell. \end{equation} In particular, since $E_{0,n}^V(\varepsilon)\in I_{\varepsilon,\gamma,\hbar_{n}}$ by construction for all $0\leq\gamma\leq 1$, this immediately gives the first statement of Theorem \ref{T:Zeros}. Moreover, by a simple application of the Crofton formula (see, for example, \cite{gichev2009some}), the upper bound in \eqref{E:sh-nodal-meas} holds also for (non-identically zero) linear combinations of spherical harmonics up to frequency $\ell:$ \begin{equation}\label{E:sh-nodal-meas2} \mathcal H^{d-2}\lr{\left\{\omega\in \mathbb{S}^{d-1}:\sum_{s\leq \ell,\, 1\leq m\leq D_{d,s}}a_{s, m} Y_m^s(\omega)=0\right\} }\leq C\ell.
\end{equation} By the approximate monotonicity of the energies $E_{\ell,n}^{V}(\varepsilon)$ from Theorem \ref{T:Main}, provided $\varepsilon,\delta,\hbar_n$ are sufficiently small, the largest value of $\ell$ for which $E_{\ell,n}^V(\varepsilon)\in I_{\varepsilon,\gamma,\hbar_n}$ is bounded above and below by a constant multiple of $\hbar_n^{-1+\gamma}$. The second statement in Theorem \ref{T:Zeros} then follows from the lower bound in \eqref{E:sh-nodal-meas} and the upper bound in \eqref{E:sh-nodal-meas2}. To prove Theorem \ref{T:Zeros2} we have the extra assumption that the energies $E_{\ell,n}^V(\varepsilon)$ are distinct. In this case, by Proposition \ref{prop:growth}, the radial part of the eigenfunctions, $\psi_{\ell,n}(\varepsilon,r)$, grows at different rates as $r\to\infty$ for different values of $\ell$. Given $v(x)\in V_{\varepsilon,\gamma,\hbar_n}$, we can write \begin{align*} v(x) = \sum_{\ell\in J_{\varepsilon,\gamma,\hbar_n}}\sum_{1\leq m\leq D_{d,\ell}} a_{\ell,m}\psi_{\ell,n}(\varepsilon,r)Y_m^{\ell}(\omega) , \end{align*} where $J_{\varepsilon,\gamma,\hbar_n}:=\set{\ell: E_{\ell,n}^V(\varepsilon)\in I_{\varepsilon,\gamma,\hbar_n}}$. Among the values of $\ell$ for which $a_{\ell,m}\neq0$ for some $m$, let $\ell^*\in J_{\varepsilon,\gamma,\hbar_n}$ correspond to the largest energy $E_{\ell,n}^V(\varepsilon)$. Then, by Proposition \ref{prop:growth}, the function $ r^{-N^*}e^{\frac{r^2}{2\hbar_n}}v(r,\omega) $ converges in $C^{\infty}(S^{d-1})$ to \begin{align} \label{E:linear-combination} C_{\ell^*,n,\varepsilon}^V\sum_{1\leq m\leq D_{d,\ell^*}}a_{\ell^*,m}Y_{m}^{\ell^*}(\omega) \end{align} as $r\to\infty$, where $N^*= \frac{1}{\hbar_n}E_{\ell^*,n}^V(\varepsilon) - \frac{d}{2}$. Moreover, the function in \eqref{E:linear-combination} has co-dimension $2$ singular set in $\mathbb{S}^{d-1}$ (see e.g.
\cite{hardt1999critical, hardt1987nodal}), and so by Corollary 2 in \cite{beck2016nodal} we have the convergence of the nodal set measures, \begin{align} \label{E:linear-combination2} \lim_{R\to\infty}\mathcal H_R^{d-2}\lr{\set{v(x) =0}\cap S_R^{d-1}} = \mathcal H^{d-2}\lr{\left\{\omega\in\mathbb{S}^{d-1}:\sum_{1\leq m\leq D_{d,\ell^*}}a_{\ell^*,m}Y_{m}^{\ell^*}(\omega) =0\right\}} . \end{align} By Theorem \ref{T:Main}, provided $\varepsilon,\delta,\hbar_n$ are sufficiently small, for almost every $v\in V_{\varepsilon,\gamma,\hbar_n}$, $\ell^*$ is equal to $0$ if $V''(0)>0$, while $\ell^* = O(\hbar_n^{-1+\gamma})$ if $V''(0)<0$. Thus, \eqref{E:linear-combination2} and the Hausdorff measure estimates in \eqref{E:sh-nodal-meas} imply Theorem \ref{T:Zeros2}. \subsection{Proof of Proposition \ref{prop:growth}} \label{S:growth} Suppose that $u_{\ell,n}(\varepsilon,r)$ satisfies the equation \begin{equation} \label{E:growth1a} \lr{-\frac{\hbar_n^2}{2}\lr{\frac{d^2}{dr^2} + \frac{d-1}{r}\frac{d}{dr} - \frac{\ell(\ell+d-2)}{r^2}}+\frac{r^2}{2} + \varepsilon\hbar_n V(r^2) - E_{\ell,n}^{V}(\varepsilon)}u_{\ell,n}(\varepsilon,r) = 0 \end{equation} for $r$ sufficiently large. Setting $t = \frac{r^2}{2\hbar_n}$, and $z_{\ell,n}(\varepsilon,t) = t^{-N/2}e^{t}u_{\ell,n}(\varepsilon,r)$, with $N = \frac{1}{\hbar_n}E_{\ell,n}^V(\varepsilon) - \frac{d}{2}$, this equation becomes \begin{equation} \label{E:growth2} \left(\frac{d^2}{dt^2} + 2\lr{-1 + \frac{N/2+d/4}{t}}\frac{d}{dt} + \frac{F_{\ell,n}(\varepsilon,t)}{t^2}\right)z_{\ell,n}(\varepsilon,t) = 0.
\end{equation} Here the function $F_{\ell,n}(\varepsilon,t)$ is given by \begin{align*} F_{\ell,n}(\varepsilon,t) = -\varepsilon tV(2\hbar_{n}t) -\frac{\ell(\ell+d-2)}{4} +\frac{N}{2}(N/2-1) + \frac{Nd}{4}, \end{align*} and so by the assumption on $V(t)$ from \eqref{E:V-assumption}, \begin{align} \label{E:F-assumption} \left|F_{\ell,n}(\varepsilon,t)\right| \leq C_{\ell,n,\varepsilon} t^{1-\eta/2}, \end{align} for a constant $C_{\ell,n,\varepsilon}$ and all sufficiently large $t$. Then, for fixed $\ell,n,\varepsilon$, Erd\'elyi \cite{erdelyi2010asymptotic} (with $\omega = -1$, $\rho = -N/2-d/4$) gives a solution $z^{(1)}_{\ell,n}(\varepsilon,t)$ to \eqref{E:growth2} for $t\geq t_0$, under the assumption on $F_{\ell,n}(\varepsilon,t)$ in \eqref{E:F-assumption}, such that \begin{align} \label{E:growth3} \limsup_{t\to\infty} t^{\eta}\left|z^{(1)}_{\ell,n}(\varepsilon,t)-1\right| <\infty. \end{align} (Note that in \cite{erdelyi2010asymptotic}, equation (7), the assumption placed on $F_{\ell,n}(\varepsilon,t)$ is that it is bounded in $x$, but the same proof works for the sub-linear growth from \eqref{E:F-assumption}.) Going back to the original function $u_{\ell,n}(\varepsilon,r)$, we obtain a solution $u^{(1)}_{\ell,n}(\varepsilon,r)$ to \eqref{E:growth1a} for $r\geq r_0$, which is non-zero, and satisfies \begin{align} \label{E:growth4} \lim_{t\to\infty}t^{-N/2}e^{t} u^{(1)}_{\ell,n}(\varepsilon,\sqrt{2\hbar_n t}) = 1,\qquad \lim_{t\to\infty}\frac{d^j}{dt^j}\left(t^{-N/2}e^{t} u^{(1)}_{\ell,n}(\varepsilon,\sqrt{2\hbar_n t})\right) = 0 \end{align} for $N = \frac{1}{\hbar_n}E_{\ell,n}^{V}(\varepsilon) - \frac{d}{2}$, $j\geq1$. To obtain another solution to \eqref{E:growth1a} for large $r$, we first set $w^{(1)}_{\ell,n}(\varepsilon,r) = r^{\frac{d-1}{2}} u_{\ell,n}^{(1)}(\varepsilon,r)$, to remove the coefficient of $\frac{d}{dr}$ in \eqref{E:growth1a}.
Then defining $u^{(2)}_{\ell,n}(\varepsilon,r) = r^{-\frac{d-1}{2}}w^{(2)}_{\ell,n}(\varepsilon,r)$, where \begin{align*} w^{(2)}_{\ell,n}(\varepsilon,r) = w^{(1)}_{\ell,n}(\varepsilon,r)\int_{r_0}^{r}w^{(1)}_{\ell,n}(\varepsilon,s)^{-2} d s, \end{align*} gives the other linearly independent solution to \eqref{E:growth1a} for $r\geq r_0$. Since $u^{(2)}_{\ell,n}(\varepsilon,r)$ grows exponentially as $r$ tends to infinity, it is not $L^2(r^{d-1} dr)$-normalisable, and so our eigenfunction $\psi_{\ell,n}(\varepsilon,r)$ must be proportional to $u^{(1)}_{\ell,n}(\varepsilon,r)$ for $r\geq r_0$. The proposition then follows from the estimates in \eqref{E:growth4}. \qed \section{Background to Proof of Theorem \ref{T:Main}} \subsection{Spectral Theory of $\HO_\hbar$}\label{S:HO-spec-theory} The spectrum of the isotropic harmonic oscillator $\HO_\hbar$ is \[\text{Spec}(\HO_\hbar)=\set{\hbar\lr{n+d/2}}_{n\in \mathbb N}.\] In this article, we will repeatedly use properties of the radial eigenfunctions of $\HO_\hbar$, which we now recall. From \eqref{E:spherical-harmonics}, recall the spectrum of the Laplacian on $S^{d-1}$ and the corresponding real-valued eigenfunctions $\set{Y_m^{\ell},\,\, m=1,\ldots, D_{d,\ell}}.$ A standard calculation shows that an ONB for $\ker(\HO_\hbar - \hbar\lr{n+d/2})$ is given by \[\psi_{\hbar, \ell, n}(r)\cdot Y_m^{\ell}(\omega),\qquad 0\leq \ell \leq n,~~\ell \equiv n~(\text{mod }2),\quad m=1,\ldots, D_{d,\ell},\] where $x\in {\mathbb R}^d \mapsto (r,\omega)$ is the polar decomposition and \begin{align} \label{E:Laguerre} \psi_{\hbar, \ell, n}(r)= \hbar^{-\frac{\ell}{2}-\frac{d}{4}}\mathcal N_{\ell,n}\cdot r^{\ell}e^{-r^2/2\hbar} L_{n'}^{\lr{\alpha}}\lr{r^2/\hbar},\qquad \mathcal N_{\ell, n}^{\,2} = \frac{2\cdot\Gamma\lr{\frac{n-\ell}{2}+1}}{\Gamma\lr{\frac{n+\ell +d}{2}}}. \end{align} In the above, we have set \[n':=\frac{n-\ell}{2},\qquad \alpha:=\ell + \frac{d-2}{2},\] and denoted by $L_k^{(\alpha)}$ the generalized Laguerre polynomials.
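As a quick consistency check of the normalization in \eqref{E:Laguerre} (an elementary computation, recorded here for the reader's convenience), take $n=\ell=0$, so that $n'=0$ and $L_0^{(\alpha)}=1$:

```latex
\psi_{\hbar,0,0}(r)=\hbar^{-\frac{d}{4}}\,\mathcal N_{0,0}\,e^{-r^2/2\hbar},
\qquad
\mathcal N_{0,0}^{\,2}=\frac{2}{\Gamma\lr{\frac d2}},
\qquad
\int_0^\infty \psi_{\hbar,0,0}(r)^2\,r^{d-1}\,dr
=\frac{2\,\hbar^{-\frac d2}}{\Gamma\lr{\frac d2}}
\cdot\frac{\hbar^{\frac d2}}{2}\int_0^\infty e^{-u}\,u^{\frac d2-1}\,du=1,
```

where we substituted $u=r^2/\hbar$ in the last integral. This is the Gaussian ground state of $\HO_\hbar$, with energy $\hbar d/2$.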
We often fix $E>0$ and define $\hbar=\hbar_n$ to be a function of $n$ and $E$ as in \eqref{E:hbar-n}. In this case, we abbreviate \[\psi_{\ell, n}:=\psi_{\hbar_n, \ell, n}.\] As explained in the introduction, the energy $E$ determines a classically forbidden region $\mathcal F_E=\set{r^2>2E}$, where the fixed energy eigenfunctions $\psi_{\hbar_{n},\ell, m}$ for $m\approx n$ are uniformly exponentially small. More precisely, for any $\varepsilon>0$ there exists $C>0$ such that \begin{equation}\label{E:Agmon} \sup_{\substack{0\leq \ell \leq m,\,\, \abs{m-n}<\frac{n}{2}\\ \ell \equiv m \text{ (mod 2)}\\ r\in [\sqrt{4E},\infty)}}\abs{e^{(1-\varepsilon)r^2/2}\psi_{\hbar_n,\ell, m}(r)} \leq C. \end{equation} Since $\lim_{x\to\infty}x^{-k}L_k^{(\alpha)}(x)=\frac{(-1)^k}{k!}$, for each fixed $n,$ the radial eigenfunctions $\psi_{\ell,n}$ differ at infinity only by a constant: \begin{equation}\label{E:HO-Efns} 0 < \lim_{r\ensuremath{\rightarrow} \infty} \abs{\frac{\psi_{\ell, n}(r)}{\hbar_n^{-\frac{n}{2}-\frac{d}{4}}r^n e^{-r^2/2\hbar_n}}} = \frac{\mathcal N_{\ell, n}}{n'!}< \infty\qquad \forall\, 0\leq \ell\leq n,\quad \ell \equiv n ~(\text{mod }2). \end{equation} \subsection{Linearization Formulas for Laguerre Functions} In order to perform perturbative calculations about $\Spec\lr{\Op_\hbar(\varepsilon)},$ we will need a convenient expression for \[J_{a,b,k}^\alpha:=\int_0^\infty e^{-\rho}\rho^{\alpha+k} L_a^{\lr{\alpha}}(\rho)L_b^{\lr{\alpha}}(\rho)d\rho,\] where as in \eqref{E:Laguerre} $L_a^{\lr{\alpha}}$, $L_b^{\lr{\alpha}}$ are the generalized Laguerre polynomials. \begin{proposition}[Special Case of \cite{suslov2008hahn} Eqn. (2.5)] For any $a,b,k\in \mathbb N$ and every $\alpha>-1,$ we have \begin{equation}\label{E:overlap3F2} J_{a,b,k}^\alpha = \lr{\alpha+1}_k\binom{k}{\abs{a-b}}\left[\frac{(a\lor b + 1)_{\abs{a-b}}}{\lr{a\lor b + \alpha + 1}_{\abs{a-b}}}\right]^{1/2} \pFq{3}{2}{-k, \,k+1,\,-(a\lor b)}{\abs{a-b}+1, \alpha +1}{1}.
\end{equation} \end{proposition} \noindent We have written $a\lor b$ for the minimum of $a,b$, \[\lr{x}_q:=\frac{\Gamma(x+q)}{\Gamma(x)}\] for the Pochhammer symbol and \[ \pFq{3}{2}{a_1, \,a_2,\,a_3}{b_1, b_2}{1} = \sum_{q=0}^\infty\frac{1}{q!} \frac{\lr{a_1}_q\lr{a_2}_q\lr{a_3}_q}{\lr{b_1}_q\lr{b_2}_q}\] for the generalized hypergeometric function. In the case of \eqref{E:overlap3F2}, the sum terminates at $q=k\lor a\lor b$ since $\lr{-m}_q$ vanishes whenever $m\in \mathbb N$ and $q>m.$ The expression in \eqref{E:overlap3F2} differs slightly from the one in \cite{suslov2008hahn} because our Laguerre functions are $L^2-$normalized while the ones in \cite{suslov2008hahn} are not. \subsection{Analytic Perturbation Theory}\label{S:perturbation-theory} We recall in this section several results from analytic perturbation theory. These results are classical, and we mainly follow the notes \cite{Taylor-dispert} of M. Taylor. Suppose that $H$ is an unbounded self-adjoint operator on a Hilbert space $\mathcal H$ with discrete spectrum $\set{\lambda_j}_{j=0}^\infty$ and corresponding eigenfunctions \[H u_j = \lambda_j u_j.\] Suppose further that $W$ is a bounded self-adjoint operator on $\mathcal H.$ Consider some $\lambda =\lambda_n\in \Spec\lr{H},$ and write $u = u_n$ for the corresponding eigenfunction.
Then, since $\lambda$ is a simple eigenvalue, for all $\varepsilon$ sufficiently small the operator \[H(\varepsilon):=H+\varepsilon W\] has a simple eigenvalue $\lambda(\varepsilon)$ with \[\lr{H+\varepsilon W}u(\varepsilon)=\lambda(\varepsilon)u(\varepsilon).\] Both $\lambda(\varepsilon)$ and $u(\varepsilon)$ are analytic in $\varepsilon.$ Explicitly, write \[\lambda(\varepsilon)= \lambda + \varepsilon \sum_{k\geq 0} \varepsilon^k \mu_k,\quad u(\varepsilon)= u + \varepsilon \sum_{k\geq 0} \varepsilon^k v_k,\] and impose the normalization \[\lr{u(\varepsilon)-u}\perp u.\] We have the following recursive formulas for $\mu_k, v_k$ for each $k\geq 0$: \begin{equation} \begin{cases} w_k=\sum_{m=0}^{k-1} \mu_{k-m-1}v_m, & ~~ w_0=0\\ v_k = \lr{H-\lambda}^{-1}\left[-\Pi_u^\perp \lr{Wv_{k-1}} + w_k\right],&~~v_{-1}=u\\ \mu_k = \inprod{Wv_{k-1}}{u} \end{cases}. \end{equation} The operator $\Pi_u^\perp$ is the projection onto the orthogonal complement of $u.$ Using this recursion and the self-adjointness of $H$ and $W$, we have for any $X\in \mathcal H$ \begin{equation}\label{E:G-def} \inprod{Wv_k}{X}= - \inprod{Wv_{k-1}}{G\lr{X}}, \qquad G=\lr{H-\lambda}^{-1}\circ \Pi_u^\perp \circ W. \end{equation} Writing $u=u_n$ and using \eqref{E:G-def}, we find for $k\geq 0$ \begin{equation}\label{E:muj-G} \mu_k = (-1)^k\inprod{Wu}{G^{(k)}(u)}, \end{equation} where $G^{(k)}$ denotes the $k$-fold composition of $G$. Using the definition of $G$ we obtain \begin{align} \notag \mu_0 &= \inprod{Wu}{u}\\ \mu_k &= \sum_{m_1,\ldots, m_k\neq n} \inprod{W u }{u_{m_1}} \prod_{i=1}^k \frac{\inprod{Wu_{m_i}}{u_{m_{i+1}}}}{\lambda_n-\lambda_{m_i}},\qquad k \geq 1, \end{align} with the convention that $u_{m_{k+1}}=u.$ We will also need the following simple estimate. \begin{Lem}\label{L:level-spacing} Suppose that $H$ not only has simple spectrum but also that the spacing between any two consecutive eigenvalues is bounded below by $\eta>0.$ Then, if $\norm{W}_{L^\infty}\leq \eta,$ \begin{equation} \sup_{\substack{n\in \mathbb N\\\varepsilon\in [0,\frac{1}{5}]}} \abs{\lambda_n(\varepsilon)- \lambda_n(0)}< \eta/4.
\end{equation} \end{Lem} \begin{proof} Let $\mu_k$ be given by \eqref{E:muj-G}. Since \[\norm{G}_{\mathcal H\ensuremath{\rightarrow} \mathcal H}\leq \eta^{-1}\norm{W}_{L^\infty}\leq 1,\] we conclude \[\abs{\mu_k}\leq \norm{W}_{L^\infty}\leq \eta.\] Thus, for $\varepsilon\in[0,\tfrac{1}{5}]$ we can write \[\lambda_n(\varepsilon)-\lambda_n(0) = \varepsilon \sum_{k=0}^\infty \varepsilon^k \mu_k,\] and, in particular \[\abs{\lambda_n(\varepsilon)-\lambda_n(0)} \leq \eta\cdot \frac{\varepsilon}{1-\varepsilon}\leq \frac{\eta}{4}.\] \end{proof} \section{Proof of Theorem \ref{T:Main}}\label{S:main-proof} \noindent Throughout this section, we fix $E>0$ and use the convention \[\hbar = \hbar_n = \frac{E}{n+\frac{d}{2}}\] as in \eqref{E:hbar-n}. The proof of Theorem \ref{T:Main} consists of three steps, which we describe below. \subsection{Step 1} The first step is to replace both $V(r^2)$ and $E_{\ell, n}^V(\varepsilon)$ by $\hbar$-dependent Taylor expansions around $r=0$ and $\varepsilon=0,$ respectively. More precisely, for each $K\in \mathbb N,$ define \[ V_K(r):=\sum_{k=0}^K \frac{V^{(k)}(0)}{k!} r^{2k}.\] \begin{proposition}\label{P:step1} There exists a constant $C_1>0$ with the following property. For all $E>0,$ any $\delta \in (0,(C_1 E)^{-1})$, each $\delta$-slowly varying potential $V,$ and every $n,\varepsilon$ such that $\hbar_n<1$ and $\varepsilon\in [0,1/5]$, we have \begin{equation}\label{E:eval-jets} \sup_{ \ell\leq n,\, \ell\equiv n \text{ (mod 2)}}\abs{E_{\ell, n}^V(\varepsilon) - \sum_{j=0}^J \frac{\lr{\hbar_n\varepsilon}^j}{j!} \frac{d^j}{d\varepsilon^j}\bigg|_{\varepsilon=0} E_{\ell,n}^{V_K}(\varepsilon)} = O\lr{ \hbar_n^{\infty} } \end{equation} provided $K=K(n)$ and $J=J(n)$ satisfy \begin{equation}\label{E:KJ-assumptions} \limsup_{n\ensuremath{\rightarrow} \infty} \frac{K(n)}{\log n}=\limsup_{n\ensuremath{\rightarrow} \infty} \frac{J(n)}{\log n}=\infty\qquad \text{and}\qquad \lim_{n\ensuremath{\rightarrow} \infty} \frac{K(n)J(n)}{n}=0.
\end{equation} \end{proposition} The approximation \eqref{E:eval-jets} is the source of the $O(\hbar_n^\infty)$ error in \eqref{E:E-expansion}. The function $E_{\ell,n}^{V_K}(\varepsilon)$ whose jets appear in Proposition \ref{P:step1} is formally defined in the same way as $E_{\ell,n}^V(\varepsilon).$ However, note that $V_K$ is not a bounded operator on $L^2([0,\infty), r^{d-1}dr).$ It therefore does not strictly follow from the discussion in \S \ref{S:perturbation-theory} that these jets are well-defined. Nonetheless, we simply \textit{define} these jets by $\mu_{\ell,n}^{V_K}(0)=E$ and for $j\geq 1$ \begin{equation}\label{E:jets-def} \mu^{V_K}_{\ell, n}(j):= \frac{1}{j!}\frac{d^j}{d\varepsilon^j}\bigg|_{\varepsilon=0} E_{\ell,n}^{V_K}(\varepsilon) = \lr{-1}^{j-1} \inprod{V_K \psi_{\hbar_n,\ell,n}}{G_{\ell,K}^{(j-1)} (\psi_{\hbar_n, \ell,n})}, \end{equation} where the inner product is in $L^2([0,\infty), r^{d-1}dr)$ and \[G_{\ell,K}:=\lr{\Op_{\hbar_n,\ell}(0) - E^V_{\ell,n}(0)}^{-1}\circ \Pi_{\psi_{\hbar_n, \ell,n}}^{\perp}\circ V_K.\] The inner products on the right-hand side of \eqref{E:jets-def} are finite, by the Agmon estimates \eqref{E:Agmon}, provided $K(n)$ satisfies \eqref{E:KJ-assumptions}. We prove Proposition \ref{P:step1} in \S \ref{S:step1-proof}. \subsection{Step 2} The second step in the proof of Theorem \ref{T:Main} is to write the derivatives of $E^{V_K}_{\ell,n}(\varepsilon)$ at $\varepsilon=0$ that appear in Proposition \ref{P:step1} in terms of hypergeometric functions and obtain their asymptotics.
Unwinding the definition of $G_{\ell,K},$ using \eqref{E:jets-def}, and recalling that the spectrum of $\HO_\hbar=\Op_\hbar(0)$ has level spacing $\hbar,$ we may write $\mu^{V_K}_{\ell, n}(j)$ as \begin{equation}\label{E:jth-var-sumproduct} \hbar_n^{1-j}\sum_{\substack{m_1,\ldots, m_{j-1}\neq n\\ \abs{m_i - m_{i+1}}\leq 2K\\ \abs{n-m_1}\leq 2K}} \inprod{V_K\psi_{\hbar_n, \ell,n}}{\psi_{\hbar_n, \ell,m_1}} \prod_{i=1}^{j-1} \frac{\inprod{V_K \psi_{\hbar_n,\ell,m_i}}{\psi_{\hbar_n, \ell,m_{i+1}}}}{n-m_i} \end{equation} with the convention that $m_j=n.$ The restriction that $\abs{m_i-m_{i+1}}\leq 2K$ comes from the binomial coefficient in \eqref{E:overlap2} below. To state our next result, we augment the notation in \S \ref{S:HO-spec-theory} and write for each $n\geq \ell,\,\, \ell \equiv n \text{ (mod 2)}$ and all $s,t\geq \ell$ with $s,t\equiv \ell \text{ (mod 2)}$ \begin{equation}\label{E:prime-defs} s': = \frac{s-\ell}{2},\quad t': = \frac{t-\ell}{2},\quad \alpha := \ell + \frac{d-2}{2},\quad s'\lor t' : = \min\set{s',t'}. \end{equation} For $n\in\mathbb{N}$, we will be interested in the values of $s$, $t$, and $\ell$ in the set \begin{equation} \label{E:parameter-space} U^n_{s,t,\ell} = \left\{(s,t,\ell) \in \mathbb{N}^3: s,t,\ell\equiv n \text{ (mod 2)},\ \ell \leq n,\ |s-n|<\frac{n}{2},\ |t-n|<\frac{n}{2}\right\}. \end{equation} For each $K,s,t\in \mathbb N,$ we recall our assumptions $V(0)=V'(0)=0$ and write \begin{equation}\label{E:overlap1} \inprod{V_K\psi_{\hbar_n, \ell,s}}{\psi_{\hbar_n, \ell,t}} = \sum_{k=2}^K \frac{V^{(k)}(0)}{k!} \hbar_n^k A_{k,s,t,\ell}. \end{equation} The following proposition is proved in \S \ref{S:step2-proof}. \begin{proposition}\label{P:step2} There exist constants $C_1,C_2>0$ with the following property. For any $E>0$, if $\delta \in (0, (C_1E)^{-1})$ and $V$ is a $\delta$-slowly varying potential in the allowed region for energy $2E$ (Definition \ref{D:slowly-varying}), then for each \[k_0\geq 2,\qquad n\text{ s.t.
} \hbar_{n}<1,\qquad(s,t,\ell)\in U^n_{s,t,\ell},\] we have \begin{equation} \label{E:overlap2a} \abs{\sum_{k\geq k_0} \frac{V^{(k)}(0)}{k!} \hbar_n^k A_{k,s,t,\ell} - T(n,s,t)}\leq C_2 \frac{1+\ell^2}{\lr{s\lor t}^2}e^{-|s-t|}\lr{E\delta}^{k_0}. \end{equation} Here, as usual, $K=K(n)$ satisfies \eqref{E:KJ-assumptions}, and $T(n,s,t)$ is $\ell$-independent and satisfies \[\sup_{\substack{\hbar_n<\hbar^*,\\(s,t,\ell)\in U^n_{s,t,\ell}}} e^{|s-t|}\abs{T(n,s,t)}\leq \lr{ C_2E\delta}^{k_0}.\] \end{proposition} \begin{remark} We will only use Proposition \ref{P:step2} for $k_0=2,3.$ Also, we will obtain the following exact formula for $A_{k,s,t,\ell}:$ \begin{equation} \label{E:overlap2} A_{k,s,t,\ell} = \lr{\alpha+1}_k\binom{k}{\abs{s'-t'}}\left[\frac{\lr{s'\lor t' + 1}_{\abs{s'-t'}}}{\lr{s'\lor t' + \alpha + 1}_{\abs{s'-t'}}}\right]^{1/2} \pFq{3}{2}{-k, \,k+1,\,-(s'\lor t')}{\abs{s'-t'}+1, \alpha +1}{1}, \end{equation} where the notation is from \eqref{E:prime-defs}. In particular, $A_{k,s,t,\ell}=0$ whenever $|s'-t'|>k$. \end{remark} \subsection{Step 3} The final step in the proof of Theorem \ref{T:Main} is to observe that, combining Proposition \ref{P:step2} with the expression for $\frac{1}{j!}\frac{d^j}{d\varepsilon^j}\bigg|_{\varepsilon=0} E_{\ell,n}^{V_K}(\varepsilon)$ from \eqref{E:jth-var-sumproduct} and \eqref{E:overlap1}, we obtain the following estimates. \begin{proposition}\label{P:step3} There exist constants $C_1,C_2>0$ with the following property. Fix $E>0$, $\delta\in (0,\lr{C_1E}^{-1})$, and a $\delta$-slowly varying potential $V$. 
For $n\in \mathbb N,$ and $K=K(n)$ and $J=J(n)$ satisfying \eqref{E:KJ-assumptions}, we have \[\sum_{j=2}^J \frac{\lr{\hbar_n \varepsilon}^j}{j!} \frac{d^j}{d\varepsilon^j}\bigg|_{\varepsilon=0} E_{ \ell, n}^{V_K}(\varepsilon)=T_{K}(n,\varepsilon)+\frac{\ell^2}{n^2}S_{K}(\ell, n, \varepsilon).\] Here for every $\ell, n,\varepsilon$ satisfying \begin{equation}\label{E:lnep-constraints} 0\leq \ell\leq n,\,\,\ell \equiv n\text{ (mod 2)},\qquad n\text{ s.t. }\hbar_n <1,\qquad \varepsilon\in[0,1/5], \end{equation} we have the estimates \begin{align*} \max\set{\abs{T_{K}(n,\varepsilon)}, \abs{S_{K}(\ell,n,\varepsilon)}}\leq C_2 \hbar_n \varepsilon^2 \lr{E\delta}^2. \end{align*} Moreover, in the notation of Proposition \ref{P:step2}, we have \begin{equation} \frac{d}{d\varepsilon}\bigg|_{\varepsilon=0} E_{ \ell, n}^{V_K}(\varepsilon)=\inprod{V_K\psi_{\hbar_n, \ell, n}}{\psi_{\hbar_n, \ell, n}} = \hbar_n^2 \frac{V''(0)}{2}A_{2,n,n,\ell} + Y(n)+ \frac{\ell^2}{n^2}X(\ell,n), \end{equation} with \begin{align*} \max\set{\abs{X(\ell,n)},\abs{Y(n)}}\leq C_2\lr{E\delta }^3 \end{align*} for $\ell, n,\varepsilon$ satisfying \eqref{E:lnep-constraints}. \end{proposition} The proof of Theorem \ref{T:Main} is complete once we choose $C_1$ to be the maximum of the $C_1$'s that are provided by Propositions \ref{P:step1} and \ref{P:step3}, use that \[\hbar_n^2 A_{2,n,n,\ell} = 6\lr{\frac{\hbar_n n }{2}}^2\lr{1- \frac{1}{3}\cdot \frac{\ell^2}{n^2} + \frac{2-d}{3n}\cdot \frac{\ell}{n} + \frac{d}{n}+\frac{d(d+2)}{6n^2}},\] and substitute the estimates from Proposition \ref{P:step3} into \eqref{E:eval-jets}. \section{Proof of Proposition \ref{P:step2}}\label{S:step2-proof} Let us first derive \eqref{E:overlap1} and \eqref{E:overlap2}. 
Recall from \eqref{E:Laguerre} that, as a function of the radial variable $r=\abs{x},$ the radial eigenfunctions of the unperturbed operator ($\varepsilon=0$) are \[\psi_{\hbar_n, \ell, s}(r)= \hbar_n^{-\frac{\ell}{2}-\frac{d}{4}}\mathcal N_{s,\ell, d}\cdot r^{\ell}e^{-r^2/2\hbar_n} L_{\frac{1}{2}\lr{s-\ell}}^{\lr{\ell + \frac{d-2}{2}}}\lr{r^2/\hbar_n},\qquad \mathcal N_{s, \ell, d}^{\,2} = \frac{2\cdot\Gamma\lr{\frac{s-\ell}{2}+1}}{\Gamma\lr{\frac{s+\ell +d}{2}}},\] where $L_k^{(\alpha)}$ are the generalized Laguerre polynomials. Hence, for $\alpha = \ell + \frac{d-2}{2}$, and recalling that $V$ acts by multiplication by $V(r^2)$ with $r^2=\hbar_n\rho$, we have \begin{align*} \inprod{V_K \psi_{\hbar_n, \ell, s}}{\psi_{\hbar_n, \ell, t}} &=\frac{\mathcal N_{s,\ell,d} \mathcal N_{t,\ell,d}}{2} \int_0^\infty V_K(\hbar_n \rho) \rho^{\alpha} e^{-\rho} L_{s'}^{\alpha}(\rho)L_{t'}^{\alpha}(\rho)d\rho \\ &= \frac{\mathcal N_{s,\ell,d} \mathcal N_{t,\ell,d}}{2} \sum_{k=0}^K \frac{\hbar_n^kV^{(k)}(0)}{k!} \int_0^\infty \rho^{\alpha+k} e^{-\rho} L_{s'}^{\alpha}(\rho)L_{t'}^{\alpha}(\rho)d\rho . \end{align*} Writing \[ \frac{\mathcal N_{s,\ell,d} \mathcal N_{t,\ell,d}}{2}=\left[\frac{\lr{(s'\lor t')+1}_{\abs{s'-t'}}}{\lr{(s'\lor t')+\alpha+1}_{\abs{s'-t'}}}\right]^{1/2}\cdot \frac{\Gamma\lr{(s'\lor t') +1}}{\Gamma\lr{(s'\lor t')+\alpha +1}}\] and using equation (2.5) in \cite{suslov2008hahn} then proves \eqref{E:overlap1} and \eqref{E:overlap2}. Next, we will show that for all $n\in\mathbb{N}$ and $(s,t,\ell)\in U^n_{s,t,\ell}$, $A_{k,s,t,\ell}$ has the expansion \begin{equation}\label{E:overlap3} \hbar_n^k A_{k,s,t,\ell} = \lr{\frac{\hbar_n \lr{s\lor t}}{2}}^k \left[T_1(k,s,t) + \frac{\ell^2}{\lr{s\lor t}^2} T_2(k,s,t,\ell)\right]. 
\end{equation} Here, for some constants $C,C_1>0$, we have \begin{align} \sup_{\substack{s,t\in \mathbb N\\ \abs{s-n},\abs{t-n}\leq \frac{n}{2}}}\abs{T_1(k,s,t)}& \leq C \cdot C_1^k,\qquad \sup_{\substack{s,t,\ell\in \mathbb N\\ \abs{s-n},\abs{t-n}\leq \frac{n}{2}\\ \ell \leq n,\, \ell \equiv n \text{ (mod 2)}}} \abs{T_2(k,s,t,\ell)}\leq C\cdot C_1^k.\label{E:overlap4} \end{align} Note that for $s\lor t \leq 3n/2$, we have that $\frac{\hbar_n(s\lor t)}{2}\leq E$. Moreover, by \eqref{E:overlap2}, $A_{k,s,t,\ell}$ is equal to zero when $|s'-t'| = \tfrac{|s-t|}{2}>k$. Hence, on the support of $A_{k,s,t,\ell}$, the term $e^{-\abs{s-t}}$ appearing in \eqref{E:overlap2a} is bounded below by $e^{-2k}$, so inserting it costs at most a factor $e^{2k}$, which can be absorbed into the constant $C_1$ in \eqref{E:overlap3} and \eqref{E:overlap4}. Thus, these estimates, together with Definition \ref{D:slowly-varying} of a $\delta$-slowly varying potential, allow us to sum over $k$ to establish \eqref{E:overlap2a} and complete the proof of Proposition \ref{P:step2}. To obtain the estimates in \eqref{E:overlap3} and \eqref{E:overlap4}, we need two lemmas, in which we abbreviate \[N = s\lor t,\qquad \beta = |s'-t'|.\] In particular, this means that $\abs{N-n}\leq \frac{n}{2}$. 
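As a sanity check on \eqref{E:overlap1}--\eqref{E:overlap2} (not used in any proof), one can compare the closed form for $A_{k,s,t,\ell}$ against the normalized Laguerre integral from which it is derived; numerically the two agree up to an overall sign $(-1)^{\abs{s'-t'}}$, which is immaterial for the magnitude estimates above. A sketch in Python (NumPy and SciPy assumed available; the parameter values are illustrative):

```python
import numpy as np
from math import comb, factorial, lgamma
from scipy.integrate import quad
from scipy.special import eval_genlaguerre

def rf(x, m):
    # rising factorial (Pochhammer symbol) (x)_m, valid for negative integer x
    out = 1.0
    for i in range(m):
        out *= x + i
    return out

def A_from_integral(k, sp, tp, a):
    # (N_s N_t / 2) * int_0^infty rho^{a+k} e^{-rho} L_{s'}^a L_{t'}^a drho
    val, _ = quad(lambda r: r ** (a + k) * np.exp(-r)
                  * eval_genlaguerre(sp, a, r) * eval_genlaguerre(tp, a, r),
                  0, np.inf, limit=200)
    lognorm = 0.5 * (lgamma(sp + 1) + lgamma(tp + 1)
                     - lgamma(sp + a + 1) - lgamma(tp + a + 1))
    return np.exp(lognorm) * val

def A_from_formula(k, sp, tp, a):
    beta, m = abs(sp - tp), min(sp, tp)   # the paper's s' v t' is the minimum
    F = sum(rf(-k, q) * rf(k + 1, q) * rf(-m, q)
            / (rf(beta + 1, q) * rf(a + 1, q) * factorial(q))
            for q in range(k + 1))        # terminating 3F2 at z = 1
    return (rf(a + 1, k) * comb(k, beta)
            * np.sqrt(rf(m + 1, beta) / rf(m + a + 1, beta)) * F)

assert abs(A_from_formula(2, 1, 1, 0) - 14.0) < 1e-9   # hand-checkable value
for (k, sp, tp, a) in [(2, 1, 1, 0), (2, 0, 1, 0), (3, 2, 4, 1)]:
    f = A_from_formula(k, sp, tp, a)
    assert abs(abs(A_from_integral(k, sp, tp, a)) - abs(f)) < 1e-5 * max(1.0, abs(f))
# vanishing when |s' - t'| > k
assert A_from_formula(2, 0, 3, 0) == 0 and abs(A_from_integral(2, 0, 3, 0)) < 1e-6
```

The vanishing for $\abs{s'-t'}>k$ is just Laguerre orthogonality: $\rho^kL_{s'}^{\alpha}$ has degree $s'+k<t'$.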
Since $A_{k,s,t,\ell} = 0$ for $|s'-t'|>k$, we can and will restrict to the case where $0\leq\beta \leq k\leq K(n)\ll n.$ \begin{Lem}\label{L:prefactor-expansion} There exists $C_2>0$ such that for every $0\leq \beta\leq k$, $0\leq \ell\leq N$, $\ell\equiv N$ (mod 2), \[\abs{\frac{\lr{\frac{N-\ell}{2}+1}_\beta}{\lr{\frac{N-\ell}{2}+\alpha +1 }_\beta} -\left[ 1 - 2\beta\cdot \frac{\ell}{N}+S(\beta,N)\right]} \leq \frac{1+\ell^2}{N^2}\cdot C_2^\beta,\] where $S(\beta,N)$ is $\ell-$independent and satisfies \[\abs{S(\beta,N)}\leq \frac{C_2\beta}{N}.\] \end{Lem} \begin{Lem}\label{L:3F2-expansion} There exists $C_3>0$ such that for every $0\leq \beta\leq k$ and each $0\leq \ell\leq N,$ $\ell\equiv N$ (mod 2), we have \begin{align*} & \abs{ (\alpha +1)_k\pFq{3}{2}{-k, \,k+1,\,-N' }{\beta+1, \alpha +1}{1}- \left[ \frac{(2k)!}{k!(\beta+1)_k} \lr{\frac{N}{2}}^k\lr{1+ \beta \cdot \frac{\ell}{N} +T(\beta,k,N)}\right]}\\ & \leq \frac{k(1+\ell^2)}{N^2} C_3^k \end{align*} where $T(\beta,k,N)$ is $\ell$-independent and satisfies \[\sup_{0\leq \beta\leq k}\abs{T(\beta,k,N)}\leq C_3\cdot\frac{k^2}{N}.\] \end{Lem} We will prove these lemmas in \S\S \ref{S:lemma10-proof}-\ref{S:lemma11-proof} below. Assuming them for the moment, we prove \eqref{E:overlap3} and \eqref{E:overlap4} (which were used to complete the proof of Proposition \ref{P:step2}). 
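The content of Lemma \ref{L:prefactor-expansion} can be illustrated in exact rational arithmetic: the error in the two-term approximation $1-2\beta\,\ell/N$ decays as $N$ grows with $\beta,\ell$ fixed. A small Python sketch (illustrative parameter values only; here $d=2$, so $\alpha=\ell$):

```python
from fractions import Fraction as Fr

def h(beta, N, l, alpha):
    # ((N-l)/2 + 1)_beta / ((N-l)/2 + alpha + 1)_beta, computed exactly
    base = Fr(N - l, 2)
    num = den = Fr(1)
    for j in range(beta):
        num *= base + 1 + j
        den *= base + alpha + 1 + j
    return num / den

beta, l, d = 3, 4, 2
alpha = l + (d - 2) // 2                  # alpha = l when d = 2
err = {N: abs(float(h(beta, N, l, alpha)) - (1 - 2 * beta * l / N))
       for N in (200, 2000)}
# the two-term approximation 1 - 2*beta*l/N improves as N grows
assert err[200] < 0.05 and err[2000] < err[200] / 20
```

The observed decay is consistent with the $\ell$-independent remainder $S(\beta,N)=O(\beta/N)$ and the $O\lr{(1+\ell^2)/N^2}$ error in the lemma.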
Using Lemmas \ref{L:prefactor-expansion} and \ref{L:3F2-expansion}, and the expansion for $A_{k,s,t,\ell}$ from \eqref{E:overlap2}, we find that \begin{align*} A_{k,s,t,\ell} &= \binom{k}{\beta} \lr{\frac{\lr{ N'+ 1}_{\beta}} {\lr{N' + \alpha + 1}_{\beta}}}^{1/2}(\alpha+1)_k\pFq{3}{2}{-k, \,k+1,\,-N' }{\beta+1, \alpha +1}{1}\\ &=\binom{k}{\beta}\cdot\lr{\frac{N}{2}}^k\cdot\lr{\frac{(2k)!}{k!(\beta+1)_k}\cdot \bigg[\lr{1+ \beta \cdot \frac{\ell}{N}} + T(\beta,k,N)\bigg] + O\lr{\frac{1+\ell^2}{N^2}\cdot C_3^k}} \\ &\qquad\cdot \lr{1 - 2\beta\cdot \frac{\ell}{N} + S(\beta,N) + O\lr{\frac{1+\ell^2}{N^2}\cdot C_2^\beta}}^{1/2}\\ &=\binom{k}{\beta}\cdot \lr{\frac{N}{2}}^k\lr{\frac{(2k)!}{k!(\beta+1)_k}\left[1+ T(\beta,k,N)\right]\cdot[1+S(\beta,N)]^{1/2}+O\lr{\frac{1+\ell^2}{N^2}\cdot (C_3C_2)^k}}, \end{align*} with $S(\beta,N) = O(\tfrac{\beta}{N})$, $T(\beta,k,N)=O\lr{\tfrac{k^2}{N}}.$ Since \begin{align*} \binom{k}{\beta}\cdot\frac{(2k)!}{k!(\beta+1)_k}=\binom{k}{\beta}\cdot \binom{2k}{k}\cdot \frac{k!}{(\beta+1)_k} = O(8^k), \end{align*} and $A_{k,s,t,\ell} = 0$ unless $|s'-t'|\leq k$, we obtain \eqref{E:overlap3} and \eqref{E:overlap4}. 
\qed \subsection{Proof of Lemma \ref{L:prefactor-expansion}}\label{S:lemma10-proof} We want to estimate \[h_\beta\lr{\tfrac{\ell}{N}}:=\frac{\lr{\frac{N-\ell}{2}+1}_\beta}{\lr{\frac{N-\ell}{2}+\alpha +1 }_\beta}=\frac{g_\beta\lr{\tfrac{\ell}{N}}}{f_\beta\lr{\tfrac{\ell}{N}}},\] where \[g_\beta(x):=\prod_{j=1}^\beta\lr{1-x+ \frac{2j}{N}},\qquad \text{and}\qquad f_\beta(x):=\prod_{j=0}^{\beta-1}\lr{1+x + \frac{d + 2j}{N}}.\] We have the estimates for $|x|\leq 1$, \begin{align*} g_\beta(x)&= \prod_{j=1}^\beta\lr{1-x+\frac{2j}{N}} \leq \lr{2+\frac{2\beta}{N}}^\beta=O\lr{2^{\beta}e^{\beta^2/N}}\\ g_\beta'(x)&=\sum_{j_1=1}^\beta \prod_{j\neq j_1}\lr{1-x+\frac{2j}{N}}= O\lr{2^{\beta}\beta\cdot e^{\beta^2/N}}\\ g_\beta''(x)&=\sum_{1\leq j_1<j_2\leq \beta} \prod_{j\neq j_1,\,j_2}\lr{1-x+\frac{2j}{N}}= O\lr{2^{\beta}\beta^2\cdot e^{\beta^2/N}}, \end{align*} and we have the analogous estimates for the function $f_{\beta}(x)$. By Taylor's Theorem, \[h_\beta\lr{\tfrac{\ell}{N}}= h_\beta(0)+ \frac{\ell}{N} h'_\beta(0)+\frac{\ell^2}{N^2}O\lr{\norm{h_\beta''}_{L^\infty([0,\ell/N])}},\] and by the estimates above, $h_{\beta}(0) = 1 + O(\beta/N)$ and $h_{\beta}'(0) = -2\beta + O(\beta/N)$. Since \begin{align*} h_\beta''(x)= \frac{f_\beta(x)^2\left[g_\beta''(x)f_\beta(x)-f_\beta''(x)g_\beta(x)\right] - 2f_\beta(x)f_\beta'(x)\left[g_\beta'(x)f_\beta(x)-f_\beta'(x)g_\beta(x)\right]}{f_\beta(x)^4}, \end{align*} and $f_\beta(x)\geq1$, the estimates above also imply that for any $C_2>8$, \[\sup_{x\in [0,\ell/N]}\abs{h_\beta''(x)} \leq C_2^\beta.\] as required. 
\qed \subsection{Proof of Lemma \ref{L:3F2-expansion}}\label{S:lemma11-proof} By definition, \[(\alpha +1)_k\pFq{3}{2}{-k, \,k+1,\,-N' }{\beta+1, \alpha +1}{1} = \sum_{q=0}^k \underbrace{\frac{1}{q!} \frac{(-k)_q(k+1)_q}{(\beta+1)_q}}_{=:a_{q,\beta}} \cdot \underbrace{\frac{(-N')_q (\alpha+1)_k}{(\alpha+1)_q}}_{=:b_q}.\] Let us check that there exists $C>0$ so that \begin{equation}\label{E:small-terms} \sum_{q=0}^{k-2} a_{q,\beta} b_q = O\lr{ \frac{1+\ell^2}{N^2}\cdot \lr{CN}^k}, \end{equation} where the implied constant is independent of $\beta,\ell,k,N.$ Note that $\abs{a_{q,\beta}}\leq \abs{a_{q,0}}.$ Hence, it is sufficient to establish \eqref{E:small-terms} for $\beta=0.$ Define \[f(q):=\abs{\frac{a_{q+1,0}}{a_{q,0}}}=\frac{(k-q)(k+q+1)}{(q+1)^2}.\] We have \[f'(x)= -\frac{2(k-x)(k+x+1)}{(x+1)^3}- \frac{2x+1}{(x+1)^2}<0,\qquad \forall x\in [0,k],\] and hence \[\sup_{q=0,\ldots, k-2}\abs{a_{q,0}}=\abs{a_{q^*,0}},\qquad q^*:=\max \setst{q}{f(q)\geq 1}.\] The equation $f(x)=1$ is \[(k-x)(k+x+1)=(x+1)^2,\] which has a unique positive solution $\eta k$ with $\eta\in [1/2, 1].$ Using Stirling's approximation, we find there exists $C>0$ so that \begin{align*} \sup_{q=0,\ldots, k-2} \abs{a_{q,0}} &=O\lr{ \frac{\lr{k(1+\eta)}!}{\lr{(k\eta)!}^2\lr{k(1-\eta)}!}}=O\lr{\frac{1}{k}C^k}. \end{align*} Hence, to prove \eqref{E:small-terms}, it remains to establish the estimate \begin{equation}\label{E:small-terms2} \abs{ b_q}= O\lr{(1+\ell^2)\, k^2\, N^{k-2}\,4^{k-q}}. 
\end{equation} To do this, write \[\abs{b_q}=\lr{\frac{N}{2}}^k 2^{k-q}\underbrace{\prod_{j=0}^{q-1}\lr{1-\frac{\ell}{N} - \frac{2j}{N}}}_{=:g_1(\ell/N)}\underbrace{\prod_{j=q+1}^k\lr{\frac{\ell +\frac{d-2}{2} + j}{N}}}_{=:f_1(\ell/N)}.\] Since $q\leq k-2,$ we have \[f_1(\ell/N) = \frac{\lr{\ell + \frac{d-2}{2}+k}\lr{\ell +\frac{d-2}{2} + k-1}}{N^2} \prod_{j=q+1}^{k-2}\lr{\frac{\ell + \frac{d-2}{2}+j}{N}}.\] Next, since $\ell \leq N$ and $k\leq N/2,$ we have \[\prod_{j=q+1}^{k-2}\lr{\frac{\ell + \frac{d-2}{2}+j}{N}} \leq 2^{k-q-2}.\] Observing that \[\frac{\lr{\ell + \frac{d-2}{2}+k}\lr{\ell +\frac{d-2}{2} + k-1}}{N^2} = O\lr{k^2 \frac{(1+\ell^2)}{N^2}}\] confirms \eqref{E:small-terms2} and completes the proof of \eqref{E:small-terms}. For the remaining two terms, we write \begin{align*} a_{k,\beta}b_k + a_{k-1,\beta}b_{k-1} & = \frac{1}{k!} \cdot \frac{(-1)^k (2k)!}{(\beta+1)_k} (-N')_k + \frac{1}{(k-1)!} \cdot \frac{(-1)^{k-1} (2k-1)!}{(\beta+1)_{k-1}} (-N')_{k-1}(\alpha+k)\\ & = \frac{(2k)!}{k! (\beta+1)_k}\lr{\frac{N}{2}}^k \tilde{g}\lr{\frac{\ell}{N}}\lr{1 + \frac{\ell}{N}\lr{\beta+k-1} +T(\beta,k,N)} . \end{align*} Here $T(\beta,k,N)=O\lr{\frac{k^2}{N}}$ and is independent of $\ell$, and \[\tilde{g}(x):=\prod_{j=0}^{k-2}\lr{1-x - \frac{2j}{N}} .\] We have \[\tilde{g}(\ell/N) = \tilde{g}(0) + \frac{\ell}{N}\tilde{g}'(0) + O\lr{\frac{\ell^2}{N^2} \sup_{x\in [0,\ell/N]}\abs{\tilde{g}''(x)}},\] where \begin{align*} \tilde{g}(0) = 1 + O(k/N), \qquad \tilde{g}'(0) = (1-k)\lr{1+O(k/N)}, \end{align*} and \[\sup_{x\in [0,\ell/N]}\abs{\tilde{g}''(x)} = \sup_{x\in [0,\ell/N]}\abs{\sum_{0\leq j_1< j_2\leq k-2} \prod_{j\neq j_1, j_2}\lr{1-x - \frac{2j}{N}}}=O(k^2).\] Putting this all together, we obtain \begin{align*} & (\alpha +1)_k\pFq{3}{2}{-k, \,k+1,\,-N' }{\beta+1, \alpha +1}{1} \\ &=\lr{\frac{N}{2}}^k\cdot \left\{\frac{(2k)!}{k! (\beta+1)_k}\lr{1+ \beta\cdot \frac{\ell}{N}+ T(\beta,k, N)}+ O\lr{\frac{(1+\ell^2)}{N^2}C_3^k}\right\} \end{align*} as required. 
\qed \section{Proof of Proposition \ref{P:step1}}\label{S:step1-proof} To prove Proposition \ref{P:step1}, we begin with the following result, which allows us to replace $E_{\ell,n}^V(\varepsilon)$ by a finite number (depending on $n$) of its jets at $\varepsilon=0.$ \begin{proposition}\label{P:step1.1} For any $V\in L^\infty({\mathbb R}_+)$ with $\norm{V}_{L^\infty}=1$, $n$ such that $\hbar_{n}<1$, and any $J=J(n)$ satisfying $\liminf_{n\ensuremath{\rightarrow} \infty} J(n)/\log n=\infty,$ we have \begin{equation} \sup_{\substack{ \ell\leq n,\, \ell\equiv n \text{ (mod 2)}\\ \varepsilon\in [0,\frac{1}{5}]}}\abs{E_{\ell,n}^V(\varepsilon) - \sum_{j=0}^J \frac{\lr{\hbar_n\varepsilon}^j}{j!} \frac{d^j}{d\varepsilon^j}\bigg|_{\varepsilon=0} E_{\ell,n}^{V}(\varepsilon)} = O\lr{ \hbar_n^\infty }. \end{equation} \end{proposition} \begin{proof} Applying Lemma \ref{L:level-spacing} with $W(r) = \varepsilon\hbar_nV(r^2)$, we find that for every $\varepsilon\in [0,1/5]$ \begin{equation}\label{E:spacing-est} \sup_{ \ell\leq n,\, \ell\equiv n \text{ (mod 2)}} d\lr{E_{\ell, n}^V(\varepsilon), \, \Spec(\Op_{\hbar_n, \ell}(\varepsilon))\backslash \set{ E_{\ell, n}^V(\varepsilon)}} > \frac{\hbar_n}{2}, \end{equation} where $d(x, A)$ denotes the distance from a point $x$ to a set $A.$ As explained in \S \ref{S:perturbation-theory}, we have \[\frac{d^j}{d\varepsilon^j}E_{\ell, n}^V(\varepsilon)= (-1)^{j-1}\inprod{V \psi_{\hbar_n, \ell,n}(\varepsilon)}{[G_\ell(\varepsilon)]^{(j-1)}(\psi_{\hbar_n,\ell, n})},\] where \begin{equation}\label{E:Gell-def} G_\ell(\varepsilon)= \lr{\Op_{\hbar_n, \ell}-E_{\ell, n}^V(\varepsilon)}^{-1}\circ \Pi_{\psi_{\hbar_n, \ell, n}^{\perp}}\circ V, \end{equation} and $V$ denotes multiplication by the function $V(r^2).$ Hence, using \eqref{E:spacing-est} and that $\norm{V}_{L^\infty}\leq1$, we find \begin{equation}\label{E:Gell-norm-est} \sup_{\substack{ \ell\leq n,\, \ell\equiv n \text{ (mod 2)}\\ \varepsilon\in [0,1/5]}} \norm{G_\ell(\varepsilon)}\leq 
\frac{2}{\hbar_n}\quad \Rightarrow\quad \sup_{\substack{ \ell\leq n,\, \ell\equiv n \text{ (mod 2)}\\ \varepsilon\in [0,1/5]}}\abs{\frac{d^j}{d\varepsilon^j}E_{ \ell, n}^V(\varepsilon)}\leq \lr{\frac{2}{\hbar_n}}^j. \end{equation} Applying Taylor's theorem then gives \[ \sup_{\substack{ \ell\leq n,\, \ell\equiv n \text{ (mod 2)}\\ \varepsilon\in [0,1/5]}}\abs{E_{\ell,n}^V(\varepsilon) - \sum_{j=0}^J \frac{\lr{\hbar_n\varepsilon}^j}{j!} \frac{d^j}{d\varepsilon^j}\bigg|_{\varepsilon=0} E_{ \ell,n}^{V}(\varepsilon)} = O\lr{\lr{2^J\lr{J+1}!}^{-1}}= O(\hbar_n^\infty)\] since $(J+1)!\geq e^{\lr{-\log \hbar_n}^2}.$ \end{proof} To complete the proof of Proposition \ref{P:step1}, it remains to check that, provided $\hbar_n<1$ and $J(n),K(n)$ satisfy \eqref{E:KJ-assumptions}, we have \begin{equation}\label{E:step1-goal1} \sup_{\substack{ \ell\leq n,\, \ell\equiv n \text{ (mod 2)}\\ 0\leq j \leq J}}\hbar_n^j\abs{\frac{d^j}{d\varepsilon^j}\bigg|_{\varepsilon=0} E_{ \ell,n}^{V}(\varepsilon) - \frac{d^j}{d\varepsilon^j}\bigg|_{\varepsilon=0} E_{\ell,n}^{V_K}(\varepsilon)} = O(\hbar_n^\infty). 
\end{equation} To prove this estimate, we again use \[\frac{d^j}{d\varepsilon^j}\bigg|_{\varepsilon=0} E_{\ell,n}^{V}(\varepsilon)= (-1)^{j-1} \inprod{V\psi_{\hbar_n, \ell, n}}{G_{\ell}^{(j-1)}(\psi_{\hbar_n, \ell,n})},\] where \[G_{\ell}=G_{\ell}(0)=\lr{\Op_{\hbar_n, \ell}(0)-E}^{-1}\circ \Pi_{\psi_{\hbar_n, \ell, n}}^{\perp}\circ V.\] Setting for each $K\geq 1$, \[G_{\ell,K}:=\lr{\Op_{\hbar_n, \ell}(0)-E}^{-1}\circ \Pi_{\psi_{\hbar_n, \ell, n}}^{\perp}\circ V_K,\] we have \[\frac{d^j}{d\varepsilon^j}\bigg|_{\varepsilon=0} E_{\ell, n}^{V_K}(\varepsilon)= (-1)^{j-1} \inprod{V_K\psi_{\hbar_n, \ell, n}}{G_{\ell,K}^{(j-1)}(\psi_{\hbar_n, \ell,n})}\] and \eqref{E:step1-goal1} reduces to showing that for each $j\leq J$ and every $\ell\leq n,\, \ell\equiv n \text{ (mod 2)}$ \begin{equation}\label{E:step1-goal2} \abs{\hbar_n^j\lr{\inprod{V \psi_{\hbar_n, \ell, n}}{G_{\ell}^{(j-1)}(\psi_{\hbar_n, \ell, n})} - \inprod{V_K\psi_{\hbar_n, \ell, n}}{G_{\ell,K}^{(j-1)}(\psi_{\hbar_n, \ell, n})}}} = O(\hbar_n^\infty), \end{equation} with the implied constant independent of $j,\ell,n.$ We will establish \eqref{E:step1-goal2} by induction with the help of the following lemma. \begin{Lem}\label{L:truncate-potential} Suppose $\hbar_n<1$ and $J=J(n),\, K=K(n)$ satisfy \eqref{E:KJ-assumptions}. Then, there exists a constant $C_1>0$ so that if $V$ is a $\delta$-slowly varying potential for the energy $E$, with $\delta\in(0,(C_1E)^{-1})$, \begin{equation} \sup_{\substack{ \abs{m-n}\leq\frac{n}{2}\\ \norm{X}=1}} \abs{\inprod{\lr{V -V_K} \psi_{\hbar_n, \ell, m}} {X}} = O\lr{\hbar_n^\infty}. \end{equation} \end{Lem} \begin{proof} Let $\chi(r)$ be an auxiliary cut-off function that equals $1$ for $r\leq \sqrt{4E}$ and $0$ otherwise. 
Then, since $V$ is bounded, by the exponential decay \eqref{E:Agmon} of $\psi_{\hbar_n, \ell, m}$ \begin{equation} \label{E:truncate-potential1} \sup_{\substack{ \abs{m-n}\leq\frac{n}{2}\\ \norm{X}=1}} \abs{\inprod{\lr{1-\chi}V \psi_{\hbar_n, \ell, m}} {X}} = O\lr{\hbar_n^\infty}. \end{equation} Using the definition \eqref{E:slowly-varying} that $V$ is $\delta$-slowly varying on the support of $\chi$ and the assumption $\liminf_{n\to\infty}K(n)/\log n = \infty$, we have, for all $\delta E$ sufficiently small \begin{equation} \label{E:truncate-potential2} \sup_{\substack{ \abs{m-n}\leq\frac{n}{2}\\ \norm{X}=1}} \abs{\inprod{\chi\lr{V-V_K} \psi_{\hbar_n, \ell, m}} {X}} = O\lr{\hbar_n^\infty}. \end{equation} Finally, again using the exponential decay \eqref{E:Agmon} of $\psi_{\hbar_n, \ell, m}$ and $\limsup_{n\to\infty}K(n)/ n = 0,$ we obtain \begin{equation} \label{E:truncate-potential3} \sup_{\substack{ \abs{m-n}\leq\frac{n}{2}\\ \norm{X}=1}} \abs{\inprod{\lr{1-\chi}V_K \psi_{\hbar_n, \ell, m}} {X}} = O\lr{\hbar_n^\infty}, \end{equation} which completes the proof. \end{proof} To prove \eqref{E:step1-goal2} by induction, note that Lemma \ref{L:truncate-potential} is precisely the base case $j=1.$ Next, suppose we have already shown \eqref{E:step1-goal2} for some $j\geq 1.$ Then, using Lemma \ref{L:truncate-potential} and the norm estimate from \eqref{E:Gell-norm-est}, we have \begin{equation}\label{E:V-to-VK} \hbar_n^{j}\inprod{V\psi_{\hbar_n, \ell, n}}{G_{\ell}^{(j)}(\psi_{\hbar_n, \ell,n})}=\hbar_n^{j}\inprod{V_K\psi_{\hbar_n, \ell, n}}{G_{\ell}^{(j)}(\psi_{\hbar_n, \ell,n})}+ O(\hbar_n^\infty). \end{equation} The adjoint of $G_{\ell}$ is \[G_{\ell}^*=V\circ \lr{\Op_{\hbar_n, \ell}- E}^{-1}\circ \Pi_{\psi_{\hbar_n, \ell, n}}^{\perp}\] and hence \[G_{\ell}^*\lr{V_K \psi_{\hbar_n,\ell, n}}= \hbar_n^{-1}\sum_{\substack{m\text{ s.t. 
} \abs{m-n}\leq 2K\\ m\neq n}}\frac{\inprod{V_K \psi_{\hbar_n,\ell, m}}{\psi_{\hbar_n,\ell, n}}}{m-n}\, V\psi_{\hbar_n, \ell, m}.\] The sum in the previous line is truncated to $\abs{m-n}\leq 2K$ since, by \eqref{E:overlap1} and \eqref{E:overlap2}, the numerator vanishes unless $\abs{m-n}\leq 2K.$ To complete the proof, we write \begin{align*} & \hbar_n^j \inprod{V\psi_{\hbar_n, \ell, n}}{G_{\ell}^{(j)}(\psi_{\hbar_n, \ell,n})} = \hbar_n^j \inprod{V_K \psi_{\hbar_n, \ell, n}}{G_\ell^{(j)}(\psi_{\hbar_n, \ell, n})}+O(\hbar_n^\infty)\\ &= \sum_{\substack{m\text{ s.t. } \abs{m-n}\leq 2K\\ m\neq n}}\frac{\inprod{V_K \psi_{\hbar_n, \ell,m}}{\psi_{\hbar_n,\ell,n}}}{m-n} \hbar_n^{j-1} \inprod{V\psi_{\hbar_n, \ell, m}}{G_{\ell}^{(j-1)}\psi_{\hbar_n, \ell, n}} + O\lr{\hbar_n^\infty}\\ &= \sum_{\substack{m\text{ s.t. } \abs{m-n}\leq 2K\\ m\neq n}}\frac{\inprod{V_K \psi_{\hbar_n,\ell, m}}{\psi_{\hbar_n, \ell,n}}}{m-n} \hbar_n^{j-1} \inprod{V_K\psi_{\hbar_n, \ell, m}}{G_{\ell,K}^{(j-1)}\psi_{\hbar_n, \ell, n}} + O\lr{\hbar_n^\infty}\\ &=\hbar_n^j \inprod{V_K\psi_{\hbar_n, \ell, n}}{G_{\ell,K}^{(j)}(\psi_{\hbar_n, \ell,n})}+O\lr{\hbar_n^\infty}, \end{align*} where in the second-to-last line we used \eqref{E:V-to-VK}, the inductive hypothesis, and the fact that $\limsup_{n\ensuremath{\rightarrow} \infty}\frac{K(n)}{n}=0.$
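Finally, the closed form for $\hbar_n^2 A_{2,n,n,\ell}$ quoted after Proposition \ref{P:step3} can be checked in exact rational arithmetic against the terminating ${}_3F_2$ in \eqref{E:overlap2} (a sketch, not part of the proof; here $\beta=0$ and $n'=(n-\ell)/2$):

```python
from fractions import Fraction as Fr

def A2_hyper(n, l, d):
    # (alpha+1)_2 * 3F2(-2, 3, -n'; 1, alpha+1; 1) from (E:overlap2), expanded
    npr = Fr(n - l, 2)                    # n' = (n - l)/2
    a = l + Fr(d - 2, 2)                  # alpha
    return (a + 1) * (a + 2) + 6 * npr * (a + 2) + 6 * npr * (npr - 1)

def A2_closed(n, l, d):
    # 6 (n/2)^2 (1 - l^2/(3n^2) + (2-d) l/(3n^2) + d/n + d(d+2)/(6n^2))
    return 6 * Fr(n, 2) ** 2 * (1 - Fr(l * l, 3 * n * n)
                                + Fr((2 - d) * l, 3 * n * n)
                                + Fr(d, n) + Fr(d * (d + 2), 6 * n * n))

# exact agreement over a range of dimensions d, levels n, and l = n (mod 2)
for d in (2, 3, 5):
    for n in (6, 7, 40):
        for l in range(n % 2, n + 1, 2):
            assert A2_hyper(n, l, d) == A2_closed(n, l, d)
```

For instance, $(n,\ell,d)=(6,0,2)$ gives $A_{2,n,n,\ell}=74$ on both sides.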
\section{Introduction} As is common in the literature about countable dense homogeneity, by \emph{space} we will always mean ``separable metrizable topological space''. By \emph{countable} we will always mean ``at most countable''. Our reference for general topology is \cite{vanmilli}. Our reference for descriptive set theory is \cite{kechris}. For all other set-theoretic notions, we refer to \cite{kunen}. Recall the following definitions. A space is \emph{Polish} if it admits a complete metric. A subspace of a Polish space is \emph{analytic} if it is the continuous image of a Polish space, and it is \emph{coanalytic} if its complement is analytic. A space $X$ is \emph{countable dense homogeneous} (briefly, $\mathsf{CDH}$) if for every pair $(A,B)$ of countable dense subsets of $X$ there exists a homeomorphism $h:X\longrightarrow X$ such that $h[A]=B$. The fundamental positive result in the theory of $\mathsf{CDH}$ spaces is the following (see \cite[Theorem 5.2]{andersoncurtisvanmill}). In particular, it shows that the Cantor set $2^\omega$, the Baire space $\omega^\omega$, the Euclidean spaces $\mathbb{R}^n$, the spheres $S^n$ and the Hilbert cube $[0,1]^\omega$ are all examples of $\mathsf{CDH}$ spaces. See \cite[Sections 14-16]{arkhangelskiivanmill} for much more on this topic. Recall that a space $X$ is \emph{strongly locally homogeneous} (briefly, $\mathsf{SLH}$) if there exists a base $\mathcal{B}$ for $X$ such that for every $U\in\mathcal{B}$ and $x,y\in U$ there exists a homeomorphism $h:X\longrightarrow X$ such that $h(x)=y$ and $h\upharpoonright (X\setminus U)=\mathsf{id}_{X\setminus U}$. \begin{theorem}[Anderson, Curtis, van Mill]\label{maincdh} Every Polish $\mathsf{SLH}$ space is $\mathsf{CDH}$. \end{theorem} This article is ultimately motivated by the second part of the following question (see \cite{fitzpatrickzhoud}), which is Problem 387 from the book ``Open problems in topology''. 
Recall that a space $X$ is \emph{homogeneous} if for every pair $(x,y)$ of elements of $X$ there exists a homeomorphism $h:X\longrightarrow X$ such that $h(x)=y$. \begin{question}[Fitzpatrick, Zhou]\label{mainquestion} Which subspaces $X$ of $2^\omega$ are such that $X^\omega$ is homogeneous? $\mathsf{CDH}$? \end{question} While the first question was answered by the following remarkable result\footnote{Subsequently, Theorem \ref{hompower} was greatly generalized by Dow and Pearl (see \cite[Theorem 2]{dowpearl}), by combining the methods of Lawrence with the technique of elementary submodels.} (see \cite[page 3057]{lawrence}), the second question is still open. \begin{theorem}[Lawrence]\label{hompower} Let $X$ be a subspace of $2^\omega$. Then $X^\omega$ is homogeneous. \end{theorem} However, if one focuses on \emph{definable} spaces, it is possible to obtain the following result (see \cite[Corollary 2.4]{hrusakzamoraaviles}). \begin{theorem}[Hru\v{s}\'ak, Zamora Avil\'es]\label{borelcdh} Let $X$ be a Borel subspace of $2^\omega$. If $X$ is $\mathsf{CDH}$ then $X$ is Polish. \end{theorem} Furthermore, there exist consistent examples of an analytic subspace of $2^\omega$ and a coanalytic subspace of $2^\omega$ that are $\mathsf{CDH}$ but not Polish (see \cite[Theorem 2.6]{hrusakzamoraaviles}), which show that Theorem \ref{borelcdh} is sharp. Such definable examples could not have been constructed in $\mathsf{ZFC}$ because, under the axiom of Projective Determinacy, Theorem \ref{borelcdh} extends to all projective subspaces of $2^\omega$ (see \cite[Corollary 2.7]{hrusakzamoraaviles}). Using Theorem \ref{borelcdh} (see also the proof of Theorem \ref{coanalyticcdhpower}), it is possible to obtain the following result (see \cite[Theorem 3.2]{hrusakzamoraaviles}), which was the first breakthrough on the second part of Question \ref{mainquestion}. \begin{theorem}[Hru\v{s}\'ak, Zamora Avil\'es]\label{borelcdhpower} Let $X$ be a Borel subspace of $2^\omega$. 
Then the following are equivalent. \begin{itemize} \item $X$ is Polish. \item $X^\omega$ is $\mathsf{CDH}$. \end{itemize} \end{theorem} As above, it is easy to realize that, under the axiom of Projective Determinacy, Theorem \ref{borelcdhpower} extends to all projective subspaces of $2^\omega$. At this point, it seems natural to wonder whether the ``Borel'' assumption in the above theorem can be dropped. In other words, is being Polish the characterization that we are looking for? This is precisely what the following question asks (see \cite[Question 3.2]{hrusakzamoraaviles}). \begin{question}[Hru\v{s}\'ak, Zamora Avil\'es]\label{Gdeltaquestion} Is there a non-Polish subspace $X$ of $2^\omega$ such that $X^\omega$ is $\mathsf{CDH}$? \end{question} The following (see \cite[Theorem 21]{medinimilovich}) is the first consistent answer\footnote{Subsequently, Hern\'andez-Guti\'errez and Hru\v{s}\'ak showed that both $\mathcal{F}$ and $\mathcal{F}^\omega$ are $\mathsf{CDH}$ whenever $\mathcal{F}$ is a non-meager $\mathsf{P}$-filter on $\omega$ (see \cite[Theorem 1.6]{hernandezgutierrezhrusak}). In fact, as it was recently shown by Kunen, Medini and Zdomskyy, a filter on $\omega$ is $\mathsf{CDH}$ if and only if it is a non-meager $\mathsf{P}$-filter (see \cite[Theorem 10]{kunenmedinizdomskyy}). However, it is a long-standing open problem whether non-meager $\mathsf{P}$-filters exist in $\mathsf{ZFC}$ (see \cite{justmathiasprikrysimon} or \cite[Section 4.4.C]{bartoszynskijudah}).} to the above question, where ultrafilters on $\omega$ are viewed as subspaces of $2^\omega$ through characteristic functions. \begin{theorem}[Medini, Milovich]\label{consistentanswer} Assume that $\mathsf{MA}(\mathrm{countable})$ holds. Then there exists a non-principal ultrafilter $\mathcal{U}$ on $\omega$ such that $\mathcal{U}^\omega$ is $\mathsf{CDH}$. 
\end{theorem} Since a non-principal ultrafilter on $\omega$ can never be analytic or coanalytic (see \cite[Section 2]{medinimilovich}), the following question seems natural (see \cite[Question 6]{medinimilovich}). \begin{question}[Medini, Milovich]\label{optimizedefinability} Is there a non-Polish analytic subspace $X$ of $2^\omega$ such that $X^\omega$ is $\mathsf{CDH}$? Coanalytic? \end{question} We will give a stronger version of Theorem \ref{borelcdhpower} (namely, Theorem \ref{coanalyticcdhpower}) and show that this version is sharp (see Theorem \ref{mainexample}), while simultaneously answering Question \ref{Gdeltaquestion} and Question \ref{optimizedefinability}. The countable dense homogeneity of the example given by Theorem \ref{mainexample} will follow from Theorem \ref{main}, whose proof uses the technique of Knaster-Reichbach covers. Finally, by combining Theorem \ref{main} with several results about $\omega$-th powers, we will obtain a simple sufficient condition for the countable dense homogeneity of $X^\omega$ (see Theorem \ref{sufficient}). \section{Some preliminary notions} Recall that a space is \emph{crowded} if it is non-empty and it has no isolated points. Given spaces $X$ and $Y$, we will write $X\approx Y$ to mean that $X$ and $Y$ are homeomorphic. Given a space $Z$, we will say that a subspace $S$ of $Z$ is a \emph{copy} of a space $X$ if $S\approx X$. The following four classical results are used freely throughout this entire article (see \cite[Theorem 1.5.5]{vanmilli} and \cite[Theorem 1.9.8 and Corollary 1.9.9]{vanmilli}, \cite[Theorem A.6.3]{vanmilli}, \cite[Theorem 13.6]{kechris} and \cite[Lemma A.6.2]{vanmilli} respectively). \begin{theorem} Let $X$ be a zero-dimensional space. \begin{itemize} \item If $X$ is compact and crowded then $X\approx 2^\omega$. \item If $X$ is Polish and nowhere locally compact then $X\approx\omega^\omega$. \end{itemize} \end{theorem} \begin{theorem} Let $X$ be a subspace of a Polish space $Z$. 
Then $X$ is Polish if and only if $X$ is a $\mathsf{G_\delta}$ subset of $Z$. \end{theorem} \begin{theorem} Let $Z$ be a Polish space. If $X$ is an uncountable Borel subspace of $Z$ then $X$ contains a copy of $2^\omega$. \end{theorem} \begin{proposition} Let $I$ be a countable set. If $X_i$ is Polish for every $i\in I$ then $\prod_{i\in I}X_i$ is Polish. \end{proposition} Recall that a space $X$ is \emph{completely Baire} (briefly, $\mathsf{CB}$) if every closed subspace of $X$ is a Baire space. For a proof of the following result, see \cite[Corollary 21.21]{kechris} and \cite[Corollary 1.9.13]{vanmilli}. \begin{theorem}[Hurewicz]\label{hurewicz} Let $X$ be a space. Consider the following conditions. \begin{enumerate} \item\label{polishcb} $X$ is Polish. \item\label{cb} $X$ is $\mathsf{CB}$. \item\label{noQ} $X$ does not contain any closed copy of $\mathbb{Q}$. \end{enumerate} The implications $(\ref{polishcb})\rightarrow(\ref{cb})\leftrightarrow (\ref{noQ})$ hold for every $X$. If $X$ is a coanalytic subspace of some Polish space then the implication $(\ref{polishcb}) \leftarrow (\ref{cb})$ holds as well. \end{theorem} Recall that a \emph{$\lambda$-set} is a space in which every countable set is $\mathsf{G_\delta}$. Observe that no $\lambda$-set can contain a copy of $2^\omega$. Recall that a \emph{$\lambda'$-set} is a subspace $X$ of $2^\omega$ such that $X\cup D$ is a $\lambda$-set for every countable $D\subseteq 2^\omega$. For a proof of Lemma \ref{unionlambda}, see \cite[Theorem 7.2]{millers}. For a proof of Theorem \ref{zfclambda}, which is based on the existence of a Hausdorff gap, see \cite[Theorem 5.5]{millers} and the argument that follows it. \begin{lemma}[Sierpi\'{n}ski]\label{unionlambda} A countable union of $\lambda'$-sets is a $\lambda'$-set. \end{lemma} \begin{theorem}[Sierpi\'{n}ski]\label{zfclambda} There exists a $\lambda'$-set of size $\omega_1$. 
\end{theorem} Recall that a subspace $B$ of an uncountable Polish space $Z$ is a \emph{Bernstein set} if $B\cap K\neq\varnothing$ and $(Z\setminus B)\cap K\neq\varnothing$ for every copy $K$ of $2^\omega$ in $Z$. It is easy to see that Bernstein sets exist in $\mathsf{ZFC}$, and that they never have the property of Baire (see \cite[Example 8.24]{kechris}). Using Theorem \ref{hurewicz}, one can show that every Bernstein set is $\mathsf{CB}$. \section{The property of Baire in the restricted sense} All the results in this section are classical, and they will be needed in the next section. The exposition is based on \cite[Appendix D]{medinit}. Given a space $Z$, we will denote by $\mathcal{B}(Z)$ the collection of all subsets of $Z$ that have the property of Baire. For proofs of the following two well-known results, see \cite[Proposition 8.22]{kechris} and \cite[Proposition 8.23]{kechris} respectively. \begin{proposition}\label{sigmapb} Let $Z$ be a space. Then $\mathcal{B}(Z)$ is the smallest $\sigma$-algebra of subsets of $Z$ containing all open sets and all meager sets. \end{proposition} \begin{proposition}\label{equivalentpb} Let $Z$ be a space. Then the following conditions are equivalent for every subset $X$ of $Z$. \begin{itemize} \item $X\in\mathcal{B}(Z)$. \item $X=G\cup M$, where $G$ is a $\mathsf{G_\delta}$ subset of $Z$ and $M$ is a meager subset of $Z$. \end{itemize} \end{proposition} Recall that a subset $X$ of a space $Z$ has the \emph{property of Baire in the restricted sense} if $X\cap S\in\mathcal{B}(S)$ for every subspace $S$ of $Z$ (see \cite[Subsection VI of Section 11]{kuratowski}). We will denote by $\mathcal{B}_r(Z)$ the collection of subsets of $Z$ that have the property of Baire in the restricted sense. Using Proposition \ref{sigmapb}, it is easy to check that $\mathcal{B}_r(Z)$ is a $\sigma$-algebra. The inclusion $\mathcal{B}_r(Z)\subseteq\mathcal{B}(Z)$ is obvious. 
To see that the reverse inclusion need not hold, let $Z=2^\omega\times 2^\omega$ and fix $z\in 2^\omega$. Let $X$ be a Bernstein set in $S=\{z\}\times 2^\omega$. In particular, $X\cap S=X\notin\mathcal{B}(S)$, so $X\notin\mathcal{B}_r(Z)$. However, since $X$ is nowhere dense in $Z$, it is clear that $X\in\mathcal{B}(Z)$. Notice that the same example $X$ shows that, in the following proposition, the hypothesis ``$X\in\mathcal{B}_r(Z)$'' cannot be weakened to ``$X\in\mathcal{B}(Z)$''. \begin{proposition}\label{dichotomy} Let $Z$ be a Polish space, and assume that $X\in\mathcal{B}_r(Z)$. Then either $X$ has a dense Polish subspace or $X$ is not Baire. \end{proposition} \begin{proof} Since $X\in\mathcal{B}(\mathsf{cl}(X))$, by Proposition \ref{equivalentpb}, there exist a $\mathsf{G_\delta}$ subset $G$ of $\mathsf{cl}(X)$ and a meager subset $M$ of $\mathsf{cl}(X)$ such that $X=M\cup G$. Notice that $G$ is Polish because $\mathsf{cl}(X)$ is Polish. Furthermore, since $X$ is dense in $\mathsf{cl}(X)$, the set $M$ is meager in $X$ as well. Therefore, if $G$ is dense in $X$, then the first alternative will hold. Otherwise, the complement of the closure of $G$ in $X$ will be a non-empty open subset of $X$ contained in $M$, hence meager in $X$, so the second alternative will hold. \end{proof} Finally, we will point out a significant class of sets that have the property of Baire in the restricted sense. Given a Polish space $Z$, we will denote by $\mathcal{A}_\sigma(Z)$ the $\sigma$-algebra of subsets of $Z$ generated by the analytic sets. \begin{proposition}\label{sigmaanalytic} Let $Z$ be a Polish space. Then $\mathcal{A}_\sigma(Z)\subseteq\mathcal{B}_r(Z)$. \end{proposition} \begin{proof} Since, as we have already observed, $\mathcal{B}_r(Z)$ is a $\sigma$-algebra, it will be enough to show that every analytic subset of $Z$ has the property of Baire in the restricted sense. Trivially, every closed subset of $Z$ has the property of Baire in the restricted sense.
Therefore, since every analytic set is obtained by applying Souslin operation $\mathcal{A}$ to a family of closed sets (see \cite[Theorem 25.7]{kechris}), it will be enough to show that the property of Baire in the restricted sense is preserved by operation $\mathcal{A}$. This is a straightforward corollary of the classical fact that the property of Baire is preserved by operation $\mathcal{A}$ (see \cite[Corollary 29.14]{kechris}). \end{proof} \section{Strengthening a result of Hru\v{s}\'ak and Zamora Avil\'es} The main result of this section is Theorem \ref{coanalyticcdhpower}, which gives the promised strengthening of Theorem \ref{borelcdhpower} and answers the second part of Question \ref{optimizedefinability}. We will need a few preliminaries. Proposition \ref{notcdh} first appeared as \cite[Proposition 13]{kunenmedinizdomskyy}. Proposition \ref{meagerdenseGdelta} first appeared as \cite[Lemma 3.2]{fitzpatrickzhoub}. Corollary \ref{cdhlambda} first appeared as the first part of \cite[Theorem 3.4]{fitzpatrickzhoub}. Proposition \ref{baire} first appeared as \cite[Theorem 3.1]{hrusakzamoraaviles}. \begin{proposition}[Kunen, Medini, Zdomskyy]\label{notcdh} Let $X$ be a space that is not $\mathsf{CB}$ but has a dense $\mathsf{CB}$ subspace. Then $X$ is not $\mathsf{CDH}$. \end{proposition} \begin{proof} Let $D$ be a dense $\mathsf{CB}$ subspace of $X$, and let $A$ be a countable dense subset of $D$. By Theorem \ref{hurewicz}, there exists a closed subspace $Q$ of $X$ that is homeomorphic to $\mathbb{Q}$. Extend $Q$ to a countable dense subset $B$ of $X$. Clearly there is no homeomorphism $h:X\longrightarrow X$ such that $h[A]=B$: given such an $h$, the set $h^{-1}[Q]$ would be a copy of $\mathbb{Q}$ contained in $A\subseteq D$ and closed in $X$, hence closed in $D$, which contradicts the fact that $D$ is $\mathsf{CB}$ by Theorem \ref{hurewicz}. \end{proof} \begin{proposition}[Fitzpatrick, Zhou]\label{meagerdenseGdelta} Every meager space has a countable dense $\mathsf{G_\delta}$ subset. \end{proposition} \begin{proof} Let $\{U_n:n\in\omega\}$ be a countable base for $X$. Assume that $X=\bigcup_{\ell\in\omega}K_\ell$, where each $K_\ell$ is a closed nowhere dense subset of $X$.
Let $D=\{d_n:n\in\omega\}$, where each $d_n\in U_n\setminus\bigcup_{\ell<n}K_\ell$. It is clear that $D$ is a countable dense subset of $X$. To see that $D$ is $\mathsf{G_\delta}$, notice that $$ X\setminus D=\bigcup_{\ell\in\omega}(K_\ell\setminus\{d_n:n\leq\ell\}) $$ is $\mathsf{F_\sigma}$ because each $K_\ell\setminus\{d_n:n\leq\ell\}$ is $\mathsf{F_\sigma}$. \end{proof} \begin{corollary}[Fitzpatrick, Zhou]\label{cdhlambda} Let $X$ be a meager $\mathsf{CDH}$ space. Then $X$ is a $\lambda$-set. \end{corollary} \begin{proof} By Proposition \ref{meagerdenseGdelta}, there exists a countable dense $\mathsf{G_\delta}$ subset $A$ of $X$. Now let $D$ be an arbitrary countable subset of $X$. Extend $D$ to a countable dense subset $B$ of $X$. Notice that $B$ is $\mathsf{G_\delta}$ because there exists a homeomorphism $h:X\longrightarrow X$ such that $h[A]=B$. Since $B\setminus D$ is countable, it follows that $D$ is $\mathsf{G_\delta}$. \end{proof} \begin{proposition}[Hru\v{s}\'ak, Zamora Avil\'es]\label{baire} Let $X$ be a space such that $X^\omega$ is $\mathsf{CDH}$. Then $X$ is Baire. \end{proposition} \begin{proof} If $|X|\leq 1$ then $X$ is obviously Baire, so assume that $|X|\geq 2$. In particular, $X^\omega$ contains a copy of $2^\omega$. Assume, in order to get a contradiction, that $U$ is a non-empty meager open subset of $X$. Let $M_n=\{x\in X^\omega:x(n)\in U\}$ for $n\in\omega$, and observe that each $M_n$ is a meager subset of $X^\omega$. Notice that $X^\omega$ is meager because $$ X^\omega=(X\setminus U)^\omega\cup\bigcup_{n\in\omega}M_n $$ and $(X\setminus U)^\omega$ is a closed nowhere dense subset of $X^\omega$. Therefore, $X^\omega$ is a $\lambda$-set by Corollary \ref{cdhlambda}. This contradicts the fact that $X^\omega$ contains a copy of $2^\omega$. \end{proof} \begin{theorem}\label{coanalyticcdhpower} Let $X$ be a coanalytic subspace of $2^\omega$. Then the following are equivalent. \begin{enumerate} \item\label{polish} $X$ is Polish. 
\item\label{cdh} $X^\omega$ is $\mathsf{CDH}$. \end{enumerate} \end{theorem} \begin{proof} In order to prove the implication $(\ref{polish})\rightarrow (\ref{cdh})$, assume that $X$ is Polish and that $|X|\geq 2$. Then $X^\omega$ is a crowded zero-dimensional Polish space that is either compact or nowhere locally compact. It follows that $X^\omega\approx 2^\omega$ or $X^\omega\approx\omega^\omega$. In both cases, $X^\omega$ is homogeneous and zero-dimensional, hence $\mathsf{SLH}$. In conclusion, $X^\omega$ is $\mathsf{CDH}$ by Theorem \ref{maincdh}. Notice that Theorem \ref{sufficient} gives an alternative proof of the implication $(\ref{polish})\rightarrow (\ref{cdh})$, since being Polish is obviously stronger than being countably controlled (see Definition \ref{cc}). In order to prove the implication $(\ref{cdh})\rightarrow (\ref{polish})$, assume that $X^\omega$ is $\mathsf{CDH}$. By Proposition \ref{baire}, it follows that $X$ is Baire. Clearly $X\in\mathcal{A}_\sigma(2^\omega)$, so $X\in\mathcal{B}_r(2^\omega)$ by Proposition \ref{sigmaanalytic}. Therefore, $X$ has a dense Polish subspace by Proposition \ref{dichotomy}. In particular, $X^\omega$ has a dense $\mathsf{CB}$ subspace, hence it is $\mathsf{CB}$ by Proposition \ref{notcdh}. Notice that $X$ is homeomorphic to a closed subspace of $X^\omega$, so it is $\mathsf{CB}$ as well. Since $X$ is coanalytic, it follows that $X$ is Polish by Theorem \ref{hurewicz}. \end{proof} \section{Knaster-Reichbach covers} The results in this section and the next are known and by no means optimal: we simply tried to make the main part of this article as self-contained as possible. Knaster-Reichbach covers were introduced in \cite{knasterreichbach} and have been successfully applied by several authors, including van Engelen, Medvedev and Ostrovski{\u\i}. 
Let us mention for example the articles \cite{vanengelen}, \cite{medvedevf}, \cite{medvedevb}, \cite{medvedevc}, \cite{medvedevp} and \cite{ostrovskii}, where one can find much more general results than the ones stated here. The first application of this technique to the theory of countable dense homogeneity was recently given by Hern\'andez-Guti\'errez, Hru\v{s}\'ak and van Mill in \cite{hernandezgutierrezhrusakvanmill}. Fix a homeomorphism $h:E\longrightarrow F$ between closed nowhere dense subsets of $2^\omega$. We will say that $\langle \mathcal{V},\mathcal{W},\psi\rangle$ is a \emph{Knaster-Reichbach cover} (briefly, a $\mathsf{KR}$-cover) for $\langle 2^\omega\setminus E,2^\omega\setminus F,h\rangle$ if the following conditions hold. \begin{itemize} \item $\mathcal{V}$ is a partition of $2^\omega\setminus E$ consisting of non-empty clopen subsets of $2^\omega$. \item $\mathcal{W}$ is a partition of $2^\omega\setminus F$ consisting of non-empty clopen subsets of $2^\omega$. \item $\psi:\mathcal{V}\longrightarrow\mathcal{W}$ is a bijection. \item If $f:2^\omega\longrightarrow 2^\omega$ is a bijection such that $h\subseteq f$ and $f[V]=\psi(V)$ for every $V\in\mathcal{V}$, then $f$ is continuous on $E$ and $f^{-1}$ is continuous on $F$. \end{itemize} \noindent Whenever $f:2^\omega\longrightarrow 2^\omega$ is a bijection such that $f[V]=\psi(V)$ for every $V\in\mathcal{V}$, we will say that $f$ \emph{respects} $\psi$. The following lemma will be the key ingredient at the inductive step in the proof of Theorem \ref{main}. The proof given here is inspired by \cite[Theorem 3.1]{vanmillc}. \begin{lemma}\label{KRexists} Let $h:E\longrightarrow F$ be a homeomorphism between closed nowhere dense subsets of $2^\omega$. Then there exists a $\mathsf{KR}$-cover for $\langle 2^\omega\setminus E,2^\omega\setminus F,h\rangle$. \end{lemma} \begin{proof} The case in which $E$ and $F$ are empty is trivial, so assume that $E$ and $F$ are non-empty. 
Let $X\oplus Y$ be the disjoint topological sum of two spaces that are homeomorphic to $2^\omega$. Without loss of generality, assume that $E$ is a subspace of $X$ and $F$ is a subspace of $Y$. Consider the equivalence relation on $X\oplus Y$ obtained by identifying $x$ with $h(x)$ for every $x\in E$. Denote by $Z$ the corresponding quotient space. For simplicity, we will freely identify an element of $X\oplus Y$ with its equivalence class in $Z$. Notice that $Z$ is separable and metrizable by \cite[Theorem A.11.2]{vanmilli}. Furthermore, it is clear that $Z$ is compact. \newpage Fix an admissible metric $\mathsf{d}$ on $Z$. Fix a partition $\mathcal{V}$ of $X\setminus E$ consisting of non-empty clopen subsets of $X$ and a partition $\mathcal{W}$ of $Y\setminus F$ consisting of non-empty clopen subsets of $Y$ such that $\mathsf{diam}(V_k)\to 0$ and $\mathsf{diam}(W_k)\to 0$ as $k\to\infty$, where $\mathcal{V}=\{V_k:k\in\omega\}$ and $\mathcal{W}=\{W_k:k\in\omega\}$ are injective enumerations. Pick $a_k\in V_k$ and $b_k\in W_k$ for each $k$. It is easy to check that the sequences $\langle a_k:k\in\omega\rangle$ and $\langle b_k:k\in\omega\rangle$ have the same set of limit points in $Z$, namely $E=F$. Therefore, by a result of von Neumann from \cite[pages 11-12]{vonneumann} (see also \cite{halmos} and \cite{yorke} for simpler proofs), there exists a bijection $\pi:\omega\longrightarrow\omega$ such that $\mathsf{d}(a_k,b_{\pi(k)})\to 0$ as $k\to\infty$. Define $\psi:\mathcal{V}\longrightarrow\mathcal{W}$ by setting $\psi(V_k)=W_{\pi(k)}$ for $k\in\omega$. We claim that $\langle \mathcal{V},\mathcal{W},\psi\rangle$ is a $\mathsf{KR}$-cover for $\langle 2^\omega\setminus E,2^\omega\setminus F,h\rangle$. Let $f:X\longrightarrow Y$ be a bijection that extends $h$ and respects $\psi$. We need to show that $f$ is continuous on $E$ and $f^{-1}$ is continuous on $F$. Since these proofs are similar, we will only deal with the first statement. 
So fix $x\in E$, and let $\langle x_n:n\in\omega\rangle$ be a sequence that converges to $x$ in $X$. Let $y=f(x)$, and notice that $x=y$ in $Z$. We will show that the sequence $\langle f(x_n):n\in\omega\rangle$ converges to $y$ in $Y$. Fix a neighborhood $W$ of $y$ in $Y$. Let $\varepsilon >0$ be such that $\mathsf{B}(y,\varepsilon)\cap Y\subseteq W$, where $\mathsf{B}(y,\varepsilon)=\{z\in Z:\mathsf{d}(z,y)<\varepsilon\}$. It will be enough to show that $f(x_n)\in\mathsf{B}(y,\varepsilon)$ for all but finitely many values of $n$. The case in which $x_n\in E$ for all but finitely many values of $n$ is trivial by the continuity of $h$, so assume that $x_n\notin E$ for infinitely many values of $n$. For every $n\in\omega$ such that $x_n\notin E$, define $k_n\in\omega$ to be the unique index such that $x_n\in V_{k_n}$, and notice that $f(x_n)\in W_{\pi(k_n)}$ because $f$ respects $\psi$. Furthermore, it is easy to check that $b_{\pi(k_n)}\to y$ as $n\to\infty$, since $a_{k_n}\to x=y$ and $\mathsf{d}(a_{k_n},b_{\pi(k_n)})\to 0$ as $n\to\infty$. Therefore, given that $$ \mathsf{d}(f(x_n),y)\leq\mathsf{d}(f(x_n),b_{\pi(k_n)})+\mathsf{d}(b_{\pi(k_n)},y), $$ there exists $m\in\omega$ such that $f(x_n)\in\mathsf{B}(y,\varepsilon)$ whenever $n\geq m$ and $x_n\notin E$. Finally, since $h$ is continuous, we can also assume without loss of generality that $f(x_n)\in\mathsf{B}(y,\varepsilon)$ whenever $n\geq m$ and $x_n\in E$. \end{proof} \section{Knaster-Reichbach systems} Throughout this section, we will denote by $\mathsf{d}$ a fixed admissible metric on $2^\omega$. We will say that a sequence $\langle\langle h_n,\mathcal{K}_n\rangle:n\in\omega\rangle$ is a \emph{Knaster-Reichbach system} (briefly, a $\mathsf{KR}$-system) if the following conditions are satisfied. \begin{enumerate} \item Each $h_n:E_n\longrightarrow F_n$ is a homeomorphism between closed nowhere dense subsets of $2^\omega$. \item\label{hincreasing} $h_m\subseteq h_n$ whenever $m\leq n$.
\item\label{KRcovercondition} Each $\mathcal{K}_n=\langle \mathcal{V}_n,\mathcal{W}_n,\psi_n\rangle$ is a $\mathsf{KR}$-cover for $\langle 2^\omega\setminus E_n,2^\omega\setminus F_n,h_n\rangle$. \item\label{mesh} $\mathsf{mesh}(\mathcal{V}_n)\leq 2^{-n}$ and $\mathsf{mesh}(\mathcal{W}_n)\leq 2^{-n}$ for each $n$. \item\label{refinement} $\mathcal{V}_m$ refines $\mathcal{V}_n$ and $\mathcal{W}_m$ refines $\mathcal{W}_n$ whenever $m\geq n$. \item\label{coherencepsi} Given $U\in\mathcal{V}_m$ and $V\in\mathcal{V}_n$ with $m\geq n$, then $U\subseteq V$ if and only if $\psi_m(U)\subseteq\psi_n(V)$. \end{enumerate} \begin{theorem}\label{KRsystem} Assume that $\langle\langle h_n,\mathcal{K}_n\rangle:n\in\omega\rangle$ is a $\mathsf{KR}$-system. Then there exists a homeomorphism $h:2^\omega\longrightarrow 2^\omega$ such that $h\supseteq\bigcup_{n\in\omega}h_n$. \end{theorem} \begin{proof} Let $E=\bigcup_{n\in\omega}E_n$ and $F=\bigcup_{n\in\omega}F_n$. Given $x\in 2^\omega\setminus E$ and $n\in\omega$, denote by $V_n^x$ the unique element of $\mathcal{V}_n$ that contains $x$. Given $y\in 2^\omega\setminus F$ and $n\in\omega$, denote by $W_n^y$ the unique element of $\mathcal{W}_n$ that contains $y$. \newpage If $x\in E_n$ for some $n\in\omega$, define $h(x)=h_n(x)$. The choice of $n$ is irrelevant by condition (\ref{hincreasing}). Now assume that $x\in 2^\omega\setminus E$. Notice that every finite subset of $\{\psi_n(V_n^x):n\in\omega\}$ has non-empty intersection by conditions (\ref{refinement}) and (\ref{coherencepsi}). Since $2^\omega$ is compact and condition (\ref{mesh}) holds, it follows that there exists $y\in 2^\omega$ such that $\bigcap_{n\in\omega}\psi_n(V_n^x)=\{y\}$. Set $h(x)=y$. This concludes the definition of $h$. 
Similarly, define $g:2^\omega\longrightarrow 2^\omega$ by setting $g(y)=h_n^{-1}(y)$ if $y\in F_n$ for some $n\in\omega$, and $g(y)=x$ if $y\in 2^\omega\setminus F$, where $x\in 2^\omega$ is such that $\bigcap_{n\in\omega}\psi_n^{-1}(W_n^y)=\{x\}$. It is easy to check that $g=h^{-1}$, hence $h$ is a bijection. It is straightforward to verify that $h$ respects $\psi_n$ for each $n$. Therefore, by condition (\ref{KRcovercondition}), $h$ is continuous on $E$ and $h^{-1}$ is continuous on $F$. It remains to show that $h$ is continuous on $2^\omega\setminus E$ and that $h^{-1}$ is continuous on $2^\omega\setminus F$. Since these proofs are similar, we will only deal with the first statement. Fix $x\in 2^\omega\setminus E$, and let $y=h(x)$. Fix a neighborhood $W$ of $y$ in $2^\omega$. By condition (\ref{mesh}), there exists $n\in\omega$ such that $W_n^y\subseteq W$. It remains to observe that $h[V_n^x]=W_n^y$. \end{proof} \begin{corollary}\label{KRcor} Let $X$ be a subspace of $2^\omega$. Assume that $\langle\langle h_n,\mathcal{K}_n\rangle:n\in\omega\rangle$ is a $\mathsf{KR}$-system satisfying the following additional conditions. \begin{enumerate} \item[(7)] $2^\omega\setminus\bigcup_{n\in\omega}E_n\subseteq X$. \item[(8)] $2^\omega\setminus\bigcup_{n\in\omega}F_n\subseteq X$. \item[(9)] $h_n[X\cap E_n]=X\cap F_n$ for each $n$. \end{enumerate} Then there exists a homeomorphism $h:2^\omega\longrightarrow 2^\omega$ such that $h\supseteq\bigcup_{n\in\omega}h_n$ and $h[X]=X$. \end{corollary} \begin{proof} By Theorem \ref{KRsystem}, there exists a homeomorphism $h:2^\omega\longrightarrow 2^\omega$ such that $h\supseteq\bigcup_{n\in\omega}h_n$. In order to show that $h[X]\subseteq X$, fix $x\in X$. If $x\in\bigcup_{n\in\omega}E_n$, then $h(x)\in X$ by condition (9). On the other hand, if $x\in 2^\omega\setminus\bigcup_{n\in\omega}E_n$ then $h(x)\in 2^\omega\setminus\bigcup_{n\in\omega}F_n$, which implies $h(x)\in X$ by condition (8). 
A similar argument shows that $h^{-1}[X]\subseteq X$. It follows that $h[X]=X$. \end{proof} \section{The main result} The following two definitions are crucial for our purposes. Recall that a \emph{$\pi$-base} for a space $Z$ is a collection $\mathcal{B}$ consisting of non-empty open subsets of $Z$ such that for every non-empty open subset $U$ of $Z$ there exists $V\in\mathcal{B}$ such that $V\subseteq U$. \begin{definition} Let $X$ be a subspace of $Z$. We will say that $X$ is \emph{h-homogeneously embedded} in $Z$ if there exists a $\pi$-base $\mathcal{B}$ for $Z$ consisting of clopen sets and homeomorphisms $\varphi_U:Z\longrightarrow U$ for $U\in\mathcal{B}$ such that $\varphi_U[X]=X\cap U$. \end{definition} \begin{definition}\label{cc} We will say that a space $X$ is \emph{countably controlled} if for every countable $D\subseteq X$ there exists a Polish subspace $G$ of $X$ such that $D\subseteq G\subseteq X$. \end{definition} The technique used in the proof of the following theorem is essentially due to Medvedev (see \cite[Theorem 5]{medvedevp}). \begin{theorem}\label{main} Assume that $X$ is h-homogeneously embedded in $2^\omega$ and countably controlled. Then $X$ is $\mathsf{CDH}$. \end{theorem} \begin{proof} If $X$ is empty then $X$ is obviously $\mathsf{CDH}$, so assume that $X$ is non-empty. Since $X$ is h-homogeneously embedded in $2^\omega$, there exists a (countable) $\pi$-base $\mathcal{B}$ for $2^\omega$ consisting of clopen sets and homeomorphisms $\varphi_U:2^\omega\longrightarrow U$ for $U\in\mathcal{B}$ such that $\varphi_U[X]=X\cap U$. In particular, $X$ is dense in $2^\omega$. Fix a pair $(A,B)$ of countable dense subsets of $X$. Let $D_0=A\cup B$, and given $D_n$ for some $n\in\omega$, define $$ D_{n+1}=\bigcup\{\varphi_U^{-1}[D_n\cap U]:U\in\mathcal{B}\}. $$ In the end, let $D=\bigcup_{n\in\omega}D_n$. It is easy to check that $D$ is a countable dense subset of $2^\omega$ such that $A\cup B\subseteq D\subseteq X$. 
Furthermore, it is clear that $\varphi_U^{-1}(x)\in D$ whenever $x\in D$ and $U\in\mathcal{B}$ is such that $x\in U$. Since $X$ is countably controlled, it is possible to find a $\mathsf{G_\delta}$ subset $G$ of $2^\omega$ such that $D\subseteq G\subseteq X$. By removing countably many points from $G$, we can assume without loss of generality that $2^\omega\setminus G$ is dense in $2^\omega$. Fix closed nowhere dense subsets $K_\ell$ of $2^\omega$ for $\ell\in\omega$ such that $2^\omega\setminus G=\bigcup_{\ell\in\omega}K_\ell$. Also fix the following injective enumerations. \begin{itemize} \item $A=\{a_i:i\in\omega\}$. \item $B=\{b_j:j\in\omega\}$. \end{itemize} Fix an admissible metric $\mathsf{d}$ on $2^\omega$ such that $\mathsf{diam}(2^\omega)\leq 1$. Our strategy is to construct a suitable $\mathsf{KR}$-system $\langle\langle h_n,\mathcal{K}_n\rangle:n\in\omega\rangle$, then apply Corollary \ref{KRcor} to get a homeomorphism $h:2^\omega\longrightarrow 2^\omega$ such that $h\supseteq\bigcup_{n\in\omega}h_n$ and $h[X]=X$. We will use the same notation as in Section 6. In particular, $h_n:E_n\longrightarrow F_n$ and $\mathcal{K}_n=\langle\mathcal{V}_n,\mathcal{W}_n,\psi_n\rangle$ for each $n$. Of course, we will have to make sure that conditions (1)-(6) in the definition of a $\mathsf{KR}$-system are satisfied. Furthermore, we will make sure that the following additional conditions are satisfied for every $n\in\omega$. \begin{enumerate} \item[(I)] $\bigcup_{\ell<n}K_\ell\subseteq E_n$. \item[(II)] $\bigcup_{\ell<n}K_\ell\subseteq F_n$. \item[(III)] $h_n[X\cap E_n]=X\cap F_n$. \item[(IV)] $\{a_i:i<n\}\subseteq E_n$. \item[(V)] $\{b_j:j<n\}\subseteq F_n$. \item[(VI)] $h_n[A\cap E_n]=B\cap F_n$. \end{enumerate} Conditions (I)-(III) will guarantee that conditions (7)-(9) in Corollary \ref{KRcor} hold. On the other hand, conditions (IV)-(VI) will guarantee that $h[A]=B$. 
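More precisely, once a bijection $h$ extending $\bigcup_{n\in\omega}h_n$ is obtained, conditions (IV)-(VI) will yield $$ h[A]=\bigcup_{n\in\omega}h_n[A\cap E_n]=\bigcup_{n\in\omega}(B\cap F_n)=B, $$ where the first equality holds by condition (IV), the second by condition (VI), and the third by condition (V).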
Start by letting $h_0=\varnothing$ and $\mathcal{K}_0=\langle\{2^\omega\},\{2^\omega\},\{\langle 2^\omega,2^\omega\rangle\}\rangle$. Now assume that $\langle h_n,\mathcal{K}_n\rangle$ is given. First, for any given $V\in\mathcal{V}_n$, we will define a homeomorphism $h_V:E_V\longrightarrow F_V$, where $E_V$ will be a closed nowhere dense subset of $V$ and $F_V$ will be a closed nowhere dense subset of $\psi_n(V)$. So fix $V\in\mathcal{V}_n$, and let $W=\psi_n(V)$. Define the following indices. \begin{itemize} \item $\ell(V)=\mathsf{min}\{\ell\in\omega:K_\ell\cap V\neq\varnothing\}$. \item $\ell(W)=\mathsf{min}\{\ell\in\omega:K_\ell\cap W\neq\varnothing\}$. \item $i(V)=\mathsf{min}\{i\in\omega:a_i\in V\setminus K_{\ell(V)}\}$. \item $j(W)=\mathsf{min}\{j\in\omega:b_j\in W\setminus K_{\ell(W)}\}$. \end{itemize} Notice that the indices $\ell(V)$ and $\ell(W)$ are well-defined because $\bigcup_{\ell\in\omega}K_\ell=2^\omega\setminus G$ is dense in $2^\omega$. Let $S=V\cap K_{\ell(V)}$. Since $K_{\ell(V)}$ is a closed nowhere dense subset of $2^\omega$, we can fix $U(S)\in\mathcal{B}$ such that $U(S)\subseteq V\setminus (S\cup\{a_{i(V)}\})$. Let $T=W\cap K_{\ell(W)}$. Since $K_{\ell(W)}$ is a closed nowhere dense subset of $2^\omega$, we can fix $U(T)\in\mathcal{B}$ such that $U(T)\subseteq W\setminus (T\cup\{b_{j(W)}\})$. \newpage Define $E_V=\{a_{i(V)}\}\cup S\cup\varphi_{U(S)}[T]$ and $F_V=\{b_{j(W)}\}\cup T\cup\varphi_{U(T)}[S]$. Observe that $E_V$ is a closed nowhere dense subset of $V$ and $F_V$ is a closed nowhere dense subset of $W$. Define $h_V:E_V\longrightarrow F_V$ by setting $$ h_V(x)= \left\{ \begin{array}{ll} b_{j(W)} & \textrm{if }x=a_{i(V)},\\ \varphi_{U(T)}(x) & \textrm{if }x\in S,\\ (\varphi_{U(S)})^{-1}(x) & \textrm{if }x\in\varphi_{U(S)}[T]. \end{array} \right. $$ It is clear that $h_V$ is a homeomorphism.
Therefore, by Lemma \ref{KRexists}, there exists a $\mathsf{KR}$-cover $\langle\mathcal{V}_V,\mathcal{W}_V,\psi_V\rangle$ for $\langle V\setminus E_V,W\setminus F_V,h_V\rangle$. Furthermore, it is easy to realize that $h_V[X\cap E_V]=X\cap F_V$, which will allow us to maintain condition (III). Notice that $\varphi_{U(S)}[T]\cap D=\varnothing$, because $\varphi_{U}[K_\ell]\cap D=\varnothing$ for every $U\in\mathcal{B}$ and $\ell\in\omega$ by the choice of $D$. Similarly, one sees that $\varphi_{U(T)}[S]\cap D=\varnothing$. Since $A\cup B\subseteq D$, it follows that $h_V[A\cap E_V]=h_V[\{a_{i(V)}\}]=\{b_{j(W)}\}=B\cap F_V$, which will allow us to maintain condition (VI). Repeat this construction for every $V\in\mathcal{V}_n$, then let $E_{n+1}=E_n\cup\bigcup\{E_V:V\in\mathcal{V}_n\}$ and $F_{n+1}=F_n\cup\bigcup\{F_V:V\in\mathcal{V}_n\}$. Define $$ h_{n+1}=h_n\cup\bigcup_{V\in\mathcal{V}_n}h_V, $$ and observe that $h_{n+1}:E_{n+1}\longrightarrow F_{n+1}$ is a bijection. Now extend $h_V$ to a bijection $f_V:V\longrightarrow\psi_n(V)$ for every $V\in\mathcal{V}_n$, and let $f_n=h_n\cup\bigcup_{V\in\mathcal{V}_n}f_V$. Clearly, $f_n:2^\omega\longrightarrow 2^\omega$ is a bijection that extends $h_{n+1}\supseteq h_n$ and respects $\psi_n$. Since $\mathcal{K}_n=\langle\mathcal{V}_n,\mathcal{W}_n,\psi_n\rangle$ is a $\mathsf{KR}$-cover for $\langle 2^\omega\setminus E_n, 2^\omega\setminus F_n,h_n\rangle$, it follows that $h_{n+1}$ is continuous on $E_n$ and $h_{n+1}^{-1}$ is continuous on $F_n$. On the other hand, it is straightforward to check that $h_{n+1}$ is continuous on $E_{n+1}\setminus E_n=\bigcup\{E_V:V\in\mathcal{V}_n\}$ and $h_{n+1}^{-1}$ is continuous on $F_{n+1}\setminus F_n=\bigcup\{F_V:V\in\mathcal{V}_n\}$. In conclusion, $h_{n+1}$ is a homeomorphism. Finally, we define $\mathcal{K}_{n+1}=\langle\mathcal{V}_{n+1},\mathcal{W}_{n+1},\psi_{n+1}\rangle$.
Let $\mathcal{V}_{n+1}=\bigcup\{\mathcal{V}_V:V\in\mathcal{V}_n\}$ and $\mathcal{W}_{n+1}=\bigcup\{\mathcal{W}_V:V\in\mathcal{V}_n\}$. By further refining $\mathcal{V}_{n+1}$ and $\mathcal{W}_{n+1}$, we can assume that $\mathsf{mesh}(\mathcal{V}_{n+1})\leq 2^{-(n+1)}$ and $\mathsf{mesh}(\mathcal{W}_{n+1})\leq 2^{-(n+1)}$. Let $\psi_{n+1}=\bigcup_{V\in\mathcal{V}_n}\psi_V$. Using the fact that $\langle\mathcal{V}_V,\mathcal{W}_V,\psi_V\rangle$ is a $\mathsf{KR}$-cover for $\langle V\setminus E_V,W\setminus F_V,h_V\rangle$ for each $V\in\mathcal{V}_n$ together with condition (\ref{KRcovercondition}), it is easy to realize that $\mathcal{K}_{n+1}$ is a $\mathsf{KR}$-cover for $\langle 2^\omega\setminus E_{n+1}, 2^\omega\setminus F_{n+1},h_{n+1}\rangle$. \end{proof} \section{Infinite powers and $\lambda'$-sets} The main result of this section is Theorem \ref{mainexample}, which simultaneously answers Question \ref{Gdeltaquestion}, the first part of Question \ref{optimizedefinability}, and shows that Theorem \ref{coanalyticcdhpower} is sharp. The idea of looking at (the complements of) $\lambda'$-sets is inspired by a recent article of Hern\'andez-Guti\'errez, Hru\v{s}\'ak, and van Mill (more precisely, by \cite[Theorem 4.5]{hernandezgutierrezhrusakvanmill}). We will need a few preliminary results. The straightforward proofs of the following two propositions are left to the reader. \begin{proposition}\label{prodpreservehhomemb} Let $I$ be a countable set. If $X_i$ is h-homogeneously embedded in $Z_i$ for every $i\in I$ then $\prod_{i\in I}X_i$ is h-homogeneously embedded in $\prod_{i\in I}Z_i$. \end{proposition} \begin{proposition}\label{prodpreservecc} Let $I$ be a countable set. If $X_i$ is countably controlled for each $i\in I$ then $\prod_{i\in I}X_i$ is countably controlled. \end{proposition} \begin{proposition}\label{existslambda} There exists a $\lambda'$-set of size $\omega_1$ which is h-homogeneously embedded in $2^\omega$. 
\end{proposition} \begin{proof} Fix a (countable) $\pi$-base $\mathcal{B}$ for $2^\omega$ consisting of clopen sets and homeomorphisms $\varphi_U:2^\omega\longrightarrow U$ for $U\in\mathcal{B}$. Let $X_0$ be a $\lambda'$-set of size $\omega_1$ (whose existence is guaranteed by Theorem \ref{zfclambda}) and, given $X_n$ for some $n\in\omega$, define $$ X_{n+1}=\bigcup\{\varphi_U[X_n]:U\in\mathcal{B}\}\cup\bigcup\{\varphi_U^{-1}[X_n\cap U]:U\in\mathcal{B}\}. $$ In the end, let $X=\bigcup_{n\in\omega}X_n$. Using induction and Lemma \ref{unionlambda}, it is easy to see that each $X_n$ is a $\lambda'$-set of size $\omega_1$. Therefore, $X$ is a $\lambda'$-set of size $\omega_1$. Finally, the construction of $X$ ensures that $\varphi_U[X]=X\cap U$ for every $U\in\mathcal{B}$. \end{proof} \begin{theorem}\label{mainexample} There exists a subspace $X$ of $2^\omega$ with the following properties. \begin{itemize} \item $X$ is not Polish. \item $X^\omega$ is $\mathsf{CDH}$. \item If $\mathsf{MA}+\neg\mathsf{CH}+\omega_1=\omega_1^\mathsf{L}$ holds then $X$ is analytic. \end{itemize} \end{theorem} \begin{proof} By Proposition \ref{existslambda}, we can fix a $\lambda'$-set $Y$ of size $\omega_1$ which is h-homogeneously embedded in $2^\omega$. Let $X=2^\omega\setminus Y$. By Theorem \ref{martinsolovay}, if $\mathsf{MA}+\neg\mathsf{CH}+\omega_1=\omega_1^\mathsf{L}$ holds then $X$ is analytic. It is straightforward to verify that $X$ is is h-homogeneously embedded in $2^\omega$. By Proposition \ref{prodpreservehhomemb}, it follows that $X^\omega$ is h-homogeneously embedded in $(2^\omega)^\omega\approx 2^\omega$. Furthermore, the definition of $\lambda'$-set immediately implies that $X$ is countably controlled. By Proposition \ref{prodpreservecc}, it follows that $X^\omega$ is countably controlled. In conclusion, $X^\omega$ is $\mathsf{CDH}$ by Theorem \ref{main}. Assume, in order to get a contradiction, that $X$ is Polish. 
This means that $X$ is a $\mathsf{G_\delta}$ subspace of $2^\omega$, so $Y$ is an $\mathsf{F_\sigma}$. Since $Y$ is uncountable, it follows that $Y$ contains a copy of $2^\omega$, which contradicts the fact that $Y$ is a $\lambda$-set. \end{proof} Observe that, by the remark that follows Theorem \ref{borelcdhpower}, the analytic counterexample given by Theorem \ref{mainexample} could not have been constructed in $\mathsf{ZFC}$. The following is a classical result (see \cite[Theorem 23.3]{millerd}). For a new, topological proof, based on a result of Baldwin and Beaudoin, see \cite[Theorem 8.1]{medinizdomskyy}. \begin{theorem}[Martin, Solovay]\label{martinsolovay} Assume $\mathsf{MA} + \neg\mathsf{CH} + \omega_1=\omega_1^\mathsf{L}$. Then every subspace of $2^\omega$ of size $\omega_1$ is coanalytic. \end{theorem} \section{A sufficient condition} The main result of this section is Theorem \ref{sufficient}, which shows that being countably controlled is by itself a sufficient condition on a zero-dimensional space $X$ for the countable dense homogeneity of $X^\omega$. It is easy to realize that Theorem \ref{mainexample} could have been proved using Corollary \ref{everylambda}. However, since the proof of Theorem \ref{sufficient} relies on deep results such as \cite[Theorem 1]{dowpearl} and Theorem \ref{hhompower}, we preferred to make the rest of the paper more self-contained. The following result is inspired by \cite[Proposition 24]{medinip}, where the proof of the equivalence $(\ref{isolated})\leftrightarrow (\ref{hhom})$ first appeared. Recall that a space $X$ is \emph{h-homogeneous} (or \emph{strongly homogeneous}) if $C\approx X$ for every non-empty clopen subspace $C$ of $X$. \begin{proposition}\label{strengthenhhom} Let $X$ be a zero-dimensional space such that $|X|\geq 2$. Then the following are equivalent. \begin{enumerate} \item\label{isolated} $X^\omega\approx Y^\omega$ for some space $Y$ with at least one isolated point.
\item\label{hhomemb} $X^\omega$ can be h-homogeneously embedded in $2^\omega$. \item\label{hhom} $X^\omega$ is h-homogeneous. \end{enumerate} \begin{proof} In order to prove the implication $(\ref{isolated})\rightarrow (\ref{hhomemb})$, assume that $X^\omega\approx Y^\omega$, where $Y$ is a space with at least one isolated point. Assume without loss of generality that $Y$ is a subspace of $2^\omega$, and let $z\in 2^\omega$ be an isolated point of $Y$. Let $K=\mathsf{cl}(Y)$, where the closure is taken in $2^\omega$, and notice that $z$ remains isolated in $K$. Also notice that $K^\omega$ is crowded because $|X|\geq 2$ and $Y^\omega\approx X^\omega$. It follows that $K^\omega\approx 2^\omega$, so it will be enough to show that $Y^\omega$ is h-homogeneously embedded in $K^\omega$. Let $[\omega]^{<\omega}=\{F\subseteq\omega:F\text{ is finite}\}$. Given any $F\in [\omega]^{<\omega}$, define $$ U_F=\{x\in K^\omega: x(n)=z\text{ for all }n\in F\}, $$ and notice that each $U_F$ is a clopen subset of $K^\omega$. Furthermore, it is clear that $\{U_F:F\in [\omega]^{<\omega}\}$ is a local base for $K^\omega$ at $\langle z,z,\ldots\rangle$. By \cite[Theorem 1]{dowpearl}, given any $x\in Y^\omega$, there exists a homeomorphism $h_x:K^\omega\longrightarrow K^\omega$ such that $h_x[Y^\omega]=Y^\omega$ and $h_x(\langle z,z,\ldots\rangle)=x$. Fix a countable dense subset $D$ of $Y^\omega$. It is easy to realize that the collection $$ \mathcal{B}=\{h_x[U_F]:x\in D, F\in [\omega]^{<\omega}\} $$ is a countable $\pi$-base for $K^\omega$ consisting of clopen sets. For every $F\in [\omega]^{<\omega}$, fix a bijection $\pi_F:\omega\setminus F\longrightarrow\omega$, then define $h_F:K^\omega\longrightarrow U_F$ by setting $$ h_F(x)(n)= \left\{ \begin{array}{ll} z & \textrm{if }n\in F,\\ x(\pi_F(n)) & \textrm{if }n\in\omega\setminus F \end{array} \right. $$ for every $x\in K^\omega$ and $n\in\omega$.
One can easily check that each $h_F$ is a homeomorphism such that $h_F[Y^\omega]=Y^\omega\cap U_F$. Given any $U\in\mathcal{B}$, where $U=h_x[U_F]$ for some $x\in D$ and $F\in [\omega]^{<\omega}$, let $\varphi_U=h_x\circ h_F$. It is straightforward to verify that each $\varphi_U:K^\omega\longrightarrow U$ is a homeomorphism such that $\varphi_U[Y^\omega]=Y^\omega\cap U$. In order to prove the implication $(\ref{hhomemb})\rightarrow (\ref{hhom})$, assume that $X^\omega$ is h-homogeneously embedded in $2^\omega$. In particular, $X^\omega$ has a $\pi$-base consisting of clopen sets that are homeomorphic to $X^\omega$. If $X^\omega$ is compact then $X^\omega\approx 2^\omega$, which is well-known to be h-homogeneous. On the other hand, if $X^\omega$ is non-compact then it is non-pseudocompact (see \cite[Proposition 3.10.21 and Theorem 4.1.17]{engelking}), in which case the desired result follows from a theorem of Terada (see \cite[Theorem 2.4]{terada} or \cite[Theorem 2 and Appendix A]{medinip}). In order to prove the implication $(\ref{hhom})\rightarrow (\ref{isolated})$, assume that $X^\omega$ is h-homogeneous. It will be enough to show that $X^\omega$ and $(X\oplus 1)^\omega$ are both homeomorphic to the space $C=(X\oplus 1)^\omega\times X^\omega$, where $X\oplus 1$ denotes the space obtained by adding one isolated point to $X$. Notice that $X^\omega$ can be partitioned into two non-empty clopen subsets because $|X|\geq 2$. Therefore $$ X^\omega\approx X^\omega\oplus X^\omega\approx (X\times X^\omega )\oplus X^\omega\approx (X\oplus 1)\times X^\omega. $$ By taking the $\omega$-th power of both sides, one sees that $X^\omega\approx C$. On the other hand, we know that $(X\oplus 1)^\omega$ is h-homogeneous by the implication $(\ref{isolated})\rightarrow (\ref{hhom})$. 
Since $$ (X\oplus 1)^\omega\approx (X\oplus 1)\times (X\oplus 1)^\omega\approx (X\times (X\oplus 1)^\omega)\oplus (X\oplus 1)^\omega, $$ it follows that $(X\oplus 1)^\omega\approx X\times (X\oplus 1)^\omega$. By taking the $\omega$-th power of both sides, one sees that $(X\oplus 1)^\omega\approx C$. \end{proof} The following result has been obtained independently by van Engelen (see \cite[Theorem 4.4]{vanengelen}) and Medvedev (see \cite[Corollary 6]{medvedevb}). \begin{theorem}[van Engelen; Medvedev]\label{hhompower} Let $X$ be a zero-dimensional space. If $X$ has a dense Polish subspace then $X^\omega$ is h-homogeneous. \end{theorem} \begin{corollary}\label{densePolishcor} Let $X$ be a zero-dimensional space such that $|X|\geq 2$. If $X$ has a dense Polish subspace then $X^\omega$ can be h-homogeneously embedded in $2^\omega$. \end{corollary} \begin{proof} Apply Proposition \ref{strengthenhhom}. \end{proof} \begin{theorem}\label{sufficient} Let $X$ be a zero-dimensional countably controlled space. Then $X^\omega$ is $\mathsf{CDH}$. \end{theorem} \begin{proof} The case $|X|=1$ is trivial, so assume that $|X|\geq 2$. Clearly, the fact that $X$ is countably controlled implies that $X$ has a Polish dense subspace. Therefore $X^\omega$ can be h-homogeneously embedded in $2^\omega$ by Corollary \ref{densePolishcor}. Furthermore, Proposition \ref{prodpreservecc} shows that $X^\omega$ is countably controlled. In conclusion, $X^\omega$ is $\mathsf{CDH}$ by Theorem \ref{main}. \end{proof} \begin{corollary}\label{everylambda} If $Y$ is a $\lambda'$-set then $(2^\omega\setminus Y)^\omega$ is $\mathsf{CDH}$. \end{corollary} It seems natural to wonder whether, in the above theorem, it would be enough to assume that $X$ has a dense Polish subspace, instead of assuming that $X$ is countably controlled. The following simple proposition shows that this is not the case. 
\begin{proposition} There exists a zero-dimensional space $X$ such that $X$ has a dense Polish subspace while $X^\omega$ is not $\mathsf{CDH}$. \end{proposition} \begin{proof} Fix $z\in 2^\omega$. Let $D=2^\omega\times (2^\omega\setminus\{z\})$, and fix a countable dense subset $Q$ of $2^\omega\times\{z\}$. Define $$ X=Q\cup D\subseteq 2^\omega\times 2^\omega. $$ It is clear that $D$ is a dense Polish subspace of $X$. Furthermore, $X$ is not Polish because $Q$ is a closed countable crowded subspace of $X$. Since $X$ is a coanalytic subspace of $2^\omega\times 2^\omega\approx 2^\omega$ (actually, it is $\sigma$-compact), it follows that $X^\omega$ is not $\mathsf{CDH}$ by Theorem \ref{coanalyticcdhpower}. \end{proof} Finally, we remark that, by Theorem \ref{consistentanswer}, it is not possible to prove in $\mathsf{ZFC}$ that being countably controlled (or even having a dense Polish subspace) is a necessary condition for the countable dense homogeneity of $X^\omega$.
\section{Introduction} The development of deep learning has brought remarkable progress in many tasks\cite{image-classification1, image-classification2, image-classification3, machine-trans1}. To achieve this, thousands or even millions of labeled examples are required for a deep learning approach to obtain satisfactory performance. However, collecting and annotating abundant data is notoriously expensive. Therefore, few-shot learning\cite{match-network,prototypical,li2019revisiting}, which requires the model to learn from only a few examples, has attracted researchers' attention in recent years. Learning from few data is challenging for Computer Vision. In comparison, we human beings can rapidly learn new categories from very few examples. Recently, meta-learning\cite{Learning-a-synaptic, On-the-optimiazation, An-alternative-to, MAML, Meta-SGD, miniimagenet, Reptile, SNAIL, zhuang, RL2, Metagan, metanet} has shown promising performance in improving few-shot learning for Computer Vision. However, existing meta-learning methods commonly ignore prior-knowledge\cite{langer1981prior,dochy1990instructional,shapiro2004including,wylie2004interactive,hsin2015effects} and the attention mechanism\cite{human_attention1,human_attention2}, both of which have been demonstrated to be important for the human cognitive and learning process. We illustrate a few-shot classification problem in Fig.\ref{fig:example to understand idea} for a better understanding of the role of prior-knowledge and the attention mechanism in human few-shot learning. In Fig.\ref{fig:example to understand idea}, we unconsciously leverage our learned knowledge about the world to understand and express these images as high-level compact representations, such as plant, animal, tree, and table. However, from the four training images, we discover that only the features of the tree and the table are useful for recognizing these two classes of images.
Then, we quickly adjust ourselves to pay attention to the critical features and make the decision based on the focused features. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{figure1.pdf} \caption{ An example of a few-shot classification task. The six images come from two classes: four labeled ones are training data, and the two unlabeled ones are for test. When predicting the two testing images, we utilize our prior-knowledge about the world to understand all components in these images and use our visual attention to focus on the key components, the table and the tree. Finally, we predict that image (c) belongs to class 1, which contains a table, while image (f) belongs to class 2, which contains a tree. } \label{fig:example to understand idea} \end{figure} Evidently, we can summarize two main modules in human few-shot learning: \textbf{a stable Representation module that utilizes prior-knowledge to express the image as compact feature representations; and a smart attention-based decision logic module that adapts accurately and performs recognition based on the feature representations}. In contrast, existing meta-learning approaches commonly train meta-learners to learn adaptive networks directly on the original input data, with neither an attention mechanism nor prior-knowledge. In this paper, inspired by the human cognition process, we present a novel meta-learning paradigm with three developments that introduce the attention mechanism and prior-knowledge into meta-learning step-by-step. Here, we briefly introduce the proposed methods. \textbf{1)} The first method is \textbf{A}ttention based \textbf{M}eta-\textbf{L}earning (AML), which leverages the attention mechanism to enable the meta-learner to pay more attention to essential features. \textbf{2)} To equip the meta-learner with not only attention but also prior-knowledge, we present another method, \textbf{R}epresentation and \textbf{A}ttention based \textbf{M}eta-\textbf{L}earning (RAML).
Its network contains a Representation module and an attention-based prediction (ABP) module. The Representation module is similar to the corresponding module of human vision. It learns prior-knowledge in a supervised fashion and is responsible for understanding and extracting stable compact feature representations from the input image. The ABP module plays the same role as the smart attention-based decision logic module of human vision. It enables the meta-learner to precisely adjust first its attention to the most discriminative feature representations of the input images and then the corresponding predictions. \textbf{3)} In the third method, to take full advantage of endless unlabeled data, we design a novel method in which the Representation module learns prior-knowledge in an unsupervised fashion \cite{AE,VAE,context-encoder, colorization,deep-cluster,split-brain}. We call this method \textbf{U}nsupervised \textbf{R}epresentation and \textbf{A}ttention based \textbf{M}eta-\textbf{L}earning (URAML). With URAML, we show in our experiments that both the growth of the amount of unlabeled data and the development of unsupervised learning noticeably improve the performance of URAML. In addition, we identify a Task-Over-Fitting (TOF) problem in existing meta-learning and present a Cross-Entropy across Tasks (CET) metric to evaluate how much a meta-learning method is troubled by the TOF problem. An example of the TOF problem is that a meta-learner trained on 5-way 1-shot tasks is not as capable as one trained on 5-way 5-shot tasks when both are tested on 5-way 5-shot tasks, and vice versa. However, in practical applications, it is uncertain how much data and how many shots are available to the meta-learner. Therefore, we argue that the trained meta-learner should generalize well to different $K$-shot tasks.
The possible reason behind the TOF problem is that existing meta-learners are vulnerable to features irrelevant to the presented tasks, since they ignore both prior-knowledge and the attention mechanism. Our experiments validate that by incorporating prior-knowledge and the attention mechanism, our methods suffer less from the TOF problem than existing meta-learning methods. We summarize the main contributions of our work as follows: \begin{itemize} \item We propose that both the attention mechanism and prior-knowledge are crucial for the meta-learner to reduce its cognition burden in few-shot learning, and we develop three methods, AML, RAML, and URAML, to leverage the attention mechanism and prior-knowledge in meta-learning step-by-step. \item We discover the TOF problem in meta-learning and design a novel metric, Cross-Entropy across Tasks (CET), to measure how much meta-learning approaches suffer from the TOF problem. \item Through extensive experiments, we show that the proposed methods achieve state-of-the-art performance on several few-shot learning benchmarks and, in the meantime, are less sensitive to the TOF problem, especially RAML and URAML. \end{itemize} \section{Related Work} \subsection{Meta-learning for Few-Shot Learning} An $N$-way $K$-shot learning task contains a support set and a query set, which contain $K$ and $L$ examples for each of the $N$ classes, respectively. Existing meta-learning approaches usually solve few-shot learning by training a meta-learner on $N$-way $K$-shot learning tasks in the following way. Firstly, the meta-learner is required to inner-update itself on the support set. Secondly, after the inner-updating, the meta-learner is evaluated on the query set.
Finally, by minimizing the loss on the query set, the meta-learner learns a base learner that has easy-to-fine-tune weights\cite{MAML,Reptile}, or a skillful weight updater\cite{miniimagenet, metanet}, or both\cite{Meta-SGD}, or the ability to memorize the support set\cite{SNAIL}. The methods that train the meta-learner to learn an easy-to-fine-tune base learner are also called weight-initialization-based methods, as the meta-learner learns generalized initial weights for few-shot learning tasks. Recently, MAML, a classical weight-initialization-based method, has become popular, and many MAML-based methods have been proposed. For example, LLAML\cite{LLAML} uses a local Laplace approximation to model the task parameters, and MTL\cite{sun2019meta-transfer} trains a meta-transfer learner to adapt a pre-trained deep network to few-shot learning tasks. Besides, MetaGAN\cite{Metagan} shows that by coupling MAML with adversarial training, the meta-learner learns better decision boundaries between different classes in few-shot learning. To reduce the computation and memory cost of MAML, iMAML\cite{IMAML} leverages implicit differentiation to remove the need to differentiate through the inner-update path. Though existing meta-learning methods perform promisingly, they seldom consider prior-knowledge and the attention mechanism in meta-learning. In this paper, we improve meta-learning for few-shot learning by introducing prior-knowledge and the attention mechanism into meta-learning. \subsection{Attention Mechanism} In recent years, the attention mechanism\cite{soft_attention1,soft_attention3,hard_attention,attention-is-all-you-need} has been widely used in computer vision systems, machine translation, \emph{etc.}
Several forms of attention mechanism have been proposed, such as soft attention\cite{soft_attention1,soft_attention3}, hard attention\cite{hard_attention}, and self-attention\cite{attention-is-all-you-need}. Soft attention can be seen as simulating the attention mechanism by multiplying a weight on each neural unit, so that the network pays more attention to the neural units multiplied by larger weights. SENet\cite{soft_attention3} takes advantage of the soft attention mechanism to win the championship of the image classification task of ILSVRC-2017\cite{berg2010large}. Hard attention\cite{hard_attention} can be seen as a module that selects a block region of the input image that is visible to the network, while the other regions are invisible. Self-attention\cite{attention-is-all-you-need} improves the performance of machine translation systems by training a network to find the inner dependencies of the input and those of the output. In this paper, we use the soft attention mechanism as the meta-learner's attention mechanism. \subsection{Unsupervised Representation Learning} Supervised learning is a data-hungry manner of training deep networks. Considering this, several unsupervised learning approaches\cite{AE,VAE,context-encoder,colorization,deep-cluster,split-brain} have been proposed. A well-known way is to train a neural network to reconstruct the original input through an Encoder-Decoder architecture, such as the Auto-Encoder\cite{AE}, the Variational Auto-Encoder (VAE)\cite{VAE}, \emph{etc}. Given partially masked images, the Context Auto-Encoder\cite{context-encoder} trains a network to reconstruct not only the visible but also the masked region of the image. Colorization\cite{colorization} uses \emph{Lab} images to train a network to generate the unseen \emph{ab} channels from the input \emph{L} channel.
Based on Colorization, Split-Brain\cite{split-brain} trains two separate networks to generate the \emph{ab} channels from the \emph{L} channel and the \emph{L} channel from the \emph{ab} channels, respectively. Different from these methods, DeepCluster\cite{deep-cluster} couples deep learning with clustering algorithms\cite{cluster1,cluster2}. However, in the real world, many unlabeled images contain complex semantic information and are not suitable to be categorized into specific clusters. Therefore, we consider this a potential limitation of DeepCluster, and we utilize Split-Brain as the unsupervised learning method in URAML. \section{Method} \subsection{Problem of Learning from Few-Data} Learning from few data is extremely difficult for deep learning models. One reason is that the original input data is commonly represented in a high-dimensional space, usually with tens or hundreds of thousands of dimensions. For example, for the image classification task, an original image is commonly stored in a high-dimensional space (the dimension of a 224$\times$224 RGB image is 150,528). In such a high-dimensional space, it is difficult for a few samples of one category to accurately reflect the character of that category. Humans learn new categories efficiently because they utilize prior-knowledge and the attention mechanism in cognition\cite{langer1981prior,dochy1990instructional,ormerod1990human,oliva2003top,van2003selective,posner1990attention,ungerleider2000mechanisms,hsin2015effects,wylie2004interactive}. Prior-knowledge facilitates humans to express perceptual images as high-level representations or descriptions, and the attention mechanism helps humans focus on critical components of the representations. In this way, humans reduce the dimension of images while maintaining their discriminative components, which alleviates the human cognitive load and facilitates efficiently learning new categories.
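The dimensionality argument above can be made concrete with a short computation. The 2048-dimensional feature size below is an illustrative assumption (roughly the width of a ResNet-50 global feature vector), not a figure stated in this paper:

```python
# Dimension of the raw input space for a 224x224 RGB image.
height, width, channels = 224, 224, 3
input_dim = height * width * channels
print(input_dim)  # 150528

# A compact representation (assumed 2048-d for illustration) shrinks the
# space a few-shot learner must cover by almost two orders of magnitude.
feature_dim = 2048
print(input_dim / feature_dim)  # 73.5
```

A handful of samples per class cannot characterize a category in the raw 150,528-dimensional space, which is why compressing inputs into compact representations before adaptation matters.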
Existing meta-learning approaches improve deep learning considerably in few-shot learning. However, they train the meta-learner to quickly fit few-shot learning tasks directly on the few original high-dimensional input data and pay little attention to the importance of prior-knowledge and the attention mechanism, leading to unsatisfactory performance. Besides, as introduced before, we propose that ignoring prior-knowledge and the attention mechanism is also a possible reason why existing meta-learning approaches are vulnerable to the TOF problem. In this paper, inspired by human cognition and to address the problems exposed by existing meta-learning approaches, we propose three methods step-by-step: Attention based Meta-Learning (AML), Representation and Attention based Meta-Learning (RAML), and Unsupervised Representation and Attention based Meta-Learning (URAML). \subsection{AML} AML equips the meta-learner with the power of the attention mechanism. We first introduce the network structure and then detail the training of AML. \noindent \textbf{AML Network} \quad The network architecture of AML is shown in Fig.\ref{fig:network of the AML method}. The network consists of a feature extractor and an attention-based prediction (ABP) module. The feature extractor is a CNN $\mathcal{F}$ composed of four stacked convolutional layers. The ABP module contains a convolution-based attention model $\mathcal{A}$ and a fully-connected-layer-based classifier $\mathcal{C}$. Eq.\ref{eq:AML} shows the inference of the network. $\theta_{f}$, $\theta_{a}$, and $\theta_{c}$ are the weights of $\mathcal{F}$, $\mathcal{A}$, and $\mathcal{C}$, respectively. $\mathcal{F}$ extracts the features $\gamma_i$ of the input image $x_i$ and feeds $\gamma_i$ into the attention model $\mathcal{A}$. Then, $\mathcal{A}$ calculates the soft attention mask $m_i$ of the features $\gamma_i$. By channel-wise multiplication $\odot$ between $\gamma_i$ and $m_i$, the focused features $\gamma^\alpha_i$ are calculated.
Finally, the classifier $\mathcal{C}$ predicts the category of the input image, and $\hat{y_i}$ is the corresponding prediction for $x_i$. We simplify and integrate the inference in Eq.\ref{eq:AML} as $\hat{y_i} = \mathbb{F}(x_i; \theta_f, \theta_a, \theta_c)$. \begin{equation} \left\{ \begin{array}{lr} \gamma_i = \mathcal{F}(x_i; \ \theta_{f}) \\ m_i = \mathcal{A}(\gamma_i; \ \theta_{a}) \\ \gamma^\alpha_i = \gamma_i \odot m_i \\ \hat{y_i} = \mathcal{C}(\gamma^\alpha_i; \ \theta_{c}) \end{array} \right. \label{eq:AML} \end{equation} In this paper, we use the soft attention mechanism to build the attention model. Although the soft attention mechanism is not exactly the same as the attention mechanism in human vision, it still plays a similar role and helps the meta-learner control its attention to key features. Fig.\ref{fig:4b} helps to better understand the soft attention processing of the meta-learner. Fig.\ref{fig:network structure of attention} shows the attention model structure, and Eq.\ref{eq:attention model} shows the inference of the attention model. The input feature $\gamma$ is first global-average-pooled to get the feature $\gamma$$'$, and then a convolution layer coupled with a sigmoid activation layer is used to predict the attention mask \emph{m} from the feature $\gamma$$'$. \begin{equation} \left\{ \begin{array}{lr} \gamma' = \mathcal{P}_{a}(\gamma), \\ m = \sigma(\mathcal{F}_{a}(\gamma'; \ \theta_{a})) \end{array} \right. \label{eq:attention model} \end{equation} \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{AML.pdf} \caption{ Network structure of the proposed method AML. There is an attention model inserted explicitly in the meta-learner's network.} \label{fig:network of the AML method} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\columnwidth]{attention.pdf} \caption{ Inner network structure of the attention model.
The shape of the feature map $\gamma$ is (\emph{b,w,h,c}), shown at the left of the figure, where \emph{b, w, h, c} are the batch size, width, height, and number of channels of the feature map $\gamma$, and the shapes of $\gamma'$ and \emph{m} are both (\emph{b},1,1,\emph{c}). } \label{fig:network structure of attention} \end{figure} $\mathcal{P}_{a}$ is the global-average-pooling operation, $\sigma$ is the sigmoid activation, and $\mathcal{F}_{a}$ is the convolution layer in the attention model. \noindent \textbf{AML Meta-Train Process} \quad Given a few-shot classification task $\tau$, AML meta-trains the meta-learner to solve the task $\tau$ in two steps. \textbf{First}, AML requires the meta-learner to inner-update itself on the support set of $\tau$, which can be formulated as Eq.\ref{eq:AML_support_update1} and Eq.\ref{eq:AML_support_update2}. \begin{equation} \left\{ \begin{array}{lr} \hat{y_i} = \mathbb{F}(x_i; \theta_f, \theta_a, \theta_c), \\ \mathcal{L}_i(\theta_f, \theta_a, \theta_c) = l(\hat{y_i}, y_i), \\ \mathfrak{L}_s(\theta_f, \theta_a, \theta_c) = \frac{1}{N_s} \displaystyle{\sum_{i=1}^{N_s}} \mathcal{L}_i(\theta_f, \theta_a, \theta_c) \end{array} \right. \label{eq:AML_support_update1} \end{equation} \begin{equation} (\theta^{'}_f, \theta^{'}_a, \theta^{'}_c) = (\theta_f, \theta_a, \theta_c) - \alpha{\boldmath \circ}\nabla_{(\theta_f, \theta_a, \theta_c)}\mathfrak{L}_{s}(\theta_f, \theta_a, \theta_c) \label{eq:AML_support_update2} \end{equation} In Eq.\ref{eq:AML_support_update1}, $x_i$ is any image that belongs to the support set, $l$ is the cross-entropy loss function, $\mathcal{L}_i$ is the meta-learner's loss on the image $x_i$, $\mathfrak{L}_s$ is the meta-learner's loss on the whole support set, and $N_s$ is the number of images in the support set.
In Eq.\ref{eq:AML_support_update2}, inspired by Meta-SGD\cite{Meta-SGD}, we set $\alpha$ as a trainable vector that adjusts the inner-update direction, and $\alpha$ has the same shape as the weights $\theta_f, \theta_a$, and $\theta_c$. $\alpha$ can also be written as $\alpha = [\alpha_f, \alpha_a, \alpha_c]$, and Eq.\ref{eq:AML_support_update2} can be split into three equations, \emph{e.g.} $\theta^{'}_f = \theta_f - \alpha_f{\boldmath \circ}\nabla_{\theta_f}\mathfrak{L}_{s}(\theta_f, \theta_a, \theta_c)$. For simplicity, we merge these three equations into one, as Eq.\ref{eq:AML_support_update2} shows. $\circ$ is the element-wise multiplication. Supervised by the loss on the support set, the meta-learner inner-updates its weights $\theta_f, \theta_a, \theta_c$ to $\theta^{'}_f, \theta^{'}_a, \theta^{'}_c$. \textbf{Second}, as the inner-updated weights $\theta^{'}_f, \theta^{'}_a$, and $\theta^{'}_c$ depend on not only the initial values of $\theta_f, \theta_a$, and $\theta_c$ but also $\alpha$, all of $\theta_f, \theta_a, \theta_c$, and $\alpha$ can be meta-optimized. We formulate this process as Eq.\ref{eq:AML_support_optimize1} and Eq.\ref{eq:AML_support_optimize2}. \begin{equation} \left\{ \begin{array}{lr} \hat{y_i} = \mathbb{F}(x_i; \theta^{'}_f, \theta^{'}_a, \theta^{'}_c), \\ \mathcal{L}_i(\theta^{'}_f, \theta^{'}_a, \theta^{'}_c) = l(\hat{y_i}, y_i), \\ \mathfrak{L}_q(\theta^{'}_f, \theta^{'}_a, \theta^{'}_c) = \frac{1}{N_q} \displaystyle{\sum_{i=1}^{N_q}} \mathcal{L}_i(\theta^{'}_f, \theta^{'}_a, \theta^{'}_c) \end{array} \right.
\label{eq:AML_support_optimize1} \end{equation} \begin{equation} (\theta_f, \theta_a, \theta_c, \alpha) = (\theta_f, \theta_a, \theta_c, \alpha) - \beta{\boldmath \cdot}\nabla_{(\theta_f, \theta_a, \theta_c, \alpha)}\mathfrak{L}_{q}(\theta^{'}_f, \theta^{'}_a, \theta^{'}_c) \label{eq:AML_support_optimize2} \end{equation} In Eq.\ref{eq:AML_support_optimize1}, $x_i$ is an image belonging to the query set, and $N_q$ denotes the number of images in the query set. $\mathfrak{L}_q$ is the inner-updated meta-learner's loss on the query set. It should be noted that $\nabla_{(\theta_f, \theta_a, \theta_c, \alpha)}\mathfrak{L}_{q}(\theta^{'}_f, \theta^{'}_a, \theta^{'}_c)$ computes the gradient of $\mathfrak{L}_{q}$ with respect to $(\theta_f, \theta_a, \theta_c, \alpha)$, not $(\theta^{'}_f, \theta^{'}_a, \theta^{'}_c)$. By optimizing $\mathfrak{L}_q$, the meta-learner is forced to learn not only suitable initial weights $\theta_f, \theta_a, \theta_c$ but also $\alpha$ for the task $\tau$. With the learned initial weights and $\alpha$, the meta-learner can inner-update itself precisely on the support set and then perform well on the query set. In AML, the meta-learner is trained on many few-shot learning tasks with these two steps, which makes it learn generalizable initial weights not only for the feature extractor $\mathcal{F}$ and the classifier $\mathcal{C}$ but also for the attention model $\mathcal{A}$. In contrast, existing initialization-based meta-learning methods only train the meta-learner to learn initial weights for the feature extractor and the classifier. Therefore, compared with existing meta-learners, AML simplifies the few-shot problem and improves performance, since its attention ability is meta-trained and can be easily adjusted to the crucial features for solving few-shot learning, which enables the classifier to make precise predictions for the input. In our experiments, we show the positive effect of the attention mechanism.
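As a concrete illustration, the attention model of Eq.\ref{eq:attention model} and the channel-wise re-weighting $\gamma^\alpha = \gamma \odot m$ can be sketched in a few lines of NumPy. This is a hedged sketch rather than the actual implementation: the $1\times 1$ convolution is modeled as a single per-channel linear map, and all shapes and weights are toy assumptions.

```python
import numpy as np

def soft_attention(gamma, w, b):
    """Soft attention sketch: global-average-pool gamma to shape (b,1,1,c),
    apply a per-channel linear map plus sigmoid to get the mask m, then
    re-weight the feature map channel-wise (gamma ⊙ m)."""
    pooled = gamma.mean(axis=(1, 2), keepdims=True)   # P_a: (b,1,1,c)
    mask = 1.0 / (1.0 + np.exp(-(pooled @ w + b)))    # sigma(F_a(gamma'))
    return gamma * mask                                # channel-wise product

# Toy data: batch of 2 feature maps of shape 5x5 with 4 channels.
rng = np.random.default_rng(0)
batch, height, width, channels = 2, 5, 5, 4
gamma = rng.standard_normal((batch, height, width, channels))
w = rng.standard_normal((channels, channels)) * 0.1   # stand-in 1x1 conv weights
b = np.zeros(channels)

focused = soft_attention(gamma, w, b)
print(focused.shape)  # (2, 5, 5, 4)
```

Because the sigmoid mask lies in $(0,1)$ and broadcasts over the spatial axes, each channel is uniformly damped or preserved, which is exactly what lets the meta-learner suppress task-irrelevant channels.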
\begin{figure*}[t] \centering \subfigure[] { \includegraphics[scale=0.66]{RAML1.pdf} \label{fig:4a} } \subfigure[] { \includegraphics[scale=0.66]{RAML2.pdf} \label{fig:4b} } \caption{ (a) Network structure of the proposed RAML. The meta-learner is composed of a Representation module and an ABP module. The Auxiliary module is used to assist the meta-learner in learning prior-knowledge. (b) An example that interprets the principle of the soft attention mechanism for few-shot learning.} \label{fig:network of the RAML method} \end{figure*} \subsection{RAML} RAML equips the meta-learner with not only the attention mechanism but also the ability to make good use of previously learned knowledge. Fig.\ref{fig:4a} shows the meta-learner's network structure. Its network consists of a Representation module and an ABP module. The Representation module is different from the feature extractor in AML because the Representation module here is responsible for the meta-learner learning and leveraging prior-knowledge to understand the input image, whereas the feature extractor in AML is meta-trained to learn how to update itself for solving few-shot learning tasks. In our work, the Representation module is a ResNet-50 network. Similar to the ABP module in AML, the ABP module here also contains an attention model and a classifier. It is responsible for quickly adjusting the meta-learner's attention and prediction based on the output features from the Representation module. Besides, Fig.\ref{fig:4a} contains an Auxiliary module. The Auxiliary module does not belong to the meta-learner; it is only used to assist the meta-learner in learning prior-knowledge. \noindent \textbf{RAML Training Process} \quad The training process of RAML can be separated into two stages: the prior-knowledge learning stage and the meta-training stage.
\textbf{At the prior-knowledge learning stage}, with the assistance of the Auxiliary module, the Representation module is trained to learn prior-knowledge about image classification in a supervised manner. The training process can be formulated as \begin{equation} \left\{ \begin{array}{lr} \gamma_i = \mathcal{F}_r(x_i; \ \theta_r) \\ \hat{y_i} = \mathcal{C}_{au}(\gamma_i; \ \theta_{au}) \\ L_{au} = \frac{1}{n} \displaystyle{\sum_{i=1}^{n}} l (\hat{y_i}, y_i) \\ \theta^*_r, \theta^{*}_{au} = \mathop{argmin}\limits_{\theta_r, \theta_{au}} L_{au} \end{array} \right. \label{eq:RAML_prior_train} \end{equation} $\mathcal{F}_r$ and $\mathcal{C}_{au}$ denote the Representation and Auxiliary modules, respectively, and $\theta_r$ and $\theta_{au}$ are their weights. $x_i$ is an input image used for the Representation module to learn prior-knowledge, and $n$ is the number of images. $\theta^*_r$ and $\theta^*_{au}$ are the learned values of $\theta_r$ and $\theta_{au}$. \textbf{At the meta-training stage}, so that the meta-learner can make good use of the learned knowledge to stably express the input image as high-level representations, the Representation module is not meta-trained. Similar to AML, in RAML we simplify the prediction of the meta-learner as $\hat{y_i} = \mathbb{F}(x_i; \theta_r^*, \theta_a, \theta_c)$, where all symbols denote the same meanings as those in AML. In RAML, the inner-update of the meta-learner on the support set can be formulated as Eq.\ref{eq:RAML_support_update1} and Eq.\ref{eq:RAML_support_update2}. We can see that, different from the inner-update of AML, which updates all weights of the network, the inner-update of RAML only updates the weights $\theta_a$ and $\theta_c$ of the ABP module. The weight $\theta_r^*$ of the Representation module is fixed to keep the learned prior-knowledge.
\begin{equation} \left\{ \begin{array}{lr} \hat{y_i} = \mathbb{F}(x_i; \theta_r^*, \theta_a, \theta_c), \\ \mathcal{L}_i(\theta_r^*, \theta_a, \theta_c) = l(\hat{y_i}, y_i), \\ \mathfrak{L}_s(\theta_r^*, \theta_a, \theta_c) = \frac{1}{N_s} \displaystyle{\sum_{i=1}^{N_s}} \mathcal{L}_i(\theta_r^*, \theta_a, \theta_c) \end{array} \right. \label{eq:RAML_support_update1} \end{equation} \begin{equation} (\theta^{'}_a, \theta^{'}_c) = (\theta_a, \theta_c) - \alpha{\boldmath \circ}\nabla_{(\theta_a, \theta_c)}\mathfrak{L}_{s}(\theta_r^*, \theta_a, \theta_c) \label{eq:RAML_support_update2} \end{equation} The meta-optimization in RAML can be formulated as Eq.\ref{eq:RAML_meta_optimize1} and Eq.\ref{eq:RAML_meta_optimize2}. \begin{equation} \left\{ \begin{array}{lr} \hat{y_i} = \mathbb{F}(x_i; \theta_r^*, \theta^{'}_a, \theta^{'}_c), \\ \mathcal{L}_i(\theta_r^*, \theta^{'}_a, \theta^{'}_c) = l(\hat{y_i}, y_i), \\ \mathfrak{L}_q(\theta_r^*, \theta^{'}_a, \theta^{'}_c) = \frac{1}{N_q} \displaystyle{\sum_{i=1}^{N_q}} \mathcal{L}_i(\theta_r^*, \theta^{'}_a, \theta^{'}_c) \end{array} \right. \label{eq:RAML_meta_optimize1} \end{equation} \begin{equation} (\theta_a, \theta_c, \alpha) = (\theta_a, \theta_c, \alpha) - \beta{\boldmath \cdot}\nabla_{(\theta_a, \theta_c, \alpha)}\mathfrak{L}_{q}(\theta_r^*, \theta^{'}_a, \theta^{'}_c) \label{eq:RAML_meta_optimize2} \end{equation} The key characteristic of RAML is that the Representation module and the ABP module are trained separately. The Representation module is trained in a supervised manner to learn prior-knowledge about image classification, and the ABP module is meta-trained to learn how to adjust itself quickly to solve few-shot learning tasks in the representation space provided by the Representation module.
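This division of labor can be sketched minimally: only the ABP weights $\theta_a, \theta_c$ receive the $\alpha$-scaled inner update of Eq.\ref{eq:RAML_support_update2}, while the Representation weights $\theta_r^*$ stay frozen. The toy quadratic loss and scalar weights below are illustrative assumptions standing in for the real modules, not the paper's implementation.

```python
import numpy as np

def raml_inner_update(theta_abp, grads, alpha):
    """One inner-update step, theta' = theta - alpha ∘ grad, applied only
    to the ABP weights (theta_a, theta_c); the Representation weights are
    not part of theta_abp and therefore never change."""
    return {k: theta_abp[k] - alpha[k] * grads[k] for k in theta_abp}

theta_r_star = np.array([1.0, 1.0])                   # frozen prior-knowledge weights
theta_abp = {"a": np.array([0.4]), "c": np.array([-1.0])}
alpha = {"a": np.array([0.5]), "c": np.array([0.1])}  # trainable per-weight step sizes

# Toy support-set loss 0.5 * ||theta||^2 over the ABP weights, so grad = theta.
grads = {k: v.copy() for k, v in theta_abp.items()}

theta_abp_prime = raml_inner_update(theta_abp, grads, alpha)
print(theta_abp_prime["a"], theta_abp_prime["c"])  # [0.2] [-0.9]
print(theta_r_star)                                # unchanged: [1. 1.]
```

Freezing $\theta_r^*$ means the inner loop searches a much smaller weight space than AML's, which is the mechanism behind RAML's simplification of the few-shot problem.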
Compared with AML, in which the meta-learner must learn to adjust not only the feature extractor but also the ABP module, RAML simplifies the few-shot learning problem: the meta-learner only needs to adjust its ABP module in the representation space. This is possibly the reason why RAML outperforms AML in our experiments. \subsection{URAML} \begin{figure*} \centering \includegraphics[width=1.98\columnwidth]{URAML.pdf} \caption{ The network structure of URAML. The meta-learner is composed of a Representation module and an ABP module. The Auxiliary module is used to assist the meta-learner in learning prior-knowledge. } \label{fig:figure5} \end{figure*} Prior-knowledge can be learned not only from labeled data but also from large-scale unlabeled data. Thus, we design the method URAML and show its network structure in Fig.\ref{fig:figure5}. Similar to RAML, the meta-learner is composed of a Representation module and an ABP module, and the Auxiliary module does not belong to the meta-learner. The training process of URAML can be separated into two stages: the prior-knowledge learning stage and the meta-training stage. \textbf{At the prior-knowledge learning stage}, the Representation module learns the knowledge with an unsupervised learning algorithm: the Split-Brain auto-encoder\cite{split-brain}. The Split-Brain auto-encoder simultaneously trains two auto-encoders with \emph{Lab} images. In the \emph{Lab} color system, the \emph{L} channel determines the brightness of the image, and the \emph{ab} channels determine the color. One auto-encoder in Split-Brain is trained to predict the unseen \emph{ab} channels of the input \emph{Lab} image, given only the \emph{L} channel. The other is trained to predict the unseen \emph{L} channel, given the \emph{ab} channels. As Fig.\ref{fig:figure5} shows, the Representation module consists of two ResNet-50 based encoders and the Auxiliary module consists of two corresponding deconvolution\cite{deconv} based decoders.
We formulate the prior-knowledge learning process as Eq.\ref{eq:URAML_knowledge1} and Eq.\ref{eq:URAML_knowledge2}. \begin{equation} \left\{ \begin{array}{lr} \gamma_i^l = \mathcal{F}_{l}(x_i^l; \ \theta_{l}) \\ \hat{x}^{ab}_i = \mathcal{D}_l(\gamma_i^l; \ \omega_{l}) \\ L_{l}(\theta_l, \omega_{l}) = \frac{1}{n} \displaystyle{\sum_{i=1}^{n}} l_2(x_i^{ab}, \ \hat{x}_i^{ab}) \\ \theta_l^{*}, \omega_{l}^{*} = \mathop{argmin}\limits_{\theta_l, \omega_{l}} L_{l}(\theta_l, \omega_{l}) \end{array} \right. \label{eq:URAML_knowledge1} \end{equation} In Eq.\ref{eq:URAML_knowledge1}, $x^l_i$ and $x^{ab}_i$ are the \emph{L} and \emph{ab} channels of the input \emph{Lab} image $x_i$, respectively. $\mathcal{F}_l$ and $\mathcal{D}_l$ are the encoder and decoder that predict $x^{ab}_i$ based on $x^l_i$, and $\hat{x}_i^{ab}$ is the prediction. $\theta_l$ and $\omega_{l}$ are the weights of $\mathcal{F}_l$ and $\mathcal{D}_l$, respectively, and $\theta_l^{*}$ and $\omega_{l}^{*}$ are their optimized values. $\gamma_i^l$ is the compressed feature of $x^l_i$ produced by the encoder $\mathcal{F}_l$. $L_l$ is the loss of $\mathcal{F}_l$ and $\mathcal{D}_l$, and $l_2$ is the \emph{MSE} loss function. $n$ is the number of \emph{Lab} images used to train $\mathcal{F}_l$ and $\mathcal{D}_l$. In Eq.\ref{eq:URAML_knowledge2}, all symbols are defined in the same way as those in Eq.\ref{eq:URAML_knowledge1}. \begin{equation} \left\{ \begin{array}{lr} \gamma_i^{ab} = \mathcal{F}_{ab}(x_i^{ab}; \ \theta_{ab}) \\ \hat{x}_i^l = \mathcal{D}_{ab}(\gamma_i^{ab}; \ \omega_{ab}) \\ L_{ab}(\theta_{ab}, \omega_{ab}) = \frac{1}{n} \displaystyle{\sum_{i=1}^{n}} l_2(x_i^{l}, \ \hat{x}_i^{l}) \\ \theta_{ab}^{*}, \omega_{ab}^{*} = \mathop{argmin}\limits_{\theta_{ab}, \omega_{ab}} L_{ab}(\theta_{ab}, \omega_{ab}) \end{array} \right.
\label{eq:URAML_knowledge2} \end{equation} After unsupervised learning, the representation $\gamma_i$ of an \emph{Lab} image $x_i$ is calculated by first concatenating $\gamma_i^l$ with $\gamma_i^{ab}$ and then average-pooling, i.e., $\gamma_i = \mathcal{P}_{a}(\gamma_i^l, \gamma_i^{ab})$, where $\mathcal{P}_{a}$ is an average-pooling layer. \textbf{At the meta-training stage}, the ABP module is trained in the same way as in RAML. Note that the learned weights of the Representation module in URAML are $\theta_r^* = [\theta_{l}^*, \theta_{ab}^*]$. To conclude this section, we briefly summarize our three methods. Inspired by human cognition, which makes full use of the attention mechanism and prior-knowledge to efficiently learn new knowledge, we design a novel paradigm with three methods that step-by-step utilize the attention mechanism and prior-knowledge in meta-learning. Firstly, the method AML is designed to leverage the attention mechanism in meta-learning. Secondly, the method RAML is designed to use not only the attention mechanism but also prior-knowledge. Compared with RAML, the method URAML learns the prior-knowledge with unsupervised learning, which gives URAML the advantage that its performance can keep improving as more unlabeled images become available for the prior-knowledge learning stage and as unsupervised learning algorithms progress. \section{Experiments} In this section, we first present the datasets used in our experiments, and then the details and results of our experiments. \subsection{Dataset} We use several datasets in our experiments: MiniImagenet\cite{miniimagenet}, Omniglot\cite{omniglot}, MiniImagenet-900, Places2\cite{places365}, COCO\cite{COCO}, and OpenImages-300. Note that we resize all the images in Omniglot into 28x28 resolution, and all the other images into 84x84.
\subsubsection{MiniImagenet} MiniImagenet\cite{miniimagenet} is widely used for evaluating few-shot learning and meta-learning. It contains 100 image classes, including 64 training classes, 16 validation classes, and 20 testing classes. Each class contains 600 images sampled from the ImageNet dataset\cite{imagenet}. \subsubsection{Omniglot} Omniglot\cite{omniglot} is another widely used dataset for meta-learning. It contains 50 different alphabets and 1623 characters from these alphabets, and each character has 20 images, each hand-drawn by a different person. \subsubsection{MiniImagenet-900} The MiniImagenet-900 dataset is designed for the Representation modules in RAML and URAML to learn prior-knowledge, and it is composed of 900 image classes. Each class contains 1300 images collected from the original ImageNet dataset. It is worth noting that no image class in MiniImagenet-900 coincides with any class in the MiniImagenet dataset. \subsubsection{Other Datasets} As the Representation module of URAML is trained by unsupervised learning, we take full advantage of this characteristic by training it on not only MiniImagenet-900 but also Places2\cite{places365}, COCO2017\cite{COCO}, and OpenImages-300. The dataset OpenImages-300 is a subset of the OpenImages-V4 dataset\cite{openimages}. The full OpenImages-V4 dataset contains 9 million images, and we randomly downloaded 3 million images from the OpenImages-V4 website to form the OpenImages-300 dataset. \subsection{Experiments on MiniImagenet} On MiniImagenet, we test all our methods on 5-way 1-shot and 5-way 5-shot classification tasks. The testing accuracy is averaged over 600 tasks, with 95\% confidence intervals, and all 600 tasks are randomly generated on the test set of MiniImagenet. The support and query set of each $N$-way $K$-shot task contain $NK$ and $15N$ images, respectively.
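The episode generation protocol described above can be sketched as follows. This is a minimal illustration of the sampling logic only; dataset loading is replaced by a flat label array standing in for MiniImagenet's test split.

```python
import numpy as np

def sample_task(labels, n_way=5, k_shot=1, n_query=15, rng=None):
    """Sample an N-way K-shot episode: the support set gets N*K images
    and the query set gets n_query images per class (15*N in total)."""
    if rng is None:
        rng = np.random.default_rng(0)
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for task_label, c in enumerate(classes):
        idx = rng.permutation(np.flatnonzero(labels == c))
        support += [(i, task_label) for i in idx[:k_shot]]
        query += [(i, task_label) for i in idx[k_shot:k_shot + n_query]]
    return support, query

# Toy label array: 64 classes x 600 images, like MiniImagenet's training split.
labels = np.repeat(np.arange(64), 600)
support, query = sample_task(labels, n_way=5, k_shot=1)
```

Each episode relabels its sampled classes to $0,\dots,N-1$, so the classifier head is task-local, as is standard in episodic training.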
\begin{table*} \centering \caption{Few-shot learning performance on Omniglot. Methods colored blue use a deep network (ResNet) to extract image features, while the others use a shallow network (4 cascading convolution layers). The accuracy is tested in the same way as in MAML\cite{MAML}.} \resizebox{1.7\columnwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} &\multirow{2}{*}{Venue} &\multicolumn{2}{c|}{5-way Accuracy} &\multicolumn{2}{c|}{20-way Accuracy} \\ \cline{3-6} & &1-shot &5-shot &1-shot &5-shot\\ \hline MAML\cite{MAML} &ICML-17 &98.70$\pm$0.40\% &99.90$\pm$0.10\% &95.80$\pm$0.30\% &98.90$\pm$0.20\% \\ \hline Prototypical Nets\cite{prototypical} &NIPS-17 &98.80\% &99.70\% &96.00\% &98.90\% \\ \hline Meta-SGD\cite{Meta-SGD} &/&99.53$\pm$0.26\% &{\bfseries 99.93$\pm$0.09\%} &95.93$\pm$0.38\% &98.97$\pm$0.19\% \\ \hline Relation Net\cite{comparenet} &CVPR-18 &99.60$\pm$0.20\% &99.80$\pm$0.10\% &97.60$\pm$0.20\% &99.10$\pm$0.10\% \\ \hline GNN\cite{GNN} &ICLR-18 &99.20\% &99.70\% &97.40\% &99.00\% \\ \hline Spot-Learn\cite{chu2019spot} &CVPR-19 &97.56$\pm$0.31\% &99.65$\pm$0.06\% &/ &/ \\ \hline iMAML HF\cite{IMAML} &NIPS-19 &{99.50$\pm$0.26\%} &99.74$\pm$0.11\% &{96.18$\pm$0.36\%} &{99.14$\pm$0.10\%} \\ \hline {\color{blue}SNAIL}\cite{SNAIL}&ICLR-18 &99.07$\pm$0.16\% &99.78$\pm$0.09\%&97.64$\pm$0.30\% &99.36$\pm$0.18\% \\ \hline {\color{blue}MetaGAN+RN}\cite{Metagan}& NIPS-18 &{\bfseries{\color{black}99.67$\pm$0.18\%}} & 99.86$\pm$0.11\% & 97.64$\pm$0.17\% & 99.21$\pm$0.10\%\\ \hline AML(ours) &/ &{\bfseries 99.65$\pm$0.10\%} &99.85$\pm$0.04\% &{\bfseries 98.48$\pm$0.09\%} &{\bfseries 99.55$\pm$0.06\%} \\ \hline \end{tabular} } \label{tab:result on Omniglot} \end{table*} \begin{table} \centering \caption{Few-shot learning performance on MiniImagenet. Methods colored blue use a deep network to extract image features, while the others use a shallow network.
For each task, we separately highlight the best result among the methods using a shallow network and among those using a deep network. } \resizebox{0.98\columnwidth}{!}{ \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Method} &\multirow{2}{*}{Venue} &\multicolumn{2}{c|}{5-way Accuracy} \\ \cline{3-4} & &1-shot &5-shot \\ \hline MAML\cite{MAML} &ICML-17 &48.70$\pm$1.84\% &63.11$\pm$0.92\% \\ \hline Prototypical Nets\cite{prototypical} &NIPS-17 &49.42$\pm$0.78\% &68.20$\pm$0.66\% \\ \hline Meta-SGD\cite{Meta-SGD} & / &50.47$\pm$1.87\% &64.03$\pm$0.94\% \\ \hline LLAMA\cite{LLAML} & ICLR-18 & 49.40$\pm$1.83\% & / \\ \hline Relation Net\cite{comparenet} &CVPR-18 &51.38$\pm$0.82\% &67.07$\pm$0.69\% \\ \hline GNN\cite{GNN} &ICLR-18 &50.33$\pm$0.36\% &66.41$\pm$0.63\% \\ \hline Spot-Learn\cite{chu2019spot} &CVPR-19 &51.03$\pm$0.78\% &67.96$\pm$0.71\% \\ \hline iMAML HF\cite{IMAML} &NIPS-19 &49.30$\pm$1.88\% &/ \\ \hline Meta-MinibatchProx\cite{zhou2019efficient} &NIPS-19 &50.77$\pm$0.90\% &67.43$\pm$0.89\% \\ \hline AML(ours)& / &{\bfseries {\color{black} 52.25$\pm$0.85\%}} &{\bfseries {\color{black} 69.46$\pm$0.68\%} } \\ \hline \hline {\color{blue}SNAIL}\cite{SNAIL}& ICLR-18 &55.71$\pm$0.99\% &68.88$\pm$0.92\% \\ \hline {\color{blue}TADAM}\cite{TADAM}& NIPS-18 &58.50$\pm$0.30\% &76.70$\pm$0.30\% \\ \hline {\color{blue}MetaGAN+RN}\cite{Metagan}& NIPS-18 &52.71$\pm$0.64\% & 68.63$\pm$0.67\% \\ \hline {\color{blue}AM3-TADAM}\cite{xing2019adaptive}& ICLR-19 &65.30$\pm$0.49\% &78.10$\pm$0.36\% \\ \hline {\color{blue}Incremental}\cite{ren2019incremental}& NIPS-19 &54.95$\pm$0.30\% &63.04$\pm$0.30\% \\ \hline {\color{blue}RAML(ours)} &/&{\bfseries {\color{black}63.66$\pm$0.85\%}} &{\bfseries {\color{black}80.49$\pm$0.45\%}} \\ \hline {\color{blue}URAML(ours)} &/&{\bfseries {\color{black}49.56$\pm$0.79\%}} &{\bfseries {\color{black}63.42$\pm$0.76\%}} \\ \hline \end{tabular} } \label{tab:result on MiniImagenet} \end{table} \textbf{In AML}, the network structure of the
meta-learner is shown in Fig.\ref{fig:network of the AML method}. The feature extractor is composed of 4 Convolution layers and the classifier is a fully-connected layer, and the attention model structure is shown in Fig.\ref{fig:network structure of attention}. Each Convolution layer has 64 channels and is followed by a ReLU and a batch-normalization layer. We train the meta-learner on 200000 randomly generated tasks for 60000 iterations, setting the learning rate to 0.001 and decaying it to 0.0001 after 30000 iterations. Moreover, Dropout with rate 0.2, and L1 and L2 regularization with coefficients 0.001 and 0.00001, respectively, are used to prevent the meta-learner from over-fitting. The experimental result of the method AML on MiniImagenet is shown in Tab.\ref{tab:result on MiniImagenet}. Note that in Tab.\ref{tab:result on MiniImagenet}, a method whose name is printed in black uses a shallow network consisting of 4 or 5 Convolution layers and one or two fully-connected layers, and a method whose name is printed in blue uses a deep ResNet-based network. Among all the methods using a shallow network, AML attains state-of-the-art performance on both the 5-way 1-shot and 5-way 5-shot image classification tasks. \textbf{In RAML}, the Representation module is a ResNet-50\cite{resnet} network, and the Auxiliary module is a fully-connected layer. The attention model is the same as that in AML, and the classifier is composed of two fully-connected layers. At the prior-knowledge learning stage, we set the batch size to 256 and the learning rate to 0.001, decay the learning rate to 0.0001 after 30000 iterations, and use L2 regularization with coefficient 0.00001 and Dropout with rate 0.2 to prevent the Representation module from over-fitting. At the meta-training stage, the ABP module is meta-trained with the same settings as AML. The experimental result of RAML is shown in Tab.\ref{tab:result on MiniImagenet}.
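The attention model's exact structure is given in the figure referenced above; as a rough sketch, the ablation section later describes it as a convolution layer with kernel size 1x1, which amounts to a per-pixel linear map over channels. The sigmoid gating and the elementwise product below are our assumptions for illustration, not the paper's verified design.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1x1(x, w, b):
    """A 1x1 convolution is a per-pixel linear map over channels.
    x: (H, W, C_in), w: (C_in, C_out), b: (C_out,)."""
    return x @ w + b

def attend(gamma, w, b):
    """Gate the feature map gamma by a sigmoid attention mask produced
    from a 1x1 convolution, yielding the attended feature gamma^alpha."""
    att = 1.0 / (1.0 + np.exp(-conv1x1(gamma, w, b)))  # values in (0, 1)
    return gamma * att

gamma = rng.normal(size=(5, 5, 64))      # feature map from the 4-layer extractor
w = rng.normal(scale=0.1, size=(64, 64))
b = np.zeros(64)
gamma_alpha = attend(gamma, w, b)
```

Because the mask lies in $(0,1)$, the gating can only attenuate features, which matches the intuition of suppressing non-critical activations.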
Compared to AML, RAML improves the meta-learner's performance significantly. It raises the accuracy on 5-way 1-shot tasks from 52.25\% to 63.66\%, and the accuracy on 5-way 5-shot tasks from 69.46\% to 80.49\%. The most likely reason why RAML performs well is that, before the meta-training stage, the Representation module has already learned knowledge that helps the meta-learner understand new input images, and it provides high-level, meaningful representations and features of each input. In the meta-training stage, the meta-learner's job becomes easier because it only needs to learn how to quickly adjust its ABP module according to the compact features the Representation module provides, and does not need to handle the original high-dimensional input data. In contrast, the meta-learner of AML works harder, as it has to adjust its entire network to fit new few-shot learning tasks from the original input data. \begin{table*} \centering \caption{Ablation experimental results about the attention mechanism on Omniglot.} \resizebox{1.4\columnwidth}{!}{ \begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{Method} &\multicolumn{2}{c|}{5-way Accuracy} &\multicolumn{2}{c|}{20-way Accuracy} \\ \cline{2-5} &1-shot &5-shot &1-shot &5-shot \\ \hline MAML* &97.40$\pm$0.27\% &{\bfseries 99.71$\pm$0.05\%} &{\bfseries 93.37$\pm$0.23\%} &97.46$\pm$0.11\% \\ \hline MAML+attention &{\bfseries 97.41$\pm$0.28\%} &99.48$\pm$0.12\% &92.99$\pm$0.25\% &{\bfseries 97.94$\pm$0.10\%} \\ \hline Meta-SGD* &98.94$\pm$0.17\% &99.51$\pm$0.07\% &95.82$\pm$0.21\% &98.40$\pm$0.09\% \\ \hline Meta-SGD+attention &{\bfseries 99.26$\pm$0.15\%} &{\bfseries 99.79$\pm$0.04\%} &{\bfseries 97.94$\pm$0.14\%} &{\bfseries 98.99$\pm$0.10\%} \\ \hline \end{tabular} } \label{tab:ablation result on Omniglot} \end{table*} \textbf{In URAML}, the Representation module learns the prior-knowledge with an unsupervised learning algorithm: Split-Brain.
As Fig.\ref{fig:figure5} shows, the Representation module is composed of two independent ResNet-50 based encoders, and we halve all the filters in each encoder so that the Representation module outputs a feature vector of dimension 2048, the same as in RAML. The Auxiliary module is composed of two deconvolution-based decoders, and Tab.\ref{tab:URAML_decoder_net} shows the detail of the decoder network structure. The number of filters in the last Conv layer is 1 or 2, depending on whether the decoder recovers the \emph{L} channel or the \emph{ab} channels of the \emph{Lab} image. At both the prior-knowledge learning and meta-training stages, we set all hyperparameters the same as those in the RAML experiment. Note that, to save training computation cost, the decoders in the Auxiliary module recover the \emph{ab} and \emph{L} channels at 11x11 resolution rather than the original 84x84. When calculating the \emph{MSE} losses $L_{l}(\theta_l, \omega_{l})$ and $L_{ab}(\theta_{ab}, \omega_{ab})$ shown in Eq.\ref{eq:URAML_knowledge1} and Eq.\ref{eq:URAML_knowledge2}, we first resize the \emph{ab} and \emph{L} channels of the input \emph{Lab} image to 11x11 resolution and then calculate $L_{l}(\theta_l, \omega_{l})$ and $L_{ab}(\theta_{ab}, \omega_{ab})$. The experimental result of URAML is shown in Tab.\ref{tab:result on MiniImagenet}. We also highlight the result of URAML in Tab.\ref{tab:result on MiniImagenet}, even though it is not state-of-the-art. In our view, the reason why URAML lags behind RAML is that the Representation module in URAML learns the prior-knowledge with unsupervised learning, while the Representation module in RAML learns with supervised learning.
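The downsampled reconstruction losses and the final representation can be sketched as follows. This is an illustration under stated assumptions: the encoders and decoders are replaced by random stand-in arrays, and nearest-neighbour downsampling stands in for whatever resize the paper uses.

```python
import numpy as np

rng = np.random.default_rng(2)

def downsample(img, size=11):
    """Nearest-neighbour resize of an (H, W, C) image to (size, size, C)."""
    h, w = img.shape[:2]
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    return img[np.ix_(ys, xs)]

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Toy Lab image: L channel (84x84x1) and ab channels (84x84x2).
x_l = rng.random((84, 84, 1))
x_ab = rng.random((84, 84, 2))

# Stand-ins for decoder outputs: each decoder predicts the *other* channels
# at 11x11 resolution, so the targets are downsampled before the MSE loss.
pred_ab = rng.random((11, 11, 2))        # decoder D_l's prediction of ab
pred_l = rng.random((11, 11, 1))         # decoder D_ab's prediction of L
loss_l = mse(downsample(x_ab), pred_ab)  # analogue of L_l
loss_ab = mse(downsample(x_l), pred_l)   # analogue of L_ab

# After training, the image representation concatenates the two encoders'
# features and average-pools: gamma = P_a(gamma_l, gamma_ab).
gamma_l = rng.normal(size=(3, 3, 1024))  # halved-width encoder output (assumed shape)
gamma_ab = rng.normal(size=(3, 3, 1024))
gamma = np.concatenate([gamma_l, gamma_ab], axis=-1).mean(axis=(0, 1))
```

The concatenate-then-pool step yields a 2048-dimensional vector, matching the feature dimension used in RAML.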
\begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Detailed structure of the decoder module in URAML.} \centering \label{tab:URAML_decoder_net} \resizebox{0.6\columnwidth}{!}{ \begin{tabular}{|c|c|c|} \hline Layers & Number of filters & Kernel \\ \hline CONV & 1024 &5 \\ \hline DeCONV & 512 &3 \\ \hline DeCONV & 256 &3 \\ \hline CONV & 1 or 2 &1 \\ \hline \end{tabular} } \end{table} \begin{table} \centering \caption{Ablation experimental results about the attention mechanism on MiniImagenet.} \resizebox{0.9\columnwidth}{!}{ \begin{tabular}{|l|c|c|} \hline \multirow{2}{*}{Method} &\multicolumn{2}{c|}{5-way Accuracy} \\ \cline{2-3} &1-shot &5-shot \\ \hline MAML* &48.03$\pm$0.83\% &64.11$\pm$0.73\% \\ \hline MAML+attention &{\bfseries 48.52$\pm$0.85\%} &{\bfseries 64.94$\pm$0.69\%} \\ \hline Reptile* &48.23$\pm$0.43\% &63.69$\pm$0.49\% \\ \hline Reptile+attention &{\bfseries 48.30$\pm$0.45\%} &{\bfseries 64.22$\pm$0.39\%} \\ \hline Meta-SGD* &48.15$\pm$0.93\% &63.73$\pm$0.85\% \\ \hline Meta-SGD+attention &{\bfseries 49.11$\pm$0.94\%} &{\bfseries 65.54$\pm$0.84\%} \\ \hline \end{tabular} } \label{tab:ablation result on MiniImagenet} \end{table} \subsection{Experiments on Omniglot} As Omniglot is a much easier dataset than MiniImagenet, on which existing meta-learners can easily achieve more than 95\% accuracy on most testing tasks, we only test the method AML on Omniglot. As in the experiments on MiniImagenet, we train the meta-learner on 200000 randomly generated tasks for 60000 iterations and set the learning rate to 0.001. The experimental results are shown in Tab.\ref{tab:result on Omniglot}. It is clear that the proposed method AML attains state-of-the-art performance on 2 of the 4 kinds of few-shot image classification tasks. On the 5-way 1-shot task, though the method MetaGAN+RN performs slightly better than AML, we still highlight AML, as MetaGAN+RN uses a deeper ResNet-based network while AML uses a shallower network.
On the 20-way 1-shot task, our method AML surpasses the other methods by a large margin. For example, compared to iMAML HF, AML improves the meta-learner's performance from 96.18\% to 98.48\%. \subsection{Ablation Study} \subsubsection{Ablation Study about the Attention Mechanism} To confirm the promoting effect of the attention mechanism on meta-learning, we conduct experiments comparing a meta-learner equipped with the attention model against its counterpart without it. The experimental results are shown in Tab.\ref{tab:ablation result on MiniImagenet} and Tab.\ref{tab:ablation result on Omniglot}. Meta-learners marked with * are re-implemented by ourselves. The performances of our re-implemented meta-learners differ slightly from those reported in their original papers, probably because of different hyper-parameters or experiment settings (all methods in this experiment use convolution layers with 32 filters). The comparisons in Tab.\ref{tab:ablation result on MiniImagenet} and Tab.\ref{tab:ablation result on Omniglot} reveal that, in most cases, the attention mechanism improves the meta-learner significantly, which demonstrates the soundness of our idea. Since the attention mechanism adds weights and computation cost to the meta-learner, we conduct another experiment to verify that the improvement of AML comes from the attention mechanism itself rather than from the extra weights and computation. The experiment is as follows: since the attention model in AML is a convolution layer with a kernel size of 1x1, we remove the attention model and stack a convolution layer with the same kernel size on top of the CNN feature extractor. We name the meta-learner with this network AML-attention, and its number of weights is the same as that of AML.
The corresponding experimental result is shown in Tab.\ref{tab:other ablation}, and it is clear that AML outperforms AML-attention, which further demonstrates the benefit of the attention mechanism for meta-learning. \begin{table} \centering \caption{Results of several ablation experiments. } \resizebox{0.8\columnwidth}{!}{ \begin{tabular}{|l|c|c|} \hline \multirow{2}{*}{Method} &\multicolumn{2}{c|}{5-way Accuracy} \\ \cline{2-3} &1-shot &5-shot \\ \hline AML&{\bfseries 52.25$\pm$0.85\%} &{\bfseries69.46$\pm$0.68\%} \\ \hline AML-attention &51.27$\pm$0.78\% &67.73$\pm$0.65\% \\ \hline RAML &{\bfseries 63.66$\pm$0.85\%} & {\bfseries 80.49$\pm$0.45\%} \\ \hline RAML-Places2 &58.82$\pm$0.89\% &74.09$\pm$0.76\% \\ \hline \end{tabular} } \label{tab:other ablation} \end{table} \begin{table*} \centering \footnotesize \caption{Ablation experimental results about URAML.} \resizebox{1.9\columnwidth}{!}{ \begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Dataset} & \multirow{2}{*}{Number of images} & \multicolumn{2}{c|}{5-way Accuracy} \\ \cline{4-5} & & &1-shot &5-shot \\ \hline URAML-V1 & MiniImagenet-900 & 1.15million &45.91$\pm$0.79\% &61.04$\pm$0.71\% \\ \hline URAML-V2 & MiniImagenet-900, places365, COCO2017 & 4.10million &48.82$\pm$0.79\% &62.84$\pm$0.78\% \\ \hline URAML-AE & MiniImagenet-900, places365, COCO2017, OpenImages-300 & 7.10million &33.29$\pm$0.71\% &43.60$\pm$0.66\% \\ \hline URAML & MiniImagenet-900, places365, COCO2017, OpenImages-300 & 7.10million &{\bfseries 49.56$\pm$0.79\%} &{\bfseries63.42$\pm$0.76\%} \\ \hline \end{tabular} } \label{tab:URAML_exp} \end{table*} \subsubsection{Prior-Knowledge Learning Dataset} We conduct experiments to test how the prior-knowledge learning dataset affects RAML and URAML. \emph{a) Effect on RAML}: In RAML, the default prior-knowledge learning dataset is our reorganized MiniImagenet-900 dataset.
In this experiment, the Representation module learns the prior-knowledge on Places2\cite{places365} instead of MiniImagenet-900, and all the other experiment settings and hyper-parameters are identical to those of the original RAML. We denote this meta-learner RAML-Places2. The corresponding experimental result is shown in Tab.\ref{tab:other ablation}. It is clear that the prior-knowledge learning dataset affects the meta-learner. The reason is that different prior-knowledge learning datasets lead the Representation module to learn different knowledge and to express image features differently. Places2 is a dataset commonly used for scene classification, which results in the Representation module learning knowledge about scene understanding rather than object classification. \emph{b) Effect on URAML}: In this experiment, we test how the quantity of unlabeled \emph{Lab} images in the prior-knowledge learning dataset affects URAML. We design two new versions of URAML: URAML-V1 and URAML-V2. The Representation module of URAML-V1 learns prior-knowledge only on MiniImagenet-900, and that of URAML-V2 learns prior-knowledge on not only MiniImagenet-900 but also Places2 and COCO2017. Compared with URAML-V1 and URAML-V2, the quantity of unlabeled \emph{Lab} images used in the original URAML is the largest, as MiniImagenet-900, Places2, COCO2017, and OpenImages-300 are all used. Tab.\ref{tab:URAML_exp} shows the prior-knowledge learning datasets and the performances of URAML-V1, URAML-V2, and the original URAML. It is clear that the original URAML performs the best: the more unlabeled \emph{Lab} images used for the meta-learner to learn prior-knowledge, the better the meta-learner performs. Besides, there remains large room for improvement, as even more unlabeled data can be used in URAML. \subsubsection{Unsupervised Learning for URAML} The development of unsupervised learning also affects URAML considerably.
To verify this viewpoint, we conduct an experiment in which the Representation module of URAML learns the prior-knowledge with a basic unsupervised learning method, the Auto-Encoder\cite{AE}, and we name this version of URAML URAML-AE. The experimental result of URAML-AE shown in Tab.\ref{tab:URAML_exp} reveals that the unsupervised learning algorithm affects the meta-learner significantly. Perhaps the most promising way to improve the performance of URAML is to develop better unsupervised learning algorithms and collect more unlabeled data. \subsection{Cross-Testing Experiment} We find that existing meta-learning methods generally suffer from a Task-Over-Fitting (TOF) problem, and this problem has seldom been studied. An example of the TOF problem is that a meta-learner to be tested on 5-way 1-shot classification tasks should be trained on 5-way 1-shot tasks rather than on other tasks, and similarly, a meta-learner to be tested on 5-way 5-shot tasks should be trained on 5-way 5-shot tasks. This is because the meta-learner trained on 5-shot tasks over-fits to 5-shot tasks, and when tested on 1-shot tasks, it will perform noticeably worse than the meta-learner trained on 1-shot tasks. We conduct extensive cross-testing experiments to test how much MAML, Meta-SGD, AML, RAML, and URAML suffer from the TOF problem, and the experimental results show that, compared with the other methods, our methods suffer less from this problem, especially RAML and URAML. For each tested meta-learning method, we conduct the cross-testing experiments in the following way: 1) train the meta-learner on 5-way $K$-shot image classification tasks, where $K\in$\{1,3,5,7,9\}; 2) test the meta-learner on 5-way $J$-shot tasks, where $J\in$\{1,3,5,7,9\}. For example, we train a meta-learner with MAML on 5-way 3-shot tasks and test its performance on all 5-way $K$-shot tasks, $K\in$\{1,3,5,7,9\}. The experimental results are shown in Fig.\ref{fig:cross_test}.
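The two-step cross-testing protocol above can be sketched as a simple grid. The `train_fn` and `test_fn` below are placeholders, not the paper's training code; the toy accuracy function, which peaks when the training and testing shot counts match, merely mimics the TOF pattern described above.

```python
import itertools

shots = [1, 3, 5, 7, 9]

def cross_test(train_fn, test_fn):
    """Train one meta-learner per K-shot setting, then test each learner
    on every J-shot setting, producing a 5x5 accuracy grid."""
    learners = {k: train_fn(k) for k in shots}
    return {(k, j): test_fn(learners[k], j)
            for k, j in itertools.product(shots, shots)}

# Toy stand-ins: a "learner" is just its training K; accuracy peaks at J == K,
# which is the over-fitting pattern a TOF-prone method would show.
grid = cross_test(lambda k: k, lambda k, j: 100 - abs(k - j))
```

A TOF-resistant method would instead show one column dominating every row of the grid, which is roughly what the figure reports for URAML.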
\begin{figure*} \centering \includegraphics[width=2.05\columnwidth]{miniimagenet_cross_test.pdf} \caption{Results of cross-testing experiments among MAML, Meta-SGD, AML, RAML and URAML. Each column corresponds to a meta-learner trained on specific $K$-shot training tasks and each row to specific $J$-shot testing tasks, where $K, J\in$\{1,3,5,7,9\}. For each method, the value at the $J$-shot row and $K$-shot column gives the $J$-shot testing accuracy of the meta-learner trained on $K$-shot training tasks. For example, the value 59.99 at the 3-shot row and 7-shot column of MAML gives the 3-shot testing accuracy of the MAML meta-learner trained on 7-shot training tasks. The value 80.83 at the 9-shot row and 1-shot column of RAML gives the 9-shot testing accuracy of the RAML meta-learner trained on 1-shot training tasks. } \label{fig:cross_test} \end{figure*} Fig.\ref{fig:cross_test} clearly shows that MAML suffers seriously from the TOF problem, because the meta-learner that performs best on $K$-shot tasks often does not perform well on $J$-shot tasks, where $K \neq J$. For example, in MAML, the meta-learner trained on 1-shot tasks performs best on the 1-shot tasks, but it cannot perform as well as the other meta-learners on 3-, 5-, 7-, and 9-shot tasks, which means the meta-learner trained on 1-shot tasks over-fits to 1-shot tasks. The meta-learner trained by URAML is troubled little by the TOF problem, because the meta-learner that performs best on $K$-shot tasks usually also performs best on $J$-shot tasks, where $K$, $J\in$\{1,5,7,9\}. For example, in URAML, the meta-learner trained on 1-shot tasks performs best not only on the 1-shot tasks but also on 5-, 7-, and 9-shot tasks, which means it generalizes well to the other $J$-shot tasks. We design a metric, Cross-Entropy across Tasks (CET), to quantify how vulnerable a meta-learning approach is to the TOF problem.
The evaluation process is shown as Eq.\ref{eq:entropy}, where \emph{i}, \emph{j}$\in$\{1,3,5,7,9\} and bold variables are vectors. $\mathcal{S}$ and $\mathcal{D}$ are the softmax and cross-entropy operations. $\bm{a}_i$ contains the testing accuracies of the five meta-learners trained on 1-, 3-, 5-, 7-, and 9-shot tasks when they are tested on $i$-shot tasks. $\bm{d}_i$ is the meta-learners' accuracy distribution on $i$-shot tasks. $l_{i,j}$ denotes the dissimilarity between the accuracy distribution vectors $\bm{d}_{i}$ and $\bm{d}_{j}$, where \emph{i,j$\in${\{1,3,5,7,9\}}}. $L$ is the sum of $l_{i,j}$ over all pairs with $i \neq j$, measuring the overall dissimilarity for a specific approach. \begin{equation} \left\{ \begin{array}{lr} \bm{d}_i = \mathcal{S}(\bm{a}_i/\max(\bm{a}_i)) \\ l_{i,j} = \mathcal{D}(\bm{d}_i, \bm{d}_j) \\ L = \displaystyle{\sum_{i,j\in\{1,3,5,7,9\}, \ i\neq j}} l_{i,j} \end{array} \right. \label{eq:entropy} \end{equation} For example, the testing accuracies $\bm{a}_{3}$ = [58.24\%, 59.18\%, 58.90\%, 58.75\%, 59.15\%] of Meta-SGD are those of its five trained meta-learners when tested on 3-shot tasks. So, $\bm{a}_3 / \max(\bm{a}_3)$ = [58.24\%, 59.18\%, 58.90\%, 58.75\%, 59.15\%] / 59.18\%, and $\bm{d}_{3}$ = $\mathcal{S}(\bm{a}_3 / \max(\bm{a}_3))$ = [0.116, 0.255, 0.202, 0.178, 0.249]. Similarly, \emph{\textbf{d$_{7}$}} = [0.122, 0.206, 0.255, 0.233, 0.184]. Then, $l_{3,7}$ = 1.603, and $L$ = 34.22. Obviously, the smaller the total distance $L$, the less the meta-learning approach suffers from the TOF problem. We show different meta-learning approaches' performance on the CET metric in Tab.\ref{tab:cross testing}. This experiment shows that the proposed AML, RAML, and URAML perform better than MAML and Meta-SGD on the CET metric, and RAML and URAML perform best.
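The CET computation can be sketched directly from the equation above. Note a caveat: the sketch implements the CET equation exactly as written, but the worked example's distribution values suggest an additional scaling inside the softmax that the paper does not state, so the numbers below will not reproduce the reported $\bm{d}_3$ or $L$ exactly; the toy accuracy table passed to `cet` is ours, not experimental data.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def cross_entropy(p, q):
    """D(p, q) = -sum_k p_k log q_k."""
    return float(-(p * np.log(q)).sum())

def cet(acc):
    """Cross-Entropy across Tasks. acc[i] holds the accuracies of the five
    trained meta-learners (1-, 3-, 5-, 7-, 9-shot) tested on setting i."""
    d = [softmax(a / a.max()) for a in acc]
    return sum(cross_entropy(d[i], d[j])
               for i in range(len(d)) for j in range(len(d)) if i != j)

# Meta-SGD's 3-shot testing accuracies from the worked example.
a3 = np.array([58.24, 59.18, 58.90, 58.75, 59.15])
d3 = softmax(a3 / a3.max())

# A toy accuracy table with three testing settings (illustrative only).
L = cet([a3, a3 + 0.5, a3 - 0.3])
```

A low $L$ means the ranking of the five trained meta-learners is stable across testing settings, which is exactly the TOF-resistance the metric is meant to capture.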
The possible reason for this is that prior-knowledge and the attention mechanism both help the meta-learner reduce its few-shot cognitive load and avoid being affected by redundant, useless information. \begin{table} \centering \caption{Performance of different meta-learning methods on the CET metric.} \resizebox{0.9\columnwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|} \hline Method & MAML & Meta-SGD & AML & RAML &URAML \\ \hline CET & 57.19 & 34.22 & 33.35 & {\bfseries 32.13} & 32.16 \\ \hline \end{tabular} } \label{tab:cross testing} \end{table} We can see an interesting phenomenon in Fig.\ref{fig:cross_test}: the meta-learner trained by RAML on 5-way 9-shot tasks performs best on most of the testing tasks, while for URAML it is the meta-learner trained on 5-way 1-shot tasks that performs best. The possible reason behind this phenomenon is that the Representation module of RAML learns knowledge by supervised learning, while the Representation module of URAML learns knowledge by unsupervised learning, which results in different output features between the two kinds of Representation modules. \subsection{Feature Analysis} To understand the effect of the attention mechanism, we visualize the distributions of the features $\gamma$ and $\gamma^\alpha$ (shown in Fig.\ref{fig:network of the AML method}, Fig.\ref{fig:4a} and Fig.\ref{fig:figure5}) in Fig.\ref{fig:feature points} with t-SNE\cite{tSNE}. In each picture of Fig.\ref{fig:feature points}, the 500 feature points represent the 500 $\gamma$ or $\gamma^\alpha$ of the query-set images of a 5-way 1-shot or 5-shot task randomly generated on the test set of MiniImagenet. The average inner-class distance D1 of $\gamma^\alpha$ is smaller than that of $\gamma$, and the average inter-class distance D2 of $\gamma^\alpha$ is larger than that of $\gamma$. This result indicates that, among different image classes, the distribution of $\gamma^\alpha$ is more distinguishable than that of $\gamma$.
The reason is that the attention mechanism enables the meta-learner to quickly adjust its attention to critical image features, making $\gamma^\alpha$ more distinguishable than $\gamma$ for differentiating images of different classes. \begin{figure*} \centering \includegraphics[width=2.05\columnwidth]{distribution.pdf} \caption{ Visualization of the distributions of the features $\gamma$ and $\gamma^\alpha$ for all three of our methods. For each method, we randomly generate a 5-way 1-shot and a 5-way 5-shot testing task on MiniImagenet, and the query set of each task contains 100 images per image class. For each testing task, after the meta-learner inner-updates on the support set, we use t-SNE to visualize the distributions of the meta-learner's $\gamma$ and $\gamma^\alpha$ on the query set images. In each picture, five colors represent the 5 image classes in the testing task and each point denotes the feature $\gamma$ or $\gamma^\alpha$. We also show the average inner-class distance D1 and inter-class distance D2 above each picture to better illustrate the distributions. } \label{fig:feature points} \end{figure*} \subsection{Heat-Map of $\gamma$ and $\gamma^\alpha$} To further analyze how the attention mechanism affects the meta-learner, we visualize the heat-maps of $\gamma$ and $\gamma^\alpha$ in Fig.\ref{fig:att_heat map}. To get the heat-map of $\gamma$, we first inner-update the RAML meta-learner on the support set of a randomly generated 5-way 1-shot testing task on MiniImagenet. Then, we feed the meta-learner with the query set images and average the feature maps $\gamma$ across the channel axis to get the heat-maps of $\gamma$. The heat-maps of $\gamma^\alpha$ are obtained similarly.
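The channel-averaging step used to obtain the heat-maps can be sketched as follows (illustrative; `feature_map` stands for a single $\gamma$ or $\gamma^\alpha$ tensor of shape channels $\times$ height $\times$ width, and the rescaling to $[0,1]$ is an assumption for visualization):

```python
import numpy as np

def heat_map(feature_map):
    """Average a (C, H, W) feature map across the channel axis and
    rescale the result to [0, 1] to obtain a heat-map."""
    hm = np.asarray(feature_map, dtype=float).mean(axis=0)
    lo, hi = hm.min(), hm.max()
    return (hm - lo) / (hi - lo) if hi > lo else np.zeros_like(hm)
```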
From the heat-maps shown in Fig.\ref{fig:att_heat map}, we can see that, compared with $\gamma$, $\gamma^\alpha$ is more sensitive to the distinguishable part of the input image, revealing that the meta-learner shifts its attention to the most discriminative image feature. For example, the first column of Fig.\ref{fig:att_heat map} shows a fish. Besides the fish body, $\gamma$ is also sensitive to some background regions of the image. However, the meta-learner discovers that only the fish body is the crucial feature for categorizing this image and shrinks its attention region, so that $\gamma^\alpha$ is sensitive only to the fish body. Through the visualization and analysis of the heat-maps of $\gamma$ and $\gamma^\alpha$, we can see that the attention mechanism helps the meta-learner focus on the most distinguishable image feature, and further helps it perform better on few-shot learning tasks. \begin{figure} \centering \includegraphics[width=1\columnwidth]{att1.pdf} \caption{Some images sampled from the query set of a 5-way 1-shot classification task, and the corresponding heat-maps of $\gamma$ and $\gamma^\alpha$.} \label{fig:att_heat map} \end{figure} \section{Conclusion and Future Work} In this paper, inspired by the human cognition and learning process, we identify the importance of the attention mechanism and prior-knowledge for meta-learning based few-shot learning. To solve a few-shot learning task, the meta-learner should first make good use of stable prior-knowledge to understand images and extract compact feature representations of them, so that it can solve the task in the compact representation space rather than the original image space. Then, the meta-learner should adjust its attention to the crucial part of the extracted feature representations, and make the final decision based on its attention.
Therefore, we progressively propose three methods, AML, RAML, and URAML, to introduce the attention mechanism and prior-knowledge into meta-learning. All of them work successfully, with state-of-the-art performance on several few-shot learning benchmarks, which indicates the rationality of our viewpoints and methods. Besides, we find that existing meta-learning approaches suffer from the TOF problem, which is unfriendly to practical applications. We design a novel Cross-Entropy across Tasks (CET) metric to evaluate how much a meta-learning method suffers from TOF. The experiment shows that, compared to existing meta-learning methods, the proposed methods suffer less from the TOF problem, especially RAML and URAML. Among all the proposed methods, although URAML does not perform best, we consider it the most promising one, because there is still large room to improve its performance, which will also be the direction of our future work. From the ablation study, two approaches seem able to improve the performance of URAML significantly. One is to further develop the unsupervised or self-supervised learning algorithm. RAML performs better than URAML, revealing that current unsupervised learning algorithms still fall behind supervised learning. Bridging the gap between unsupervised and supervised learning algorithms is very likely to boost the performance of URAML. The other is to use more unlabeled data for URAML to learn prior-knowledge. Although 7.1 million unlabeled images are used in URAML, this still falls dramatically behind the images that humans have ever seen, in terms of both quantity and quality. As for quantity, we assume that, if a person watches 1 image per second and keeps watching 15 hours per day, he or she can see 100 million images in 5 years.
As for quality, humans see the world in a multimodal way: a human can not only see an object but also touch it and move around it, which helps humans understand the world more accurately than computer vision systems do. In short, developing unsupervised or self-supervised learning algorithms and collecting more unlabeled images will both help URAML perform well. \section*{Acknowledgements} This work is supported by the National Natural Science Foundation of China (No. 61573286). \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Methods for survival, or time-to-event, analysis are frequently used in epidemiological and clinical studies of human health. The more than 30,000 PubMed citations for the Cox proportional hazards model alone attest to the critical role of such methods in modern health research. Most observable health outcomes, such as disease onset, progression, cure or death, are the result of the evolution of relevant biological systems, driven by a natural aging process or by the effects of exposures and treatments that may accumulate over time; hence a time-to-event paradigm provides a natural framework for their analyses. Accordingly, biostatisticians working in medical research are very likely to encounter problems requiring time-to-event analyses, even if their training and interests lie in different areas of statistical research. Time-to-event data typically feature particular challenges related to, among other things, censored observations and changes over time in the absolute and/or relative risks, as well as in the values of the predictors. To further complicate matters, there are several issues in survival analysis for which no clear consensus or published guidelines exist. The lack of clear guidance on how to address these challenges may explain why many published applications involving survival analysis have important weaknesses \citep[e.g.][]{Altman95}. These considerations motivated us to create the `Survival Analysis' topic group within the STRATOS (STRengthening Analytical Thinking for Observational Studies) initiative. The over-arching aim of the STRATOS initiative is to provide guidance for accurate and efficient analyses in different areas of statistics relevant for observational (non-randomized) studies \citep{Sauerbrei14}.
The current paper reflects the discussions within our STRATOS topic group (TG8), and presents the first step toward a coherent approach to real-life applications of survival analyses based on intensity (or `hazard') models. In particular, we discuss fundamental assumptions, outline the basic steps necessary to ensure that the analysis appropriately uses the data at hand to address the substantive research question. We also discuss some pitfalls and ways to avoid them, point out some subtle complexities that may arise in applications, and suggest how the basic methodology may be adapted or extended to address these additional issues. In many observational cohort studies, interest lies in the occurrence of a particular \emph{event} among subjects with a given condition (i.e. those `exposed') and among those without the condition (`unexposed'), and the goal may be to compare the pattern of event occurrence. In other settings factors of interest may evolve over time as exposure changes with varying treatments. Consider a register study of the association between exposure to the drug lithium and the incidence of dementia \citep{Kessing08} (later referred to as the lithium and dementia study). In this study, the event is a hospital diagnosis of dementia and the levels of exposure correspond to different numbers of redeemed prescriptions (0, 1, 2, ...) of the drug lithium. The lithium and dementia study will be used to set examples throughout the first sections of the paper while other studies and data sets will be used for illustrations in Section \ref{sec:examples}. There, for example, we study the risk factors for all cause mortality in women with ovarian cancer and discuss more complex clinical cohorts and raise the issue of what can (and cannot) be reliably estimated from the samples in the studies of patients with non-alcoholic fatty liver disease (NAFLD) and peripheral arterial disease (PAD). 
The common situation for the examples is depicted in Figure \ref{fig:statediagram}. In such settings there is a time axis usually measuring the time from the origin of the process or some other natural starting point that may be defined by the context. We denote this by $t$ and presume it is measured on a common scale for all individuals of interest. The left-side graph refers to the simplest situation, where death of any cause is the only event of interest as in the ovarian cancer study. There the time origin would most naturally be the age of diagnosis with ovarian cancer. The subjects in the \emph{state} denoted as `0' are alive (with ovarian cancer) and, thereby, \emph{at risk} of experiencing the event and the state corresponding to this event is denoted `1'. The right-side graph refers to a more general situation, where the event of interest may be part of a larger system containing aspects that may or may not be relevant for the study. In that case, the event may occur via other states than `0', other events (transitions) may happen to subjects in state 0, thereby preventing the occurrence of the event (competing risks), and subjects may or may not return to state 0 after the occurrence of the event (dashed line). For subjects in state 0, there is a probabilistic \emph{intensity}, say $\lambda(t)$ (or $\lambda_{01}(t)$), governing event occurrence and this intensity is often the primary target for statistical modelling. An important point to note in connection with such studies is that the event of interest will typically not be observed for all subjects. This incomplete information could be caused by different mechanisms, including being event-free at end of follow-up (i.e., still in state 0 when the study ends or at an earlier time due to loss of follow-up) or experiencing a competing event (i.e., leaving state 0 to another state than 1). We will discuss the corresponding concepts of \emph{censoring} and \emph{competing risks} in detail in what follows. 
\begin{figure} \includegraphics[width=15cm, height=6cm]{figure1-1} \caption{Representation of multi-state processes and transitions between states. On the left is the simplest case of survival until death from any cause, on the right a more general situation where the transition of interest (from state 0 to state 1) is part of a larger multi-state model.} \label{fig:statediagram} \end{figure} We discuss survival analysis using intensity models on data from cohort studies like those described in the examples above. We will emphasize that there may be different types of scientific questions to be addressed in such observational cohort studies, and that the analysis should be properly targeted to those questions. Nonetheless there are a number of special features of survival data arising from such studies of which investigators should be aware. In Section \ref{sec:basics}, we will discuss such features with examples and give recommendations. We will also discuss potential pitfalls connected with such analyses and how to avoid them. In Section \ref{sec:hazmod} we will focus on models for the intensity $\lambda(t)$. We will see that, for such an analysis, a more detailed description of other (`competing') events that subjects may experience while being at risk for the event of interest may not be needed. However, an important point will be that even though such intensity models may suffice for addressing some questions, one will have to go beyond intensities to deal with other questions, including estimation of the absolute risk of experiencing the event and also various \emph{causal} questions. This is the focus of Sections \ref{sec:beyond} and \ref{sec:causal}. Section \ref{sec:examples} illustrates the relevant issues and methods through analysis of the above examples, and the paper is concluded by a brief discussion in Section \ref{sec:disc}. 
\section{Getting the basics ready} \label{sec:basics} This section introduces the notation to be used throughout the paper and defines the intensity. We also give a checklist of items that are important to consider in observational time-to-event studies. \subsection{Notation} \label{sec:notation} We assume that the following data can be available for subject $i$ in a sample of $n$ independent individuals, $i=1,...,n$: \begin{itemize} \item The follow-up time $T_i$, i.e., the time (relative to the chosen time origin) at which the subject exited from the study. \item The indicator variable $\delta_i$, indicating whether or not, at time $T_i$, the event of interest occurred ($\delta_i=1$ for event and 0 otherwise). \item A time $V_i$ (relative to the chosen time origin, with $V_i<T_i$) at which the subject entered the study. Thus, if the subject was included already at the chosen time origin then $V_i=0$, but $V_i>0$ (\emph{delayed entry}) is possible. \item A vector of covariate values $Z_i(t)=(Z_{i1}(t),...,Z_{ip}(t))$, some of which may be time-dependent while others may be time-fixed (baseline). \end{itemize} Two common approaches exist for notation in survival data. A traditional way to describe the survival data is the vector $(T_i,V_i,\delta_i,Z_i(t))$, and this works well for time to all-cause death, the simplest classic example of survival data (Figure \ref{fig:statediagram}, left panel). Alternatively, one can view each subject as an evolution over time, leading to the counting process notation $(Y_i(t), N_i(t), Z_i(t))$. This notation also allows more complex situations (as implied by the right graph of Figure \ref{fig:statediagram}) to be studied in a straightforward fashion and will hence be used throughout the paper. Here, the process $Y_i(t)$ is equal to 1 while a person is known to be at risk and 0 otherwise, i.e. $Y_i(t)=I(V_i<t\leq T_i)$, and the process $N_i(t)$ denotes the number of events by time $t$, here simply $N_i(t)=I(T_i\leq t, \delta_i=1)$.
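As an illustration, the counting-process quantities $Y_i(t)$ and $N_i(t)$ can be evaluated directly from $(V_i, T_i, \delta_i)$ (a minimal sketch, not part of the original notation):

```python
def at_risk(t, V, T):
    """Y_i(t) = I(V_i < t <= T_i): 1 while subject i is known to be at risk."""
    return 1 if V < t <= T else 0

def n_events(t, T, delta):
    """N_i(t) = I(T_i <= t, delta_i = 1): number of observed events by time t."""
    return 1 if (T <= t and delta == 1) else 0
```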
Defining $dN_i(t)=N_i(t)-N_i(t-)$, observation of the event at time $t$ can then be represented by $dN_i(t)=1$, i.e. the counting process for $i$ counts +1 at that time. \subsection{Preliminary concepts and issues} \label{sec:check1} The most important aspect of any analysis is to first think carefully about ($i$) what question(s) we want to answer, and ($ii$) whether and how the data at hand will be sufficient to answer them. With respect to the latter ($ii$), important issues are the source of the data, what population it represents, what variables are relevant and which among these are available, and data completeness, both with respect to inclusion of subjects and missing data for those that are included. We can then proceed with a more technical checklist: \begin{itemize} \item \textit{Time origin}: The follow-up time $T_i$ is measured from a meaningful starting point of the process ($t=0$), which should be unambiguously defined, comparable between individuals, and ideally clinically relevant. Typical examples include age, time since diagnosis or time since treatment initiation. The choice of time origin should depend on the scientific questions. In the lithium and dementia study, the time origin was defined as the date of the start of the first lithium prescription for a given patient. Patients who started lithium treatment after 1996 are included in the study. In the PAD study, the patients with peripheral artery disease were identified at a visit to their physician. While the onset of symptoms may be observable, the exact time of disease onset cannot be known. The time origin does not always correspond to any `clinically meaningful' date but to an administrative date of the `start of the follow-up', so the study results must be interpreted accordingly.
In these cases age (as a major determinant of many health outcomes) may be the appropriate time axis, and it is increasingly used in population-based studies investigating development of diseases. If the underlying risks change systematically with e.g. time since diagnosis, then this will be the preferable time axis. The definition of the time origin determines the primary time axis. In the lithium and dementia study, the time of interest is the time from treatment initiation to dementia onset. As the time origin is the time of lithium treatment initiation, all patients are at risk at time $t=0$, i.e. $V_i=0$ and $Y_i(0)=1$ for all $i$. The study also addressed a second research question comparing the lithium treated patients to the general population, for which a sample from the general population was followed from 1 January 1995 onwards. In the general population, the time since lithium initiation of course cannot be defined and, thus, to answer the second research question, one should use age as the time axis instead, for both the lithium treated and the general population subjects. This is an example of delayed entry for all individuals: everyone has $Y_i(0)=0$ since no one is included in the study at birth, and the $Y_i(t)$ value switches to 1 at the age at which the individual was included in the study. Accounting for the delayed entry is necessary here to avoid so-called \emph{immortal time bias} \citep{suissa-AJE2007,anderson}, see Section \ref{sec:timebias}. To address this important question in the study, \emph{multiple time axes} are needed -- both time since lithium treatment initiation and age. This is a common feature in epidemiological follow-up studies. \item \textit{Inclusion criteria}: The inclusion criteria for individual $i$ must be met by the time that the patient is declared to enter the study, i.e., at time $V_i$ when $Y_i(t)$ first becomes 1.
Suppose we wished to analyse the survival time of the general population in the lithium and dementia study: one cannot restrict the analysis to individuals who were never treated with lithium throughout the study. While this information may be known at the date of data analysis, it was not known when the individuals were included. If the individuals who were treated with lithium at a certain point later in the study were excluded from the study, this would imply that patients with a shorter time to event (and hence less time to get lithium treatment) were more likely to be included, and the survival of this general population group would be underestimated. On the other hand, if `ever treated' individuals from the general population were included and were considered to be a part of the patient group throughout the study, this would in turn imply an overestimation of patients' survival. This is another example of immortal time bias. A correct way to analyse these data is to regard the exposure (lithium treatment) as a time-varying covariate; we return to this in Section \ref{sec:timebias}. \item \textit{Event definition}: The time of event occurrence must be clearly defined. In the PAD study several events are of interest. The time of death or the time of a major cardiovascular event (stroke, infarction) are examples of well-defined events with an exact date typically known. Other types of events, such as a revascularisation procedure, are slightly more subjective in nature, since this is an operation that has been scheduled following the doctor's evaluation of the patient's state of the disease. Judgement is therefore involved and, moreover, the date when it took place depended not only on the stage of the disease but also on the ability to schedule the procedure. Another example of an event with some ambiguity is the diagnosis of dementia.
Here, one knows that the onset of dementia has happened between two consecutive visits, but it is impossible to know exactly when. This is referred to as interval-censoring. This could be further complicated if not all patient visits were scheduled with the same frequency, whereby some patients may be diagnosed earlier than others. In the lithium exposure study, these problems were avoided by \emph{defining} the event as the first hospitalization leading to a diagnosis of dementia (which may not coincide with the actual onset of the disease and must thus be interpreted accordingly). The decision whether the event of interest has already occurred must be known at the time when $N(t)$ switches to 1. A typical example occurs with validated endpoints. An endpoint such as diabetes might be mis-coded, for instance, and investigators will often require `two diagnoses at least 30 days apart' as proof. An error occurs when the date of diabetes is backdated to be that of the first instance and, therefore, such an event definition would depend on something happening in the future. \item \textit{Censoring}: The goal of survival analysis is to estimate quantities relating to a complete, i.e. uncensored, population. A basic assumption in the estimation is that the information that a patient has \emph{not been censored} at a certain time point does not carry any information about his or her prognosis beyond that time point. We need this assumption since we shall regard the patients at risk at a certain point $t$ ($Y_i(t)=1$) as a representative sub-sample of all the patients that would be at risk if there were no censoring. The assumption is referred to as \emph{independent censoring}. The assumption can be weakened to \emph{conditionally} independent censoring, i.e. independent censoring within a group of patients with a certain set of characteristics, defined by the covariates available in the study database. Administrative censoring, i.e.
censoring at a certain calendar time due to the end of study, is a common example of independent censoring, as the censoring mechanism is not related to individual patient prognosis. However, if the patient prognosis has improved through the calendar time covered by the study, the patients diagnosed later have a better prognosis and are also censored earlier, so the independent censoring assumption is not met. On the other hand, the censoring pattern is conditionally independent given the period of diagnosis, so inclusion of this covariate in the model will avoid potential bias (provided the model is correct). It is, thus, important to consider what causes censoring in any given follow-up study. Is it mostly administrative censoring, i.e., being event-free at end of planned follow-up, or are there drop-outs? While the former can often be taken to be independent, more suspicion should be exercised for the latter and, ideally, it should be noted in the data set \emph{why} any given subject was censored. In some studies one considers deaths due to causes unrelated to the disease in question as censoring, e.g. regarding the patients who die from non-CV causes in the PAD study (causes that were not of the main interest in this study) to be equivalent to those who were administratively censored. This is not a formal violation of the definition of independent censoring; rather, it is inconsistent with the definition of our population of interest. This is because the competing risk (death from non-CV causes) is present also in the complete population and we usually are not interested in the population where this risk would be eliminated. We address this situation in the Competing risks section (Section \ref{sec:comprisk}). \end{itemize} \subsection{The intensity} \label{sec:intensity} We now return to the concept of the \emph{intensity} function. Once it has been established who is at risk, what is the event and how subjects should be aligned over time (i.e.
how time $t$ is defined), one can define the \emph{intensity} for subject $i$ as: \begin{eqnarray} \lambda_i(t)&\approx& P(\mbox{event in }(t,t+dt)\mid \mbox{past at time }t-)/dt \label{eq:int}\\ &=& P(dN_i(t)=1\mid H_i(t-))/dt,\nonumber \end{eqnarray} where $H_i(t-)=(N_i(s), Y_i(s), Z_i(s), s<t)$ summarizes the past information for subject $i$ that is available just before time $t$. (More formally: $\lambda_i(t)=\lim_{\Delta t\rightarrow 0}P(dN_i(t)=1\mid H_i(t-))/\Delta t$.) For \emph{survival data} involving only states 0 and 1 as in the left panel of Figure \ref{fig:statediagram}, the \emph{hazard function} is given by: \begin{equation} A(t)=-\frac{d\log S(t)}{dt}, \label{eq:hazard} \end{equation} where the \emph{survival} function $S(t)$ is $P(T^*>t)$ and $T^*$ is the uncensored -- and incompletely observed -- time to death. When no time-dependent covariates are considered in $H_i(t)$, the intensity in (\ref{eq:int}) is simply given by the hazard in (\ref{eq:hazard}) and, for that reason, we will use the terms \emph{intensity} and \emph{hazard} interchangeably in this paper. While the hazard function for survival data (as seen in equation (\ref{eq:hazard})) is in one-to-one correspondence with $S(t)$, and thereby with the cumulative risk $F(t)=1-S(t)$, there may more generally be other events competing with the event of interest, in which case $A(t)$ is a transition hazard (or cause-specific hazard) from state 0 to state 1 (and it is sometimes denoted $A_{01}(t)$ to emphasize this). In that case, a one-to-one correspondence between the single cause-specific hazard and the absolute probabilities no longer exists; see, e.g., \cite{PKA-IJE}. There are several reasons why the intensity function plays a central role in survival analysis and analysis of cohort studies.
\begin{itemize} \item The idea is that subjects are followed over time and, at each time $t$ where the subject is still observed to be at risk, it is asked, given the information that is available so far for the subject, what is then the probability per time unit that the subject experiences the event in the next little time interval from $t$ to $t+dt$? Thus, the intensity gives a dynamical description of how events occur over time and, in this description, all aspects of the \emph{past} observed for the subject up to time $t$ may be taken into account, such as (time-dependent) covariates and, possibly, previous events. \item For survival data, the survival probability $S(t)=P(T^*>t)$ cannot be estimated in a straightforward way as a simple proportion due to censoring. The hazard function, however, can still be studied since it relies exclusively on the information on the individuals still at risk (assuming the independent censoring assumption to hold). \item One of the greatest strengths of describing the data via the intensity is that the past of a subject could include not only the \emph{baseline covariates}, i.e. information available at time of entry into the study but also information contained in \emph{time-dependent} covariates that are updated during follow-up. Time-dependent covariates have a very natural connection to clinical practice: when assessing a long-term patient, a physician will naturally use the most recent measurements in addition to those collected a long time ago at their initial visit. Time-dependent covariates thus allow accounting for the changes over time in the relevant variables (patient characteristics, exposures, treatment), which may alter the intensity. 
In the lithium and dementia study, the exposure itself (number of lithium prescriptions redeemed until the current time) is time-dependent and, when including the unexposed control group of non-treated subjects, people will change status from being unexposed to belonging to the exposed group at the time of their first lithium prescription. Unfortunately, misunderstandings and errors in creating time-dependent covariates are one of the most common sources of immortal time bias; we thus pay special attention to this issue in Section \ref{sec:timebias}. Time-dependent covariates also create issues with model prediction, which will be discussed in Section \ref{sec:Prediction}. \item As the intensity is a dynamic description of the data-generating process over time, delayed entry is naturally taken into account. \item Most often, the hazard depends on patient characteristics, so that \emph{covariates} need to be taken into account when analyzing data from cohort studies. In this context, as discussed in Section \ref{sec:check1} above, an advantage of using a hazard regression model is that it `corrects for non-independent censoring' in the sense that regression coefficients are estimated consistently even if censoring depends on covariates, as long as these covariates are included as predictors in the hazard model (e.g., \cite{ABGK-book}, Section III.2). \end{itemize} We argue that the intensity may be of interest in its own right and that it, therefore, could be an obvious target for an analysis of the cohort data. However, as previously mentioned, there are important scientific questions for which the intensity alone will be insufficient. We will return to that in Sections \ref{sec:beyond} and \ref{sec:causal}. \section{Hazard models} \label{sec:hazmod} In order to make a \emph{model} for the intensity, one needs to specify how it depends on time $t$ and on the available information in $H_i(t)$, more specifically: how it depends on the covariates.
Before specifying a hazard model, descriptive analysis should be conducted to explore the data. The \emph{Kaplan-Meier estimator} for the whole cohort, or for sub-groups, will provide useful insights in the case of a single (possibly composite) terminal event; see the left panel of Figure \ref{fig:statediagram}. However, this will not be the case in, e.g., the lithium and dementia study where, obviously, all-cause mortality is a competing risk for dementia, the event of interest. To describe the \emph{hazard}, one could divide the time variable of interest into suitable intervals (e.g., yearly intervals) and calculate the \emph{incidence} (or `occurrence-exposure') rate for each interval by dividing the total number of events of interest observed in the interval by the total time at risk in the interval. This corresponds to an assumption of a piecewise constant hazard function, an assumption that is often a reasonable approximation and which is used for \emph{Poisson regression}, discussed further below. Estimating the hazard \emph{non-parametrically} requires some sort of smoothing. What may be estimated in a simple non-parametric fashion for a defined population of subjects is the \emph{cumulative hazard} $\Lambda(t)=\int_0^t A(u)du$. This may be done using the \emph{Nelson-Aalen} estimator \begin{equation} \widehat{\Lambda}(t)=\int_0^t\frac{dN(u)}{Y(u)}=\sum_{i:T_i\leq t}\frac{\delta_i}{Y(T_i)} \label{eq:NAa} \end{equation} where $Y(t)=\sum_i Y_i(t)$ and $N(t)=\sum_i N_i(t)$. This is an increasing step function with steps at each observed event time, the step size at $t$ being inversely proportional to the number $Y(t)$ at risk; Figure \ref{fig:nafld2_na} shows an example. Pointwise confidence limits may be added (e.g., \cite{ABGK-book}, Chapter IV). The approximate `local slope' of $\widehat{\Lambda}$ at $t$ reflects the hazard at that time.
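The Nelson-Aalen estimator (\ref{eq:NAa}), allowing for delayed entry, can be sketched as follows (an illustrative sketch only; established implementations exist in standard survival analysis software):

```python
import numpy as np

def nelson_aalen(V, T, delta):
    """Nelson-Aalen estimate of the cumulative hazard with delayed entry:
    at each observed event time t, add (number of events at t) divided by
    (number at risk just before t, i.e. subjects with V < t <= T).
    Returns a list of (event time, cumulative hazard) pairs."""
    V, T, delta = map(np.asarray, (V, T, delta))
    event_times = np.sort(np.unique(T[delta == 1]))
    cumhaz, Lambda = 0.0, []
    for t in event_times:
        n_at_risk = np.sum((V < t) & (T >= t))
        d = np.sum((T == t) & (delta == 1))
        cumhaz += d / n_at_risk
        Lambda.append((t, cumhaz))
    return Lambda
```

The returned step function increases only at observed event times, with step size inversely proportional to the risk set size, matching the description above.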
However, the value of the cumulative hazard does, in general, not have a simple interpretation (an exception is studies of \emph{recurrent} events, a topic beyond the scope of this paper \citep{cook-lawless-book2007}). \subsection{Proportional hazards models} \label{sec:Cox} Most applications of intensity models for cohort studies aim at relating the rate of event occurrence to (time-fixed or time-dependent) \emph{covariates}. The complexity of such a hazard model will depend both on the scientific question it is meant to address and on what information is available. Here, it is important to notice that hazard models, as any other statistical model, can typically not be expected to be `correct' in any strict sense, but they may still be sufficiently flexible to give a sensible answer to the question raised. Survival analysis and, thereby, also analysis of cohort studies is dominated by the Cox model. In his breakthrough paper, \cite{Cox72} introduced maximum partial likelihood as a method to estimate the regression coefficients in the proportional hazards (PH) model. The general definition of the proportional hazards model is \begin{eqnarray} A_i(t)=A(t|Z_i(t))=A_0 (t)\exp(Z_i(t)^\top\beta). \label{eq:PHmodel} \end{eqnarray} In the Cox PH model, the \emph{baseline hazard} $A_0(t)$ (i.e., the hazard for individuals with all covariates equal to 0) is left unspecified; other alternatives for the PH model are considered at the end of this section. The name `proportional hazards' refers to the, possibly strong, assumption that the ratio of the hazards corresponding to two different values of $Z_i(t)$ is the same for all times $t$. This constant hazard ratio is $\exp(\beta)$. In this model, the regression parameter(s) $\beta$ are estimated by maximizing the log partial likelihood \begin{equation} pl(\beta) = \sum _{i=1}^n \int_0 ^\infty \log \left( \frac{\exp(Z_i(t)^\top \beta)}{\sum_{j} Y_j(t) \exp(Z_j(t)^\top \beta)} \right)dN_i(t).
\label{eq:CoxPL} \end{equation} Once the covariate effects $\beta$ have been estimated, the cumulative baseline hazard $\Lambda_0(t)=\int_0^t A_0(u)du$ can be estimated by the Breslow estimator \citep{Breslow74} \begin{equation} \widehat \Lambda_0 (t |\widehat \beta) =\int_0^t { { d{N} (u)} \over { \sum_i Y_i(u) \exp(Z_i(u)^\top \widehat{\beta})}}. \label{eq:breslow} \end{equation} With no covariates ($\beta=0$) the Breslow estimator is simply the Nelson-Aalen estimator (\ref{eq:NAa}). The partial likelihood may be seen as a profile likelihood resulting from eliminating the baseline hazard from a joint likelihood including both $\beta$ and $A_0(t)$. It is important to notice that this joint likelihood is valid both for all-cause survival data (left panel of Figure \ref{fig:statediagram}) and for the more general situation depicted in the right panel of that figure. This is because the likelihood for the whole system \emph{factorizes} into a number of factors, each depending on a separate transition in the model \citep{Kalbfleisch02, ABGK-book}. Also, the Cox partial likelihood (\ref{eq:CoxPL}) enjoys the properties of standard likelihood functions such that standard errors and test statistics may be obtained in the `usual way' \citep{Cox-75, andersen-gill-AS1982}. We will use these features in the Examples section \ref{sec:examples}. If more time axes are relevant in a given study then, using the Cox model, one of these must be selected as the baseline time axis $t$. Other time axes (e.g., current age if $t$ is time since disease onset) can then be included as time-dependent covariates. This type of time-dependent covariates is said to be \emph{external} (or \emph{exogenous}) because they `exist' whether or not the subject is still under observation. On the other hand, time-dependent covariates such as current blood pressure or current cholesterol level (that can only be ascertained for subjects still under observation) are \emph{internal} (or \emph{endogenous}). 
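For concreteness, the log partial likelihood (\ref{eq:CoxPL}) and the Breslow estimator (\ref{eq:breslow}) can be sketched in a few lines for the simplified case of time-fixed covariates (an assumption made here for brevity; the function names are ours, and in practice one would maximize the partial likelihood numerically with established software):

```python
import numpy as np

def cox_neg_log_pl(beta, times, events, Z):
    """Negative log partial likelihood (cf. eq. CoxPL) for
    time-fixed covariates Z, an (n, p) array."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    eta = Z @ np.atleast_1d(beta)          # linear predictors Z_i' beta
    ll = 0.0
    for i in np.flatnonzero(events):       # sum over observed events
        risk_set = times >= times[i]       # Y_j(T_i) = 1
        ll += eta[i] - np.log(np.sum(np.exp(eta[risk_set])))
    return -ll

def breslow(beta, times, events, Z, t):
    """Breslow estimate of the cumulative baseline hazard at t."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    eta = Z @ np.atleast_1d(beta)
    H = 0.0
    for i in np.flatnonzero(events):
        if times[i] <= t:
            risk_set = times >= times[i]
            H += 1.0 / np.sum(np.exp(eta[risk_set]))
    return H
```

As noted in the text, with $\beta=0$ the `breslow` function reduces to the Nelson-Aalen estimator (\ref{eq:NAa}): each event contributes $1/Y(T_i)$.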
The completely unspecified baseline in the Cox model makes it quite flexible; however, a limitation of this non-parametric model component is that it only allows direct estimation of the \emph{cumulative} baseline hazard $\Lambda_0(t)$, but fails to produce an estimate of the hazard $A_0(t)$ itself. To obtain an estimate of $A_0(t)$, some smoothing would be required. The alternative to letting the baseline hazard remain unspecified in the model is to fit a fully parametric proportional hazards model; some of the common options are: \begin{itemize} \item The simplest (and most restrictive) option is to assume a \emph{constant} baseline hazard corresponding to an exponential distribution of $T^*$ in the case of all-cause survival data. \item A useful extension of the model above is the \emph{piecewise exponential} model that divides the time-range into intervals on which the baseline hazard is constant. Here, cut-points $0=s_0<s_1<\dots<s_{K-1}<s_K=\infty$ for the time axis are selected and it is assumed that $A_0(t)=\lambda_{j0}$ when $s_{j-1}\leq t<s_j$. This is often referred to as the \emph{Poisson} (or piecewise exponential) regression model for survival data and it is frequently used in epidemiological studies. It tends to produce results that are very close to those obtained using the Cox model. An advantage is that the baseline hazard is fully parametric and yet flexible. Further advantages are that a potentially large (e.g., registry-based) data set can be pre-processed into tables of \emph{event counts} and \emph{person-time at risk} according to the chosen time intervals and to (discrete-valued) covariates and, furthermore, that \emph{multiple time axes} are very easily handled by splitting event counts and person-years at risk simultaneously according to all time axes \citep{CH-book}.
Drawbacks of the piecewise exponential model include the fact that the intervals must be selected and that this choice to some extent may affect the detailed results and, furthermore, that it does not produce a \emph{smooth} hazard function. \item A smooth extension of the constant baseline hazard model is the Weibull model, where $A_0(t)=\lambda \gamma(\lambda t)^{\gamma-1}$. The extra parameter $\gamma$ allows some flexibility, but assumes a monotone baseline hazard function and the model is not flexible around $t=0$. To allow greater flexibility and obtain a smooth baseline hazard one may use flexible parametric models for $A_0(t)$, e.g., via splines in combination with penalized likelihood \citep{joly-etal}. \end{itemize} \subsection{Alternatives to proportional hazards models} \label{sec:alternat} The PH assumption is strong and may often not fit the data well for the entire time range studied. An extension of the Cox model, to relax the PH assumption, is to allow the covariates $Z(t)$ to have \emph{time-varying} effects, i.e., assume that the hazard is given by \begin{equation} A(t|Z_i(t))=A_0(t)\exp(Z_i(t)^\top\beta (t)). \label{eq:timevarcoeff} \end{equation} Here, explicit interactions between covariates and functions of time may be introduced, e.g., by defining a model with $\beta(t)=\sum _j \gamma_jf_j(t)$ (for a set of pre-specified functions $f_j(t)$ containing $f_0(t) \equiv 1 $) for each component of $Z$. The simplest example is to split time into two intervals (splitting at time $\tau$, say) and assume proportional hazards within each. This corresponds to choosing $f_0(t)=1\mbox{ and }f_1(t)=I(t\geq \tau)$. Alternatives include the use of splines \citep{royston}. Some care is needed here since the number of parameters in such models with time-varying effects can become quite large and there is a danger of overfitting. 
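The two-interval example above is typically implemented by splitting each subject's at-risk record at $\tau$, so that $Z\cdot I(t\geq\tau)$ becomes an ordinary covariate on (start, stop] intervals that standard software can handle. A sketch, assuming a hypothetical record format of (start, stop, event, z) tuples:

```python
def split_at_tau(records, tau):
    """Split (start, stop, event, z) records at tau so that a model
    with covariates z and z*I(t >= tau) can be fitted with standard
    (start, stop]-style survival software.

    Returns (start, stop, event, z, z_late) tuples, where z_late
    carries the interaction z * I(t >= tau).
    """
    out = []
    for start, stop, event, z in records:
        if start < tau < stop:
            # part before tau: artificially censored at tau
            out.append((start, tau, 0, z, 0.0))
            # part from tau on: carries the original event indicator
            out.append((tau, stop, event, z, z))
        else:
            late = 1.0 if start >= tau else 0.0
            out.append((start, stop, event, z, z * late))
    return out
```

The same episode-splitting mechanics is also what turns an `ever treated' variable into a properly time-dependent exposure (cf. Section \ref{sec:timebias}): the record is split at the subject-specific time of exposure change rather than at a common $\tau$.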
In the situation where the PH assumption needs to be relaxed for a single categorical covariate, the \emph{stratified Cox model} is useful. In this model, each level of that covariate has its own baseline hazard which is not further specified, i.e. \begin{equation} A(t|Z_i(t))=A_{0s}(t)\exp(Z_i(t)^\top\beta), \label{eq:stratCox} \end{equation} when subject $i$ belongs to stratum $s$. An alternative to the multiplicative Cox model is the \emph{additive hazards} (or \emph{Aalen}) \emph{model} \citep{Aalen89} \begin{equation} A(t|Z_i(t))=A_0 (t)+ Z_i(t)^\top\beta(t). \label{eq:Aalenmodel} \end{equation} In this model, both the baseline hazard $A_0(t)$ and the regression functions $\beta(t)$ are completely unspecified (like the baseline hazard for the Cox model) and their cumulatives $\int_0^t A_0(u)du, \quad\int_0^t \beta(u)du$ can be estimated using a least squares technique. Versions of the model where some or all $\beta(t)$ are time-constant are also available \citep{Scheike07}. A drawback of the model is that the estimated hazard can become negative while an advantage is that it is very flexible \citep{aalen-book2008}. A completely different approach is given by the accelerated failure time (AFT) model where the covariates are assumed to extend or shorten the survival time by a constant time ratio $\exp(\beta)$ \citep[e.g.][Ch. 7]{Kalbfleisch02} $$ S(t|Z) = S_0(\exp(-Z^\top \beta)t), $$ or equivalently: $$ \ln(T^*)=Z^\top\beta + \epsilon . $$ The model is a viable alternative to the PH model and although one could derive the hazard function for this model, it does not naturally fall under the heading of `hazard models'. Also, it is mostly used for survival data (Figure \ref{fig:statediagram}, left part) and less so in the general situation of Figure \ref{fig:statediagram}, right part, and it will not be considered further in this paper. Discussion of pros and cons of PH and AFT can be found in \cite{wei92} and \cite{Keiding97}. 
\subsection{A checklist when fitting the Cox model} \label{sec:checkCox} We next propose a checklist for the Cox model. Most of the items below are relevant also for other hazard regression models. We list the issues that one should be careful about both before fitting a model and after having performed an analysis. We add tests and approaches that can be helpful in understanding the sources of the problems and evaluating their extent. Note, however, that these checks are not conclusive; they serve only as an aid in thinking about the issues. Before fitting a model the following items should be considered: \begin{itemize} \item \textit{Checks on the covariates} to be included in the model: for continuous covariates, examine the distribution, check for extreme data (leverage) points, and make histograms. For categorical covariates, the frequencies of the categories should be reported, as should the choice of the reference categories. \item \textit{Check dates.} A trivial, but often relevant warning is that survival data often contain a series of dates, which may come in varying formats and are prone to typing mistakes. The fact that the dates follow each other in the proper sequence should thus be carefully checked. \item \textit{Investigate censoring.} As mentioned in the previous checklist, it is first of all important to think about what causes censoring. Next, plotting a `survival curve' estimating $C(t)=P(\mbox{no censoring before }t)$ (or its complement $1-C(t)$) could be done to give an impression of the proportion censored in time. Here, censoring is the `event' and a failure is a `censoring event' that prevents observation of the `event of interest'. Also, a Cox regression model with `censoring' as the event can help to check whether the censoring depends on any of the covariates under consideration.
If there are some variables that one may or may not include in the model (maybe they are not crucial for the question asked) then they should be included if they affect the censoring, since in this way the independent censoring assumption is relaxed to conditional independence, as discussed in Section \ref{sec:check1}. An important feature of hazard models is that they can be used exactly as described in Section \ref{sec:basics} by \emph{formally censoring for the competing events (including all-cause death)}. This is not a violation of the independent censoring assumption, the point being, as mentioned above, that the joint likelihood function for both the event of interest and the competing events \emph{factorizes} and the factor corresponding to the intensity for the event of interest has the same form as it would have had if competing events were regarded as censoring events \citep{Kalbfleisch02}. In such situations one should carefully consider if the (cause-specific) hazard for the event of interest properly answers the scientific question or whether one needs to go beyond this hazard model (see Sections \ref{sec:beyond} and \ref{sec:causal}). \item \textit{Time-dependent covariates.} When defining the model in (\ref{eq:PHmodel}), we assume that the $Z_i(t)$ are continuously measured and, thus, available at all times $t$, for which subject $i$ is at risk. A feature of the partial likelihood estimation method for the Cox model is that the values of time-dependent covariates are needed for everyone at risk at all the event times, cf. equation (\ref{eq:CoxPL}). Some extrapolation or other ways of predicting the value of a time-dependent covariate at event times based on \emph{past} observations $(Z(s), s\leq t)$ may be needed \citep{bycott-taylor}. In practice, most recently observed values of $Z(t)$ are typically carried forward until the next value is observed. 
However, such a last-value-carried-forward approach can induce some bias toward the null if the current hazard truly depends on the current (unknown) covariate value \citep{andersen-BIOSTATISTICS2003, de-bruijne-SIM2001}. A more advanced approach for internal time-dependent covariates which are not measured at all times uses joint longitudinal-survival models \citep{wulfsohn-tsiatis, tsiatis-davidian} to obtain estimates of $\beta$, and also allows the possibility that the observed $Z(t)$ is measured with error. Note that, for external time-dependent covariates, extrapolations are sometimes not needed since, e.g., current age can be calculated based on age at baseline. Covariates that change shortly before the endpoint should be viewed with particular suspicion. A common example is a change in medication in the last 1 or 2 weeks before death; such changes often occur when a patient enters terminal hospice care, for instance. The most serious examples of such `anticipation' involve \emph{reverse causality bias} where a change in $Z(t)$ occurs \emph{because of} early symptoms of the event of interest \citep{horwitz-feinstein}. In some applications it may therefore be more plausible that the current hazard depends on the past rather than the most recent value(s) of a time-dependent covariate, implying either lagged or cumulative effects that would require more complex modelling \citep{gasparrini, sylvestre}. \end{itemize} After having fitted a Cox model one should consider: \begin{itemize} \item \textit{Check proportional hazards and the functional form}. Two basic assumptions of the model are that the coefficients $\beta$ are time-fixed (PH assumption) and that the covariate effect is linear on the log hazard. Checking the PH assumption has developed into a large `industry' within survival analysis and giving a comprehensive review is beyond the scope of the present paper.
Among the many methods proposed (some of which will be illustrated in the Examples section \ref{sec:examples}) we mention those based on Schoenfeld and martingale residuals \citep{therneau00, Lin93}, graphical methods such as plots of the cumulative hazard \citep{ABGK-book}, or through estimates of $\beta(t)$ in a time-varying coefficient Cox model \citep{Scheike07}. For relaxing the linearity assumption, one may wish to use simple transformations like the logarithm or the square root or, alternatively, flexible modeling using, e.g., splines \citep{royston}. For continuous covariates, functional form (i.e., non-linear effects) should ideally be investigated jointly with assessing possible violation of the PH hypothesis (i.e., their `time-varying effects'). Indeed, a failure to account for a time-varying effect may induce `spurious evidence' of non-linearity and vice versa \citep[e.g.][]{Abrahamowicz07,Wynant14}. Another question is what to do if model assumptions seem to be violated. Here, the answer must depend on the consequences of the model violation. In a classical epidemiological `exposure-confounder' situation, if the assumptions do not hold for some of the \emph{confounders}, one may wish to perform a sensitivity analysis. Specifically, to relax the PH assumption, one can introduce time-varying effects $\beta(t)$ in the model (see (\ref{eq:timevarcoeff})) or use a stratified model; if the results for the exposure in the sensitivity analysis do not change materially, the assumption may not be problematic. If, on the other hand, the assumptions do not hold for the exposure, one should carefully think about the study question and then employ extensions of the basic model if needed. In such cases modelling the time-varying hazard ratio may yield important insights into the role of a given exposure or risk factor.
Note that violation of the PH assumption may sometimes be induced by a failure to include in the Cox model a strong predictor of the outcome \citep{schmoor, bretagnolle}. \item \textit{Reporting.} Users of the Cox model often report the regression coefficients, but not the baseline hazard. This means that measures like absolute risk cannot be retrospectively obtained from published reports. This is insufficient because the regression parameters figuring in the partial likelihood only give information about the hazard ratios, and the relevance and importance of the hazard ratios at any follow-up time depends on the concurrent values of the baseline hazard. The discrete nature of the estimated baseline hazards in the Cox model makes it hard to compare the hazards. For survival data, the estimated survival probability $\widehat S(t|Z)$ can be used to quantify the effects of $Z$. This is only possible if $Z(t)$ is a time-fixed or an external covariate (see Section \ref{sec:beyond}). Predicted survival curves for a population may be calculated by averaging over the observed covariate distribution (using the $g-$formula, see Section \ref{sec:causal}, equation (\ref{eq:g-formula})). \item \textit{Interpretation.} Three phenomena hamper the interpretation of the results (hazard ratios) of a Cox model: \begin{itemize} \item \textit{Noncollapsibility.} It is frequently seen that the effects, $\beta$, in a Cox model gradually decay with time toward 0. This happens when a covariate with an effect on the hazard is omitted, even if the true effect (i.e., given all relevant covariates) is perfectly constant over time and even if the omitted covariate is completely independent of the other covariates. Thus, if the correct model is $A(t)=A_0(t)\exp(\beta_1Z_1+\beta_2Z_2)$ then a reduced model $A(t)=\widetilde{A}_0(t)\exp(\widetilde{\beta} Z_1)$ cannot hold even if $Z_1$ and $Z_2$ are independent, so that $Z_2$ is not considered a confounder for $Z_1$ \citep{struthers1986}.
The non-collapsibility suggests that proportional hazards can only be seen as a working hypothesis allowing a simple structure. It can be noted that the logistic regression model for a binary response variable suffers from the same problem, while the additive hazards model does not. \item \textit{Competing risks.} The function obtained from the Cox model (or any other hazard model) using the formula $F(t|Z)=1-\exp(-\Lambda(t|Z))$ can only be interpreted as the risk of failure up to $t$ if there are \emph{no other causes of death}. If dying from other causes (competing risks) is handled as censoring, the resulting function will over-estimate the probability of the event of interest \citep{PKA-IJE}. The risk must instead be represented by a cumulative incidence function, see Section \ref{sec:beyond}. \item \textit{Lack of causal interpretation.} Suppose the estimated hazard ratio for a treatment variable changes over time in such a way that, before some time point $\tau$, it is less than 1 (suggesting a beneficial effect) and after $\tau$ it is equal to 1, i.e.: $$A(t)=A_0(t)\exp(\beta_1ZI(t<\tau)+\beta_2ZI(t\geq \tau))$$ with $\beta_1<0, \beta_2=0$. Even though this may be a correct model for the data it would be incorrect to assert that `treatment only has an impact before time $\tau$'. This is because the hazard does not provide a `causal contrast' \citep{hernan-EPIDEMIOLOGY2010, aalen-LiDA2015, torben-etal}. We will also elaborate on this point in Section \ref{sec:causal}. \end{itemize} \end{itemize} \subsection{Time-dependent covariates and immortal time bias} \label{sec:timebias} In the process of creating the data set it is all too easy, unfortunately, to ignore the fact that a covariate is time-dependent and treat it as time-fixed. This is a common source of `immortal time bias' and may be the single most prevalent reason for invalid survival analyses in the literature.
It is crucial that only information reflecting covariate values observed before time $t$ (i.e., from the `past' at $t$) is used to define the value of a variable $Z(t)$ at time $t$. Thus, even though later information pertaining to changes that occurred \emph{after} $t$ may be available to the investigator at the time of analysis, only information for a given subject that reflects changes that occurred before time $t$ can be included as part of that subject's past at time $t$. We give a couple of the most common examples with invalid $Z(t)$, but these do not nearly exhaust the possibilities. \begin{itemize} \item A common example is to group patients at time $t=0$ according to the use of drugs or treatments at any time during the follow-up even if many might have started their use only at some time after time $t=0$ (`ever treated' vs `never treated'). \\ E.g., in the lithium and dementia study, as mentioned above, one cannot let only the individuals that are never treated with lithium throughout the study serve as controls and thus create a time-fixed covariate `exposure'. While the information that an individual has not been treated throughout our follow-up is known at the time of data analysis, it could not be known when the patients entered the cohort. The exposure should, thus, be coded as a time-dependent covariate that starts with 0 for all individuals that were sampled from the general population, but may switch to 1 at a subject-specific time $t_i$ if subject $i$ did start lithium treatment at that time. The alternative, i.e., to treat all individuals sampled from the general population as un-exposed regardless of what happens to them later, will bias the comparison in the direction of a `protective effect' of lithium exposure, as explained in Section \ref{sec:check1} above.\\ Another classical example of the same problem is the Stanford heart transplant data \citep{crowley-hu}, which included patients who were eligible for heart transplant.
The event of interest was death and the focus was how the survival of transplanted subjects compares to that of the not transplanted. When the data were first analysed \citep{clark}, the patients were divided into two groups (`ever transplanted' vs. `never transplanted'), and group membership was wrongly represented as a time-fixed covariate. As a later, correct analysis showed, the original results suggesting that the transplant is beneficial were solely due to the immortal time bias. \item A similar situation occurs when studying a trait that develops in time (e.g., with time patients develop side effects or, with time, patients may respond to chemotherapy). Here, the value of their covariate starts as 0 and may become equal to 1 later. This automatically implies that individuals must survive at least some time to develop the trait; the early deaths are hence more likely to occur in patients without the trait. Considering the value of the covariate as time-fixed and wrongly coding it as 1 already at the start (ever developed a trait or ever had side effects) will mis-attribute a portion of event-free survival time from the `unexposed' (no trait yet) to the `exposed' group and, thus, underestimate the hazard ratio associated with having the trait. \item A further example is to model the total dose received during the entire follow-up period as a time-fixed variable. \cite{Redmond83} investigated a claim of \cite{Bonadonna81} that disease-free survival improved with increased total amount of drug received, and found it to be entirely due to immortal time bias because the patients who died early could not have accumulated high doses. The false result is not benign, since it would encourage providers to continue full-dose treatment in the face of dose-limiting toxicities, leading to increased morbidity, suffering, and possibly even death.
\end{itemize} In all of the above cases, creation of a well-defined time-dependent covariate where $Z(t)$ does not depend on any of $Z(s), N(s)$, or $Y(s)$ for any $s > t$ repairs the bias. \section{Prediction using hazard models} \label{sec:beyond} Even though the intensity discussed in previous sections provides a useful framework for statistical modelling it may be hard to explain the model results to the general public. Communication is usually easier in terms of the \emph{absolute risk}, i.e., the probability of the event occurring in some interval or, more generally, the probability of being in a certain \emph{state} by time $t$. For estimation of the absolute risk it now becomes crucial to consider if other (competing) events may occur, thereby preventing the event of interest from happening, see Figure \ref{fig:statediagram}. In general, \emph{all} transition hazards out of the initial state, 0, are needed to estimate absolute risks. \subsection{Prediction in the absence of competing risks} \label{sec:Prediction} In the case of no competing risks, there is only one hazard function in the model and the absolute risk for the interval from 0 to $t$ is obtained directly from that hazard: \begin{equation} F(t|Z)=1-S(t|Z)=1-\exp(-\Lambda(t|Z)) \label{eq:absrisk} \end{equation} provided there are no time-dependent covariates. Absolute risk is often used to describe survival or recurrence-free survival in clinical cohorts of patients treated for cancer or other life-threatening diseases. \subsubsection*{Prediction from $t=0$ onwards} This is relevant when the time origin is well-defined. In clinical cohorts it can be the time of diagnosis or start of treatment. In population cohorts it can be a fixed value of age. The prediction is given by the survival probability $S(t|Z)$ where $Z$ contains the relevant information available at $t=0$. \begin{itemize} \item \textit{Using the Cox model:} The PH assumption is often not satisfied over the whole time range. 
That could be fixed by introducing time-varying effects of the covariates (or by finding another model that fits the data better, e.g. an additive hazards model). If the focus is on one particular value $t_s$ (e.g., the 5-year survival in cancer), a surprisingly robust estimator of $S(t_s|Z)$ can often be obtained by applying administrative censoring at $t_s$ and using a simple Cox model with the effects $\beta$ fixed in time provided $t_s$ is not too large \citep{hans-stopped}. \item \textit{Direct modeling:} If there were no censoring before $t_s$, the survival probability $S(t_s|Z)$ could be directly estimated by models for the binary outcome $I(T^* > t_s)$. There is a choice of link function: probit, logit and complementary log-log. The latter measures the effect on the same scale as the PH model. Models can be fitted using full maximum likelihood or estimating equations approaches. Censoring before $t_s$ can be handled by modeling the censoring distribution and using inverse probability of censoring weighting (IPCW) or by using pseudo-observations based on jack-knifing \citep{PKA-maja}. \end{itemize} \subsubsection*{Dynamic prediction} Predictions made at $t=0$ need to be updated later on for those individuals that are still alive and at risk for the events of interest. First of all, the survival probabilities have to be replaced by the conditional probability $P(T^* > t | T^* \ge t_{pred}, Z(t_{pred}))$, where $t_{pred}$ is the time from which a new prediction is wanted. If the model for the hazard is perfect, the conditional probability can directly be computed from the hazard using $\widehat{\Lambda}(t|Z(t_{pred}))-\widehat{\Lambda}(t_{pred}|Z(t_{pred}))$ for $t \ge t_{pred}$. However, it may be hard to make models that are valid over the whole time range. Therefore, an alternative is to develop a new model using the data of the individuals still alive at $t_{pred}$. 
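The direct computation of the conditional survival probability from an estimated cumulative hazard, as described above, can be sketched as follows (assuming the estimate is available as a step function given by its jump times and cumulative values, as returned by a Nelson-Aalen or Breslow estimator; function names are ours):

```python
import numpy as np

def conditional_survival(event_times, cumhaz, t_pred, t):
    """P(T* > t | T* >= t_pred) = exp(-(Lambda(t) - Lambda(t_pred)))
    for a right-continuous step cumulative hazard Lambda given by
    its jump times (sorted) and cumulative values."""
    def Lam(s):
        # value of the step function at s
        idx = np.searchsorted(event_times, s, side="right") - 1
        return cumhaz[idx] if idx >= 0 else 0.0
    return np.exp(-(Lam(t) - Lam(t_pred)))
```

For $t_{pred}=0$ this reduces to the unconditional survival probability $\exp(-\widehat{\Lambda}(t))$; for later $t_{pred}$ only the hazard accumulated on $(t_{pred}, t]$ matters, which is exactly why predictions can be updated for subjects still at risk.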
If there is a fixed prediction window $t_s$, the conditional survival $P(T^* > t_{pred}+t_s|T^* > t_{pred},Z(t_{pred}))$ can be estimated robustly by the methods discussed above. Prediction later on is known as \textit{dynamic prediction} and the approach of building new models using the individuals at risk at $t_{pred}$ is known as \textit{landmarking} \citep{hans-sjs}. This is also of interest more generally when there is \textit{delayed entry}, i.e. individuals entering the cohort at $V>0$. In this case, the hazard can be hard to estimate around $t=0 $, since only few individuals may be at risk early on. Hence, predictions are hard to make at $t=0$, but conditional survival probabilities could be estimated reliably later in the follow-up. That might be particularly relevant for analyses that use age as the time axis. \subsubsection*{Prediction exploiting time-dependent covariates} Dynamic predictions using landmarking can thus be used when the PH assumption does not give a reasonable description over the entire time range. The technique may, however, be even more useful when doing predictions based on a model with time-dependent covariates. A hazard model with time-dependent covariates $Z(t)$ for which the trajectories are still unknown at $t=0$ can be useful when the aim of modeling is to describe the processes behind the hazard, but it is no longer simple to calculate the survival probabilities from the hazard using the relationship $S(t) = \exp(-\Lambda(t))$. This means that such a model cannot be used for predictions at $t=0$. However, they can still be useful because the history of $Z(t)$ before $t_{pred}$ can be informative for the future of the process. Therefore, such predictions can be based on landmark models. The history of $Z(t)$ up to $t_{pred}$ is summarized in a single statistic that is used as time-fixed covariate in the prediction model. The simplest approach is to use the last observation before $t_{pred}$. 
While this approach does not satisfy the consistency condition that prediction models at two different times should be compatible \citep{jewell-nielsen, suresh-etal}, it can be extended to give better predictions if more flexible prediction models from the landmark time are used, and more than just the last observation of $Z(t_{pred})$ is used to represent the effect of $Z(t)$, including, e.g., cumulative effects \citep{keogh-etal}. Another way is to develop a joint model for $Z(t)$ and $A(t|Z(t))$ and estimate survival probabilities by conditioning on the history of $Z(t)$ at $t_{pred}$ and $T^* \ge t_{pred}$ in the joint model. Estimation for such models can be challenging \citep{rizo-book}, and while such an approach has a better theoretical justification and is efficient, there can be concerns about its robustness. \subsection{Prediction with competing risks} \label{sec:comprisk} Very often intensity models are used for an event that does not include all-cause mortality. This was for example the case in the lithium and dementia study. However, in the presence of competing risks, naively inserting the estimated hazard into equation (\ref{eq:absrisk}) will produce an upwards biased estimate of the absolute risk (cumulative incidence). This is because, by treating competing events as censorings, one pretends that the target population is one where the competing events are not operating and therefore neglects the fact that subjects who have died from competing causes can no longer experience the event of interest. In such a situation it is necessary also to estimate the intensity of the competing events and to combine such estimates with those for the event of interest into an estimate of the \emph{competing risks cumulative incidence}.
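In the simplest case without covariates, this combination of cause-specific hazard increments amounts to weighting each Nelson-Aalen increment for the event of interest by the overall survival just before the event time (the Aalen-Johansen estimator of equation (\ref{eq:cuminc})). A sketch, with our own function names and the coding 0 = censored, 1 = event of interest, 2 = competing event:

```python
import numpy as np

def aalen_johansen(times, cause):
    """Aalen-Johansen estimate of the cause-1 cumulative incidence.

    times : observed times; cause : 0 = censored, 1 = event of
    interest, 2 = competing event.
    Returns the distinct event times and F1(t) at those times.
    """
    times = np.asarray(times, dtype=float)
    cause = np.asarray(cause, dtype=int)
    order = np.argsort(times)
    times, cause = times[order], cause[order]
    n = len(times)
    S = 1.0                      # overall survival just before t
    F1 = 0.0
    out_t, out_F = [], []
    i = 0
    while i < n:
        t = times[i]
        at_risk = n - i
        d1 = d2 = 0
        while i < n and times[i] == t:   # pool ties at t
            d1 += cause[i] == 1
            d2 += cause[i] == 2
            i += 1
        if d1 + d2 > 0:
            F1 += S * d1 / at_risk       # S(t-) dLambda_01(t)
            S *= 1 - (d1 + d2) / at_risk
            out_t.append(t)
            out_F.append(F1)
    return np.array(out_t), np.array(out_F)
```

Note how a competing event at an intermediate time does not increase $F_1$ but reduces the survival factor $S$, so later 1-events contribute less, which is precisely the correction that naive use of (\ref{eq:absrisk}) misses.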
If the cumulative hazard for the competing events is $\Lambda_{02}(t|Z)$ then the cumulative incidence for a 1-event is given by \begin{equation} F_1(t\mid Z)=\int_0^t\exp(-\Lambda_{02}(u\mid Z)-\Lambda_{01}(u\mid Z))d\Lambda_{01}(u\mid Z). \label{eq:cuminc} \end{equation} Without covariates, the estimator obtained by plugging in Nelson-Aalen estimates for the cumulative hazards in (\ref{eq:cuminc}) is known as the \emph{Aalen-Johansen estimator} \citep{ABGK-book}.\\ It is also possible to set up direct regression models for $F(t\mid z)$, e.g., using the Fine and Gray regression model \citep{Fine99} but a further discussion of such methods is beyond the scope of the present paper. We will, however, exemplify the use of the cumulative incidence in the examples in Section \ref{sec:examples}. \section{Issues in causal inference} \label{sec:causal} `Causality' may be defined in a number of different ways but the most commonly used approach is based on potential outcomes and randomized experiments \citep{rubin-book, goetghebeur, hernan-robin-book2020}. This is because a well conducted randomized experiment allows a causal interpretation of the estimated treatment effect. However, a causal interpretation of the effect of a non-randomized exposure is also of interest in certain observational studies. Two classic examples where randomization cannot be employed are ($i$) post-marketing studies of potential ``adverse effects'' of medications/treatments already approved (based on earlier randomized trials that focused on their effectiveness), and ($ii$) environmental or occupational exposures (where randomization is often impossible and/or unethical). However, any attempt at causal interpretation in an observational study, obviously, requires strong assumptions. It is not the intention of this paper to go into details concerning causal inference but in the current section we will briefly discuss the topic.
\\ First of all, causal questions are most natural and relevant for \emph{modifiable variables} for which a hypothetical randomized study could, in principle, be done. They are less relevant for variables that cannot be changed (such as sex or race). Causal parameters are typically defined as contrasts between average outcomes for the same population under the hypothetical scenarios of everyone being `treated' versus everyone being `untreated'. Thus, the causal risk difference at time $t$ is: \begin{equation} \Delta(t)=P(T^*(0)\leq t)-P(T^*(1)\leq t) \label{eq:Delta} \end{equation} where $T^*(0), T^*(1)$ are the (possibly counterfactual) survival times `under no treatment' vs. `under treatment'. The causal parameter $\Delta(t)$ in equation (\ref{eq:Delta}) is directly estimable in a randomized study and may be estimable based on observational data under a set of assumptions, including `no unmeasured confounders' -- a condition that can, obviously, never be tested based on the available data. Under these assumptions, one way of getting from a hazard model to an estimate of the counterfactual risk, had all subjects in the population been treated with treatment $a=0,1$, is to use \emph{inverse probability of treatment weights}; another is to use the `$g$-formula'. The latter works as follows. If the hazard model for given treatment $A$ and confounders $Z$ leads to an estimated absolute risk of $\widehat{F}(t\mid A,Z)$ then the estimate of $P(T^*(a)\leq t)$ using the $g$-formula is \begin{equation} \widehat{P}(T^*(a)\leq t)=\frac{1}{n}\sum_i \widehat{F}(t\mid a,Z_i),\quad a=0,1. \label{eq:g-formula} \end{equation} That is, the risk is predicted for each subject under treatment $A=a$ given his or her observed covariates and then \emph{averaged} over the sample $i=1,\dots,n$. Formula (\ref{eq:g-formula}) is applied separately for each treatment ($a=0$ vs $a=1$) and the estimate of $\Delta(t)$ is obtained.
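A sketch of the $g$-formula computation follows; the risk model `F_hat` below is a hypothetical illustration (an exponential-hazard model with made-up coefficients), not a model fitted in this paper:

```python
import math

# Hypothetical fitted absolute-risk model F(t | a, z): here an exponential
# hazard lambda = 0.1 * exp(-0.5*a + 0.3*z), purely for illustration.
def F_hat(t, a, z):
    lam = 0.1 * math.exp(-0.5 * a + 0.3 * z)
    return 1.0 - math.exp(-lam * t)

def g_formula_risk(t, a, covariates):
    # g-formula: average the model-based risks F(t | a, Z_i) over the
    # observed covariate values, with treatment set to a for everyone.
    return sum(F_hat(t, a, z) for z in covariates) / len(covariates)

def causal_risk_difference(t, covariates):
    # Delta(t) = P(T*(0) <= t) - P(T*(1) <= t), under the assumptions
    # (e.g. no unmeasured confounders) discussed in the text.
    return g_formula_risk(t, 0, covariates) - g_formula_risk(t, 1, covariates)
```

Because the illustrative treatment coefficient is protective ($-0.5$), the averaged risk under $a=1$ is below that under $a=0$, so the estimated $\Delta(t)$ is positive.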
Note that the $g$-formula is useful for predicting average risk in a sample even when a causal interpretation is not aimed at. We will illustrate this in the examples of Section \ref{sec:examples}. Another use of the $g$-formula can yield an estimate of the number of events `attributable to' a certain modifiable risk factor, $A$. This number is given by the difference between the `total risk' \emph{observed} in the population before time $t$: $\sum_i\widehat{F}(t\mid A_i, Z_i)$, and that expected \emph{if everyone were unexposed} ($A_i=0$): $\sum_i\widehat{F}(t\mid 0, Z_i)$. When doing this for a number of risk factors, these may be ranked in a way that also accounts for their prevalence in the population.\\ Though a hazard model may, thus, be useful both for describing associations between covariates and a time-to-event outcome and serving as a useful step towards estimating a causal contrast like (\ref{eq:Delta}), it does not itself provide a causal contrast. This may be seen as follows. Recall the intuitive definition (\ref{eq:int}) of the hazard function for survival data: $$\lambda(t)=P(T^*\leq t+dt\mid T^*>t)/dt.$$ This shows that contrasts based on the hazard functions for the counterfactual outcomes $T^*(0),T^*(1)$, e.g. the hazard ratio at time $t$, $$\frac{\lambda^1(t)}{\lambda^0(t)}=\frac{P(T^*(1)\leq t+dt\mid T^*(1)>t)}{P(T^*(0)\leq t+dt\mid T^*(0)>t)}$$ are not directly causally interpretable since they contrast different sub-populations: those who survive past $t$ under treatment ($T^*(1)>t$) and those who survive past time $t$ under no treatment ($T^*(0)>t$). For this reason, a statement saying that `treatment only works until time $\tau$ but not beyond' in a situation with $\beta(t)<0 \mbox{ for }t<\tau$ and $\beta(t)=0 \mbox{ for }t>\tau$ is not justified \citep{hernan-EPIDEMIOLOGY2010, aalen-LiDA2015, torben-etal}.
A special problem with causal inference for survival data is \emph{time-dependent confounding/mediation} where a time-dependent covariate both affects future treatment and survival outcome and is affected by past treatment. For this situation, special techniques are needed to draw causal conclusions concerning the treatment effect \citep{daniel-etal, hernan-robin-book2020}. \section{Illustrative applications} \label{sec:examples} In this section, we illustrate the points made in the paper with three real data examples. A more detailed analysis (along with code in R statistical software) is provided in the online Appendix. \subsection{Peripheral arterial disease} Peripheral arterial disease (PAD) is a common circulatory problem in which narrowed arteries reduce blood flow to peripheral limbs, often the legs. It is also likely to be a sign of a more widespread atherosclerosis, and subjects manifesting the disease carry an increased risk of atherothrombotic events. The PAD data set contains the results of a Slovene study reported in \cite{Blinc17, Blinc11}. Briefly, the study was conducted by 74 primary care physician-researchers (GPs), who recruited subjects with PAD along with age and sex matched `controls' without PAD. Yearly examination visits were planned with a total of 5 years of follow-up. The final study included 742 PAD patients and 713 controls, with baseline data for each subject, measurements at each visit, and endpoints. Important endpoints are death, either due to cardiovascular disease (CVD) or other causes, non-fatal CVD endpoints of infarction and stroke, and patient interventions attributed to the disease such as re-vascularization procedures.
All the individuals in the study were treated according to the latest treatment guidelines and the goal was to investigate the survival of the patients with PAD (in comparison to controls) despite optimal treatment.\\ \noindent \textit{Endpoints:} Most of this analysis will focus on death as the outcome of interest. The causes of death are split into two groups: cardiovascular (CV) or other (non-CV) and, in addition, we will also consider all CV events (stroke, infarction or death) as an outcome for modelling. \\ \noindent \emph{Inclusion criteria, follow-up and censoring:} With most GPs, the follow-up visits of the patients followed a yearly plan, though, in practice, the visits tended to be moderately delayed, with time to the 5th visit ranging from 4.8 to 6.8 years. Most data on patients were recorded at the time of their visit, with the exception of deaths, which were reported as they occurred (along with all other events that had occurred since the last visit). Because of between-physician differences in whether patients were followed after 5 years, and to avoid possibly non-independent censoring, all individuals alive at 5 years after enrollment were censored at that time. \\ \noindent \emph{Time axis, basic survival analysis:} For the PAD patients, time since diagnosis is a natural time axis as it represents both the progression of the disease and treatments for it. Survival curves for the control subjects serve as a comparison outcome of similarly aged subjects without the disease, but do not have a natural stand-alone interpretation. Figure \ref{fig:pad1_km} contains the overall Kaplan-Meier curves for PAD and control subjects, male and female (deaths of any cause are considered as the outcome). The survival is higher for females than for males, which is no surprise given a mean age at entry of 65 years, and is lower for PAD subjects than for the age and sex matched controls.
The right hand panel shows the curves on age as the time axis; a very similar pattern can be observed. When using age, the left hand portion of a survival curve can often be highly variable due to the small number of patients at risk at a young age, and this early high variability can then affect the entire curve. To avoid this, we estimate the conditional survival $P(T^*>t | T^*>t_0)$ for $t>t_0$, with $t_0$ chosen so that the risk set is large enough and most of the information of interest is included. In the case of PAD, we choose $t_0=55$ years.\\ \begin{figure}\centering \includegraphics[height=5cm]{pad1_kmf.pdf} \caption{Kaplan-Meier estimates of cumulative probability of survival with respect to time since enrollment (left) and age (right).} \label{fig:pad1_km} \end{figure} \noindent\emph{Hazard regression models} \noindent We next study how the covariates affect the hazard of dying. \noindent\emph{Covariates:} We will be interested in PAD, sex (38\% women), age, and later also in LDL and HDL. The distribution of the continuous covariates with respect to PAD is given in Figure \ref{fig:pad2_cov}. By study design there should be no difference in the age distribution; HDL and LDL are slightly lower for the PAD subjects. When used as a covariate, age will be expressed in decades to give a coefficient of a more interpretable size. \begin{figure}\centering \includegraphics[height=5cm]{pad2_covf.pdf} \caption{Distributions of continuous variables with respect to PAD (red=PAD, black=Control).} \label{fig:pad2_cov} \end{figure} In the analysis presented in Table \ref{tab:padfit}, we focus on the effect of PAD and the effect of sex in each PAD subgroup. First, we fit a Cox model with time since enrollment as the time axis. Knowing that age is a strong predictor, we include it in the model (i.e., age/10 is used as a covariate).
We learn that patients with PAD have a 2.4 times higher hazard than the controls, and that male sex and 10 additional years of age each increase the hazard approximately 2-fold. The effect of both sex and age is very similar in both groups (patients and controls).\\ Alternatively, we can use age as the time axis and add time since enrollment (FU time) as a possible predictor. Using age as the time axis, the effect of male sex is a 2-fold hazard increase, just as before. The time-dependent variable years-since-enrollment compares the hazard of death for subjects with more study years to those recently enrolled, and shows an increase in death rates over time (HR=1.2 per year). By choosing one time axis and adding the other into the model as a covariate, the interpretation of the HR for sex becomes the same (the HR for two patients of the same age and the same time since enrollment), provided that the assumptions of linearity (and PH) of the covariate are met. To avoid the problem of choosing the time axis and adding assumptions, the Poisson model that allows multiple time axes can be used instead. To this end, we assume the baseline hazard to be constant within yearly intervals of time since enrollment and five-year intervals of age. The results are given in the last two rows of Table \ref{tab:padfit}. We can see that, in our case, all three approaches coincide well, so the possible violations of the assumptions of the different options had no effect.\\ \begin{table} \centering \begin{tabular}{lccccccc} & \multicolumn{2}{c}{Overall} & \multicolumn{2}{c}{PAD} & \multicolumn{2}{c}{Control} & p \\ & HR & 95\% CI & HR & 95\% CI & HR & 95\% CI & PAD vs C\\ \hline \multicolumn{8}{c}{Time since enrollment axis, Cox model}\\ PAD& 2.40 &(1.71, 3.37) \\ Sex (m vs.
f)& 2.00 &(1.40, 2.86) &2.01 &(1.31, 3.08) &1.97 &(1.02, 3.79) &0.96 \\ Age (per10yrs)& 1.93 &(1.57, 2.37) &1.91 &(1.49, 2.45) &1.98 &(1.36, 2.89) &0.88 \\ \\ \multicolumn{8}{c}{Age axis, Cox model}\\ PAD& 2.40&(1.70, 3.37) \\ Sex (m vs. f)& 2.02 &(1.42, 2.90) &2.01 &(1.31, 3.08) &2.01 &(1.04, 3.88) &1.00 \\ FU (per1yr)& 1.18 &(1.05, 1.33) &1.20 &(1.05, 1.38) &1.12 &(0.91, 1.39) &0.61 \\ \\ \multicolumn{8}{c}{Both time axes, Poisson model}\\ PAD& 2.38&(1.70, 3.35) \\ Sex (m vs. f)& 2.01 &(1.41, 2.88) &2.03 &(1.33, 3.11) &1.97 &(1.02, 3.81) &0.95 \\ \end{tabular} \caption{Estimated hazard ratios (HR) and 95\% confidence intervals (CI) in models with different time axes, fitted with Cox or Poisson model. The last column reports the p-value for interaction of each covariate with group (PAD or control).} \label{tab:padfit} \end{table} \noindent\emph{Competing risks and time-dependent covariates}\\ The analysis so far considered all causes of death equally, but the cardiovascular deaths (CV) are of particular interest. In 5 years, 159 patients died, 68 of these due to cardiovascular reasons. Figure \ref{fig:pad3_AJ} presents the Aalen-Johansen estimator of the absolute risk (also known as the cumulative incidence function). The estimated 5-year survival probability of PAD patients is 84.5\% and we can see that 6.9\% of the PAD patients are estimated to have died due to CV reasons and 8.4\% due to other reasons. Both the probability of CV death and that of non-CV death are considerably greater than in the control group.\\ \begin{figure}\centering \includegraphics[height=5cm]{pad3_ajf.pdf} \caption{The probability of dying due to cardiovascular (solid line) or other reasons (dashed line) with respect to PAD (red=patients, black=controls).} \label{fig:pad3_AJ} \end{figure} We wish to explore the effect of our basic covariates (sex, age and PAD) and cholesterol (LDL, HDL) on the hazard for cardiovascular death or major cardiovascular events.
Hazard models for a particular endpoint can be fitted by censoring all `other cause' deaths. The results for Cox models on the time-since-enrollment axis are given in Table \ref{tab:cmp_tdc}. \begin{itemize} \item A: As before, we see that male sex and higher age increase the hazard; we also see that PAD is a strong risk factor: the hazard of PAD patients is almost 3 times higher than that of the controls. Neither LDL nor HDL values at baseline seem to have an important effect. \item B: We now include all the information available -- the values of HDL and LDL were updated on a yearly basis (if missing, the last value was carried forward), i.e., we use them as time-dependent variables. We can now observe a much larger effect of HDL: patients whose HDL is lower by 1 mmol/l have a 4.7 ($\approx 1/0.21$) times higher hazard. We can also observe a more pronounced effect of LDL. Its direction may seem counterintuitive, but may be due to the fact that patients at a higher risk have lower target values of LDL and hence the lower LDL may be a proxy for the higher risk patients. \item C: This model regards not only CV death but also stroke and infarction as events. The effects of the covariates do not change much, but all the standard errors have decreased as the number of events increased to 142. \end{itemize} \begin{table} \centering \begin{tabular}{l|cc|cc|cc|} & \multicolumn{2}{|c|}{A}& \multicolumn{2}{|c|}{B} & \multicolumn{2}{|c|}{C} \\ & \multicolumn{2}{|c|}{CV death} & \multicolumn{2}{|c|}{CV death} &\multicolumn{2}{|c|}{CV events}\\ & \multicolumn{2}{|c|}{Time-fixed}& \multicolumn{2}{|c|}{Time-dependent} & \multicolumn{2}{|c|}{Time-dependent}\\ \hline & HR &95\%CI& HR&95\%CI& HR &95\%CI \\ \hline PAD&2.87&(1.65-5)&2.40&(1.37-4.20)&2.27&(1.57-3.28)\\ Sex (m vs.
f)&1.67&(0.97-2.88)&1.36&(0.79-2.35)&1.90&(1.28-2.81)\\ Age (per10yrs)&1.93&(1.40-2.66)&2.01&(1.45-2.77)&1.54&(1.25-1.90)\\ HDL (mmol/l)&0.74&(0.39-1.41)&0.21&(0.10-0.48)&0.48&(0.29-0.79)\\ LDL (mmol/l)&0.92&(0.72-1.18)&0.76&(0.57-1.01)&0.88&(0.73-1.07)\\ \end{tabular} \caption{Estimated hazard ratios (HR) and 95\% confidence intervals (CI) in Cox models. A: CV deaths, baseline HDL and LDL; B: CV deaths, time-dependent HDL and LDL; C: all CV events, time-dependent HDL and LDL.} \label{tab:cmp_tdc} \end{table} To check whether the above interpretation makes sense, we further examine the goodness-of-fit of the models. \\ We focus on model C, which uses all the information available. Adding a spline to the model, i.e. replacing $\beta$ HDL by $s$(HDL), we can see that the linearity of HDL may be problematic. The protective effect increases with the value of HDL but may level off for values above approximately 1.5 mmol/l, see Figure \ref{fig:pad4_nlin} (the huge confidence interval beyond 2 mmol/l is due to very few individuals with HDL above 2). Allowing HDL to be non-linear, the Schoenfeld residuals test for proportional hazards indicates no further issues. \begin{figure}\centering \includegraphics[height=5cm]{pad4_nlinf.pdf} \caption{The effect of HDL on the hazard of having a CV event modelled by using restricted cubic splines in a multiple Cox regression model (an extension of model C).} \label{fig:pad4_nlin} \end{figure} The calculation of the absolute risk is based on model A, as only the baseline information can be used for prediction. Unlike in the case of pure hazard modeling, the deaths from other causes cannot simply be censored: to estimate the probability of dying due to CV causes, the hazard of dying due to other causes must be estimated as well, see Table \ref{tab:cmp_tdc2}. The absolute risks of two individuals, one aged 58 and the other 72 (25th and 75th percentile of age) and median values of lipids are plotted in Figure \ref{fig:pad5_AR}.
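Combining the two cause-specific hazard models into absolute risks, as in equation (\ref{eq:cuminc}), amounts to a discrete-time plug-in computation; a sketch (the hazard increments below are illustrative numbers, not the fitted PAD values):

```python
# Sketch of the plug-in cumulative incidence: combine increments of the two
# cause-specific cumulative hazards, evaluated on a common time grid.
def cumulative_incidence(dLambda1, dLambda2):
    """dLambda1, dLambda2: cause-specific hazard increments per interval.
    Returns F1 (risk of a cause-1 event) at the end of each interval."""
    surv = 1.0   # overall survival S(u-) just before the current interval
    total = 0.0
    F1 = []
    for d1, d2 in zip(dLambda1, dLambda2):
        total += surv * d1          # probability mass moved to cause 1
        surv *= (1.0 - d1 - d2)     # overall survival update (both causes)
        F1.append(total)
    return F1
```

With increments 0.1 and 0.1 per interval for both causes, $F_1 = 0.1$ after the first interval and $0.1 + 0.8 \cdot 0.1 = 0.18$ after the second; the naive estimate that censors cause-2 events, $1 - 0.9 \cdot 0.9 = 0.19$, is larger, illustrating the upward bias discussed above.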
\begin{table} \centering \begin{tabular}{r|cc} &\multicolumn{2}{c}{Time-fixed, other cause} \\ & HR &95\% CI \\ \hline PAD & 2.04 & (1.31--3.19) \\ Sex (m vs. f) & 2.12 & (1.29--3.50) \\ Age (per 10 yrs) & 1.93 & (1.45--2.56) \\ HDL & 0.82 & (0.43--1.55) \\ LDL & 1.02 & (0.83--1.26) \\ \end{tabular} \caption{Estimated hazard ratios (HR) and 95\% confidence intervals (CI) for other cause mortality. Baseline HDL and LDL.} \label{tab:cmp_tdc2} \end{table} \begin{figure} \centering \includegraphics[height=5cm]{pad5_ARf.pdf} \caption{The absolute risk of dying of CV (solid line) and other causes (dashed line) by PAD status (patients=red, controls=black) and age (left graph 58, right graph 72). The values of baseline HDL and LDL are 1.3 and 3, respectively.} \label{fig:pad5_AR} \end{figure} To conclude, we have seen that the PAD remains a strong risk factor despite following the latest treatment guidelines. This is true regardless of whether we focus on all major cardiovascular events or on CV death only. Old age and too low HDL are further associated with a higher event rate. \subsection{Non-alcoholic fatty liver disease} Non-alcoholic fatty liver disease (NAFLD) is defined by three criteria: presence of greater than 5\% fat in the liver (steatosis), absence of other indications for the steatosis (such as excessive alcohol consumption or certain medications), and absence of other liver disease \citep{Puri12}. NAFLD is currently believed to be responsible for almost 1/3 of liver transplants and its impact is growing. It is expected to be a major driver of hepatology practice in the coming decades \citep{Tapper18}. The study included all patients with a NAFLD diagnosis in Olmsted County, Minnesota between 1997 and 2014 along with up to four age and sex matched controls for each case \citep{Allen18}. (Note that some changes to the public data have been made to protect patient confidentiality; analysis results here will not exactly match the original paper). 
The goal of the study is to investigate whether NAFLD subjects are at increased mortality risk compared to the general population, and, if so, the amount of increase. Only a minority of subjects are tested for NAFLD since this requires an abdominal scan, and we can, therefore, only address the progression of \emph{detected} NAFLD.\\ \noindent \emph{Entry time, inclusion criteria: } In the PAD study, the data were collected prospectively and hence the inclusion criteria were naturally evaluated at the time of inclusion. In contrast, the NAFLD data were collected retrospectively using existing databases and are hence more prone to mistakes regarding the time at which the inclusion criteria are known. Subjects enter the study at the age of NAFLD diagnosis or selection as a control, whichever comes first. Because NAFLD is often a disease of exclusion, a NAFLD diagnosis followed shortly by the diagnosis of another liver disease is considered a false positive. The data set is restricted to `confirmed NAFLD', i.e., if someone were diagnosed on 2001-06-20, the index date for confirmed NAFLD would be 2002-06-20, assuming that another liver diagnosis, death, or incomplete follow-up did not intervene. The follow-up of the matched control subjects also commences on the `confirmed NAFLD' date. This is important. If the matched subjects' follow-up were started on 2001-06-20 then the control has the opportunity to die during that first year while the case does not, leading to immortal time bias. When selecting the controls for any given NAFLD case at age $a$, it is very important only to use information that was available at age $a$ for the controls. We cannot exclude subjects who have too short a follow-up (die or are censored before age $a+2$, say), who will later have diabetes, or, most particularly, who will later become NAFLD patients. Each of these is a variant of immortal time bias.
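Avoiding this bias in the hazard models amounts to representing each subject's follow-up in counting-process form, so that time before a later NAFLD diagnosis is counted as unexposed. A sketch (the helper and its arguments are hypothetical, not from the study code):

```python
# Sketch: split one subject's follow-up into counting-process
# (start, stop, nafld) rows for a time-dependent NAFLD covariate.
def split_followup(entry_age, exit_age, nafld_age=None):
    """nafld_age: age at (confirmed) NAFLD diagnosis, None if never diagnosed."""
    if nafld_age is not None and nafld_age <= entry_age:
        # NAFLD at or before study entry: exposed for the whole follow-up
        return [(entry_age, exit_age, 1)]
    if nafld_age is None or nafld_age >= exit_age:
        # never diagnosed during follow-up: unexposed throughout
        return [(entry_age, exit_age, 0)]
    # a control who converts: unexposed until diagnosis, exposed afterwards
    return [(entry_age, nafld_age, 0), (nafld_age, exit_age, 1)]
```

For example, a control entering at age 50 and diagnosed with NAFLD at 60 contributes an unexposed interval (50, 60) and an exposed interval (60, exit); treating such a subject as exposed from age 50 would create exactly the immortal time bias described above.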
In this data set, 331 of the subjects selected as controls were diagnosed with NAFLD at a later age. Care must be taken at the time of analysis to correctly deal with these patients. The preliminary checks and figures will treat each subject's value at study entry as fixed; the hazard models will treat it as a time-dependent covariate.\\ \noindent \emph{Endpoints and censoring:} The primary focus of this analysis is death, which means the endpoint is not problematic. All the subjects in the study are administratively censored at the end of 2017, when the data set was created. A small number have been censored due to migration, about 1\% per year over the age of 50 \citep{Sauver12}. Because the publicly available NAFLD data set does not contain dates, a plot of the censoring distribution is not particularly informative: we do not know what the result \emph{should} look like. Since the follow-up is as long as 20 years for some subjects, care must be taken with the independent censoring assumption: the later included subjects have a systematically lower death rate, e.g. due to improved general population medical care, and are, due to the later inclusion date, followed-up for a shorter time period. \\ \noindent \emph{Time axis, basic survival analysis:} NAFLD is not an acute condition; it may well exist for many years before detection. Furthermore, the age range in the study is very wide, death as the primary endpoint is highly related to age, and cases and controls are matched on age (with `time since NAFLD' not well defined for the controls). All these reasons make age the natural time axis for the NAFLD study. This approach mimics an idealized (but impractical) study which included the entire population from birth forward, with time-dependent NAFLD as the covariate. As a first description of the data, we plot the estimated survival curves for the patients by sex and NAFLD group, see Figure \ref{fig:nafld1_surv}.
For the latter we use the subject's NAFLD status at enrollment as a time-fixed variable. This approach is similar in spirit to an intent-to-treat analysis in a clinical trial, in that it gives a reliable estimate but one that may underestimate the true clinical effect of a covariate. As with the PAD study, we estimate conditional survival. \begin{figure}\centering \includegraphics[height=5cm]{nafld1_survf.pdf} \caption{Survival curves from age 50 forward, comparing NAFLD to non-NAFLD at study entry, stratified by male/female.} \label{fig:nafld1_surv} \end{figure} An alternative summary is to report the cumulative hazard (using the Nelson-Aalen estimator) by sex and time-dependent NAFLD status, see Figure \ref{fig:nafld2_na}. If the hazard is constant within each interval, the increase of the cumulative hazard over that interval is close to the death rates (proportion of deaths per person-year) given in Table \ref{tab:nafld_py}. The hazard ratio (difference on the log scale) between male patients and controls is nearly constant in time, which suggests that a proportional hazards model may fit well. On the other hand, in women the NAFLD/control hazard ratio is highest at the youngest ages and decreases with age. \\ \begin{table} \centering \begin{tabular}{l|c|c|c|c|} &Female control&Female NAFLD&Male control&Male NAFLD \\ \hline 40-50 & 1.3&2.4&2.2&2.5 \\ 50-60 & 2.5&5.9&5.2&8.3 \\ 60-70 & 5.4&14.8&11.6&22.8 \\ 70-80 & 18.0&28.1&23.4&37.2 \\ 80-90 & 68.1&76.3&79.6&108.4 \\ \end{tabular} \caption{Death rates per 1000 person years, split by age group, sex, and time-dependent NAFLD status.} \label{tab:nafld_py} \end{table} \begin{figure} \centering \includegraphics[height=5cm]{nafld2_na50f.pdf} \caption{Nelson-Aalen estimates for the cumulative hazard from age 50, stratified by gender (left: females, right: males) and NAFLD (NB: log-scale on the vertical axis).
} \label{fig:nafld2_na} \end{figure} \noindent\textit{Hazard models:}\\ In the hazard models we can incorporate NAFLD as a time-dependent covariate. Subjects who had NAFLD at enrollment have a value of 1; controls start with a value of 0 at enrollment, which switches to 1 for those who are diagnosed with NAFLD at a later age. Also, the overall model is not adversely affected by the small sample issue at younger ages, so there is no need to use a restricted age range. Because NAFLD is strongly associated with obesity, we also fit models that adjust for other conditions associated with obesity: diabetes, hypertension and dyslipidemia. Fits were done overall and for males and females separately. \begin{table} \centering \begin{tabular}{l|cc|cc|cc|} & \multicolumn{2}{|c|}{Overall} & \multicolumn{2}{|c|}{Females} & \multicolumn{2}{|c|}{Males} \\ & HR &95\%CI& HR&95\%CI& HR&95\%CI\\ \hline NAFLD only & 1.62 & (1.44--1.82) & 1.65 & (1.39--1.95) & 1.60 & (1.35--1.88) \\ \hline NAFLD & 1.43 & (1.26--1.62) & 1.39 & (1.17--1.67) & 1.45 & (1.22--1.73) \\ Diabetes & 1.77 & (1.57--2.01) & 1.94 & (1.62--2.32) & 1.64 & (1.38--1.94) \\ Hypertension & 1.24 & (1.08--1.42) & 1.33 & (1.10--1.63) & 1.16 & (0.96--1.41) \\ Dyslipidemia & 0.68 & (0.60--0.78) & 0.65 & (0.54--0.79) & 0.72 & (0.60--0.88) \\ \end{tabular} \caption{Estimated hazard ratios (HR) and 95\% confidence intervals (CI) from Cox models that have only NAFLD as a covariate, and models with NAFLD and covariates. The overall model is fit to all subjects with sex as a stratification variable. \label{tab:nafld2} } \end{table} Table \ref{tab:nafld2} contains the estimated hazard ratios from Cox models that include all subjects, only males or only females, and for models that include only NAFLD as a (time-dependent) covariate as well as adjusting for confounders. The estimated effect of NAFLD is attenuated when adjusting for the three covariates.
The higher prevalence of diabetes and other conditions explains a portion of the NAFLD effect. The overall NAFLD effect does not differ markedly for males and females.\\ \noindent\textit{Model checks:}\\ Since all of the covariates in the models are binary, there is no need to explore functional form. An overall test for proportional hazards based on Schoenfeld residuals has results that mimic what was seen in the hazard plot of Figure \ref{fig:nafld2_na}: males fit the proportional hazards model well ($p=0.4$) while females show significant non-proportionality ($p<0.001$). It is, however, interesting that the overall `average' effects over age are nearly the same for males and females. Checks of the multiple Cox model show that non-proportionality is more severe with respect to diabetes (for both males and females) and for hypertension for women, see Figure \ref{fig:nafld3_nph}. The relative effect of comorbidities on death rates is higher at younger ages, but may increase again at very old ages. \begin{figure}\centering \includegraphics[height=8cm]{nafld3_nphf.pdf} \caption{The changing effect of diabetes (diab) and hypertension (htn) in the multiple regression models. Top row: females, bottom row: males.} \label{fig:nafld3_nph} \end{figure} In conclusion, NAFLD is associated with an increased mortality compared to disease-free, age and sex matched controls. Part of this increase may be explained by the different distributions of diabetes, hypertension and dyslipidemia in the two groups. \subsection{Advanced ovarian cancer} As the last example, we consider the advanced ovarian cancer data set. It contains follow-up on 358 subjects who were enrolled in two trials of chemotherapy for advanced ovarian cancer, conducted around 1980 by a multi-institution research network in the Netherlands.
The eligibility criteria for enrollment included pathologic confirmation of advanced disease, age less than 70 years, lack of serious cardiac or renal disease, and favorable haematological status. Patients could not have a second tumor, brain metastases, or prior radiation or chemotherapy. The treatment is not of our main interest here, hence we treat this data set primarily as an observational study. The data were extensively analyzed in Chapter 6 of \cite{van12}, and further references can be found there as well. Patient follow-up in the data set continued for 6 years. The goal of the analysis was to predict the survival probability of patients using covariates that were recorded at baseline. \\ Focusing on a fatal condition such as advanced cancer in a data set that comes from a clinical trial with excellent follow-up, the basic aspects of these data are particularly simple: the sole event of interest is death of any cause, the inclusion criteria are clear, and the most natural time axis is time from diagnosis, as this is the time frame of most direct interest to both the patient and the care provider. The left panel of Figure \ref{fig:ovarian1_censurv} shows the censoring pattern for the study, which follows the expected `hockey stick' shape for a formal trial with 3 years of enrollment, 4 years of follow-up after enrollment of the final subject, and no subjects lost to follow-up. The graph shows no censoring before 4 years followed by an upward line corresponding to uniform accrual each year. The Kaplan-Meier curves give the overall pattern of survival for this cohort, see the right hand graph of Figure \ref{fig:ovarian1_censurv}.
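The `hockey stick' pattern follows from simple arithmetic: assuming uniform accrual over 3 years and a common closing date 7 years after the first enrollment (3 years of accrual plus 4 further years of follow-up, as described above), the potential follow-up of a subject entering at calendar time $e$ is $7-e$ years, so no one can be administratively censored before 4 years:

```python
# Sketch of the administrative censoring times implied by the trial design
# (uniform accrual over 3 years, common closing date at study year 7).
def admin_censoring_times(entry_times, study_length=7.0):
    """Potential follow-up (censoring) time for each calendar entry time."""
    return [study_length - e for e in entry_times]

entries = [i * 3.0 / 10 for i in range(10)]   # uniform accrual over 3 years
cens = admin_censoring_times(entries)
# the censoring distribution has no mass below 4 years of follow-up,
# then rises roughly linearly up to 7 years: the "hockey stick"
```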
\\ \begin{figure}\centering \includegraphics[height=5cm]{ovarian1_censurvf.pdf} \caption{Censoring fraction and survival curve (with 95\% confidence interval) for the ovarian cancer study.} \label{fig:ovarian1_censurv} \end{figure} \noindent\emph{Regression:} In our analysis, we focus on three covariates: \begin{itemize} \item FIGO: This is a staging system for ovarian cancer. Advanced ovarian cancer comprises the stages III ($n=262$, used as reference group) and IV ($n=96$). Stage IV patients are known to have a very poor prognosis. \item The diameter of the residual tumor after surgery, categorized as micro, $< 1$ cm, 1--2 cm, 2--5 cm, and $>5$ cm, with the last category being the most frequent one ($n=145$). We will use the `micro' category ($n=29$) as the reference group. \item Karnofsky index: a measure of the patient's functional status at the time of diagnosis. The maximum score of 10 indicates no physical limitations. We will regard the covariate as quantitative in the model. \end{itemize} The coefficients in the fitted PH model are given in Table \ref{tab:ovarian1}.\\ \begin{table}\centering \begin{tabular}{l|cc} & HR & 95\% CI \\ \hline Diameter$<$1cm & 1.38 &(0.73-2.57) \\ Diameter 1-2cm & 2.24 &(1.19-4.21) \\ Diameter 2-5cm & 2.38 &(1.29-4.39) \\ Diameter$>$5cm & 2.53 &(1.40-4.57) \\ FIGO (stage IV vs. III) & 1.73 &(1.33-2.25) \\ Karnofsky index (per 1 point)& 0.84 &(0.75-0.93) \\ \end{tabular} \caption{Estimated hazard ratios (HR) and 95\% confidence intervals (CI) in the Cox model for the ovarian cancer data. The reference group for the covariate Diameter is `micro'. \label{tab:ovarian1} } \end{table} \noindent\emph{Checking the assumptions of the model:}\\ To check the proportional hazards assumption, we consider both the method using cumulative Schoenfeld residuals of \cite{Lin93} and the smoothed Schoenfeld residuals method proposed by \cite{therneau00}.
Both methods agree that the PH assumption seems to be problematic for the Karnofsky score (Figure \ref{fig:ovarian2_PH}). The test based on cumulative Schoenfeld residuals returns a $p$-value of 0.009. The left-hand graph of Figure \ref{fig:ovarian2_PH} (smoothed residuals) shows a rapid early drop in importance of the Karnofsky score, implying that baseline Karnofsky score, measured at diagnosis, is not predictive of mortality beyond the first year of follow-up, something that could be expected for advanced cancer. \\ \begin{figure}\centering \includegraphics[height=5cm]{ovarian2_phf.pdf} \caption{Proportional hazards plots for Karnofsky score (left: smoothed Schoenfeld residuals; right: cumulative residuals following Lin et al.)} \label{fig:ovarian2_PH} \end{figure} \noindent\emph{Dealing with the lack of PH:}\\ One approach to deal with the violation of the proportional hazards assumption is to fit a set of landmark models. In this case we consider 2-year windows and set the landmark points at 0, 1 and 2 years. By fitting a separate model at each landmark point, we allow the coefficients to change over time and also to use the most recent covariate values for the prediction (in this case there are no time-dependent covariates, so all fits use the same values). As can be seen from Table \ref{tab:ovarian2}, the effect of the residual tumor diameter increases with size at all the landmark points, though considerable variability can be observed (the subgroups are rather small and hence the standard errors are quite large). The coefficient for the stage at baseline (FIGO) remains rather constant over time, whereas the effect of the Karnofsky index quickly approaches zero (the hazard ratio approaches 1); for predictions from 1 or 2 years onward, the baseline value of the Karnofsky score carries no important information. 
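The construction of such landmark data sets can be sketched as follows. This is a minimal pure-Python illustration (times in years, made-up records); the actual analysis was done in R, and each landmark data set is subsequently fed to a Cox model fit, which is not shown:

```python
def landmark_dataset(records, landmark, window):
    """Build the data set for one landmark analysis.

    `records` is a list of (time, event) pairs (event = 1 for death,
    0 for censoring).  Subjects no longer at risk at `landmark` are
    dropped; the remaining subjects are administratively censored at
    `landmark + window`, the end of the prediction window.
    """
    out = []
    horizon = landmark + window
    for time, event in records:
        if time <= landmark:          # died or censored before the landmark
            continue
        if time > horizon:            # administrative censoring at horizon
            out.append((horizon, 0))
        else:
            out.append((time, event))
    return out

# Toy cohort, landmark at 1 year with a 2-year window.
cohort = [(0.8, 1), (1.5, 1), (2.4, 0), (3.7, 1), (6.0, 0)]
print(landmark_dataset(cohort, landmark=1.0, window=2.0))
```

Repeating this construction at landmarks 0, 1 and 2 years and fitting a separate Cox model to each data set gives the three columns of estimates reported below.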
\\ \begin{table} \centering \begin{tabular}{l|cc|cc|cc} & \multicolumn{2}{c}{From time 0} & \multicolumn{2}{|c|}{From 1 year} &\multicolumn{2}{c}{From 2 years} \\ \hline &HR& 95\% CI&HR& 95\% CI&HR& 95\% CI\\ \hline Diameter$<$1cm & 1.31 &(0.5-3.3)&1.63 &(0.7-4.1)&2.73 &(0.8-9.4) \\ Diameter 1-2cm & 2.92 &(1.2-7.1)&2.89 &(1.2-7.2)&2.17 &(0.5-8.7) \\ Diameter 2-5cm & 3.04 &(1.3-7.2)&2.75 &(1.1-6.7)&3.55 &(1.0-12.7) \\ Diameter$>$5cm & 2.69 &(1.2-6.3)&3.21 &(1.4-7.6)&5.54 &(1.7-18.4) \\ FIGO (stage IV vs. III) & 1.76 &(1.3-2.4)&1.70 &(1.2-2.5)&1.64 &(0.9-2.9) \\ Karnofsky index & 0.77 &(0.7-0.9)&0.89 &(0.8-1.0)&1.07 &(0.8-1.4) \\ \end{tabular} \caption{Estimated hazard ratios (HR) and 95\% confidence intervals (CI) in landmark models (with 2-year windows) for the ovarian cancer data. \label{tab:ovarian2} } \end{table} \noindent \textit{Prediction:} The hazard ratios describe the relative effect of each covariate, but say nothing about the absolute risk of the patients. To this end, Figure \ref{fig:ovarian4_pred} shows risk estimates for a set of covariate values. The probability of dying in the first 2 years is comparable in size to the conditional probability of dying in the next 2 years at each of the chosen time points. As seen in Table \ref{tab:ovarian2}, the Karnofsky index at baseline is crucial for the prognosis in the first 2 years, but less relevant for patients who survive the initial period. Obviously, observation of an updated Karnofsky index could change this conclusion. \begin{figure}\centering \includegraphics[height=5cm]{ovarian4_predf.pdf} \caption{The conditional probability of dying for patients with diameter $<1$ cm with respect to stage and two chosen levels of Karnofsky index. 
Two-year conditional probabilities for patients still at risk at the beginning of each time window are estimated.} \label{fig:ovarian4_pred} \end{figure} In conclusion, the fitted Cox regression model enabled estimation of the absolute risk of dying within two years, even in the presence of a covariate for which the PH assumption was not satisfied. For that purpose, the technique of landmarking proved very useful. \section{Summary and discussion} \label{sec:disc} In general multi-state models, the \emph{intensity} is the basic parameter \citep{ABGK-book} and we have argued that, in the analysis of time-to-event data from observational studies, the intensity is, therefore, an obvious parameter to target. Focus has been on a single occurrence of a single type of event, such as (cause-specific) death, onset/diagnosis of a disease, or first hospital re-admission. Recurrent themes have been that hazard models known from survival analysis are applicable in such situations and that studies of this kind have a number of common features. These include, e.g., specification of the time axis for analysis, how to deal with incomplete observation in the form of right-censoring and delayed entry, and how to use and interpret models including time-dependent covariates. Also, the concept of immortal time bias is relevant in all such studies. We have provided some checklists that we find useful to consider; however, it is important to emphasize that these checklists cannot be taken as `cook books' on how to conduct time-to-event analysis in observational studies. Rather, they are meant as guidelines, and we have also emphasized that the most important item to consider when planning such an analysis is to clearly specify the research question and think about to what extent the available data allow an answer to that question. We have also identified research questions for which an intensity model only provides one step towards an answer and where further analyses are needed. 
These include risk prediction for non-fatal events and causal inference. Finally, we have presented some worked examples using the methods summarized in earlier sections and going through the checklists provided. Our examples also illustrate the need to interpret the results with some caution, taking into account both the limitations of the data at hand and the underlying assumptions, which should be carefully checked, possibly triggering some additional analysis. Further details concerning these examples are collected in the Supplementary Material, which also includes information on how the analysis results were obtained using {\tt R}. Even though the paper is not short, it fails to discuss a number of aspects that are also of importance. These include most mathematical details about properties of the methods, as well as more comprehensive analyses of data with competing risks, recurrent events, and more general multi-state models \citep{cook-lawless-book2007, cook-lawless-book2018}. We have focussed on the Cox regression model throughout (and to a lesser extent the piecewise exponential/`Poisson' model) and discussion of AFT models, additive hazards models as well as random effects (`frailty') models, e.g., joint models for the event intensity and an internal time-dependent covariate, is not included \citep{hougaard-book, rizo-book}. The same holds for models for relative survival \citep{maja-etal} and how to deal with interval-censoring \citep{joly-etal}. Some of these may be topics for forthcoming papers from the STRATOS TG8 topic group. \subsection*{Acknowledgements} MA is a James McGill Professor at McGill University. His research is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) grant 228203 and the Canadian Institutes of Health Research (CIHR) grant PJT-148946. The research of MPP is supported by the Slovenian Research Agency (grant P3-0154, `Methodology for data analysis in medical sciences').
\section*{Acknowledgements} This work was supported by the Institute for Basic Science in Korea (Grant No. IBS-R009-D1) and the National Research Foundation of Korea (Grant No. NRF-2018R1A2B3003373). JWP acknowledges support from the POSCO Science Fellowship of the POSCO TJ Park Foundation. The data supporting this manuscript are available from the corresponding authors upon request. \newpage \setcounter{page}{1} \beginsupplement \section*{\centering{Supplementary Materials for \\\vspace{5mm} Kibble-Zurek universality in a strongly interacting Fermi superfluid}} \renewcommand\thesubsection{\arabic{subsection}} \setcounter{secnumdepth}{2} \setcounter{tocdepth}{2} \renewcommand{\contentsname}{Materials and Methods} \vspace{5mm}\tableofcontents\vspace{5mm} \subsection{Sample preparation} A detailed description of our experimental setup and the preparation of a strongly interacting Fermi gas of $^{6}$Li is given in Ref.~\cite{Park2018}. To create a large superfluid sample that can support a high number of vortices, we use bosonic $^{23}$Na to sympathetically cool fermionic $^{6}$Li to degeneracy. To this end, both atomic species are simultaneously loaded into a magneto-optical trap, optically pumped to their magnetically trappable stretched states, and subsequently transferred to a plugged magnetic quadrupole trap, where radio-frequency (rf) evaporation of $^{23}$Na cools the $^{6}$Li atoms to quantum degeneracy. Then, the sample is moved to a single-beam optical dipole trap (ODT) with an aspect ratio of 110:1 ($\lambda = 1064$ nm), and the remaining $^{23}$Na atoms are removed using a resonant light pulse. 
To access the broad $s$-wave Feshbach resonance between the two lowest hyperfine states of $^{6}$Li (denoted by $\lvert1\rangle = \lvert F{=}1/2, m_{F}{=}1/2 \rangle$ and $\lvert2\rangle = \lvert F{=}1/2, m_{F}{=}-1/2 \rangle$) located at 832 G, the $^{6}$Li atoms in the ODT are initially transferred to the $\lvert 1 \rangle$ state by an rf Landau-Zener sweep at 3 G, and the magnetic field is increased to 870 G, where another rf Landau-Zener sweep creates an equal mixture of $\lvert1\rangle$ and $\lvert2\rangle$. Subsequently, the magnetic field is ramped to 815 G on the BEC side of the resonance, where the sample is further cooled by reducing the ODT beam intensity to a value that corresponds to a trap depth of $U_{\rm i}=1.15~U_{\rm c}(B)$, where $U_{\rm c}(B)$ is the critical ODT depth for the onset of condensation at magnetic field $B$. Since $U_{\rm c}$ is dependent on the interaction strength, its value for a given atom number is determined at each $B$ accessed in the experiment by examining the condensate fraction as a function of the trap depth. Finally, the magnetic field is adiabatically ramped to $B$ where the thermal quench will be performed. At this stage of the experiment prior to the thermal quench, the gas is composed of approximately $2.0 {\times} 10^{6}$ $^{6}$Li atoms per spin state for all explored values of $B$. The trapping frequencies in the radial plane of the sample are $(\omega_{x}, \omega_{y}) = 2 \pi \times (20, 19)$ Hz, and the axial trapping frequency ranges between {$\omega_{z} = 2 \pi \times$ 1080 Hz at 757 G and 780 Hz at 898 G} depending on $B$. Here, the radially symmetric confinement is dominantly provided by the residual magnetic curvature of the Feshbach field, and the tight axial confinement is provided by the ODT. The variation of $\omega_{x}$ and $\omega_{y}$ for the explored range of $B$ is negligible. 
\subsection{Thermal quenching} Once the sample is prepared, the strongly interacting Fermi gas is thermally quenched across the spontaneous symmetry-breaking normal to superfluid phase transition by linearly reducing the ODT trap beam intensity. From its initial value of $U_{\rm i}=1.15~U_{\rm c}(B)$, the intensity is ramped to a final value of $U_{\rm f} = 0.15~U_{\rm c}(B)$ ($U_{\rm f} = 0.3~U_{\rm c}$ for measurements at $B = 898$ G) in variable durations ranging between 0.2 s and 2.6 s. The linear relationship between the sample temperature and the ODT beam intensity can be inferred by observing the evolution of the thermal cloud size in time-of-flight images from the experiment. Fig.~\ref{RTF} shows that the square of the thermal cloud size, which is proportional to the sample temperature, linearly decreases as the intensity is linearly reduced during the quench, and it remains constant after the quench is completed. \subsection{Detecting and counting vortices} The spontaneously created vortices are manifested as density depleted holes in time-of-flight (TOF) images of the gas. The detection sequence begins by simultaneously switching off the ODT and rapidly decreasing the Feshbach field to a value close to zero. This initiates the TOF expansion of the sample and converts the fermion pairs into tightly bound molecules~\cite{Regal2004}. The sample freely expands for 13.5 ms, and then the magnetic field is quickly ramped up to 695 G in 10 ms, where an absorption image of the gas is taken. Due to the tight axial confinement provided by the ODT, the condensate rapidly expands in the axial direction upon its release, and the radial size remains fairly constant during the TOF. For the experiments performed at 898 G on the BCS side of the resonance, the vortex visibility is reduced due to the higher thermal fraction of the sample. 
To enhance the depletion contrast and help identify the vortices, we ramp the magnetic field from 898 G to 855 G in 5 ms before releasing the gas from the trap. This additional ramp increases the condensate fraction of the sample and boosts the contrast of the depletion~\cite{Zwierlein2005}. We check that this procedure does not influence the number of observed vortices by comparing the number with and without the ramp. To extract the vortex number from TOF images, we use an automated image-processing method, similar to the one outlined in Ref.~\cite{Kwon2014}. First, for a given absorption image, the contribution to the optical depth from the non-condensed fraction of the sample is removed by fitting a Gaussian to the thermal wings and subtracting it off. Then, a copy of this image is created, and a two-dimensional Gaussian smoothing filter is applied, whose width is chosen to be comparable to the typical size of a density depleted hole. This filtered image is used to divide the unfiltered image, which is then binarized using an empirically chosen threshold value that best identifies the density depleted holes as isolated ``particles'' [see Fig.~\ref{binary}]. Using a standard particle analysis package included in most mathematical computing software (e.g., Matlab, Igor, Mathematica), the boundaries and the areas of the individual particles are identified. Fig.~\ref{count} exhibits exemplary TOF images taken from Fig.~1D of the main manuscript together with the processed images showing the boundaries of the density depleted holes, demonstrating the reliability of this procedure. For samples that are densely populated by vortices, large holes that represent multiple vortices are observed. To assign the proper number of quanta, we plot the histogram of the hole areas at each investigated interaction strength and assign a cut-off area for each quantum number based on the multiple-peak structure of the histogram [see Fig.~\ref{histo}]. 
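The binarize-and-label step of this pipeline can be sketched as follows. This is a pure-Python stand-in for the particle-analysis packages mentioned above; the Gaussian smoothing and division steps are omitted, and the toy image and threshold are made up:

```python
def count_holes(image, threshold):
    """Binarize `image` (2D list of pixel values) at `threshold` and
    return the areas of the connected below-threshold regions
    (4-connectivity), i.e. the candidate density-depleted holes."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] < threshold and not seen[r][c]:
                # Flood-fill one connected "particle" and record its area.
                stack, area = [(r, c)], 0
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] < threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return areas

# Toy binarization input: two holes, of area 2 and area 1.
img = [[1, 1, 1, 1],
       [1, 0, 1, 0],
       [1, 0, 1, 1],
       [1, 1, 1, 1]]
print(count_holes(img, 0.5))
```

In the actual analysis, the returned areas would then be histogrammed and converted to vortex numbers via the cut-off areas described below.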
The minima between two adjacent peaks are used to set the vortex number transition lines of the particles. Based on these criteria, each particle is assigned a number of quanta equal to the number of vortices it represents, and their sum is recorded as the vortex number of the image. Also, once the vortex number is determined by this procedure, every image is double-checked by eye to correct for possible misassignments. \subsection{Condensate formation and vortex decay} In the KZ mechanism, topological defects emerge in the system through the merging of domains with independent order parameters. To reliably extract the number of defects, a certain amount of hold time must be applied to the sample after the thermal quench to ensure that the condensate growth and the domain merging dynamics have been completed. However, in the presence of destructive interactions among the defects, this hold time must be kept as short as possible, such that the ensuing reduction of their numbers does not affect the observed scaling relationship between the defect density and the quench rate. It should be noted that for the case of weakly interacting BECs, investigations on the effect of the dissipative evolution of the defects on the observed KZ scaling have shown that the KZ exponent is fairly robust against the decay of the defect number~\cite{Donadello2016, Liu2018}. Nonetheless, to apply a well-defined hold time $t_{\rm h}$ to the sample before releasing it for TOF imaging, we investigate the evolution of the condensate fraction and the vortex number as a function of $t_{\rm h}$ for a number of quench times, at each interaction strength accessed in the experiment [see Fig.~\ref{decay}]. Generically, following the quench, the condensate fraction initially rises until it reaches a maximal value near $t_{\rm h} = 200 \sim 300$ ms ($50 \sim 150$ ms for measurements on the BCS side), and then it decays due to three-body losses. 
This value of $t_{\rm h}$ may represent the intrinsic time scale for the condensate growth dynamics of the system. The condensate fraction measured after this hold time is independent of the quench time for each $-1/k_{\rm F}a$, indicating that the sample is well thermalized during the quench for the investigated quench times. Also, when the hold time of each data set is normalized by its respective quench time [see insets of Fig.~\ref{decay}A-D], the evolution curves of the condensate fraction during the ODT ramp closely track each other for all the quench times apart from 200 ms, where it starts to show a weak deviation. This observation implies that the temperature of the sample is set by the ODT depth for the explored quench times during the quench. The evolution of the vortex number shows a similar trend to that of the condensate fraction. Specifically, the area of the density depleted regions initially rises as the domains merge until it reaches a maximal value, and then it decays as the loss mechanism among the defects starts to dominate. Based on these observations, we set $t_{\rm h}$ to be the time at which the defect number reaches its maximum, corresponding to $t_{\rm h} = 200$ ms for measurements performed on the BEC side of the resonance and at unitarity, and $t_{\rm h} = 50$ ms for the data taken on the BCS side due to a faster decay rate. An important observation is that the decay rate of the vortices is dependent on the initial vortex density set by the quench time $t_{\rm q}$. Specifically, it becomes enhanced at short quench times where the initial vortex number is higher. The insets in Fig.~\ref{decay}E-H show the decay constant $\gamma$ as a function of the quench time when we fit an exponential decay of the vortex number $N(t) = N_{0}e^{-\gamma t}$ to the data with hold times equal to or greater than $t_{\rm h}$. Here, $t$ is the hold time and $N_0$ is the hypothetical vortex number at $t=0$. 
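The exponential fit can be sketched as a log-linear least-squares regression. This is an illustrative choice of fitting routine (the manuscript does not specify the one actually used), applied here to synthetic data:

```python
import math

def fit_exponential_decay(times, counts):
    """Fit N(t) = N0 * exp(-gamma * t) by least squares on log N(t)."""
    ys = [math.log(n) for n in counts]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    gamma = -slope                      # decay constant
    n0 = math.exp(my - slope * mx)      # extrapolated count at t = 0
    return n0, gamma

# Synthetic decay data with gamma = 0.5 and N0 = 30 (arbitrary units).
ts = [0.0, 0.2, 0.4, 0.6, 0.8]
ns = [30.0 * math.exp(-0.5 * t) for t in ts]
print(fit_exponential_decay(ts, ns))
```

Note that regressing on $\log N$ weights the data differently from a direct nonlinear fit, which matters for noisy counts at long hold times.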
The increase in the exponential decay rate for shorter quench times reveals the presence of a beyond-one-body decay mechanism, which likely arises from the destructive interactions among vortices with opposite charges. In the experiment, this loss mechanism is associated with the departure from the KZ scaling and the saturation of the vortex number for short quench times. \subsection{Mean-field critical velocity} The mean-field Landau critical velocity of a strongly interacting Fermi superfluid in the BEC-BCS crossover is given by $\min(v_{\rm s}, v_{\rm pb})$, where $v_{\rm s}$ is the speed of sound and $v_{\rm pb}$ is the mean-field BCS pair-breaking velocity of the superfluid~\cite{Combescot2006}. The speed of sound is obtained from the quantum Monte Carlo equation of state in Ref.~\cite{Manini2005}. For our inhomogeneous superfluid sample in the harmonic potential, the critical velocity $v_{\rm c}$ at its center is calculated by assuming the local density approximation. Here, the column averaged density, instead of the central density, has to be employed in computing $v_{\rm s}$ since the superfluid is hydrodynamic in the axial direction. \newpage \begin{figure*}[p] \vspace{1cm} \centering \includegraphics[width=0.4\columnwidth]{FigS1.pdf} \caption{The evolution of the square of the thermal cloud size in the experiment for a 600 ms quench at $-1/k_{\rm F}a{=}-1.53$. Each data point comprises ten realizations of the same experiment, and the error bars represent the standard deviation. The blue line is a bilinear fit to the data, where one of the lines is kept horizontal. The vertical dotted line marks the end of the quench. } \label{RTF} \vspace{3cm} \end{figure*} \newpage \begin{figure*}[p] \vspace{1cm} \centering \includegraphics[width=7cm]{FigS2.pdf} \caption{Creating a binary image of the density depleted holes. 
({\bf A}) is an exemplary image of the optical depth of the sample after TOF, and ({\bf B}) shows the converted binary image identifying the density depleted holes.} \label{binary} \vspace{3cm} \end{figure*} \begin{figure*}[p] \vspace{1cm} \centering \includegraphics[width=1\columnwidth]{FigS3.pdf} \caption{Computer-assisted counting of the vortex number. Images from Fig.~1D before and after application of the vortex-counting procedure. Identified vortices are encompassed by red lines.} \label{count} \vspace{3cm} \end{figure*} \begin{figure*}[p] \vspace{1cm} \centering \includegraphics[width=8.5cm]{FigS4.pdf} \caption{The histogram of the identified vortex areas from 70 images at unitarity, where the quench time ranges from 200 ms to 800 ms.} \label{histo} \vspace{3cm} \end{figure*} \begin{figure*}[p] \vspace{1cm} \centering \includegraphics[width=1\columnwidth]{FigS5.pdf} \caption{The evolution of the sample during and after various quench times $t_{\rm q}=200$ ms (circle), 400 ms (inverted triangle), 600 ms (square), 1000 ms (diamond), and 2000 ms (triangle). ({\bf A})-({\bf D}) The growth and decay of the condensate fraction of the sample during and after the quench. ({\bf E})-({\bf H}) The decay of the average number of counted vortices during hold time $t_{\rm h}$ after the quench. Each data point is the average of ten realizations of the same experiment and the error bars give the standard error of the mean.} \label{decay} \vspace{3cm} \end{figure*} \end{document}
\section{Introduction} Cosmological observations of Type Ia Supernovae and the Cosmic Microwave Background suggest that the Universe has started to accelerate its expansion at the present epoch \cite{perlmutter1,riess1,perlmutter2,riess2,wmap9,planck18}. The standard explanation refers to an exotic component which has positive energy density and negative pressure, known as ``dark energy'' (DE). A variety of theoretical models has been proposed to explain this acceleration. The simplest and most natural model for DE is the $\Lambda$CDM model, containing a mixture of cosmological constant $\Lambda$ and cold dark matter (CDM), for which the equation of state parameter is $\omega=-1$ \cite{peebles1,padma,carrol}. However, this model suffers from two major problems, namely, the fine-tuning and cosmological coincidence problems \cite{carrol,weinberg,steinhardt}. In order to solve these problems, alternative DE models have been proposed, where the equation of state parameter evolves with cosmic time, mainly the canonical scalar field DE models \cite{peebles1,efstathiou,weller,bassett}, which study the cosmological evolution with minimally coupled scalar fields - see also \cite{peebles2,peebles3} for a historical review. Another possibility is to consider non-canonical scalar field models, which have attracted increasing interest \cite{planckxiv,padma1,jassal,linder1,mamon1,mamon2}. We pay special attention to tachyonic fields, which emerged in the context of string theories \cite{sen1,sen2,sen3,garousi} and have been intensively studied in cosmology \cite{padma1,gibbons,frolov,bagla,abramo}. It is possible to have accelerated expansion of the universe at late times for both choices, and we search for situations in which correspondences can be established from a modified potential function. 
The correspondence referred to above has already been investigated for defect structures, describing different scalar field theories with very similar properties \cite{andrews,bazeia1,adam1,bazeia2,adam2,bazeia3,bazeia4,bazeia5}. In a previous paper, the same idea was applied in the context of Friedmann-Robertson-Walker (FRW) cosmology \cite{bazeia6}. In the present work, we extend this view and consider some popular DE parametrizations for canonical and tachyonic scalar field models. We find that the two models present the very same acceleration parameter, with the same energy density, and we name them twinlike models. The canonical potential and the tachyonic potential are distinct, but they lead to the same cosmological evolution. Our motivation for studying these twinlike models is as follows: the medium with negative pressure capable of accelerating the expansion of the universe has two possible sources - a minimally coupled scalar field (canonical) or a non-minimally coupled scalar field (tachyonic). The basic concepts are presented in Sec. \ref{eeq}. In Sec. \ref{ntwin} we investigate the twin nature of the standard and tachyonic models. In Sec. \ref{ilust} we present some illustrations. The paper ends with a summary in Sec. \ref{sandc}. \section{Einstein equations} \label{eeq} In order to investigate this proposal, we present some basic theoretical considerations. The action for a universe with spacetime curvature $R$, filled with a scalar field $\phi$ and containing matter, is given by \begin{equation} S=\bigintssss d^4x\sqrt{-g}\left[-\frac{1}{4}R+{\cal L}(\phi,X)\right]+S_m \end{equation} where we have made $4\pi G=c=1$, $X=\frac{1}{2}\partial^{\mu}\phi\partial_{\mu}\phi$, and $S_m$ is the action of the matter. 
The metric representing a homogeneous, isotropic and spatially flat universe is the FRW metric \begin{equation} ds^2=dt^2-a^2(t)\left(dr^2+r^2d\theta^2+r^2\sin^2\theta\;d\phi^2\right) \end{equation} where $a(t)$ is the scale factor of the universe, $r$ is the radial coordinate and $d\Omega^2=d\theta^2+\sin^2\theta\;d\phi^2$ describes the angular portion of the metric. In this scenario, the Einstein equations are \begin{equation} H^2=\frac{2}{3}(\rho_{\phi}+\rho_m) \label{h2} \end{equation} \begin{equation} \dot{H}=-(\rho_{\phi}+p_{\phi}+\rho_m) \end{equation} where $\rho_{\phi}$ and $p_{\phi}$ are respectively energy density and pressure of the scalar field $\phi$, $\rho_m$ represents the energy density of the matter component of the universe, $H=\dot{a}/a$ denotes Hubble parameter, and an overdot indicates differentiation with respect to time $t$. The conservation of the scalar field and matter is represented respectively by the equations of continuity \begin{equation} \dot{\rho}_{\phi}+3H(\rho_{\phi}+p_{\phi})=0 \label{conti} \end{equation} \begin{equation} \dot{\rho}_m+3H\rho_m=0 \label{contim} \end{equation} and the cosmic acceleration parameter is given by \begin{equation} q=\frac{\ddot{a}a}{\dot{a}^2}=1+\frac{\dot{H}}{H^2} \end{equation} Rewriting the equations in terms of redshift $z=\dfrac{a_0}{a}-1$, from (\ref{conti}) and (\ref{contim}), we obtain \begin{equation} \rho_{\phi}(z)=\rho_{\phi 0}\exp\left(3\int_0^z\dfrac{1+\omega_{\phi}(z')}{1+z'}dz'\right) \label{densidade} \end{equation} \begin{equation} \rho_m(z)=\rho_{m0}(1+z)^3 \label{densidadem} \end{equation} where $\omega_{\phi}=p_{\phi}/\rho_{\phi}$ is the dark energy EoS parameter and the subscript $0$ indicates the present epoch. 
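As an explicit special case of (\ref{densidade}), worked out here as an illustration: for a constant EoS parameter, $\omega_{\phi}(z)=\omega_0$, the integral evaluates in closed form,
\[
3\int_0^z\dfrac{1+\omega_0}{1+z'}\,dz'=3\left(1+\omega_0\right)\ln(1+z),
\]
so that $\rho_{\phi}(z)=\rho_{\phi 0}(1+z)^{3(1+\omega_0)}$; this reduces to a constant density for $\omega_0=-1$ and recovers the matter-like scaling of (\ref{densidadem}) for $\omega_0=0$.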
The Friedmann equations then take the form \begin{eqnarray} H^2&=&H_0^2\bigg[\Omega_{m0}(1+z)^3+\nonumber\\ &+&\Omega_{\phi 0}\exp\left(3\int_0^z\dfrac{1+\omega_{\phi}(z')}{1+z'}dz'\right)\bigg] \end{eqnarray} \begin{eqnarray} \dot{H}&=&-\dfrac{3}{2}H_0^2\bigg[\Omega_{m0}(1+z)^3+\nonumber\\ &+&\Omega_{\phi0}\left(1+\omega_{\phi}(z)\right)\exp\left(3\int_0^z\dfrac{1+\omega_{\phi}(z')}{1+z'}dz'\right)\bigg] \end{eqnarray} where $\Omega_{m0}=\dfrac{2\rho_{m0}}{3H_0^2}$ and $\Omega_{\phi0}=\dfrac{2\rho_{\phi0}}{3H_0^2}=1-\Omega_{m0}$ are the density parameters of the matter and scalar field, respectively, at the present epoch. The acceleration parameter is also rewritten as \begin{equation} q=1-(1+z)\dfrac{d\ln H(z)}{dz} \label{parametro} \end{equation} \section{The twinlike models} \label{ntwin} \subsection{Standard case} If the scalar field (dark energy) is described by the standard dynamics, we have \begin{equation} {\cal L}=X-V(\phi) \end{equation} where $V(\phi)$ is the potential of the scalar field. In this case, energy density and pressure are given by \begin{equation} \rho_{\phi}=\frac{1}{2}\dot{\phi}^2+V(\phi) \label{repstandard1} \end{equation} \begin{equation} p_{\phi}=\frac{1}{2}\dot{\phi}^2-V(\phi) \label{repstandard2} \end{equation} and the scalar field evolves as follows \begin{equation} \ddot{\phi}+3H\dot{\phi}+V_{,\phi}=0 \end{equation} From (\ref{repstandard1}) and (\ref{repstandard2}), we express the potential, \begin{equation} V(z)=\dfrac{1}{2}\left[1-\omega_{\phi}(z)\right]\rho_{\phi}(z) \end{equation} and we write an equation for the scalar field, \begin{equation} \phi_{,z}=\dfrac{\sqrt{\left[1+\omega_{\phi}(z)\right]\rho_{\phi}(z)}}{(1+z)H} \label{dpstandard} \end{equation} both in terms of redshift $z$. \subsection{Tachyonic modified case} Let us now consider the scalar field described by tachyonic dynamics. 
We change the model as follows \begin{equation} {\cal L}=-U(\phi)\sqrt{1-2X}+f(\phi) \end{equation} where $U(\phi)$ and $f(\phi)$ are functions to be determined. Energy density and pressure are now given by \begin{equation} \rho_{\phi}=\frac{U}{\sqrt{1-\dot{\phi}^2}}-f \label{repmodified1} \end{equation} \begin{equation} p_{\phi}=-U\sqrt{1-\dot{\phi}^2}+f \label{repmodified2} \end{equation} and the scalar field obeys \begin{equation} \ddot{\phi}+\left(1-\dot{\phi}^2\right)\left(3H\dot{\phi}+\frac{U_{,\phi}}{U}\right)-\left(1-\dot{\phi}^2\right)^{3/2}\frac{f_{,\phi}}{U}=0 \end{equation} From (\ref{repmodified1}) and (\ref{repmodified2}), in terms of redshift $z$, we obtain \begin{equation} U(z)=\sqrt{\left[\rho_{\phi}(z)+f\right]\left[f-\rho_{\phi}(z)\omega_{\phi}(z)\right]} \end{equation} and \begin{equation} \phi_{,z}=\dfrac{1}{(1+z)H}\sqrt{\dfrac{\left[1+\omega_{\phi}(z)\right]\rho_{\phi}(z)}{f+\rho_{\phi}(z)}} \label{dpmodified} \end{equation} \subsection{The twin nature} In order to get to twinlike models, we need to make the appropriate choice for $f(z)$. In this case we consider \begin{equation} f(z)=1-\rho_{\phi}(z) \end{equation} So, the modified potential takes the form \begin{equation} U=\sqrt{1-\left[1+\omega_{\phi}(z)\right]\rho_{\phi}(z)} \end{equation} In both cases (standard and modified), we have the same energy density, given by (\ref{densidade}), and the same pressure. The scalar field is also the same in both cases, being \begin{equation} \phi(z)=\phi_0+\bigintss_0^z{\dfrac{\sqrt{\left[1+\omega_{\phi}(z')\right]\rho_{\phi}(z')}}{(1+z')H(z')}}dz' \label{phidz} \end{equation} The acceleration parameter also has the same form, given by (\ref{parametro}), and the Friedmann equations have the same evolution in both cases. The models are twin. However, twin models can be further differentiated! The fingerprint signature is defined by the effective speed of sound, entering a general rule for the evolution of small perturbations \cite{erickson}. 
We obtain \begin{equation} c_s^2=\frac{p_{\phi,X}}{\rho_{\phi,X}}=\frac{{\cal L}_{,X}}{{\cal L}_{,X}+2{\cal L}_{,XX}X} \end{equation} Disregarding the trivial solution $f(\phi)=U(\phi)\sqrt{1-2X}$, when ${\cal L}=0$, the speed of sound can evolve differently for the chosen model, and we can admit the solution $c_s^2<1$, which leads to the growth of inhomogeneities in the present cosmic acceleration - see \cite{bean,blomqvist} for a further discussion. In this sense, the growth of inhomogeneities can occur differently for twin models, but with no change in the evolution of the density or the acceleration parameter of the Universe as a whole. Explicitly, twin models describe the same cosmic expansion but may be distinguished through local changes in the growth of inhomogeneities. In the next section we show how to build twinlike models for popular dark energy parametrizations. It is important to emphasize that the results presented are valid for the current acceleration regime. In the context of the primordial universe, it is also possible to obtain twinlike models, see \cite{adam3}, where the slow-roll inflation, evolving under different potentials, leads to a very similar inflationary phase. \section{Illustrations} \label{ilust} \subsection{Cosmological constant} As a first example we take $\omega_{\phi}(z)=\omega_0$, a cosmological constant, in the range $-1<\omega_0<-\dfrac{1}{3}$. 
In this situation, the energy density of the scalar field is written as \begin{equation} \rho_{\phi}(z)=\dfrac{3}{2}H_0^2(1-\Omega_{m0})\left(1+z\right)^{3\left(1+\omega_0\right)} \label{rhodz1} \end{equation} The potentials of the standard and modified cases are then, respectively, \begin{equation} V(z)=\dfrac{3}{4}H_0^2(1-\Omega_{m0})\left(1-\omega_0\right)\left(1+z\right)^{3\left(1+\omega_0\right)} \label{v1} \end{equation} \begin{equation} U(z)=\dfrac{1}{2}\sqrt{4-6H_0^2(1-\Omega_{m0})\left(1+\omega_0\right)\left(1+z\right)^{3\left(1+\omega_0\right)}} \label{u1} \end{equation} The evolution of the Hubble parameter with the redshift is given by the Friedmann equation \begin{equation} H^2(z)=H_0^2\left[\Omega_{m0}(1+z)^3+(1-\Omega_{m0})(1+z)^{3(1+\omega_0)}\right] \end{equation} and the acceleration parameter is \begin{equation} q(z)=-\dfrac{1}{2}-\dfrac{3}{2}\left[\dfrac{\omega_0}{1+\alpha(1+z)^{-3\omega_0}}\right] \end{equation} where $\alpha=\Omega_{m0}/(1-\Omega_{m0})$. The density parameters of the matter and scalar field are, respectively, \begin{equation} \Omega_m(z)=\dfrac{1}{1+\frac{1}{\alpha}(1+z)^{3\omega_0}} \end{equation} \begin{equation} \Omega_{\phi}(z)=\dfrac{1}{1+\alpha(1+z)^{-3\omega_0}} \end{equation} With the help of Eq. (\ref{rhodz1}), we can solve Eq. (\ref{phidz}) for the scalar field analytically. The result is \begin{eqnarray} \phi(z)&=&\phi_0+\dfrac{\sqrt{6(1+\omega_0)}}{3\omega_0}\bigg[\tanh^{-1}\left(\sqrt{1+\alpha(1+z)^{-3\omega_0}}\right)-\nonumber\\ &-&\tanh^{-1}\left(\sqrt{1+\alpha}\right)\bigg] \end{eqnarray} Figure \ref{fconstant}(a) shows the plot of $\phi(z)$. Equations (\ref{v1}) and (\ref{u1}) express $V$ and $U$ as functions of $z$; inverting them to write $V$ and $U$ in closed form as functions of $\phi$ is not feasible, however. Figure \ref{fconstant}(b) shows the plot of $V(\phi)$ and $U(\phi)$ obtained from numerical results. 
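The closed-form expression for $\phi(z)$ above, as well as the behaviour of $q(z)$ and of the density parameters, can be cross-checked numerically. The following sketch (our illustration, not part of the original analysis; parameter values as in Figure \ref{fconstant}) integrates Eq.~(\ref{phidz}) by Simpson's rule and compares it with the analytic result, using the real branch $\tanh^{-1}(x)=\frac{1}{2}\ln\frac{x+1}{x-1}$ appropriate for arguments $x>1$:

```python
import math

# Parameter values matching Figure 1 of the text (our choice)
H0, w0, Om0 = 1.0, -0.95, 0.27
alpha = Om0 / (1.0 - Om0)

def hubble(z):
    # Friedmann equation for constant equation-of-state parameter w0
    return H0 * math.sqrt(Om0 * (1 + z)**3 + (1 - Om0) * (1 + z)**(3 * (1 + w0)))

def rho_phi(z):
    # Eq. (rhodz1)
    return 1.5 * H0**2 * (1 - Om0) * (1 + z)**(3 * (1 + w0))

def dphi_dz(z):
    # Integrand of Eq. (phidz)
    return math.sqrt((1 + w0) * rho_phi(z)) / ((1 + z) * hubble(z))

def phi_numeric(z, n=2000):
    # Composite Simpson's rule on [0, z]; n must be even
    h = z / n
    s = dphi_dz(0.0) + dphi_dz(z)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * dphi_dz(k * h)
    return s * h / 3.0

def atanh_gt1(x):
    # Real part of arctanh(x) for x > 1
    return 0.5 * math.log((x + 1.0) / (x - 1.0))

def phi_exact(z):
    # Closed-form phi(z) - phi_0 quoted in the text
    pref = math.sqrt(6.0 * (1 + w0)) / (3.0 * w0)
    return pref * (atanh_gt1(math.sqrt(1 + alpha * (1 + z)**(-3 * w0)))
                   - atanh_gt1(math.sqrt(1 + alpha)))

def Omega_phi(z):
    return 1.0 / (1.0 + alpha * (1 + z)**(-3 * w0))

def q(z):
    # Acceleration parameter: positive when the expansion accelerates
    return -0.5 - 1.5 * w0 * Omega_phi(z)

def bisect(f, lo, hi, tol=1e-10):
    # Simple bisection; assumes f changes sign on [lo, hi]
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (flo > 0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

z_acc = bisect(q, 0.0, 2.0)                            # onset of acceleration
z_eq = bisect(lambda z: Omega_phi(z) - 0.5, 0.0, 2.0)  # Omega_phi = Omega_m
```

For these parameter values the numerical and analytic $\phi(z)$ agree to high accuracy, the transition to acceleration occurs at $z_{\rm acc}\approx 0.76$, and $\Omega_{\phi}=\Omega_m$ at $z\approx 0.42$, consistent with the plots of Figure \ref{fconstant}.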
The $V$ and $U$ curves are clearly distinct, but the twin nature is shown in the graph of the acceleration parameter $q$, which is the same for both models. The plot of $q(z)$ in Figure \ref{fconstant}(c) shows the transition from a decelerating to an accelerating regime as $z$ decreases. The evolutions of $\Omega_{\phi}$ and $\Omega_m$ are shown in Figure \ref{fconstant}(d). Note that $\Omega_{\phi}$ starts dominating over $\Omega_m$ at around $z\sim0.4$. \begin{figure*}[htb!] \centering \subfigure[]{\includegraphics[scale=0.39]{1field.jpg}}\;\subfigure[]{\includegraphics[scale=0.39]{1pot.jpg}}\;\subfigure[]{\includegraphics[scale=0.39]{1q.jpg}}\;\subfigure[]{\includegraphics[scale=0.39]{1w.jpg}} \caption{(a) Plot of $\phi$ as a function of $z$. (b) Plot of $V$ (solid curve) and $U$ (dashed curve) as a function of $\phi$. (c) Plot of $q$ as a function of $z$. (d) Plot of $\Omega_{\phi}$ (solid curve) and $\Omega_m$ (dashed curve) as a function of $z$. This is for cosmological constant, with $\phi_0=0$, $H_0=1$, $\omega_0=-0.95$ and $\Omega_{m0}=0.27$.} \label{fconstant} \end{figure*} \subsection{Linear parametrization} As a second example we now consider $\omega_{\phi}(z)=\omega_0+\omega_1z$ \cite{cooray1,weller1}. 
In this case, the energy density of the scalar field takes the form \begin{equation} \rho_{\phi}(z)=\dfrac{3}{2}H_0^2(1-\Omega_{m0})\left(1+z\right)^{3\left(1+\omega_0-\omega_1\right)}\exp(3\omega_1z) \end{equation} The potentials of the standard and modified cases are, respectively, \begin{eqnarray} V(z)&=&\dfrac{3}{4}H_0^2(1-\Omega_{m0})\big(1-\omega_0-\nonumber\\ &-&\omega_1z\big)\left(1+z\right)^{3(1+\omega_0-\omega_1)}\exp\left(3\omega_1z\right) \label{v2} \end{eqnarray} \begin{eqnarray} U(z)&=&\dfrac{1}{2}\Big[4-6H_0^2(1-\Omega_{m0})\big(1+\omega_0+\nonumber\\ &+&\omega_1z\big)\left(1+z\right)^{3(1+\omega_0-\omega_1)}\exp\left(3\omega_1z\right)\Big]^{1/2} \label{u2} \end{eqnarray} The Friedmann equation is given by \begin{eqnarray} H^2(z)&=&H_0^2\Big[\Omega_{m0}(1+z)^3+\nonumber\\ &+&(1-\Omega_{m0})(1+z)^{3(1+\omega_0-\omega_1)}\exp\left(3\omega_1z\right)\Big] \end{eqnarray} The acceleration parameter is \begin{equation} q(z)=-\dfrac{1}{2}-\dfrac{3}{2}\left[\dfrac{\omega_0+\omega_1z}{1+\alpha(1+z)^{-3(\omega_0-\omega_1)}\exp(-3\omega_1z)}\right] \end{equation} The density parameters of the matter and scalar field are, respectively, \begin{equation} \Omega_m(z)=\dfrac{1}{1+\frac{1}{\alpha}(1+z)^{3(\omega_0-\omega_1)}\exp(3\omega_1z)} \end{equation} \begin{equation} \Omega_{\phi}(z)=\dfrac{1}{1+\alpha(1+z)^{-3(\omega_0-\omega_1)}\exp(-3\omega_1z)} \end{equation} Figure \ref{flinear} shows the plots of (a) $\phi(z)$, (b) $V(\phi)$ and $U(\phi)$ from numerical results. The plot of $q(z)$ in Figure \ref{flinear}(c) also shows the transition from a decelerating to an accelerating regime as $z$ decreases. The evolutions of $\Omega_{\phi}$ and $\Omega_m$ are shown in Figure \ref{flinear}(d), and $\Omega_{\phi}$ starts dominating over $\Omega_m$ at around $z\sim0.4$. \begin{figure*}[htb!] 
\centering \subfigure[]{\includegraphics[scale=0.39]{2field.jpg}}\;\subfigure[]{\includegraphics[scale=0.39]{2pot.jpg}}\;\subfigure[]{\includegraphics[scale=0.39]{2q.jpg}}\;\subfigure[]{\includegraphics[scale=0.39]{2w.jpg}} \caption{(a) Plot of $\phi$ as a function of $z$. (b) Plot of $V$ (solid curve) and $U$ (dashed curve) as a function of $\phi$. (c) Plot of $q$ as a function of $z$. (d) Plot of $\Omega_{\phi}$ (solid curve) and $\Omega_m$ (dashed curve) as a function of $z$. This is for linear parametrization, with $\phi_0=0$, $H_0=1$, $\omega_0=-1$, $\omega_1=0.1$ and $\Omega_{m0}=0.27$.} \label{flinear} \end{figure*} \subsection{Chevallier-Polarski-Linder (CPL) parametrization} The CPL parametrization \cite{chevallier, linder2, jing, scherrer} is characterized by \begin{equation} \omega_{\phi}(z)=\omega_0+\omega_1\left(\dfrac{z}{1+z}\right) \end{equation} The energy density of the scalar field is \begin{equation} \rho_{\phi}(z)=\dfrac{3}{2}H_0^2(1-\Omega_{m0})\left(1+z\right)^{3\left(1+\omega_0+\omega_1\right)}\exp\left(-\frac{3\omega_1z}{1+z}\right) \end{equation} The potentials of the standard and modified cases are, respectively, \begin{eqnarray} V(z)&=&\dfrac{3}{4}H_0^2(1-\Omega_{m0})\bigg(1-\omega_0-\nonumber\\ &-&\frac{\omega_1z}{1+z}\bigg)\left(1+z\right)^{3(1+\omega_0+\omega_1)}\exp\left(-\frac{3\omega_1z}{1+z}\right) \label{v3} \end{eqnarray} \begin{eqnarray} U(z)&=&\dfrac{1}{2}\bigg[4-6H_0^2(1-\Omega_{m0})\bigg(1+\omega_0+\nonumber\\ &+&\frac{\omega_1z}{1+z}\bigg)\left(1+z\right)^{3(1+\omega_0+\omega_1)}\exp\left(-\frac{3\omega_1z}{1+z}\right)\bigg]^{1/2} \label{u3} \end{eqnarray} The Friedmann equation is \begin{eqnarray} H^2(z)&=&H_0^2\bigg[\Omega_{m0}(1+z)^3+\nonumber\\ &+&(1-\Omega_{m0})(1+z)^{3(1+\omega_0+\omega_1)}\exp\left(-\frac{3\omega_1z}{1+z}\right)\bigg] \end{eqnarray} The acceleration parameter is given by \begin{equation} 
q(z)=-\dfrac{1}{2}-\dfrac{3}{2}\left[\dfrac{\omega_0+\omega_1\left(\frac{z}{1+z}\right)}{1+\alpha(1+z)^{-3(\omega_0+\omega_1)}\exp\left(\frac{3\omega_1z}{1+z}\right)}\right] \end{equation} The density parameters of the matter and scalar field are, respectively, \begin{equation} \Omega_m(z)=\dfrac{1}{1+\frac{1}{\alpha}(1+z)^{3(\omega_0+\omega_1)}\exp\left(-\frac{3\omega_1z}{1+z}\right)} \end{equation} \begin{equation} \Omega_{\phi}(z)=\dfrac{1}{1+\alpha(1+z)^{-3(\omega_0+\omega_1)}\exp\left(\frac{3\omega_1z}{1+z}\right)} \end{equation} Figure \ref{fcpl} shows the plots of (a) $\phi(z)$, (b) $V(\phi)$ and $U(\phi)$, (c) $q(z)$, (d) $\Omega_m$ and $\Omega_{\phi}$. Once again, the distinction between the models is evident in the graphs of $V$ and $U$, while the twin nature of the models ensures that the curve of the acceleration parameter $q$ is the same in both cases. \begin{figure*}[htb!] \centering \subfigure[]{\includegraphics[scale=0.39]{3field.jpg}}\;\subfigure[]{\includegraphics[scale=0.39]{3pot.jpg}}\;\subfigure[]{\includegraphics[scale=0.39]{3q.jpg}}\;\subfigure[]{\includegraphics[scale=0.39]{3w.jpg}} \caption{(a) Plot of $\phi$ as a function of $z$. (b) Plot of $V$ (solid curve) and $U$ (dashed curve) as a function of $\phi$. (c) Plot of $q$ as a function of $z$. (d) Plot of $\Omega_{\phi}$ (solid curve) and $\Omega_m$ (dashed curve) as a function of $z$. 
This is for CPL parametrization, with $\phi_0=0$, $H_0=1$, $\omega_0=-1$, $\omega_1=0.1$ and $\Omega_{m0}=0.27$.} \label{fcpl} \end{figure*} \subsection{Barboza-Alcaniz (BA) parametrization} The last example is the BA parametrization, proposed by Barboza and Alcaniz \cite{barboza,magana,yang}, which is represented by \begin{equation} \omega_{\phi}(z)=\omega_0+\omega_1\dfrac{z(1+z)}{1+z^2} \end{equation} In this case, the energy density of the scalar field is \begin{equation} \rho_{\phi}(z)=\dfrac{3}{2}H_0^2(1-\Omega_{m0})\left(1+z\right)^{3\left(1+\omega_0\right)}\left(1+z^2\right)^{\frac{3\omega_1}{2}} \end{equation} The potentials of the standard and modified cases are, respectively, \begin{eqnarray} V(z)&=&\dfrac{3}{4}H_0^2(1-\Omega_{m0})\bigg(1-\omega_0-\nonumber\\ &-&\omega_1\frac{z(1+z)}{1+z^2}\bigg)\left(1+z\right)^{3(1+\omega_0)}\left(1+z^2\right)^{\frac{3\omega_1}{2}} \label{v4} \end{eqnarray} \begin{eqnarray} U(z)&=&\dfrac{1}{2}\bigg[4-6H_0^2(1-\Omega_{m0})\bigg(1+\omega_0+\nonumber\\ &+&\omega_1\frac{z(1+z)}{1+z^2}\bigg)\left(1+z\right)^{3(1+\omega_0)}\left(1+z^2\right)^{\frac{3\omega_1}{2}}\bigg]^{1/2} \label{u4} \end{eqnarray} The Hubble parameter evolves as follows \begin{eqnarray} H^2(z)&=&H_0^2\bigg[\Omega_{m0}(1+z)^3+\nonumber\\ &+&(1-\Omega_{m0})(1+z)^{3(1+\omega_0)}\left(1+z^2\right)^{\frac{3\omega_1}{2}}\bigg] \end{eqnarray} The acceleration parameter is then \begin{equation} q(z)=-\dfrac{1}{2}-\dfrac{3}{2}\left[\dfrac{\omega_0+\omega_1\frac{z(1+z)}{1+z^2}}{1+\alpha(1+z)^{-3\omega_0}\left(1+z^2\right)^{-\frac{3\omega_1}{2}}}\right] \end{equation} The density parameters of the matter and scalar field are, respectively, \begin{equation} \Omega_m(z)=\dfrac{1}{1+\frac{1}{\alpha}(1+z)^{3\omega_0}\left(1+z^2\right)^{\frac{3\omega_1}{2}}} \end{equation} \begin{equation} \Omega_{\phi}(z)=\dfrac{1}{1+\alpha(1+z)^{-3\omega_0}\left(1+z^2\right)^{-\frac{3\omega_1}{2}}} \end{equation} Figure \ref{fba} shows the plots of (a) $\phi(z)$, (b) $V(\phi)$ and $U(\phi)$, 
(c) $q(z)$, (d) $\Omega_m$ and $\Omega_{\phi}$. A discussion similar to the previous ones applies to these graphs. The transition between $\Omega_m$ and $\Omega_{\phi}$ occurs again around $z\sim0.4$. \begin{figure*}[htb!] \centering \subfigure[]{\includegraphics[scale=0.39]{4field.jpg}}\;\subfigure[]{\includegraphics[scale=0.39]{4pot.jpg}}\;\subfigure[]{\includegraphics[scale=0.39]{4q.jpg}}\;\subfigure[]{\includegraphics[scale=0.39]{4w.jpg}} \caption{(a) Plot of $\phi$ as a function of $z$. (b) Plot of $V$ (solid curve) and $U$ (dashed curve) as a function of $\phi$. (c) Plot of $q$ as a function of $z$. (d) Plot of $\Omega_{\phi}$ (solid curve) and $\Omega_m$ (dashed curve) as a function of $z$. This is for BA parametrization, with $\phi_0=0$, $H_0=1$, $\omega_0=-0.95$, $\omega_1=0.1$ and $\Omega_{m0}=0.27$.} \label{fba} \end{figure*} \section{Summary and conclusions} \label{sandc} In this work we studied twinlike models in FRW cosmology driven by a single real scalar field in flat spacetime, based on some common parametrizations of the equation-of-state parameter $\omega(z)$. We showed that, regardless of the choice of $\omega(z)$, it is always possible to have models driven by standard and tachyonic dynamics with the same acceleration parameter, the same energy density and the same pressure. \subsection*{Acknowledgments} \label{ack} We would like to thank CAPES for financial support and D. Bazeia for the fruitful discussions and suggestions.
\section{Introduction} Our discussion stems from multi-marginal optimal transport theory: Let $(X_1,\mu_1),\ldots,(X_N,\mu_N)$ be Borel probability spaces. We set $X=X_1\times\cdots\times X_N$ and we denote by $\Pi(X)$ the set of all Borel probability measures $\pi$ on $X$ such that the {\em marginals} of $\pi$ are the $\mu_i$'s. Let $c:X\to\RR$ be a cost function. A cornerstone of multi-marginal optimal transport theory is Kellerer's~\cite{Kel} generalization of the Kantorovich duality theorem to the multi-marginal case. Kellerer's duality theorem asserts that, in a suitable framework, \begin{equation}\label{Kellerer} \min_{\pi\in\Pi(X)} \int_X c(x)d\pi(x)=\max_{\begin{array}{c} u_i\in L_1({\mu_i}),\\ \sum_{1\leq i\leq N}u_i\ \leq\ c \end{array}} \ \ \sum_{1\leq i\leq N}\int_{X_i}u_i(x_i)d\mu_i(x_i). \end{equation} It follows that if $\pi$ is a solution of the left-hand side of~\eqref{Kellerer} and $(u_1,\ldots,u_N)$ is a solution of the right-hand side of~\eqref{Kellerer}, then $\pi$ is concentrated on the subset $\Gamma$ of $X$ where the equality $c=\sum_{1\leq i\leq N}u_i$ holds. In recent publications (see, for example,~\cite{BBW2, Gri, KP}) such subsets $\Gamma$ of $X$ are referred to as $c$-splitting sets: Let $N\geq 2$ be a natural number and $I=\{1,\ldots,N\}$ an index set. Let $X_1,\ldots,X_N$ be nonempty sets, $X=X_1\times\cdots\times X_N$ and $c:X\to\RR$ a function. \begin{definition}[$c$-splitting set] \label{splitting def} Let $\Gamma\subseteq X$. 
We say that $\Gamma$ is a $c$-splitting set if for each $i\in I$ there exists a function $u_i:X_i\to\RX$ such that \begin{equation}\label{splitting functions definition inequality} \forall x=(x_1,\ldots,x_N)\in X,\qquad c(x_1,\ldots,x_N)\leq \left(\bigoplus_{i\in I}u_i\right)(x):=\sum_{i\in I} u_i(x_i) \end{equation} and \begin{equation}\label{splitting functions definition equality} \forall x=(x_1,\ldots,x_N)\in \Gamma,\ \ \qquad c(x_1,\ldots,x_N)=\left(\bigoplus_{i\in I} u_i\right)(x):=\sum_{i\in I} u_i(x_i). \end{equation} In this case we say that $(u_1,\ldots,u_N)$ is a $c$-splitting tuple of $\Gamma$. Given functions $u_i:X_i\to\RX$ that satisfy~\eqref{splitting functions definition inequality}, we call the set of all points $(x_1,\ldots,x_N)\in X$ that satisfy \eqref{splitting functions definition equality} the $c$-splitting set generated by the tuple $(u_1,\ldots,u_N)$. \end{definition} In the case $N=2$, splitting sets arise naturally in convex analysis as graphs of subdifferentials. Indeed, by the Young-Fenchel inequality the graph of the subdifferential $\partial f$ is the $c$-splitting set generated by the pair $(f,f^*)$, where $c=\scal{\cdot}{\cdot}$ is the classical pairing between a linear space and its dual. As in the two-marginal case, monotonicity arises naturally in the multi-marginal setting: \begin{definition} [$c$-cyclic monotonicity] \label{monotonicity definitions} The subset $\Gamma$ of $X$ is said to be $c$-cyclically monotone of order $n$, $n$-$c$-monotone for short, if for all $n$ tuples $(x^1_1,\dots,x_N^1),\dots,(x_1^n,\dots,x_N^n)$ in $\Gamma$ and every $N$ permutations $\sigma_1,\dots,\sigma_N$ in $S_n$, \begin{equation}\label{cycmondef} \sum_{j=1}^nc(x_1^{\sigma_1(j)},\dots,x_N^{\sigma_N(j)})\leq \sum_{j=1}^n c(x_1^{j},\dots,x_N^{j}); \end{equation} $\Gamma$ is said to be $c$-cyclically monotone if it is $n$-$c$-monotone for every $n\in\{2,3,\dots\}$; and $\Gamma$ is said to be $c$-monotone if it is $2$-$c$-monotone. 
Finally, $\Gamma$ is said to be maximally $n$-$c$-monotone if it has no proper $n$-$c$-monotone extension. \end{definition} Cyclic monotonicity was first introduced by Rockafellar~\cite{Rockafellar} in the framework of classical convex analysis. During the late 1980s and early 1990s (see \cite{Bre, Rochet, Rus}) the concept was generalized to $c$-cyclic monotonicity so as to apply to more general cost functions $c$ in the framework of two-marginal optimal transport theory. Currently, it lies at the foundations of the theory (see for example \cite{GanMc, San, Vil}) and also plays a role in recent refinements (see, for example, \cite{BR1, BR2}). Extending the role it plays in two-marginal optimal transport theory, over the past two and a half decades multi-marginal $c$-monotonicity and aspects of $c$-convex analysis have become an integral part of the fast-evolving theory of multi-marginal optimal transport, as can be seen, for example, in~\cite{AC, BBW2, BG, Car, CN, GS, GhMa, GhMo, Gri, KP, KS, MPC, MGN, Pas1, Pas2, RU}. An important instance of an extension from the two-marginal case relating Definition~\ref{splitting def} with Definition~\ref{monotonicity definitions} is the known fact that $c$-splitting sets are $c$-cyclically monotone (see, for example, \cite{BBW2, Gri, KP, KS}). Before turning to our convex analytic discussion, we remark that in order to make optimal transport compatible with our discussion, one should exchange min for max in the left-hand side of~\eqref{Kellerer}, exchange max for min in the right-hand side of~\eqref{Kellerer} and, finally, exchange the constraint $\sum_i u_i\leq c$ in the right-hand side of~\eqref{Kellerer} with the constraint $c\leq \sum_i u_i$ as we did in Definition~\ref{splitting def} and Definition~\ref{monotonicity definitions}. 
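For small finite configurations, $n$-$c$-monotonicity in the sense of Definition~\ref{monotonicity definitions} can be verified by brute force over all tuples of points and all choices of permutations. The following sketch (our illustration, with $H=\mathbb{R}$ and the pairwise-product cost $c(x_1,\ldots,x_N)=\sum_{i<j}x_ix_j$; the sample sets are hypothetical) does exactly this:

```python
import itertools

def c(x):
    # Pairwise-product cost on H = R: c(x_1,...,x_N) = sum_{i<j} x_i * x_j
    return sum(x[i] * x[j] for i in range(len(x)) for j in range(i + 1, len(x)))

def is_n_c_monotone(Gamma, n):
    # Brute-force check of the defining inequality: for every n-tuple of points
    # of Gamma and every choice of N permutations sigma_1,...,sigma_N in S_n,
    # the shuffled total cost must not exceed the original one.
    N = len(Gamma[0])
    perms = list(itertools.permutations(range(n)))
    for pts in itertools.product(Gamma, repeat=n):
        base = sum(c(p) for p in pts)
        for sigmas in itertools.product(perms, repeat=N):
            shuffled = sum(
                c(tuple(pts[sigmas[i][j]][i] for i in range(N)))
                for j in range(n)
            )
            if shuffled > base + 1e-12:
                return False
    return True

# Graph of the monotone map x -> x^3 (a subdifferential, hence c-cyclically monotone)
G_mono = [(x, x**3) for x in (-1.0, -0.5, 0.0, 0.5, 1.0)]
# A decreasing relation: not even 2-c-monotone
G_bad = [(0.0, 1.0), (1.0, 0.0)]
# A three-marginal example: a sample of the diagonal of R^3
G_diag = [(t, t, t) for t in (-1.0, 0.0, 1.0, 2.0)]
```

The exhaustive search is exponential in $n$ and $N$ and is only intended for tiny sets; for $N=2$ the two permutations can be reduced to a single one, and the check reduces to classical cyclic monotonicity.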
In the framework of multi-marginal optimal transport, presumably the most traditional and well studied cost functions are classical extensions of the pairing between a linear space and its dual: \begin{quote} \emph{For the remainder of our discussion, for each $1\leq i\leq N$, we assume that $X_i=H$ is a real Hilbert space with inner product $\scal{\cdot}{\cdot}$ and induced norm $\|\cdot \|$. We let $c:X\to\RR$ be the cost function defined by} \begin{equation*}\label{classical c} c(x_1,\dots,x_N)=\sum_{1\leq i<j\leq N}\scal{x_i}{x_j}. \end{equation*} \end{quote} It follows from straightforward computation (see for example~\cite{BBW2}) that a set $\Gamma\subseteq X$ is $n$-$c$-monotone if and only if it is $n$-$c$-monotone with respect to each of the functions \begin{equation*} (x_1,\dots,x_N)\mapsto\ -\sum_{1\leq i<j\leq N}\tfrac{1}{2}\|x_i-x_j\|^2\ \ \ \ \ \ \ \ \ \ \ \ \text{and}\ \ \ \ \ \ \ \ \ \ \ \ \ \ (x_1,\dots,x_N)\mapsto\ \tfrac{1}{2}\bigg\|\sum_{i=1}^N x_i\bigg\|^2. \end{equation*} Although classical convex analysis and monotonicity are instrumental in multi-marginal optimal transport, and although several multi-marginal convex analytic results are already available (as we recall in our more specific discussion further below), to the best of our knowledge, a comprehensive multi-marginal monotonicity and convex analysis theory is still lacking. To this end, in the present paper we lay additional foundations and provide several extensions of classical monotone operator theory and convex analysis into the multi-marginal settings. The remainder of the paper is organized as follows. In Section~\ref{s:multi-mar-Minty} we provide a characterization of multi-marginal $c$-monotonicity in terms of classical monotonicity. 
We employ this characterization in order to provide several equivalent criteria, including a Minty-type criterion, a criterion based on the partition of the identity into a sum of firmly nonexpansive mappings, and other criteria for multi-marginal maximal $c$-monotonicity. In Section~\ref{s:MMMM_via_cont} we provide a continuity criterion for multi-marginal maximal monotonicity. In Section~\ref{s:c-split_tuples} we focus on multi-marginal convex analysis. In particular, we extend Moreau's decompositions and provide criteria for maximal $c$-monotonicity of $c$-splitting sets, the multi-marginal extensions of subdifferentials. We show that the same criteria also imply multi-marginal $c$-conjugacy of $c$-splitting functions. In the case $N=3$ we also provide a class of $c$-splitting triples for which $c$-conjugacy implies maximal $c$-monotonicity. Section~\ref{s:ex} contains examples and applications of our results to the problem of determining maximal $c$-monotonicity of sets and $c$-conjugacy of $c$-splitting tuples, thus reducing the need for further challenging computations of multi-marginal $c$-conjugate tuples. Additionally, we point out several open problems. In the remainder of this section we collect standard notations and preliminary facts from classical monotone operator theory and convex analysis which largely follow \cite{BC2017}. Let $A:H \To H$ be a set-valued mapping. The {\em domain} of $A$ is the set $\dom A=\{x\in H \ST Ax\neq\varnothing\}$. The \emph{range} of $A$ is the set $\ran A = A(H)=\bigcup_{x \in H} Ax$, the \emph{graph} of $A$ is the set $\gra A = \{(x,u)\in H \times H \ST u \in Ax\}$ and the {\em inverse mapping} of $A$ is the mapping $A^{-1}$ satisfying $x\in A^{-1}u\Leftrightarrow u\in Ax$. $A$ is said to be \emph{monotone} if $$ (\forall (x,u) \in \gra A)(\forall (y,v) \in \gra A)\quad \scal{x-y}{u-v} \geq 0. 
$$ $A$ is said to be \emph{maximally monotone} if there exists no monotone operator $B$ such that $\gra A$ is a proper subset of $\gra B$. The \emph{resolvent} of $A$ is the mapping $J_A=(A+\Id)^{-1}$ where $\Id$ is the {\em identity mapping}. The mapping $T:\dom T\subseteq H\to H$ is said to be \emph{firmly nonexpansive} if $$ (\forall x\in \dom T)(\forall y\in \dom T) \quad \|Tx-Ty\|^2 + \|(\Id-T)x-(\Id-T)y\|^2 \leq \|x-y\|^2, $$ where $\dom T\subseteq H$. The function $f:H\to\RX$ is said to be \emph{proper} if $\dom f:=\{x\in H \ST f(x)<\infty\}\neq\varnothing$. The \emph{Fenchel conjugate} of the function $f$ is the function $f^*$ defined by \begin{equation} f^*(u)=\sup_{x\in H}\big(\scal{u}{x}-f(x)\big). \end{equation} We set $q(\cdot)=\frac{1}{2}\|\cdot\|^2$. The {\em Moreau envelope} of $f$ is the function defined by the {\em infimal convolution} \begin{equation} e_f(s)=(f\square q)(s)=\inf_{x\in H}\big(f(x)+q(s-x)\big). \end{equation} The {\em subdifferential} of the proper function $f$ is the mapping $\partial f:H\rightrightarrows H$ defined by $$ \partial f(x)=\big\{u\in H\ \big|\ f(x)+\scal{u}{y-x}\leq f(y),\ \ \forall y\in H\big\}. $$ The \emph{indicator function} of a subset $C$ of $H$ is the function $\iota_C:H\to\RX$ which vanishes on $C$ and equals $+\infty$ on $H\smallsetminus C$. \begin{fact}[Minty's Theorem {\rm\cite[Theorem~21.1]{BC2017}}] Let $A: H \rightrightarrows H$ be monotone. Then $A$ is maximally monotone if and only if $\ran (\Id + A) = H$. \end{fact} \begin{fact} {\rm (\cite[Proposition~23.8]{BC2017})}\label{f:firm_vs_mono} Let $A:H \rightrightarrows H$. Then \begin{enumerate} \item\label{f:firm_vs_mono-i} $J_A$ is firmly nonexpansive if and only if $A$ is monotone; \item\label{f:firm_vs_mono-ii} $J_A$ is firmly nonexpansive and $\dom J_A=H$ if and only if $A$ is maximally monotone. \end{enumerate} \end{fact} Let $f$ be a proper lower semicontinuous convex function. 
The proximity operator \cite[Definition~12.23]{BC2017} of $f$ is defined by \begin{equation} \prox_f:H\to H: x\mapsto \prox_f x=\argmin_{y\in H}\big(f(y)+q(y-x)\big). \end{equation} For all $s\in H$, \cite[Proposition~12.15]{BC2017} implies that there is a unique minimizer of $f(\cdot)+q(s-\cdot)$ over all $x\in H$; thus, the proximity operator of $f$ is well defined. Furthermore, we also have $\prox_f=J_{\partial f}$. Additional properties of the Moreau envelope are: \begin{fact}[Moreau envelope] Let $f$ be a proper lower semicontinuous convex function. The following assertions hold: \begin{enumerate} \item (Moreau decomposition) $e_{f}+e_{f^*}=q$. \item $x=\prox_f s$\quad $\iff$\quad $e_{f}(s)=f(x)+q(s-x)$. \item {\rm(\cite[Proposition~12.30]{BC2017})} $e_f$ is Fr\'echet differentiable with $\nabla e_f=\Id-\prox_f$. \end{enumerate} \end{fact} Finally, we set the marginal projections $P_i\colon X\to X_i\colon (x_1,\ldots,x_N)\mapsto x_i$ for $i$ in $\{1,\ldots, N\}$ and the two-marginal projections $P_{i,j}\colon X\to X_i\times X_j\colon (x_1,\ldots,x_N)\mapsto (x_i,x_j)$ for $i<j$ in $\{1,\ldots,N\}$. Given a subset $\Gamma$ of $X$, we set \begin{equation}\label{gammai} \Gamma_i= P_i(\Gamma) \qquad\text{and}\qquad \Gamma_{i,j}=P_{i,j}(\Gamma) \end{equation} We also define $A_{i,j}:X_i\rightrightarrows X_j$ via $$ \gra A_{i,j}=\Gamma_{i,j}. $$ The notation $A_i$ is reserved for a different purpose and introduced in Section~\ref{s:multi-mar-Minty}. \section{A characterization of multi-marginal $c$-monotonicity and Minty type criteria for $c$-monotonicity}\label{s:multi-mar-Minty} Let $\sumop :H\times H\to H$ be the mapping defined by $\sumop(x,y)=x+y$. For any mapping $A:H\rightrightarrows H$, we have the identity \cite[Lemma~12.14]{Rock-Wets} \begin{equation} J_{A^{-1}}=\Id-J_A. 
\end{equation} If, in addition, $A$ is monotone, then by Fact~\ref{f:firm_vs_mono}, $J_A$ and $J_{A^{-1}}$ are single-valued, thus, \begin{equation}\label{e:JAJA-1} J_{A}+J_{A^{-1}}=\Id|_{\sumop(\gra A)}, \end{equation} which is equivalent to $\gra A$ being parameterized by \begin{equation}\label{e:graA_para} \gra A=\menge{(J_As,J_{A^{-1}}s)}{s\in \sumop(\gra A)}. \end{equation} Given a set $\Gamma\subseteq X$, we now associate with $\Gamma$ monotone mappings as follows. \begin{definition} Let $\Gamma\subseteq X$ be a set. For each index set $\varnothing \neq K \subsetneq I$, we define the mapping $A_K:H\rightrightarrows H$ by \begin{equation}\label{e:monotone_condition} \gra A_K=\bigg\{\Big(\sum_{i\in K}x_i,\sum_{i\in I\setminus K}x_i\Big)\ \Big|\ (x_1,\ldots,x_N)\in \Gamma\ \bigg\} \end{equation} and for each $i\in I$ we set $A_i=A_{\{i\}}$. \end{definition} Our first aim is to characterize the $c$-monotonicity of a set $\Gamma$ in terms of the monotonicity of its $A_K$'s, and furthermore, extend \eqref{e:JAJA-1} and \eqref{e:graA_para} to the multi-marginal settings. To this end we will employ the sum mapping \begin{equation}\label{e:sum_func} \sumop:X\to H:(x_1,\ldots,x_N)\mapsto \sum_{i\in I}x_i, \end{equation} and the following fact which follows by a straightforward computation (see, e.g.,~\cite[Fact 3.3]{BBW2}). \begin{fact}\label{mono under shift} Let $x\in X$. If the subset $\Gamma$ of $X$ is $n$-$c$-cyclically monotone, then so is $\Gamma+x$. \end{fact} \begin{lemma}\label{l:c-mono_iff_mono} Let $\Gamma\subseteq X$ be a set. Then the following assertions are equivalent: \begin{enumerate} \item\label{l:c-mono_iff_mono-i} $\Gamma$ is $c$-monotone; \item\label{l:c-mono_iff_mono-ii} For each $\varnothing \neq K \subsetneq I$, the mapping $A_K$ is monotone; \item\label{l:c-mono_iff_mono-iii} For each $\varnothing \neq K \subsetneq I$, the mapping $J_{A_K}:\sumop(\Gamma)\to H$ is firmly nonexpansive. 
\end{enumerate} In this case, \begin{equation}\label{e:sumJAi=Id} J_{A_1}+\cdots + J_{A_N}=\Id|_{\sumop(\Gamma)}, \end{equation} equivalently, $\Gamma$ can be parameterized by \begin{equation}\label{e:Gamma_para} \Gamma = \menge{(J_{A_1}s,\ldots,J_{A_N}s)}{s\in \sumop(\Gamma)}; \end{equation} and, furthermore, for each $\varnothing \neq K \subsetneq I$, \begin{equation}\label{e:JAK=sumJAi} J_{A_K}=\sum_{i\in K}J_{A_i}. \end{equation} \end{lemma} \begin{proof} \ref{l:c-mono_iff_mono-i} $\iff$ \ref{l:c-mono_iff_mono-ii}: First we characterize the $c$-monotone relations of the set $\{z,0\}$ in $X$. We employ a similar computation to the one in~\cite[Lemma 4.1]{BBW2}: For $z=(z_1,\ldots,z_N)\in X$ and $\varnothing \neq K \subsetneq I$ we set $z^K=(z^K_1,\ldots,z^K_N)\in X$ by \begin{equation*} z^K_i=\begin{cases} z_i ,& i\in K;\\ 0 ,& i\in I\setminus K. \end{cases} \end{equation*} From Definition~\ref{monotonicity definitions} it follows that $\{z,0\}$ is $c$-monotone if and only if for each $\varnothing\neq K\subsetneq I$ \begin{align*} 0&\leq c(z)+c(0)-c(z^K)-c(z^{I\setminus K})\\[1mm] &=\sum_{i,j\in I,\ i<j}\scal{z_i}{z_j}+0-\sum_{i,j\in I,\ i<j}\scal{z^K_i}{z^K_j}-\sum_{i,j\in I,\ i<j}\scal{z^{I\setminus K}_i}{z^{I\setminus K}_j}\\[1mm] &=\sum_{i,j\in I,\ i<j}\scal{z_i}{z_j}-\sum_{i,j\in K,\ i<j}\scal{z_i}{z_j}-\sum_{i,j\in I\setminus K,\ i<j}\scal{z_i}{z_j}=\bigg\langle\sum_{i\in K}z_i,\sum_{i\in I\setminus K}z_i\bigg\rangle. \end{align*} In general, from Definition~\ref{monotonicity definitions} it follows that the set $\Gamma\subseteq X$ is $c$-monotone if and only if for any $x\in\Gamma$ and $y\in\Gamma$, the set $\{x,y\}$ is $c$-monotone, which, in turn, by invoking Fact~\ref{mono under shift}, is equivalent to the set $\{x-y,0\}$ being $c$-monotone. 
Summing up, we see that $ \Gamma$ is $c$-monotone if and only if for any $x=(x_1,\ldots,x_N),\ y=(y_1,\ldots,y_N)\in\Gamma$ and any $\varnothing \neq K \subsetneq I$, by letting $z=x-y$, $$ 0\leq\bigg\langle\sum_{i\in K}x_i-\sum_{i\in K}y_i,\sum_{i\in I\setminus K}x_i-\sum_{i\in I\setminus K}y_i\bigg\rangle, $$ i.e., $A_K$ is monotone. \ref{l:c-mono_iff_mono-ii} $\iff$ \ref{l:c-mono_iff_mono-iii}: By the definition of $A_K$, it follows that $\dom J_{A_K}=\sumop(\Gamma)$. Thus, the equivalence \ref{l:c-mono_iff_mono-ii} $\iff$ \ref{l:c-mono_iff_mono-iii} follows immediately from Fact~\ref{f:firm_vs_mono}\ref{f:firm_vs_mono-i}. Finally, \eqref{e:sumJAi=Id}, \eqref{e:Gamma_para} and \eqref{e:JAK=sumJAi} follow from \ref{l:c-mono_iff_mono-iii} and the definition of $A_K$. \end{proof} We now address maximal $c$-monotonicity. Equivalent statements of Minty's characterization are: Let $A:H\rightrightarrows H$ be a monotone mapping. Then $A$ is maximally monotone if and only if $$ \sumop\big(\gra(A)\big)=H, $$ equivalently, $$ \gra(A)+\gra(-\Id)=H\times H. $$ In order to extend our discussion of these formulas into the multi-marginal settings we will employ the following definitions and notations. We denote by $\Delta$ the subset of $X=X_1\times\cdots\times X_N$ defined by \begin{equation}\label{e:Delta} \Delta=\big\{(x,\ldots,x)\big|\ x\in H\big\}.\ \ \ \ \ \ \ \text{Consequently,}\ \ \ \ \ \Delta^{\perp}=\Big\{(x_1,\ldots,x_N)\in X\ \Big|\ \sum_{i=1}^N x_i=0\ \Big\}. \end{equation} \begin{corollary}\label{c:unique_mono_decomp} Let $\Gamma\subseteq X$ be a $c$-monotone set. Then for every $u,v\in\Gamma$, \begin{equation} u-v\in \Delta^\perp\qquad \iff\qquad u=v. \end{equation} \end{corollary} \begin{proof} Let $u=(u_1,\ldots,u_N)$ and $v=(v_1,\ldots,v_N)$ belong to $\Gamma$ and suppose that $$ u-v=d=(d_1,\ldots,d_N)\in\Delta^\perp. $$ We prove that $d_i=0$ for each $1\leq i\leq N$. To this end, set $1\leq i_0\leq N$. 
By Lemma~\ref{l:c-mono_iff_mono}, $A_{i_0}$ is monotone. Consequently we see that \begin{equation*} 0\leq \bigg\langle u_{i_0}-v_{i_0},\sum_{i\neq i_0}u_i-\sum_{i\neq i_0}v_i\bigg\rangle =\bigg\langle d_{i_0},\sum_{i\neq i_0}d_i\bigg\rangle=\scal{d_{i_0}}{-d_{i_0}}=-\|d_{i_0}\|^2\leq 0. \end{equation*} Hence $d_{i_0}=0$; since $i_0$ was arbitrary, we conclude that $u=v$. \end{proof} Combining Lemma~\ref{l:c-mono_iff_mono} and Corollary~\ref{c:unique_mono_decomp} with classical two-marginal monotone operator theory, we arrive at the following result. \begin{theorem}[multi-marginal maximal $c$-monotonicity] \label{t:MMMM} Let $\Gamma\subseteq X$ be a $c$-monotone set. Then the following assertions are equivalent: \begin{enumerate} \item\label{t:MMMM-i} For each $\varnothing \neq K \subsetneq I$ the mapping $A_K$ defined by \eqref{e:monotone_condition} is maximally monotone; \item\label{t:MMMM-ii} There exists $\varnothing \neq K \subsetneq I$ such that the mapping $A_K$ is maximally monotone; \item\label{t:MMMM-iii} $\Gamma+\Delta^\perp=X$; \item\label{t:MMMM-iv} $J_{A_{1}}+\cdots+J_{A_{N}}=\Id$; \item\label{t:MMMM-v} For each $\varnothing \neq K \subsetneq I$ the firmly nonexpansive mapping $J_{A_K}:H\to H$ has full domain and $J_{A_K}=\sum_{i\in K}J_{A_i}$; \item\label{t:MMMM-vi} $\sumop(\Gamma)=H$. \end{enumerate} In this case, $\Gamma$ is maximally $c$-monotone. \end{theorem} \begin{proof} \ref{t:MMMM-i} $\Rightarrow$ \ref{t:MMMM-ii} is trivial. \ref{t:MMMM-ii} $\Rightarrow$ \ref{t:MMMM-iii}: Suppose that $A_K$ is maximally monotone and let $a=(a_1,\ldots,a_N)\in X$. We will prove that there exist $x=(x_1,\ldots,x_N)\in\Gamma$ and $d=(d_1,\ldots,d_N)\in\Delta^\perp$ such that $x+d=a$. Indeed, the maximal monotonicity of $A_K$ implies that $\ran(A_K+\Id)=H$. Consequently, by the definition of $A_K$, there exists $x=(x_1,\ldots,x_N)\in\Gamma$ such that $\sum_{i=1}^N x_i=\sum_{i=1}^N a_i$. For each $1\leq i\leq N$ we let $d_i=a_i-x_i$. Then $\sum_{i=1}^N d_i=0$, that is, $d=(d_1,\ldots,d_N)\in\Delta^\perp$ and $x+d=a$. 
\ref{t:MMMM-iii} $\Rightarrow$ \ref{t:MMMM-iv}: Fix $1\leq i_0\leq N$. We prove that $A_{i_0}+\Id$ is onto. Indeed, let $s\in H$. We prove that there exists $x=(x_1,\ldots,x_N)\in\Gamma$ such that $(x_{i_0},s)\in\gra(A_{i_0}+\Id)$. To this end, let $h=(h_1,\ldots,h_N)\in X$ be such that $\sum_{i=1}^N h_i=s$. Then \ref{t:MMMM-iii} implies the existence of $x=(x_1,\ldots,x_N)\in\Gamma$ and $d=(d_1,\ldots,d_N)\in\Delta^\perp$ such that $x+d=h$. Consequently, $\sum_{i=1}^N x_i=s$, which implies that $(x_{i_0},s)=\Big(x_{i_0},\sum_{i=1}^N x_i\Big)\in\gra(A_{i_0}+\Id)$. Thus, since $A_{i_0}$ is monotone, we conclude that its resolvent $J_{A_{i_0}}$ is firmly nonexpansive and has full domain. This is true for each $1\leq i_0\leq N$ and, since for any $s\in H$ there exists $x=(x_1,\ldots,x_N)\in\Gamma$ such that $\sum_{i=1}^N x_i=s$, we conclude that $\sum_{i=1}^N J_{A_i}(s)=\sum_{i=1}^N x_i=s$, that is, \ref{t:MMMM-iv} holds. \ref{t:MMMM-iv} $\Rightarrow$ \ref{t:MMMM-v}: Since $A_K$ is monotone for every $\varnothing \neq K \subsetneq I$, the resolvent $J_{A_K}$ is firmly nonexpansive and \ref{t:MMMM-iv} implies that it has full domain. Furthermore, using the notation of the previous step, we see that for every $s\in H$, $\sum_{i\in K} J_{A_i}(s)=\sum_{i\in K} x_i=J_{A_K}(s)$, that is, we have arrived at \ref{t:MMMM-v}. \ref{t:MMMM-v} $\Rightarrow$ \ref{t:MMMM-i}: Let $\varnothing \neq K \subsetneq I$. Since the resolvent $J_{A_K}$ is firmly nonexpansive and has full domain, $A_K$ is maximally monotone. Summing up, we have established \ref{t:MMMM-i} $\Rightarrow$ \ref{t:MMMM-ii} $\Rightarrow$ \ref{t:MMMM-iii} $\Rightarrow$ \ref{t:MMMM-iv} $\Rightarrow$ \ref{t:MMMM-v} $\Rightarrow$ \ref{t:MMMM-i}. \ref{t:MMMM-iv} $\Rightarrow$ \ref{t:MMMM-vi}: Since $\dom(J_{A_i})=\sumop(\Gamma)$ for each $1\leq i\leq N$, assertion \ref{t:MMMM-iv} implies \ref{t:MMMM-vi}. \ref{t:MMMM-vi} $\Rightarrow$ \ref{t:MMMM-iii}: Suppose that $\sumop(\Gamma)=H$ and let $y\in X$.
Then there exists $x\in\Gamma$ such that $\sumop(y)=\sumop(x)$. Consequently, $y-x\in\Delta^\perp$, which implies that $y=x+(y-x)\in\Gamma+\Delta^\perp$. Finally, we prove that \ref{t:MMMM-iii} implies the maximal $c$-monotonicity of $\Gamma$. Indeed, suppose that $u$ is $c$-monotonically related to $\Gamma$. We then write $u=d+v$, where $d\in\Delta^\perp$ and $v\in\Gamma$. Since $u,v\in \Gamma\cup\{u\}$, which is $c$-monotone, and $u-v\in\Delta^\perp$, Corollary~\ref{c:unique_mono_decomp} implies that $u=v\in\Gamma$. \end{proof} \begin{remark} To the best of our knowledge, it is still open whether the multi-marginal generalization of the other direction of Minty's characterization of maximal monotonicity holds, namely, whether the maximal $c$-monotonicity of the set $\Gamma$ implies that $\Gamma+\Delta^\perp=X$ or, equivalently, that $J_{A_{1}}+\cdots+J_{A_{N}}=\Id$. \end{remark} \begin{remark} Regarding the partition of the identity in~\eqref{e:sumJAi=Id} and in Theorem~\ref{t:MMMM}\ref{t:MMMM-iv}, we conclude from~\eqref{e:JAK=sumJAi} and Theorem~\ref{t:MMMM}\ref{t:MMMM-v} that any partial sum of the firmly nonexpansive mappings is also firmly nonexpansive. This is not the case for general partitions of the identity into sums of firmly nonexpansive mappings; indeed, an example where partial sums of a partition of the identity into firmly nonexpansive mappings fail to be firmly nonexpansive is provided in~\cite[Example 4.4]{BBW1}. We elaborate further on this in Example~\ref{e:failed partial sum} below. \end{remark} \section{Multi-marginal maximal $c$-monotonicity via continuity} \label{s:MMMM_via_cont} In the classical two-marginal case an important class of maximally monotone operators is that of continuous monotone operators. A continuity criterion guarantees maximality in the multi-marginal framework as well: \begin{theorem}\label{continuous monotone is maximal} Let $\Gamma\subseteq X$ be a $c$-monotone set.
Suppose that $\Gamma$ is the graph of a continuous mapping $T=(T_2,\ldots,T_N):X_1\to\prod_{i=2}^N X_i$, i.e., \begin{equation*} \Gamma=\gra(T)=\big\{(x,T_2x,\ldots,T_N x)\big|\ x\in H\big\} \end{equation*} where for each $2\leq i\leq N$ the mapping $T_i:H\to H$ is continuous. Then $\Gamma$ is maximally $c$-monotone. \end{theorem} We provide two proofs for Theorem~\ref{continuous monotone is maximal}. We begin with a direct proof. \begin{proof} Let $u=(u_1,\ldots,u_N)$ be $c$-monotonically related to $\Gamma$. We prove that $u\in\Gamma$. Since $A_1$, induced from the $c$-monotone set $\Gamma\cup\{u\}$, is monotone, \begin{equation*} \forall x\in H,\quad 0\leq\bigg\langle u_1-x,\ \sum_{i=2}^N (u_i-T_i x)\bigg\rangle. \end{equation*} For $t>0$ we let $x_t=u_1+t\sum_{i=2}^N(u_i-T_i u_1)$. Then $x_t\longrightarrow u_1$ as $t\to0^+$ and, by substituting $x=x_t$ above, \begin{align*} 0\leq t\bigg\langle \sum_{i=2}^N(T_i u_1-u_i),\ \sum_{i=2}^N (u_i-T_i x_t)\bigg\rangle. \end{align*} Since each $T_i$ is continuous, we deduce that \begin{align*} 0&\leq \bigg\langle \sum_{i=2}^N(T_i u_1-u_i),\ \sum_{i=2}^N (u_i-T_i x_t)\bigg\rangle\\[2mm] &\xrightarrow[]{t\to 0^+}\ \bigg\langle \sum_{i=2}^N(T_i u_1-u_i),\ \sum_{i=2}^N (u_i-T_i u_1)\bigg\rangle=-\bigg\|\sum_{i=2}^N (u_i-T_i u_1)\bigg\|^2, \end{align*} which implies \begin{equation*} \sum_{i=2}^N u_i=\sum_{i=2}^N T_i u_1; \end{equation*} equivalently, \begin{equation} (u_1,\ldots,u_N)-(u_1,T_2u_1,\ldots,T_Nu_1)\in\Delta^\perp. \end{equation} Thus, by Corollary~\ref{c:unique_mono_decomp}, we have $ (u_1,\ldots,u_N)=(u_1,T_2u_1,\ldots,T_Nu_1)\in\gra(T). $ \end{proof} The second proof of Theorem~\ref{continuous monotone is maximal} employs the classical two-marginal fact that a monotone and continuous mapping is maximally monotone \cite[Corollary~20.28]{BC2017}, Lemma~\ref{l:c-mono_iff_mono} and Theorem~\ref{t:MMMM}.
\begin{proof} Since $A_1(x)=\sum_{i=2}^N T_i(x)$ for every $x\in H$, by employing Lemma~\ref{l:c-mono_iff_mono} it follows that $A_1$ is a monotone and continuous mapping, hence, maximally monotone. Consequently, by employing Theorem~\ref{t:MMMM} we conclude that $\Gamma$ is maximally $c$-monotone. \end{proof} \section{Maximal $c$-monotonicity of $c$-splitting sets, $c$-conjugate tuples and multi-marginal convex analysis} \label{s:c-split_tuples} We begin our discussion of $c$-splitting tuples with a known observation regarding the subdifferentials of the splitting functions: As in~\cite{GS,KS,RU} we observe that if $(f_1,\ldots,f_N)$ is a $c$-splitting tuple of $\Gamma\subseteq X$, then given $x=(x_1,\ldots,x_N)\in\Gamma$ and for any $x_1'\in X_1$, \begin{align*} \sum_{i=1}^N f_i(x_i)&=c(x_1,\ldots,x_N)\\[1mm] \text{and} \qquad c(x_1',x_2,\ldots,x_N)&\leq f_1(x_1')+\sum_{i=2}^N f_i(x_i). \end{align*} Combining these two relations and simplifying, we see that \begin{equation*} f_1(x_1)+\scal{x_1'}{x_2+\cdots+x_N}\leq f_1(x_1')+\scal{x_1}{x_2+\cdots+x_N},\ \ \ \ \text{that is,}\ \ \ \ \ \ \sum_{i=2}^N x_i\in\partial f_1(x_1). \end{equation*} Similarly, we conclude that for each $1\leq i_0\leq N$, \begin{equation}\label{e:split_subdiff i} \sum_{i\neq i_0} x_i\in\partial f_{i_0}(x_{i_0}). \end{equation} Since $\gra A_{i_0}=\Big\{\big( x_{i_0},\sum_{i\neq i_0}x_i\big)\ \Big|\ (x_1,\ldots,x_N)\in \Gamma\ \Big\}$, this implies \begin{equation}\label{e:split_subdiff ii} \gra(A_{i_0})\subseteq\gra(\partial f_{i_0}). \end{equation} Similar observations and $c$-monotonicity properties of $\Gamma$ from Section 2 are also related to the \emph{Wasserstein barycenter} as can be seen, for example, in~\cite{AC}. We continue our discussion with a characterization of $c$-splitting tuples and their generated $c$-splitting sets in terms of the Moreau envelopes of the splitting functions.
\begin{theorem}\label{envelopes inequality and equality} For each $1\leq i\leq N$, let $f_i:X_i\to\RX$ be proper, lower semicontinuous, and convex. Then $c\leq\bigoplus_{i=1}^N f_i$ if and only if \begin{equation}\label{e:env_ineq} \forall s\in H,\quad e_{f_1^*}(s)+\cdots+e_{f_N^*}(s)\leq q(s). \end{equation} Now assume this is the case, and let $\Gamma\subseteq X$ be the $c$-splitting set generated by $(f_1,\ldots,f_N)$. Then equality in~\eqref{e:env_ineq} holds if and only if $s=x_1+\cdots+x_N$ where $(x_1,\ldots,x_N)\in\Gamma$. \end{theorem} \begin{proof} The inequality $c\leq\bigoplus_{i=1}^N f_i$ holds if and only if for all $(x_1,\ldots,x_N)\in X$, \begin{align} &c(x_1,\ldots,x_N)\leq \sum_{i=1}^N f_i(x_i)\label{e:c_leq_fi}\\ \iff\quad &q(x_1+\cdots+x_N)=c(x_1,\ldots,x_N)+\sum_{i=1}^{N}q(x_i)\leq \sum_{i=1}^{N}\big(f_i(x_i)+q(x_i)\big).\label{e:q_leq_fi+q} \end{align} We see that \eqref{e:c_leq_fi} holds with equality only when $(x_1,\ldots,x_N)\in\Gamma$ if and only if \eqref{e:q_leq_fi+q} holds with equality only when $(x_1,\ldots,x_N)\in\Gamma$. Let $\phi:X\to\RR$ be defined by $$ \phi(x_1,\ldots,x_N)=q(x_1+\cdots+x_N). $$ Then, using \cite[Corollary~15.28(i)]{BC2017}, we have \begin{equation*} \forall(x_1,\ldots,x_N)\in X,\quad \phi^*(x_1,\ldots,x_N)=q(x_1)+\iota_{\Delta}(x_1,\ldots,x_N). \end{equation*} Since for each $1\leq i\leq N$, $(f_i+q)^*=e_{f_i^*}$ (see, for example, \cite[Proposition~14.1]{BC2017}), we arrive at \begin{equation*} \Big(\bigoplus_{i=1}^N (f_i+q)\Big)^*=\bigoplus_{i=1}^N (f_i+q)^*=\bigoplus_{i=1}^N e_{f_i^*}. \end{equation*} Consequently, (classical) Fenchel conjugation transforms~\eqref{e:q_leq_fi+q} into~\eqref{e:env_ineq} and vice versa. We now address the case of equality in~\eqref{e:env_ineq}. Let $(x_1,\ldots,x_N)\in X$ and set $s=x_1+\cdots+x_N$.
Then for each $1\leq i\leq N$, by the Fenchel-Young inequality, \begin{equation}\label{Y-F for envelopes} \scal{s}{x_i}\leq (f_i+q)^*(s)+(f_i+q)(x_i)=e_{f_i^*}(s)+(f_i+q)(x_i) \end{equation} with equality if and only if $x_i\in\partial (f_i+q)^*(s)$, i.e., since $(f_i+q)^*=e_{f_i^*}$ is Fr\'echet differentiable (see, e.g., \cite[Proposition~12.30]{BC2017}), $x_i=\nabla e_{f_i^*}(s)$. By summing up~\eqref{Y-F for envelopes} over $i$, we obtain \begin{equation}\label{multi-marginal Y-F for envelopes} \scal{s}{s}=\sum_{i=1}^N\scal{s}{x_i}\leq\sum_{i=1}^N \big(e_{f^*_i}(s)+(f_i+q)(x_i)\big) \end{equation} with equality if and only if $x_i=\nabla e_{f_i^*}(s)$ for every $1\leq i\leq N$. ($\Leftarrow$): Suppose that $x=(x_1,\ldots,x_N)$ is in the $c$-splitting set $\Gamma$ generated by $(f_1,\ldots,f_N)$ and set $s=\sumop(x)$. We prove equality in \eqref{e:env_ineq}. It follows from~\eqref{e:split_subdiff i} that for each $1\leq i\leq N$, \begin{equation} \sum_{j\neq i}x_j=s-x_i\in\partial f_i(x_i) \quad\iff\quad s\in\partial (f_i+q)(x_i), \end{equation} which, in turn, implies that $x_i\in\partial (f_i+q)^*(s)$, that is, $x_i=\nabla e_{f_i^*}(s)$. Since in this case there is equality in~\eqref{multi-marginal Y-F for envelopes} and in~\eqref{e:q_leq_fi+q}, we obtain equality in~\eqref{e:env_ineq}. ($\Rightarrow$): Let $s\in H$ be a point where equality in~\eqref{e:env_ineq} holds. Since $\sum_{i=1}^N e_{f_i^*}$ and $q$ are Fr\'echet differentiable and $\sum_{i=1}^N e_{f_i^*}\leq q$, at the point of equality $s$ we have $$ \nabla\Big(\sum_{i=1}^N e_{f_i^*}\Big)(s)=\nabla q(s)=s. $$ For each $1\leq i\leq N$, set $x_i=\prox_{f_i}(s)=\nabla e_{f^*_i}(s)$ (see, e.g., \cite[eq~(14.7)]{BC2017}). Then it follows that $s=x_1+\cdots+x_N$. Thus, in order to complete the proof it is enough to prove that $(x_1,\ldots,x_N)\in\Gamma$ or, equivalently, that there is equality in~\eqref{e:q_leq_fi+q}.
Indeed, Moreau's decomposition (see, e.g., \cite[Remark~14.4]{BC2017}) implies that $e_{f_i}+e_{f_i^*}=q$ for each $1\leq i\leq N$. Consequently, \begin{equation*} \sum_{i=1}^N e_{f_i^*}(s)= q(s) \qquad \text{is equivalent to} \qquad \sum_{i=1}^N e_{f_i}(s)=(N-1)q(s). \end{equation*} We also note that for each $1\leq i\leq N$, $x_i=\prox_{f_i}(s)$ implies that \begin{equation*} e_{f_i}(s)=\min_{x\in H}\big(f_i(x)+q(s-x)\big)=f_i(x_i)+q(s-x_i). \end{equation*} Thus, we arrive at \begin{align*} &\sum_{i=1}^N \big(q(s-x_i)+f_i(x_i)\big)=(N-1)q(s)\\[1mm] \Leftrightarrow\qquad &-\sum_{i=1}^N\scal{s}{x_i}+\sum_{i=1}^N\big(f_i(x_i)+q(x_i)\big)=-q(s)\\[1mm] \Leftrightarrow\qquad &\sum_{i=1}^N\big(f_i(x_i)+q(x_i)\big)=q(s). \end{align*} \end{proof} We now address $c$-conjugation. \begin{definition}[$c$-conjugate tuple] For each $1\leq i\leq N$, let $f_i:X_i\to\RX$ be a proper function. We say that $(f_1,\ldots,f_N)$ is a {\em $c$-conjugate tuple} if for each $1\leq i_0\leq N$, \begin{equation*} f_{i_0}(x_{i_0})=\Big(\bigoplus_{i\neq i_0}f_i\Big)^c(x_{i_0}) :=\sup_{i\neq i_0,\ x_i\in X_i}\Big(c(x_1,\ldots,x_{i_0},\ldots,x_N)-\sum_{i\neq i_0}f_i(x_i)\Big),\ \ \ \ \ \ x_{i_0}\in X_{i_0}. \end{equation*} \end{definition} It follows that if $(f_1,\ldots,f_N)$ is a $c$-conjugate tuple, then $f_i$ is lower semicontinuous and convex for each $1\leq i\leq N$. Furthermore, it is known (see \cite{GS} and \cite{CN}) that given a $c$-splitting tuple $(u_1,\ldots,u_N)$ of a set $\Gamma\subseteq X$, it can be relaxed into a $c$-conjugate $c$-splitting tuple $(f_1,\ldots,f_N)$ of $\Gamma$ by setting \begin{equation*} f_{1}=\Big(\bigoplus_{2\leq i\leq N}u_i\Big)^c \end{equation*} then, inductively, \begin{equation*} f_{i_0}=\Big(\bigoplus_{1\leq i\leq i_0-1}f_i \oplus \bigoplus_{i_0+1\leq i\leq N}u_i\Big)^c \qquad\text{for}\ \ 2\leq i_0\leq N-1, \end{equation*} and finally \begin{equation*} f_N=\Big(\bigoplus_{1\leq i\leq N-1}f_i\Big)^c.
\end{equation*} In the case $N=2$, let $f_1:X_1\to\RX$ be proper, lower semicontinuous and convex, let $f_2=f_1^*:X_2\to\RX$ be its conjugate and let $\Gamma=\gra(\partial f_1)\subseteq X_1\times X_2$. Then it is well known that $\Gamma$ is maximally monotone, see, e.g., \cite[Theorem~20.25]{BC2017}. Since $f_1=f_1^{**}=f_2^c$ and also $f_2=f_1^c$, we can restate this as follows: \begin{quote} Let $\Gamma\subseteq X_1\times X_2$ be the $c$-splitting set generated by the $c$-conjugate pair $(f_1,f_2)$. Then $\Gamma$ is maximally $c$-monotone and determines its $c$-conjugate $c$-splitting tuple $(f_1,f_2)$ uniquely up to an additive constant pair $(\rho,-\rho)$ with $\rho\in\RR$. \end{quote} A generalization to an arbitrary $N\geq 2$ would be \begin{quote} Let $\Gamma\subseteq X$ be the $c$-splitting set generated by the $c$-conjugate tuple $(f_1,\ldots,f_N)$. Then $\Gamma$ is maximally $c$-monotone and determines its $c$-conjugate $c$-splitting tuple $(f_1,\ldots,f_N)$ uniquely up to an additive constant tuple $(\rho_1,\ldots,\rho_N)$ such that $\sum_{i=1}^N \rho_i=0$. \end{quote} To the best of our knowledge, whether or not this latter assertion is true in general is still open. We do, however, provide a positive answer in a more particular case in Theorem~\ref{3-marginal smooth conjugate} and additional insight in Theorem~\ref{t:split_max_c-mono}. Furthermore, we note that in the case $N=2$, given a conjugate pair $(f_1,f_2)$, Moreau's decomposition can be restated as \begin{equation}\label{e:moreau_decomp} e_{f_1^*}+e_{f_2^*}=q \qquad\text{and}\qquad \prox_{f_1}+\prox_{f_2}=\Id. \end{equation} Combining our discussion with Theorem~\ref{envelopes inequality and equality} and Lemma~\ref{l:c-mono_iff_mono}, we arrive at the following generalized multi-marginal convex analytic assertions which, in particular, generalize the decomposition \eqref{e:moreau_decomp}.
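As a quick numerical aside (ours, not part of the formal development), the decomposition \eqref{e:moreau_decomp} can be checked for a concrete conjugate pair on $H=\RR$: take $f_1=|\cdot|$, whose conjugate is $f_2=f_1^*=\iota_{[-1,1]}$. Then $\prox_{f_1}$ is soft-thresholding, $\prox_{f_2}$ is the projection onto $[-1,1]$, $e_{f_1^*}$ is half the squared distance to $[-1,1]$, and $e_{f_2^*}=e_{f_1}$ is the Huber function.

```python
import numpy as np

# Conjugate pair on R: f1 = |.|  and  f2 = f1* = indicator of [-1, 1].
def prox_f1(s):          # soft-thresholding = prox of the absolute value
    return np.sign(s) * np.maximum(np.abs(s) - 1.0, 0.0)

def prox_f2(s):          # projection onto [-1, 1] = prox of the indicator
    return np.clip(s, -1.0, 1.0)

def env_f1_star(s):      # Moreau envelope of f1*: half squared distance to [-1, 1]
    return 0.5 * np.maximum(np.abs(s) - 1.0, 0.0) ** 2

def env_f2_star(s):      # Moreau envelope of f2* = f1: the Huber function
    return np.where(np.abs(s) <= 1.0, 0.5 * s**2, np.abs(s) - 0.5)

s = np.linspace(-5.0, 5.0, 1001)
q = 0.5 * s**2

# prox_{f1} + prox_{f2} = Id  and  e_{f1*} + e_{f2*} = q on the whole grid.
assert np.allclose(prox_f1(s) + prox_f2(s), s)
assert np.allclose(env_f1_star(s) + env_f2_star(s), q)
print("Moreau decomposition verified on the grid")
```

The closed forms used above are standard for this pair; the grid check is purely illustrative.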
To this end, we again recall that for each $1\leq i_0\leq N$, $$\gra A_{i_0}=\Big\{\big( x_{i_0},\sum_{i\neq i_0}x_i\big)\ \Big|\ (x_1,\ldots,x_N)\in \Gamma\ \Big\}.$$ \begin{theorem}\label{t:split_max_c-mono} For each $1\leq i\leq N$, let $f_i:X_i\to\RX$ be convex, lower semicontinuous, and proper. Suppose that $\Gamma\subseteq X$ is the $c$-splitting set generated by $(f_1,\ldots,f_N)$. Then the following assertions are equivalent: \begin{enumerate} \item\label{t:split_max_c-mono-i} There exists $1\leq i_0\leq N$ such that $A_{i_0}$ is maximally monotone; \item\label{t:split_max_c-mono-ii} There exists $1\leq i_0\leq N$ such that $A_{i_0}=\partial f_{i_0}$; \item\label{t:split_max_c-mono-iii} $A_{i}=\partial f_{i}\ $ for each $1\leq i\leq N$; \item\label{t:split_max_c-mono-iv} $\prox_{f_1}+\cdots+\prox_{f_N}=\Id$; \item\label{t:split_max_c-mono-v} $e_{f_1^*}+\cdots+e_{f_N^*}=q$. \end{enumerate} In this case \begin{enumerate}[label={\rm(\Alph*)}] \item\label{t:split_max_c-mono-A} $\Gamma$ is maximally $c$-monotone (and, consequently, maximally $c$-cyclically monotone); \item\label{t:split_max_c-mono-B} $(f_1,\ldots,f_N)$ is a $c$-conjugate $c$-splitting tuple of $\Gamma$. Moreover, $\Gamma$ determines its $c$-conjugate $c$-splitting tuple $(f_1,\ldots,f_N)$ uniquely up to an additive constant tuple $(\rho_1,\ldots,\rho_N)$ such that $\sum_{i=1}^N \rho_i=0$. \end{enumerate} \end{theorem} \begin{proof} \ref{t:split_max_c-mono-i} $\Rightarrow$ \ref{t:split_max_c-mono-ii}: $\partial f_{i_0}$ is monotone and $\gra(A_{i_0})\subseteq\gra(\partial f_{i_0})$ (see~\eqref{e:split_subdiff ii}). Consequently, since $A_{i_0}$ is maximally monotone, it follows that $A_{i_0}=\partial f_{i_0}$. \ref{t:split_max_c-mono-ii} $\Rightarrow$ \ref{t:split_max_c-mono-iii}: $A_{i_0}=\partial f_{i_0}$ is maximally monotone as the subdifferential of a proper lower semicontinuous convex function.
Consequently, it follows from Theorem~\ref{t:MMMM}\ref{t:MMMM-i}\&\ref{t:MMMM-ii} that $A_i$ is maximally monotone for each $1\leq i\leq N$. Now, $\partial f_i$ is monotone and $\gra(A_{i})\subseteq\gra(\partial f_{i})$ (see~\eqref{e:split_subdiff ii}). Consequently, since $A_i$ is maximally monotone, it follows that $A_i=\partial f_i$. \ref{t:split_max_c-mono-iii} $\Rightarrow$ \ref{t:split_max_c-mono-iv}: Follows from Theorem~\ref{t:MMMM}\ref{t:MMMM-i}\&\ref{t:MMMM-iv} since $A_i=\partial f_i$ is maximally monotone and $\prox_{f_i}=J_{\partial f_i}=J_{A_i}$. \ref{t:split_max_c-mono-iv} $\Rightarrow$ \ref{t:split_max_c-mono-v}: By integrating \ref{t:split_max_c-mono-iv} we obtain the equality in \ref{t:split_max_c-mono-v} up to an additive constant. Theorem~\ref{envelopes inequality and equality} implies that equality in \ref{t:split_max_c-mono-v} holds on $\sumop(\Gamma)$; thus, the additive constant vanishes. \ref{t:split_max_c-mono-v} $\Rightarrow$ \ref{t:split_max_c-mono-i}: By Theorem~\ref{envelopes inequality and equality} equality in \ref{t:split_max_c-mono-v} holds only on $\sumop(\Gamma)$. Consequently, \ref{t:split_max_c-mono-v} implies that $\sumop(\Gamma)=H$. By employing Theorem~\ref{t:MMMM}\ref{t:MMMM-vi}\&\ref{t:MMMM-i}, we obtain \ref{t:split_max_c-mono-i}. In this case Theorem~\ref{t:MMMM} also implies that $\Gamma$ is maximally $c$-monotone. Thus, it remains to prove \ref{t:split_max_c-mono-B}. By our preliminary discussion there exists a $c$-conjugate $c$-splitting tuple $(h_1,\ldots,h_N)$ of $\Gamma$. From \ref{t:split_max_c-mono-iii} and from \eqref{e:split_subdiff ii} we conclude that $\gra(\partial f_{i})=\gra(A_{i})\subseteq\gra(\partial h_{i}) $ which, by maximality, implies that $\partial f_i=\partial h_i$ for each $1\leq i\leq N$. Hence there exists a constant tuple $(\rho_1,\ldots,\rho_N)\in\RR^N$ such that $(f_1,\ldots,f_N)=(h_1,\ldots,h_N)+(\rho_1,\ldots,\rho_N)$.
For $(x_1,\ldots,x_N)\in\Gamma$ the equality $\sum_{i=1}^N f_i(x_i)=\sum_{i=1}^N h_i(x_i)$ implies that $\sum_{i=1}^N \rho_i=0$. Consequently, the fact that for each $1\leq i_0\leq N$ \begin{equation*} f_{i_0}-\rho_{i_0}=\Big(\bigoplus_{i\neq i_0}(f_{i}-\rho_{i})\Big)^c \end{equation*} implies that $(f_1,\ldots,f_N)$ is a $c$-conjugate tuple. \end{proof} We now provide a smoothness criterion in the 3-marginal case where Theorem~\ref{t:split_max_c-mono}\ref{t:split_max_c-mono-i}--\ref{t:split_max_c-mono-v}\&\ref{t:split_max_c-mono-B} are equivalent and imply maximal $c$-monotonicity. To this end we will employ the following facts. \begin{fact} {\rm (\cite[Theorem~14.19]{BC2017})} \label{Toland} Let $g:H\to\RX$ be proper, let $h:H\to\RX$ be proper, lower semicontinuous and convex. Set \begin{equation*} f:H\to[-\infty,+\infty]:x\mapsto\begin{cases} g(x)-h(x), & \ \ \ x\in\dom(g);\\ +\infty, & \ \ \ x\notin\dom(g). \end{cases} \end{equation*} Then \begin{equation*} f^*(y)=\sup_{v\in\dom(h^*)}\big(g^*(y+v)-h^*(v)\big). \end{equation*} \end{fact} \begin{fact}\label{Solov} {\rm(\cite[Corollary 2.3]{Sol})} Let $f:\RR^n\to\RR$ be proper and lower semicontinuous. If $f^*$ is essentially smooth, then $f$ is convex. \end{fact} \begin{theorem}\label{3-marginal smooth conjugate} Let $n\in\NN,\ N=3$ and $H=\RR^n$. Let $g:X_2\to\RX$ and $h:X_3\to\RX$ be proper, lower semicontinuous and convex functions. Suppose that $f=(g\oplus h)^c$ (in particular, this holds if $(f,g,h)$ is a $c$-conjugate triple) and that $f$ is essentially smooth. Let $\Gamma$ be the $c$-splitting set generated by $(f,g,h)$. Then assertions~\ref{t:split_max_c-mono-i}--\ref{t:split_max_c-mono-v} of Theorem~\ref{t:split_max_c-mono} hold and $\Gamma$ is maximally $c$-monotone.
\end{theorem} \begin{proof} Since $f=(g\oplus h)^c$ and $\dom(g+q)^*=\dom(e_{g^*})=\RR^n$, by employing Fact~\ref{Toland} in~\eqref{f(x+y) conjugates} and then Moreau's decomposition in~\eqref{Moreau's decomp for 3 conjugates} we see that \begin{align} (f+q)(x)&= \sup_{y,z\in\RR^n} \big(c(x,y,z)-g(y)-h(z)+q(x)\big)\nonumber\\ &=\sup_{y,z\in\RR^n} \big(\scal{x}{y}+\scal{y}{z}+\scal{z}{x}+q(x)-g(y)-h(z)\big)\nonumber\\ &=\sup_{y\in\RR^n}\big( \scal{x}{y}+h^*(x+y)+q(x)-g(y)\big)\nonumber\\ &= \sup_{y\in\RR^n}\big( h^*(x+y)+q(x+y)-(g(y)+q(y))\big)\nonumber\\ &=\big((h^*+q)^*-(g+q)^*\big)^*(x)\label{f(x+y) conjugates}\\ &=(e_h-e_{g^*})^*(x)=(q-e_{g^*}-e_{h^*})^*(x).\label{Moreau's decomp for 3 conjugates} \end{align} Since $f+q$ is essentially smooth, Fact~\ref{Solov} implies that $q-e_{g^*}-e_{h^*}$ is convex. Consequently, $$ e_{f^*}=(f+q)^*=(q-e_{g^*}-e_{h^*})^{**}=q-e_{g^*}-e_{h^*}, $$ that is, $e_{f^*}+e_{g^*}+e_{h^*}=q$. \end{proof} \begin{remark} In our discussion in the last paragraph of Section~\ref{s:multi-mar-Minty} we pointed out that in the partition of the identity in~Theorem~\ref{t:MMMM}\ref{t:MMMM-iv} any partial sum of the firmly nonexpansive mappings is again firmly nonexpansive and, furthermore, that partial sums of general partitions of the identity into firmly nonexpansive mappings may fail to be firmly nonexpansive. Thus, in the context of $c$-splitting sets a natural question is: Given a partition of the identity into proximal mappings, are partial sums also proximal mappings? Unlike for general firmly nonexpansive mappings, this question has a positive answer, which is provided by~\cite[Theorem~4.2]{BBW1}. \end{remark} \section{Examples, observations and remarks} \label{s:ex} We now apply our results in order to determine maximality of $c$-monotone sets. Given a multi-marginal $c$-cyclically monotone set $\Gamma\subseteq X$, the problem of constructing a $c$-splitting tuple is, in general, nontrivial.
Nevertheless, constructions which are independent of maximality and uniqueness considerations are available for some classes of $c$-cyclically monotone sets (for example, see \cite{BBW2} for the case $N \geq 3$). We also note that $c$-splitting tuples can be constructed via~\eqref{e:split_subdiff ii} if it is known, in addition, that the antiderivatives $f_i$ are unique up to additive constants, as guaranteed by Theorem~\ref{t:split_max_c-mono}. Now, suppose that a $c$-splitting tuple is already given. Thus far, computing a $c$-splitting tuple and classifying it as a $c$-conjugate tuple have been nontrivial tasks. We employ our results for such classifications in the following examples. For these cases, we are able to conclude $c$-conjugacy without additional nontrivial computations of multi-marginal conjugates. In addition, we demonstrate finer aspects of multi-marginal maximal monotonicity. \begin{example}\label{ex:quadratics} For each $1\leq i\leq N$, set $X_i=\RR^d$ and let $Q_i\in\RR^{d\times d}$ be symmetric, positive definite, and pairwise commuting. Set \begin{equation*} \Gamma=\big\{(Q_1v,\ldots,Q_Nv)\ \big|\ v\in\RR^d\big\}. \end{equation*} For each $1\leq i\leq N$, define $M_i\in\RR^{d\times d}$ by \begin{equation*} M_i=\Big(\sum_{k\neq i}Q_k\Big)Q_i^{-1}. \end{equation*} In~\cite[Example~3.4]{BBW2}, it was established that \begin{equation*} \forall (x_1,\ldots,x_N)\in X,\quad c(x_1,\ldots,x_N)=\sum_{1\leq i< j\leq N}\scal{x_i}{x_j}\leq\sum_{1\leq i\leq N}q_{M_i}(x_i), \end{equation*} where $q_{M_i}(x)=\frac{1}{2}\scal{x}{M_i x}$, and equality holds if and only if $(x_1,\ldots,x_N)\in\Gamma$. Thus, we conclude that $\Gamma$ is the $c$-splitting set generated by the tuple $(q_{M_1},\ldots,q_{M_N})$, and that $A_i=M_i=\nabla q_{M_i}$ for each $1\leq i\leq N$. Consequently, Theorem~\ref{t:split_max_c-mono} implies that $(q_{M_1},\ldots,q_{M_N})$ is a $c$-conjugate $c$-splitting tuple of $\Gamma$, and that $\Gamma$ is maximally $c$-monotone.
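These conclusions are easy to confirm numerically. The following Python sketch (ours, for illustration only; the diagonal matrices are an arbitrary commuting choice) verifies the inequality $c\leq\sum_i q_{M_i}$ at random points, the equality on $\Gamma$, and the identity $\sum_i J_{A_i}=\Id$, using that $J_{A_i}=(\Id+M_i)^{-1}=Q_i\big(\sum_k Q_k\big)^{-1}$.

```python
import numpy as np
rng = np.random.default_rng(0)

# Commuting symmetric positive definite matrices (diagonal for simplicity), N=3, d=2.
Q = [np.diag([1.0, 2.0]), np.diag([3.0, 0.5]), np.diag([2.0, 4.0])]
S = sum(Q)                                      # Q_1 + Q_2 + Q_3
M = [(S - Qi) @ np.linalg.inv(Qi) for Qi in Q]  # M_i = (sum_{k != i} Q_k) Q_i^{-1}

c = lambda xs: sum(xs[i] @ xs[j] for i in range(3) for j in range(i + 1, 3))
qM = lambda A, x: 0.5 * x @ A @ x

# c(x_1, x_2, x_3) <= sum_i q_{M_i}(x_i) at random points ...
for _ in range(200):
    pts = rng.standard_normal((3, 2))
    assert c(pts) <= sum(qM(M[i], pts[i]) for i in range(3)) + 1e-9

# ... with equality on Gamma = {(Q_1 v, Q_2 v, Q_3 v)}.
v = rng.standard_normal(2)
xs = [Qi @ v for Qi in Q]
assert abs(c(xs) - sum(qM(M[i], xs[i]) for i in range(3))) < 1e-9

# Resolvents J_{A_i} = (Id + M_i)^{-1} = Q_i (Q_1+Q_2+Q_3)^{-1} sum to Id.
J = [np.linalg.inv(np.eye(2) + Mi) for Mi in M]
assert np.allclose(sum(J), np.eye(2))
print("quadratic example verified")
```

The resolvent formula follows directly since each $A_i$ here is the linear mapping $M_i$.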
The maximal $c$-monotonicity of $\Gamma$ is also implied by Theorem~\ref{continuous monotone is maximal} via continuity of a parametrization, say, \begin{equation*} \Gamma=\big\{(v,Q_2Q_1^{-1}v,\ldots,Q_NQ_1^{-1}v)\ \big|\ v\in\RR^d\big\}. \end{equation*} \end{example} As a simple application of Example~\ref{ex:quadratics}, we now generalize the well-known classical fact that the only conjugate pair of the form $(f,f)$ is $(f,f)=(q,q)$ and that in this case the generated splitting set is the graph of the identity mapping. \begin{corollary}[self $c$-conjugate tuple] The only $c$-conjugate tuple of the form $(f,\ldots,f)$ is \begin{equation*} (f,\ldots,f)=(N-1)(q,\ldots,q). \end{equation*} In this case, the generated $c$-splitting set is $\Gamma=\Delta$. \end{corollary} \begin{proof} In the setting of Example~\ref{ex:quadratics} we let $Q_i=\Id$ for each $1\leq i\leq N$. Then $\Gamma=\Delta$ and $q_{M_i}=(N-1)q$ for each $1\leq i\leq N$. We conclude that $(N-1)(q,\ldots,q)$ is a $c$-conjugate $c$-splitting tuple and generates the $c$-splitting set $\Delta$. We now prove that it is the only $c$-conjugate tuple of this form. Let $(f,\ldots,f)$ be a $c$-conjugate tuple. Then for $1\leq i_0\leq N$ and for $x_{i_0}\in X_{i_0}$, \begin{equation}\label{self conjugates} f(x_{i_0})=\sup_{i\neq i_0,\ x_i\in H}\Big(c(x_1,\ldots,x_{i_0},\ldots,x_N)-\sum_{i\neq i_0}f(x_i)\Big). \end{equation} By letting $x_i=x_{i_0}$ for every $i$ in the supremum in~\eqref{self conjugates} we see that $$ f(x_{i_0})\geq c(x_{i_0},\ldots,x_{i_0})-(N-1)f(x_{i_0})\ \ \ \ \ \ \ \Rightarrow\ \ \ \ \ \ \ Nf\geq N(N-1)q\ \ \ \ \ \Rightarrow\ \ \ \ \ \ f\geq (N-1)q. $$ Consequently, since $c$-conjugation is order reversing, $$ f=\Big(\bigoplus_{i\neq i_0} f\Big)^c\leq \Big(\bigoplus_{i\neq i_0} (N-1)q\Big)^c=(N-1)q. $$ \end{proof} A construction similar to that of Example~\ref{ex:quadratics}, albeit a nonlinear one, is available when the marginals are one-dimensional.
\begin{example} For each $1\leq i\leq N$, let $\alpha_i:\RR\to\RR$ be a continuous, strictly increasing and surjective function with $\alpha_i(0)=0$. Let $\Gamma$ be the curve in $\RR^N$ defined by $$ \Gamma=\Big\{\big(\alpha_1(t),\ldots,\alpha_N(t)\big)\ \Big|\ t\in\RR\Big\} $$ and for each $1\leq i\leq N$, let \begin{equation}\label{curves} f_i(x_i)=\int_0^{x_i} \bigg(\sum_{k\neq i} \alpha_k\big(\alpha^{-1}_i(t)\big)\bigg)dt. \end{equation} In~\cite[Example 4.3]{BBW2}, it was established that \begin{equation}\label{e:curve_ineq} \sum_{1\leq i<j\leq N} x_ix_j \leq \sum_{i=1}^N\int_0^{x_i} \bigg(\sum_{k\neq i} \alpha_k\big(\alpha^{-1}_i(t)\big)\bigg)dt \ \ \ \ \ \ \ \ \ \ \ \ \ \forall (x_1,\ldots,x_N)\in\RR^N \end{equation} and that equality in~\eqref{e:curve_ineq} holds if and only if $x_j=\alpha_j\big(\alpha_i^{-1}(x_i)\big)$ for every $1\leq i<j\leq N$, namely, if and only if $(x_1,\ldots,x_N)\in\Gamma$. We now conclude that $\Gamma$ is the $c$-splitting set generated by the tuple $(f_1,\ldots,f_N)$ and that for each $1\leq i\leq N$, \begin{equation*} A_i=\nabla f_i=\sum_{k\neq i} \alpha_k\circ\alpha_i^{-1}. \end{equation*} Consequently, Theorem~\ref{t:split_max_c-mono} implies that $(f_1,\ldots,f_N)$ is a $c$-conjugate $c$-splitting tuple of the maximally $c$-monotone curve $\Gamma$. As in Example~\ref{ex:quadratics}, the maximal $c$-monotonicity of $\Gamma$ can also be deduced via continuity. \end{example} A linear example of a different type, where none of the two-marginal projections of $\Gamma$ is monotone, but where, however, $\Gamma$ is $c$-cyclically monotone, is available for $N=3$ and 2-dimensional marginals. \begin{example}\label{ex:gamma ij not mono} Suppose that $N=3$ and that $X_1 = X_2=X_3=\RR^2$. We set $$ M_1=2\begin{pmatrix} 1 & 0\\0 & 0 \end{pmatrix},\ \ \ \ M_2=2 \begin{pmatrix} 1 & 0\\0 & 1 \end{pmatrix},\ \ \ \ M_3=\frac{1}{7} \begin{pmatrix} 8 & 3\\3 & 2 \end{pmatrix} $$ and $$ \Delta_2=\big\{(a,a)\ \big|\ a\in\RR\big\} \subseteq\RR^2.
$$ Set \begin{equation*} f_1=\iota_{\RR\times\{0\}}+q_{M_1},\qquad f_2=\iota_{\Delta_2}+q_{M_2}=\iota_{\Delta_2}+2q,\qquad \text{and}\qquad f_3=q_{M_3}. \end{equation*} Furthermore, set $v_1=\big((0,0),(-1,-1),(1,-5)\big)$, $v_2=\big((1,0),(2,2),(0,7)\big)$ and $$ \Gamma=\spa\{v_1,v_2\}=\Big\{\big((s,0),(2s-t,2s-t),(t,7s-5t)\big)\Big|\ s,t\in\RR\Big\}. $$ It was established in~\cite[Example 3.5]{BBW2} that $$ \scal{x_1}{x_2}+\scal{x_2}{x_3}+\scal{x_3}{x_1}\leq f_1(x_1)+f_2(x_2)+f_3(x_3)\ \ \ \ \ \ \ \ \ \text{for all} \ \ \ \ (x_1,x_2,x_3)\in\big(\RR^2\big)^3 $$ with equality if and only if $(x_1,x_2,x_3)\in\Gamma$; that is, $\Gamma$ is the $c$-splitting set generated by the tuple $(f_1,f_2,f_3)$. It was also established there that none of the two-marginal projections $\Gamma_{1,2}$, $\Gamma_{1,3}$ and $\Gamma_{2,3}$ of $\Gamma$ is monotone. We observe that the matrix representation of the mapping $$ (t,7s-5t)\ \mapsto\ (s,0)+(2s-t,2s-t)\ \ \ \ \ \ \ s,t\in\RR $$ is $M_3$. Consequently, we see that $A_3=M_3=\nabla f_3$. Thus, by employing Theorem~\ref{t:split_max_c-mono} we conclude that $(f_1,f_2,f_3)$ is a $c$-conjugate $c$-splitting tuple of the maximally $c$-monotone subspace $\Gamma$ of $\big(\RR^2\big)^3$. \end{example} In all of our examples thus far, the set $\Gamma$ was a maximally $c$-monotone $c$-splitting set. We now present maximally $c$-monotone sets which are not $c$-splitting sets. To this end, we note the following simple fact: if the set $\Gamma\subseteq X$ is $n$-$c$-monotone, then for each $1\leq i_0\leq N$ the mapping $A_{i_0}:H\rightrightarrows H$ is $n$-monotone. Indeed, let $\Gamma$ be $n$-$c$-monotone and assume, without loss of generality, that $i_0=1$. Let $(x^1_1,\dots,x_N^1),\dots,(x_1^n,\dots,x_N^n)\in\Gamma$ and $\sigma\in S_n$.
Then a straightforward computation shows that the inequality $$ \sum_{j=1}^n c(x_1^j,x_2^{\sigma(j)},\dots,x_N^{\sigma(j)})\leq \sum_{j=1}^n c(x_1^{j},\dots,x_N^{j}) $$ reduces to the inequality $$ \sum_{j=1}^n\bigg\langle x_1^j,\sum_{i=2}^N x_i^{\sigma(j)}\bigg\rangle\leq \sum_{j=1}^n\bigg\langle x_1^j,\sum_{i=2}^N x_i^j\bigg\rangle. $$ Thus, we see that if $\Gamma$ is $n$-$c$-monotone, then $A_1$ is $n$-monotone. To sum up, \begin{quote} if for some $1\leq i_0\leq N$ the mapping $A_{i_0}$ is not cyclically monotone, then the set $\Gamma$ is not a $c$-splitting set. \end{quote} Indeed, otherwise, $\Gamma$ would have been $c$-cyclically monotone (as we recalled after Definition~\ref{monotonicity definitions}) and, by the above argument, for all $1\leq i_0\leq N$ the mapping $A_{i_0}$ would have been cyclically monotone. We now address a trivial embedding of all classical maximally monotone operators in the multi-marginal framework. In particular, we obtain maximally $c$-monotone mappings which are not $c$-cyclically monotone. \begin{example}\label{trivial embedding} Let $A:H\rightrightarrows H$ be a maximally monotone mapping. We define $\Gamma\subseteq X$ by $$ \Gamma=\menge{(x_1,x_2,0,\ldots,0)}{x_2\in Ax_1}. $$ Then $\Gamma$ is $c$-monotone and we see that $A_1=A$ is maximally monotone. Consequently, by invoking Theorem~\ref{t:MMMM}\ref{t:MMMM-ii} we conclude that $\Gamma$ is maximally $c$-monotone. In addition, we see that $A$ is $n$-monotone if and only if $\Gamma$ is $n$-$c$-monotone. Therefore, if $A$ is not $n$-monotone for some $n\geq 3$, then $\Gamma$ is not $n$-$c$-monotone. Furthermore, since the $n$-$c$-monotonicity of a set is invariant under shifts, the set $\Gamma=\menge{(x_1,x_2,\rho_3,\ldots,\rho_N)}{x_2\in Ax_1}$ is also maximally $c$-monotone for any constant vectors $\rho_3,\ldots,\rho_N\in H$.
\end{example} Our next example of a maximally $c$-monotone set which is not a $c$-splitting set does not follow from an embedding of the type in Example~\ref{trivial embedding}. \begin{example}\label{c-mono not cyclically mono} Set $N=3$ and for each $1\leq i\leq 3$ set $X_i=\RR^2$. Let $R_\theta$ denote the counterclockwise rotation by the angle $\theta$ in $\RR^2$. Let the set $\Gamma\subseteq X=\big(\RR^2\big)^3$ be defined by \begin{equation}\label{non cyclically gamma} \Gamma=\Big\{\Big(x,\ \tfrac{\sqrt{3}}{2}R_{-\pi/2}x,\ \tfrac{\sqrt{3}}{2}R_{-\pi/2}x\Big) \,\Big|\,x\in\RR^2\Big\}. \end{equation} It follows that \begin{equation*} \gr A_1=\Big\{(x,\sqrt{3}R_{-\pi/2}x)\,\Big|\,x\in\RR^2\Big\} \quad\implies\quad A_1=\sqrt{3}R_{-\pi/2}. \end{equation*} Since $\Gamma=\Big\{\Big(\tfrac{2}{\sqrt{3}}R_{\pi/2}x,x,x\Big) \,\Big|\,x\in\RR^2\Big\}$, we have \begin{equation*} \gr A_2=\gr A_3= \Big\{\Big(x,x+\frac{2}{\sqrt{3}}R_{\pi/2}x\Big)\,\Big|\,x\in\RR^2\Big\} \quad\implies\quad A_2=A_3=\sqrt{\frac{7}{3}}R_{\arctan(2/\sqrt{3})}. \end{equation*} We see that $A_1$, $A_2$, and $A_3$ are maximally monotone. Consequently, for each $\varnothing\neq K\subsetneq\{1,2,3\}$, the mapping $A_{K}$ is maximally monotone and it now follows from Theorem~\ref{t:MMMM} that $\Gamma$ is maximally $c$-monotone in $X$. Furthermore, since $A_1$ is not $3$-monotone, it is not cyclically monotone and, consequently, $\Gamma$ is not a $c$-splitting set. By a straightforward computation, it follows that \begin{equation*} J_{A_1}=\frac{1}{2}R_{\pi/3},\ \ J_{A_2}=J_{A_3}=\frac{\sqrt{3}}{4}R_{-\pi/6}\ \ \ \ \ \ \text{and}\ \ \ \ \ J_{A_1}+J_{A_2}+J_{A_3}=\Id. \end{equation*} Finally, from~\eqref{non cyclically gamma} it is easy to see that $\Gamma_{i,j}$ is monotone for all $1\leq i<j\leq 3$. \end{example} We see that in the case $N=3$ the set $\Gamma$ is $c$-monotone if and only if the mappings $A_1, A_2$ and $A_3$ are monotone.
In the following example we demonstrate that the monotonicity of all of the $A_i$'s no longer implies the $c$-monotonicity of $\Gamma$ in the case when $N\geq 4$. \begin{example}\label{e:failed partial sum} In~\cite[Lemma~4.2 and Example~4.3]{BBW1} the following was established: in $X=\RR^2$, let $n\in\{2,3,\ldots\}$, let $\theta \in \;\big]\negthinspace\arccos(1/\sqrt{2}),\arccos(1/\sqrt{2n})\big]$, set $\alpha=1/(2n\cos(\theta))$, and denote by $R_\theta$ the counterclockwise rotator by $\theta$. Then the following hold: \begin{enumerate} \item $\alpha R_\theta$ and $\alpha R_{-\theta}$ are firmly nonexpansive. \item\label{ex5.7ii} $n\alpha R_\theta$ and $n\alpha R_{-\theta}$ are not firmly nonexpansive. \item $n\alpha R_\theta + n\alpha R_{-\theta} = \Id$. \end{enumerate} We employ these facts to construct a set $\Gamma$ as follows. We set $N=2n$ and $$ T_i=\begin{cases} \alpha R_\theta, & 1\leq i\leq n;\\ \alpha R_{-\theta}, & n+1\leq i\leq 2n. \end{cases} $$ Define $$ \Gamma=\menge{(T_1 x,\ldots,T_{2n} x)}{x\in\RR^2}\subseteq X={(\RR^2)}^{2n}. $$ It then follows that for each $1\leq i\leq N$, the mapping $J_{A_i}=T_i$ is firmly nonexpansive with full domain. We conclude that the set $\Gamma$ possesses the following properties: \begin{enumerate}[resume] \item for each $1\leq i\leq N$, the mapping $A_i$ is maximally monotone, \item $J_{A_1}+\cdots+J_{A_N}=\Id$. \end{enumerate} However, due to \ref{ex5.7ii}, the mappings \begin{equation*} J_{A_{\{1,\ldots,n\}}}=\sum_{i=1}^n J_{A_i}=\sum_{i=1}^{n}T_i=n\alpha R_{\theta}, \quad\text{and similarly}\quad J_{A_{\{n+1,\ldots,2n\}}}=n\alpha R_{-\theta} \end{equation*} are not firmly nonexpansive; equivalently, $A_{\{1,\ldots,n\}}=R_{-2\theta}$ and $A_{\{n+1,\ldots,2n\}}=R_{2\theta}$ are not monotone. Consequently, by employing Lemma~\ref{l:c-mono_iff_mono} we conclude that despite the fact that $\Gamma$ possesses properties (iv) and (v), it is not a $c$-monotone set.
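For completeness, the identity $A_{\{1,\ldots,n\}}=R_{-2\theta}$ can be verified directly from $A=J_A^{-1}-\Id$ and $n\alpha=1/(2\cos(\theta))$: \begin{equation*} A_{\{1,\ldots,n\}}=\big(n\alpha R_\theta\big)^{-1}-\Id=2\cos(\theta)R_{-\theta}-\Id. \end{equation*} Identifying $\RR^2$ with the complex plane, so that $R_{-\theta}$ acts as multiplication by $e^{-i\theta}$, we obtain $2\cos(\theta)e^{-i\theta}-1=2\cos^2(\theta)-1-2i\sin(\theta)\cos(\theta)=\cos(2\theta)-i\sin(2\theta)=e^{-2i\theta}$, which is precisely $R_{-2\theta}$; the computation of $A_{\{n+1,\ldots,2n\}}=R_{2\theta}$ is analogous.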
\end{example} \begin{remark} In~\cite{BBW2} the two-marginal projections $\Gamma_{i,j}$ of a set $\Gamma\subseteq X$ were employed: it was established that if the $\Gamma_{i,j}$'s are cyclically monotone, then $\Gamma$ is $c$-cyclically monotone, and an explicit construction of a $c$-splitting tuple was provided. However, it was also established that this is a sufficient condition for the $c$-cyclic monotonicity of $\Gamma$ but not, in general, a necessary one, as can be seen in Example~\ref{ex:gamma ij not mono}, where we provide a maximally $c$-cyclically monotone set all of whose two-marginal projections fail to be monotone. In the one-dimensional case (i.e., $X_i=\RR$ for each $1\leq i\leq N$), it was established that $\Gamma$ is $c$-monotone if and only if all of its two-marginal projections $\Gamma_{i,j}$ are monotone. With the exception of Example~\ref{ex:gamma ij not mono}, in all of our examples of $c$-monotone sets in this section the set $\Gamma$ had monotone two-marginal projections $\Gamma_{i,j}$. Thus, a natural question is: \emph{How do the monotonicity and maximal monotonicity of the two-marginal projections $\Gamma_{i,j}$ relate to the $c$-monotonicity and maximal $c$-monotonicity of $\Gamma$?} \begin{proposition}\label{p:mono two projections} Let $X_i=\RR^d$ for each $1\leq i\leq N$ and let $\Gamma\subseteq X$ be a set. Suppose that for each $1\leq i< j\leq N$ the set $\Gamma_{i,j}$ is monotone. Then $\Gamma$ is $c$-monotone. \end{proposition} \begin{proof} The mapping $A_K$ is monotone if and only if for every $(x_1,\ldots,x_N),\ (y_1,\ldots,y_N)\in\Gamma$, $$ 0\leq\Scal{\sum_{i\in K}(x_i-y_i)}{\sum_{j\not\in K}(x_j-y_j)}. $$ Since the right-hand side is equal to $\sum_{i\in K,\ j\not\in K}\scal{x_i-y_i}{x_j-y_j}$ and since, by the monotonicity of $\Gamma_{i,j}$, $0\leq\scal{x_i-y_i}{x_j-y_j}$, we see that $A_K$ is monotone for every $\varnothing\neq K\subsetneq\{1,\ldots,N\}$. Consequently, by Lemma~\ref{l:c-mono_iff_mono}, $\Gamma$ is $c$-monotone.
\end{proof} To the best of our knowledge, the question of whether the maximal monotonicity of the $\Gamma_{i,j}$'s implies the maximal $c$-monotonicity of $\Gamma$ is still open. Finally, we note that the maximal $c$-monotonicity of $\Gamma$ does not imply the maximal monotonicity of the $\Gamma_{i,j}$'s, even when the $\Gamma_{i,j}$'s are monotone. Indeed, in Example~\ref{trivial embedding}, we see that although $\Gamma$ is maximally $c$-monotone, $\Gamma_{i,j}$ is a singleton for all $3\leq i<j\leq N$; thus $\Gamma_{i,j}$ is monotone but not maximally monotone. Even in the case $N=3$, $\Gamma_{1,3}$ is a proper subset of the graph of the zero mapping whenever $\Gamma$ is generated by a maximally monotone mapping $A$ without a full domain. We conclude that in this case $\Gamma$ is maximally $c$-monotone; however, $\Gamma_{1,3}$ is not maximally monotone. \end{remark} \section*{Acknowledgments} We thank three anonymous referees for their kind and useful remarks. Sedi Bartz was partially supported by a University of Massachusetts Lowell startup grant. Heinz Bauschke and Xianfu Wang were partially supported by the Natural Sciences and Engineering Research Council of Canada. Hung Phan was partially supported by Autodesk, Inc.
\section{INTRODUCTION} \label{sec:intro} The data deluge of the past few decades sparked a great deal of research in image compression. In image compression schemes such as JPEG, most of the acquired data is accurately approximated by far fewer coefficients of a basis such as wavelets. An interesting research question was posed by Donoho\cite{Donoho06}: ``why go to so much effort to acquire all the data when most of what we get will be thrown away? Can we not just directly measure the part that will not end up being thrown away?'' Compressive sensing (CS) bypasses this wasteful acquisition process and instead measures only the relatively few values necessary to reconstruct the signal or image. This amazing ability has been the topic of a fast-growing area of research, as is illustrated in a paper by Elad\cite{Elad12} (see his Figure 2). However, translating the fundamentals of CS into practical hardware with significant benefits over conventional imaging has been limited. As stated in a JASON report for 2012\cite{JASON12}, ``The CS literature often deals with idealized situations''. Despite thousands of papers published yearly on this subject, few head-to-head comparisons between CS and conventional imaging techniques appear in the literature. The advantages of CS derive from savings in terms of size, weight, power, and cost compared to a conventional sensor. The disadvantages come from the need to post-process the measurements and a reduction in peak signal to noise ratio (PSNR). This tradeoff indicates situations where implementing a CS architecture might be beneficial. For example, our efforts to directly compare CS image construction and super-resolution to conventional imaging with super-resolution for aerial imagery proved conventional imaging methods to be preferable.
More specifically, we compared using the same low resolution focal plane array (FPA) in a CS camera mode and in a conventional sensor, with the goal of attaining a four times greater resolution image reconstruction (double the width and height resolution of the FPA). For the conventional camera approach we simply used bicubic interpolation to obtain the higher resolution image. Interpolation is computationally fast and only created some blurring in the final result. On the other hand, a variety of compressive sensing approaches in the literature were attempted, such as the CS imager described by Willett et al.\cite{Willett11} and the algorithm provided by Romberg\cite{Romberg08}. In some cases the results contained artifacts, and in all cases the CS approach was more computationally expensive. These experiments indicated that whenever conventional techniques are easily implemented, conventional imaging techniques are preferable to CS techniques. These tests indicate that CS is most valuable for overcoming limitations of conventional techniques; that is, when conventional imaging is hard to do. The aim of this article is to encourage and assist researchers in finding those niche applications where the CS approach provides substantial gains compared to conventional approaches. The main contributions of this paper are twofold: first, to present lessons learned to help find practical, real-world applications, derived from the author's recent experience in evaluating CS applications; and second, to illustrate these guidelines by showing the benefits of CS versus conventional imaging for a new application - sea-skimming missile detection. Some of the past successful applications of CS provide guidance in finding other successful applications. Two such early wins are SparseMRI\cite{Lustig07} and radio interferometry\cite{JASON12}. Analysis of the methods behind these two very different applications shows many similarities.
From this analysis the following guidelines for successful application of CS can be derived: \begin{enumerate} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item CS is most useful when it can overcome hardware, financial, bandwidth, or battlefield limitations, \item There is a priori knowledge of what will be imaged, \item The signal is sparse in the pixel domain or there exists a transformation to create a sparse representation, and additionally the sparsity is orders of magnitude smaller than the number of pixels, \item The measurement matrix is (optimally) incoherent with the sparsifying transform, \item There exists a physically realizable hardware architecture corresponding to the theoretical design. \end{enumerate} The first two conditions come out of practical considerations and eliminate many potential applications for CS. This implies that it is necessary to find niche applications where CS has unusual strengths, such that it is better to use CS for those strengths than to use a conventional sensor. The next two conditions derive from CS theory and are prevalent throughout the literature. The last condition grounds the architecture in a realizable CS camera. In the next section we will provide a brief description of compressive sensing prior to describing a new, practical application for CS. \subsection{Compressive Sensing} Compressive sensing (CS) grew out of the initial work by Emmanuel J. Cand\`{e}s, Justin Romberg and Terence Tao\cite{Candes06}, and David Donoho\cite{Donoho06}. At its heart, compressive sensing is about breaking the Nyquist barrier; that is, the ability to image while collecting far fewer measurements than the Nyquist theorem dictates (twice the highest frequency). This is possible because there is much less useful information in the signal than its bandwidth indicates. The key to compressive sensing's success relies on the signal being sparse or compressible, either in its natural state or in a known basis.
In this situation, solving an underdetermined, linear system of equations can uncover the essential information within the few measurements. Transform coding\cite{Eldar12, Elad10} relies on finding a basis where the signal is sparse or compressible. Here we provide a description of key terms: \begin{enumerate} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item Sparsity - a signal is $k$-sparse if it has at most $k$ nonzero terms or if it can be exactly represented by at most $k$ atoms from a transform or dictionary. \item Compressible - the coefficients of a representation of the signal decrease exponentially, implying that the signal can be accurately represented by its largest $k$ coefficients. \item Coherence/incoherence - the metric used for a measurement transform that is the largest absolute product between a sparse or compressible signal and the measurement matrix. Intuitively, a small coherence means the signals are spread out in the measurement domain, leaving ``clues'' to their existence. The Fourier transform is a good example of incoherence, since a Dirac function or spike in the signal domain is spread out throughout the Fourier/frequency domain. \item Partial Fourier (PF) coefficients - undersampling the Fourier coefficients and retaining only a fraction of them. \end{enumerate} More formally, one seeks to recover a $k$-sparse signal $x \in \mathbf{R}^N $ based on the measurements $y \in \mathbf{R}^M$, each of which is a different linear combination of the entries of $x$. Here one expects $ k < M \ll N $, where a lower bound of $ M \approx k \log ( N ) $ is given in one of the original papers by Cand\`es, Romberg, and Tao\cite{Candes06}, but more recent research claims even fewer measurements are sufficient. In the event that $x$ is not naturally sparse, we assume the existence of a linear transform $\Psi$ such that $x=\Psi\theta$ and the coefficient vector $\theta\in \mathbf{R}^N$ is $k$-sparse.
Given this notation, assume the measurement process is described by the equation \begin{equation} y = A \Psi \theta + n \end{equation} where the $M\times N$ matrix $A$ models the measurement process, and $n\in \mathbf{R}^M$ represents noise in the measurements. CS theory has shown that it is possible to accurately reconstruct $\theta$ given $y$ by solving the optimization problem \cite{Candes06} \begin{equation} \label{eqn:CS} \operatorname*{arg\,min}_{\theta} \parallel \theta \parallel_0 ~~subject~to \parallel y - A \Psi \theta \parallel_2^2 < \epsilon \end{equation} where $\epsilon$ is a small scalar related to the noise variance. If there is no noise in the measurements ($ n = 0 $), the situation is called the exact model and Equation \ref{eqn:CS} reduces to \[ \operatorname*{arg\,min}_{\theta} \parallel \theta \parallel_0 ~~subject~to ~~ y = A \Psi \theta \] In the past few years a significant amount of research was directed to solving Equation \ref{eqn:CS} and reconstructing the original image. This problem is NP-hard, and it is typically solved approximately either by greedy methods or by $L_1$ relaxation methods. Greedy methods include Orthogonal Matching Pursuit (OMP)\cite{Mallat93, Tropp07}, CoSaMP\cite{Needell09}, Subspace Pursuit\cite{Dai09}, thresholding\cite{Elad10}, and others. Since the objective function $ \parallel . \parallel_0 $ in Equation \ref{eqn:CS} is non-convex, it is common in the literature to ``relax'' the objective function with a convex approximation $ \parallel . \parallel_1. $ Hence one instead solves \begin{equation} \label{eqn:L1} \operatorname*{arg\,min}_{\theta} \parallel \theta \parallel_1 ~~subject~to \parallel y - A \Psi \theta \parallel_2^2 < \epsilon \end{equation} which can be solved by linear programming methods\cite{Chen98} for which efficient solvers exist.
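As a concrete illustration of the greedy family just mentioned, the following is a minimal Orthogonal Matching Pursuit sketch; the dimensions, the Gaussian measurement matrix, and the test signal are illustrative choices, not taken from any of the systems discussed in this paper:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily build the support of a k-sparse x with y ~= A x."""
    residual = y.astype(float)
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the column of A most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit restricted to the selected columns, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# toy demo: recover a 3-sparse vector from M = 40 measurements of an N = 200 signal
rng = np.random.default_rng(0)
N, M, k = 200, 40, 3
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[[17, 92, 140]] = [1.5, -2.0, 0.7]
x_hat = omp(A, A @ x_true, k)
```

With $M$ well above the $k \log(N)$ bound, the support and amplitudes are recovered essentially exactly.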
An equivalent solution can be obtained by solving a modified form of Equation \ref{eqn:L1} \begin{equation} \label{eqn:lambda} \operatorname*{arg\,min}_{\theta} \left( \frac{1}{2} \parallel y - A \Psi \theta \parallel_2^2 + \lambda \parallel \theta \parallel_1 \right) \end{equation} where $ \lambda $ is a parameter that balances the solution between the first term, which enforces fidelity to the data, and the second term, which promotes sparsity. In addition to the sparsity requirement, the above solutions require that the sensing matrix $A$ be ``incoherent'' with the signal model $\Psi$. Incoherence can be understood heuristically in the following way: the incoherence property suggests that the measurement system $A$ takes a sparse signal $\theta$ and spreads it out over many observations $y$. If this property holds, information sufficient to recover the vector $\theta$ is contained in the $M$ measurements $y$. For example, if the sparse signal consists of $k$ spikes we might choose $\Psi={\bf I}$ (the identity matrix). In this case the Fourier transform matrix $A$ is optimally incoherent and will spread out information about the spikes throughout the frequency domain. One could also design a CS imaging system so that $A$ is composed of random entries (e.g., each entry equiprobably ``0'' or ``1''). This essentially guarantees that a vector that is sparse in the basis $\Psi$ will be non-sparse in the measurement domain. \subsection{Related Work} A great many papers in the compressive sensing literature discuss how to implement practical compressed sensing systems (e.g., see \cite{Willett11, Romberg09, Wagadarikar08}), but few show CS imaging to be better than conventional imaging when all aspects of the two approaches are weighed and compared.
Most practical CS hardware architectures are described on Igor Carron's website at https://sites.google.com/site/igorcarron2/compressedsensinghardware, and the latest information within the CS field may be found in his daily blog at http://nuit-blanche.blogspot.com/. As discussed above, two credible applications for CS are its use in MRI\cite{Lustig07} and radio interferometry\cite{Li11}. We used these wins as prototypes for establishing the guidelines for successful application of CS. There are a number of papers in the literature applying CS to the detection of stars, such as the paper by Gupta et al.\cite{Gupta11}. The missile detection considered later in this paper bears similarities to star point-source detection algorithms. It also bears some resemblance to the application of compressive sampling to radio interferometry\cite{Li11}, where the use of partial Fourier reconstruction is described. The JASON report on ``Compressive sensing for DoD sensor systems''\cite{JASON12} is closely related to the topic of this article, as the report addresses the question of where CS can be beneficially utilized for DoD radar applications. Some of the guidelines for successful application of CS can be found in this report but are not clearly delineated as in this paper. Ultimately the report finds that CS should be of interest to DoD but that more research, such as this paper, is required to determine useful applications. The paper ``Compressive sampling vs. conventional imaging'' by Haupt and Nowak\cite{Haupt06} addresses the same question that is the focus of this paper. However, that paper does not provide general guidelines nor an example of a new CS application, as is done in this paper. Several recent papers, such as one by Adcock et al.\cite{Adcock13}, point out the gap between CS theory and use in real world applications and show how to bridge it. The CS literature also contains papers on using compressive sampling for target detection and tracking.
The paper by Kashter, Levi, and Stern\cite{Kashter12} describes the application of CS to motion detection. However, their method relies on frame differencing to achieve sparsity; that is, only the moving objects appear in the frame-differenced image. Similarly, the paper by Poon et al.\cite{Poon12} describes a realizable hardware application that they applied to a simplified target tracking problem. Again their method depends on frame differencing, and hence on the objects moving to different pixels from frame to frame. In the missile detection problem, the sub-pixel missile signal remains in the same pixel for tens of frames, and frame differencing eliminates all possibility of detecting the already dim target. A paper by Duarte et al.\cite{Duarte06} describes the signal detection problem using CS principles without reconstructing the image. The missile detection application described here follows a similar concept; that is, the goal is to detect the missile regardless of the quality of the image reconstruction. \section{A NEW APPLICATION OF COMPRESSIVE SENSING} In this section we describe the conventional and new CS approaches to the missile detection problem. Specifically, we are able to demonstrate better Receiver Operating Characteristic (ROC) performance with a lower resolution focal plane array than a previously used, conventional approach. Rather than trying to recover a pristine (high-PSNR) image, we show that by modifying the CS recovery algorithm we can achieve a greater probability of detection at low false alarm rates. This is, of course, the main goal of a missile warning system and hence is a clear demonstration of the potential of CS architectures. Section \ref{sec:missile} briefly describes the sensor system and field data collected to support the system development. Section \ref{sec:CSmissile} describes how to adapt the sensor system and algorithms for a CS approach.
\begin{figure}[tbc] \centerline{ \begin{tabular}{c} \includegraphics[scale=0.30]{OrigImage} \end{tabular} } \caption{Full 45 x 3.7 degree sensor image at 0800.} \label{fig:OrigImage} \end{figure} \subsection{Missile Detection} \label{sec:missile} Several years ago the Naval Research Laboratory (NRL) developed an MWIR sensor system for detection, tracking, and declaration of sub- and supersonic sea-skimming, anti-ship cruise missiles. The objective was to find bright, sub-pixel-sized objects just above the horizon. As part of this effort, both the sensor and detection algorithms were developed, and subsequently tested with field data. This section describes the sensor, algorithms, and the collected field data. The sensor used in this study is a passive staring midwave IR sensor \cite{LNS06}. It consists of an f/2 80 mm diameter optical system with a prism anamorphic element that results in a 3.6 x 48 degree field of view. The focal plane for the sensor is a 25 micron pitch, 512 x 2560, 5 micron cutoff HgCdTe array cooled to 80K and operating at a 30 Hz frame rate. The sensor instantaneous field of view is 130 (vertical) x 300 (horizontal) microradians. Either the 3.7 - 4.25 or 4.6 - 4.85 micron spectral band is chosen, depending on viewing conditions, using a warm filter wheel that is part of the optical assembly. The large focal plane array (FPA) size is required for detecting dim, sub-pixel targets. The data processing functions were broken down into two primary subdivisions, the front-end and back-end processing. This division was required because of the large amount of data from the sensor that must be handled in real time. The primary goal of the front-end processing is image processing and the production of candidate detections, which are called exceedances (referred to as ``xcds'' for brevity in this paper).
The primary goal of the back-end processing is the generation and maintenance of tracks and, for the sea-skimming missile at the horizon detection mode, the correct declaration of threats with a minimum of false alarms. This paper focuses on a realizable CS camera and the front-end algorithms, while the back-end processing remains identical. The detection performance of the system against a surrogate missile target was tested in a series of experiments in September 2005 \cite{LNS06}. The sensor was mounted at a height of 65 feet above sea level on the roof of a building 100 meters from the shoreline. This location provided a clear view to the east over the Atlantic Ocean for the full 45 degree field of view of the sensor. Data was collected between 0800 and 1500 hours. The target consisted of a radiometrically calibrated blackbody infrared source constructed using an internally heated metal plate integrated on a towed decoy. It provided an in-band signature of approximately 4 watts/ster, which is approximately the expected signal from a subsonic missile. The target was towed from a Lear jet flying at an altitude of approximately 2500 feet and a speed of 270 knots. The target altitude was controlled from the tow aircraft and varied between 50 and 150 feet above the sea surface depending on wind conditions. The target can be seen to be above the geometric horizon at ranges exceeding 15 nautical miles for target heights of greater than 50 feet. Representative sensor imagery is illustrated in Figure \ref{fig:OrigImage}. Along with the towed target there were a variety of small surface vessels, both commercial fishing and pleasure craft, that presented point-like sources to the sensor. Perhaps the main challenge in the missile detection problem is the large number of false alarms (FAs) that tend to arise due to clutter in the scene, such as sunlight reflecting off the sea.
The front-end processing must minimize these point-like sources without losing sight of the target, which is relatively dim compared to these bright FAs. To accomplish this goal, background normalized values are computed by the front end. Specifically, this two-dimensional spatial image processing consists of: \begin{enumerate} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item Read in an image frame, \item Single frame spatial demeaning (i.e., convolve with a 1-D, 21 element, zero mean filter), \item Four parallel, optical point spread function (PSF) matched filters, \item Local neighborhood calculation of the spatial variance, \item Computing background normalized values as the filtered results divided by the spatial variance, and \item Thresholding of the background normalized values in order to identify exceedances (xcds). \end{enumerate} A summary of the original spatial detection algorithm is shown in Figure \ref{fig:FrontEndAlgo}. For simplicity, this algorithm was slightly modified for the results in this paper by using only one PSF for the matched filtering. In this paper, applying this algorithm to the collected field data constitutes the conventional approach to missile detection. It is important to note that each of the above listed steps constitutes a {\it linear} operation on the data. This will become important later in the image recovery step of the CS approach.
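To make the data flow concrete, the chain above can be sketched in a few lines of Python; the 3x3 box PSF, the neighborhood size, and the threshold below are illustrative stand-ins for the actual system parameters:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def front_end(frame, psf, demean_len=21, var_radius=5, threshold=5.0):
    """Single-frame spatial chain: demean, matched filter, local variance,
    background normalization, thresholding into exceedances."""
    # steps 1-2: row-wise demeaning with a zero-mean filter (delta minus local mean)
    h = -np.ones(demean_len) / demean_len
    h[demean_len // 2] += 1.0
    demeaned = np.apply_along_axis(lambda r: np.convolve(r, h, mode="same"), 1, frame)

    # step 3: matched filter against the (assumed square) optical PSF
    p = psf.shape[0] // 2
    fwin = sliding_window_view(np.pad(demeaned, p, mode="reflect"), psf.shape)
    filtered = np.einsum("ijkl,kl->ij", fwin, psf)

    # step 4: local spatial variance of the demeaned image
    vwin = sliding_window_view(
        np.pad(demeaned, var_radius, mode="reflect"),
        (2 * var_radius + 1, 2 * var_radius + 1),
    )
    variance = vwin.var(axis=(-2, -1)) + 1e-12  # guard against division by zero

    # steps 5-6: background-normalized values, thresholded into exceedances
    normalized = filtered / variance
    return np.argwhere(normalized > threshold)

# demo: a single unit-intensity point target on an empty background
frame = np.zeros((64, 64))
frame[32, 32] = 1.0
xcds = front_end(frame, psf=np.ones((3, 3)) / 9.0)
```

On this toy frame the only exceedances are the target pixel and its immediate neighbors.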
\begin{figure}[tbc] \centerline{ \begin{tabular}{c} \includegraphics[scale=1.0]{FrontEndAlgo} \end{tabular} } \caption{Block diagram for single frame based spatial processing sea skimming missile detection algorithm.} \label{fig:FrontEndAlgo} \end{figure} \begin{figure}[tbc] \centerline{ \begin{tabular}{c} \includegraphics[scale=0.3]{mask16} \end{tabular} } \caption{Sample of mask for the 128x640 FPA where 15 out of 16 pixels are blocked (shown in black) and only 1 mask pixel is open (shown in white).} \label{mask16} \end{figure} \begin{figure}[tbc] \centerline{ \begin{tabular}{c} \includegraphics[scale=0.31]{PartFourierImage} \end{tabular} } \caption{Direct image reconstruction from partial Fourier coefficients where 15 out of 16 pixels are blocked and only 1 mask pixel is open.} \label{PartFourierImage} \end{figure} \subsection{CS Missile Detection} \label{sec:CSmissile} The goal of a CS missile detection system is to maintain a high level of correct true target detections while obtaining only a fraction of the number of measurements relative to the number of pixels in the original system. Does the sea-skimming missile problem fulfill the conditions for finding a practical application of CS? This particular problem does fit nicely within the CS requirements. The first requirement for a practical CS application is that CS can overcome hardware, financial, or battlefield limitations. In the missile detection scenario the trend is towards more resolution and smaller pixels in order to better detect dim, sub-pixel missiles. However, a four-fold increase in the number of pixels often translates into an order-of-magnitude increase in cost. Hence, CS can combat this long-term trend toward more expensive FPAs. Next, one needs a priori knowledge of what will be imaged in order to develop the sparsifying and incoherent transforms. In this situation the objects of interest are only the point sources within the scene.
Since Fourier transforms are incoherent with delta functions, which are point sources in the discrete realm, they are ideal for this application. Third is the requirement that the sparsity be orders of magnitude smaller than the number of pixels. This is true in this case because there is only one missile and only hundreds of point-like pixels out of over a million pixels; hence a reduction of four orders of magnitude. The fourth requirement is incoherence between the measurement matrix and the sparsifying transform. Here the point-like objects are sparse in the image domain and, as mentioned, the Fourier transform is optimally incoherent. The fifth guideline asks ``Is the hardware for the CS sensor realizable?'' Yes; for the envisioned hardware one can use Fourier optics and a mask to measure some fraction of the Fourier coefficients. For proof of concept and prototyping purposes, we used a simple, random binary mask where each entry is the result of a Bernoulli trial. As clearly explained in the paper by Lustig, Donoho, and Pauly\cite{Lustig07}, ``Randomness is too important to be left to chance''. The choice of the mask is critical to obtaining good results because the mask directly affects the aliasing artifacts. Hence, several masks of this simple type were tested with our imagery and the best one was used in our experiments in Section \ref{sec:results}. Two metrics were used to compare masks in order to pick the best: one metric is to count the number of FAs and pick the mask that produced the fewest FAs; the other metric is to compute the Mean Square Error (MSE) of the image reconstructed with the partial Fourier coefficients and pick the mask that produced the smallest error. The ROC results for both of these masks are displayed in Figure \ref{ROC}, where the ``1st mask'' is the result of the first metric and the ``2nd mask'' is the result of the second metric above.
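The second (MSE-based) selection metric can be sketched as follows; the frame, mask size, and number of candidate masks are toy choices for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

def pf_recon(img, mask):
    """Zero-filled reconstruction from the Fourier coefficients retained by mask."""
    return np.fft.ifft2(np.fft.fft2(img) * mask).real

# a toy reference frame: smooth background plus a few point sources
H, W = 64, 64
yy, xx = np.mgrid[0:H, 0:W]
frame = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 400.0)
frame[10, 50] = frame[45, 12] = 2.0

# draw several candidate Bernoulli masks (1 open pixel in 4) and keep the
# one whose zero-filled reconstruction has the smallest MSE
masks = [rng.random((H, W)) < 0.25 for _ in range(5)]
mses = [np.mean((pf_recon(frame, m) - frame) ** 2) for m in masks]
best = int(np.argmin(mses))
```

The FA-counting metric works the same way, with the MSE replaced by the number of exceedances produced by the detection chain.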
Still, this type of mask is likely quite inefficient, and there exists an abundance of numerical evidence demonstrating the advantage of variable density sampling strategies \cite{Lustig07, Puy11}, which was not tested in this demonstration. Furthermore, Romberg has shown \cite{Romberg09} that greater efficiencies are possible with random modulation masks. For this simulation the scenario consists of a smaller FPA; that is, the pixel size is doubled in each direction. However, the mask retains the original resolution/pixel size, which in this case is 512x2560. By creating a mask that blocks out 3 of 4 pixels, each FPA ``fat'' pixel measures the Fourier coefficient of only the one open mask pixel. The mask is designed to randomly pick which 3 are blocked and which is open. Similarly, we tested reducing the FPA further, to only 128x640 and 64x320 pixels, where the 512x2560 pixel mask blocks 15 out of 16 or 63 out of 64 pixels, respectively. A sample of a section of the 1 out of 16 mask is shown in Figure \ref{mask16}. One approach to solving the missile detection problem is to zero-fill the missing Fourier coefficients and reconstruct the image with an inverse Fourier transform. Figure \ref{PartFourierImage} shows the resulting image from this direct approach of reconstruction from partial Fourier (PF) coefficients, where 15 out of 16 pixels are blocked and only 1 mask pixel is open. Strong artifacts are clearly visible in the image, which interfere with the processing described above in section \ref{sec:missile}. However, as shown in the Results section, thresholding the background normalized image for point-like objects works reasonably well when the SNR of the target is large. This implies that the PF coefficients retain information on the point-like objects in the frame.
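That the PF coefficients retain the point-like objects can be illustrated directly; the scene size, target location, and sampling fraction below are illustrative (the real system works at 512x2560):

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 64, 64
scene = np.zeros((H, W))
scene[40, 17] = 1.0                      # a single sub-pixel point target

# keep roughly 1 out of 16 Fourier coefficients, chosen by a random binary mask
mask = rng.random((H, W)) < 1.0 / 16.0
y = np.fft.fft2(scene) * mask            # the measured partial Fourier coefficients

# zero-filled reconstruction: inverse FFT with the missing coefficients set to zero
recon = np.fft.ifft2(y).real

peak = np.unravel_index(np.argmax(np.abs(recon)), recon.shape)
```

The aliasing sidelobes are spread over the whole frame, so the brightest pixel of the zero-filled image still sits at the target, attenuated by the sampling fraction.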
Instead, previous successful examples of compressive sensing in sparse MRI \cite{Lustig07} and radio interferometry \cite{Li11} point to more promising CS approaches for solving the CS missile detection problem. Following the CLEAN approach\cite{Hogbom74} used in radio interferometry and Donoho, Tsaig, and Starck's stagewise orthogonal matching pursuit \cite{Donoho12}, one can implement a greedy method by iteratively thresholding the background-normalized image and removing the brightest points. This is equivalent to the nonlinear thresholding scheme described by Lustig, Donoho, and Pauly\cite{Lustig07}. Alternatively, following the techniques from the same sparse MRI paper \cite{Lustig07} and from radio interferometry \cite{Li11}, one can solve the following optimization problem as an $ L_1 $ relaxation approach: \begin{equation} \operatorname*{arg\,min}_x \left( \parallel T(x) \parallel_1 + 0.5 \parallel y - M F x \parallel_2^2 \right) \end{equation} where $ x $ is the unknown image, $ y $ is the vector of observed undersampled Fourier coefficients, $ F $ is the Fourier transform, $ M $ is a mask for undersampling the coefficients, and $ T(x) $ is a function that represents the missile detection algorithm described in Section \ref{sec:missile}. In other words, the problem constraint has the detection algorithm ``built in''. By minimizing $\parallel T(x) \parallel_1$ we minimize the number of declared targets. Given proper thresholding parameters in $T$, the false alarms are significantly reduced and only true positives survive the minimization. While this is not at all appropriate for pristine image reconstruction, that is not our goal. There exist many algorithms for solving the $ L_1 $ optimization problem.
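The greedy, CLEAN-style method described above can be sketched as follows: reconstruct by zero-filled inverse FFT, keep the brightest pixels, subtract their masked Fourier contribution from the data, and repeat. This is an illustrative Python sketch with a toy two-point scene, not the exact implementation used in our experiments.

```python
import numpy as np

def greedy_detect(y, mask, n_points=5, n_iter=15):
    """CLEAN-style greedy detection from masked Fourier data y."""
    residual = y.copy()
    estimate = np.zeros(mask.shape)
    for _ in range(n_iter):
        # zero-filled reconstruction of the current residual data
        dirty = np.real(np.fft.ifft2(residual))
        # keep the n_points brightest pixels as candidate detections
        idx = np.unravel_index(np.argsort(dirty, axis=None)[-n_points:],
                               mask.shape)
        peaks = np.zeros(mask.shape)
        peaks[idx] = dirty[idx]
        estimate += peaks
        # remove their (masked) Fourier footprint from the data
        residual = residual - np.fft.fft2(peaks) * mask
    return estimate

# toy demo: two point sources, half the Fourier coefficients kept
rng = np.random.default_rng(1)
scene = np.zeros((32, 32))
scene[5, 7], scene[20, 3] = 10.0, 6.0
mask = (rng.random(scene.shape) < 0.5).astype(float)
est = greedy_detect(np.fft.fft2(scene) * mask, mask, n_points=2)
```

Because each iteration subtracts only the measured fraction of each peak's energy, the estimate at a true point source grows toward its full amplitude over the iterations while mask sidelobes stay small.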
In the Results shown in Section \ref{sec:results} we used the conjugate gradient (CG) method with a total variation prior described in the MRI paper \cite{Lustig07} and the robust TwIST method\cite{Bioucas07}. \begin{table}[tb] \caption{Comparison of the original algorithm on the original image to CS in various configurations. The fewer exceedances above the target pixel intensity, the better the algorithm.} \label{tab:missile} \begin{center} \begin{tabular}{|l|c|} \hline \rule[-1ex]{0pt}{3.5ex} \textbf{Image processing} & \textbf{Number of exceedances above target} \\ \hline \rule[-1ex]{0pt}{3.5ex} Original 512x2560 image algorithm & 90 \\ \hline \rule[-1ex]{0pt}{3.5ex} CS using 256x1280 PF Coeffs via TwIST or thresholding & 10 \\ \hline \rule[-1ex]{0pt}{3.5ex} CS using 128x640 PF Coeffs via TwIST or thresholding & 23 \\ \hline \rule[-1ex]{0pt}{3.5ex} CS using 128x640 PF Coeffs; Remove groups of 5 & 14 \\ \hline \rule[-1ex]{0pt}{3.5ex} CS using 64x320 PF Coeffs via TwIST or thresholding & 32 \\ \hline \rule[-1ex]{0pt}{3.5ex} CS using 64x320 PF Coeffs; Remove groups of 5 & 8 \\ \hline \end{tabular} \end{center} \end{table} \subsection{Results} \label{sec:results} This section presents two tests of this CS missile detection system: a feasibility test of detecting the target in a single frame, and a more comprehensive test of missile detection over 500 frames of video. We first performed a test to determine the feasibility of detecting the surrogate missile target using the CS methodology, using a frame in which the missile SNR is relatively large. Table \ref{tab:missile} compares results of the original algorithm on the original image (512x2560) to CS in various configurations. The original algorithm on the original image is able to find the missile target in the background-normalized image but, unsurprisingly, there are 90 false alarms (FAs) that are of stronger intensity than the true target pixel.
In this test, the fewer FAs above the target pixel intensity, the better the likelihood of target detection and tracking. Doubling the width and height of the pixels gives 256x1280, or a quarter of the pixels, which is the case for CS in the next row of Table \ref{tab:missile}. In our experiments, either TwIST for the $L_1$ optimization or simple thresholding produced the same result: 10 FAs stronger in intensity than the true target pixel. That the CS method improved over the original algorithm with the full image was initially surprising, but later tests showed that this was related to the nature of the image frame used in this experiment. Nonetheless, this result does provide the proof we were seeking that the concept of CS missile detection is valid. The next row of Table \ref{tab:missile} gives the results when the resolution of the FPA is reduced again (to 128x640); here too the result of 23 brighter exceedances improves on the original algorithm. A test of non-linear thresholding, where the brightest 5 exceedances were iteratively removed as described in Section \ref{sec:CSmissile}, is shown in the next row; it demonstrates that the non-linear methodology can improve on simple thresholding. Finally, the last two rows of the table show that the CS methods can work for this image with an even lower FPA resolution of 64x320. Given the satisfactory result from our proof of concept, a more comprehensive test of the CS missile detection system was performed by comparing ROC curves of target detection over 500 frames of video. The target signal varies significantly and randomly from frame to frame, offering a challenging test of the CS technique. The goal is to detect the true target in all 500 frames with a minimum number of FAs. Both conjugate gradient (CG) optimization \cite{Lustig07} weights were set to 0.0001 to include both the sparsity and total variation (TV) constraints.
Figure \ref{ROC} shows the ROC curve for the original algorithm and another for the CS solution by CG with a total variation (TV) prior at 256x1280 resolution. Each ROC is computed by processing each frame with its algorithm, applying a set of thresholds, and recording at each threshold the number of FAs and whether the true target is detected. The numbers of FAs and target detections are then summed over all of the frames and used to plot the ROC line. Figure \ref{ROC} illustrates that our CS method using CG with a TV prior at 256x1280 resolution is competitive with the original algorithm. Of particular interest is the initial behavior: up to 60\% of the target detections, the CS method obtains the target with fewer FAs than the original method. Notably, the initial test described above used one of these frames, which explains why in Table \ref{tab:missile} the CS method obtained fewer FAs than the original algorithm. Finally, we compare computation time for the CS method versus the original algorithm. Running the original algorithm over 500 frames of video using a Matlab function takes 166 seconds of execution time, which is an order of magnitude slower than real time (at 30 Hz, 500 frames last 16.7 seconds). This algorithm does run in real time when implemented on an FPGA. CS using simple thresholding adds a negligible amount of execution time, but the performance of the thresholding was unsatisfactory. Running the CS method using CG with a TV prior at 256x1280 resolution takes 2300 seconds, far slower than real time. Future work may show that GPUs and parallel processing techniques permit running these codes within the real-time requirements of these types of systems.
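The per-threshold bookkeeping behind each ROC curve can be written compactly. In this illustrative Python sketch, `frames` and `target_locs` stand in for the 500-frame background-normalized sequence and its per-frame ground-truth target pixel.

```python
import numpy as np

def roc_points(frames, target_locs, thresholds):
    """For each threshold, sum false alarms over all frames and compute
    the fraction of frames in which the true target pixel exceeds the
    threshold (the detection rate)."""
    pts = []
    for t in thresholds:
        fas, hits = 0, 0
        for img, (r, c) in zip(frames, target_locs):
            exceed = img > t
            hits += int(exceed[r, c])                    # target detected?
            fas += int(exceed.sum()) - int(exceed[r, c])  # all other exceedances
        pts.append((fas, hits / len(frames)))
    return pts

# toy demo: two frames, target intensity 5, one clutter pixel at 3
f = np.zeros((8, 8))
f[2, 2] = 5.0   # true target
f[6, 1] = 3.0   # clutter
pts = roc_points([f, f.copy()], [(2, 2), (2, 2)], thresholds=[4.0, 2.0])
```

Sweeping the threshold from high to low traces the ROC from the low-FA, low-detection corner toward full detection with increasing false alarms.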
\begin{figure}[tb] \centerline{ \begin{tabular}{c} \includegraphics[scale=0.5]{ROC5} \end{tabular} } \caption{ROC comparing the original missile detection algorithm to a CS algorithm over 500 frames of video containing the surrogate missile target. The CS method utilized a conjugate gradient method with sparse and TV priors.} \label{ROC} \end{figure} \section{CONCLUSIONS} The purpose of this article is to encourage and assist researchers in finding those niche applications where the CS approach provides substantial gain over conventional approaches. Now that the theory has advanced so significantly, it is time to focus on finding applications where CS will win in real-world comparisons to current conventional state-of-the-art techniques, such as super-resolution and image deblurring. This paper presented lessons learned to help researchers find practical, real-world applications, then used these guidelines to find and show the potential benefit of a new CS application: sea-skimming missile detection. The results shown here for CS missile detection are only the beginning of the effort required to demonstrate practical target detection. No effort was made to optimize the architecture or mask for the missile detection problem, yet competitive results were obtained with this simple CS method using a lower-resolution focal plane array. This is proof of the potential of the CS method for this application, and additional research on masks, solvers and architectures will certainly improve on these initial results. Finally, the primary message is that all of the excitement surrounding CS is necessary and appropriate for encouraging our creativity, but we all must take off our ``rose-colored glasses'' and critically judge our ideas, methods and results relative to conventional imaging approaches. \section*{Acknowledgments} The author gratefully thanks Jonathan M. Nichols and James R.
Waterman for their assistance and the Naval Research Laboratory for supporting this work. \bibliographystyle{spiebib}
\section{Security Analysis}\label{sec:secanalysis} \begin{figure} \centering \includegraphics[width = 0.45\textwidth]{network_stack.png} \caption{Demonstration of possible attacks.} \label{fig:attack_demo} \end{figure} \subsection{Possible Attacks} \subsubsection{Random Scanning Attack} In a random scanning attack, the attacker scans for open ports, as other botnet malware does, without joining the relay sharing system. Since IoT devices are shielded by NAT gateways, scanning traffic from the Internet will not be able to reach them. Even if the attacker successfully compromises devices located in the same subnet, the non-TLS-N traffic will be discarded by the packet filter that resides in the IoT devices' operating system kernel. As depicted in Figure~\ref{fig:attack_demo}, this type of attack never gets a chance to exploit any vulnerabilities in the application layer. \subsubsection{Attacks against the Relay Server} In the centralized relay model, the relay server becomes an attractive target for attackers because, once it is compromised, attackers can efficiently access millions of its connected IoT devices. In contrast, in our relay sharing system, attacking relay servers is much less efficient because each relay server only connects to a small number of IoT devices. Consequently, attackers need to spend significantly more effort to take down many relay servers with different software implementations, while obtaining much smaller returns. Even if some relay servers are compromised, they will be detected and excluded from the relay system once they are utilized for launching attacks. \subsubsection{Malicious Relay Server Attack} In this kind of attack, a malicious relay server attacks its connected IoT devices by delivering packets containing attack vectors. According to our threat model, the malicious relay server must sign the packets with its blockchain private key to get them accepted by the target IoT device.
As shown in Figure~\ref{fig:attack_demo}, the signed packet is then passed to our relay sharing client middleware. The middleware may itself be vulnerable, which means a malicious relay server does have a chance to bypass the middleware's inspection and take down the device without being reported. However, as one of the core benefits of our work, we propose to deter this kind of attack by imposing ``\emph{economic risk}'' instead of relying on an unrealistically perfect software implementation. Launching attacks inevitably requires the target's platform information, which is acquired by sending probing packets. If the target is not vulnerable to the probing attack, the unauthorized packets will be reported, which results in the malicious relay server losing all its deposit. Because the packet does not originate from the controller, the malicious relay server is not able to prove its innocence through the rebutting process. Although the attacker can rejoin RS-IoT with a new account, the risk of losing the deposit still exists. Moreover, compared with the random scanning attack, it is very slow to traverse IoT devices on the RS-IoT platform by passively waiting to be commissioned. Attackers are thus discouraged because this type of attack is not only risky but also inefficient. \subsection{Fairness Analysis} Since there is no trust between relay users and servers, fairness of the service trading platform is required to prevent cheating by either party. We enumerate all possible cheating scenarios and show how to deal with them using our proof-of-delivery scheme. \subsubsection{Cheating} First, the relay user has an incentive to cheat by denying that it has already received the relayed packet from the relay server. According to the commitment procedure described in Section \ref{sec:pod}, this can be achieved by sending an incorrect commitment back to the relay server.
For this kind of cheating, the relay server cannot verify the commitments $B$ and $B'$ and thus will not reveal the cover stream key $PN$. Without $PN$, the cheating IoT device is not able to extract the desired content, which is equivalent to receiving nothing. Hence, a free ride is impossible due to the packet covering conducted by the relay server. Second, the relay server has the motivation to reap without sowing; that is, it may deliver incomplete packets to IoT devices to reduce its cost. Since the delivered packet is covered by the cover stream, the IoT device cannot verify its integrity before the relay server reveals the cover stream key $PN$. However, the byte-selecting list $Ra'$ is not known to the relay server, so it has no idea which bytes will be selected for composing the commitment. As a result, when an IoT device generates the commitment $B'$ on an incompletely delivered packet, the relay server has no way to figure out a $PN$ that can satisfy the relation $Cover_{PN}(B) = B'$. It then has no way to pass the proof-of-delivery check by the smart contract and obtain the service fee. \subsubsection{Malicious Reporting} Since joining the system as an IoT device requires no deposit, attackers who want to disrupt the system may register a large number of IoT device accounts and use them to maliciously report benign relay servers. Although we design the rebutting scheme, this method may still be able to overload relay servers and undermine the system's performance. We prevent this kind of abuse by utilizing the high-cost nature of on-blockchain function calls, which is usually regarded as a problem, as discussed before. In our reporting process, IoT devices need to pass the whole packet as one of the arguments to the function call, which is very expensive considering the data storage and processing prices on Ethereum~\cite{wood2014ethereum}.
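The covering-and-commitment exchange underlying both anti-cheating arguments can be sketched end to end. This is an illustrative Python sketch with a hash-based cover stream; the real system's primitives, message formats, and the way $Ra'$ is disclosed may differ.

```python
import hashlib

def keystream(pn: bytes, n: int) -> bytes:
    """Cover stream: a counter-mode PRG seeded by the key PN."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(pn + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def cover(packet: bytes, pn: bytes) -> bytes:
    """XOR the packet with the cover stream (its own inverse)."""
    ks = keystream(pn, len(packet))
    return bytes(a ^ b for a, b in zip(packet, ks))

def commitment(data: bytes, ra: list) -> bytes:
    """Select the bytes at the index list Ra' (secret until committed)."""
    return bytes(data[i] for i in ra)

# relay server: covers the packet and delivers it
pn = b"cover-key"
packet = b"controller command payload"
delivered = cover(packet, pn)

# IoT device: commits on the covered packet with its byte list Ra'
ra = [0, 3, 5, 10, 17]
b_prime = commitment(delivered, ra)

# relay server: checks Cover_PN(B) == B' before revealing PN;
# an incomplete delivery would make this relation fail
b = commitment(packet, ra)
ks = keystream(pn, len(packet))
assert bytes(x ^ ks[i] for x, i in zip(b, ra)) == b_prime

# once PN is revealed, the device uncovers the packet
assert cover(delivered, pn) == packet
```

Because the relay server does not know $Ra'$ before the commitment arrives, it cannot craft a $PN$ that makes a truncated delivery pass the check.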
For normal reporting of real malicious packets, this cost is made up by the confiscated deposit of the relay server. However, if an IoT device reports a benign packet, there is no compensation. Malicious reporting thus leads to high cost and is quite uneconomical. \section{APPENDIX} \subsection{Mirai \& its Variants}\label{sec:variant} \subsubsection{Mirai Botnet} The IoT-oriented botnet malware ``Mirai'', which spread from September 2016, caused a record-breaking DDoS with a traffic flow of over 660 Gbps. Soon after, the source code of Mirai was published and can be publicly accessed on Github \cite{mirai_source}. According to the publicized source code, there are about 60 hardcoded default telnet username/password pairs that are used to log into victim hosts. After a successful login, the attacker loads the executable malware and executes it. The compromised device then joins the bot army by scanning for potential victims and reporting login credentials to the botnet master server. After infection, the malware deletes the executable binary from device storage and shuts down all accessible network ports to protect itself. Though Mirai is still a coarse botnet malware without a DGA (domain generation algorithm) or a P2P network to hide the location of the C\&C server, the release of its source code inspired imitators to create much more sophisticated variants. \subsubsection{Hajime Bot\cite{hajime}} Hajime \cite{Hajime_detail} is the first Mirai variant, found on Oct 5, 2016. Unlike Mirai, which is equipped with a centralized malware loader, Hajime utilizes a P2P network to load the malware, which makes its analysis much more difficult. It uses the same scanning and brute-force cracking attack method as Mirai \cite{mirai_source}. The difference is that it utilizes the P2P network to push messages to the bot nodes instead of having the command and control server's domain name hardcoded inside the source code.
Besides, Hajime is a modularized malware, and the author keeps developing new modules, such as CPU architecture support and UPnP IGD vulnerability exploitation, to reinforce the malware's capability. \subsubsection{Satori/Okiru Bot} This Mirai variant was discovered in late November 2017 and targets Huawei's HG532 router. Satori exploits one old vulnerability (CVE-2014-8316) and one zero-day vulnerability (CVE-2017-17215) to take down the target device. Both vulnerabilities are related to command injection in the UPnP SOAP interface. The vulnerability is located in the UPnP IGD (Internet Gateway Device) implementation \cite{Huawei_upnp}, in which the AddPortMapping service is mistakenly exposed to the Internet. This allows attackers to inject commands through the \textbf{NewInternetClient} parameter. \subsubsection{Masuta/PureMasuta} Masuta\cite{HNAP_exploitation} is a newly emerged Mirai variant produced by the same author as Satori. This variant takes advantage of D-Link's HNAP (Home Network Administration Protocol) software as its weapon. HNAP is also a SOAP (Simple Object Access Protocol) based protocol for network admins to manage network devices. This protocol, published by Pure Networks (acquired by Cisco), was reported to have a vulnerability in 2015; however, it was still unpatched by the date the attack happened. In summary, this vulnerability allows attackers to remotely obtain the router's login name and password by sending a forged HTTP GET request. Combined with a remote command execution (RCE) vulnerability in the device's common gateway interface (CGI), attackers can remotely log into victim devices with root privilege and execute arbitrary code. \subsubsection{Deutsche Telekom Bot (TR-069 abuse)} This kind of botnet was first detected in November 2016.
It manipulates the CPE WAN Management Protocol (CWMP), an HTTP-based protocol that enables auto-configuration and remote management of home routers, modems, and other customer premises equipment (CPE). This exploit led to an outage at Deutsche Telekom in late November 2016. \subsubsection{ZyXEL Default Password} This Mirai variant was noticed in November 2017 by 360 Netlab\cite{zyxel}. It was discovered through the detection of large scanning traffic on ports 23/2323 from over 100k scanner IPs in Argentina. During this attack, two new default credentials were used by the malware: \textbf{admin/CenturyLink} and \textbf{admin/QwestModem}. ZyXEL is a Taiwanese router manufacturer; its products were reported in June 2017 to have a vulnerability in the administration login process that lets attackers bypass authentication (CVE-2017-3216 \cite{CVE_bypass}). After that, other vulnerabilities were found in its products, including a backdoor telnet with a default password and CGI command injection (CVE-2017-(7964,6884,15226,17901)). ZyXEL has not finished patching all these flaws and has only advised users to disable WAN management access on their routers, which is enabled by default. \subsubsection{Summary} From this retrospective of Mirai and its variants, we can identify some shared characteristics: \begin{itemize} \item They employ bot nodes to scan random IP addresses. \item They exploit application-level software vulnerabilities. \item Home routers make up the largest portion of the victim devices. \end{itemize} These common characteristics reveal that one important prerequisite for an IoT botnet's propagation is that the victim device can be accessed directly from the Internet. In particular, routers bear the brunt of the malware, as most of them serve as gateways with public IP addresses.
In the Mirai botnet attack, compromised NVRs and IP cameras, which would usually reside behind the Network Address Translation (NAT) shield of home routers, were exposed through the UPnP IGD protocol, which performs port mapping automatically to enable remote user access. Although UPnP IGD was considered dangerous long before the botnet attack, many home router products still ship with this function enabled by default. In the wake of Mirai, most vulnerable UPnP devices are infected and no longer accessible from the Internet, while others may have disabled the port mapping feature and turned to the cloud relay method for remote access, as we discuss in Section~\ref{sec:cur}. As a result, the only target left for botnet malware is the home router, which is inevitably exposed on the public Internet. \subsection{Smart Contract Background} The concept of the smart contract was introduced by Vitalik Buterin \cite{buterin2014next} in 2013. In the Ethereum whitepaper \cite{buterin2014next}, a smart contract is described as script code implemented on the blockchain to deal with digital assets. Unlike the earlier Bitcoin script, Ethereum contracts achieve a general computing platform by enabling Turing-complete functions. Smart contract bytecode derived from high-level script languages is executed in an isolated environment called the Ethereum Virtual Machine (EVM). The smart contract platform can be regarded as a global computer where, in theory, all nodes execute the contract code and then reach consensus on the execution results; incorrect executions with wrong results are discarded by other nodes in the blockchain network and thus excluded from the main chain. The verification of execution results during the consensus procedure gives the Ethereum global computer the features of trusted computing and tamper resistance.
On the Ethereum platform, two types of accounts are provided: 1) a personal account is set up by an individual user for launching ether transfers or interacting with contract functions, and it is protected by a personal private key; 2) a contract account is the location of contract code, whose security is ensured by the aforementioned trusted contract execution. Both types of accounts can store a balance of cryptocurrency. To prevent the computing power of the ``global computer'' from being maliciously occupied, and to remunerate the miners who execute the contract code, each operation of the contract code has an associated processing fee calculated in ``gas'', which is dynamically pegged to the current digital asset unit ``Ether''. Although the execution of contract code is not free, it provides the following favorable features: \begin{itemize} \item The execution of the contract is deterministic and can be verified by any node at any time after the execution. \item The result of the execution is tamper-resistant and non-deniable. \item The correctness of contract execution is ensured. \end{itemize} In our work, the smart contract serves as the arbiter in the relay sharing platform to guarantee fairness in service trading and to execute punishment for misbehavior. \section{Background}\label{sec:background} \subsection{N-version Programming}\label{sec:nvp} N-version programming (NVP) \cite{chen1995n} is a method used in software engineering in which multiple versions of software are created from the same copy of initial specifications. These equivalent copies of the software are developed independently using different approaches. It has been shown that NVP can greatly reduce the influence of identical software faults, and the concept has already been used as an effective defense against software flaws \cite{cox2006n}.
For our proposed relay sharing system, the variety of IoT devices' software implementations makes it impossible for attackers to launch universally applicable attacks and imposes risks on them when they make unsuccessful attempts. \subsection{Non-Repudiable TLS} TLS-N \cite{ritzdorf2018tls} is an extension of the current TLS protocol that adds non-repudiation features. The traditional TLS/SSL protocol only verifies the identity of the other end at the beginning of the session and assumes the consistency of identity after the encrypted session is established. Although this is effective against man-in-the-middle and impersonation attacks, it does not support communication forensics with non-deniable proofs of packet sending. Unlike the normal TLS protocol, which only uses an HMAC in application data packets for message integrity checking, TLS-N enables signatures on each packet transmitted between the communicating peers. Because a verifiable signature is created with the private key, the sender cannot deny sending certain content. The TLS-N protocol thus fills the gap between the imperfection of traditional TLS and the requirement of objective and verifiable message publication for smart contract execution. It is built on top of the normal TLS protocol and is proven to generate proofs that are not only non-repudiable but also privacy-preserving. As tested by its authors, TLS-N incurs less than 1.5 milliseconds of additional time overhead for each HTTP request compared to the original TLS protocol and costs no more than 3 USD for verifying proofs on the public Ethereum blockchain. In our work, TLS-N signatures of malicious packets are used by our reporting system as evidence of relay servers' misbehavior. \subsection{Smart Contract} A smart contract~\cite{buterin2014next} consists of pieces of script code on a blockchain that deal with digital assets. It is executed by all miner nodes in an isolated virtual environment.
The accepted execution results are recorded on the decentralized ledger through consensus mechanisms, which ensures that they are trustworthy and tamper-resistant. Smart contract platforms provide two types of accounts: the personal account is owned by a participating node and protected by a private key; the contract account points to the instance of smart contract code and has no private key. Both types of accounts have account addresses and are able to hold digital assets. \section{Case Study of Efficient Blockchain and IoT Integration: IoT Remote Access}\label{sec:case} Smart home IoT systems enable homeowners to manage their IoT sensors and appliances from both inside and outside their homes. However, accessing IoT devices outside the home's private network is challenging due to the isolation imposed by Network Address Translation (NAT) and the firewall. As a result, IoT remote access requires a relay server with a publicly accessible IP address for bridged communication. In this section, we study the use case of IoT remote access to demonstrate the proposed framework. \subsection{State-of-the-art Solutions} \paragraph{Port forwarding} Direct access through IP addresses may be obstructed by NAT and firewalls, which either shield IoT devices in private networks or filter out inbound connections. The intuitive workaround is to expose the device's port on the public address by setting up port forwarding on the gateway device. However, as pointed out in~\cite{sivaraman2016smart}, NAT has the side effect of isolating IoT devices and protecting them from attack traffic on the Internet. Some consumer IoT devices, such as Belkin's WeMo smart plug and the Philips Hue smart light, accept unauthenticated commands from the local network and require no credentials, in order to simplify users' configuration. Exposing such IoT devices could therefore be dangerous because any host on the Internet gets the chance to compromise them.
For example, a surprisingly high number of IoT devices were infected by the Mirai botnet malware because they were exposed to the public Internet through the UPnP IGD protocol~\cite{krebs2016makes}. \paragraph{Cloud-based Endpoint} \begin{figure} \centering \includegraphics[width = 0.45\textwidth]{relay_basic.pdf} \caption{Relay-assisted IoT accessing model. When both parties are behind NAT, a relay server with a publicly accessible IP address is necessary for bidirectional communication.} \label{fig:basic_relay} \end{figure} Even shielded by the gateway, vulnerable IoT devices in the home private network are still threatened by other compromised devices through the LAN connection, as described in~\cite{sivaraman2016smart}. Modern IoT vendors try to address this problem with a more radical isolation that anchors the access entry of their products on endpoint instances hosted by cloud servers, as shown in Figure~\ref{fig:basic_relay}. With an embedded public key, an IoT device can initiate a secure session towards the endpoint right after it boots up and maintain the session with heartbeat packets. This method enables indirect access through the relay server~\cite{ajitomi2015cost}. Since the session is proactively launched by the IoT device located behind the NAT, access requests encapsulated in the response packets can freely pass through the NAT and firewall, a technique dubbed ``piggybacking''. A typical real-world example is the message broker service~\cite{aws-broker} provided by Amazon Web Services' IoT Core module. It maintains a long-term session with the subscribed IoT devices, over which messaging protocols such as MQTT are carried for real-time device control. Counterintuitively, the centralized cloud can provide attackers, especially botnet attackers, with better approaches for delivering IoT attacks.
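The piggybacking pattern described above can be illustrated in a few lines. In this self-contained Python sketch, `socketpair()` stands in for the device-initiated, NAT-traversing TCP session between device and relay, and the JSON message format is invented for illustration.

```python
import json
import socket

# socketpair() simulates the single TCP connection that the device
# opens outbound; NAT permits replies on this connection.
relay_sock, device_sock = socket.socketpair()

# device -> relay: the heartbeat keeps the NAT mapping alive
device_sock.sendall(
    json.dumps({"type": "heartbeat", "id": "dev-1"}).encode() + b"\n")

# relay: reads the heartbeat and piggybacks a pending user command
# on the reply, which rides back through the NAT
hb = json.loads(relay_sock.recv(4096).decode())
reply = {"type": "ack",
         "command": {"action": "toggle", "target": "light-1"}}
relay_sock.sendall(json.dumps(reply).encode() + b"\n")

# device: receives the command without exposing any inbound port
cmd = json.loads(device_sock.recv(4096).decode())["command"]
relay_sock.close()
device_sock.close()
```

No listening socket ever exists on the device side, which is precisely what keeps it invisible to Internet-wide scans.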
As the cloud server usually groups entries for similar devices from the same vendor, once it is compromised~\cite{cloud-breach}, the attacker suddenly acquires free access to a large number of IoT devices which share similar vulnerabilities. This saves attackers the effort of randomly scanning the Internet, which is time-consuming. Even if the cloud server is not taken down, security flaws in application program interfaces (APIs), as summarized in~\cite{alrawi2019sok}, can also facilitate attacks such as username enumeration, password brute-forcing, and unauthorized privilege escalation. These insecure cloud APIs enable cloud-based IoT botnet attacks~\cite{pierre2017dlink,kimp2p}. Aside from security issues, scalability is another concern. Maintaining encrypted sessions involves device identity management, credential updates, and constant communication, which place heavy pressure on the centralized cloud server. Considering the scale of IoT devices, centralized cloud servers can become a performance bottleneck whose malfunction may result in a large-scale blackout of IoT devices. \subsection{Threat Model} Based on the background knowledge and the aforementioned challenges, we present our threat model. Following the \emph{access-to-attack} assumption, we assume very strong attackers who have the capability to: \begin{enumerate} \item Take down any IoT device to which they can get access, by exploiting application-layer software vulnerabilities.
\item Attackers cannot crack standard cryptographic primitives such as digital signatures. \end{enumerate} These assumptions are practical because of the principle of multi-version programming. IoT applications made by different vendors may have different vulnerabilities, but there is no universally applicable vulnerability given the variety of IoT software. Besides, since the hardware, the operating system, and public middleware libraries such as TLS-N are widely shared by different device vendors and are well tested, in contrast to individually developed application-layer programs, it is much more difficult to find exploitable vulnerabilities in them. Thus, we consider extremely strong attack vectors targeting IoT hardware and fundamental software components to be out of scope. \section{System Model \& Problem Statement}\label{sec:cur} \section{Discussion}\label{sec:dis} \paragraph{Device endpoints} Blockchain data grows without bound, so it is unrealistic for each IoT device to access the public blockchain with its own client. In a real-world deployment, the blockchain client can be hosted on a local edge server (e.g., a home router or an IoT device hub) that has higher computing power and sufficient storage space to host an Ethereum Geth client~\cite{geth}. The edge server opens the Remote Procedure Call (RPC) interface and serves as a proxy for IoT devices to query the blockchain's content and broadcast transactions. As a result, resource-limited IoT devices do not need to store any blockchain data except their own private keys for signing transactions. As the edge server is owned and operated by the same owner, it is trusted by all other IoT devices within the same household's network.
This architecture has already been widely adopted by many other research works and has proved more efficient than the light Geth client~\cite{lightgeth}. \paragraph{Account compromising attacks} Since each IoT device needs to set up its own cryptocurrency wallet for paying the service fee, malicious relay servers have the motivation to steal their customer IoT devices' accounts. Malicious servers may launch attacks hoping to retrieve victim devices' blockchain private keys. In practice, however, this type of attack can be easily defeated by allowing service fees to be deposited from any other blockchain account (e.g., users' personal accounts that are securely protected). There is no need to check the eligibility of making deposits because malicious depositing makes no sense. As a result, there will only be a very small amount of cryptocurrency in an IoT device's account for paying transaction fees, so the earnings from this kind of attack are very limited compared to the high risk of being reported and losing a large deposit. \paragraph{Service Quality} Different relay servers may provide services of different quality. For example, a relay server with more powerful hardware and higher Internet bandwidth can forward packets with smaller latency. To help relay service users choose better service providers, a service rating function can be added to the current smart contract. For a specific relay server, service users that have used it to forward more packets than a pre-defined threshold are eligible to rate its service quality. The smart contract records the total number of ratings and the average rating score along with the server's record in the \textit{serverInfo} table. Hence, on receiving quotes from relay servers, IoT devices have additional information to find the most attractive one with balanced price and service quality.
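The proposed rating extension above can be modelled with a few lines of logic: only users above the packet threshold may rate, and the contract maintains a running average. This is an illustrative Python sketch of the contract logic (the field and function names are hypothetical; the actual contract would be Solidity).

```python
class ServerInfo:
    """Sketch of the rating fields appended to a serverInfo entry."""

    def __init__(self):
        self.num_ratings = 0
        self.avg_rating = 0.0


def rate(info, score, packets_forwarded, threshold=1000):
    """Record a rating for a relay server.

    Eligibility: the caller must have used this server to forward more
    packets than `threshold`. The average is updated incrementally so the
    contract never has to store individual ratings.
    """
    if packets_forwarded <= threshold:
        return False  # caller not eligible to rate
    info.avg_rating = (info.avg_rating * info.num_ratings + score) / (info.num_ratings + 1)
    info.num_ratings += 1
    return True
```

Keeping only the count and the running average keeps on-chain storage (and therefore gas cost) constant per server regardless of how many ratings accumulate.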
\paragraph{Amount of Service Fee and Deposit} This paper concentrates on demonstrating the framework of IoT and blockchain integration. Finding the optimal amounts of the service fee and deposit requires economic theory that is out of the scope of this paper. Intuitively, the required deposit for a specific relay server can be dynamically tuned according to the number of its users. When the number of users reaches a higher level, the relay server should invest an additional deposit accordingly. \section{Conclusion}\label{sec:con} In this paper, we presented a general framework for efficient blockchain and IoT service integration. To address the high cost and overhead of on-blockchain operations, we proposed a distributed architecture that hosts IoT services on third-party servers and uses the blockchain and the smart contract as a naturally trusted authority to enforce fairness and punish attackers. We applied this architecture to the task of IoT remote accessing and designed RS-IoT, a blockchain-assisted distributed relay sharing system. RS-IoT provides secure and robust relay services for users to access their IoT devices behind network address translation (NAT). We utilized ``\emph{an economic approach to cyber security}'' to deter malicious relay servers and achieved it with our novel proof-of-delivery mechanism. By verifying proofs off-chain, the cost and throughput issues of the blockchain are overcome. We demonstrated the cost efficiency of our design with a prototype implementation on the Ethereum testnet. \section{Experiment}\label{sec:exp} To validate the efficiency and usability of RS-IoT, we deploy it on the Ethereum Rinkeby testnet using the Solidity language, version 0.4.21. We then use three Raspberry Pis with Geth (the Golang Ethereum client) and the web3.py package installed to emulate the relay server, the IoT device, and the controller, respectively.
Each Raspberry Pi has one account set up in its Geth client and is topped up with test Ether from the Rinkeby faucet. To minimize the execution cost, we avoid using local variables and store all intermediate results in memory. We set the commitment length $N$ to 32 bytes to avoid exceeding the block gas limit. Based on the cost of gas defined in the Ethereum yellow paper~\cite{wood2014ethereum}, we can accurately evaluate the execution cost of every function that we use. With the gas price set to 2 Gwei (1 Ether equals $10^9$ Gwei) and an Ether price of \$135, we can convert the execution cost to USD as listed in Table~\ref{tab:cost}. \begin{table}[htbp]\caption{Contract Execution Cost.} \begin{center} \scalebox{1.0}{ \begin{tabular}{l c c} \textbf{Entity:Function} & \textbf{Cost in Gas} & \textbf{Cost in USD}\\ \toprule $D$:register & 47k & 0.012 \\ $C$:register & 22k & 0.005 \\ $R$:register & 40k & 0.01 \\ $D$:service request & 1.8k & 0.0003 \\ $D$:service select & 14.3k & 0.003 \\ $D$:service confirm & 22.8k & 0.005 \\ $D$:commitment & 175k & 0.046 \\ $C$:commitment & 151k & 0.039 \\ $R$:commitment verify & 40k & 0.01 \\ $C,D,R$:decommission & 12k & 0.003 \\ $D$:execute & 8k & 0.002 \\ \bottomrule Registration Total & 109k & 0.03 \\ Commission Total & 32.6k & 0.009 \\ Commit Total & 366k & 0.10 \\ \bottomrule \end{tabular} } \end{center} \label{tab:cost} \end{table} As we can see, since all operations of RS-IoT are asynchronous and none of them exceeds the Ethereum gas limit (on average 3,000,000 gas per block), there is no limit on the number of concurrent online relay sessions. From the usage perspective, registration costs 109k gas in total, which is a one-time cost. Commission/decommission costs 32.6k gas, but it is incurred only when the IoT device switches to a new relay server.
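The gas-to-USD conversion used for Table~\ref{tab:cost} is a simple product of gas consumed, gas price, and Ether price. A one-line helper makes the arithmetic explicit (the default prices are the ones stated in the text; they are experiment-time snapshots, not current market values):

```python
def gas_to_usd(gas, gas_price_gwei=2, eth_price_usd=135.0):
    """Convert a gas amount to USD.

    cost = gas * gasPrice (in Gwei) * 1e-9 (Gwei -> Ether) * Ether price.
    """
    return gas * gas_price_gwei * 1e-9 * eth_price_usd
```

For instance, the one-time registration total of 109k gas comes out to about \$0.03, matching the table.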
Though the price of a commitment is high, the asynchronous verification mentioned in Section~\ref{sec:deliver} makes it unnecessary to call it for each delivered packet. Instead, by caching the commitment transactions, verification can be conducted at arbitrarily long packet intervals as long as no dispute emerges. Thus, the cost is amortized to an affordable level. The costs of reporting and rebutting scale with the size of the reported packet, as shown in Figure~\ref{fig:rrgascost}. Though the gas limit of one block is about 7 million gas, operations that consume more than 3.5 million gas become difficult to process. The largest packet size for successful reporting and rebutting in our experiment is 3.5k bytes. \begin{figure} \centering \includegraphics[width = 0.4\textwidth]{gascost.jpg} \caption{Reporting \& Rebutting Gas Cost.} \label{fig:rrgascost} \end{figure} \section{Introduction} The IoT market is flourishing. The number of connected things surpassed the global population in 2017 and is estimated to reach 20 billion in 2020~\cite{20b}. IoT devices such as IP cameras and smart light bulbs have been deployed in numerous smart environments. However, unlike traditional computing platforms such as PCs and smartphones, IoT devices typically lack the computational resources for enforcing strong security mechanisms. Worse, many manufacturers mainly concentrate on novel features, while security flaws in their things are left unpatched for years~\cite{dlinknopatch}. Consequently, massive numbers of vulnerable IoT devices are taken over by botnet attackers for large-scale botnet attacks. The astonishing damage of IoT botnet attacks was demonstrated by the Mirai attack~\cite{Antonakakis2017} in September 2016, when more than 600k IoT devices were infected to launch a record-breaking DDoS attack.
Such IP-scanning-based harvesting attacks mainly target things that have public IP addresses; thus, things behind Network Address Translation (NAT) are protected from them. On the other hand, things isolated by NAT typically connect to their vendor-maintained (or sponsored) clouds, so that messages between things and users' control devices can be relayed by the connected cloud servers and data can be stored in the clouds. Such a centralized remote-accessing model for things, however, has multiple serious limitations in terms of security, availability, and scalability. First, the security of these cloud computing platforms is questionable. Major cloud security breaches have been reported many times~\cite{cloud-breach}, which shows that even major cloud computing providers such as Amazon cannot avoid them, let alone the small clouds maintained or sponsored by vendors of low-cost IoT devices. As we will describe in Section~\ref{sec:cur}, many such cloud servers run without sufficient security measures. Since attacks targeting IoT devices have become increasingly sophisticated, attackers are certainly motivated to compromise these cloud servers, which have access privileges to things that are otherwise protected by NAT, in order to harvest things. Once a cloud server in the centralized model is compromised, a large number of things behind NAT are exposed to attackers. Second, the centralized accessing model is subject to a single point of failure. If a cloud is down, it is unclear how the things that are registered (usually hardcoded in firmware) to use the cloud can be accessed. Third, the centralized model may not be scalable enough to meet the requirements of the rapidly developing market and handle spikes of user requests. Some solutions have been proposed to defeat IoT malware like Mirai.
They employ malware detection or honeypots~\cite{luo2017iotcandyjar,thakar2017reverse} to passively collect malware's characteristics for discovering and taking down the botnet master node. These methods cannot provide comprehensive and timely protection. Other defense methods are either inflexible or require strong security assumptions; e.g., assuming that IoT devices are built on hardware with specific security features~\cite{winter2008trusted,costan2016intel}. \paragraph{This Work} In this paper, we propose \textsc{RS-IoT}, a novel IoT remote accessing solution based on decentralized relay sharing. The intrinsic advantages of the decentralized architecture provide a robust solution to the aforementioned problems of security, availability, and scalability. Leveraging the power of smart contracts, we design a transparent, self-governing relay service trading platform where \emph{monetary incentives} are used to motivate third-party relay service providers' participation and deter potential malicious behaviors. By enforcing monetary penalties, the proposed technique is able to thwart attackers who aim to profit from attacks. More remarkably, the reporting and proof-of-delivery mechanisms are able to tackle zero-day attacks and resolve disputes over cheating without the need for any trusted authority. People have realized that cyber security is not a problem that technology alone can solve and have tried to incorporate an economic point of view as well~\cite{jerman2008economic}; we effectively enable ``\emph{an economic approach to cyber security}''~\cite{economic} in resolving this important security problem. This paper focuses on the security aspect when presenting the system. As we will show, since security is enforced by the proposed technique, IoT device owners on this platform can freely commission arbitrary relay service providers instead of the ones specified by vendors.
Thus, the decentralized nature of the proposed technique naturally resolves the availability and scalability issues of the current centralized model. \paragraph{Contribution} Our main contributions are as follows: \begin{itemize} \item To the best of our knowledge, RS-IoT is the first decentralized relay architecture designed for IoT remote accessing and botnet malware prevention. \item We design a smart-contract-based relay billing scheme for fair relay service trading. The guaranteed remuneration motivates third-party relay providers to join for profit. The designed relay protocol resolves possible disputes by making use of objective and deterministic smart contract code execution. \item Precautionary defense against future unknown malware is achieved by utilizing a misbehavior reporting scheme and the concept of N-version programming~\cite{chen1995n}. Thanks to the diversity of software implementations, a given exploit can only work on a portion of the connected devices. Once the exploit fails on \emph{any} IoT device, the attacker's deposit is confiscated. \item We evaluate the overhead of executing the smart contract code on the Ethereum testnet and show that the solution is practical with acceptable cost. \end{itemize} Conventional centralized communication models, in which all devices connect to a central cloud server, make the cloud server both a bottleneck and a target of attacks~\cite{liang2017towards}. Ideally, we would be able to patch all vulnerable IoT devices, but given the diversity of equipment, architectures, and attacks, this is extremely challenging. The rest of the paper is organized as follows. In Section \ref{sec:background}, we present the background. In Section \ref{sec:cur}, we discuss the weaknesses of the centralized IoT accessing model. In Section \ref{sec:design}, we propose our design of the relay-sharing IoT accessing method.
Section \ref{sec:pod} and Section \ref{sec:penalty} describe the details of the fair relay-work proof and the punishment of misbehavior during relay, respectively. Security analysis and experiments are presented in Section \ref{sec:secanalysis} and Section \ref{sec:exp}, respectively. We present the related work in Section \ref{sec:relatedwork} and conclude in Section \ref{sec:con}. \section{Introduction}\label{sec:into} The IoT market is flourishing. According to Gartner's report in 2018~\cite{predict2018gartner}, 14.2 billion connected things are predicted to be in use in 2019, and 25 billion by 2021. However, security concerns have been raised along with the growth of the IoT market. A series of severe IoT-related attacks have been seen during the past years, such as the Mirai botnet attack~\cite{Antonakakis2017}, the BrickerBot attack~\cite{kolias2017ddos}, and the Deutsche Telekom TR-069 attack~\cite{Antonakakis2017}. Newly developed exploits and hacks targeting IoT devices like~\cite{pierre2017dlink,awesome-iot-attacks} are emerging every day. Considering the scale of IoT deployments, securing these devices becomes a non-trivial task. For example, the Mirai botnet attacker launched a record-breaking DDoS attack by recruiting more than 600K compromised IoT devices as a bot army, causing inaccessibility of many high-profile websites such as Twitter, Reddit, and Netflix. Securing these vulnerable IoT devices is challenging, not only because low-cost IoT devices lack computing resources and I/O peripherals, but also because of IoT vendors' laxity in implementing secure software. Many manufacturers are busy rolling out products with novel features while leaving security flaws unpatched for years~\cite{dlinknopatch,awesome-iot-attacks,tschofenig2017report,schneier2014internet}.
Given the reality of IoT device insecurity, it is reasonable to adopt the \emph{access-to-attack} assumption, under which any attacker that has direct access to an IoT device's open port is assumed to be able to take down the device. Modern IoT vendors try to address this problem by anchoring the access entries of their products on endpoint instances hosted by cloud servers~\cite{alrawi2019sok}. They migrate critical services such as authentication, remote administration, and data collection from vulnerable IoT devices to the more secure cloud server. Unfortunately, the insecurity of cloud servers is nothing new, with prominent examples of high-profile compromises including Equifax~\cite{bernard2017equifax}, Dropbox, and the US voter database~\cite{walters2014cyber}. Since the cloud endpoint serves as the entry point and is trusted by a large number of IoT devices, it may, on the contrary, give adversaries an additional arsenal for launching large-scale attacks~\cite{zhang2015cloud,singh2016twenty}. Recent advancements in blockchains and smart contracts have inspired researchers to seek blockchain-based solutions for their intrinsic advantages in decentralization, fault tolerance, and data integrity. However, incorporating blockchain into IoT is non-trivial due to the blockchain's characteristics and the IoT's requirements. On the one hand, billions of IoT devices run 24/7 and produce enormous amounts of data to be stored and processed in a timely manner. On the other hand, blockchains usually have limited throughput and are costly. Although the smart contract is theoretically Turing-complete, the Proof-of-Work (PoW) consensus mechanism makes it not only expensive but also slow. Currently, most research on blockchain-based IoT solutions still uses the blockchain as a substitute for the cloud server.
Their approaches to solving the aforementioned challenges can be roughly categorized into three types: 1) They study specific services that are latency-insensitive and bring low overhead, for example, IoT authentication and identity management. 2) They use private or permissioned blockchains instead of public blockchains to avoid cost and throughput issues. 3) They turn to edge computing, where edge servers are introduced to mitigate the IoT's resource constraints for interacting with the blockchain. These workarounds either make a trade-off between usability and security or are limited to specific tasks. A general framework to efficiently integrate blockchain into IoT systems is still an open question. \paragraph{This Work} In this paper, we try to fill this gap by rethinking the blockchain's role in the IoT service architecture. Instead of being the host that accommodates services directly, the blockchain is better suited to be a service trading platform where service users and third-party providers can discover each other, establish commission relationships, and settle service fees. The service itself is undertaken by independent third-party providers. The participation of third-party providers decouples the relationship between IoT devices and vendor-operated cloud servers, which allows IoT devices to switch to any provider for better service security and quality. To realize this, the following questions need to be answered: \begin{itemize} \item[\textbf{Q1:}] How to motivate the participation of third-party service providers? \item[\textbf{Q2:}] How to establish mutual trust between service users and providers? \item[\textbf{Q3:}] How to prevent malicious behaviors like cheating, attacking, and denial of service? \end{itemize} We propose \textsc{RS-IoT}, a novel blockchain-assisted IoT relay sharing system, as a case study of the IoT remote access service.
In the system, we introduce third-party relay servers to substitute for the centralized message broker~\cite{happ2017meeting}, enabling two-way relayed communication between a user's controller client device and an IoT device. IoT device owners on this platform can freely commission any relay servers for their devices instead of using those designated by device vendors. First, the decentralized nature of the proposed technique resolves the single-point-of-failure and scalability issues. Leveraging the power of the smart contract~\cite{buterin2014next}, we design a transparent, self-governing relay service trading protocol where \emph{monetary incentives} are used to motivate third-party relay providers' participation and deter potential malicious behaviors. Furthermore, fair and objective dispute arbitration and attack handling are achieved without any trusted authority by using our proof-of-delivery scheme, which involves off-chain proof generation and on-chain verification. As a result, third-party relay servers have an incentive to shield their customer IoT devices, which gives vulnerable IoT devices additional protection against zero-day attacks. \paragraph{Contribution} We rethink the integration of blockchain into IoT systems after a comprehensive literature investigation. Based on this, we propose a novel blockchain-assisted decentralized relay sharing system as a solution to the IoT remote accessing problem. Our contributions are summarized as follows: \begin{itemize} \item We propose a practical framework for blockchain-enabled IoT services and provide a guideline to resolve the blockchain's inherent shortcomings of cost, throughput, and latency. \item We propose \textsc{RS-IoT}, which, to the best of our knowledge, is the first decentralized relay architecture designed for IoT remote access. \item We design a smart-contract-based relay service trading system where disputes are resolved automatically by the smart contract.
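The off-chain/on-chain split of the proof-of-delivery scheme can be illustrated with a keyed-MAC sketch: the receiver acknowledges a relayed packet off-chain, and the resulting proof can later be checked deterministically. This is an illustrative Python model only; the key name is hypothetical, and an HMAC stands in for the signature-based commitments the actual scheme would use on-chain.

```python
import hashlib
import hmac

# Assumed pre-shared key between device and controller (hypothetical).
SHARED_KEY = b"device-controller-key"


def make_delivery_proof(packet: bytes) -> bytes:
    """Off-chain: the receiver acknowledges a relayed packet with a MAC
    over its hash; the relay server stores this as its proof of delivery."""
    digest = hashlib.sha256(packet).digest()
    return hmac.new(SHARED_KEY, digest, hashlib.sha256).digest()


def verify_delivery_proof(packet: bytes, proof: bytes) -> bool:
    """Verification (modelling the on-chain check): recompute and compare.
    Only a valid proof would release the escrowed service fee."""
    expected = hmac.new(SHARED_KEY, hashlib.sha256(packet).digest(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)
```

Because verification is deterministic, any arbiter (here, the contract) reaches the same verdict from the same proof, which is what makes dispute resolution objective.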
\item We achieve precautionary defense against future unknown malware by presenting a misbehavior reporting scheme inspired by the concept of N-version programming~\cite{chen1995n}. Malicious attacking behaviors are deterred because once a malware attack fails on \emph{any} IoT device, the attacker's deposit will be confiscated, causing direct financial loss. \end{itemize} The rest of the paper is organized as follows. We present the relevant background knowledge in Section~\ref{sec:background}. In Section~\ref{sec:integration}, we review existing research on blockchain and IoT integration and propose our framework for converting cloud-based IoT services into blockchain-assisted distributed services. In Section~\ref{sec:case}, we study the use case of IoT remote access and give the threat model. The design overview of RS-IoT is given in Section~\ref{sec:design}. After that, Section~\ref{sec:pod} and Section~\ref{sec:penalty} describe the details of proof-of-delivery and malicious behavior reporting. Security analysis and experiments are presented in Section \ref{sec:secanalysis} and Section \ref{sec:exp}, respectively. We survey related work in Section~\ref{sec:relatedwork} and discuss some implementation issues in Section~\ref{sec:dis}. Finally, we conclude the paper in Section~\ref{sec:con}. \section{Blockchain and IoT Integration}\label{sec:integration} Cloud platforms have proved helpful in protecting resource-constrained IoT devices against security threats~\cite{sajid2016cloud,parwekar2011internet,srivastava2015secure}. However, the risk of a single point of failure and the problem of the scalability bottleneck entice IoT vendors to shift their services from the centralized architecture to a decentralized form. The blockchain's intrinsic properties of decentralization, tamper-resistance, and autonomy make it a promising candidate for use in the IoT paradigm.
Furthermore, the emergence of smart contracts provides a practical path toward a Turing-complete trusted global computer, which has inspired proposals of autonomous IoT systems~\cite{brody2014device}. However, these security benefits of the blockchain are not to be taken for granted. The `world computer' is achieved by using numerous blockchain nodes as redundant backups, which brings high cost and latency for operations on public blockchains. Moreover, despite the blockchain's good scalability in accommodating an unlimited number of participating nodes, the whole network's transaction-processing throughput is limited. For instance, Bitcoin produces a block roughly every 10 minutes, which amounts to about 7 transactions per second, and Ethereum produces a block roughly every 15 seconds. Another issue is cost. All operations that modify the public blockchain's state are subject to transaction fees. Even simple operations like storing or changing a byte on the blockchain can be expensive, let alone complicated tasks such as data processing. Considering the IoT's requirements of large scale and low cost, such use of the blockchain becomes impractical. To find a general and viable solution, we first review existing research on blockchain-based IoT security mechanisms and analyze their strengths and weaknesses in addressing the two mentioned constraints. Based on this retrospection, we propose our efficient integration framework. \subsection{Retrospection of Existing Works} Since the proposal of smart contracts in 2014, there have been many research works on the paradigm of IoT and blockchain integration. We choose three well-presented survey papers~\cite{panarello2018blockchain,christidis2016blockchains,conoscenti2016blockchain} as indices to collect notable works related to this topic for further analysis and discussion. In contrast to these surveys, we focus on categorizing and evaluating the technical methods these papers use to solve the aforementioned problems.
Specifically, we evaluate them in terms of six criteria: \begin{itemize} \item \textbf{Use Case:} The target application or service the work aims to provide for IoT. \item \textbf{Service Architecture:} The participating entities and their roles in the service. \item \textbf{Service Requirements:} The requirements on throughput, latency, and cost. \item \textbf{Scalability:} The scale of deployment. \item \textbf{Blockchain Specs:} The type of blockchain and consensus algorithm they use. \item \textbf{Attack Resilience:} Whether their system can exclude malicious nodes and recover from a system failure. \end{itemize} Based on these criteria, we categorize the collected papers into the following two types: \paragraph{Blockchain as a Ledger} In this type, the blockchain is used as an immutable ledger or a distributed database to store critical information. This is the most straightforward way to integrate IoT and blockchain and is adopted by many early works. \cite{wu2018out} proposes to use the blockchain to store and deliver out-of-band authentication credentials. Since authentication only occurs when devices request sensitive information from the cloud server, it has low requirements for throughput and latency. The authors indicate that they use the Eris blockchain, but they do not specify any details about the implementation, nor a solution to reduce the transaction fees involved. \cite{dorri2017blockchain} designs a multi-layer architecture for blockchain-powered smart home access control. It employs a centralized device to process all transactions and generate new blocks to eliminate the overhead brought by Proof-of-Work. However, this design contradicts the core concept of decentralization and undermines security. Some other works straightforwardly leverage the blockchain's immutability to facilitate distributed data storage and sharing, as discussed in \cite{zyskind2015decentralizing} and \cite{worner2014your}.
They use the public blockchain to store either raw data or the hash of the data. These works avoid the cost and throughput constraints either by choosing specific services that have low requirements for transaction frequency and latency or by using a single centralized miner. \paragraph{Blockchain as a Service} Inspired by the concept of decentralized applications~\cite{raval2016decentralized}, some works regard the smart contract as a trusted computing platform and build richer applications on it. The first category is decentralized PKIs based on the smart contract. \cite{chen2018certchain} introduces an additional party named `bookkeeper' to form the backbone of a permissioned blockchain and store the certificate revocation list. However, it does not specify the origin of and the incentive for `bookkeepers', which implies a limited number of available bookkeepers. Certcoin~\cite{fromknecht2014certcoin} builds a decentralized trust system on top of Namecoin, a public blockchain for DNS services. Although the authors find a solution to mitigate end users' storage burdens, all certificates and public keys are still held on the blockchain. Access control and authorization is another hot topic for blockchain and IoT integration. `WAVE'~\cite{andersen2017wave} proposes a city-wide blockchain to store the metadata of permissions for supporting a variety of access control patterns. The authors craft a set of smart contracts to automatically handle complicated out-of-order access permission delegations. A number of other IoT authorization solutions~\cite{ourad2018using,alexopoulos2018towards,stanciu2017blockchain} use a similar architecture that employs smart contracts to automatically enforce access control policies.
They try to address the cost and throughput issues either by using permissioned blockchains~\cite{chen2018certchain,stanciu2017blockchain,alexopoulos2018towards} or by carefully designing the contract code to minimize the frequency of modifying the global state of the blockchain~\cite{ourad2018using,andersen2017wave}. In summary, the aforementioned works basically follow two approaches. First, they choose specific services that require a low frequency of issuing transactions to avoid intensive on-chain operations, which are subject to fees and latency. Reading blockchain data does not cause a state change and incurs no cost and only minimal overhead. Therefore, the blockchain can host data sharing, PKI services, and access control, which have asymmetric read and write requirements. This also explains why a considerable portion of the blockchain-IoT papers in the survey~\cite{panarello2018blockchain} focus on these topics. The other approach is using private or permissioned blockchains instead of the public blockchain, where the burden of computing-intensive proof-of-work is relieved or avoided. However, this also undermines security because of the much smaller network scale. Some new blockchain techniques such as Hyperledger~\cite{androulaki2018hyperledger} and IOTA~\cite{popov2016tangle} achieve better performance by making trade-offs either on decentralization or on security. \subsection{Rethinking Blockchain and IoT Integration}\label{subsec:rethink} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{abstractmodel.pdf} \caption{Abstract architectures of IoT services. Solid lines represent direct Internet communications and dash-dotted lines represent interactions with the blockchain.} \label{fig:abstractmodel} \end{figure} To explore a feasible way to integrate IoT and blockchain, we first make an abstract model of IoT services, as depicted in Figure~\ref{fig:abstractmodel}.
In a conventional IoT service scenario, users with client devices (e.g., smartphones) can either access their IoT devices directly (case (a)) or through the cloud server for advanced functionalities like cloud storage and automation (case (b)). As mentioned earlier, the blockchain enables IoT services to replace the cloud server with the blockchain itself (case (c)). One promising architecture, proposed in~\cite{zyskind2015enigma} and shown in case (d), combines an ordinary distributed system with the blockchain by moving heavy-load services from the blockchain to third-party servers. The blockchain serves as the coordinator to enforce correctness and fairness. However, that paper mainly focuses on the theoretical model of secure multi-party computation and does not offer details on how to realize it in the IoT domain. We follow this insight and dive into the details of blockchain-assisted distributed IoT services to answer the three questions proposed in Section~\ref{sec:into}. Many blockchains provide the functionality of cryptocurrencies, so the blockchain can be used as a service trading platform where service users (usually IoT devices) consume services offered by third-party service providers, who join for service fees paid in cryptocurrency. To avoid possible disputes, the blockchain can also serve as the intermediary between the two parties: it collects pre-paid service fees from service users and compensates providers when objective proofs are provided. The blockchain can also punish malicious service providers by forfeiting their cryptocurrency deposits to deter attack attempts. \section{Design of RS-IoT}\label{sec:design} In this section, we present our design of the blockchain-based relay-sharing IoT remote access system (RS-IoT), which utilizes the framework we propose in Section~\ref{sec:integration}.
Since a relay server is necessary for accessing IoT devices behind NAT, a feasible solution is to decouple the fixed relationship between the relay server and the IoT devices by replacing the centralized cloud server with a large number of third-party relay servers. Accordingly, a smart-contract-based service trading platform is designed for transaction management and dispute arbitration, with which IoT devices are able to freely choose their relay service providers. Our design brings four prominent benefits: 1) The management platform is completely self-governing and guarantees the fairness of service trading without the help of any trusted authority. 2) Compromised or misbehaving relay servers are reported and excluded from the relay platform to avoid further damage. 3) Malicious relay servers that launch attacks against their connected IoT devices risk being reported and punished. 4) No assumption of robust and secure IoT applications is required. \begin{figure}[ht] \centering \includegraphics[width = 0.45\textwidth]{figures/design.pdf} \caption{Overview of relay-sharing IoT remote accessing. Nodes connected by the same type of line are within the same session of a relay service. A relay server can host multiple control sessions and a controller device can control multiple IoT devices through different relay servers.} \label{fig:design} \end{figure} \subsection{Relay Sharing Model}\label{sec:relay_sharing_model} Taking the smart contract into account, there are four roles involved in RS-IoT as shown in Figure~\ref{fig:design}: the IoT device ($D$), the controller ($C$), the relay server ($R$), and the smart contract ($SC$). Among them, the controller and the IoT device are grouped as the party of the service user and share secret keys, since they are usually owned by the same owner and have common interests. The relay server constitutes the party of the service provider.
The smart contract is script code stored on the public blockchain acting as a fair third party. All roles except the smart contract join the blockchain by generating their own public and private keys, and the smart contract is published on the blockchain with only its account address. We denote their addresses as $addr(D)$, $addr(C)$, and $addr(R)$, respectively. After that, the IoT device, the controller, and the relay server top up their accounts with cryptocurrency as deposits or service fees. The basic unit of relay service is the end-to-end link between an IoT device and its controller client with one commissioned relay server in between. Both the IoT device and the controller are connected to the relay server via TLS-N sessions, and all packets passing through them are signed with the sender's blockchain private key. \subsection{Relay Workflow} This subsection describes the workflow of relay sharing by going over the procedure of forwarding a packet. As described in Figure~\ref{fig:workflow}, the workflow consists of the registration phase, the commission phase, and the relay phase. \begin{figure*} \centering \includegraphics[width = 0.9\textwidth]{workflow.pdf} \caption{Workflow of RS-IoT. Solid lines are function call transactions toward the smart contract and direct blockchain transactions. Dashed lines are direct communications through the TLS-N session.} \label{fig:workflow} \end{figure*} \subsubsection{Registration} With blockchain accounts established, all three parties register their account addresses on the smart contract to indicate their participation by calling a registration function of the smart contract $SC$, creating new items in two tables on the blockchain: \textit{userInfo} and \textit{serverInfo}. Since a service user is uniquely identified by its pair of IoT device ID and controller ID, both of them need to register by providing the other party's address as an argument.
After either one of them calls the registration function, a new item in \textit{userInfo} is created with the confirmation flag set to false. Then, the function call by the other party flips the confirmation flag to indicate a successful user registration. For the relay server, a deposit of cryptocurrency is required to be paid to the smart contract along with the registration function call transaction. The amount of the deposit is stored in \textit{serverInfo} together with the relay server's blockchain address. The function prototypes for registration are listed below: \begin{lstlisting}[language=Solidity] function reg_user(address oppo_end) public {...} function reg_server() public payable{...} \end{lstlisting} \subsubsection{Commission} The commission phase is used for mutual discovery between service users and providers, as well as for setting up service relationships. The commission phase begins with an IoT device calling the \textbf{service request} function, which broadcasts a global event containing the user's registration information. Upon receiving the event, interested relay servers respond with their IP addresses and quotes for the service via direct transactions toward the requesting IoT device's blockchain address. After waiting for some time, the IoT device evaluates the received quotes by deposit and price, and finally makes a decision by calling the \textbf{service select} function with the chosen relay server's blockchain address and price as arguments. Similar to the user registration, this function call inserts an item into the table \textit{serviceList} consisting of the values shown in Table~\ref{tab:keys} and sets the confirmation flag to pending.
\begin{table}[hbp] \caption{Keys of the records in \textit{serviceList}.} \centering \begin{tabular}{l p{6cm}} \toprule \textbf{$Txn$} & index number for each pair of IoT device and relay server\\ \textbf{$Serial$} & counter indexing the number of successfully relayed packets\\ \textbf{$Address$} & blockchain addresses of all involved parties\\ \textbf{$Price$} & cost of forwarding one packet\\ \textbf{$Balance$} & amount of the pre-paid service fee\\ \bottomrule \end{tabular} \label{tab:keys} \end{table} Finally, the chosen relay server confirms the service relationship by calling the \textbf{service confirm} function to change the confirmation flag in table \textit{serviceList} to confirmed. At the same time, an event broadcast is triggered as a log of the relationship binding. Then, the commission is finished. The IoT device launches a TLS-N connection toward the commissioned relay server. The function prototypes for commission are listed below: \begin{lstlisting}[language=Solidity] function service_request(address D, address C) public {...} function service_select(address D, address C, address R, uint price) public payable{...} function service_confirm(uint Txn){...} \end{lstlisting} \subsubsection{Relay}\label{subsec:relay} As all historical transactions on the blockchain are publicly readable, the controller client can easily recover the current serving relay server's IP address and initiate a TLS-N connection toward it. With the shared secret key between the IoT device and the controller client, a long-term symmetric encryption key $K(C,D)$ can be derived to encrypt the messages exchanged between them. The controller client first generates a proof of signed transaction on the original packet, which is sent to the relay server along with the packet itself. On receiving the packet, the relay server uses a random stream derived from a one-time key $PN$ to cover the packet before forwarding it to the IoT device.
Afterwards, the receiver (IoT device) generates another proof transaction using the same algorithm but on the covered packet and sends it back to the relay server. Finally, the relay server creates a new transaction containing the cover key and broadcasts it along with the two received proof transactions. With the cover key, a successful proof verification on the smart contract triggers a transfer of cryptocurrency from the contract account to the relay server's personal account. We illustrate the details of the proof generation algorithm and show its effect in preventing cheating in Section~\ref{sec:pod} and Section~\ref{sec:secanalysis}, respectively. \subsubsection{Decommission} As stated, both the relay server and the IoT device can freely determine when to end the service relationship. The decommission process allows either party to terminate a service relationship by calling the function \textbf{decommission}. $Txn$ is the only argument required to specify the service record to be cleared. After a decommission is initiated, an event is emitted as a notification, and the service record item in \textit{serviceList} is deleted after a pre-defined block time that allows the relay server to finish billing. The remaining pre-paid service fee is paid back to the IoT device's personal account. \subsection{Relay Service Client}\label{sec:middleware} \begin{figure}[ht] \centering \includegraphics[width = 0.4\textwidth]{figures/packet_handler.pdf} \caption{Packet handling procedures of IoT devices. The operating system can filter out traffic that does not originate from the commissioned relay server, and the TLS-N library blocks packets without valid signatures.} \label{fig:packet_handler} \end{figure} The relay service client is middleware installed on IoT devices to handle all relay-sharing logic.
Besides interacting with the smart contract for the aforementioned workflow, this middleware is also responsible for setting the packet filter policy for the operating system kernel and configuring the TLS-N module with a private key. As shown in Figure~\ref{fig:packet_handler}, for any incoming packet, the netfilter of the operating system first filters out connection requests initiated by external hosts. Then, the netfilter dispatches packets that originate from the commissioned relay server to the TLS-N library, which checks their signatures. Only packets with valid signatures are accepted and passed to the relay service client. Here, the client inspects received packets and reports suspicious ones to the smart contract. Different device vendors may have different implementations of the relay service client and its inspection method. \subsection{Billing of Relay Service}\label{sec:billing} During the commission procedure, the pre-paid service fee is transferred to the smart contract's account along with the function call of \textbf{service confirm}. As described in Section~\ref{subsec:relay}, a relay server earns remuneration by presenting a proof of each successfully relayed packet. However, considering the number of relayed packets and the blockchain's aforementioned cost and throughput constraints, verifying each of them on the smart contract is unrealistic. We use the smart contract's local verifiability to make it unnecessary to verify each packet on the blockchain. That is, on receiving proof transactions, the relay server first runs the verification function locally instead of posting them on the blockchain. If the verification is successful, the relay server caches the proofs for the current packet and sends the cover key to the IoT device. If both parties are honest, the IoT device can successfully recover the message with the received cover key.
As long as no dispute occurs, this offline verification can be used for all subsequent packets. When the relay server wants to claim its remuneration, it only needs to verify the proof of the last packet on the blockchain. The difference between the serial number $serial$ recorded in the table \textit{serviceList} and the $serial$ in the proof is counted as the number of successfully relayed packets. The total remuneration is then calculated as the product of the number of successfully relayed packets and the unit service price. After the transfer, the $serial$ in \textit{serviceList} is updated. Because the $serial$ passed as a function call argument is signed by the IoT device, it can be regarded as the IoT device's acknowledgment of successful relay of all packets before it. \section{Penalty \& Dispute Resolution}\label{sec:penalty} Since the relay server serves as the only access entry for all its customer IoT devices, all traffic delivered to an IoT device should originate from its legitimate controller clients. However, relay service providers are anonymous according to the registration phase, which induces the risk of malicious relay servers delivering IoT malware. Based on the non-repudiation feature of TLS-N, we develop a smart contract function for relay service users to report unauthorized traffic from their commissioned relay server. After a log event is issued as a notification, the reported relay server needs to present the sender's TLS-N signature for the reported packet to claim its innocence. Otherwise, the relay server's registration is revoked and all its deposit is confiscated. \subsection{Reporting} \begin{figure}[ht] \centering \includegraphics[width = 0.45\textwidth]{figures/reporting.pdf} \caption{The workflow of reporting suspicious packets. Once the public key derived from the signature matches the address of the accused relay server, a record is inserted into the pending list.
At the same time, it triggers an event to notify the relay server.} \label{fig:report} \end{figure} Upon receiving a packet, the IoT device first verifies the TLS-N signature with the TLS-N library to make sure it really originates from the relay server. Then, the TLS-N library passes the packet to the application layer of the IoT device. If the packet contains malicious content that was not initiated by the controller, the IoT device has a chance to recognize it. According to the theory of multi-version programming (MVP), a perfect attack that can compromise arbitrary IoT devices does not exist, considering the variety of IoT software types and architectures. Once a malicious packet fails to take down a device, the device can use the relay server's TLS-N signature to report this misbehavior to the smart contract. The reporting process starts with a function call transaction toward the \textbf{reporting} function of the smart contract. As shown in Figure~\ref{fig:report}, the function call contains as arguments the transaction number, the suspected packet's serial number, the content of the packet, and the relay server's TLS-N signature. Since the TLS-N signature is signed with the relay server's private key, given the content of the original packet, any party including the smart contract can retrieve the relay server's public key and then derive the blockchain address $addr(R)$. The smart contract verifies the validity of the signature by deriving the address from it and comparing it with the address stored in the \textit{serviceList}, which is indexed by the transaction number argument. If the recovered address matches the one in the list, the reported packet indeed came from the relay server, and the smart contract takes it as a valid report.
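The validity check above can be sketched as follows. This is a simplified illustration rather than contract code: recovery of the public key from the TLS-N signature is abstracted into an input, the helper names are hypothetical, and Python's `hashlib.sha3_256` over the raw public key stands in for Ethereum's Keccak-256-based address derivation.

```python
import hashlib

def derive_address(pubkey: bytes) -> str:
    # Ethereum-style address: last 20 bytes of the hash of the public key
    # (hashlib.sha3_256 used here as a stand-in for Keccak-256)
    return hashlib.sha3_256(pubkey).digest()[-20:].hex()

def validate_report(service_list: dict, txn: int, recovered_pubkey: bytes) -> bool:
    """Accept a report only if the address derived from the signer's
    recovered public key matches the relay server address stored in
    serviceList under the given transaction number."""
    record = service_list.get(txn)
    if record is None:
        return False
    return derive_address(recovered_pubkey) == record["relay_addr"]
```

If the derived address does not match the stored $addr(R)$, the report is rejected without creating a pending record.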
Then the smart contract creates a new record in the report pending list, which contains the transaction number, the serial number, the reported packet, and the current latest block number. At the same time, a notification is emitted and broadcast on the blockchain to inform the relay server to process this accusation. The function prototype for reporting is listed below: \begin{lstlisting}[language=Solidity] function reporting(uint Txn, uint serial, bytes packet, bytes32 signature){...} \end{lstlisting} \subsection{Rebutting} \begin{figure}[ht] \centering \includegraphics[width = 0.45\textwidth]{rebutting.pdf} \caption{The workflow of rebutting a pending report record. The relay server proves its innocence by presenting the sender's TLS-N signature of the suspected packet and the cover key so that the smart contract can uncover the packet and verify the TLS-N signature.} \label{fig:rebut} \end{figure} Even if the pending report record is successfully generated, it only shows that the reported packet was sent by the relay server, which is not enough to determine whether the packet is malicious. As a result, we design the \textit{rebutting} function for the accused relay server to defend its innocence by proving that the reported packet originates from the controller device. As shown in Figure~\ref{fig:rebut}, it involves a similar process that starts with a function call transaction toward the rebutting function of the smart contract. Among the provided parameters, the transaction and serial numbers are used to locate the pending report record. Since the packet in the pending list was covered by the relay server with the cover key, the relay server needs to provide the cover key to recover the original packet it received from the controller. Afterward, with the original packet and the controller's signature, the controller's address $addr(C)$ can be derived in the same way as in the reporting process.
If the derived address matches the one stored in the \textit{serviceList} table, the reported packet was indeed initiated by the controller and the relay server did nothing malicious. In that case, the smart contract deletes the record from the report pending list. Otherwise, the rebutting fails and the record remains in the pending list. The function prototype for rebutting is listed below: \begin{lstlisting}[language=Solidity] function rebutting(uint Txn, uint serial, bytes32 packetHash, uint PN){...} \end{lstlisting} \subsection{Executing} Since the reporting notification takes some time to broadcast on the blockchain, a grace period is provided for the relay server to respond to the accusation. The grace period can be measured by the number of newly generated blocks, because the generation of new blocks usually means consensus among all participating nodes in the network and the completion of state updates. If the reporting record remains in the pending list after the end of the grace period, the IoT device that initiated the report is eligible to execute the penalty by sending a function call transaction toward the \textit{executing} function of the smart contract. This time only the transaction number and the serial number are needed to locate the record in the pending list. The smart contract first traverses the report pending list to check the existence of the referred reporting record. Then, it calculates the number of blocks newly generated after the reporting. If the difference is larger than the pre-defined grace period, the smart contract executes the penalty: the relay server's deposit stored in the smart contract account is transferred to the reporting IoT device's account. Moreover, the misbehaving relay server's registration information in the \textit{serverInfo} list is deleted to revoke its qualification as a relay server.
As a result, it can no longer be commissioned by any IoT device in the future. Since the public blockchain is anonymous, a malicious relay server may rejoin the relay system with a different address to continue its attacks. However, registering as a new relay server requires the attacker to pay the deposit again, which causes economic loss to the attacker. The function prototype for executing is listed below: \begin{lstlisting}[language=Solidity] function execute(uint Txn, uint serial){...} \end{lstlisting} \section{Proof-of-delivery}\label{sec:pod} In a real-world service trading system, both the service user and the service provider have an incentive to cheat: the relay server may deliver broken, modified, or forged packets to make extra profit, while the relay user (including the controller and the IoT device) may deny the receipt of packets to avoid payment. To solve this problem, we propose an autonomous proof-of-delivery solution that resolves possible disputes fairly by utilizing the smart contract as a decentralized trusted computing platform. First, we design a SHA-3-based~\cite{dworkin2015sha} key stream generator for the relay server to hide the content of the original packet. Then, leveraging the smart contract's local verifiability, we propose an off-line blind proof generation algorithm that derives proofs of packet delivery on both the original and the covered packet. During operation, the relay server holds the cover key while it asks the IoT device for the proof of the covered packet. On one hand, without receiving the correct proof, the relay server will not reveal the cover key that the receiver needs to extract the message. On the other hand, the relay server is not able to obtain the correct proof to claim its reward if the packet is not delivered intact. This solution provides a mutual restriction between the user and the server so that neither of them has the opportunity to cheat.
Thanks to the smart contract's off-chain verifiability, the proof only needs to be posted on the blockchain when disputes occur rather than every time a packet is relayed, which greatly reduces the cost of operation. For clarity, we list all symbols to be used in the following notation table: \begin{table}[htbp] \caption{Notations} \centering \begin{tabular}{r p{6cm}} \toprule $K_{(C,D)}$ & pre-shared encryption key between $C$ and $D$\\ $S_{(C,D)}$ & pre-shared secret for the bytes selector\\ $Ra\ \&\ Ra'$ & byte selection index lists\\ $PN$ & secret key for the cover stream generator\\ $B$ & selected bytes of the uncovered packet\\ $B'$ & selected bytes of the covered packet\\ $Tx(B)$ & contract function call with $B$ as argument\\ $N$ & commitment length\\ $i$ & self-incrementing counter\\ \bottomrule \end{tabular} \label{tab:TableOfNotationForMyResearch} \end{table} \begin{figure}[ht] \centering \includegraphics[width = 0.45\textwidth]{figures/pod.pdf} \caption{The workflow of proof-of-delivery. Dotted lines mean signed transactions are sent to the relay server. After the verification, the relay server publishes them on the smart contract to get its service fee.} \label{fig:pod} \end{figure} \subsection{Components} Considering the high cost of storing and processing data on the blockchain, it is not practical to verify the entire packet on the smart contract because of the unbearable latency and cost. Although existing cryptographic primitives already provide satisfying digest-based security features, hardly any of them are available as built-in functions on popular public blockchain platforms such as Ethereum~\cite{wood2014ethereum}, and implementing them in the form of script code would make the cost unbearable. To overcome these difficulties, we design the \textbf{Bytes Selector} and the \textbf{Cover Stream Generator} as new primitives based on Ethereum's built-in functions.
The Bytes Selector is used to extract fixed-length byte streams from arbitrary packets, while the Cover Stream Generator is used to generate cover streams that hide the content of a packet. These new components provide comparable security while remaining low-cost. \subsubsection{Cover Stream Generator} To prevent the loss of service fees when service users maliciously deny receiving relayed packets, the relay server covers the content of packets to be delivered to the receiver with a stream cipher. The relay server only reveals the cover key when it gets valid commitments from the receiver as the proof of successful delivery. The cover stream generator is an alternative implementation of a stream cipher on the smart contract. Since there is no suitable pre-compiled script function in the current version of the Ethereum blockchain, implementing a standard stream cipher would be extremely expensive. Therefore, we build the cover stream generator based on the SHA-3 hash function, which is the cheapest built-in operation on Ethereum~\cite{wood2014ethereum}. The cover stream is generated by concatenating hashes of the sums of a secret key $PN$ and a self-incrementing counter $i$. To keep high entropy of randomness, we only retain the highest-indexed byte of each 256-bit SHA-3 hash result. The cover stream is generated as shown in the equation below, where `$|$' denotes the concatenation of hash values. \begin{equation} cover(PN) = SHA3(PN)|SHA3(PN+1)|\cdots \end{equation} Aside from using a low-cost building block, this hash-chain-based stream cipher also reduces cost by enabling selective key stream generation at arbitrary positions. For example, to encrypt or decrypt the content at the $K$th byte with a given key $PN$, we can simply calculate $SHA3(PN+K-1)$ and take the retained byte as the key stream byte instead of producing the key stream from the beginning. Thus, the overhead of generating and storing the whole stream for large packets is avoided.
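A minimal sketch of this generator follows, assuming one retained byte per hash as described above; Python's `hashlib.sha3_256` is used as a stand-in for Ethereum's Keccak-256, and the function names are illustrative:

```python
import hashlib

def stream_byte(pn: int, i: int) -> int:
    """Key stream byte at position i: the highest-indexed (last) byte of
    SHA3-256(PN + i), per the one-byte-per-hash scheme above."""
    digest = hashlib.sha3_256((pn + i).to_bytes(32, "big")).digest()
    return digest[-1]

def cover(packet: bytes, pn: int) -> bytes:
    """XOR the packet with the hash-chain key stream. Applying cover()
    twice with the same PN recovers the original packet."""
    return bytes(b ^ stream_byte(pn, i) for i, b in enumerate(packet))
```

The random-access property is visible in `stream_byte`: the key stream byte at any position is a single hash away, so no prefix of the stream needs to be generated or stored.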
\subsubsection{Bytes Selector} As an analogy, the bytes selector serves a similar purpose to a Hash Message Authentication Code (HMAC) function. The difference is that our bytes selector retains commutativity with our cover scheme (i.e., the digest of the covered packet equals the covered digest given the same secret keys). The bytes selector is driven by the secret $S(C,D)$ shared between the controller client $C$ and the IoT device $D$. Both $C$ and $D$ use this secret as the seed of a pseudo-random number generator. For each packet, $C$ and $D$ synchronously generate $N$ 16-bit random numbers, denoted as $Ra = \{ra_1,ra_2,ra_3,\cdots,ra_N\}$. Assuming the length of the packet is $L$, the byte selection list $Ra' = \{ra'_1,ra'_2,ra'_3,\cdots,ra'_N\}$ is derived from $Ra$ with $ra'_i = ra_i \mod L$. Thereafter, a list of $N$ bytes $B=\{b_1,b_2,b_3,\cdots,b_N\}$ is extracted from the target packet, where $b_i$ is the $ra'_i$th byte of the packet. The generated byte list is random, and there is no way to recover the byte locations in the packet without knowledge of the random seed $S(C,D)$. \subsection{Proof-of-Delivery Workflow} The proof-of-delivery comprises four steps: 1) The sender (assumed to be the controller, because the control session is usually initiated by it) generates the first commitment with our bytes selector on the original packet and sends it to the relay server. 2) The relay server covers the packet and forwards it to the receiver (the IoT device). 3) The receiver generates the second commitment on the covered packet using the same method and sends it back to the relay server. 4) The relay server verifies the two received commitments with the cover stream key and reveals the key to the receiver if the commitments are valid. \subsubsection{Bytes Commitment by the Controller Client} We assume the controller $C$ wants to send a packet $msg$ to the device $D$ through the relay server $R$.
$C$ first encrypts the message with the pre-shared symmetric key $K_{(C,D)}$ and generates the encrypted packet $En_{K(C,D)}[msg]$. Then, it rolls the random number generator with the pre-shared secret $S(C,D)$ to obtain $Ra'$ as the indices of the bytes to be selected. After that, it extracts the bytes of $En_{K(C,D)}[msg]$ indexed by $Ra'$ to form the commitment $B$ and uses it as the argument of the transaction toward the \textbf{commitment} function. Finally, the transaction $Tx(B)_C$ is signed with the controller client's private key and sent to the relay server $R$. \subsubsection{Bits Covering by the Relay Server} On receiving the encrypted packet $En_{K(C,D)}[msg]$ and the commitment $B$ from the controller client $C$, the relay server $R$ generates a random number $PN$ as the seed of the cover stream generator to produce a pseudo-random stream. It uses the cover stream to cover the packet with the exclusive-OR operation, as illustrated in the equation below. After that, the covered packet is forwarded to the IoT device. $$Co(En_{K(C,D)}[msg]) = En_{K(C,D)}[msg]\ \oplus\ cover(PN)$$ \subsubsection{Bytes Commitment by the IoT Device} The packet received from the relay server is covered by the cover stream. Using the bytes selector, the IoT device selects the bytes at the same locations of the covered packet, denoted as $B'$. Then, the IoT device packages $B'$ together with the byte selection list $Ra'$ into a function call transaction $Tx(B',Ra')_D$. The signed transaction is sent back to the relay server $R$. \subsubsection{Asynchronous Delivery Verification}\label{sec:deliver} Upon receiving both commitment transactions $Tx(B)_C$ and $Tx(B',Ra')_D$, the relay server has all of the materials needed to verify the correctness of the commitments locally. The relay server then checks whether $B\ \oplus\ B'$ equals $cover(PN)$ at the positions specified by $Ra'$.
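This local check can be sketched in Python as follows. It is a simplified model under stated assumptions: one key stream byte per hash (with `hashlib.sha3_256` standing in for Ethereum's Keccak-256), an ordinary seeded PRNG modeling the synchronized byte selector, and hypothetical function names:

```python
import hashlib
import random

def stream_byte(pn: int, i: int) -> int:
    # cover stream byte at packet position i: last byte of SHA3-256(PN + i)
    return hashlib.sha3_256((pn + i).to_bytes(32, "big")).digest()[-1]

def select_indices(secret: bytes, n: int, length: int) -> list:
    # Ra': n pseudo-random byte positions derived from the shared secret S(C,D)
    rng = random.Random(secret)
    return [rng.randrange(2 ** 16) % length for _ in range(n)]

def verify_commitments(b: list, b_prime: list, ra_prime: list, pn: int) -> bool:
    # B xor B' must equal the cover stream at every selected position
    return all(x ^ y == stream_byte(pn, i)
               for x, y, i in zip(b, b_prime, ra_prime))
```

In use, the controller selects $B$ from the encrypted packet, the relay server covers the packet, the IoT device selects $B'$ from the covered packet at the same positions, and the relay server runs `verify_commitments` locally; any tampering with the packet or a commitment at a selected position makes the check fail.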
If the verification is successful, the relay server prepares another transaction $Tx(PN)_R$ signed by itself with $PN$ as the argument. These three commitment transactions constitute the proof of delivery, which is cached by the relay server. When the commitments are presented to the smart contract $SC$, the same verification as performed by the relay server is executed. Payment for the relay service is transferred to the relay server upon successful commitment verification. To avoid the latency and the transaction fees caused by executing the smart contract, the relay server caches all commitments instead of verifying them on-chain immediately. It delivers the cover key $PN$ through the relay connection to the IoT device after a successful local verification. When it wants to withdraw the payment, it verifies the commitment of the newest packet. As described in Section~\ref{sec:billing}, the verification of all previous commitments is unnecessary. \section{Related Work}\label{sec:relatedwork} \subsection{IoT Malware Defense} After the outbreak of the Mirai botnet, various defense schemes have been proposed. They can be categorized into three types. The first type is the honeypot, as in \cite{pa2015iotpot,luo2017iotcandyjar,Antonakakis2017}, which is usually for research purposes only. The honeypot is a computer with publicly accessible IP addresses and powerful traffic monitoring and logging systems that imitates normal IoT devices. It functions as a trap to detect botnet scanning traffic and then entices the loading of executable malware files by pretending to be compromised. In this way, honeypot operators can acquire malware samples at the earliest time. Analysis of the malware samples gives security experts clues about the address of the attack master's command-and-control server and helps them make security patches.
However, because of the lack of reliable firmware update methods, it is hard to push security patches to vulnerable devices, which undermines the effect of this kind of defense. The second type comprises secure software implementation guidelines and modules such as intrusion detection systems (IDS) designed for IoT devices and IoT-connected cloud servers. \cite{barrera2017idiot,payne2018securing} fall into this type. \cite{barrera2017idiot} proposes to enforce security policies on IoT devices to filter out abnormal or unnecessary packets, and \cite{payne2018securing} offers guidelines from the perspectives of network configuration and device deployment. Both of these works require assumptions about the robustness of deployed software, which do not always hold. The last type of defense, as in \cite{cao2017hey,lauria2017footprint,goodin2017brickerbot}, is actually inspired by the Mirai botnet malware itself: it discovers vulnerable IoT devices by random scanning. Upon discovering new targets, these schemes either dig into the system to expel the hidden malware or report the vulnerable device to the authorities and the owner. However, this type of defense is limited in usability, as there is normally no reliable way to notify the owner of discovered vulnerabilities, and purifying an IoT device without the owner's approval raises legal concerns. \subsection{Blockchain based IoT Service} Though blockchain has proven to be a successful technology for years, the high transaction fees and limited network throughput have long been an impediment to adopting it for IoT. Nevertheless, several works have sought ways to integrate the power of blockchain into IoT systems. In \cite{andersen2017wave}, a smart contract is used as an IoT authorization platform, where the smart contract serves only as an online distributed ledger to record permission changes.
In \cite{hardjono2016cloud}, blockchain is used for secure device commissioning and data privacy protection with the help of secure hardware. In \cite{dorri2017blockchain}, a self-mined private blockchain is deployed in the home network as a ledger of access privileges and access control policies. However, none of these works solves the problem of IoT malware propagation. Although the idea of imposing economic incentives and penalties has already been proposed in Enigma~\cite{zyskind2015enigma}, it only provides a theoretical model for secure multi-party computation rather than a usable guideline for IoT and blockchain integration. Enigma conducts most of its operations on the blockchain, which is inevitably expensive. In comparison, our proof-of-delivery scheme can run completely off-chain as long as no dispute happens. Other related works, such as NameCoin~\cite{loibl2014namecoin} and FileCoin~\cite{benet2018filecoin}, also use economic incentives, but they apply them to specific tasks that involve a low frequency of on-chain operations and do not propose a framework for off-chain verification.
\section{Introduction} \label{sec:intro} Silicon-based quantum devices are one of the brightest prospects for the future of quantum computing, achieving long coherence times \cite{pla2013high,veldhorst2015two,huang2019fidelity,yoneda2018quantum} and having potential for mass-production \cite{ansaloni2020single,zwerver2021qubits,wu2021strong}. In building a full-scale quantum computer, achieving fault-tolerance is also of utmost importance as devices are scaled up in complexity and number of qubits \cite{fowler2012surface,veldhorst2017silicon}. Scalable fault-tolerant architectures have generally been envisioned to require some form of long distance coupling between qubit arrays \cite{vandersypen2017interfacing}. Moving the qubits themselves is one of the promising strategies shown across various material platforms with both theoretical and experimental studies \cite{greentree2004coherent,bertrand2016fast,fujita2017coherent,nakajima2018coherent}. Other strategies being adopted include the coupling of spins to photons in a cavity \cite{vermersch2017quantum,mi2017strong,samkharadze2018strong,borjans2020resonant}, the use of a spin bus \cite{friesen2007efficient}, using spin chains \cite{di2010quantum}, transporting electrons with surface acoustic waves (SAW) \cite{sogawa2001transport}, and coherent SWAP gates based on exchange operations applied consecutively \cite{sigillito2019coherent}. Here, we focus on the coherent transfer of spins by shuttling an electron qubit between MOS quantum dots in silicon, including both analysis of experimental demonstrations \cite{hensen2020silicon,yoneda2021coherent} and theoretical studies \cite{ginzel2020spin,buonacorsi2020simulated,krzywda2020adiabatic}. It has been recently demonstrated that it is possible to perform high fidelity transport of spins in a double quantum dot with a polarization transfer fidelity of 99.97\% and average coherent transfer fidelity of 99.4\% \cite{yoneda2021coherent}. 
These results will serve as the motivation for the theory presented in this paper. In the next section (Section~\ref{sec:model}), we will present the theory and modeling of the double quantum dot system Hamiltonian and the qubit dispersion. In Section~\ref{sec:noise}, we examine the temporal errors using a four-level model for the qubit, where we solve the time-dependent Schrödinger equation for a qubit subject to computer-generated noise. Here, we examine the impact of noise on both the Ramsey-like coherence times and Hahn echo times. Following that, in Section~\ref{sec:gates}, we reduce the model further to a two-level system and examine another possible source of transfer error: unwanted rotations on the Bloch sphere during the transfer process. Finally, in Section~\ref{sec:discuss}, we discuss potential improvements to the shuttling process to improve the transfer fidelity. \section{Theory and Modeling} \label{sec:model} In this section, we define the Hamiltonian of our double quantum dot system and examine the qubit dispersion that results. Typically, electrons in silicon have a total of six valley states, with two states per Cartesian direction. In silicon devices, the electric confinement of the quantum dots results in the valley states along the $x$ and $y$ directions being much higher in energy than in the $z$ direction, resulting in only two valley states relevant to our discussion \cite{kane2000silicon,ando1982electronic}. The valley states will be defined using an effective mass approach \cite{saraiva2011intervalley} where the valley wavefunctions are described by the combination of the wave envelope function in the $z$ direction and Bloch functions. In our system, we have a single electron in a double quantum dot in silicon.
We can define the wavefunction as follows, \begin{align} \label{eqn:wavefn} \ket{\psi_{i,\pm}} = F_i(\mathbf{r}) u_\pm(\mathbf{r}) e^{\pm i k_0 z}\:, \end{align} where $F_i(\mathbf{r})$ is the envelope wavefunction in the $x$, $y$, and $z$ directions and describes the spatial part of the wavefunction, and $u_\pm(\mathbf{r}) e^{\pm i k_0 z}$ is the Bloch function describing the valley states, with the wave vector $\pm k_0 \hat{z}$ describing the positions in momentum-space of the conduction band minima. Valley eigenstates are generally a superposition of the $+k_0$ and $-k_0$ valley states, with their coefficients determined from the valley phases. This is accounted for in the Hamiltonian, which consists of charge, spin and valley degrees of freedom. Charge states here refer to the location of the electron, \textit{i.e.}, whether the electron is in dot A or dot B, and we define detuning to be the energy separation between charge states localized in the two dots. The basis states in which we construct our Hamiltonian are given by, \begin{align} \label{eqn:basis} \left\{\ket{A,\uparrow,-k_0},\ket{A,\downarrow,-k_0},\ket{A,\uparrow,+k_0},\ket{A,\downarrow,+k_0},\right.\nonumber\\\left.\ket{B,\uparrow,-k_0},\ket{B,\downarrow,-k_0},\ket{B,\uparrow,+k_0},\ket{B,\downarrow,+k_0}\right\}\:. \end{align} We define the Hamiltonian in second quantization in terms of the creation and annihilation operators, $c^{(\dagger)}_{i,\sigma,v}$, and the number operator, $\hat{n}_{i,\sigma,v}=c^\dagger_{i,\sigma,v} c^{}_{i,\sigma,v}$. 
The total Hamiltonian is given as, \begin{align} \label{eqn:ham8} \hat{H} = \hat{H}_\mathrm{qd} + \hat{H}_\mathrm{Z} + \hat{H}_\mathrm{soc} + \hat{H}_\mathrm{valley} + \hat{H}_\mathrm{sv}\:, \end{align} where \begin{multline} \hat{H}_\mathrm{qd} = \sum_{\sigma,v} \left[\frac{\varepsilon}{2} \left(\hat{n}_{\mathrm{A},\sigma,v} - \hat{n}_{\mathrm{B},\sigma,v}\right) + \right.\\ \left.\sum_{i\neq j}\frac{t_\mathrm{c}}{2} \left(c^\dagger_{i,\sigma,v}c^{}_{j,\sigma,v} + \mathrm{h.c.}\right)\right] \end{multline} is the quantum dot Hamiltonian and describes the detuning ($\varepsilon$) and spin- and valley-independent tunnel coupling ($t_\mathrm{c}$) between dots. The second term \begin{align} \hat{H}_\mathrm{Z} = \sum_{i,v} \frac{E_{\mathrm{Z},i}}{2} \left(\hat{n}_{i,\uparrow,v} - \hat{n}_{i,\downarrow,v}\right) \end{align} describes the dot-dependent Zeeman splitting ($E_{\mathrm{Z},i}$). We account for the difference in Zeeman splitting ($\Delta E_\mathrm{Z}$) across the dots due to differences in the $g$-tensor \cite{tanttu2019controlling}, but neglect differences between the Zeeman splittings of different valley states, which can be significant \cite{ruskov2018electron} but do not impact the regime studied here. Spin-orbit interaction effects are described by, \begin{multline} \hat{H}_\mathrm{soc} = \sum_{i,v}\frac{\eta_i\varepsilon}{2}\left(\hat{n}_{i,\uparrow,v}-\hat{n}_{i,\downarrow,v}\right) \\ + \sum_{v, i \neq j}\frac{t_\mathrm{sd}}{2} \left(c^\dagger_{i,\uparrow,v}c^{}_{j,\uparrow,v} - c^\dagger_{i,\downarrow,v}c^{}_{j,\downarrow,v}\right) \\ + \frac{t_\mathrm{sf}}{2} \left(c^\dagger_{i,\uparrow,v}c^{}_{j,\downarrow,v} + c^\dagger_{i,\downarrow,v}c^{}_{j,\uparrow,v}\right) + \mathrm{h.c.} \:. \end{multline} Note that the Zeeman splitting differing between the dots is also a result of spin-orbit coupling. The first spin-orbit effect here is the linear Stark shift denoted by $\eta_i$.
The second effect alters the tunneling process and includes both spin-dependent ($t_\mathrm{sd}$) and spin-flip ($t_\mathrm{sf}$) terms. Spin-dependent effects alter the tunnel coupling such that different spin states are coupled at slightly different rates, whereas spin-flip effects couple the spin up and down states in different charge states. The coupling between valley states is described by, \begin{align} \hat{H}_\mathrm{valley} = \sum_{i,\sigma} E_{\mathrm{v},i} e^{i\phi_i} \left(c^\dagger_{i,\sigma,v}c^{}_{i,\sigma,v^\prime}\right) + \mathrm{h.c.} \:, \end{align} where $E_{\mathrm{v},i}$ is the intensity of the valley coupling and $\phi_i$ is the valley phase, which differs for each dot. The final term in the Hamiltonian describes spin-valley mixing and is given by, \begin{multline} \hat{H}_\mathrm{sv} = \sum_{i} \frac{\Delta^\mathrm{sv}_1}{2}(c^\dagger_{i,\downarrow,-k_0}c^{}_{i,\uparrow,+k_0}) \\ + \frac{\Delta^\mathrm{sv}_2}{2}(c^\dagger_{i,\downarrow,+k_0}c^{}_{i,\uparrow,-k_0}) + \mathrm{h.c.} \:, \end{multline} which results from the valley-dependent spin-orbit field created by the $\mathrm{SiO_2}$ interface, with $\Delta_1^\mathrm{sv}=|\Delta_1^\mathrm{sv}|e^{i\phi_i}$ and $\Delta_2^\mathrm{sv}=|\Delta_2^\mathrm{sv}|e^{i\phi_i}$ \cite{huang2014spin,zhang2020giant,cai2021coherent}. For our numerical simulations, we choose sensible values of the Hamiltonian parameters based on a combination of experimental data from Ref.~[\onlinecite{yoneda2021coherent}] and literature values. The valley splitting and valley mixing parameters are defined to be on the same order of magnitude as what has been measured in the literature \cite{zhang2020giant,yang2013spin,wang2013charge}. We summarize the parameters used in Appendix~\ref{app:hamiltonian}. Using these numerical parameters, we are able to obtain all of the eight energy levels as shown in Fig.~\ref{fig:qubitfreq}(a) where we plot the eigenenergies of the system against detuning. 
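The eigenenergy curves in Fig.~\ref{fig:qubitfreq}(a) are obtained by diagonalizing the Hamiltonian at each detuning point. As a minimal sketch of this procedure, the snippet below sweeps the detuning for a bare two-level charge Hamiltonian; the tunnel coupling value is illustrative, and the actual figure of course uses the full $8\times8$ model with the parameters of Appendix~\ref{app:hamiltonian}.

```python
import numpy as np

tc = 0.43  # tunnel coupling in meV (illustrative value)

def H_charge(eps: float) -> np.ndarray:
    # Minimal 2x2 charge Hamiltonian: detuning on the diagonal,
    # tunnel coupling on the off-diagonal
    return 0.5 * np.array([[eps, tc], [tc, -eps]])

detunings = np.linspace(-2.5, 2.5, 501)   # meV, the range studied here
levels = np.array([np.linalg.eigvalsh(H_charge(e)) for e in detunings])

# At eps = 0 the anticrossing gap equals the tunnel coupling tc
gap = levels[:, 1] - levels[:, 0]
print(np.isclose(gap.min(), tc))   # prints True: minimum gap is tc
```

The two rows of \texttt{levels} trace out the hyperbolic branches $\pm\frac{1}{2}\sqrt{\varepsilon^2+t_\mathrm{c}^2}$, the charge-only skeleton of the spectra shown in the figure.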
We also show in Fig.~\ref{fig:qubitfreq}(b) the energies of the system where we consider only the spin and charge degrees of freedom, leading to a four-level model. This model will be used for the analysis of errors arising from $1/f$ noise in Sec.~\ref{sec:noise}. We can also perform a Schrieffer-Wolff transformation to obtain a two-level model as shown in Fig.~\ref{fig:qubitfreq}(c) which will be used in the analysis of possible transfer errors in Sec.~\ref{sec:gates}. \begin{figure}[ht!] \centering \includegraphics[width=0.44\textwidth]{Figure_1_Updated_3.pdf} \caption{Energy levels and qubit dispersion. (a) shows the energy levels of the full $8\times8$ Hamiltonian as described in Eq.~(\ref{eqn:ham8}). Note that the legend describes the states when the electron is far detuned in dot A ($\varepsilon \ll -t_\mathrm{c}$), with $\ket{-}$ and $\ket{+}$ being the valley eigenstates. (b) shows the energy levels when we consider only the charge (electron position) and spin states, resulting in only four levels as labeled. (c) shows the two-level system of only the effective ground states which can be obtained by performing a Schrieffer-Wolff transformation. (d) shows the qubit dispersion of the double quantum dot system. Experimentally, the qubit frequency can be obtained from both the electron spin resonance (ESR) frequency (blue triangles) and the measurement of the precession frequency (red circles) as part of coherent tunneling spectroscopy \cite{yoneda2021coherent}. The fitted yellow line plots the fit of the dispersion calculated from the eight-level Hamiltonian model [Eq.~(\ref{eqn:ham8})], corresponding to the first excitation energy in (a). In all of these figures, we chose the Zeeman energy, $E_\mathrm{Z}$, and tunnel coupling, $t_\mathrm{c}$, such that $t_\mathrm{c} \gg E_\mathrm{Z}$. For a detailed summary of all the parameters used in these diagrams, refer to Appendix~\ref{app:hamiltonian}. Magnetic field, $B_0$, is set at $1~\mathrm{T}$ here. 
The accompanying schematic demonstrates the location of the electron as detuning changes. There are a total of eight energy levels with the red and blue levels respectively representing the ground and excited valley eigenstates. The dotted and solid lines are spin up and down states respectively.} \label{fig:qubitfreq} \end{figure} The primary topic of study in this paper is the coherent transfer of spin qubits in double quantum dots, the schematic of which is shown in Fig.~\ref{fig:qubitfreq}(d). Typically, the qubit state is initialized in one of the quantum dots, for example, dot A in the case of $\varepsilon \ll -t_\mathrm{c}$, with the ground state being the superposition state of spin up and down in the lower valley eigenstate. A pulse in voltage detuning brings the energy levels of the two dots into equilibrium. Near this equilibrium point ($\varepsilon=0$), charge hybridization occurs, and the wavefunction of the electron is spread across both quantum dots. Finally, after the transfer, the qubit state is now in dot B ($\varepsilon \gg t_\mathrm{c}$). We will test the model defined above against the experimental results obtained in Ref.~(\onlinecite{yoneda2021coherent}). We reproduce in Fig.~\ref{fig:qubitfreq}(d) the results from two experiments \cite{yoneda2021coherent}. The blue triangles are the resonant frequencies for microwave spin driving, and the red dots are the frequency of qubit precession measured from a Ramsey experiment. These describe the qubit frequency as the electron transits from one dot to another as shown in the accompanying schematic. In comparison, we can model the qubit frequency using the Hamiltonian as defined in Eq.~(\ref{eqn:ham8}) while taking into account all eight energy levels as shown in Fig.~\ref{fig:qubitfreq}(a) and with basis states as defined in Eq.~(\ref{eqn:basis}). The qubit frequency can be obtained by calculating the first excitation energy, with the result shown in Fig.~\ref{fig:qubitfreq}(d) (yellow line). 
This eight-level model describes the qubit dispersion except for nonlinear Stark shifts at far detuning levels. Therefore, our analysis will be limited to voltage detuning within $\varepsilon = \pm2.5~\mathrm{meV}$ and will not be impacted by nonlinear Stark shift effects in far detuning levels. This model serves as the theoretical basis upon which we build our understanding of the double dot system and the spin transport process. In recent experimental findings relating to spin transport \cite{yoneda2021coherent}, one of the key findings is that there are two main types of errors that accumulate during the transfer process. One is an error accumulated over time due to the exposure of the qubit to electric field fluctuations, which can be described by simulating Ramsey-like, $T_2^*$, and Hahn echo, $T_2^\mathrm{H}$, coherence times. The other error component is a transfer error that does not depend on the ramp time but increases with the number of transfers across the inter-dot region. In this work, we examine both of these sources of error, investigating possible causes and their impacts on the transfer process. \section{Electric noise and sweet spots} \label{sec:noise} \begin{figure*}[ht] \centering \includegraphics[width=0.9\textwidth]{Figure_2_Updated_2.pdf} \caption{Effect of $1/f$ noise on Ramsey and Hahn echo coherence times. (a) shows the energy diagram for a four-level model with degrees of freedom in spin and charge. In this regime where $t_\mathrm{c} \gg E_\mathrm{z}$, there is minimum leakage to the more energetic states. The diagram is color coded based on the proportion of each state to show the transition from dot A to dot B. The energy levels enclosed in the orange dotted lines will be the focus of the next section. (b) shows the qubit dispersion ($f_\mathrm{Q}$) as obtained from the four-level model, and the gradient of the qubit dispersion ($\mathrm{d}f_\mathrm{Q}/\mathrm{d}\varepsilon$). 
We define the reference frame with respect to the qubit Larmor frequency at $\varepsilon \sim -1.35~\mathrm{meV}$ (dot A) and plot the qubit frequency in this frame. The gradient of the qubit dispersion is also plotted, indicating the expected detuning dependence of coherence times. We observe that at $\varepsilon=-1.56~\mathrm{meV}$ the dispersion gradient goes to zero. (c) shows the results of the single electron transistor (SET) noise spectroscopy measurement (in purple). We extrapolate the noise amplitude at $1~\mathrm{Hz}$ (100 \textmu{$\mathrm{eV}^2/\mathrm{Hz}$}) to higher frequencies where the qubit operates, with the approximate trend line shown in yellow. Examples of computer-generated noise spectra at the sampling frequencies of the time evolution are also plotted: the blue trace is sampled at $20~\mathrm{GHz}$ and the red trace at $20~\mathrm{kHz}$. These noise spectra match the amplitude of the experimental results from SET spectroscopy. The algorithm for the simulated noise spectrum is outlined in Appendix~\ref{app:1fsimulation}. (d) shows the $T_2^*$ Ramsey coherence times for different values of detuning at $B_0=1.42~\mathrm{T}$. The sweet spot in coherence is observed at about $-2~\mathrm{meV}$ at this magnetic field. We also plot here $\left|(\mathrm{d}f_\mathrm{Q}/\mathrm{d}\varepsilon)\delta\varepsilon\right|^{-1}$ calculated at $B_0=1.42~\mathrm{T}$, with $|\delta\varepsilon|\approx250~\text{\textmu{}}\mathrm{eV}$. (e) shows the $T_2^\mathrm{H}$ Hahn echo times at $B_0=1~\mathrm{T}$. The sweet spot in detuning can be observed close to the inter-dot anticrossing for both experiment 1 (blue) and the simulation results. A second set of experimental results (experiment 2) is also plotted here in red to show the Hahn echo times extended to the other side of the anticrossing ($\varepsilon>0$).
Both of these sets of experimental data are from a Hahn echo-like experiment performed in a device reported in Ref.~(\onlinecite{yoneda2021coherent}), with some minute differences in the experimental protocol, which will be elaborated on in Appendix~\ref{app:sweetspot}.} \label{fig:error} \end{figure*} In this section, we look at temporal errors and how they can impact the transfer process. We focus on the variation of coherence times ($T_2^*$ and $T_2^\mathrm{H}$) with detuning, especially close to the inter-dot anticrossing. The limitations on $T_2^*$ coherence times vary from system to system, but for our system of concern [that in Ref.~(\onlinecite{yoneda2021coherent})], the same device has been extensively characterized in a previous experiment in Ref.~(\onlinecite{chan2018assessment}). There, the two main sources of noise identified were the hyperfine coupling to the residual ${}^{29}\mathrm{Si}$ nuclear spins \cite{khaetskii2002electron} (the device is fabricated on isotopically purified ${}^{28}\mathrm{Si}$, with residual ${}^{29}\mathrm{Si}$ at a concentration of $800$~ppm) and the $1/f$ noise originating from fluctuations of two-level charge systems in the oxide and at the interface between the oxide and silicon \cite{culcer2009dephasing,kuhlmann2013charge}, which create noise on the spin mediated by the spin-orbit coupling. For this analysis, we will focus on the $1/f$ noise effects, which become dominant given the strong dispersion at the onset of the dot transition. Hyperfine effects are also expected to have some influence on the qubit shuttling performance when the variability between qubit frequencies in the dots is not too large, but for the regime considered here we disregard such effects. For the purpose of understanding the coherence times, we neglect the valley degree of freedom, which was defined in Eq.~(\ref{eqn:ham8}), leaving us with an effective four-level system.
This is because the charge and spin states will be sufficient for an analysis of the impact of $1/f$ noise on the system, with charge noise entering directly through the charge degree of freedom, degrading the spin coherence through the dependence of its frequency on $\varepsilon$. Note that valleys still play a role in this system because the linear spin-orbit coupling of each dot is dictated by its valley structure \cite{ruskov2018electron}. We can observe in Fig.~\ref{fig:error}(a) the energy levels relevant to our analysis. At negative detuning values, the ground state wavefunctions of the system are primarily in dot A, and correspondingly, the ground state wavefunctions are mostly in dot B at positive detuning values. There is a large energy gap between the two lowest levels and the excited pair of states, corresponding to the large tunnel coupling in our double dot system ($\sim 430~\text{\textmu{}}\mathrm{eV}\approx h\times104~\mathrm{GHz}$). In this regime, the tunnel coupling is much larger than the Zeeman splitting at 1T, and this reduces the state leakage in the transfer process. In Fig.~\ref{fig:error}(b), we plot the dispersion of the first excitation energy of the double dot system. The difference in Zeeman splittings between the two dots is caused by the surface roughness arising from atomic sources of disorder in the oxide as mentioned before. This creates a small difference in the $g$-factors of the two dots that generally results in tens of megahertz of difference in the Zeeman splittings \cite{tanttu2019controlling}. The transition between these two qubit frequencies is set by the charge hybridization between the dots due to the tunnel coupling. Examining the qubit frequency, we find that it is highly dependent on the detuning energies, especially near the anticrossing, where there is a steep transition from one qubit frequency to another. 
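The steepness of this transition can be made concrete numerically. The sketch below uses illustrative parameters with the spin-orbit terms ($\eta_i$, $t_\mathrm{sd}$, $t_\mathrm{sf}$) set to zero, so it reproduces the steep interdot transition but not the finite-detuning sweet spot; it computes the qubit frequency as the first excitation energy and locates the maximum of $|\mathrm{d}f_\mathrm{Q}/\mathrm{d}\varepsilon|$.

```python
import numpy as np

# Illustrative parameters: tc >> Ez, with a small Zeeman difference
Ez_A, Ez_B, tc = 0.117, 0.119, 0.43   # all in meV

def f_q(eps: float) -> float:
    # Qubit frequency (in energy units) = first excitation energy of a
    # simplified 4x4 spin-charge Hamiltonian with no spin-orbit terms
    H = 0.5 * np.array([
        [ Ez_A + eps, 0.0,         tc,          0.0        ],
        [ 0.0,       -Ez_A + eps,  0.0,         tc         ],
        [ tc,         0.0,         Ez_B - eps,  0.0        ],
        [ 0.0,        tc,          0.0,        -Ez_B - eps ],
    ])
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]

eps = np.linspace(-2.5, 2.5, 1001)
fq = np.array([f_q(e) for e in eps])
dfq = np.gradient(fq, eps)

# The Stark shift |df_Q/d_eps| peaks at the inter-dot anticrossing,
# and f_Q shifts from ~Ez_A (dot A) to ~Ez_B (dot B) across it
peak = eps[np.argmax(np.abs(dfq))]
print(abs(peak) < 0.1)   # prints True
```

In this simplified model the transition has width set by $t_\mathrm{c}$ and maximum slope at $\varepsilon=0$; restoring the $\eta_i$ Stark-shift terms is what moves the zero of $\mathrm{d}f_\mathrm{Q}/\mathrm{d}\varepsilon$ to finite detuning.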
Charge noise causes fluctuations of the dot levels and therefore enters as detuning noise in the Hamiltonian. Near the anticrossing at zero detuning, small fluctuations in detuning lead to large shifts in frequency, due to the large Stark shifts, $\left|\mathrm{d}f_\mathrm{Q}/\mathrm{d}\varepsilon\right|$, plotted (in red) in Fig.~\ref{fig:error}(b). In reality, $1/f$ noise is not always small in amplitude, such that analyzing $\left|\mathrm{d}f_\mathrm{Q}/\mathrm{d}\varepsilon\right|$ may be insufficient in some scenarios. To understand how the coherence times are related to the qubit frequency, we simulate the effect of charge noise on the qubit. The Hamiltonian we use to describe our system is given explicitly as, \begin{widetext} \begin{align} \hat{H}_{4\times 4} = \frac{1}{2}\begin{pmatrix} E_\mathrm{Z,A} + \left(\eta_\mathrm{A} + 1\right)\varepsilon & 0 & t_\mathrm{c} + t_\mathrm{sd} & t_\mathrm{sf} \\ 0 & -E_\mathrm{Z,A} - \left(\eta_\mathrm{A} - 1\right)\varepsilon & t_\mathrm{sf} & t_\mathrm{c} - t_\mathrm{sd} \\ t_\mathrm{c} + t_\mathrm{sd} & t_\mathrm{sf} & E_\mathrm{Z,B} + \left(\eta_\mathrm{B} - 1\right)\varepsilon & 0 \\ t_\mathrm{sf} & t_\mathrm{c} - t_\mathrm{sd} & 0 & -E_\mathrm{Z,B} - \left(\eta_\mathrm{B} + 1\right)\varepsilon \end{pmatrix}\:, \label{eqn:ham4x4} \end{align} \end{widetext} where all the terms are as defined in the previous section. We can model $1/f$ charge noise in this system as fluctuations on the detuning levels.
Thus, the noise Hamiltonian is given by, \begin{widetext} \begin{align} \hat{H}_\mathrm{noise}(t) = \frac{1}{2}\begin{pmatrix} \left(\eta_\mathrm{A}+1\right)\delta\varepsilon(t) & 0 & 0 & 0 \\ 0 & -\left(\eta_\mathrm{A}-1\right)\delta\varepsilon(t) & 0 & 0 \\ 0 & 0 & \left(\eta_\mathrm{B}-1\right)\delta\varepsilon(t) & 0 \\ 0 & 0 & 0 & -\left(\eta_\mathrm{B}+1\right)\delta\varepsilon(t) \end{pmatrix}\:, \label{eqn:Hnoise} \end{align} \end{widetext} where the $\delta\varepsilon(t)$ terms are numerically generated noise in the time domain. Examples of the spectrum are shown in Fig.~\ref{fig:error}(c) and details of how the noise is simulated can be found in Appendix~\ref{app:1fsimulation}. The total Hamiltonian will then be given as, \begin{align} \label{eqn:ham4} \hat{H}_\mathrm{total}(t) = \hat{H}_{4\times 4} + \hat{H}_\mathrm{noise}(t)\:. \end{align} Having defined the Hamiltonian, we now evaluate the evolution of the wavefunctions under the Hamiltonian by solving numerically the Schrödinger equation, \begin{align} \label{eqn:schrodinger} i\hbar\frac{\partial}{\partial t}\psi(t) = \hat{H}_\mathrm{total}\psi(t)\:. \end{align} In general, the solution to this system can be constructed from small time steps $\delta t$ as, \begin{align} \label{eqn:timeevo} \psi(t+\delta t) = e^{-i\hat{H}_\mathrm{total}\delta t/\hbar} \psi(t) \:, \end{align} where the time evolution of the wavefunction is governed by the unitary, $U=e^{-i\hat{H}_\mathrm{total}\delta t/\hbar}$. This approximation is valid as long as $\delta t$ is much smaller than the characteristic time scale of variation of $\hat{H}_\mathrm{total}$. We initialize at $t=0$ into the superposition state of spin up and down, and then iteratively calculate the unitary and the wavefunction of Eq.~(\ref{eqn:timeevo}) until a total evolution time $t_\mathrm{evol}$. 
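The iterative solution of Eqs.~(\ref{eqn:schrodinger}) and (\ref{eqn:timeevo}) can be sketched as follows. The parameter values are illustrative rather than the fitted experimental ones, the Stark-shift terms $\eta_i$ are set to zero, and simple Gaussian detuning noise stands in for the generated $1/f$ traces of Appendix~\ref{app:1fsimulation}.

```python
import numpy as np

hbar = 1.0                 # work in units where hbar = 1
Ez_A, Ez_B = 1.00, 1.02    # dot-dependent Zeeman splittings (illustrative)
tc, tsd, tsf = 5.0, 0.05, 0.02

def H4(eps: float) -> np.ndarray:
    # The 4x4 spin-charge Hamiltonian of Eq. (eqn:ham4x4) with eta_i = 0
    return 0.5 * np.array([
        [ Ez_A + eps, 0.0,        tc + tsd,   tsf        ],
        [ 0.0,       -Ez_A + eps, tsf,        tc - tsd   ],
        [ tc + tsd,   tsf,        Ez_B - eps, 0.0        ],
        [ tsf,        tc - tsd,   0.0,       -Ez_B - eps ],
    ])

def step(psi: np.ndarray, H: np.ndarray, dt: float) -> np.ndarray:
    # One time step psi(t+dt) = exp(-i H dt / hbar) psi(t), computed
    # via the eigendecomposition of the Hermitian Hamiltonian
    E, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * E * dt / hbar)) @ V.conj().T
    return U @ psi

rng = np.random.default_rng(1)
eps0 = -3.0                                  # idle deep in dot A
psi = np.array([1, 1, 0, 0], dtype=complex) / np.sqrt(2)
for _ in range(1000):
    d_eps = 0.01 * rng.standard_normal()     # stand-in for delta_eps(t)
    psi = step(psi, H4(eps0 + d_eps), dt=0.01)
print(np.linalg.norm(psi))   # ~1.0: unitary evolution preserves the norm
```

Each step diagonalizes the instantaneous Hamiltonian, which is exact for a piecewise-constant $\hat{H}_\mathrm{total}$ within each time step.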
We note that only the time dependence of the noise Hamiltonian is kept in the total expression for the Hamiltonian, with the qubit Hamiltonian time-independent in Eq.~(\ref{eqn:ham4}). In general, the qubit Hamiltonian ($\hat{H}_{4\times4}$) is also time-dependent since it contains the detuning parameter $\varepsilon(t)$. This implies that, for the approximation in Eq.~(\ref{eqn:timeevo}) to be valid, we would have to use time steps at least three orders of magnitude smaller than the timescale set by the largest Hamiltonian terms, since the detuning is the dominant energy term. This is also the case for the calculations in the next section, where we do not consider any noise in the system and examine the impact of diabatic effects on the transfer process. However, in the case relevant to the discussion in this section, we are calculating the coherence times of the qubit for fixed detuning values. This leaves only the time dependence in the noise Hamiltonian: we do not have to sweep the detuning over time, and the only time-varying parameters are the noise terms, which are themselves small. This allows us to perform the simulation using coarse time steps of a tenth of the total evolution time, $t_\mathrm{evol}/10$, which, as we will show later in the section, reproduces the experimental results sufficiently well. A computational challenge is the fact that the calculation has to be performed over several orders of magnitude in time. There is a large range of expected coherence times, with minimum coherence times on the order of 0.1 \textmu{}s near the anticrossing, and maximum coherence times near the sweet spot expected to be on the order of hundreds of microseconds. In order to capture the full range of coherence decay, we divide up the numerical simulation into different sections, with each section consisting of $t_\mathrm{evol}$ varying over a single order of magnitude, thus significantly speeding up the simulation.
More details are contained in Appendix~\ref{app:numerics}. Our simulated noise is calibrated against the noise amplitude measured from the current through a single electron transistor (SET) near the quantum dots, which is used to estimate the electric noise in the device (Appendix~\ref{app:setspec}). The purple trace in Fig.~\ref{fig:error}(c) shows the results from this experimental technique, and we take reference to its amplitude at $f=1~\mathrm{Hz}$, with the yellow dotted line as the reference line for $1/f$ noise of amplitude $100~\text{\textmu{}}\mathrm{eV}^2/\mathrm{Hz}$. The power spectral density (PSD) for two examples of the computer generated noise is also plotted in Fig.~\ref{fig:error}(c) corresponding to different sampling frequencies ($20~\mathrm{GHz}$ for the blue trace and $20~\mathrm{kHz}$ for the red trace). Given the statistical nature of the noise, we repeat the calculation of the density matrix $\rho$ after $t_\mathrm{evol}$ over 100 realizations of noise. The final density matrix is then averaged over these 100 iterations and we obtain $\bar{\rho}$. Finally, we calculate the Bloch length \cite{kimura2003bloch,jakobczyk2001geometry}, which is a measure of the qubit coherence, \begin{align} \left|\mathbf{r}\right| = \sqrt{\frac{4}{3}\left(\mathrm{Tr}(\bar{\rho}^2)-\frac{1}{4}\right)} \:. \end{align} The Bloch length is chosen here because it corresponds to what was measured in the experiment in Ref.~(\onlinecite{yoneda2021coherent}). We can calculate the Bloch length as a function of time evolution, $t_\mathrm{evol}$. We obtain a decay, which can be fitted using, $|\mathbf{r}| = A\exp(-(t_\mathrm{evol}/T_2^*)^\beta) + C$, where $A$, $C$, $T_2^*$, and $\beta$ are the amplitude, the final Bloch length after decay, the coherence time, and the decay exponent, respectively. The same expression is adapted later to obtain the Hahn echo time. Following this process, we obtain the results shown in Fig.~\ref{fig:error}(d). 
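The Bloch-length computation itself is straightforward to evaluate from the averaged density matrix. Below is a small self-contained check, using a hypothetical ensemble of randomly dephased four-level states rather than the full noisy evolution, showing that $|\mathbf{r}|=1$ for a pure state and decreases under phase averaging.

```python
import numpy as np

def bloch_length(rho: np.ndarray) -> float:
    # |r| = sqrt(4/3 (Tr(rho^2) - 1/4)) for a 4-dimensional density matrix
    return np.sqrt(4.0 / 3.0 * (np.trace(rho @ rho).real - 0.25))

# Pure equal superposition of spin up/down in dot A (4-level basis)
psi = np.array([1, 1, 0, 0], dtype=complex) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# Average density matrices over random relative phases to mimic dephasing
rng = np.random.default_rng(0)
rho_bar = np.zeros((4, 4), dtype=complex)
for _ in range(100):
    phi = rng.uniform(0.0, 2.0 * np.pi)
    psi_n = np.array([1, np.exp(1j * phi), 0, 0]) / np.sqrt(2)
    rho_bar += np.outer(psi_n, psi_n.conj()) / 100

print(bloch_length(rho_pure))          # ~1.0 for a pure state
print(bloch_length(rho_bar) < 0.95)    # True: shrinks after phase averaging
```

Note that in this toy ensemble $|\mathbf{r}|$ saturates near $\sqrt{1/3}\approx0.58$ rather than zero, since the spin populations survive the phase averaging; a nonzero floor of this kind is what the fitted offset $C$ accommodates.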
Here, we set the $B_0$ magnetic field to $1.42~\mathrm{T}$, corresponding directly to the conditions under which the experiment in Ref.~(\onlinecite{yoneda2021coherent}) was performed, which yielded the experimental data (plotted in blue) in Fig.~\ref{fig:error}(d). We plot the Ramsey coherence times to highlight the point of minimum $T_2^*$ near the anticrossing at zero detuning, and we also obtain fairly consistent results between the simulation (in red) and the experiment (in blue). We also plot here in purple $\left|\frac{df_\mathrm{Q}}{d\varepsilon}\delta\varepsilon\right|^{-1}$, where we took the inverse of the Stark shift multiplied by the effective amplitude of $1/f$ noise $\left(\sim 2\pi |\delta\varepsilon|_\mathrm{1Hz}\sqrt{\ln{\left(\frac{f_\mathrm{h}}{f_\mathrm{l}}\right)}}\right)$ to obtain the expected coherence times, assuming that the noise amplitude is small enough with respect to the Stark shift ($df_\mathrm{Q}/d\varepsilon$). Here $f_\mathrm{h}$ and $f_\mathrm{l}$ are set to $1~\mathrm{GHz}$ and $1~\mathrm{Hz}$ respectively as the typical bounds of $1/f$ noise while operating the qubit. The effective amplitude of the $1/f$ noise is $\sim 250~\text{\textmu{}}\mathrm{eV}$. We find that the numerical results from the calculation of $\left|\frac{df_\mathrm{Q}}{d\varepsilon}\delta\varepsilon\right|^{-1}$ and from the full simulation with $1/f$ noise agree well with each other. There is also good agreement between the experiment and the numerical results, suggesting that most of the noise in our system is quasi-static charge noise in this interdot tunneling regime. We observe that there is indeed a dip in the coherence time near zero detuning, where the inter-dot anticrossing lies. This coincides with what we observed from the Stark shift of the double quantum dot [as shown in Fig.~\ref{fig:error}(b)], where it is maximum close to the inter-dot anticrossing.
We also observe a coherence time sweet spot in the simulation results, which lies outside the range of detuning values swept in the experiment. We note that the point of maximum coherence time is indeed at the point where $df_\mathrm{Q}/d\varepsilon=0$. Next, we incorporate a Hahn echo pulse into the simulation. The key difference between this simulation and what was described previously is the inclusion of a $\pi$-pulse during the time evolution of the qubit state. The $\pi$-pulse is implemented by applying a unitary operator, \begin{align} U_\pi = \exp\left(-i\frac{\pi}{2}\,\tau_I \otimes \sigma_y\right) \:, \label{eqn:ypulse} \end{align} with \begin{align} \tau_I \otimes \sigma_y = \begin{pmatrix} 0 & -i & 0 & 0 \\ i & 0 & 0 & 0 \\ 0 & 0 & 0 & -i \\ 0 & 0 & i & 0 \end{pmatrix} \:, \end{align} where $\tau$ and $\sigma$ are the Pauli matrices acting on the charge and spin subspaces respectively, and $U_\pi$ represents a $\pi$-pulse about the $y$-axis in the spin subspace. This unitary operator is applied to the qubit wavefunction at exactly the mid-point of the time evolution ($t_\mathrm{evol}/2$), corresponding to the pulsing sequence used in the experiment \cite{yoneda2021coherent}. Otherwise, the other aspects of calculating the Hahn echo time, $T_2^\mathrm{H}$, are the same as for the Ramsey coherence time, $T_2^*$. The results of this simulation are shown in Fig.~\ref{fig:error}(e), where we can observe both the point of minimum Hahn echo times near the inter-dot anticrossing and the sweet spot in detuning. At the sweet spot, we benefit from enhanced coherence, which makes it an ideal point for qubit idling due to reduced temporal errors. The existence of this sweet spot can be explained by examining the qubit dispersion plotted in Fig.~\ref{fig:error}(b) and the plotted gradient $df_\mathrm{Q}/d\varepsilon$, where we observe that the dispersion gradient vanishes at $\varepsilon=-1.56~\mathrm{meV}$.
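As a quick consistency check (a sketch; the operators here are just the $4\times 4$ matrices written above), the echo pulse of Eq.~(\ref{eqn:ypulse}) can be constructed by matrix exponentiation and verified to flip the spin in each charge sector:

```python
import numpy as np
from scipy.linalg import expm

sigma_y = np.array([[0, -1j], [1j, 0]])
tau_I = np.eye(2)                                  # identity in charge space

U_pi = expm(-1j * (np.pi / 2) * np.kron(tau_I, sigma_y))

# Within each charge sector, exp(-i pi/2 sigma_y) = -i sigma_y,
# a pi rotation about y: it maps |up> -> |down> (up to a sign).
block = expm(-1j * (np.pi / 2) * sigma_y)
spin_up = np.array([1.0, 0.0])
flipped = block @ spin_up
```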
In the context of qubit transfer, this is an ideal point in detuning for initializing or idling due to its long coherence times. Such a sweet spot could potentially also be an ideal point for qubit control. Comparing the experimental result of Fig.~\ref{fig:error}(e) in blue (experiment 1) with the simulation results, we observe a small constant offset between the two, which is discussed in Appendix~\ref{app:sweetspot}. We also discuss why in some data, such as that shown in red (experiment 2), the sweet spot seems absent. Finally, comparing $T_2^*$ and $T_2^\mathrm{H}$, we observe that the Hahn echo improves the coherence times across all detuning values by up to an order of magnitude. We also note that while these results are calculated at different magnetic fields, the difference in magnetic field only leads to a marginal change in the results, as discussed in Appendix~\ref{app:noise}, indicating that the improvement occurs due to the echo decoupling. Overall, there is satisfactory consistency between the experimental and simulation results for both $T_2^*$ and $T_2^\mathrm{H}$, suggesting that $1/f$ noise can be used to model decoherence in our system. In the context of qubit transport, these results mean that we should avoid idling in the regions of reduced coherence times by pulsing quickly between the quantum dots, while making use of the sweet spot in detuning as an idling point in experiments. However, fast pulsing can also increase leakage errors, so the pulse speed is limited by the size of the tunnel coupling \cite{krzywda2020adiabatic}. We will explore the impact of pulsing speed in more detail in Sec.~\ref{sec:discuss}. \section{Transport process as a single qubit gate} \label{sec:gates} Other than the errors due to $1/f$ charge noise investigated in the previous section, we are also interested in other errors that occur near the anticrossing.
Recent studies suggest that $1/f$ noise can also lead to diabatic effects at the inter-dot transition \cite{krzywda2020adiabatic}. Previous experimental results also show errors that occur regardless of the transfer ramp time and are finite even in the idealized limit of infinitely fast transfers \cite{yoneda2021coherent}. One potential type of such error is unitary error, \textit{i.e.}, unaccounted rotations occurring on the qubit Bloch sphere, and we will analyze these errors in this section as effective $x$ and $z$ gates after the transfer process. One characteristic of our double dot system is its large tunnel coupling, as shown in Fig.~\ref{fig:error}(a), which has been instrumental in allowing us to pulse qubits across $4~\mathrm{meV}$ in voltage detuning over $8~\mathrm{ns}$ without significant contributions from diabatic errors. The opposite scenario with a small tunnel coupling is discussed in Appendix~\ref{app:adiabaticity}, where we show the energy diagram of a four-level system with a tunnel coupling of $41~\text{\textmu{}}\mathrm{eV}$ or $10~\mathrm{GHz}$, which is smaller than the Zeeman splitting at $B_0=1~\mathrm{T}$ ($115~\text{\textmu{}}\mathrm{eV}$ or $28~\mathrm{GHz}$). With these Hamiltonian parameters, states with different spin and charge configurations cross around the inter-dot charge transition, $\varepsilon=0$, leading to enhanced spin-flip tunneling and a larger possibility of state leakage in the transport process. This is the regime that should be avoided in qubit transport protocols. \begin{figure}[!ht] \centering \includegraphics[width=0.45\textwidth]{Figure_3_Updated_3.pdf} \caption{Unitary errors arising from the transport process. (a) shows the energy levels of the effective two-level Hamiltonian, obtained from a Schrieffer-Wolff transformation of the four-level model shown in Fig.~\ref{fig:error}(a); this approximation is valid only in the limit $t_\mathrm{c} \gg E_\mathrm{z}$.
(b) shows the spin polarization transfer errors over a range of ramp rates. The ramp rate of $500~\text{\textmu{}}\mathrm{eV/ns}$ corresponds to what is used in the experiment \cite{yoneda2021coherent} and is represented by the yellow star. (c) shows the magnitude of the off-diagonal term in the $H_\mathrm{2\times2}$ Hamiltonian. Generally, as the tunnel coupling increases, the magnitude of the off-diagonal term increases proportionally as well. Note that the magnitude of this term is many orders of magnitude smaller than either the detuning parameter, $\varepsilon$, or the tunnel coupling, $t_\mathrm{c}$. We show the magnitude of the tunnel coupling in units of both meV and GHz. (d) shows the effective $z$-gate after a single transfer, which is a rotation about the quantization axis set by the $B_0$ magnetic field. The amount of rotation is proportional to the precession frequency at each dot and the amount of time spent at each detuning position. We fix here the total ramp time from $\varepsilon_1$ to $\varepsilon_2$ at $8~\mathrm{ns}$. The rotating frame is defined by the precession rate at $\varepsilon=-1.35~\mathrm{meV}$ (dot A), leading to minimal phase accumulation in the bottom left quadrant.} \label{fig:gates} \end{figure} Considering only the regime with tunnel coupling larger than the Zeeman splitting, $t_\mathrm{c}\gg E_\mathrm{z}$, we perform a Schrieffer-Wolff transformation \cite{winkler2003spin} to isolate the effective ground-state orbitals, consisting of the lowest two energy levels as shown in Fig.~\ref{fig:gates}(a). To confirm the validity of this approximation, we also calculate the spin polarization errors after a transfer across $4~\mathrm{meV}$ with ramp times spanning several orders of magnitude, which we plot in Fig.~\ref{fig:gates}(b).
For this simulation, we remain in the four-level model [Fig.~\ref{fig:error}(a)] and initialize the qubit in either the spin up or spin down state in quantum dot A, at a detuning of $\varepsilon=-2~\mathrm{meV}$, and calculate the time evolution operators of the time-dependent Hamiltonian as we ramp the qubit from one dot to another, similar to what we outlined in Section~\ref{sec:noise}. As previously mentioned, we remove the impact of noise for this analysis and consider only diabatic effects due to the ramp itself (a recent study found that $1/f$ noise is also responsible for significant diabatic effects \cite{krzywda2020adiabatic}). We thus consider a time-dependent Hamiltonian in which the detuning parameter, $\varepsilon(t)$, varies with time and $\hat{H}_\mathrm{noise}=0$. We solve for the qubit state in time steps that are $1/10^5$ of the total ramp time, ensuring that the numerical time steps are small enough to capture the change in the wavefunction accurately \cite{buonacorsi2020simulated}. Finally, we calculate the state fidelity, $\mathcal{F}$, at the end of the transfer by comparing with the target eigenstate, \begin{align} \mathcal{F} = |\braket{\psi_\mathrm{target}}{\psi_\mathrm{final}}|^2 \:. \label{eqn:statefid} \end{align} We find that for a tunnel coupling of $430~\text{\textmu{}}\mathrm{eV}=h\times104~\mathrm{GHz}$, the diabatic error is very small at our chosen ramp rate, represented by the yellow star ($500~\text{\textmu{}}\mathrm{eV/ns}$).
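The time-stepped evolution and the fidelity of Eq.~(\ref{eqn:statefid}) can be illustrated with a two-level charge-only toy model (not the four-level model used here); the $10~\text{\textmu{}}\mathrm{eV}$ coupling below is chosen artificially small so that a visible diabatic error appears at this ramp rate:

```python
import numpy as np
from scipy.linalg import expm

HBAR = 0.6582  # ueV * ns

def ramp_fidelity(t_c, eps_span=4000.0, t_ramp=8.0, n_steps=10000):
    """Evolve the ground state of H(eps) = (eps/2) tau_z + t_c tau_x through
    a linear detuning ramp (ueV, ns) and return |<psi_target|psi_final>|^2."""
    eps = np.linspace(-eps_span / 2, eps_span / 2, n_steps)
    dt = t_ramp / n_steps
    H = lambda e: np.array([[e / 2, t_c], [t_c, -e / 2]])
    _, v = np.linalg.eigh(H(eps[0]))
    psi = v[:, 0].astype(complex)              # instantaneous ground state
    for e in eps:                              # piecewise-constant propagation
        psi = expm(-1j * H(e) * dt / HBAR) @ psi
    _, v = np.linalg.eigh(H(eps[-1]))
    return abs(v[:, 0].conj() @ psi) ** 2

F_large = ramp_fidelity(430.0)   # experimental coupling: essentially adiabatic
F_small = ramp_fidelity(10.0)    # artificially small coupling: visible error
```

The defaults correspond to the experimental ramp rate of $4~\mathrm{meV}$ over $8~\mathrm{ns}$, i.e. $500~\text{\textmu{}}\mathrm{eV/ns}$.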
Now, using the Schrieffer-Wolff transformation, we derive an effective $2 \times 2$ Hamiltonian from the four-level model, with details given in Appendix~\ref{app:hamiltonian}, \begin{align} \hat{H}_\mathrm{2\times 2} = \frac{1}{2}\hbar\omega_0\mathbb{1} + \frac{1}{2}\hbar\omega\sigma_z + \frac{1}{2}\hbar g \sigma_x \:, \label{eqn:heff} \end{align} where $\omega_0$ is a shift in energy governed by the size of the tunnel coupling and the detuning, $\omega$ is dominated by the qubit frequencies of each dot depending on the detuning position, and $g$ is dominated by the term $t_\mathrm{sf} t_\mathrm{c} / \left(2\sqrt{t_\mathrm{c}^2+\varepsilon^2}\right)$, which is highly dependent on the magnitude of the spin-flip tunnel coupling. We neglect second- and higher-order terms in this Hamiltonian. In this form, the Hamiltonian explicitly shows that we can characterize the spin-flip error as an $x$-rotation set by $g$, and any phase accumulation as a $z$-rotation set by $\omega$. With the effective $2\times 2$ Hamiltonian [Eq.~(\ref{eqn:heff})], we seek to quantify the $z$- and $x$-rotations accumulated during a single transfer. We use a ramp time of $8~\mathrm{ns}$, corresponding to the ramp time used in the experiment of Ref.~(\onlinecite{yoneda2021coherent}). For these simulations, we initialize the qubit in a superposition of the two eigenstates at a particular detuning position, $\varepsilon_1$. By iteratively calculating the unitary defined in Eq.~(\ref{eqn:timeevo}), we take the product of all unitaries as we vary the detuning from $\varepsilon_1$ to $\varepsilon_2$. This gives an effective unitary time-evolution operator, $U_\mathrm{eff}$. From this effective unitary operator, we factor out a global phase $e^{i\phi}$ and then take the matrix logarithm \cite{hansen2021pulse}.
In this way, we obtain an effective operator with $\sigma_x$, $\sigma_y$, and $\sigma_z$ components, which can be interpreted as an effective qubit rotation accumulated during the transfer, \begin{align} -i\log{U_\mathrm{eff}} \equiv c_1 \sigma_x + c_2 \sigma_y + c_3 \sigma_z \:, \end{align} where the coefficients $c_1$, $c_2$, and $c_3$ can be read off from this final effective operator. These coefficients are different from the terms in the $2\times 2$ Hamiltonian [Eq.~(\ref{eqn:heff})]. We vary the detuning positions of the start ($\varepsilon_1$) and end ($\varepsilon_2$) points of the transfer and calculate the effective qubit rotation as a function of both quantities. We note that in the form of Eq.~(\ref{eqn:heff}), the coefficient of $\sigma_x$ is very small. In Fig.~\ref{fig:gates}(c), we show how $\hbar g$ is expected to change with both the detuning $\varepsilon$ and the tunnel coupling $t_\mathrm{c}$. This term is strongest near the inter-dot anticrossing, and the detuning range over which it is significant grows with the tunnel coupling. This particular form of error would be reduced by a smaller tunnel coupling, but that would increase the temporal errors discussed before. Instead, faster pulsing to avoid the inter-dot region can help to minimize this error as well as the temporal errors. Calculating the effective $x$ gates using the method above, we find that the amount of $x$-rotation indeed increases with the time spent near the inter-dot anticrossing. However, we find that the effective $x$-rotations are small, on the order of picoradians. This is primarily due to two reasons: first, $g$ is small compared to the $\omega$ term; second, we define the qubit in the basis of the rotating frame, where the off-diagonal terms gain an oscillatory phase at the frequency of the rotating frame and therefore, to first order, average to zero.
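The phase-factoring and matrix-logarithm step described above can be sketched as follows; the $z$-rotation used to check it is a hypothetical example, not a simulated transfer:

```python
import numpy as np
from scipy.linalg import logm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation_coefficients(U):
    """Remove the global phase of a 2x2 unitary and decompose
    -i log(U) into Pauli components (c1, c2, c3)."""
    U = U / np.sqrt(np.linalg.det(U))       # fix det(U) = 1 (global phase)
    G = -1j * logm(U)                       # Hermitian generator
    return [np.trace(G @ s).real / 2 for s in (sx, sy, sz)]

# Check on exp(-i theta/2 sigma_z): expect c3 = -theta/2, c1 = c2 = 0.
theta = 0.3
U_z = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
c1, c2, c3 = rotation_coefficients(U_z)
```

The Pauli components are extracted with the trace inner product, using the fact that the Pauli matrices are trace-orthogonal with $\mathrm{Tr}(\sigma_i^2)=2$.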
We estimated higher-order terms in the rotating wave approximation \cite{zeuch2020exact}, which also yield effective $x$-rotations on the order of picoradians, as shown in Appendix~\ref{app:xrot}. The small magnitude of the $x$-rotations indicates that these effective rotations are most likely not a significant source of transfer errors during the qubit transport process. In Fig.~\ref{fig:gates}(d), we show the result of the effective $z$-rotations by plotting the coefficient $c_3$ as a function of $\varepsilon_1$ and $\varepsilon_2$. The results show that in the top right and bottom left quadrants, the amount of phase accumulated is approximately constant for a fixed total time anywhere in the quadrant. This is consistent with the fact that if the qubit is moving within the same dot, without moving too close to the anticrossing, the rate of phase accumulation is approximately constant, corresponding to the qubit frequency of each dot in the rotating frame. The rotating frame is defined with respect to dot A; therefore, the phase accumulation in dot A is completely governed by the Stark shift in dot A, while in dot B it is dominated by the difference in Zeeman splitting, $\Delta E_\mathrm{Z}$. The rate of phase accumulation is then most impacted when a transfer takes place from one dot to another (\textit{i.e.}, $\varepsilon_1$ and $\varepsilon_2$ not in the same dot) or when the transfer occurs near the inter-dot anticrossing, where the electron wavefunction is effectively spread across both quantum dots. The method described in this section characterizes the transfer process as an effective gate, and we found that unitary errors occur as a function of the detuning positions for a fixed total time.
These include a non-zero transfer error in the form of unwanted $x$-rotations, which is nonetheless too small to be measured under the particular experimental conditions, due to the qubit being in the rotating frame and the small magnitude of the spin-flip term in the Hamiltonian. We expect, however, that when the Zeeman splitting matches or exceeds the tunnel coupling, this approximation will no longer hold, and significant spin flips may be observed. \section{Discussion} \label{sec:discuss} In the previous sections, we explored different error mechanisms relevant to the transport process. In this section, we discuss how this information can aid future experiments on coherent transport. \begin{figure}[!ht] \centering \includegraphics[width=0.45\textwidth]{Figure_4_Updated_2.pdf} \caption{Impact of tunnel coupling on Hahn echo coherence times and spin polarization transfer error. (a) shows the effect of tunnel coupling on the $T_2^\mathrm{H}$ spectrum across detuning values between $-3.6~\mathrm{meV}$ and $3.6~\mathrm{meV}$. This simulation is performed at $B_0=1~\mathrm{T}$. (b) considers the transfer of a spin down ground state from dot A to B across $4~\mathrm{meV}$ with varying ramp times. The two curves are calculated for different tunnel couplings, with the red and blue traces calculated for tunnel couplings of $41~\text{\textmu{}}\mathrm{eV}\approx h\times10~\mathrm{GHz}$ and $430~\text{\textmu{}}\mathrm{eV}\approx h\times104~\mathrm{GHz}$ respectively. The yellow star indicates the ramp rate used in the experiments of Ref.~(\onlinecite{yoneda2021coherent}).} \label{fig:control} \end{figure} In the results shown in the previous sections, we set the magnitude of the tunnel coupling to about $430~\text{\textmu{}}\mathrm{eV}\approx h\times104~\mathrm{GHz}$, obtained from experimental fits \cite{yoneda2021coherent}. Its large magnitude has been instrumental in avoiding leakage in the experiment.
In this section, we examine how a different tunnel coupling can impact the results obtained in the earlier sections. In Fig.~\ref{fig:control}(a), we show how the Hahn echo times, $T_2^\mathrm{H}$, change with the magnitude of the tunnel coupling. In Sec.~\ref{sec:noise}, we observed that temporal errors peak at the inter-dot anticrossing and that there exists a sweet spot in detuning where the coherence time is maximal. In this figure [Fig.~\ref{fig:control}(a)], we observe that with a larger tunnel coupling, the minimum $T_2^\mathrm{H}$ time at the inter-dot anticrossing is increased slightly. Even though it remains within the same order of magnitude, this suggests that a larger tunnel coupling can help extend coherence times. We also observe that the sweet spot remains visible, indicating that it can be readily accessed in different tunnel coupling regimes and is a general characteristic of the qubit dispersion. This opens up possibilities for quantum information protocols that make use of the sweet spot while retaining tunability of the tunnel coupling. Next, we show in Fig.~\ref{fig:control}(b) the error rates during transfer as a function of the ramp rate, considering only diabatic effects. Here we investigate diabatic effects that occur as a result of ramping too quickly between the dots \cite{buonacorsi2020simulated}, calculating the state fidelity [Eq.~(\ref{eqn:statefid})] after a single transfer for different ramp rates. Similarly to Fig.~\ref{fig:gates}(b), we show the results of transferring a spin down state from dot A to B across $4~\mathrm{meV}$ with varying ramp times. In other words, this is a measure of adiabaticity; ideally, we want minimal error rates at the chosen ramp rate, thus minimizing the effect of state leakage.
We have already seen that with a large tunnel coupling of $t_\mathrm{c}=430~\text{\textmu{}}\mathrm{eV}\approx h\times104~\mathrm{GHz}$, the error rates are very small ($\sim 10^{-10}$) at the ramp rate of $500~\text{\textmu{}}\mathrm{eV/ns}$ used in the experiment of Ref.~(\onlinecite{yoneda2021coherent}). Here, we expand on that by also showing the results with a much reduced tunnel coupling of $t_\mathrm{c}=41~\text{\textmu{}}\mathrm{eV} \approx h\times10~\mathrm{GHz}$. Comparing the results for $\sim430~\text{\textmu{}}\mathrm{eV}$ (shown in blue) and for $\sim41~\text{\textmu{}}\mathrm{eV}$ (shown in red) makes it obvious that a large tunnel coupling is more advantageous for avoiding diabatic errors: the diabatic errors with the lower tunnel coupling are increased by several orders of magnitude at the same ramp rate. It would not be preferable to lower the ramp rate, since one should ramp quickly across the inter-dot anticrossing in order to minimize the temporal errors. This points to a large tunnel coupling being a key step towards optimizing the transfer process. Other than the tunnel coupling, parameters like spin-orbit coupling strengths and magnetic fields also have an impact on the transport process, but they are either kept constant or are difficult to control in scalable architectures. Recent work also suggests that spin-orbit parameters can be heavily dependent on surface roughness and other characteristics of the device determined during the fabrication process \cite{ruskov2018electron,tanttu2019controlling,ferdous2018interface}. We have shown here that with a large tunnel coupling and operating at or close to the sweet spot in detuning, it is possible to overcome these other effects, which is also substantiated by the experimental results obtained \cite{yoneda2021coherent}.
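The exponential advantage of a large tunnel coupling can be illustrated with the textbook two-level Landau-Zener formula (a crude stand-in for the four-level simulation: it ignores the spin structure, so the absolute numbers differ from the figure, but the scaling with $t_\mathrm{c}^2$ is the point):

```python
import numpy as np

HBAR = 0.6582  # ueV * ns

def lz_diabatic_probability(t_c, ramp_rate):
    # Landau-Zener: P = exp(-2 pi t_c^2 / (hbar * deps/dt)) for an
    # anticrossing of gap 2*t_c (ueV) and a linear ramp (ueV/ns).
    return np.exp(-2 * np.pi * t_c**2 / (HBAR * ramp_rate))

p_small_tc = lz_diabatic_probability(41.0, 500.0)    # ~h x 10 GHz
p_large_tc = lz_diabatic_probability(430.0, 500.0)   # ~h x 104 GHz
```

Because the exponent scales as $t_\mathrm{c}^2$, increasing the tunnel coupling by an order of magnitude suppresses the diabatic probability far more than proportionally.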
Discussing the results of coherent qubit transport in the context of the tunnel coupling is also important because it is a highly tunable parameter \cite{eenink2019tunable,zajac2015reconfigurable,takakura2014single}, and it will be important in scaled-up architectures as well. This is especially the case when we intend to coherently transport qubits in a large-scale structure in the bucket-brigade manner, the scheme that we are primarily concerned with here. In this scheme, electrons are moved across multiple dots, and it is therefore important to understand the tunneling process, which is where most of the errors occur in our system. \section{Summary} \label{sec:conclusion} In this paper, we adopted an eight-level model for our double quantum dot system, which includes the spin, valley and charge degrees of freedom. We examined two different types of errors in spin transfer, temporal and unitary errors. To understand the temporal errors, we analyzed the spectrum of both the $T_2^*$ and $T_2^\mathrm{H}$ times, and estimated that $1/f$ noise with an amplitude of $100~\text{\textmu{}}\mathrm{eV}^2/\mathrm{Hz}$ at $1~\mathrm{Hz}$ can adequately describe the spectrum of coherence times near the inter-dot anticrossing. As for unitary errors, we modeled the transport process as $x$- and $z$-rotations on the Bloch sphere by considering only the two lowest-energy states, and showed that any errors in the form of $x$-rotations are negligible in the rotating frame with large tunnel couplings ($t_\mathrm{c}\gg E_\mathrm{Z}$). Finally, we discussed how the results we presented change with the size of the tunnel coupling, further cementing its importance. To conclude, a large tunnel coupling will be key to minimizing the errors that occur during the transport process, especially spin-flip errors.
Also, fast pulsing will be very helpful for avoiding the region of fast dephasing near the inter-dot anticrossing, where most errors accumulate. Coherent qubit transfer in large-scale systems will allow for non-local operations while increasing the inter-connectivity between the qubits. \begin{acknowledgments} We thank I. Hansen, P. Mai, J. Y. Huang, and C. C. Escott for helpful discussions. We acknowledge support from the Australian Research Council (FL190100167 and CE170100012) and the US Army Research Office (W911NF-17-1-0198). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the US Government. M.K.F., S.Y., J.D.C. and W.G. acknowledge support from the Sydney Quantum Academy. J.Y. acknowledges support from a JST PRESTO grant (JPMJPR21BA). \end{acknowledgments}
\section{Introduction} \label{theIntroduction} Hybrids are mesons that include explicit gluonic degrees of freedom. These can have non-exotic $J^{PC}$ and hence may coexist with heavy quarkonia. The numerous charmonium-like and bottomonium-like ``XYZ'' states discovered since 2003~\cite{Eidelman:2012vu} have inspired the search for hybrids within the charmonium and bottomonium sectors~\cite{Olsen,Olsen2,Godfrey:2008nc,Pakhlova,Close2007}. In Ref.~\cite{Harnett:2012gs} QCD Laplace sum-rules were used to obtain mass predictions for axial vector $(J^{PC}=1^{++})$ charmonium and bottomonium hybrids. The flux tube model predicts the lightest charmonium hybrids at 4.1--4.2~GeV~\cite{Barnes1995}. Lattice QCD~\cite{Perantonis,Liu:2011rn,Liu:2012ze} yields quenched predictions of about 4.0~GeV for the lightest charmonium hybrids, and unquenched predictions of approximately 4.4~GeV for $1^{++}$ charmonium hybrids in particular. Refs.~\cite{Govaerts:1984hc,Govaerts:1985fx,Govaerts:1986pp} comprise the first studies of heavy quark hybrids using QCD sum-rules. Multiple $J^{PC}$, including $1^{++}$, were examined; however, many of the resulting sum-rules exhibited instabilities, leading to unreliable mass predictions. Refs.~\cite{Qiao:2010zh,Berg:2012gd} re-examined the $1^{--}$ and $0^{-+}$ channels respectively, finding that the dimension-six gluon condensate, which was not included in Refs.~\cite{Govaerts:1984hc,Govaerts:1985fx,Govaerts:1986pp}, stabilizes the sum-rules in these channels. Motivated by these results, we have investigated the effects of the dimension-six gluon condensate for axial vector heavy quark hybrids using QCD Laplace sum-rules~\cite{Harnett:2012gs}. The resulting mass predictions are discussed with regard to the nature of the X(3872) and in relation to the charmonium hybrid multiplet structure suggested by recent lattice calculations~\cite{Liu:2012ze}.
\section{Laplace Sum-Rules for Axial Vector Heavy Quark Hybrids} \label{theSumRules} The correlation function used to study axial vector ($J^{PC}=1^{++}$) heavy quark hybrids is given by \begin{gather} \Pi_{\mu\nu}(q)=i\int d^4x \,e^{i q\cdot x}\langle 0\vert T\left[j_\mu(x)j_\nu(0)\right]\vert 0\rangle \label{basic_corr} \\ j_\mu=\frac{g}{2}\bar Q\lambda^a\gamma^\nu\tilde G^a_{\mu\nu}Q\,,~\tilde G^a_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}G^a_{\alpha\beta}\,, \label{current} \end{gather} with $Q$ representing a heavy quark field~\cite{Govaerts:1985fx}. The transverse part $\Pi_{\rm V}$ of \eqref{basic_corr} couples to $1^{++}$ states \begin{equation} \Pi_{\mu\nu}(q)=\left(\frac{q_\mu q_\nu}{q^2}-g_{\mu\nu} \right)\Pi_{\rm V}(q^2)+\frac{q_\mu q_\nu}{q^2}\Pi_{\rm S}(q^2)~. \label{corr_tensor} \end{equation} In Refs.~\cite{Govaerts:1985fx,Govaerts:1984hc} the perturbative and gluon condensate $\langle \alpha \, G^2\rangle=\langle \alpha G^a_{\mu\nu} G^a_{\mu\nu}\rangle$ contributions to the imaginary part of $\Pi_{\rm V}(q^2)$ were calculated to leading order. The Feynman diagrams for these are represented in Fig.~\ref{pert_g2_fig}. \begin{figure}[hbt] \centering \includegraphics[scale=0.45]{pert_g2.eps} \caption{Feynman diagram for the leading-order perturbative and $\langle \alpha \, G^2\rangle$ contributions to $\Pi_{\rm V}$. The current is represented by the $\otimes$ symbol.} \label{pert_g2_fig} \end{figure} For brevity only the imaginary parts of the perturbative and $\langle \alpha \, G^2\rangle$ contributions are given; the full expressions may be found in Ref.~\cite{Harnett:2012gs}. We find \begin{gather} \begin{split} {\rm Im}\Pi_{\rm V}^{\rm pert}(q^2)=& \frac{\alpha m^6}{180\pi^2z^2} \Biggl[ \left(15-35z-22z^2-216z^3\right. \\ &\left.+48z^4\right)\sqrt{z-1}\sqrt{z} +15\left(1-3z+16z^3\right) \\ &\times \Biggl.
\log\left[\sqrt{z-1}+\!\sqrt{z}\right] \Biggr] \,, \label{Im_Pi_pert} \end{split} \\ \begin{split} {\rm Im}\Pi^{\rm GG}_{\rm V}(q^2)&=-\frac{ m^2\langle \alpha G^2\rangle}{18}\left(1+2z\right)\frac{\sqrt{z-1}}{\sqrt{z}}\,, \\ &z=\frac{q^2}{4m^2}\,, \quad z>1 \,. \label{Im_Pi_GG} \end{split} \end{gather} Expressions \eqref{Im_Pi_pert} and \eqref{Im_Pi_GG} are in complete agreement with the corresponding integral representations given in~\cite{Govaerts:1985fx,Govaerts:1984hc}. The dimension-six gluon condensate $\langle g^3 G^3\rangle=\langle g^3 f_{abc} G^a_{\mu\nu} G^b_{\nu\alpha} G^c_{\alpha\mu}\rangle$ contributions are now determined. These were not calculated in Refs.~\cite{Govaerts:1985fx,Govaerts:1984hc}, and are represented by the diagrams in Fig.~\ref{GGG_fig}. The full expression for these contributions is \begin{figure}[hbt] \centering \includegraphics[scale=0.45]{g3ab.eps} \caption{Feynman diagram for the leading-order $\langle g^3 G^3\rangle$ contribution to $\Pi_{\rm V}$. Additional diagrams related by symmetry are not shown. } \label{GGG_fig} \end{figure} \begin{gather} \begin{split} \Pi_{\rm V}^{\rm GGG}(q^2)=&\frac{\langle g^3G^3\rangle}{1152\pi^2} \Biggl[ \frac{3(17z-9)}{z-1}-\frac{3(17-46z+27z^2)}{(z-1)^2} \Biggr. \\&+\left(\frac{2z(2-9z+6z^2)}{(z-1)^2}-\frac{4z\left(3z-1\right)}{z-1}\right) \\ &\Biggl.\times\,\phantom{}_2F_1\left(1, 1; 5/2;z\right) \Biggr]\,. \end{split} \label{Pi_GGG} \end{gather} The imaginary part of \eqref{Pi_GGG} is \begin{gather} \begin{split} {\rm Im}\Pi_{\rm V}^{\rm GGG}(q^2)=&\frac{\langle g^3G^3\rangle}{384\pi }\frac{\sqrt{z-1}}{\sqrt{z}}\left[ \frac{2(1-3z)}{z-1} \right. \\ &\left. +\frac{(2-9z+6z^2)}{(z-1)^2} \right] \,,\quad z>1\,, \label{Im_Pi_GGG} \end{split} \end{gather} which is singular at $z=1$. This poses a problem since the sum-rules will involve integrating \eqref{Im_Pi_GGG} from $z=1$. Below it will be shown how this difficulty can be overcome. 
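The strength of this singularity can be made explicit by expanding Eq.~\eqref{Im_Pi_GGG} about threshold, \begin{align} {\rm Im}\Pi_{\rm V}^{\rm GGG}(q^2) = -\frac{\langle g^3G^3\rangle}{384\pi}\,(z-1)^{-3/2} + {\cal O}\left((z-1)^{-1/2}\right)\,, \end{align} so a naive integration $\int_{1+\eta}dz$ of the leading term diverges as $2/\sqrt{\eta}$. Multiplied by the prefactor $4m^2/\pi$ and the exponential weight $e^{-4m^2\tau}$ appearing in the sum-rules, this divergence is $-\frac{4m^2\langle g^3G^3\rangle}{192\pi^2\sqrt{\eta}}e^{-4m^2\tau}$, which is cancelled precisely by the corresponding $1/\sqrt{\eta}$ term retained in Eq.~\eqref{L_0} below.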
We now formulate the QCD Laplace sum-rules~\cite{Shifman:1978bx,Shifman:1978by}. Using a resonance plus continuum model for the hadronic spectral function \begin{gather} \rho(t) = \rho^{\rm had}(t) + \theta(t-s_0) {\rm Im}\Pi^{\rm QCD}(t)\,, \end{gather} the Laplace sum-rules are given by \begin{gather} {\cal L}_{k}^{\rm QCD}\left(\tau,s_0\right) = \frac{1}{\pi}\int_{t_0}^{\infty} t^k \exp\left[ -t\tau\right] \rho^{\rm had}(t)\; dt \,, \label{final_laplace} \end{gather} where $t_0$ is the hadronic threshold. The quantity on the left-hand side of \eqref{final_laplace} is given by \begin{gather} \begin{split} {\cal L}_k^{\rm QCD}\left(\tau,s_0\right)=&\frac{1}{\tau}\hat B\left[\left(-1\right)^k Q^{2k}\Pi_{\rm V}\left(Q^2\right)\right] \\ &-\frac{1}{\pi} \int_{s_0}^{\infty} t^k \exp \left[-t\tau \right] {\rm Im} \Pi_{\rm V}(t)\; dt \,, \label{laplace} \end{split} \end{gather} where $Q^2=-q^2$ and $s_0$ is the continuum threshold. $\hat B$~is the Borel transform, which is closely related to the inverse Laplace transform~\cite{Bertlmann:1984ih}. The singular terms in \eqref{Pi_GGG} that are irrelevant for \eqref{Im_Pi_GGG} are, however, relevant for the inverse Laplace transform. It is the inclusion of these terms that allows the integration of \eqref{Im_Pi_GGG} from $z=1$ to be defined as a limiting procedure. Thus the imaginary part \eqref{Im_Pi_GGG} alone is insufficient to formulate the sum-rules for $1^{++}$ hybrids, as was also found for $0^{-+}$ hybrids~\cite{Berg:2012gd}. From the results for the leading order perturbative~\eqref{Im_Pi_pert}, $\langle \alpha G^2 \rangle$~\eqref{Im_Pi_GG}, and $\langle g^3 G^3 \rangle$~\eqref{Pi_GGG}~and~\eqref{Im_Pi_GGG} contributions, we find \begin{gather} \begin{split} {\cal L}_0^{\rm QCD}\left(\tau,s_0\right)=&\frac{4m^2}{\pi} \Biggl[ \int_1^{s_0/4m^2} \left[ {\rm Im}\Pi_{\rm V}^{\rm pert}\left(4m^2 x\right) \right. \Biggr. \\ &\left.
+{\rm Im}\Pi_{\rm V}^{\rm GG}\left(4m^2 x\right) \right] \exp{\left(-4m^2\tau x\right)\,dx} \\ &+\lim_{\eta\to 0^+} \left( \int_{1+\eta}^{s_0/4m^2} {\rm Im}\Pi_{\rm V}^{\rm GGG}(4m^2 x) \right. \\ &\times \exp{\left(-4m^2\tau x\right)\,dx} \\ &+\left. \Biggl. \frac{4m^2\langle g^3G^3\rangle}{192\pi^2\sqrt{\eta}}\exp{(-4m^2\tau)} \right) \Biggr] \,, \end{split} \label{L_0} \\ {\cal L}_1^{\rm QCD}\left(\tau,s_0\right)=-\frac{\partial}{\partial\tau}{\cal L}_0^{\rm QCD}\left(\tau,s_0\right)\,. \label{L_1} \end{gather} The mass and coupling are functions of the renormalization scale $\mu$ in the $\overline{\rm MS}$-scheme. After evaluating the $\tau$ derivative in \eqref{L_1}, renormalization group improvement may be implemented by setting $\mu=1/\sqrt{\tau}$~\cite{Narison:1981ts}. \section{Analysis: Mass Predictions for Axial Vector Heavy Quark Hybrids} \label{theAnalysis} To make mass predictions for axial vector heavy quark hybrids we utilize a single narrow resonance model \begin{equation} \frac{1}{\pi}\rho^{\rm had}(t)=f^2\delta\left(t-M^2\right)\,. \label{narrow_res} \end{equation} Inserting \eqref{narrow_res} in \eqref{final_laplace} gives \begin{equation} {\cal L}_k^{\rm QCD}\left(\tau,s_0\right)=f^2 M^{2k}\exp{\left(-M^2\tau\right)}\,, \label{narrow_sr} \end{equation} which can be used to calculate the ground state mass $M$ via the ratio \begin{equation} M^2=\frac{{\cal L}_1^{\rm QCD}\left(\tau,s_0\right)}{{\cal L}_0^{\rm QCD}\left(\tau,s_0\right)}\,. \label{ratio} \end{equation} Before using \eqref{ratio} to calculate the mass, the QCD input parameters must be specified. 
For the charmonium and bottomonium hybrid analyses we use one-loop $\overline{{\rm MS}}$ expressions for the coupling and quark masses: \begin{gather} \begin{split} &\alpha(\mu)=\frac{\alpha\left(M_\tau\right)}{1+\frac{25\alpha\left(M_\tau\right)}{12\pi}\log{\left(\frac{\mu^2}{M_\tau^2}\right)}} \,, \; m_c(\mu)=\overline m_c\left(\frac{\alpha(\mu)}{\alpha\left(\overline m_c\right)}\right)^\frac{12}{25} \,; \\ &\alpha(\mu)=\frac{\alpha\left(M_Z\right)}{1+\frac{23\alpha\left(M_Z\right)}{12\pi}\log{\left(\frac{\mu^2}{M_Z^2}\right)}} \,, \; m_b(\mu)=\overline m_b\left(\frac{\alpha(\mu)}{\alpha\left(\overline m_b\right) }\right)^\frac{12}{23} \,. \end{split} \end{gather} The numerical values of the QCD parameters are given in Table~\ref{QCD_parameters}. \begin{table}[ht] \centering \begin{tabular}{|| l | l ||} \hline \hline $\alpha\left(M_\tau\right)$ & $0.33$ \\ $\overline{m}_c$ & $\left(1.28 \pm 0.02\right) {\rm GeV}$ \\ $\alpha\left(M_Z\right)$ & $0.118$ \\ $\overline{m}_b$ & $\left(4.17 \pm 0.02\right) {\rm GeV}$ \\ $\langle \alpha \, G^2 \rangle$ & $\left(7.5 \pm 2.0\right)\times 10^{-2}{\rm GeV}^4$ \\ $\langle g^3G^3\rangle$ & $\left(8.2\pm 1.0\right){\rm GeV^2}\langle \alpha \, G^2\rangle$ \\ \hline \hline \end{tabular} \caption{QCD parameters. The quark masses, $M_\tau$ and $M_Z$ are taken from Ref.~\cite{pdg}. The values of $\alpha\left(M_\tau\right)$ and $\alpha\left(M_Z\right)$ are from Ref.~\cite{Bethke:2009jm}. Numerical values of the condensates are taken from Ref.~\cite{Narison:2010cg}.} \label{QCD_parameters} \end{table} The sum-rule window is determined following Ref.~\cite{Shifman:1978bx} by requiring that contributions from the continuum are less than 30\% of total and non-perturbative contributions are less than 15\% of total. These criteria are then used to constrain the Borel parameter $\tau$. 
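The one-loop running above is straightforward to evaluate; the following sketch (our own illustrative code, using the central parameter values of Table~\ref{QCD_parameters} and $M_\tau=1.777$\,GeV) implements the charm-sector expressions for $\alpha(\mu)$ and $m_c(\mu)$:

```python
import math

# Central QCD parameter values from the table; M_tau approximated by the PDG value.
alpha_Mtau = 0.33
M_tau = 1.777        # GeV
mc_bar = 1.28        # GeV, MS-bar charm mass at its own scale

def alpha(mu):
    """One-loop MS-bar running coupling in the charm sector,
    alpha(mu) = alpha(M_tau) / (1 + (25 alpha(M_tau)/12pi) log(mu^2/M_tau^2))."""
    return alpha_Mtau / (1.0 + 25.0 * alpha_Mtau / (12.0 * math.pi)
                         * math.log(mu**2 / M_tau**2))

def m_c(mu):
    """One-loop MS-bar running charm mass,
    m_c(mu) = mc_bar * (alpha(mu)/alpha(mc_bar))^(12/25)."""
    return mc_bar * (alpha(mu) / alpha(mc_bar))**(12.0 / 25.0)

# The coupling decreases with increasing mu, and the running mass
# softens accordingly above mc_bar.
print(alpha(mc_bar), alpha(3.0), m_c(3.0))
```

The bottom-sector expressions follow by the obvious substitutions ($25\to23$, $M_\tau\to M_Z$, $\overline m_c\to\overline m_b$).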
This leads to the sum-rule windows of $5.3\,{\rm GeV^2} < 1/\tau <7.3\,{\rm GeV^2}$ for the charmonium hybrid and $7.8\,{\rm GeV^2} < 1/\tau <25.0\,{\rm GeV^2}$ for the bottomonium hybrid. The continuum threshold $s_0$ is optimized by first determining the smallest value of $s_0$ for which the ratio \eqref{ratio} stabilizes (exhibits a minimum) within the respective sum-rule windows. Then the optimal value is fixed by the $s_0$ which has the best fit to a constant within the sum-rule window. The mass prediction \eqref{ratio} is shown for hybrid charmonium in Fig.~\ref{charm_opt} and for hybrid bottomonium in Fig.~\ref{bottom_opt}. \begin{figure}[hbt] \centering \includegraphics[scale=0.85]{charm_mass_ratio.eps} \caption{The ratio ${\cal L}_1^{\rm QCD}\left(\tau,s_0\right)/{\cal L}_0^{\rm QCD}\left(\tau,s_0\right)$ for hybrid charmonium is shown as a function of the Borel scale $1/\tau$ for the optimized value $s_0=33\,{\rm GeV^2}$ (solid curve). The ratio is also shown for $s_0=38\,{\rm GeV^2}$ (upper dotted curve), $s_0=28\,{\rm GeV^2}$ (lower dotted curve) and $s_0\to\infty$ (uppermost dashed curve). Central values of the QCD parameters have been used.} \label{charm_opt} \end{figure} \begin{figure}[hbt] \centering \includegraphics[scale=0.85]{bottom_mass_ratio.eps} \caption{The ratio ${\cal L}_1^{\rm QCD}\left(\tau,s_0\right)/{\cal L}_0^{\rm QCD}\left(\tau,s_0\right)$ for hybrid bottomonium is shown as a function of the Borel scale $1/\tau$ for the optimized value $s_0=150\,{\rm GeV^2}$ (solid curve). For comparison the ratio is also shown for $s_0=170\,{\rm GeV^2}$ (upper dotted curve), $s_0=130\,{\rm GeV^2}$ (lower dotted curve) and $s_0\to\infty$ (uppermost dashed curve). Central values of the QCD parameters have been used.} \label{bottom_opt} \end{figure} We predict the masses of axial vector charmonium and bottomonium hybrids to be $5.13\pm0.25\,{\rm GeV}$ and $11.32\pm0.32\,{\rm GeV}$, respectively. 
The uncertainties are due to the QCD parameter uncertainties given in Table~\ref{QCD_parameters} and are dominated by variations in $\langle \alpha \, G^2 \rangle$. This is in contrast to the pseudoscalar where variations in $\langle g^3 G^3 \rangle$ dominate~\cite{Berg:2012gd}. These predictions for axial vector heavy quark hybrids are in agreement with those of Refs.~\cite{Govaerts:1984hc,Govaerts:1986pp}, suggesting that the effects of $\langle g^3 G^3 \rangle$ are less important for the $1^{++}$ channel than for the $1^{--}$ and $0^{-+}$ channels. \section{Conclusions} \label{theConclusion} In Ref.~\cite{Harnett:2012gs} we have studied $J^{PC}=1^{++}$ heavy quark hybrids using QCD sum-rules. For the first time we have calculated the contributions from $\langle g^3 G^3 \rangle$, which were found to be important for the $1^{--}$~\cite{Qiao:2010zh} and $0^{-+}$~\cite{Berg:2012gd} channels. We find that $\langle g^3 G^3 \rangle$ has less effect on the $1^{++}$ channel, resulting in mass predictions of $5.13\pm0.25\,{\rm GeV}$ for hybrid charmonium and $11.32\pm0.32\,{\rm GeV}$ for hybrid bottomonium, in agreement with the range of predictions given in Refs.~\cite{Govaerts:1984hc,Govaerts:1986pp}. The X(3872) has possible $J^{PC}$ assignments of $1^{++}$ or $2^{-+}$~\cite{Abulencia:2006ma,Abe:2005iya}, but $1^{++}$ is strongly favoured~\cite{Brambilla:2010cs}. In Ref.~\cite{Li:2004sta} it was suggested that the X(3872) is a hybrid, but this interpretation has been largely ruled out since the flux-tube~model~\cite{Barnes1995} and lattice QCD~\cite{Perantonis,Liu:2011rn,Liu:2012ze} predict that the lightest charmonium hybrids have masses significantly greater than that of the X(3872). If it is shown to have $J^{PC}=1^{++}$, our mass prediction of $5.13\,\rm{ GeV}$ is in agreement with the results of other theoretical approaches that disfavour a charmonium hybrid interpretation of the X(3872). 
In Ref.~\cite{Liu:2012ze} it is suggested that $0^{-+}$ and $1^{--}$ are members of a ground state charmonium hybrid multiplet, while $1^{++}$ is a member of a multiplet of excited charmonium hybrids. The present result and those of Refs.~\cite{Qiao:2010zh,Berg:2012gd} seem to be in qualitative agreement with this multiplet structure, although the mass splittings are significantly larger than those of Ref.~\cite{Liu:2012ze}. Future work to update remaining unstable sum-rule channels in Refs.~\cite{Govaerts:1985fx,Govaerts:1984hc,Govaerts:1986pp} to include the effects of $\langle g^3 G^3 \rangle$ would clarify the QCD sum-rule predictions for the spectrum of charmonium hybrids. \bigskip \noindent {\bf Acknowledgements:} We are grateful for financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC).
\section{#1} \setcounter{equation}{0}} \newcommand{ \hfill (\thesection.\arabic{equation}\alph{letter})}{ \hfill (\thesection.\arabic{equation}\alph{letter})} \newcommand{{(\Box+e^2\rho^2)}}{{(\Box+e^2\rho^2)}} \newcommand{\eql}{\nonumber & \hfill (\thesection.\arabic{equation}\alph{letter}) \cr \addtocounter{letter}{1}} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\setcounter{letter}{1} \begin{eqnarray}}{\setcounter{letter}{1} \begin{eqnarray}} \newcommand{\addtocounter{equation}{1} \end{eqnarray}}{\addtocounter{equation}{1} \end{eqnarray}} \newcommand{\nonumber \\}{\nonumber \\} \newcommand{\req}[1]{Eq.(\ref{#1})} \newcommand{\reqs}[1]{Eqs.(\ref{#1})} \newcommand{\,\,\,\,\hbox to 30pt{\rightarrowfill}{\,\,\,\,\hbox to 30pt{\rightarrowfill} \,\,\,\,} \newcommand{\,\,\,\hbox to 20pt{\rightarrowfill}{\,\,\,\hbox to 20pt{\rightarrowfill} \,\,\,} \newcommand{\oli}[1]{\overline{#1}} \newcommand{\til}[1]{\tilde{#1}} \newcommand{\partial}{\partial} \newcommand{\til{z}_+}{\til{z}_+} \newcommand{\til{z}_-}{\til{z}_-} \newcommand{\ajo}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\ava}[2]{\frac{\delta #1}{\delta #2}} \newcommand{\njo}[1]{\frac{\partial #1}{\partial \til{z}_-}} \newcommand{\szs}[1]{\frac{\partial \:}{\partial z_{#1}}} \newcommand{\tr}[1]{\frac{\partial \phi}{\partial z_{#1}}} \newcommand{\dro}[1]{\frac{\partial \rho}{\partial z_{#1}}} \newcommand{\ntr}[1]{\frac{\partial^2 \phi}{\partial {z_{#1}}^2}} \newcommand{\str}[1]{\frac{\partial^2 #1}{\partial z_+ \partial z_-}} \newcommand{\pdfi}[1]{\frac{d #1}{d \phi}} \newcommand{\sdfi}[1]{\frac{d^2 #1}{d \phi^2}} \newcommand{{1\over2}}{{1\over2}} \newcommand{{\cal G}}{{\cal G}} \newcommand{{\cal F}}{{\cal F}} \newcommand{\til{{f}}}{\til{{f}}} \newcommand{\til{{g}}}{\til{{g}}} \newcommand{\til{\rho}}{\til{\rho}} \newcommand{\til{\phi}}{\til{\phi}} 
\newcommand{\til{C}}{\til{C}} \newcommand{\til{F}}{\til{F}} \newcommand{\til{\cal G}}{\til{\cal G}} \newcommand{\oli{V}}{\oli{V}} \newcommand{\oli{R}}{\oli{R}} \newcommand{\oli{g}}{\oli{g}} \newcommand{\oli{\phi}}{\oli{\phi}} \newcommand{\hat{\Pi}}{\hat{\Pi}} \newcommand{A(t,x)}{A(t,x)} \newcommand{B(t,y)}{B(t,y)} \newcommand{(t,x)}{(t,x)} \newcommand{(t,y)}{(t,y)} \newcommand{\abz}[2]{\ajo{#1}{#2(t,z)}} \newcommand{\abpi}[2]{\ajo{#1}{\Pi_{#2}(t,z)}} \newcommand{\kxz}[2]{\ajo{{\cal #1}(x)}{#2(z)}} \newcommand{\lxz}[2]{\ajo{{\cal #1}(x)}{\Pi_{#2}(z)}} \newcommand{{\vec{x}}}{{\vec{x}}} \newcommand{{\vec{y}}}{{\vec{y}}} \newcommand{Z_{{FP}}}{Z_{{FP}}} \newcommand{Z_{{F}}}{Z_{{F}}} \newcommand{Z_{{R}}}{Z_{{R}}} \newcommand{Z_{{OP}}}{Z_{{OP}}} \newcommand{Z_{EKT}}{Z_{EKT}} \newcommand{{\varphi^\dagger}}{{\varphi^\dagger}} \newcommand{{\dot\tau}}{{\dot\tau}} \newcommand{{\dot\alpha}}{{\dot\alpha}} \newcommand{{H_{ADM}}}{{H_{ADM}}} \newcommand{{{\cal L}_\xi}}{{{\cal L}_\xi}} \newcommand{\bibitem}{\bibitem} \begin{document} \begin{titlepage} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \renewcommand{\baselinestretch}{1.3} \medskip \hfill UNB Technical Report 05-03\\[20pt] \begin{center} {\large {\bf The Generalized Ricci Flow for 3D Manifolds with One Killing Vector }} \\ \medskip {} \medskip \renewcommand{\baselinestretch}{1} {\bf J. Gegenberg $\dagger$ G. Kunstatter $\sharp$ \\} \vspace*{0.50cm} {\sl $\dagger$ Dept. of Mathematics and Statistics and Department of Physics, University of New Brunswick\\ Fredericton, New Brunswick, Canada E3B 5A3\\ } {\sl $\sharp$ Dept. of Physics and Winnipeg Institute of Theoretical Physics, University of Winnipeg\\ Winnipeg, Manitoba, Canada R3B 2E9\\ } \end{center} \renewcommand{\baselinestretch}{1} \begin{center} {\bf Abstract} \end{center} {\small We consider 3D flow equations inspired by the renormalization group (RG) equations of string theory with a three dimensional target space. 
By modifying the flow equations to include a U(1) gauge field, and adding carefully chosen De Turck terms, we are able to extend recent 2D results of Bakas to the case of a 3D Riemannian metric with one Killing vector. In particular, we show that the RG flow with De Turck terms can be reduced to two equations: the continual Toda flow solved by Bakas, plus its linearization. We find exact solutions which flow to homogeneous but not always isotropic geometries. } \vfill \hfill September 2005 \\ \end{titlepage} \section{Introduction} The Ricci flow of d-dimensional manifolds is interesting because of its relationship to the renormalization group equations of generalized 2D sigma models with d-dimensional target space. In two spacetime dimensions, Ricci flow also provides a proof of the uniformization theorem~\cite{poincare}, which states that every closed orientable two dimensional manifold with handle number 0, 1, or $>1$ admits {\it uniquely} the constant curvature geometry with positive, zero, or negative curvature, respectively. Bakas~\cite{bakas} has shown that the 2D Ricci flow equations in conformal gauge provide a continual analogue of the Toda field equations. Using this algebraic approach he was able to write down the general solution. The potential importance of a 3D uniformization theorem is particularly evident in the context of (super)membrane physics and three-dimensional quantum gravity, where one should be able to perform path-integral quantization via a procedure similar to that in two dimensions. Unfortunately, there is no uniformization theorem in three dimensions, only a conjecture due to W.P. Thurston~\cite{thurston,scott}. Recently there has been speculation that Perelman~\cite{perelman} has overcome some roadblocks in Hamilton's program to prove the conjecture using the `Ricci flow' \cite{hamilton1,caochow}. It is therefore important to understand in detail the properties of this flow. 
In the following, we follow up on a suggestion by Bakas to use his 2D results in order to analyze the flow equations for 3D manifolds with a single Killing vector. This provides a tractable midisuperspace approach which can be systematically studied in the context of the `stringy flow' first considered in \cite{stringy}. We will show that this flow reduces to the infinite dimensional generalization of the Toda equation for the conformal factor of the invariant 2D submanifold, plus a linear equation for the scale factor of the extra dimension. Note that since the latter scale factor depends on the coordinates of the invariant subspace, our manifolds are not simple direct products. In addition, we will analyze two exact analytic solutions in detail and show that they have the expected behaviour. The paper is organized as follows. Section 2 reviews the 2D flow equations and Bakas' results; Section 3 reviews the stringy flow of \cite{stringy}, but with the De Turck modification \cite{deturck}. The De Turck modification contains a vector field $\xi_i$, and we show that if we choose this vector field as a linear combination of two vector fields, one of which is proportional to the gradient of the dilaton field, then the dilaton can be decoupled from the flow of the remaining fields. In Section 4 we discuss the flow for a metric ansatz where one of the coordinates is in the direction of a Killing vector field, and the remaining part of the metric is in the form of a conformal 2D metric. We use the second part of the De Turck vector field to preserve this form of the metric throughout the flow. In order for the flow to be self-consistent, the U(1) vector field in the stringy flow must be fixed in terms of the functions that occur in the metric tensor. The flow then reduces to two equations for the two metric degrees of freedom. 
One of these is the `continual Toda equation' \cite{bakas} for the conformal factor of the 2D geometry orthogonal to the Killing vector field, and the other, for the component of the metric in the direction of the Killing vector field, is the linearization of the continual Toda equation. Section 5 presents specific solutions and Section 6 ends with conclusions and prospects for future work. \section{The 2D Case} We now summarize the methodology and results of Bakas~\cite{bakas}, since they play a crucial role in the following. The Ricci flow equations, for an arbitrary 2-metric $g_{ij}$, are: \begin{equation} {\partial g_{ij}\over \partial t} = -R_{ij} + \nabla_i \xi_j + \nabla_j \xi_i. \label{2D flow} \end{equation} The last two terms (the so-called ``De Turck'' terms) incorporate the effects of all possible diffeomorphisms and can be chosen arbitrarily in order to simplify the equations and/or optimize convergence. Originally, De Turck \cite{deturck} chose the vector field \begin{equation} \xi^i:=g^{jk}\left(\Gamma^i_{jk}-\Delta^i_{jk}\right), \end{equation} where $\Gamma^i_{jk}$ is the Christoffel connection with respect to the Riemannian metric $g_{ij}$ and $\Delta^i_{jk}$ is a fixed `background connection'. The purpose was to replace the Ricci flow, which is only weakly parabolic, by an equivalent flow which is strongly parabolic. Bakas chose to work in the conformal gauge: \begin{equation} ds^2 = g_{ij}dx^idx^j= {1\over2}\exp(\Phi)(dx^2+dy^2). \end{equation} In this gauge there is no need to add De Turck terms and the flow takes the form of a non-linear ``heat equation'': \begin{equation} {\partial\over\partial t}e^\Phi = \nabla^2\Phi. \label{2D heat equation} \end{equation} The Toda equations describe the integrable interactions of a collection of two dimensional fields $\Phi_i(x,y)$ coupled via the Cartan matrix $K_{ij}$: \begin{equation} \sum_jK_{ij}e^{\Phi_j(x,y)}= \nabla^2\Phi_i(x,y). 
\end{equation} Bakas argues that Eq.(\ref{2D heat equation}) is a continual analogue of the above, with the Cartan matrix replaced by the kernel: \begin{equation} K_{ij}\to K(t,t')={\partial\over\partial t}\delta(t,t'). \end{equation} This leads to a general solution to (\ref{2D heat equation}) in terms of a power series around the free field expanded in path ordered exponentials. Although the resulting expression is difficult to work with explicitly, it does provide a formal complete solution to the 2D flow equations. In the next sections we will show that a similar formal solution can also be found for three dimensional metrics with at least one Killing field. \section{3D Flow Equations} We consider here a generalization of the Ricci flow, in which, besides the metric $g_{ij}$, there are additional fields which flow, consisting of a dilaton $\phi$, a gauge two-form potential $B_{ij}$ with field strength $H_{ijk}$ and finally, a $U(1)$ gauge field with potential 1-form $A_i$ and corresponding field strength $F_{ij}$ which couples as a `Maxwell-Chern-Simons theory'. Including De Turck terms plus gauge terms for the flow of the non-metric fields, the flow is \begin{eqnarray} {\dot g}_{ij}&=&-2\left(R_{ij}+2\phi_{|ij}-(\epsilon_FF_i{}^k F_{jk}+\frac{\epsilon_H}{4} H_{ikl}H_j{}^{kl})\right) +L_\xi g_{ij}, \label{gflow}\\ {\dot A}_i&=&-\left(e^{2\phi}\nabla_j(e^{-2\phi}F_i{}^j)+\frac{e\epsilon_F}{2}\eta_i{}^{jk}F_{jk}\right) +L_\xi A_i + \partial_i\Lambda,\label{Aflow}\\ {\dot B}_{ij}&=&e^{2\phi}\nabla_k(e^{-2\phi}H^k{}_{ij})+L_\xi B_{ij} +\partial_i\Lambda_j - \partial_j\Lambda_i,\label{Bflow}\\ \dot\phi&=&-\chi +\Delta\phi-|\nabla\phi|^2+\frac{\epsilon_F}{2}F^2+\frac{\epsilon_H}{12}H^2+L_\xi\phi. \label{dilflow} \end{eqnarray} In the above, $L_\xi$ denotes the Lie derivative with respect to the (covariant) vector field $\xi_i$. 
The Lie derivative terms are present because we are flowing geometrical objects that are not coordinate invariant, so their time derivatives should only be determined up to arbitrary gauge transformations at each point along the flow. Similarly, the terms containing $\Lambda$ and $\Lambda_i$ correspond to arbitrary gauge transformations on the gauge fields. By choosing the gauge and coordinate transformation terms judiciously, we are able to simplify the equations considerably. This flow is motivated by two considerations. First, as shown in \cite{stringy}, all of the Thurston geometries are solutions of the equations of motion of this theory for various values of the parameters $\chi,\epsilon_H,\epsilon_F,e$, as well as the other fields. In particular, the addition of the Maxwell term alone ($e=0$) yields $S^2\times E^1$, $H^2\times E^1$ and $Sol$ as solutions. Moreover, there exists a generalized Birkhoff theorem which guarantees that these are the only solutions when $\phi=constant$ and $A\neq0$. With $e\neq0$, one finds that the remaining Thurston geometries $Nil$ and $SL(2,R)$ are also solutions. As argued in \cite{stringy} it seems plausible that these are the only solutions, but to date no rigorous proof exists. The second motivation comes from string theory. In particular, the RG flow for a non-linear sigma model with a 4D Kaluza-Klein target space resembles the flow above, with the $A_i$ potential originating as the twist potential of the 4D Kaluza-Klein metric. The details of this are being investigated elsewhere \cite{gegsun}. We choose $\xi_i=k_i+2\nabla_i\phi$ and let $\Lambda=-2A_j\phi^{|j}$ and $ \Lambda_i = 2B_{ji}\phi^{|j}$, where $k_i$ is as yet arbitrary. 
With these choices the dilaton is completely eliminated from the flow equations for the metric and gauge fields: \begin{eqnarray} \dot g_{ij}&=&-2\left(R_{ij}-(\epsilon_FF_i{}^k F_{jk}+\frac{\epsilon_H}{4} H_{ikl}H_j{}^{kl})\right) +L_k g_{ij}, \label{gflowt}\\ \dot A_i&=&-\left(\nabla_jF_i{}^j+\frac{e\epsilon_F}{2}\eta_i{}^{jk}F_{jk}\right) +L_k A_i+\partial_i \lambda,\label{Aflowt}\\ \dot B_{ij}&=&\nabla_k H^k{}_{ij}+L_k B_{ij}+\partial_i\lambda_j-\partial_j \lambda_i,\label{Bflowt}\\ \dot\phi&=&-\chi+\Delta\phi-2 |\nabla\phi|^2+\frac{\epsilon_F}{2}F^2+\frac{\epsilon_H}{6}H^2+L_k\phi. \label{dilflowt} \end{eqnarray} The arbitrary vector $k^i$ and gauge parameters $\lambda,\lambda_i$ indicate that we are still free to add further De Turck and gauge terms to the equations. We will use this freedom later to simplify the equations that result from a particular {\it ansatz}. \section{A Particular Case with One Killing Vector Field} Henceforth we set $B_{ij}$ identically equal to $0$, which is consistent with the flow equations. We also consider the case $e=0$ (no Chern-Simons term). We assume the metric to have a single Killing vector and to be manifestly hypersurface orthogonal (i.e. diagonal): \begin{equation} ds^2=e^\Phi(dx^2+dy^2)+e^\sigma dw^2, \end{equation} We also choose the following {\it ansatz} for the vector potential: \begin{equation} A_i=[e^{\sigma/2},0,0]. \end{equation} Consistency of the above {\it ansatz} requires that the flow equations preserve the diagonal nature of the metric. 
It turns out that this can be accomplished by choosing the vector field $k_i$ as \begin{equation} k_i=-{1\over2}\partial_i\sigma. \end{equation} With these choices the flow equations simplify to: \begin{eqnarray} \dot{g}_{xx}=e^\Phi\dot{\Phi}&=& \nabla^2 \Phi + {1\over2}(1+\epsilon_F)(\partial_x \sigma)^2, \nonumber\\ \dot{g}_{yy}=e^\Phi\dot{\Phi}&=& \nabla^2 \Phi + {1\over2}(1+\epsilon_F)(\partial_y \sigma)^2, \nonumber\\ \dot{g}_{xy}=0&=& {1\over2}(1+\epsilon_F) \partial_x\sigma\partial_y\sigma, \end{eqnarray} \begin{eqnarray} \dot{A}_x &=& \dot{A}_y = 0,\nonumber\\ \dot{A}_w&=&\epsilon_F \nabla^2 e^{\sigma/2} + \left({1+\epsilon_F}\right) \left(\partial\sigma\right)^2. \end{eqnarray} In the above, $\nabla^2$ denotes the flat space Laplacian. We now fix $\epsilon_F=-1$, in which case the flow boils down to two simple partial differential equations. The first is \begin{equation} \partial_t e^\Phi=\nabla^2 \Phi,\label{toda} \end{equation} which is the `continual Toda equation' \`a la \cite{bakas}. The other flow is the linearization of the continual Toda flow: \begin{equation} e^\Phi\partial_t e^{-\sigma/2}= -\nabla^2 e^{-\sigma/2}.\label{linToda} \end{equation} To the best of our knowledge, there is no `simpler flow' constructed from the Ricci-De Turck flow alone, without other fields, which can self-consistently flow the metric preserving the manifestly static form. Note that for any given solution $\Phi$ of (\ref{toda}), there exists a corresponding solution for $\sigma$: \begin{equation} e^{-\sigma/2} = \Phi(x,y,-t) + \chi(x,y), \end{equation} where $\chi(x,y)$ is any harmonic function on the $x,y$ subspace, i.e.\ satisfying $\nabla^2\chi=0$. \section{Exact Solutions of the Flow} We first examine a non-trivial flow, namely the sausage solution of Bakas \cite{bakas} (also called the Rosenau solution in the mathematical literature \cite{caochow}). 
This is an exact solution of the continual Toda equation of the form: \begin{equation} e^\Phi={2\sinh{[2\gamma t]}\over \gamma(\cosh{[2\gamma t]}+\cosh{(2 y)})}. \end{equation} In this case: \begin{equation} e^{-\sigma/2}= \ln\left|{2\sinh{[2\gamma t]}\over \gamma(\cosh{[2\gamma t]}+\cosh{(2 y)})}\right|, \end{equation} where we have eliminated an imaginary term from $e^{-\sigma/2}$ by using the freedom to shift by a harmonic function. In the limit as $t\to\infty$, $e^\Phi\to 2/\gamma$ and $e^\sigma \to \left(\ln|2/\gamma|\right)^{-2}$, so that the Ricci tensor goes to zero; in this limit, the geometry is flat. On the other hand, in the limit $t\to 0^+$, $e^\Phi\to 2t/\cosh^2{y}$. In this limit, we find that the Ricci scalar $R\sim\frac{1}{t}$. So, if we flow the highly curved non-homogeneous metric with initial value at $t=\epsilon>0$ \begin{equation} ds^2=\left(\ln{\frac{2\epsilon}{\cosh^2{y}}}\right)^{-2}dw^2+\frac{2\epsilon}{\cosh^2(y)}\left(dx^2+dy^2\right), \end{equation} we end up at $t\to\infty$ with the flat metric. This is consistent with Thurston's conjecture. The second type of solution is of the Liouville type. We set \begin{equation} e^{\Phi(x,y;t)}=T(t) e^{\psi(x,y)}. \end{equation} Now for $t\geq 0$, we find that \begin{eqnarray} T(t)&=&\beta t,\nonumber \\ \nabla^2\psi&-&\beta e^\psi=0, \end{eqnarray} where $\beta$ is a separation constant. The second of the above equations is the Liouville equation, so the two dimensional part of the metric, $e^\Phi(dx^2+dy^2)$, has constant negative curvature (for $t\geq 0$). Again we choose $e^{-\sigma/2}=\Phi$, so that \begin{equation} e^\sigma=\left[\log {\beta t}+\psi(x,y)\right]^{-2}. \end{equation} The separation constant $\log\beta$ can be absorbed into $\psi$ without loss of generality. Hence the metric is \begin{equation} ds^2=\left[\log{t}+\psi(x,y)\right]^{-2} dw^2+\beta t e^{\psi(x,y)}(dx^2+dy^2).\label{lioumetric} \end{equation} The quantity $\psi(x,y)$ is a solution of the Liouville equation. 
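The sausage profile above can be cross-checked numerically; the following sketch (ours, not part of the original derivation) verifies by finite differences that it satisfies the continual Toda equation pointwise. Note that, with the profile exactly as printed, it is the combination $\partial_t e^\Phi + \nabla^2\Phi$ that vanishes, i.e.\ the solution corresponds to (\ref{toda}) with the flow parameter reversed in sign; conventions for the direction of the flow time differ between references.

```python
import math

gamma = 0.7  # arbitrary positive parameter of the sausage solution

def ePhi(t, y):
    """Conformal factor e^Phi of the sausage solution (x-independent)."""
    return 2.0 * math.sinh(2.0 * gamma * t) / (
        gamma * (math.cosh(2.0 * gamma * t) + math.cosh(2.0 * y)))

def residual(t, y, h=1e-4):
    """Central-difference estimate of d/dt e^Phi + d^2/dy^2 Phi
    (x-derivatives vanish since Phi does not depend on x)."""
    dt_ePhi = (ePhi(t + h, y) - ePhi(t - h, y)) / (2.0 * h)
    Phi = lambda tt, yy: math.log(ePhi(tt, yy))
    d2y_Phi = (Phi(t, y + h) - 2.0 * Phi(t, y) + Phi(t, y - h)) / h**2
    return dt_ePhi + d2y_Phi

# The residual vanishes to discretization accuracy at generic points t > 0.
print(residual(0.8, 0.3), residual(1.5, -1.1))
```

An analogous pointwise check applies to the Liouville-type solution once a particular solution $\psi(x,y)$ of the Liouville equation is chosen.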
If $t\geq 0$, then the flow starts from some highly curved non-homogeneous metric near $t=0$. As $t\to \infty$, we have \begin{eqnarray} R_{AB}&\sim&-\frac{1}{2t}g_{AB},\nonumber \\ R_{ww}&\sim& 0, \end{eqnarray} with $A,B,...=x,y$. Hence the geometry is asymptotically the homogeneous, but anisotropic, geometry $H^2\times E^1$. Thus the flow is consistent with the Thurston conjecture. \section{Conclusions} We have shown that the modified Ricci flow equations (\ref{gflowt})--(\ref{dilflowt}) for 3D metrics with at least one Killing vector can be integrated in precisely the same manner as the 2D equations, at least for the special case $\epsilon_F=-1$. In addition to extending this analysis to other values of the parameters in the action, and hence to other topologies, it is interesting to speculate whether these techniques could work for more general 3D metrics. Consider, without loss of generality, a diagonal metric \begin{equation} ds^2=e^{\Phi_1(x;t)}(dx^1)^2+e^{\Phi_2(x;t)}(dx^2)^2+e^{\Phi_3(x;t)}(dx^3)^2, \end{equation} where the functions $\Phi_i(x;t)$ depend on all 3 coordinates $x^i$. The resulting bare Ricci flow is again not manifestly elliptic, and the equations have non-trivial off-diagonal terms on the RHS that make direct integration difficult. Since in three dimensions any metric can be made diagonal with a suitable coordinate transformation, it is reasonable to assume that there exists a modified flow that ensures that diagonal metrics evolve into diagonal metrics. We have not yet succeeded in constructing this modified flow, but if it does exist, it is possible that the resulting three flow equations for the three scale factors would take a form similar to what we have found above, albeit with non-trivial coupling. It may therefore provide a basis for solving the 3D flow equations in a more general setting. \noindent {\bf Acknowledgements}: We wish to thank the Perimeter Institute, where much of this work was done, for its hospitality and support. 
This work was supported by the Natural Sciences and Engineering Research Council of Canada. We would also like to thank Viqar Husain, Vardarajan Suneeta and Eric Woolgar for very useful discussions.
\section{Introduction} Young associations and open clusters are very useful laboratories to study star formation and the early stages of stellar evolution, as they enable us to probe a specific set of properties for a group of stars sharing the same age and composition but spanning a range of masses. Time-series photometric observations of such systems, in particular, can be used to probe a host of important phenomena, including accretion (for the youngest star forming regions only), pulsations (from pre- to post-main sequence), rotation and activity. These observations can also be used to detect and characterise eclipsing binaries (EBs), and thus to constrain evolutionary models by measuring the fundamental properties (masses, radii, luminosities and temperatures) of their component stars in a model-independent manner. Finally, they can also be used to search for planetary transits, potentially offering a window into the earliest stages of the evolution of planetary systems. Over the past decade, these science goals have motivated a number of major projects dedicated to the photometric monitoring of star forming regions and young open clusters. An exhaustive review of these projects and their achievements would be excessively long, but notable examples include the Monitor project \citep{aig+07}, which monitored 9 star forming regions and open clusters aged $<200$\,Myr using 2--4\,m telescopes worldwide. Monitor provided an unprecedented sample of rotation period measurements for young low-mass stars \citep[see e.g.][and references therein]{irw+09b,mor+13} and led to the detection of the lowest mass and youngest stellar eclipsing binary known \citep[][see Fig.~\protect\ref{fig:mrrel}]{irw+07b}. More recently, the Palomar Transient Factory Orion project used a 1.2\,m telescope and focused on the 25\,Ori region, detecting a further 7 pre-main sequence (PMS) EBs \citep{van+11} as well as what may be the first exoplanet transiting a PMS star \citep{van+12}. 
An important limitation of these projects, however, has been the time-sampling achievable from the ground. By contrast, the CoRoT and MOST satellites were able to observe the NGC\,2264 star forming region continuously for several weeks at a time, first in 2008 and again in 2011/2012, this time as part of a coordinated campaign using Spitzer, Chandra, VLT-FLAMES, CFHT and a number of other ground-based telescopes. The data from this program are still being analysed, but they have already provided fascinating insights into the rapid evolution of PMS pulsators \citep{gru+12,zwi+13a,zwi+13b} and the astonishing diversity of the variability of classical T Tauri stars \citep{ale+10}. The continuous time-sampling enabled a more robust determination of the rotation period distribution for the cluster \citep{aff+13}, and led to the detection of tens of EBs, including a dozen which are likely cluster members, one of which is the first low-mass EB to show evidence of a circumbinary disk \citep{gil+13}. Unfortunately, NGC\,2264 was the only rich star forming region or open cluster located within the CoRoT `eyes' (or visibility zone). The field observed by Kepler during the nominal mission contained only a handful of moderately old open clusters, although these have already yielded exciting results on rotation \citep{mei+11}, pulsations \citep{mig+12} and the frequency of planets around cluster versus field stars \citep{mei+13}. The possibility of a two-wheel Kepler mission with decreased photometric precision, but increased flexibility in terms of pointing, aperture selection and observing strategy, represents a unique opportunity to monitor other key young associations and open clusters. A major advantage of Kepler over CoRoT and previous ground-based programs is its extremely wide field-of-view. This means that we can observe relatively nearby, spatially extended regions such as Taurus, the Pleiades, and the Hyades. 
Previous projects have avoided them because of the need to tile observations, but Kepler can observe a significant fraction of their members in a single shot. The very proximity of these regions not only makes them cornerstones of our understanding of early stellar evolution, it also greatly facilitates any follow-up observations. \section{The target clusters} Figure~\ref{fig:clusters} shows the spatial and kinematic distribution of young moving groups, associations, and clusters in the solar neighbourhood, colour-coded according to age. The richest of these are prime targets for the proposed program, but we also include a few particularly important clusters at larger distances and/or older ages. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{rice_clusters.png} \caption{Spatial (left) and kinematic (right) distribution of nearby young associations (from \protect\citealt{ric+11}).}\label{fig:clusters} \end{figure} In the rest of this section, we discuss the most likely targets for this program, but we stress that this target list is neither exhaustive nor final, and would need to be adjusted depending on the total time available, constraints on the satellite pointing, preliminary indications of the photometric performance of Kepler in two-wheel mode, and detailed tallies of the latest membership information for each target. First, Kepler has the unique ability to observe, in a single shot, the entire field of most of the nearest and richest star forming regions, such as the Orion \citep{bal08}, $\rho$\,Ophiuchus \citep{wil+08}, Chamaeleon \citep{luh08} and Taurus-Auriga \citep{ken+08} complexes. In Orion alone, we could observe several thousand stars spanning an age range from 1 to $\sim 12$\,Myr. The other three are less rich, with a few hundred members each (down to $V\sim 17$), but even more nearby ($\sim150$\,pc), and their spatially sparse nature has so far hindered intensive photometric monitoring campaigns. 
The two Northern regions are extremely well-studied already, and the two Southern ones are included in the GES. Combined with the existing CoRoT observations of NGC\,2264, and with the infrared variability information collected with Spitzer as part of the YSOVAR project ({\tt ysovar.ipac.caltech.edu}), Kepler observations of these three regions would provide fascinating insights into the evolution of accretion, rotation, activity and structure in the first few Myr of stellar evolution. Another very important group of targets is the `intermediate age clusters', ranging from $\sim 30$ to $\sim 150$\,Myr, including for example: IC\,4665 \citep[][27\,Myr, 350\,pc]{lod+11}, NGC\,2547 \citep[][30\,Myr, 1400\,pc]{jef+04}, $\alpha$\,Per \citep[][80\,Myr, 140\,pc]{lod+12}, Blanco\,1 \citep[][90\,Myr, 130\,pc]{pla+11}, and the Pleiades \citep[][135\,Myr, 150\,pc]{per+05}. These clusters will enable us to probe the transition from PMS to MS and to study stellar evolution in a lower density environment. $\alpha$\,Per and Blanco\,1 in particular represent a critical age in rotational evolution, and currently have very little rotation period data. Although some are a little more distant than the youngest targets, most are still spatially extended. All those in the Southern hemisphere are in the GES, and the Northern clusters are extremely well studied already, minimising the need for dedicated follow-up observations. Finally, we include the Hyades \citep[][625\,Myr, 47\,pc]{per+98} and Praesepe \citep[][650\,Myr, 150\,pc]{kra+07} as `cornerstone' clusters, where FGK stars have all arrived on the MS, while earlier type stars are already starting to evolve. The Hyades in particular are so near to the solar system, and so extended, that Kepler really is the only instrument capable of monitoring them in their entirety. 
While there are hundreds to thousands of members in each cluster down to $V\sim17$ (approximately corresponding to the faint limit of Kepler in nominal mode), there is a case for pushing to fainter magnitudes, in order to probe low-mass stars and brown dwarfs. If it is feasible, a faint cutoff at $V\sim19$ would increase the number of cluster members monitored by a factor $\sim 3$ and take us to the hydrogen burning mass limit for almost all target clusters (well into the substellar regime for the nearest / youngest). \section{Science drivers} \subsection{Pulsations} \begin{figure} \centering \includegraphics[height=4cm]{gammaDor.png} \hfill \includegraphics[height=4cm]{deltaScu.png} \caption{CoRoT light curves of PMS pulsators in NGC\,2264. Left: the $\gamma$\,Doradus star NGC\,2264~VAS\,20 \protect\citep{zwi+13a}. Right: 1-day section of the light curve of $\delta$\,Scuti star HD\,261711 \protect\citep{zwi+13b}.} \label{fig:puls} \end{figure} Asteroseismology offers a unique window into the interiors of stars, as amply demonstrated by the very successful asteroseismology program carried out during the nominal Kepler mission. The reduced photometric performance expected in two-wheel mode is very likely to preclude the study of very low-amplitude, Sun-like pulsations, but it should still be possible to study slower, larger amplitude `classical' pulsators. It is particularly interesting to study pulsations in young stars, which are evolving rapidly compared to their older counterparts, and can thus provide particularly stringent tests of models of stellar evolution. Young stars with masses between 1.5 and 5\,$M_\odot$ cross the instability strip in the HR diagram during their evolution towards the zero-age main sequence, and can thus display $\delta$\,Scuti-like pulsations. 
Theory also predicts the existence of other kinds of pulsators among B to F type PMS and early MS stars, including $\gamma$\,Doradus stars \citep{bou+11} and slowly pulsating B (SPB) stars. Importantly, the interior structure of young stars in this mass range differs significantly from that of their evolved counterparts located in the same region of the HR diagram, whereas their atmospheric properties (and hence colours and spectra) are quite similar, so asteroseismology uniquely constrains the evolutionary stage of such stars. Asteroseismology is also, of course, an important test of theoretical models of the interiors of PMS stars, particularly when focussing on young open clusters, where all the stars formed from the same birth cloud. For example, the joint seismic analysis of the 6 pulsators known (at that time) in the 3\,Myr old star forming region NGC\,2264 by \citet{gue+09} highlighted some important discrepancies with theoretical predictions. Pulsating PMS stars have spectral types from B to F, periods ranging from 30\,min to 1\,day and amplitudes of $\sim 1$\,mmag or less. To detect and model these pulsations thus requires tight time sampling (5\,min max) over periods well in excess of a day, as well as a photometric precision of order 1\,mmag, which is difficult to achieve from the ground. As a result, much of what we know about these young pulsators so far has come from a handful of objects located in NGC\,2264, which has been observed repeatedly by the MOST and CoRoT satellites, and from dedicated observations of Herbig Ae field stars with the MOST space telescope. 
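These sampling and precision requirements can be illustrated with a quick simulation (entirely our own construction, not real data: a hypothetical 2-hour, 1-mmag sinusoidal pulsation with 1-mmag white noise, sampled at 5-minute cadence over a 20-day baseline of our choosing), which confirms that a standard Lomb-Scargle periodogram comfortably recovers such a signal:

```python
import numpy as np
from scipy.signal import lombscargle

# Simulate a delta Scuti-like pulsation: 2 h period, 1 mmag amplitude,
# 5 min cadence over 20 days (illustrative values taken from the
# requirements quoted in the text).
rng = np.random.default_rng(42)
cadence_min = 5.0
t = np.arange(0.0, 20.0, cadence_min / (24 * 60))    # time in days
period_d = 2.0 / 24.0                                # 2-hour period
amp_mmag = 1.0
y = amp_mmag * np.sin(2 * np.pi * t / period_d)
y += rng.normal(0.0, 1.0, t.size)                    # 1 mmag white noise

# Lomb-Scargle periodogram over the 30 min -- 1 day period range
periods = np.linspace(30.0 / (24 * 60), 1.0, 5000)   # days
ang_freqs = 2 * np.pi / periods
power = lombscargle(t, y - y.mean(), ang_freqs, normalize=True)

best = periods[np.argmax(power)]
print(f"recovered period: {best * 24:.2f} h (injected: {period_d * 24:.2f} h)")
```

Even with a signal-to-noise ratio of unity per point, the $\sim$5\,760 samples accumulated over 20 days make the detection unambiguous.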
Notable achievements resulting from these observations include: \begin{packed_item} \item the detection of tens of new PMS $\delta$\,Scuti stars, almost doubling the total number (Zwintz, priv.\ comm.; see Fig.~\ref{fig:puls}, right panel); \item constraining the evolutionary state of a star from its pulsation frequencies \citep{gue+07}; \item showing that granulation in the stars' thin convective envelopes might be responsible for the high numbers of low-amplitude frequencies observed \citep{zwi+11}; \item the detection of the first PMS star showing hybrid $\delta$\,Scuti-$\gamma$\,Doradus pulsations \citep{rip+11}; \item the detection of the first PMS SPB candidates \citep{gru+12}. B-stars have short PMS lifetimes, so these objects will enable us to study the transition from PMS to MS, i.e.\ from gravitational contraction to the onset of hydrogen core burning; \item the detection of the first PMS $\gamma$\,Doradus candidates \citep[][see Fig.~\ref{fig:puls}, left panel]{zwi+13b}. These have similar frequencies to SPB stars, so distinguishing between the two requires a precise estimate of the effective temperature. The CoRoT observations of NGC\,2264 were the first to have the precision and time coverage to enable this. \end{packed_item} Observing a handful of young clusters and associations with Kepler would enable us to identify and study these different kinds of pulsators at a range of ages. We expect to find between 15 and 20 pulsators in each cluster. \subsection{Accretion, rotation and activity} \begin{figure} \centering \includegraphics[height=5cm]{rotation.png} \hfill \includegraphics[height=5cm]{ctts.png} \vspace{1mm} \caption{Left: Compilation of rotation period measurements for young open clusters with a range of angular momentum evolution models \protect\citep{gal+13}. 
Right: Example light curves of spectroscopically identified T Tauri stars observed by CoRoT in NGC\,2264 \protect\citep{ale+10}, showing spot-like, AA Tau-like and irregular variability ($1^{\rm st}$, $2^{\rm nd}$ and $3^{\rm rd}$ column, respectively).} \label{fig:rot_ctts} \end{figure} The angular momentum evolution of young stars results from a trade-off between competing effects: contraction onto the main sequence and the associated spin-up, star-disk interaction (disk-locking), angular momentum loss via a magnetised wind, and internal re-distribution of angular momentum as the structure of the star evolves. The past decade has seen a large increase in the number of rotation period measurements available for PMS and early MS stars, but theoretical models still struggle to reproduce all the available data (see Fig.~\ref{fig:rot_ctts}, left panel). While most well-studied young clusters and associations have been the subject of rotation period searches from the ground, these have typically focussed on the denser areas, and their period sensitivity is far from uniform. Monitoring selected clusters for a few weeks each would enable us to complete the census. Typical rotation periods for young stars range from 1 to 20 days \citep{irw+09b}, and modulation amplitudes from 0.5 to a few \%, so the main requirement placed by this part of the science case is the duration of the observations of each region (at least 20 days, ideally up to 40). Such observations would also provide an exciting window into the relationship between the amplitude and period of starspot-related variability, and -- for the younger associations -- between the latter and accretion-related variability. The CoRoT observations of NGC\,2264 have shown that about 20\% of the classical T Tauri (CTTS) stars in the cluster display light curves that can be attributed to periodic obscuration of the photosphere by the inner region of the circumstellar disk (see Fig.~\ref{fig:rot_ctts}, right panel). 
These CTTSs, called AA Tau-like due to their resemblance to the well studied AA Tau system \citep{bou+99}, offer the unique opportunity to study the properties of the inner disk region, located at only a few stellar radii from the star. This also allows the analysis of the dynamical star-disk interaction. \subsection{Eclipsing binaries and transits} \begin{figure} \centering \includegraphics[width=0.65\linewidth]{mrrel.png} \caption{Mass-radius relation for low-mass stars. The lines show, from top to bottom, the theoretical isochrones of \protect\citet{bar+98}. The black points show measurements for stars with masses $< 1.5\,M_{\odot}$ in detached EBs (data from {\tt http://www.astro.keele.ac.uk/$\sim$jkt/debdata/debs.html}), with one of the new systems discovered by CoRoT in NGC\,2264 shown in red \citep{gil+13}. Note the improvement in mass and radius determination compared to other PMS systems, due mainly to the precise and continuous space-based light curve.} \label{fig:mrrel} \end{figure} Detached, double-lined eclipsing binaries (EBs) are extremely valuable objects because their masses, radii, effective temperatures and luminosities can be determined in a model-independent manner from the light and radial velocity curves of the system. When these reach a precision of a few percent or less, they provide one of the most powerful tests of stellar evolution models available \citep{and91,tor+10}. As these models underpin most of astrophysics, it is vital that they are tested as rigorously as possible. The two components of a given EB can generally be assumed to share the same age and metallicity, which adds to the tightness of the constraints. Figure \ref{fig:mrrel} shows the existing mass and radius measurements for low-mass stars belonging to detached EBs. While there are now many well-characterised systems on the main sequence, there are very few on the pre-main sequence (PMS). 
Furthermore, even in well-sampled regions of parameter space there are significant discrepancies between theory and observations, as models tend to under-predict the radii (or equivalently, over-predict the temperatures) of low-mass stars. Detecting new, young, low-mass EB systems and characterising them in detail therefore remains a very important goal. The light curve shown in Figure~\ref{fig:1039} was obtained by CoRoT for a system with $V=16.8$, and illustrates the kind of precision one can expect to achieve even in the worst case scenario with Kepler in two-wheel mode. As shown by the red points in Figure~\ref{fig:mrrel}, this is sufficient (given suitable follow-up spectroscopic observations to measure the orbit of the system) to extract useful constraints on the masses and radii of both components. The continuous sampling achievable from space also significantly enhances the sensitivity to moderate period (8--20\,days) EBs, enabling us to test the impact of varying degrees of mutual interactions between the two stars on their evolution \citep{cha+07,cou+11,irw+11}. Based on experience from the Monitor project and the CoRoT observations of NGC\,2264, we expect to discover around 10 new eclipsing binaries in each cluster. Another very exciting prospect is the possible detection of transiting giant planets on short-period ($<10$\,days) orbits. Detecting transiting planets in open clusters has proved very difficult from the ground, not least because the targets tend to be relatively distant, and thus faint, making spectroscopic follow-up very expensive. The PTF-Orion project has nonetheless shown that it is possible \citep{van+12}. Furthermore, the radial velocity detection by \citet{qui+12} of 2 planets in a sample of 53 stars monitored in Praesepe indicates that hot Jupiters are at least as common in young open clusters as in the field. 
Based on calculations similar to those performed for Monitor \citep{aig+07}, we expect between 0 and 2 transiting planets to be detectable in each cluster; the exact number is very sensitive to the photometric precision (not yet known) and time coverage. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{1039LC.png} \caption{Light curve of CoRoT 223992193, a low-mass EB discovered by CoRoT in NGC\,2264 \citep{gil+13}. The out-of-eclipse (OOE) variability is modelled using a Gaussian process model (red line).} \label{fig:1039} \end{figure} \section{Synergy with other projects} Aside from the aforementioned CoRoT and MOST observations, the proposed program is similar to the Monitor project \citep{aig+07}, but would surpass it significantly in terms of field-of-view, precision and sampling. There is also some overlap with the science goals of the YSOVAR project ({\tt http://ysovar.ipac.caltech.edu/}), which monitored a number of star forming regions with warm Spitzer. The wavelength range is clearly complementary, and again Kepler observations would provide improved sampling over a much wider field of view. Re-observing some of the targets already observed by Monitor and YSOVAR will be valuable per se, for example to chart evolution in disk and star-spot activity in individual stars on multi-year timescales. Importantly, the proposed observations will benefit from a very natural synergy with the Gaia-ESO Public Spectroscopic Survey (GES, PIs G.~Gilmore and S.~Randich, \citealt{gil+12}), a large homogeneous survey of the distributions of kinematics and chemical element abundances in the Galaxy, designed to complement the astrometric, photometric and low-resolution spectroscopic data that GAIA itself will provide. The GES will obtain spectra of $\sim 100\,000$ stars with VLT/FLAMES, focussing on well-defined samples of Milky Way field and cluster stars. 
The cluster component of GES will cover 80--90 clusters spanning a wide range of age, richness, mass, composition, morphology, etc, ranging from the closest associations to massive clusters at a few kpc from the Sun, with ages from 1\,Myr to 10\,Gyr. For each cluster, an unbiased sample of stars is selected in order to derive, together with the Gaia data, accurate distances, 3-D spatial distributions and motions. It will also provide precise radial velocities for each star observed, and hence unbiased (w.r.t.\ activity) estimates of membership probability, as well as the mass, age, abundance, binarity, lithium abundance, and $v \sin i$ of most members. For the youngest clusters, GES will discriminate between classical and weak-lined T Tauri stars (CTTS/WTTS), and thus between active and inert disks. All of this information will be crucial to interpret the light curves provided by Kepler. The GES observations started in January 2012 and $\sim 15$ clusters have already been observed, including several of the nearest young clusters included in this proposal. If the Kepler observations proposed here go ahead, a detailed plan and formal agreement with the GES management team will be put in place to make the most of the synergy, but we note that all GES data are public in any case. The GAIA mission itself is due for launch in late 2013 and will, over the course of a 5 year mission, provide micro-arcsec astrometry, multi-epoch (70 epochs average, 200 max.) millimag photometry, low-res (20--50) spectra and, for the brighter objects ($V<15$), moderate resolution spectra and km/s radial velocities. It will deliver proper motions with $\sim 20\,\mu$as accuracy down to $V\sim15$, and $\sim 200\,\mu$as accuracy down to $V\sim20$, as well as sub-milliarcsecond parallaxes down to $V\sim20$\footnote{GAIA performance data from {\tt http://www.rssd.esa.int/index.php?page=Science\_Performance\&project=GAIA}.}. 
This will enable the derivation of model-independent (kinematic trace-back) ages for young clusters and associations, and distances with relative precision better than 10\% down to the brown dwarf regime for the nearest clusters in this proposal. GAIA will also yield orbital solutions for multiple systems, including binary stars and gas giant planets at a few AU from their host star. The first GAIA positions will become available 2 years after launch and the first parallaxes 6 months after that, while the full dataset will be delivered at the end of the mission. At the preparation stage, preliminary results from GES (and GAIA, depending on the timing) will make the definition of photometric windows for individual cluster members to be observed by Kepler extremely efficient. The detailed characterisation of the clusters and their members provided by GAIA and GES data will also, of course, facilitate the interpretation of the Kepler time-series. GAIA will also help in other ways: for example, its photometry will extend the period sensitivity of the Kepler observations; the GES and GAIA RVs will provide orbital solutions for the EBs identified by Kepler; and the GAIA astrometric data will complement the EB sample at wide separations. Finally, we note that many of the stars which we propose to monitor would be prime targets for follow-up observations with the James Webb Space Telescope (JWST), due to their youth and proximity. \section{Proposed observations} \subsection{Expected photometric performance} The reduced pointing performance of Kepler in two-wheel mode is expected to affect the photometric performance significantly. Based on simulations performed to date, the call for white papers forecasts a photometric precision of about 0.5--1\,mmag per 1 minute integration for a $V=12$ star (cf.\ 30\,ppm during the nominal mission), but warns that pixel sensitivity variations may limit the overall relative photometry to 0.3--1\,\%. 
We take these two extremes as our best and worst case scenarios, respectively. The best-case scenario might be attained by implementing novel methods for calibrating the pixel sensitivity variations, extracting photometry from trailed images, and/or disentangling the systematic effects from the intrinsic variability of each star in the light curve itself (see Section~\ref{sec:phot}). Since the decrease in precision in two- compared to three-wheel mode is due to systematics, we do not expect the faint-end performance to be affected significantly, so that $\sim 1\%$ photometry could be achieved down to Kepler magnitudes of $\sim 19$ (extrapolated from \citealt{jen+10}). \subsection{Observing strategy} The projected lifetime of Kepler in two-wheel mode is 1 to 2 years. In a one-year program, we would be able to monitor 8--12 clusters for 4 to 6 weeks each. The main driver for the duration of the observations is sensitivity to longer rotation periods and longer period EBs, but the duration will also affect -- for example -- the precision of the asteroseismic analysis. Given two years, we could extend the target set, or the duration of each run (increasing sensitivity to long-period rotators and EBs), or return to some of the clusters observed in year 1 to probe long-term evolution of the variability properties. Even if the time available is much more restricted, so that only a few clusters can be observed, this will already represent a many-fold increase in the number and range of stars monitored in this way. Target lists for each cluster will be constructed by collating all the membership information available in the literature and from the GES and GAIA. The number of known members in each of the clusters listed in the previous section ranges from a few hundred to several thousand. Photometric apertures will be defined so as to follow the trail of each star, and will be allocated in priority to known members of the cluster. 
The remaining telemetry can be used to observe other targets in the same fields. In the denser, central regions of some clusters, it may be advantageous to download contiguous sections of the detector by collating multiple apertures. The standard 30 min cadence is acceptable for some of the science goals discussed above (rotation, activity, EBs and transits), but some require a cadence of $\sim 5$\,min or better (pulsations, rapid variability in T Tauri stars). Whether a subset of the targets is selected to be observed at the standard short cadence (1\,min), or a different combination of exposure times is adopted, it is clear that the time sampling requirements of this program are not expected to be problematic. While observing any given region once would already represent a significant advance, if the possibility arises, it would also be interesting to revisit one or more of the targets after one or more years, as done with CoRoT for NGC\,2264, to track secular changes in the different types of variability being studied. As discussed previously, a lot of information is already available about the properties of the target regions and their members. Nonetheless, if this program goes ahead, we will also seek to organise simultaneous monitoring campaigns with other ground- and space-based facilities (spanning complementary wavelength ranges), as we have done in the past for NGC\,2264. \section{Possible strategies for optimising photometric performance} \label{sec:phot} The reduced pointing performance affects the photometric performance in two ways: through pixel sensitivity variations (the star samples many more pixels during an observation, each of which may have a slightly different sensitivity) and because the images will become trailed for any integrations longer than about 5 min. 
\subsection{Modelling inter- and intra-pixel sensitivity variations} The best way to reduce the impact of inter- and intra-pixel variations may be to devise a novel way of calibrating them prior to the observations. In the absence of such a development, however, it might be possible to calibrate them, on a star-by-star basis, from the pixel time-series themselves. Below we outline a simple model for doing this. This model relies on a number of simplifying assumptions, some of which may well be excessively naive, but we merely suggest it here as an idea. We have not had the opportunity to implement and test it yet, but we would be interested in working with the science office to do so, if the opportunity arises. Consider one star whose flux and position on the detector at time $t$ are given by $S(t)$, $x_0(t)$ and $y_0(t)$, respectively. The ultimate quantity of interest is $S$, which is not known \emph{a priori}. On the other hand, it is reasonable to assume that $x_0$ and $y_0$ are known (from individual centroid measurements and/or global modelling of the satellite pointing). The spatial distribution of the flux on the detector is defined by the point-spread function, $P(\delta x,\delta y)$, where $\delta x$ and $\delta y$ are the departures from the star's nominal position in the $x$ and $y$ directions, respectively. For now we assume that the point-spread function for a given star is constant in time -- we address the time-dependence of the PSF introduced by the pointing drift later. Again, it is reasonable to assume that the PSF is well-known, or at least that it can be reduced to a known function with a small number of free parameters. 
The flux recorded during the $k^{\rm th}$ integration by the $(i,j)^{\rm th}$ pixel is then \begin{equation} \label{eq1} F_{ijk} = S(t_k) \, R_{ij} \, \int_{x=i-0.5}^{x=i+0.5} \int_{y=j-0.5}^{y=j+0.5} D(x-i,y-j) \, P(x-x_0(t_k),y-y_0(t_k)) \, {\rm d}x \, {\rm d}y, \end{equation} where $R_{ij}$ is the (unknown) peak sensitivity of the $(i,j)^{\rm th}$ pixel, which we assume to be constant in time, and $D(\delta x,\delta y)$ represents the relative intra-pixel sensitivity variations, where $D=1$ for $\delta x = \delta y = 0$. Once more, we have assumed that $D$ is independent of time, that it is the same for all pixels, and that it can be described by a simple parametric function (in the simplest extreme, $D=1$ everywhere). If there are $N$ observations spanning $M$ pixels, i.e.\ $N \times M$ data points in the entire time-series, the above model has $N+M+K$ free parameters, where $K$ is the number of parameters associated with the functions $P$ and $D$, and is assumed to be small. In practice, the effective number of data points will be smaller, as only $M'<M$ pixels will contribute significantly to the PSF at any given time. On the other hand, as the pointing of the satellite will be reset periodically, the same pixels will be sampled multiple times. Therefore, overall the problem should still be well constrained. Where appropriate, additional leverage may also be gained by placing certain restrictions on the form of $S$ (e.g. by constraining it to vary smoothly, quasi-periodically, etc\ldots), if the star in question is a known type of variable, for example. The practical implementation of this model will be challenging, due to the large number of parameters. However, we do expect it to be feasible, for example using advanced Markov Chain Monte Carlo sampling methods \citep[see e.g.][]{for+13} specifically designed to explore large and complex parameter spaces. 
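As a concreteness check, this forward model can be sketched numerically. The snippet below is a toy sketch only: the circular Gaussian PSF, the flat intra-pixel response ($D=1$) and the $\sim$1\% random inter-pixel sensitivities are all illustrative assumptions, not calibrated Kepler values. It evaluates the pixel fluxes for a single integration and verifies that the recorded flux sums to the stellar flux:

```python
import numpy as np

# Numerical sketch of the pixel-flux model: flux in pixel (i, j) for a
# star of flux S at (x0, y0), Gaussian PSF of width sigma (assumption),
# flat intra-pixel response D = 1, and per-pixel sensitivities R_ij.
def pixel_flux(S, x0, y0, R, sigma=1.2, nsub=11):
    ni, nj = R.shape
    F = np.zeros_like(R, dtype=float)
    # mid-point sub-sampling of the integral over each unit pixel
    offs = (np.arange(nsub) + 0.5) / nsub - 0.5
    for i in range(ni):
        for j in range(nj):
            xx, yy = np.meshgrid(i + offs, j + offs, indexing="ij")
            psf = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma**2))
            psf /= 2 * np.pi * sigma**2
            # mean over the sub-grid times pixel area (= 1) approximates
            # the double integral in the model
            F[i, j] = S * R[i, j] * psf.mean()
    return F

rng = np.random.default_rng(0)
R = 1.0 + 0.01 * rng.normal(size=(15, 15))   # ~1% inter-pixel variations
F = pixel_flux(S=1.0, x0=7.2, y0=6.8, R=R)
print(f"captured flux fraction: {F.sum():.3f}")
```

Because the PSF is fully contained in the $15\times15$ stamp, the pixel fluxes sum to the stellar flux to within the sensitivity variations, as the model requires.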
Whatever the inference method used, this will be a computationally intensive process, and it may be that this approach could be applied only to specific objects where attaining maximum precision is particularly important. \paragraph{Moving stars and trailed images} In practice, the pointing drift will cause the PSF to be elongated for any integration lasting more than a few minutes. It will also mean that the position of a star can change significantly, which may alter the PSF. To address this we replace equation~\ref{eq1} with the slightly more complex expression: \begin{equation} \label{eq2} F_{ijk} = \int_{t = t_k}^{t=t_k+\delta t} S(t_k) \, R_{ij} \, \int_{x=i-0.5}^{x=i+0.5} \int_{y=j-0.5}^{y=j+0.5} D(x-i,y-j) \, P(x-x_0(t),y-y_0(t),x_0(t),y_0(t)) \, {\rm d}x \, {\rm d}y \, {\rm d}t, \end{equation} where $P$ is now a function of the instantaneous position of the star as well as the departure from this position. \subsection{\emph{A posteriori} correction of systematic effects} In this section we discuss potential strategies for mitigating the effects of any instrumental systematics, which are not calibrated out at the light curve extraction stage. During the nominal mission, the light curves extracted and calibrated by the standard pipeline displayed systematic effects, which were corrected in part by the pre-search data conditioning (PDC) step, albeit often at the expense of the intrinsic variability (other than transits). \begin{figure} \centering \includegraphics[width=0.48\linewidth]{arc_ex1.png} \hfill \includegraphics[width=0.48\linewidth]{arc_ex2.png} \caption{Two-step systematics correction for two representative examples from quarters 2 (left) and 3 (right). Top: raw data (black) and correction applied for discontinuities (`REC', red). Middle: REC-corrected data (black) and correction applied for common-mode systematic trends (ARC, red). 
Bottom: ARC-corrected data.} \label{fig:sys} \end{figure} To address this problem, we have adopted a two-step approach, which is still under development but is giving good results. We correct common-mode systematic trends by modelling each light curve in turn as a linear combination of all the other light curves, and then applying a statistical entropy criterion to ensure that any trends identified in this manner are genuinely systematic \citep{rob+13}. This algorithm, which we call `ARC' (Astrophysically Robust Correction of systematic trends), uses a Bayesian approach with shrinkage priors to avoid overfitting, which we implement within a variational inference framework to ensure computational efficiency. On the other hand, some instrumental effects, in particular the discontinuities and thermal decays associated with monthly data download events, are present in all the light curves, but cannot be represented adequately by a linear basis model such as the one used by the ARC. We model these on a star-by-star basis, postulating a functional form for the systematic effect, and modelling it at the same time as the stellar variability itself, which we treat as a Gaussian process (GP, a very flexible, yet robust class of models, where functions are parametrised indirectly through their covariance properties). We refer to this as fault rectification (REC), and apply it before the ARC (see Figure~\ref{fig:sys} for examples). In two-wheel mode, we anticipate that common-mode systematics will be less widespread, as inter-pixel sensitivity variations (rather than global effects such as focus changes associated with the thermal relaxation of the satellite) are expected to dominate the systematics budget. Therefore, it may not be possible to describe any systematic effects which make it through to the light curves as a linear combination of a small number of basis trends common to many light curves, which is the basis of the ARC algorithm. 
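The linear-basis idea underlying the ARC can be illustrated with a deliberately simplified sketch on synthetic light curves. This uses plain least squares only; the entropy criterion, shrinkage priors and variational inference of the published algorithm are not implemented here:

```python
import numpy as np

rng = np.random.default_rng(1)
n_stars, n_obs = 50, 500
t = np.linspace(0, 30, n_obs)

# Common-mode systematic trend shared (with random amplitudes) by all
# stars, plus independent sinusoidal "astrophysical" signals and noise.
trend = 0.5 * np.sin(2 * np.pi * t / 30) + 0.3 * t / 30
amps = rng.normal(1.0, 0.2, n_stars)
signals = np.array([0.1 * np.sin(2 * np.pi * t / rng.uniform(2, 10))
                    for _ in range(n_stars)])
lcs = amps[:, None] * trend + signals + rng.normal(0, 0.02, (n_stars, n_obs))

# Simplified common-mode removal: fit star 0 as a linear combination of
# all the *other* light curves, then subtract the fit.
basis = lcs[1:].T                                    # (n_obs, n_stars - 1)
coeff, *_ = np.linalg.lstsq(basis, lcs[0], rcond=None)
corrected = lcs[0] - basis @ coeff

print(f"rms before: {lcs[0].std():.3f}, after: {corrected.std():.3f}")
```

The shared trend is strongly suppressed; the price of the unregularised fit is that part of the star's own signal can be absorbed as well, which is precisely what the entropy criterion and shrinkage priors of the full algorithm are designed to prevent.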
Indeed, the form of the systematics might well be unique to each light curve. On the other hand, it is also unlikely that we will be able to describe them using a specific functional form, as we have done for the thermal decay events during the nominal mission. However, the systematics are likely to be correlated in some way with the position of the star on the detector. We therefore suggest that it might be possible to model them using a GP, whose inputs are the $x$ and $y$ positions of the star on the detector. This is very similar to the technique developed by our group to treat systematics in Hubble Space Telescope exoplanet transmission spectra observations \citep{gib+12}, which we have also successfully used to model the pixel response function of Spitzer (Evans et al., in prep.). \section{Conclusions} We have shown that Kepler in two-wheel mode could be a very powerful tool to monitor nearby, spatially extended young clusters and associations, thanks to its unique field of view, continuous coverage, and a photometric performance which, while reduced, is still likely to be very good. For example, observing 8--12 clusters for 4--6 weeks each would lead to an order of magnitude increase in the number of known PMS and early MS pulsators, and enable us to chart the evolution of intermediate and low-mass stars onto the main sequence in unprecedented detail. It would enable us to probe the full diversity of accretion- and activity-induced variability (right into the brown dwarf regime for some clusters), and complete the census of rotation periods in some of the nearest and best studied young associations. Finally, it will also lead to an order of magnitude increase in the number of well-characterised, young eclipsing binaries, and may lead to the detection of a few young transiting planets. We have shown that there are valuable synergies between the proposed observations and the GAIA-ESO survey, as well as the GAIA space mission itself. 
Finally, we have also outlined possible strategies for optimising the photometric performance of Kepler in two-wheel mode, which could be used for a wide range of observing programs.
\section{Introduction}\label{sec:intro} Riemannian optimization is a powerful method to tackle a general class of optimization problems with geometric constraints so that the solution is constrained to a Riemannian manifold. One key component in Riemannian optimization is the Riemannian geometry. Over the past decades, numerous Riemannian geometries, including different manifold classes, quotient manifold structures and Riemannian metrics, have been proposed in various problems for having either better geometric properties or faster algorithmic convergence \cite{edelman1998geometry,vandereycken2013riemannian,mishra2016riemannian,mishra2014fixed,bonnabel2010riemannian,absil2009geometric,massart2020quotient,mishra2012riemannian}. In the literature, two popular choices of manifold classes in Riemannian optimization are the {\it embedded submanifold} and the {\it quotient manifold}. The embedded geometry often allows computing and interpreting the geometric notions more straightforwardly, while optimization methods via the quotient geometry can be more versatile, as the quotient geometry provides more choices of quotient structures and Riemannian metrics. The readers are referred to \cite{absil2009optimization,boumal2020introduction, mishra2014fixed,meyer2011geometric} for surveys on these topics. Riemannian optimization under the embedded and quotient geometries is not obviously related (the two were even commented to be fundamentally different in \cite{journee2010low}), and the geometries are often studied separately in the literature. It is also unclear how to choose between these two geometries in Riemannian optimization. 
On the other hand, a few empirical studies on the algorithmic comparisons between the embedded and the quotient geometries in matrix completion and graph-based clustering problems showed that for either the gradient or trust-region based algorithms, these two geometries perform more or less the same in terms of the total computational time \cite{mishra2014fixed,mishra2012riemannian,douik2021low}. The reason behind this was raised as an open question in \cite{vandereycken2013low}. \cite{vandereycken2010riemannian} hinted that embedded and quotient approaches are probably related as the manifolds under these two geometries are diffeomorphic to each other. However, it remains elusive in the literature how exactly they are connected in specific Riemannian optimization problems from either an algorithmic or a geometric point of view. In this work, we make the first attempt to answer these questions by proposing a general framework to investigate the first-order and second-order geometric landscape connections of the optimization problem under the embedded and the quotient geometries. The first-order geometric connection can often be easily established, and the general procedure for connecting second-order geometries includes three steps: 1. compute the quadratic form of Riemannian Hessians under the two geometries; 2. construct a carefully-designed mapping $\mathcal{L}$ from the horizontal space under the quotient geometry to the tangent space under the embedded geometry at proper reference points to connect Riemannian Hessians; 3. establish the spectra connection between Riemannian Hessians via bounding the spectrum of $\mathcal{L}$.
We then specifically consider the following fixed-rank matrix optimization problems: \begin{equation} \label{eq: PSD-manifold-formulation} \text{PSD case}: \quad \quad \min_{{\mathbf{X}} \in \mathbb{S}^{p \times p} \succcurlyeq 0, {\rm rank}({\mathbf{X}}) = r} f({\mathbf{X}}), \quad 0 < r \leq p, \end{equation} \begin{equation} \label{eq: general prob} \text{general case}: \quad \min_{{\mathbf{X}} \in \mathbb{R}^{p_1 \times p_2}, {\rm rank}({\mathbf{X}}) = r} f({\mathbf{X}}), \quad 0 < r \leq \min\{p_1,p_2\}. \end{equation} In the positive semidefinite (PSD) case, without loss of generality, we assume $f$ is symmetric in ${\mathbf{X}}$, i.e., $f({\mathbf{X}}) = f({\mathbf{X}}^\top)$; otherwise, we can set $\tilde{f}({\mathbf{X}}) = \frac{1}{2}(f({\mathbf{X}}) + f({\mathbf{X}}^\top))$ and have $\tilde{f}({\mathbf{X}}) = f({\mathbf{X}})$ for all ${\mathbf{X}} \succcurlyeq 0$ without changing the problem \cite{bhojanapalli2016dropping}. In both cases we assume $f$ is twice continuously differentiable with respect to ${\mathbf{X}}$ and the Euclidean metric. Both embedded and quotient geometries have been studied for the sets of fixed-rank matrices and many algorithms have been proposed for \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob} on each individual Riemannian geometry. See Section \ref{sec: related-literature} for a review of existing results. By applying the general procedure, we establish the geometric connections of \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob} under the embedded and a variety of quotient geometries (Theorems \ref{th: embedded-quotient-connection-PSD1}--\ref{th: embedded-quotient-connection-general3}, Corollaries \ref{coro: landscape connection PSD}, \ref{coro: landscape connection general case}) informally summarized as follows. \begin{Theorem}[Informal results] Consider optimization problems \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob} on fixed-rank PSD and general matrix manifolds. 
\begin{itemize}[leftmargin=*] \item There exists an equivalence relation on the sets of Riemannian first-order stationary points (FOSPs), Riemannian second-order stationary points (SOSPs) and strict saddles of \eqref{eq: PSD-manifold-formulation} or \eqref{eq: general prob} under embedded and quotient geometries. \item The spectra of Riemannian Hessians of \eqref{eq: PSD-manifold-formulation} or \eqref{eq: general prob} under the two geometries are sandwiched by each other at Riemannian FOSPs. \end{itemize} \end{Theorem} To the best of our knowledge, this is the first geometric landscape connection between the embedded and the quotient geometries for fixed-rank matrix optimization. In addition, the effects of the Riemannian metric and the quotient structure on the landscape connection are discussed. We also observe an algorithmic connection of \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob} under the embedded and the quotient geometries with some specific Riemannian metrics. In a broad sense, embedded and quotient geometries are the two most common choices in Riemannian optimization. It is known that the manifolds under the two geometries are diffeomorphic to each other; however, it is unclear how the geometry-dependent key concepts of optimization problems are related. This paper bridges them from a geometric point of view and illustrates explicitly how they are connected in solving fixed-rank matrix optimization problems. \subsection{Related Literature} \label{sec: related-literature} This work is related to a range of literature on low-rank matrix optimization, Riemannian/nonconvex optimization, and geometric landscape analysis of an optimization problem. First, choosing a proper Riemannian geometry is undoubtedly a central topic in Riemannian optimization and numerous geometries have been proposed under different considerations.
For example, \cite{vandereycken2013riemannian} proposed a homogeneous space geometry on the set of fixed-rank PSD matrices such that the complete geodesics can be obtained. \cite{mishra2016riemannian,mishra2012riemannian} proposed new Riemannian metrics under fixed-rank quotient geometries tailored to objective functions. Different Riemannian manifold structures have also been considered to obtain better geometric properties \cite{bonnabel2010riemannian,absil2014two}. In this work, we focus on the choice between embedded and quotient geometries and study its effect on the corresponding Riemannian optimization problem. Second, for fixed-rank matrix optimization \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob}, a number of Riemannian optimization methods, including (conjugate) gradient descent, (Gauss-)Newton, and trust-region methods, have been developed under either embedded geometry \cite{hou2020fast,luo2020recursive,shalit2012online,vandereycken2013low,vandereycken2010riemannian,wei2016guarantees} or quotient geometry \cite{absil2009geometric,absil2014two,boumal2011rtrmc,edelman1998geometry,huang2018blind,meyer2011linear,meyer2011regression,mishra2014fixed,ngo2012scaled}. We refer readers to \cite{absil2009optimization,boumal2020introduction,cai2018exploiting} for the recent algorithmic development in Riemannian matrix optimization. In addition to Riemannian optimization, a number of other methods including convex relaxation \cite{recht2010guaranteed,cai2013compressed}, non-convex factorization \cite{jain2013low,ma2019implicit,sun2015guaranteed,tu2016low}, projected gradient descent \cite{jain2010guaranteed}, and the penalty method \cite{gao2010majorized} have been proposed to solve \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob} as well. A comparison of these different approaches can be found in \cite{chen2018harnessing,chi2019nonconvex}. Third, a few attempts have been made to analyze the landscape of a Riemannian matrix optimization problem.
For example, \cite{maunu2019well,ahn2021riemannian} provided landscape analyses for robust subspace recovery and matrix factorization over the Grassmannian manifold. Under the embedded geometry, \cite{uschmajew2018critical} showed that the landscape of \eqref{eq: general prob} is benign when $f$ is quadratic and satisfies a certain restricted spectral bounds property. Different from this line of works, which focuses on the landscape of the problem under a single Riemannian geometry when $f$ is well-conditioned, here we study the geometric landscape connections of \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob} under the embedded and the quotient geometries for a general $f$. Finally, there are a few recent studies on the geometric connections of different approaches for rank-constrained optimization. For example, \cite{ha2020equivalence} studied the relationship between Euclidean FOSPs/SOSPs under the factorization formulation and fixed points of the projected gradient descent (PGD) in general low-rank matrix optimization. They showed that while the sets of FOSPs under the factorization formulation can be larger, the sets of SOSPs are contained in the set of fixed points of the PGD with a small stepsize. Another related work is \cite{luo2021nonconvex}, where they studied the landscape connections of the factorization formulation and the Riemannian formulation with embedded geometry for both \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob} and identified the close connections between the landscapes under these two formulations, e.g., the sets of FOSPs and SOSPs under these two formulations are exactly the same when restricted to rank-$r$ matrices. In addition, under the quotient geometry, a general equivalence relation on the sets of FOSPs and SOSPs of objectives on the total space and the quotient space is given in \cite[Section 9.11]{boumal2020introduction}.
Complementary to these results, here we consider the geometric landscape connections of \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob} under two Riemannian formulations with different geometries, i.e., embedded and quotient geometries. \subsection{Organization of the Paper}\label{sec: organization} The rest of this article is organized as follows. After a brief introduction of notation in Section \ref{sec: notation} and preliminaries for Riemannian optimization on embedded and quotient manifolds in Section \ref{sec: Riemannian-opt-background}, we introduce a general procedure for establishing landscape connections of an optimization problem under two Riemannian geometries in Section \ref{sec: general-strategy-for-connection}. Then we discuss the embedded and quotient geometries for fixed-rank matrix manifolds in Section \ref{sec: embedded-quotient-fixed-rank-matrix}. By applying the general procedure, our main results on the geometric landscape connections of \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob} under the embedded and the quotient geometries are presented in Sections \ref{sec: connection-PSD} and \ref{sec: connection-general}, respectively. Conclusion and future work are given in Section \ref{sec: conclusion}. The proofs of the main results are provided in the main body of the paper. Additional preliminaries, proofs and lemmas are collected in Appendices \ref{sec: additional-preliminaries}--\ref{sec: additional-lemmas}, respectively. \subsection{Notation and Preliminaries} \label{sec: notation} The following notation will be used throughout this article. For any positive integer $p$, denote $[p]=\{1,\ldots, p\}$.
We use $\mathbb{R}^{p_1 \times p_2}$, $\mathbb{S}^{p \times p}$, $\mathbb{S}_+(r)$, $\mathbb{R}^{p \times r}_*$ and ${\rm St}(r,p)$ to denote the spaces of $p_1$-by-$p_2$ real matrices, $p$-by-$p$ real symmetric matrices, $r$-by-$r$ real symmetric positive definite matrices, $p$-by-$r$ real full column rank matrices, and $p$-by-$r$ real matrices with orthonormal columns, respectively. We also let $\mathbb{O}_r$ be the set of $r$-by-$r$ orthogonal matrices, i.e., $\mathbb{O}_r = {\rm St}(r,r)$. Uppercase and lowercase letters (e.g., $A, B, a, b$), lowercase boldface letters (e.g., $\u, \v$), and uppercase boldface letters (e.g., ${\mathbf{U}}, {\mathbf{V}}$) are often used to denote scalars, vectors, and matrices, respectively. For any $a, b \in \mathbb{R}$, let $a \wedge b := \min\{a,b\}, a \vee b := \max\{a,b\}$. For any matrix ${\mathbf{X}} \in \mathbb{R}^{p_1\times p_2}$ with singular value decomposition (SVD) $\sum_{i=1}^{p_1 \land p_2} \sigma_i({\mathbf{X}})\u_i \v_i^\top$, where $\sigma_1({\mathbf{X}}) \geq \sigma_2({\mathbf{X}}) \geq \cdots \geq \sigma_{p_1 \wedge p_2} ({\mathbf{X}})$, denote its Frobenius norm and spectral norm as $\|{\mathbf{X}}\|_{\rm F} = \sqrt{\sum_{i} \sigma^2_i({\mathbf{X}})}$ and $\|{\mathbf{X}}\| = \sigma_1({\mathbf{X}})$, respectively. Also, denote ${\mathbf{X}}^{-1}$, ${\mathbf{X}}^{-\top}$ and ${\mathbf{X}}^\dagger$ as the inverse, transpose inverse, and Moore-Penrose inverse of ${\mathbf{X}}$, respectively. For any ${\mathbf{X}} \in \mathbb{R}^{p \times p}$, let ${\rm Sym}({\mathbf{X}}) = ({\mathbf{X}} + {\mathbf{X}}^\top)/2$, $\skew({\mathbf{X}}) =({\mathbf{X}} - {\mathbf{X}}^\top)/2$, and ${\rm tr}({\mathbf{X}})$ be the symmetric part, skew-symmetric part, and the trace of ${\mathbf{X}}$, respectively.
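Much of this matrix notation can be checked numerically. The sketch below (our own illustration with randomly generated matrices, not part of the paper) verifies the symmetric/skew-symmetric decomposition and the singular-value expressions of the Frobenius and spectral norms:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))

# Symmetric / skew-symmetric decomposition: X = Sym(X) + Skew(X).
sym = (X + X.T) / 2
skew = (X - X.T) / 2
assert np.allclose(sym + skew, X)

# Frobenius and spectral norms expressed through the singular values.
s = np.linalg.svd(X, compute_uv=False)
assert np.isclose(np.sqrt(np.sum(s**2)), np.linalg.norm(X, "fro"))
assert np.isclose(s[0], np.linalg.norm(X, 2))
```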
For any ${\mathbf{X}} \in \mathbb{S}^{p \times p}$ having eigendecomposition ${\mathbf{U}} \boldsymbol{\Sigma} {\mathbf{U}}^\top$ with non-increasing eigenvalues on the diagonal of $\boldsymbol{\Sigma}$, let $\lambda_i({\mathbf{X}})$ be the $i$-th largest eigenvalue of ${\mathbf{X}}$, $\lambda_{\min}({\mathbf{X}})$ be the least eigenvalue of ${\mathbf{X}}$, and ${\mathbf{X}}^{1/2} = {\mathbf{U}} \boldsymbol{\Sigma}^{1/2} {\mathbf{U}}^\top$. We write ${\mathbf{X}} \succcurlyeq 0$ if ${\mathbf{X}}$ is a symmetric positive semidefinite (PSD) matrix. Throughout the paper, the SVD (or eigendecomposition) of a rank $r$ matrix ${\mathbf{X}}$ (or symmetric matrix ${\mathbf{X}}$) refers to its economic version, and we say a column orthonormal matrix ${\mathbf{U}}'$ spans the top $r$ left singular space or eigenspace of ${\mathbf{X}}$ if ${\mathbf{U}}' = {\mathbf{U}} \O$ for some $\O \in \mathbb{O}_r$, where ${\mathbf{U}}$ is formed by the top $r$ left singular vectors or eigenvectors of ${\mathbf{X}}$. For any ${\mathbf{U}}\in {\rm St}(r,p)$, $P_{{\mathbf{U}}} = {\mathbf{U}}\U^\top$ represents the orthogonal projector onto the column space of ${\mathbf{U}}$; we also write ${\mathbf{U}}_\perp\in {\rm St}(p-r, p)$ for an orthonormal complement of ${\mathbf{U}}$. We use bracket subscripts to denote sub-matrices. For example, ${\mathbf{X}}_{[i_1,i_2]}$ is the entry of ${\mathbf{X}}$ on the $i_1$-th row and $i_2$-th column. In addition, ${\mathbf{I}}_r$ is the $r$-by-$r$ identity matrix. Finally, the dimension of a linear space $\mathcal{V}$ is denoted as $\dim(\mathcal{V})$. For any two linear spaces $\mathcal{V}_1, \mathcal{V}_2$, the sum of $\mathcal{V}_1$ and $\mathcal{V}_2$ is denoted by $\mathcal{V}_1+\mathcal{V}_2 := \{\v_1 + \v_2| \v_1 \in \mathcal{V}_1,\v_2 \in \mathcal{V}_2\}$.
If every vector in $\mathcal{V}_1 + \mathcal{V}_2$ can be uniquely decomposed into $\v_1 + \v_2$, where $\v_1 \in \mathcal{V}_1,\v_2 \in \mathcal{V}_2$, then we call the sum of $\mathcal{V}_1$ and $\mathcal{V}_2$ the direct sum, denoted by $\mathcal{V}_1 \oplus \mathcal{V}_2$. The direct sum satisfies a key property: $\dim(\mathcal{V}_1 \oplus \mathcal{V}_2) = \dim(\mathcal{V}_1) + \dim(\mathcal{V}_2)$. For any two Euclidean spaces $\mathcal{V}_1$ and $\mathcal{V}_2$ endowed with inner product $g(\cdot, \cdot)$, we say $\mathcal{V}_1$ is orthogonal to $\mathcal{V}_2$ with respect to $g$ and write $\mathcal{V}_1 \perp \mathcal{V}_2$ if and only if $g(\v_1, \v_2) =0$ for any $\v_1 \in \mathcal{V}_1,\v_2 \in \mathcal{V}_2$. Suppose $f: \mathbb{R}^{p_1 \times p_2} \to \mathbb{R}$ is a differentiable scalar function and $\phi: \mathbb{R}^{p_1 \times p_2} \to \mathbb{R}^{q_1 \times q_2}$ is a differentiable matrix-valued function. Let the Euclidean gradient of $f$ at ${\mathbf{X}}$ be $\nabla f({\mathbf{X}})$, i.e., $(\nabla f({\mathbf{X}}))_{[i,j]} = \frac{\partial f({\mathbf{X}})}{\partial {\mathbf{X}}_{[i,j]}}$ for $i\in [p_1], j\in [p_2]$. The Euclidean gradient of $\phi$ is a linear operator from $\mathbb{R}^{p_1 \times p_2}$ to $\mathbb{R}^{q_1 \times q_2}$ such that $(\nabla \phi ({\mathbf{X}}) [{\mathbf{Z}}] )_{[i,j]} = \sum_{k \in [p_1],l \in [p_2]} \frac{\partial ( \phi ({\mathbf{X}}) )_{[i,j]} }{\partial {\mathbf{X}}_{[k,l]}} {\mathbf{Z}}_{[k,l]}$ for any ${\mathbf{Z}} \in \mathbb{R}^{p_1 \times p_2}, i \in [q_1], j \in [q_2]$.
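The entrywise definition of the Euclidean gradient can be sanity-checked against finite differences via the identity ${\rm D} f({\mathbf{X}})[{\mathbf{Z}}] = \langle \nabla f({\mathbf{X}}), {\mathbf{Z}} \rangle$. Below is a minimal sketch for the illustrative choice $f({\mathbf{X}}) = \frac{1}{2}\|{\mathbf{X}}{\mathbf{B}} - {\mathbf{C}}\|_{\rm F}^2$ (the objective and matrix names are our own, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
p1, p2, q = 5, 4, 3
B = rng.standard_normal((p2, q))
C = rng.standard_normal((p1, q))

def f(X):
    # f(X) = 0.5 * ||X B - C||_F^2
    return 0.5 * np.linalg.norm(X @ B - C, "fro") ** 2

def grad_f(X):
    # Euclidean gradient (X B - C) B^T, matching the entrywise definition.
    return (X @ B - C) @ B.T

X = rng.standard_normal((p1, p2))
Z = rng.standard_normal((p1, p2))

# Central finite difference of f along Z vs. <grad f(X), Z>.
t = 1e-6
fd = (f(X + t * Z) - f(X - t * Z)) / (2 * t)
inner = np.sum(grad_f(X) * Z)
assert abs(fd - inner) < 1e-5 * max(1.0, abs(inner))
```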
For a twice continuously differentiable function $f$, let $\nabla^2 f({\mathbf{X}})[\cdot]$ be its Euclidean Hessian, which is the gradient of $\nabla f({\mathbf{X}})$ and can be viewed as a linear operator from $\mathbb{R}^{p_1 \times p_2}$ to $\mathbb{R}^{p_1 \times p_2}$ satisfying \begin{equation*} ( \nabla^2 f({\mathbf{X}})[{\mathbf{Z}}] )_{[i,j]} = \sum_{k \in [p_1],l \in [p_2]} \frac{\partial (\nabla f ({\mathbf{X}}) )_{[i,j]} }{\partial {\mathbf{X}}_{[k,l]}} {\mathbf{Z}}_{[k,l]} = \sum_{k \in [p_1],l \in [p_2]} \frac{\partial^2 f ({\mathbf{X}}) }{\partial {\mathbf{X}}_{[k,l]} \partial {\mathbf{X}}_{[i,j]} } {\mathbf{Z}}_{[k,l]}. \end{equation*} Define the bilinear form of the Hessian of $f$ as $\nabla^2 f({\mathbf{X}})[{\mathbf{Z}}_1, {\mathbf{Z}}_2]:= \langle \nabla^2 f({\mathbf{X}})[{\mathbf{Z}}_1], {\mathbf{Z}}_2 \rangle $ for any ${\mathbf{Z}}_1, {\mathbf{Z}}_2 \in \mathbb{R}^{p_1 \times p_2}$, where $\langle \cdot, \cdot \rangle$ is the standard Euclidean inner product. \section{Riemannian Optimization Under Embedded and Quotient Geometries} \label{sec: Riemannian-opt-background} In this section, we first give a brief introduction to Riemannian optimization and then discuss how to perform Riemannian optimization under embedded and quotient geometries. Riemannian optimization concerns optimizing a real-valued function $f$ defined on a Riemannian manifold ${\cal M}$. The readers are referred to \cite{absil2009optimization,boumal2020introduction,hu2020brief} for more details. The calculations of Riemannian gradients and Riemannian Hessians are key ingredients to perform continuous optimization over the Riemannian manifold. Suppose ${\mathbf{X}} \in {\cal M}$, $g_{{\mathbf{X}}}( \cdot, \cdot )$ is the Riemannian metric, and $T_{\mathbf{X}} {\cal M}$ is the tangent space of ${\cal M}$ at ${\mathbf{X}}$. 
Then the Riemannian gradient of a smooth function $f:{\cal M} \to \mathbb{R}$ at ${\mathbf{X}}$ is defined as the unique tangent vector, ${\rm grad}\, f({\mathbf{X}}) \in T_{\mathbf{X}} {\cal M}$, such that $g_{\mathbf{X}}( {\rm grad} \, f({\mathbf{X}}),\xi_{\mathbf{X}} ) = {\rm D} \, f({\mathbf{X}})[\xi_{\mathbf{X}}], \forall\, \xi_{\mathbf{X}} \in T_{\mathbf{X}} {\cal M}$, where ${\rm D} f({\mathbf{X}})[\xi_{\mathbf{X}}]$ is the directional derivative of $f$ at point ${\mathbf{X}}$ along the direction $\xi_{\mathbf{X}}$. The Riemannian Hessian of $f$ at ${\mathbf{X}}\in{\cal M}$ is a linear mapping ${\rm Hess} \,f({\mathbf{X}}): T_{\mathbf{X}}{\cal M} \to T_{\mathbf{X}}{\cal M}$ defined as \begin{equation} \label{def: Riemannain-Hessian} {\rm Hess}\, f({\mathbf{X}})[\xi_{\mathbf{X}}] = \widebar{\nabla}_{\xi_{\mathbf{X}}} {\rm grad}\, f \in T_{\mathbf{X}} {\cal M}, \,\quad \forall \xi_{\mathbf{X}} \in T_{\mathbf{X}} {\cal M}, \end{equation} where $\widebar{\nabla}$ is the {\it Riemannian connection} on ${\cal M}$, which is a generalization of the directional derivative on a vector field to Riemannian manifolds \cite[Section 5.3]{absil2009optimization} (See Appendix \ref{sec: additional-preliminaries} for more details). The bilinear form of Riemannian Hessian is defined as ${\rm Hess} f({\mathbf{X}})[\xi_{\mathbf{X}}, \theta_{\mathbf{X}}]:= g_{\mathbf{X}} ( {\rm Hess} f({\mathbf{X}})[\xi_{\mathbf{X}}], \theta_{\mathbf{X}} )$ for any $\xi_{\mathbf{X}}, \theta_{\mathbf{X}} \in T_{{\mathbf{X}}}{\cal M}$. We say ${\mathbf{X}} \in {\cal M}$ is a Riemannian FOSP of $f$ iff ${\rm grad} f({\mathbf{X}}) = {\mathbf{0}}$ and a Riemannian SOSP of $f$ iff ${\rm grad} f({\mathbf{X}}) = {\mathbf{0}}$ and ${\rm Hess} f({\mathbf{X}}) \succcurlyeq 0$. Moreover, ${\mathbf{X}} \in {\cal M}$ is a local minimizer of $f$ if there exists a neighborhood ${\cal N}$ of ${\mathbf{X}}$ in ${\cal M}$ such that $f({\mathbf{X}}) \leq f({\mathbf{X}}')$ for all ${\mathbf{X}}' \in {\cal N}$. 
Finally, we call a Riemannian FOSP a strict saddle iff the Riemannian Hessian evaluated at this point has a strictly negative eigenvalue. In this work, we mainly focus on two classes of manifolds: embedded submanifold and quotient manifold. The embedded submanifold can be viewed as a generalization of the notion of surface in $\mathbb{R}^{d}$, and the Riemannian gradients and Hessians under the embedded geometry can often be concretely written out, since every geometric object lies in the embedding space. For example, suppose ${\cal M}$ is a Riemannian embedded submanifold of the Riemannian manifold $\widebar{{\cal M}}$ and the objective function $f: {\cal M} \to \mathbb{R}$ is the restriction of $\bar{f}: \widebar{{\cal M}} \to \mathbb{R}$ to the embedded submanifold ${\cal M}$. Then we have the following simple expressions for the Riemannian gradient of $f$ and the Riemannian connection \cite[Eq. (3.37) and Proposition 5.3.2]{absil2009optimization}: ${\rm grad} f({\mathbf{X}}) = P_{T_{\mathbf{X}} {\cal M}}( {\rm grad} \bar{f} ({\mathbf{X}}) ) $ and $\widebar{\nabla}_{\xi_{\mathbf{X}}} \eta = P_{T_{\mathbf{X}} {\cal M}}( \widebar{\nabla}'_{\xi_{\mathbf{X}}} \eta ),$ where $P_{T_{\mathbf{X}} {\cal M}}(\cdot)$ is the projection operator onto the tangent space $T_{\mathbf{X}} {\cal M}$, $\xi, \eta$ are two vector fields on ${\cal M}$, and ${\rm grad} \bar{f} ({\mathbf{X}})$ and $\widebar{\nabla}'$ are the Riemannian gradient of $\bar{f}$ and the Riemannian connection on $\widebar{{\cal M}}$, respectively. On the other hand, the geometric objects under quotient manifolds are more abstract. Section \ref{sec: Riemannian-opt-quotient} below provides more details on how to perform Riemannian optimization on quotient manifolds.
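The formula ${\rm grad} f({\mathbf{X}}) = P_{T_{\mathbf{X}} {\cal M}}( {\rm grad} \bar{f}({\mathbf{X}}) )$ is easy to instantiate numerically. The sketch below uses the manifold of fixed-rank matrices (the embedded geometry studied later in the paper) together with its well-known tangent-space projector $P_{T_{\mathbf{X}}}({\mathbf{Z}}) = P_{\mathbf{U}} {\mathbf{Z}} + {\mathbf{Z}} P_{\mathbf{V}} - P_{\mathbf{U}} {\mathbf{Z}} P_{\mathbf{V}}$; the random data and variable names are our own illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
p1, p2, r = 6, 5, 2

# A rank-r point X and its thin SVD factors.
X = rng.standard_normal((p1, r)) @ rng.standard_normal((r, p2))
U, _, Vt = np.linalg.svd(X, full_matrices=False)
U, V = U[:, :r], Vt[:r, :].T

def proj_tangent(Z):
    # Orthogonal projector onto the tangent space of the fixed-rank
    # manifold at X: P(Z) = P_U Z + Z P_V - P_U Z P_V.
    PU, PV = U @ U.T, V @ V.T
    return PU @ Z + Z @ PV - PU @ Z @ PV

# Riemannian gradient under the embedded geometry: project the
# ambient (Euclidean) gradient onto the tangent space.
G = rng.standard_normal((p1, p2))      # stand-in for grad f-bar at X
rgrad = proj_tangent(G)

# The projector is idempotent, so rgrad indeed lies in the tangent space.
assert np.allclose(proj_tangent(rgrad), rgrad)
```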
\subsection{Riemannian Optimization on Quotient Manifolds} \label{sec: Riemannian-opt-quotient} Quotient manifolds are often defined via an equivalence relation ``$\sim$'' that is symmetric, reflexive, and transitive \cite[Section 3.4.1]{absil2009optimization}. The equivalence classes are often abstract objects and cannot be directly applied in numerical computations. Riemannian optimization on quotient manifolds therefore works on representatives of these equivalence classes instead. To be specific, suppose $\widebar{{\cal M}}$ is an embedded submanifold equipped with an equivalence relation $\sim$. The {\it equivalence class} (or {\it fiber}) of $\widebar{{\cal M}}$ at a given point ${\mathbf{X}}$ is defined by the set $[{\mathbf{X}}] = \{{\mathbf{X}}_1 \in \widebar{{\cal M}}: {\mathbf{X}}_1 \sim {\mathbf{X}} \}$. The set $ {\cal M} := \widebar{{\cal M}}/\sim = \{[{\mathbf{X}}]: {\mathbf{X}} \in \widebar{{\cal M}} \}$ is called a quotient of $\widebar{{\cal M}}$ by $\sim$. The mapping $\pi: \widebar{{\cal M}} \to \widebar{{\cal M}}/\sim$, $ {\mathbf{X}} \mapsto [{\mathbf{X}}]$ is called the {\it quotient map} or {\it canonical projection} and the set $\widebar{{\cal M}}$ is called the {\it total space} of the quotient $\widebar{{\cal M}}/\sim$. If ${\cal M}$ further admits a smooth manifold structure and $\pi$ is a smooth submersion, then we call ${\cal M}$ a quotient manifold of $\widebar{{\cal M}}$. Due to this abstractness, the tangent space $T_{[{\mathbf{X}}]} {\cal M}$ of ${\cal M}$ at $[{\mathbf{X}}]$ calls for a representation in the tangent space $T_{{\mathbf{X}}}\widebar{{\cal M}}$ of the total space $\widebar{{\cal M}}$. By the equivalence relation $\sim$, the representation of elements in $T_{[{\mathbf{X}}]} {\cal M}$ should be restricted to the directions in $T_{\mathbf{X}} \widebar{{\cal M}}$ without inducing displacement along the equivalence class $[{\mathbf{X}}]$.
This can be achieved by decomposing $T_{{\mathbf{X}}}\widebar{{\cal M}}$ into complementary spaces $T_{{\mathbf{X}}}\widebar{{\cal M}} = \mathcal{V}_{{\mathbf{X}}} \widebar{{\cal M}} \oplus \mathcal{H}_{{\mathbf{X}}} \widebar{{\cal M}}$. Here, $\mathcal{V}_{{\mathbf{X}}} \widebar{{\cal M}}$ is called the {\it vertical space} that contains tangent vectors of the equivalence class $[{\mathbf{X}}]$. $\mathcal{H}_{{\mathbf{X}}} \widebar{{\cal M}}$ is called the {\it horizontal space} of $T_{{\mathbf{X}}}\widebar{{\cal M}}$, which is complementary to $\mathcal{V}_{\mathbf{X}} \widebar{{\cal M}}$ and provides a proper representation of the abstract tangent space $T_{[{\mathbf{X}}]} {\cal M}$ \cite[Section 3.5.8]{absil2009optimization}. Once $\widebar{{\cal M}}$ is endowed with $\mathcal{H}_{{\mathbf{X}}}\widebar{{\cal M}}$, a given tangent vector $\eta_{[{\mathbf{X}}]} \in T_{[{\mathbf{X}}]} {\cal M}$ at $[{\mathbf{X}}]$ is uniquely represented by a horizontal tangent vector $\eta_{\mathbf{X}} \in \mathcal{H}_{\mathbf{X}} \widebar{{\cal M}}$ that satisfies ${\rm D} \pi ({\mathbf{X}})[\eta_{\mathbf{X}}] = \eta_{[{\mathbf{X}}]}$ \cite[Section 3.5.8]{absil2009optimization}. The tangent vector $\eta_{\mathbf{X}} \in \mathcal{H}_{\mathbf{X}} \widebar{{\cal M}}$ is also called the {\it horizontal lift} of $\eta_{[{\mathbf{X}}]}$ at ${\mathbf{X}}$. Next, we introduce the notion of {\it Riemannian quotient manifolds}. Suppose the total space $\widebar{{\cal M}}$ is endowed with a Riemannian metric $\bar{g}_{\mathbf{X}}$, and for every $[{\mathbf{X}}] \in {\cal M}$ and every $\eta_{[{\mathbf{X}}]}, \theta_{[{\mathbf{X}}]} \in T_{[{\mathbf{X}}]} {\cal M}$, the expression $\bar{g}_{\mathbf{X}}(\eta_{\mathbf{X}}, \theta_{\mathbf{X}} )$, i.e., the inner product of the horizontal lifts of $\eta_{[{\mathbf{X}}]}, \theta_{[{\mathbf{X}}]}$ at ${\mathbf{X}}$, does not depend on the choice of the representative ${\mathbf{X}}$. 
Then the metric $\bar{g}_{\mathbf{X}}$ in the total space induces a metric $g_{[{\mathbf{X}}]}$ on the quotient space, i.e., $g_{[{\mathbf{X}}]}(\eta_{[{\mathbf{X}}]}, \theta_{[{\mathbf{X}}]}):= \bar{g}_{\mathbf{X}}(\eta_{\mathbf{X}}, \theta_{\mathbf{X}})$. The quotient manifold ${\cal M}$ endowed with $g_{[{\mathbf{X}}]}$ is called a {\it Riemannian quotient manifold} of $\widebar{{\cal M}}$ and the quotient mapping $\pi: \widebar{{\cal M}} \to {\cal M}$ is called a {\it Riemannian submersion} \cite[Section 3.6.2]{absil2009optimization}. Optimization on Riemannian quotient manifolds is particularly convenient because computation of representatives of Riemannian gradients and Hessians in the abstract quotient space can be directly performed by means of their analogous in the total space. To be specific, suppose $\bar{f}: \widebar{{\cal M}} \to \mathbb{R}$ is an objective function in the total space and is invariant along the fiber, i.e., $\bar{f}({\mathbf{X}}_1) = \bar{f}({\mathbf{X}}_2)$ whenever ${\mathbf{X}}_1 \sim {\mathbf{X}}_2$. Then $\bar{f}$ induces a function $f: {\cal M} \to \mathbb{R}$ on the quotient space and the horizontal lift of the Riemannian gradient of $f$ can be obtained as follows \cite[Section 3.4.2]{meyer2011geometric}: \begin{equation} \label{eq: quotient-gradient-general} \overline{{\rm grad} f([{\mathbf{X}}])} = P_{\mathbf{X}}^\mathcal{H} ({\rm grad} \bar{f} ({\mathbf{X}}) ). \end{equation} Here $P_{\mathbf{X}}^\mathcal{H}(\cdot)$ denotes the projection operator onto the horizontal space at ${\mathbf{X}}$ ($P_{\mathbf{X}}^\mathcal{H}(\cdot)$ depends on the metric $\bar{g}_{\mathbf{X}}$) and ${\rm grad} \bar{f}({\mathbf{X}})$ denotes the Riemannian gradient of $\bar{f}$ at ${\mathbf{X}}$ in the total space. 
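To make the horizontal projection $P_{\mathbf{X}}^{\mathcal H}$ concrete, the sketch below works out one standard example (our own illustration, not a construction from this section): the total space $\mathbb{R}^{p \times r}_*$ with equivalence ${\mathbf{Y}} \sim {\mathbf{Y}} {\mathbf{Q}}$ for orthogonal ${\mathbf{Q}}$ and the Euclidean metric. There, the vertical space at ${\mathbf{Y}}$ consists of $\{{\mathbf{Y}} \boldsymbol{\Omega}: \boldsymbol{\Omega}^\top = -\boldsymbol{\Omega}\}$, and the horizontal projection reduces to solving a small Sylvester equation:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(3)
p, r = 6, 2
Y = rng.standard_normal((p, r))        # a point in the total space R^{p x r}_*
Z = rng.standard_normal((p, r))        # an arbitrary ambient tangent direction

# Vertical space at Y: {Y @ Omega : Omega skew-symmetric}.
# Horizontal projection w.r.t. the Euclidean metric: H = Z - Y @ Omega,
# where Omega solves  Omega (Y^T Y) + (Y^T Y) Omega = Y^T Z - Z^T Y.
YtY = Y.T @ Y
Omega = solve_sylvester(YtY, YtY, Y.T @ Z - Z.T @ Y)
H = Z - Y @ Omega

# The solution Omega is skew-symmetric, and the horizontal component
# is characterised by Y^T H being symmetric.
assert np.allclose(Omega, -Omega.T)
assert np.allclose(Y.T @ H, (Y.T @ H).T)
```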
The following Lemma \ref{lm: quotient-ortho-hori-gradient-express} shows that if the horizontal space is canonically chosen, i.e., $\mathcal{H}_{\mathbf{X}} \widebar{{\cal M}}$ is the orthogonal complement of $\mathcal{V}_{\mathbf{X}} \widebar{{\cal M}}$ in $T_{\mathbf{X}} \widebar{{\cal M}}$ with respect to $\bar{g}_{\mathbf{X}}$, then ${\rm grad} \bar{f} ({\mathbf{X}})$ automatically lies in the horizontal space at ${\mathbf{X}}$. \begin{Lemma}{\rm (\cite[Section 3.6.2]{absil2009optimization})} \label{lm: quotient-ortho-hori-gradient-express} Suppose $\mathcal{H}_{\mathbf{X}} \widebar{{\cal M}}:= \{ \eta_{\mathbf{X}} \in T_{\mathbf{X}} \widebar{{\cal M}}: \bar{g}_{\mathbf{X}}(\eta_{\mathbf{X}}, \theta_{\mathbf{X}}) = 0 \text{ for all } \theta_{\mathbf{X}} \in \mathcal{V}_{\mathbf{X}} \widebar{{\cal M}} \}$. Then $\overline{{\rm grad} f([{\mathbf{X}}])} = {\rm grad} \bar{f} ({\mathbf{X}})$. \end{Lemma} Finally, the Riemannian connection on the quotient manifold ${\cal M}$ can also be uniquely represented by the Riemannian connection in the total space $\widebar{{\cal M}}$. Suppose $\eta, \theta$ are two vector fields on ${\cal M}$ and $\eta_{\mathbf{X}}$ and $\theta_{\mathbf{X}}$ are the horizontal lifts of $\eta_{[{\mathbf{X}}]}$ and $\theta_{[{\mathbf{X}}]}$ in $\mathcal{H}_{\mathbf{X}}\widebar{{\cal M}}$. Then the horizontal lift of $\widebar{\nabla}_{\theta_{[{\mathbf{X}}]}} \eta$ on the quotient manifold is given by $ \overline{ \widebar{\nabla}_{\theta_{[{\mathbf{X}}]}} \eta } = P_{\mathbf{X}}^{\mathcal{H}} ( \widebar{\nabla}_{\theta_{\mathbf{X}}} \eta ) $, where $\widebar{\nabla}_{\theta_{\mathbf{X}}} \eta$ is the Riemannian connection in the total space. 
Combining this with \eqref{def: Riemannain-Hessian}, we see that the horizontal lift of the Riemannian Hessian of $f$ on ${\cal M}$ satisfies \begin{equation} \label{eq: quotient-hessian-linear-from} \overline{{\rm Hess} f([{\mathbf{X}}])[\theta_{[{\mathbf{X}}]}]} = P_{\mathbf{X}}^{\mathcal{H}}\left( \widebar{\nabla}_{\theta_{\mathbf{X}}} \overline{{\rm grad} f} \right) \end{equation} for any $\theta_{[{\mathbf{X}}]} \in T_{[{\mathbf{X}}]} {\cal M}$ and its horizontal lift $\theta_{\mathbf{X}}$. We also define the bilinear form of the horizontal lift of the Riemannian Hessian as $\overline{{\rm Hess} f([{\mathbf{X}}])} [\theta_{{\mathbf{X}}}, \eta_{\mathbf{X}} ] := \bar{g}_{\mathbf{X}}\left( \overline{{\rm Hess} f([{\mathbf{X}}])[\theta_{[{\mathbf{X}}]}]}, \eta_{\mathbf{X}} \right) $ for any $\theta_{\mathbf{X}}, \eta_{\mathbf{X}} \in \mathcal{H}_{\mathbf{X}} \widebar{{\cal M}}$. Then, by recalling the definition of the Riemannian metric $g_{[{\mathbf{X}}]}$ in the quotient space, we have \begin{equation*} \begin{split} \overline{{\rm Hess} f([{\mathbf{X}}])} [\theta_{{\mathbf{X}}}, \eta_{\mathbf{X}} ] &= \bar{g}_{\mathbf{X}}\left(\overline{{\rm Hess} f([{\mathbf{X}}])[\theta_{[{\mathbf{X}}]}]}, \eta_{\mathbf{X}} \right)\\ & = g_{[{\mathbf{X}}]} \left( {\rm Hess} f([{\mathbf{X}}])[\theta_{[{\mathbf{X}}]}], \eta_{[{\mathbf{X}}]} \right) = {\rm Hess} f([{\mathbf{X}}])[\theta_{[{\mathbf{X}}]}, \eta_{[{\mathbf{X}}]}]. \end{split} \end{equation*} So ${\rm Hess} f([{\mathbf{X}}])$ is completely characterized by $\overline{{\rm Hess} f([{\mathbf{X}}])}$ in the lifted horizontal space. \section{A General Procedure for Establishing Geometric Connections in Riemannian Optimization} \label{sec: general-strategy-for-connection} In this section, we present a general procedure for connecting landscape properties of an optimization problem (not necessarily restricted to fixed-rank matrix optimization) under different Riemannian geometries.
For convenience of presentation, we focus on the connection between embedded and quotient geometries, while this procedure can be applied in broader settings. Suppose ${\cal M}^e$, endowed with the Riemannian metric $g_{\mathbf{X}}$, is a Riemannian embedded submanifold of the Riemannian manifold $\widebar{{\cal M}}$. Consider the following optimization problem on ${\cal M}^e$: \begin{equation}\label{eq: general-strategy-example-embedded} \text{(Opt. under Embedded Geometry)} \quad \min_{{\mathbf{X}} \in {\cal M}^e} f({\mathbf{X}}), \end{equation} where $f: {\cal M}^e \to \mathbb{R}$ is twice continuously differentiable and is the restriction of $\bar{f}: \widebar{{\cal M}} \to \mathbb{R}$ to the embedded submanifold ${\cal M}^e$. Suppose $\widebar{{\cal M}}^q$ is another smooth manifold and there exists a submersion $ \ell: {\mathbf{Z}} \in \widebar{{\cal M}}^q \to \ell({\mathbf{Z}}) \in {\cal M}^e$ such that \eqref{eq: general-strategy-example-embedded} can be reformulated on $\widebar{{\cal M}}^q$ as \begin{equation} \label{eq: general-strategy-example} \min_{{\mathbf{Z}} \in \widebar{{\cal M}}^q } \bar{h}({\mathbf{Z}}) := f(\ell({\mathbf{Z}})). \end{equation} We will see concrete examples of ${\cal M}^e$, $\widebar{{\cal M}}^q$ and the mapping $\ell$ later in the context of fixed-rank matrix optimization. The transformation between \eqref{eq: general-strategy-example-embedded} and \eqref{eq: general-strategy-example} can be regarded as a generalization of the classic technique of change of variables to the Riemannian optimization setting. However, we shall emphasize that the mapping $\ell$ is not necessarily a bijection. Hence, it is nontrivial, and generally impossible, to explicitly characterize the connections between the desired points, e.g., the FOSPs, SOSPs, strict saddles, and (local) minimizers, of \eqref{eq: general-strategy-example-embedded} and those of \eqref{eq: general-strategy-example}.
If we further assume $\ell$ defines an equivalence relation ``$\sim$'' on $\widebar{{\cal M}}^q$, i.e., ${\mathbf{Z}}_1 \sim {\mathbf{Z}}_2$ if and only if $\ell({\mathbf{Z}}_1) = \ell({\mathbf{Z}}_2)$, and ${\cal M}^q:=\widebar{{\cal M}}^q/\sim$ with metric $g_{[{\mathbf{Z}}]}$ is a Riemannian quotient manifold and diffeomorphic to ${\cal M}^e$, then $\bar{h}({\mathbf{Z}})$ induces a function $h([{\mathbf{Z}}])$ on the quotient manifold ${\cal M}^q$ and \eqref{eq: general-strategy-example-embedded} can be transformed to an optimization on the quotient manifold $\mathcal{M}^q$: \begin{equation} \label{eq: general-strategy-example-quotient} \text{(Opt. under Quotient Geometry)} \quad \min_{[{\mathbf{Z}}] \in {\cal M}^q}h([{\mathbf{Z}}]). \end{equation} Compared with \eqref{eq: general-strategy-example}, problem \eqref{eq: general-strategy-example-quotient} contains an additional quotienting step, and this makes it possible to connect the geometric properties of problems \eqref{eq: general-strategy-example-embedded} and \eqref{eq: general-strategy-example-quotient}. First, since $\ell$ induces a diffeomorphism $\tilde{\ell}$ between ${\cal M}^q$ and ${\cal M}^e$ \cite[Proposition 3.5.23]{abraham2012manifolds}, a simple fact stated in the following lemma shows that there is a one-to-one correspondence between the local minimizers of \eqref{eq: general-strategy-example-embedded} and those of \eqref{eq: general-strategy-example-quotient}. \begin{Lemma} \label{lm: local-minimizer-connection} If ${\mathbf{X}}$ is a local minimizer of \eqref{eq: general-strategy-example-embedded}, then $\tilde{\ell}^{-1}({\mathbf{X}})$ is a local minimizer of \eqref{eq: general-strategy-example-quotient}; if $[{\mathbf{Z}}]$ is a local minimizer of \eqref{eq: general-strategy-example-quotient}, then ${\mathbf{X}} = \tilde{\ell}([{\mathbf{Z}}])$ is a local minimizer of \eqref{eq: general-strategy-example-embedded}.
\end{Lemma} However, for numerically tractable points such as FOSPs and SOSPs, the analysis of the connections is much more involved, as the definitions of these points depend deeply on the Riemannian geometry of the underlying manifolds. We show next that, with a careful treatment of the Riemannian gradients and Hessians of \eqref{eq: general-strategy-example-embedded} and \eqref{eq: general-strategy-example-quotient}, we are able to obtain a much richer geometric connection between the landscapes of these two problems. \subsection{Outline of the Procedure} \label{sec: general-strategy-outline} First, we find it is often relatively easy to connect the first-order geometries of \eqref{eq: general-strategy-example-embedded} and \eqref{eq: general-strategy-example-quotient}. For example, by taking the derivative of both sides of $\bar{h}({\mathbf{Z}}) = f(\ell({\mathbf{Z}}))$ along the direction $\theta_{\mathbf{Z}} \in T_{\mathbf{Z}} \widebar{{\cal M}}^q$ and applying the chain rule, we have \begin{equation} \label{eq: general-gradient-connection} \bar{g}_{\mathbf{Z}} \left( {\rm grad}\, \bar{h}({\mathbf{Z}}), \theta_{\mathbf{Z}} \right) = {\rm D} \bar{h}({\mathbf{Z}})[\theta_{\mathbf{Z}}] = {\rm D} f(\ell({\mathbf{Z}}))[ {\rm D} \ell({\mathbf{Z}})[\theta_{\mathbf{Z}}] ] \overset{(a)}= g_{\mathbf{X}} \left( {\rm grad} f(\ell({\mathbf{Z}})), {\rm D} \ell({\mathbf{Z}})[\theta_{\mathbf{Z}}] \right). \end{equation} Here $\bar{g}_{\mathbf{Z}}$ is the Riemannian metric on $\widebar{{\cal M}}^q$; (a) holds because ${\rm D} \ell({\mathbf{Z}})[\theta_{\mathbf{Z}}]\in T_{\ell({\mathbf{Z}})} {\cal M}^e$ \cite[Section 3.5.1]{absil2009optimization}. Thus, for reasonable choices of $\ell$, we hope to find a connection between ${\rm grad}\, \bar{h}({\mathbf{Z}})$ and ${\rm grad} f(\ell({\mathbf{Z}}))$ based on \eqref{eq: general-gradient-connection}, and this can further give us the first-order geometric connection.
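The identity \eqref{eq: general-gradient-connection} is straightforward to verify numerically. The Python sketch below is a hypothetical illustration: the choices $\bar{f}({\mathbf{X}}) = \frac{1}{2}\|{\mathbf{X}} - {\mathbf{A}}\|_{\rm F}^2$, the submersion $\ell({\mathbf{Y}}) = {\mathbf{Y}} {\mathbf{Y}}^\top$, and Euclidean metrics on both sides are our own choices for the sketch, not prescribed above. It compares a finite-difference approximation of ${\rm D} \bar{h}({\mathbf{Z}})[\theta_{\mathbf{Z}}]$ with the right-hand side of \eqref{eq: general-gradient-connection}.

```python
import numpy as np

rng = np.random.default_rng(0)
p, r = 6, 2
A = rng.standard_normal((p, p)); A = (A + A.T) / 2   # symmetric data matrix (illustrative)
Y = rng.standard_normal((p, r))                      # a point Z in the total space
theta = rng.standard_normal((p, r))                  # a tangent direction theta_Z

f = lambda X: 0.5 * np.linalg.norm(X - A, "fro") ** 2   # \bar f restricted to M^e
ell = lambda Y: Y @ Y.T                                 # the submersion ell (hypothetical)
hbar = lambda Y: f(ell(Y))

# Left-hand side: directional derivative D hbar(Z)[theta_Z] by central differences.
eps = 1e-6
lhs = (hbar(Y + eps * theta) - hbar(Y - eps * theta)) / (2 * eps)

# Right-hand side: g_X( grad f(ell(Z)), D ell(Z)[theta_Z] ) with the Euclidean metric,
# where grad f(X) = X - A and D ell(Y)[theta] = theta Y^T + Y theta^T.
grad_f = ell(Y) - A
D_ell = theta @ Y.T + Y @ theta.T
rhs = np.sum(grad_f * D_ell)

assert abs(lhs - rhs) < 1e-4 * (1 + abs(rhs))
```

Up to finite-difference error, the two sides agree, which is exactly the first-order connection the identity promises for this particular $\ell$.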
Next, we present a general three-step procedure to connect the second-order geometries between \eqref{eq: general-strategy-example-embedded} and \eqref{eq: general-strategy-example-quotient}. \begin{itemize}[leftmargin=*] \item {\bf Step 1: Compute the quadratic forms of Riemannian Hessians.} We first compute the Riemannian Hessians from their definitions. For fixed-rank matrix optimization, we will derive the explicit expressions for the quadratic form of the Riemannian Hessians in Sections \ref{sec: connection-PSD} and \ref{sec: connection-general} and will see the quadratic forms of the Riemannian Hessians of $f({\mathbf{X}})$ and $h([{\mathbf{Z}}])$ always involve the quadratic form of the Riemannian Hessian of $\bar{f}$ and the Riemannian gradient of $\bar{f}$ or $\bar{h}$. For concreteness, we assume \begin{equation} \label{eq: general-setup-Hessian-expression} \begin{split} {\rm Hess} f({\mathbf{X}})[\xi_{\mathbf{X}}, \xi_{\mathbf{X}}] &= {\rm Hess} \bar{f} ({\mathbf{X}})[\phi(\xi_{\mathbf{X}}), \phi(\xi_{\mathbf{X}})] + \Psi_1 , \quad \forall \xi_{\mathbf{X}} \in T_{\mathbf{X}} {\cal M}^e,\\ \overline{{\rm Hess}\, h([{\mathbf{Z}}])}[\theta_{\mathbf{Z}}, \theta_{\mathbf{Z}}] &= {\rm Hess} \bar{f} (\ell({\mathbf{Z}}))[\varphi(\theta_{\mathbf{Z}}), \varphi(\theta_{\mathbf{Z}})] + \Psi_2 ,\quad \forall \theta_{\mathbf{Z}} \in \mathcal{H}_{\mathbf{Z}} \widebar{{\cal M}}^q. 
\end{split} \end{equation} Here $\phi: T_{\mathbf{X}} {\cal M}^e \to T_{\mathbf{X}} \widebar{{\cal M}}$ and $\varphi: \mathcal{H}_{\mathbf{Z}} \widebar{{\cal M}}^q \to T_{\ell({\mathbf{Z}})} \widebar{{\cal M}}$ are suitable maps, where $\mathcal{H}_{\mathbf{Z}} \widebar{{\cal M}}^q$ is the horizontal space of $T_{\mathbf{Z}} \widebar{{\cal M}}^q$; $\Psi_1$ and $\Psi_2$ incorporate the remaining terms of ${\rm Hess} f({\mathbf{X}})[\xi_{\mathbf{X}}, \xi_{\mathbf{X}}]$ and $\overline{{\rm Hess}\, h([{\mathbf{Z}}])}[\theta_{\mathbf{Z}}, \theta_{\mathbf{Z}}]$ other than the quadratic form of the Riemannian Hessian of $\bar{f}$. As we will see in Propositions \ref{prop: gradient-hessian-exp-PSD} and \ref{prop: gradient-hessian-exp-general}, we often have $\phi(\xi_{\mathbf{X}}) = \xi_{\mathbf{X}}$ and $\varphi(\theta_{\mathbf{Z}}) = {\rm D} \ell({\mathbf{Z}})[\theta_{\mathbf{Z}}]$. \item {\bf Step 2: Find a proper mapping $\mathcal{L}$ between $\mathcal{H}_{\mathbf{Z}} \widebar{{\cal M}}^q$ and $T_{\mathbf{X}} {\cal M}^e$ to connect Riemannian Hessians. } To establish the second-order geometric connection, i.e., the connection between ${\rm Hess} f({\mathbf{X}})$ and $\overline{{\rm Hess}\, h([{\mathbf{Z}}])}$ with ${\mathbf{X}} = \ell({\mathbf{Z}})$, a natural idea is to first connect ${\rm Hess} \bar{f} ({\mathbf{X}})[\phi(\xi_{\mathbf{X}}), \phi(\xi_{\mathbf{X}})]$ with ${\rm Hess} \bar{f} (\ell({\mathbf{Z}}))[\varphi(\theta_{\mathbf{Z}}), \varphi(\theta_{\mathbf{Z}})]$. To do this, we would like to find a mapping $\mathcal{L}$ between $\mathcal{H}_{\mathbf{Z}} \widebar{{\cal M}}^q$ and $T_{\mathbf{X}} {\cal M}^e$ such that $\phi(\mathcal{L}(\theta_{\mathbf{Z}}))=\varphi(\theta_{\mathbf{Z}})$. Moreover, $\mathcal{L}$ is further constrained to be a bijection so that we can connect the whole spectrum of ${\rm Hess} f({\mathbf{X}})$ with that of $\overline{{\rm Hess}\, h([{\mathbf{Z}}])}$, as we will see in Step 3.
On the other hand, such a mapping $\mathcal{L}$ alone seems insufficient for connecting ${\rm Hess} f({\mathbf{X}})[\xi_{\mathbf{X}}, \xi_{\mathbf{X}}]$ with $\overline{{\rm Hess}\, h([{\mathbf{Z}}])}[\theta_{\mathbf{Z}}, \theta_{\mathbf{Z}}]$, as $\Psi_1$ and $\Psi_2$ in \eqref{eq: general-setup-Hessian-expression} can be complex and distinct from each other (see the forthcoming Propositions \ref{prop: gradient-hessian-exp-PSD} and \ref{prop: gradient-hessian-exp-general}). This motivates us to impose some assumptions on ${\mathbf{X}}$ or $[{\mathbf{Z}}]$ in order to proceed. In the following, we consider a simple setting to illustrate what assumptions may be needed. \begin{Example}\label{ex: example-1} Consider the optimization problem: $ \min_{x \geq 0} f(x) $, where $f$ is a scalar function defined on the nonnegative part of the real line. If we consider the factorization $x = z^2$ and let $\bar{h}(z) = f(z^2)$, it is easy to see that $\bar{h}(z) = \bar{h}(-z)$. So we can consider the equivalence classes $[z] = \{ z, -z \}$ for $z \in \mathbb{R}$ and take the quotient manifold ${\cal M}^q = \mathbb{R}/(+-)$, i.e., quotienting out the sign of the real number, as the search space. Suppose the Euclidean inner product is adopted as the Riemannian metric. Then in both cases, the Riemannian Hessians of $f(x)$ and $h([z])$ are equal to the corresponding Euclidean Hessians (see also the forthcoming Proposition \ref{prop: gradient-hessian-exp-PSD}): \begin{equation} \label{eq: scalar-motivating-example} \begin{split} {\rm Hess} f(x)[\xi_x, \xi_x] = \xi^2_x f''(x); \quad \overline{{\rm Hess}\, h([z])}[\theta_z, \theta_z] = \theta_z^2 \bar{h}''(z) = 2\theta_z^2f'(z^2) + 4 z^2\theta_z^2 f''(z^2), \end{split} \end{equation} where $f'$ and $f''$ denote the first and second derivatives of $f$.
Given $z_1 \in \mathbb{R}$, $x_1 = z_1^2$ and \eqref{eq: scalar-motivating-example}, we see that a natural assumption for connecting ${\rm Hess} f(x_1)[\xi_{x_1}, \xi_{x_1}]$ with $\overline{{\rm Hess}\, h([z_1])}[\theta_{z_1}, \theta_{z_1}]$ is that $x_1 = z_1^2$ is a FOSP of $f(x)$, i.e., $f'(z_1^2) = 0$; otherwise it is not even guaranteed that ${\rm Hess} f(x_1)[\xi_{x_1}, \xi_{x_1}]$ and $\overline{{\rm Hess}\, h([z_1])}[\theta_{z_1}, \theta_{z_1}]$ will have the same sign. At the same time, a more encouraging fact is that if we further take the bijection $\mathcal{L}$ to be $\mathcal{L}(\theta_z) = 2z \theta_z$ for any $\theta_z \in \mathbb{R}$, then we have $\overline{{\rm Hess}\, h([z_1])}[\theta_{z_1}, \theta_{z_1}] = {\rm Hess} f(x_1)[\mathcal{L}(\theta_{z_1}), \mathcal{L}(\theta_{z_1})]$. \end{Example} Motivated by Example \ref{ex: example-1}, we find that, by imposing proper first-order assumptions on ${\mathbf{X}}$ or $[{\mathbf{Z}}]$, we can hope for a nice connection between ${\rm Hess} f({\mathbf{X}})$ and $\overline{{\rm Hess}\, h([{\mathbf{Z}}])}$. In fact, as we will see in Sections \ref{sec: connection-PSD} and \ref{sec: connection-general}, this intuition applies to all examples we consider in this paper. \item {\bf Step 3: Establish the spectral connection between Riemannian Hessians via bounding the spectrum of $\mathcal{L}$.} Suppose one has successfully worked through Steps 1 and 2 and arrived at the stage that $\overline{{\rm Hess}\, h([{\mathbf{Z}}])}[\theta_{\mathbf{Z}}, \theta_{\mathbf{Z}}] = {\rm Hess} f({\mathbf{X}})[\mathcal{L}(\theta_{\mathbf{Z}}), \mathcal{L}(\theta_{\mathbf{Z}})]$ is shown to hold for any $\theta_{\mathbf{Z}} \in \mathcal{H}_{\mathbf{Z}} \widebar{{\cal M}}^q$ at properly chosen $[{\mathbf{Z}}]$ and ${\mathbf{X}} = \ell({\mathbf{Z}})$.
The following Theorem \ref{th: hessian-sandwich} shows a sandwich inequality between the spectra of ${\rm Hess} f({\mathbf{X}})$ and $\overline{{\rm Hess}\, h([{\mathbf{Z}}])}$ based on spectrum bounds of $\mathcal{L}$. \end{itemize} \begin{Theorem}[Sandwich Inequalities for Spectra of Hessians] \label{th: hessian-sandwich} Suppose ${\mathbf{X}} \in {\cal M}^e$, ${\mathbf{Z}} \in \widebar{{\cal M}}^q$, $\dim(T_{\mathbf{X}} {\cal M}^e) = \dim(\mathcal{H}_{\mathbf{Z}} \widebar{{\cal M}}^q) = p$. Then both ${\rm Hess} f({\mathbf{X}})$ and $\overline{{\rm Hess}\, h([{\mathbf{Z}}])}$ have $p$ eigenvalues. Moreover, if $\mathcal{L}: \mathcal{H}_{\mathbf{Z}} \widebar{{\cal M}}^q \to T_{\mathbf{X}} {\cal M}^e$ is a bijection satisfying \begin{equation} \label{eq: general-setting-hessian-connection} \overline{{\rm Hess}\, h([{\mathbf{Z}}])}[\theta_{\mathbf{Z}}, \theta_{\mathbf{Z}}] = {\rm Hess} f({\mathbf{X}})[\mathcal{L}(\theta_{\mathbf{Z}}), \mathcal{L}(\theta_{\mathbf{Z}})], \quad \forall \theta_{\mathbf{Z}} \in \mathcal{H}_{\mathbf{Z}} \widebar{{\cal M}}^q \end{equation} and \begin{equation} \label{eq: spectrum-bound-general-setting-L} \alpha \bar{g}_{\mathbf{Z}} (\theta_{\mathbf{Z}}, \theta_{\mathbf{Z}}) \leq g_{\mathbf{X}} (\mathcal{L}(\theta_{\mathbf{Z}}),\mathcal{L}(\theta_{\mathbf{Z}})) \leq \beta \bar{g}_{\mathbf{Z}} (\theta_{\mathbf{Z}}, \theta_{\mathbf{Z}}), \quad \forall \theta_{\mathbf{Z}} \in \mathcal{H}_{\mathbf{Z}} \widebar{{\cal M}}^q \end{equation} holds for some $0\leq \alpha \leq \beta$, then for $k = 1,\ldots,p$, we have $$\lambda_k( \overline{{\rm Hess}\, h([{\mathbf{Z}}])} ) \text{ is sandwiched between } \alpha \lambda_k( {\rm Hess} f({\mathbf{X}}) ) \text{ and } \beta \lambda_k( {\rm Hess} f({\mathbf{X}})),$$ where $\lambda_k( {\rm Hess} f({\mathbf{X}}) )$ and $\lambda_k( \overline{{\rm Hess}\, h([{\mathbf{Z}}])} )$ are the $k$-th largest eigenvalues of ${\rm Hess} f({\mathbf{X}})$ and $\overline{{\rm Hess}\, h([{\mathbf{Z}}])}$, respectively.
\end{Theorem} {\noindent \bf Proof of Theorem \ref{th: hessian-sandwich}.} First, because $\overline{{\rm Hess}\, h([{\mathbf{Z}}])}$ and ${\rm Hess} f({\mathbf{X}})$ are by definition self-adjoint linear maps from $\mathcal{H}_{\mathbf{Z}} \widebar{{\cal M}}^q$ and $T_{\mathbf{X}} {\cal M}^e$ to $\mathcal{H}_{\mathbf{Z}} \widebar{{\cal M}}^q$ and $T_{\mathbf{X}} {\cal M}^e$, respectively, both ${\rm Hess} f({\mathbf{X}})$ and $\overline{{\rm Hess}\, h([{\mathbf{Z}}])}$ have $p$ eigenvalues as $\dim(T_{\mathbf{X}} {\cal M}^e) = \dim(\mathcal{H}_{\mathbf{Z}} \widebar{{\cal M}}^q) = p$. Suppose $\u_1,\ldots,\u_p$ are eigenvectors corresponding to $\lambda_1({\rm Hess} f({\mathbf{X}}))$,$\ldots$, $\lambda_p({\rm Hess} f({\mathbf{X}}))$ and $\v_1,\ldots,\v_p$ are eigenvectors corresponding to $\lambda_1(\overline{{\rm Hess}\, h([{\mathbf{Z}}])}),\ldots,\lambda_p(\overline{{\rm Hess}\, h([{\mathbf{Z}}])})$. For $k = 1,\ldots, p$, define \begin{equation*} \begin{split} &\mathcal{U}_k = {\rm span}\{\u_1,\ldots,\u_k\}, \quad \mathcal{U}'_k = {\rm span}\{\mathcal{L}^{-1}(\u_1),\ldots,\mathcal{L}^{-1}(\u_k)\},\\ &\mathcal{V}_k = {\rm span}\{\v_1,\ldots,\v_k\}, \quad \mathcal{V}'_k = {\rm span}\{\mathcal{L}(\v_1),\ldots,\mathcal{L}(\v_k)\}. \end{split} \end{equation*} Let us first consider the case that $\lambda_k({\rm Hess} f({\mathbf{X}})) \geq 0$. 
The Max-min theorem for eigenvalues (Lemma \ref{lm: max-min-theorem}) yields \begin{equation} \label{ineq: spectrum-ineq1} \begin{split} \lambda_k(\overline{{\rm Hess}\, h([{\mathbf{Z}}])}) &\geq \min_{\u' \in \mathcal{U}_k', \u' \neq 0 } \frac{\overline{{\rm Hess}\, h([{\mathbf{Z}}])}[\u',\u']}{\bar{g}_{\mathbf{Z}}(\u',\u')} \overset{ \eqref{eq: general-setting-hessian-connection} }= \min_{\u' \in \mathcal{U}_k', \u' \neq 0} \frac{{\rm Hess} f({\mathbf{X}})[\mathcal{L}(\u'),\mathcal{L}(\u')]}{\bar{g}_{\mathbf{Z}}(\u',\u')} \\ & = \min_{\u \in \mathcal{U}_k, \u \neq 0} \frac{{\rm Hess} f({\mathbf{X}})[\u,\u]}{\bar{g}_{\mathbf{Z}}(\mathcal{L}^{-1}(\u),\mathcal{L}^{-1}(\u))} \geq \min_{\u \in \mathcal{U}_k,\u \neq {\mathbf{0}} }\frac{\lambda_k({\rm Hess} f({\mathbf{X}}))g_{\mathbf{X}}(\u,\u)}{\bar{g}_{\mathbf{Z}}(\mathcal{L}^{-1}(\u),\mathcal{L}^{-1}(\u))} \\ &\overset{\eqref{eq: spectrum-bound-general-setting-L} } \geq \alpha \lambda_k({\rm Hess} f({\mathbf{X}})) \geq 0. \end{split} \end{equation} On the other hand, we have \begin{equation}\label{ineq: spectrum-ineq2} \begin{split} & \quad \lambda_k({\rm Hess} f({\mathbf{X}})) \\ &\overset{\text{Lemma } \ref{lm: max-min-theorem}} \geq \min_{\v' \in \mathcal{V}_k',\v' \neq {\mathbf{0}} } \frac{{\rm Hess} f({\mathbf{X}})[\mathcal{L} \mathcal{L}^{-1}(\v'),\mathcal{L} \mathcal{L}^{-1}(\v')]}{g_{\mathbf{X}}(\v',\v')} \overset{ \eqref{eq: general-setting-hessian-connection} } = \min_{\v' \in \mathcal{V}_k',\v' \neq {\mathbf{0}} } \frac{\overline{{\rm Hess}\, h([{\mathbf{Z}}])}[\mathcal{L}^{-1}(\v'),\mathcal{L}^{-1}(\v')]}{g_{\mathbf{X}}(\v',\v')}\\ &= \min_{\v \in \mathcal{V}_k,\v \neq {\mathbf{0}} } \frac{\overline{{\rm Hess}\, h([{\mathbf{Z}}])}[\v,\v] }{g_{\mathbf{X}}(\mathcal{L}(\v),\mathcal{L}(\v))} \geq \min_{\v \in \mathcal{V}_k,\v \neq {\mathbf{0}} } \frac{\lambda_k(\overline{{\rm Hess}\, h([{\mathbf{Z}}])}) \bar{g}_{\mathbf{Z}}(\v,\v) }{g_{\mathbf{X}}(\mathcal{L}(\v),\mathcal{L}(\v))} \\ & \overset{\eqref{ineq: 
spectrum-ineq1}, \eqref{eq: spectrum-bound-general-setting-L} }\geq \lambda_k(\overline{{\rm Hess}\, h([{\mathbf{Z}}])} )/\beta. \end{split} \end{equation} So we have proved the result for the case that $\lambda_k({\rm Hess} f({\mathbf{X}})) \geq 0$. When $\lambda_k({\rm Hess} f({\mathbf{X}})) < 0$, we have $\lambda_{p+1-k}(-{\rm Hess} f({\mathbf{X}})) = -\lambda_k({\rm Hess} f({\mathbf{X}})) > 0$. Following the same proof of \eqref{ineq: spectrum-ineq1} and \eqref{ineq: spectrum-ineq2}, we have \begin{equation*} \begin{split} -\lambda_k(\overline{{\rm Hess}\, h([{\mathbf{Z}}])}) = \lambda_{p+1-k}(-\overline{{\rm Hess}\, h([{\mathbf{Z}}])}) \geq \alpha \lambda_{p+1-k}(-{\rm Hess} f({\mathbf{X}})) = -\alpha\lambda_k({\rm Hess} f({\mathbf{X}})) > 0,\\ -\lambda_k({\rm Hess} f({\mathbf{X}})) = \lambda_{p+1-k}(-{\rm Hess} f({\mathbf{X}})) \geq \lambda_{p+1-k}(-\overline{{\rm Hess}\, h([{\mathbf{Z}}])})/\beta = -\lambda_k(\overline{{\rm Hess}\, h([{\mathbf{Z}}])})/\beta. \end{split} \end{equation*} This finishes the proof of this theorem. \quad $\blacksquare$ \section{Embedded and Quotient Geometries on Fixed-rank Matrices} \label{sec: embedded-quotient-fixed-rank-matrix} Now we specifically focus on the fixed-rank matrix optimization problems \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob}. The set of $p$-by-$p$ rank $r$ PSD matrices ${\cal M}_{r+}:=\left\{ {\mathbf{X}}\in \mathbb{S}^{p\times p} \mid {\rm rank}({\mathbf{X}}) = r, {\mathbf{X}} \succcurlyeq 0 \right\}$ and the set of $p_1$-by-$p_2$ rank $r$ matrices ${\cal M}_r:=\left\{ {\mathbf{X}}\in \mathbb{R}^{p_1\times p_2}\mid {\rm rank}({\mathbf{X}}) = r \right\}$ are two manifolds of particular interest. In Sections \ref{sec: embeddded-fixed-rank-matrix} and \ref{sec: quotient-fixed-rank-matrix}, we introduce embedded and quotient geometries on ${\cal M}_{r+}$ and ${\cal M}_r$, respectively.
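Before specializing to these manifolds, Theorem \ref{th: hessian-sandwich} admits a direct numerical illustration in coordinates. In the Python sketch below (all concrete objects are our own illustrative choices), both Hessians are identified with symmetric matrices, $\mathcal{L}$ with an invertible matrix ${\mathbf{L}}$, so that \eqref{eq: general-setting-hessian-connection} reads $H_h = {\mathbf{L}}^\top H_f {\mathbf{L}}$ and the tightest constants in \eqref{eq: spectrum-bound-general-setting-L} are $\alpha = \lambda_{\min}({\mathbf{L}}^\top {\mathbf{L}})$ and $\beta = \lambda_{\max}({\mathbf{L}}^\top {\mathbf{L}})$.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 5
H_f = rng.standard_normal((p, p)); H_f = (H_f + H_f.T) / 2   # Hess f(X) in coordinates
L = rng.standard_normal((p, p)) + 3 * np.eye(p)              # the bijection L (invertible)

# alpha and beta are the extreme eigenvalues of L^T L, i.e., the tightest
# constants in the metric-distortion bound for L.
s = np.linalg.eigvalsh(L.T @ L)
alpha, beta = s[0], s[-1]
assert alpha > 0

H_h = L.T @ H_f @ L                               # Hess h pulled back through L
lam_f = np.sort(np.linalg.eigvalsh(H_f))[::-1]    # eigenvalues, descending
lam_h = np.sort(np.linalg.eigvalsh(H_h))[::-1]

# Sandwich: lambda_k(H_h) lies between alpha*lambda_k(H_f) and beta*lambda_k(H_f);
# sorted() handles the flipped order when lambda_k(H_f) < 0.
for k in range(p):
    lo, hi = sorted((alpha * lam_f[k], beta * lam_f[k]))
    assert lo - 1e-8 <= lam_h[k] <= hi + 1e-8
```

In particular, the sketch confirms that the eigenvalue signs (and hence FOSP/SOSP/strict-saddle classifications) are preserved whenever $\alpha > 0$.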
\subsection{Embedded Geometries for ${\cal M}_{r+}$ and ${\cal M}_r$} \label{sec: embeddded-fixed-rank-matrix} The following lemma shows that ${\cal M}_{r+}$ and ${\cal M}_r$ are smooth embedded submanifolds of $\mathbb{R}^{p \times p}$ and $\mathbb{R}^{p_1 \times p_2}$ and also summarizes commonly used algebraic representations of the corresponding tangent spaces. To emphasize the embedded nature of ${\cal M}_{r+}$ and ${\cal M}_r$, we write them as ${\cal M}^e_{r+}$ and ${\cal M}^e_{r}$, respectively. \begin{Lemma}{\rm(\cite[Chapter 5]{helmke2012optimization}, \cite[Proposition 5.2]{vandereycken2010riemannian}, \cite[Example 8.14]{lee2013smooth})}\label{lm: Mr+ and Mr manifold} ${\cal M}^e_{r+}$ and ${\cal M}^e_{r}$ are smooth embedded submanifolds of $\mathbb{R}^{p \times p}$ and $\mathbb{R}^{p_1 \times p_2}$ with dimensions $(pr - r(r-1)/2)$ and $(p_1 + p_2 -r)r$, respectively. The tangent space $T_{{\mathbf{X}}}{\cal M}^e_{r+}$ at ${\mathbf{X}} \in {\cal M}^e_{r+}$ is \begin{equation} \label{eq:tangent Mr+} T_{\mathbf{X}} {\cal M}_{r+}^e = \left\{ [{\mathbf{U}}\quad {\mathbf{U}}_{\perp}] \begin{bmatrix} \S & {\mathbf{D}}^\top \\[2pt] {\mathbf{D}} & {\mathbf{0}} \end{bmatrix} [{\mathbf{U}}\quad {\mathbf{U}}_{\perp}]^\top: \S \in \mathbb{S}^{r \times r}, {\mathbf{D}} \in \mathbb{R}^{(p-r) \times r} \right\}, \end{equation} where ${\mathbf{U}} \in {\rm St}(r,p)$ spans the top $r$ eigenspace of ${\mathbf{X}}$.
The tangent space $T_{{\mathbf{X}}}{\cal M}^e_{r}$ at ${\mathbf{X}} \in {\cal M}^e_{r}$ is \begin{equation} \label{eq:tangent Mr} T_{\mathbf{X}} {\cal M}_r^e = \left\{ [{\mathbf{U}}\quad {\mathbf{U}}_{\perp}] \begin{bmatrix} \S & {\mathbf{D}}_2^\top \\[2pt] {\mathbf{D}}_1 & {\mathbf{0}} \end{bmatrix} [{\mathbf{V}}\quad {\mathbf{V}}_{\perp}]^\top: \S \in \mathbb{R}^{r \times r}, {\mathbf{D}}_1 \in \mathbb{R}^{(p_1 - r)\times r}, {\mathbf{D}}_2 \in \mathbb{R}^{(p_2 - r)\times r} \right\}, \end{equation} where ${\mathbf{U}} \in {\rm St}(r,p_1)$ and ${\mathbf{V}}\in {\rm St}(r,p_2)$ span the left and right singular subspaces of ${\mathbf{X}}$, respectively. \end{Lemma} In addition, we assume the embedded submanifolds $\mathcal{M}^e_{r+}$ and $\mathcal{M}^e_r$ are endowed with the natural metric induced by the Euclidean inner product, i.e., $\langle {\mathbf{U}}, {\mathbf{V}} \rangle = \mathrm{trace}({\mathbf{U}}^\top {\mathbf{V}})$. \subsection{Quotient Geometries for ${\cal M}_{r+}$ and ${\cal M}_r$} \label{sec: quotient-fixed-rank-matrix} The versatile choices of fixed-rank matrix factorization yield various Riemannian quotient structures and metrics, which have been explored in the literature on both ${\cal M}_{r+}$ \cite{meyer2011regression,bonnabel2010riemannian,journee2010low,massart2020quotient} and ${\cal M}_r$ \cite{absil2014two,mishra2014fixed,mishra2012riemannian,meyer2011linear}. Due to the factorization, the total space of fixed-rank matrices under the quotient geometry (i.e., the focus of this subsection) can often be written as a product space of some simple smooth manifolds, including the following three examples to be used later: \begin{itemize} \item[(1)] $\mathbb{R}^{p \times r}_*$: the set of real $p$-by-$r$ full column rank matrices; \item[(2)] ${\rm St}(r,p)$: the set of real $p$-by-$r$ matrices with orthonormal columns; \item[(3)] $\mathbb{S}_{+}(r)$: the set of $r$-by-$r$ real symmetric positive definite matrices.
\end{itemize} All these three manifolds are smooth homogeneous spaces and there exists a smooth structure on their product space \cite[Section 3.1.6]{absil2009optimization}. In Table \ref{tab: basic-prop-simple-manifold}, we summarize several basic properties of these simple manifolds. \begin{table}[ht] \centering \begin{tabular}{c | c | c | c} \hline & $\mathbb{R}^{p \times r}_*$ & ${\rm St}(r,p)$ & $\mathbb{S}_{+}(r)$ \\ \hline Dimension & $pr$ & $pr-(r^2 + r)/2$ & $(r^2 + r)/2$ \\ \hline Matrix & \multirow{2}{1em}{${\mathbf{Y}}$} & \multirow{2}{1em}{${\mathbf{U}}$} & \multirow{2}{1em}{${\mathbf{B}}$} \\ representation & & & \\ \hline \multirow{3}{5em}{Tangent space} & \multirow{3}{7em}{$T_{{\mathbf{Y}}} \mathbb{R}^{p \times r}_* = \mathbb{R}^{p \times r}$} & \multirow{3}{12em}{$T_{{\mathbf{U}}}{\rm St}(r,p) = \{{\mathbf{U}}\boldsymbol{\Omega} + {\mathbf{U}}_\perp {\mathbf{D}}: \boldsymbol{\Omega} = - \boldsymbol{\Omega}^\top \in \mathbb{R}^{r \times r}, {\mathbf{D}} \in \mathbb{R}^{(p-r) \times r} \}$} & \multirow{3}{8em}{$T_{{\mathbf{B}}} \mathbb{S}_{+} (r) = \mathbb{S}^{r \times r}$}\\ & & & \\ & & & \\ \hline \multirow{3}{7em}{Projection onto tangent space} & \multirow{3}{8em}{$P_{T_{{\mathbf{Y}}} \mathbb{R}^{p \times r}_*}(\eta_{\mathbf{Y}}) = \eta_{\mathbf{Y}}$,$\forall \eta_{\mathbf{Y}} \in \mathbb{R}^{p \times r}$} & \multirow{3}{12em}{$P_{T_{{\mathbf{U}}}{\rm St}(r,p)}(\eta_{\mathbf{U}}) = P_{{\mathbf{U}}_\perp}(\eta_{\mathbf{U}}) + {\mathbf{U}} \skew({\mathbf{U}}^\top \eta_{\mathbf{U}})$, $\forall \eta_{\mathbf{U}} \in \mathbb{R}^{p \times r}$} & \multirow{3}{8em}{$P_{T_{{\mathbf{B}}} \mathbb{S}_{+} (r)}(\eta_{\mathbf{B}}) = {\rm Sym}(\eta_{\mathbf{B}})$, $\forall \eta_{\mathbf{B}} \in \mathbb{R}^{r \times r}$}\\ & & &\\ & & &\\ \hline Metric $g$ on & \multirow{2}{12em}{$g_{\mathbf{Y}}(\theta_{\mathbf{Y}}, \eta_{\mathbf{Y}}) = {\rm tr}({\mathbf{W}}_{\mathbf{Y}} \theta_{\mathbf{Y}}^\top \eta_{\mathbf{Y}})$,${\mathbf{W}}_{\mathbf{Y}} \in \mathbb{S}_+(r)$} 
& \multirow{2}{12em}{$g_{\mathbf{U}} (\theta_{\mathbf{U}}, \eta_{\mathbf{U}}) = {\rm tr}(\theta_{\mathbf{U}}^\top \eta_{\mathbf{U}})$} & \multirow{2}{8em}{$g_{\mathbf{B}} (\theta_{\mathbf{B}}, \eta_{\mathbf{B}}) = {\rm tr}({\mathbf{B}}^{-1} \theta_{\mathbf{B}} {\mathbf{B}}^{-1} \eta_{\mathbf{B}})$}\\ tangent space & & & \\ \hline \end{tabular}\caption{Basic Riemannian Geometric Properties for $\mathbb{R}^{p \times r}_*$, ${\rm St}(r,p)$ and $\mathbb{S}_{+}(r)$ \cite{edelman1998geometry,absil2009optimization,mishra2014fixed}. Here for any square matrix ${\mathbf{X}}$, ${\rm Sym}({\mathbf{X}}) = ({\mathbf{X}} + {\mathbf{X}}^\top)/2$ and $\skew({\mathbf{X}}) =({\mathbf{X}} - {\mathbf{X}}^\top)/2$; ${\mathbf{W}}_{\mathbf{Y}}$ is a weight matrix that specifies the Riemannian metric $g$ (see Remark \ref{rem: W_Y-choice-in-metric} for discussions).}\label{tab: basic-prop-simple-manifold} \end{table} \begin{Remark} \label{rem: W_Y-choice-in-metric} We introduce a weight matrix ${\mathbf{W}}_{\mathbf{Y}}$ while defining $g_{\mathbf{Y}}$ in $\mathbb{R}^{p \times r}_*$ so that various metrics considered in literature are covered. ${\mathbf{W}}_{\mathbf{Y}}$ is required to be in $\mathbb{S}_+(r)$ so that $g_{\mathbf{Y}}(\theta_{\mathbf{Y}}, \theta_{\mathbf{Y}}) \geq 0, \forall \theta_{\mathbf{Y}} \in T_{{\mathbf{Y}}} \mathbb{R}^{p \times r}_*$ and $g_{\mathbf{Y}}$ is a genuine Riemannian metric \cite[Section 3.6]{absil2009optimization}. Common choices of ${\mathbf{W}}_{\mathbf{Y}}$ include ${\mathbf{W}}_{\mathbf{Y}} = {\mathbf{I}}_r$ (flat metric) \cite{journee2010low}, ${\mathbf{W}}_{\mathbf{Y}} = ({\mathbf{Y}}^\top {\mathbf{Y}})^{-1}$ (right-invariant metric) \cite{meyer2011linear}, and ${\mathbf{W}}_{\mathbf{Y}} = {\mathbf{Y}}^\top {\mathbf{Y}}$ \cite{mishra2012riemannian}. 
\end{Remark} Next, we present two quotient geometries for fixed-rank PSD matrices (based on full-rank factorization and polar factorization) and three quotient geometries for fixed-rank general matrices (based on full-rank factorization, polar factorization, and subspace-projection factorization) in Sections \ref{sec: quotient-PSD} and \ref{sec: quotient-general}, respectively. These quotient geometries have been explored in \cite{meyer2011regression,bonnabel2010riemannian,journee2010low,mishra2014fixed,mishra2012riemannian,meyer2011linear}. Here, we provide a unified way to characterize these quotient geometries, e.g., the introduction of ${\mathbf{W}}_{\mathbf{Y}}$ in the Riemannian metric, in both PSD and general cases, propose several new representations for the horizontal spaces (see the forthcoming Lemmas \ref{lm: general-quotient-manifold2-prop}, \ref{lm: general-quotient-manifold3-prop} and the discussion afterwards), derive explicit formulas for the Riemannian Hessians under these quotient geometries (see the forthcoming Remark \ref{rem: closed-form-riemannian-hessian-quotient}) and show their geometric connections to the embedded geometry in fixed-rank matrix optimization. \subsubsection{Quotient Geometries for ${\cal M}_{r+}$} \label{sec: quotient-PSD} Suppose ${\mathbf{X}} \in \mathbb{S}^{p \times p}$ is a rank $r$ PSD matrix with economic eigendecomposition ${\mathbf{X}} = {\mathbf{U}}' \boldsymbol{\Sigma}' {\mathbf{U}}^{'\top} $. \vskip.1cm {\bf (1) Full-rank Factorization $\mathcal{M}_{r+}^{q_1}$.} In this factorization, we view ${\mathbf{X}}$ as ${\mathbf{X}} = {\mathbf{Y}} {\mathbf{Y}}^\top$ for ${\mathbf{Y}} \in \mathbb{R}^{p \times r}_*$. Such a factorization exists, e.g., ${\mathbf{Y}} = {\mathbf{U}}' \boldsymbol{\Sigma}^{'1/2}$, but is not unique because of the invariance mapping ${\mathbf{Y}} \mapsto {\mathbf{Y}} \O$ for any $\O \in \mathbb{O}_r$. 
To cope with this non-uniqueness, we encode the invariance mapping in an abstract search space by defining the equivalence classes $[{\mathbf{Y}}] = \{ {\mathbf{Y}} \O: \O \in \mathbb{O}_r \}$. Since the invariance mapping is performed via the Lie group $\mathbb{O}_r$, we have that ${\cal M}_{r+}^{q_1} := \widebar{{\cal M}}_{r+}^{q_1}/\mathbb{O}_r$, where $\widebar{{\cal M}}_{r+}^{q_1} = \mathbb{R}^{p \times r}_*$, is a quotient manifold of $\widebar{{\cal M}}_{r+}^{q_1}$ \cite[Theorem 21.10]{lee2013smooth}. We equip $T_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}$ with the metric $\bar{g}^{r+}_{\mathbf{Y}}(\eta_{\mathbf{Y}}, \theta_{\mathbf{Y}})= {\rm tr}({\mathbf{W}}_{\mathbf{Y}} \eta_{\mathbf{Y}}^\top \theta_{\mathbf{Y}})$ as given in Table \ref{tab: basic-prop-simple-manifold}. In the following Lemma \ref{lm: psd-quotient-manifold1-prop}, we provide the corresponding vertical and horizontal spaces of $T_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}$ and show that, under proper assumptions on ${\mathbf{W}}_{\mathbf{Y}}$, ${\cal M}_{r+}^{q_1}$ is a Riemannian quotient manifold endowed with the Riemannian metric $g^{r+}_{[{\mathbf{Y}}]}$ induced from $\bar{g}^{r+}_{\mathbf{Y}}$.
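Under the flat metric ${\mathbf{W}}_{\mathbf{Y}} = {\mathbf{I}}_r$, the vertical directions ${\mathbf{Y}}\boldsymbol{\Omega}$ (with $\boldsymbol{\Omega}$ skew-symmetric) are orthogonal to every direction $\theta_{\mathbf{Y}}$ with ${\mathbf{Y}}^\top \theta_{\mathbf{Y}}$ symmetric, since the trace of a product of a skew-symmetric and a symmetric matrix vanishes. A minimal Python sanity check of this orthogonality (the flat-metric choice and dimensions are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
p, r = 7, 3
Y = rng.standard_normal((p, r))                  # a full column rank point in R^{p x r}_*

skew = lambda A: (A - A.T) / 2

# A vertical direction Y @ Omega for a random skew-symmetric Omega.
Omega = skew(rng.standard_normal((r, r)))
vert = Y @ Omega

# A direction with Y^T theta symmetric: project an arbitrary M by removing
# the component Y (Y^T Y)^{-1} skew(Y^T M), which leaves Y^T theta = Sym(Y^T M).
M = rng.standard_normal((p, r))
horiz = M - Y @ np.linalg.solve(Y.T @ Y, skew(Y.T @ M))

assert np.allclose(Y.T @ horiz, (Y.T @ horiz).T)   # Y^T theta is symmetric
assert abs(np.sum(vert * horiz)) < 1e-10           # tr((Y Omega)^T theta) = 0
```

The general metric ${\mathbf{W}}_{\mathbf{Y}} \ne {\mathbf{I}}_r$ deforms this horizontal space, which is exactly what the symmetry condition on $\S$ in the lemma below accounts for.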
\begin{Lemma} \label{lm: psd-quotient-manifold1-prop} (i) Given ${\mathbf{U}} \in {\rm St}(r,p)$ that spans the top $r$ eigenspace of ${\mathbf{Y}}\Y^\top$ and $\P = {\mathbf{U}}^\top {\mathbf{Y}}$, the vertical and horizontal spaces of $T_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}$ are given as follows: \begin{equation*} \begin{split} \mathcal{V}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1} &= \{ \theta_{\mathbf{Y}}: \theta_{\mathbf{Y}} = {\mathbf{Y}} \boldsymbol{\Omega}, \boldsymbol{\Omega} = - \boldsymbol{\Omega}^\top \in \mathbb{R}^{r \times r} \} = \{ \theta_{\mathbf{Y}}: \theta_{\mathbf{Y}} = {\mathbf{U}} \boldsymbol{\Omega} \P^{-\top}, \boldsymbol{\Omega} = - \boldsymbol{\Omega}^\top \in \mathbb{R}^{r \times r} \},\\ \mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1} &= \{ \theta_{\mathbf{Y}}: \theta_{\mathbf{Y}} = ({\mathbf{U}} \S + {\mathbf{U}}_\perp {\mathbf{D}}) \P^{-\top}, \S\P^{-\top}{\mathbf{W}}_{\mathbf{Y}} \P^{-1} \in \mathbb{S}^{r \times r}, {\mathbf{D}} \in \mathbb{R}^{(p-r) \times r} \}, \end{split} \end{equation*} with $\dim(\mathcal{V}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}) = (r^2-r)/2$, $\dim(\mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}) = pr-(r^2-r)/2$ and $\mathcal{V}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1} \perp \mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}$ with respect to $\bar{g}^{r+}_{\mathbf{Y}}$. (ii) Moreover, ${\cal M}_{r+}^{q_1}$ is a Riemannian quotient manifold endowed with the metric $g_{[{\mathbf{Y}}]}^{r+}$ induced from $\bar{g}_{\mathbf{Y}}^{r+}$ if and only if ${\mathbf{W}}_{\mathbf{Y}} = \O {\mathbf{W}}_{{\mathbf{Y}}\O} \O^\top$ holds for any $\O \in \mathbb{O}_r$. 
\end{Lemma} \vskip.1cm {\bf (2) Polar Factorization $\mathcal{M}_{r+}^{q_2}$.} We factorize ${\mathbf{X}} = {\mathbf{U}}'\O (\O^\top \boldsymbol{\Sigma}' \O) ({\mathbf{U}}'\O)^\top = {\mathbf{U}} {\mathbf{B}} {\mathbf{U}}^\top$ with $\O \in \mathbb{O}_r, {\mathbf{U}} \in {\rm St}(r,p)$, ${\mathbf{B}} \in \mathbb{S}_+(r)$ \cite{bonnabel2010riemannian}. Due to the rotational invariance $({\mathbf{U}}, {\mathbf{B}}) \mapsto ({\mathbf{U}}\O, \O^\top {\mathbf{B}} \O)$ for $\O \in \mathbb{O}_r$, we define the search space as the following equivalence classes $[{\mathbf{U}}, {\mathbf{B}}] = \{({\mathbf{U}} \O, \O^\top {\mathbf{B}} \O): \O \in \mathbb{O}_r \}$. This results in the second quotient manifold: ${\cal M}_{r+}^{q_2}:= \widebar{{\cal M}}_{r+}^{q_2}/\mathbb{O}_r$, where $\widebar{{\cal M}}_{r+}^{q_2} = {\rm St}(r,p) \times \mathbb{S}_+(r)$. By taking the canonical metrics on ${\rm St}(r,p)$ and $\mathbb{S}_+(r)$ given in Table \ref{tab: basic-prop-simple-manifold}, we endow $\widebar{{\cal M}}_{r+}^{q_2}$ with the metric $\bar{g}_{({\mathbf{U}},{\mathbf{B}})}^{r+}(\eta_{({\mathbf{U}}, {\mathbf{B}})}, \theta_{({\mathbf{U}}, {\mathbf{B}})}) = {\rm tr}(\eta_U^\top \theta_U) + {\rm tr}({\mathbf{B}}^{-1} \eta_B {\mathbf{B}}^{-1} \theta_B)$ for $\eta_{({\mathbf{U}}, {\mathbf{B}})} = [\eta_U^\top \quad \eta_B^\top]^\top, \theta_{({\mathbf{U}}, {\mathbf{B}})} = [\theta_U^\top \quad \theta_B^\top]^\top \in T_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$. 
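The rotational invariance behind the equivalence classes $[{\mathbf{U}}, {\mathbf{B}}]$ can be confirmed in a few lines of Python (dimensions chosen arbitrarily for illustration): the rotated representation $({\mathbf{U}}\O, \O^\top {\mathbf{B}} \O)$ reproduces the same rank-$r$ PSD matrix ${\mathbf{X}} = {\mathbf{U}} {\mathbf{B}} {\mathbf{U}}^\top$.

```python
import numpy as np

rng = np.random.default_rng(3)
p, r = 6, 2
U, _ = np.linalg.qr(rng.standard_normal((p, r)))           # U in St(r, p)
C = rng.standard_normal((r, r)); B = C @ C.T + np.eye(r)   # B in S_+(r)
O, _ = np.linalg.qr(rng.standard_normal((r, r)))           # O in O_r

X1 = U @ B @ U.T
X2 = (U @ O) @ (O.T @ B @ O) @ (U @ O).T                   # rotated representation

assert np.allclose(X1, X2)                        # same matrix X
assert np.linalg.matrix_rank(X1) == r             # rank r
assert np.min(np.linalg.eigvalsh(X1)) > -1e-10    # PSD
```

This is why the search space must be the set of classes $[{\mathbf{U}}, {\mathbf{B}}]$ rather than individual pairs $({\mathbf{U}}, {\mathbf{B}})$.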
\cite{bonnabel2010riemannian} showed that endowed with the metric $g^{r+}_{[{\mathbf{U}},{\mathbf{B}}]}$ induced from $\bar{g}^{r+}_{({\mathbf{U}},{\mathbf{B}})}$, ${\cal M}_{r+}^{q_2}$ is a Riemannian quotient manifold and has the following vertical and horizontal spaces: \begin{Lemma}{\rm (\cite[Theorem 1]{bonnabel2010riemannian})}\label{lm:BS10-Thm1} ${\cal M}_{r+}^{q_2}$ endowed with the metric $g^{r+}_{[{\mathbf{U}},{\mathbf{B}}]}$ induced from $\bar{g}^{r+}_{({\mathbf{U}},{\mathbf{B}})}$ is a Riemannian quotient manifold with the vertical and horizontal spaces given as: \begin{equation*} \begin{split} \mathcal{V}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2} &= \{ \theta_{({\mathbf{U}},{\mathbf{B}})} =[\theta_U^\top \quad \theta_B^\top]^\top: \theta_U = {\mathbf{U}} \boldsymbol{\Omega}, \theta_B = {\mathbf{B}} \boldsymbol{\Omega} - \boldsymbol{\Omega} {\mathbf{B}}, \boldsymbol{\Omega} = - \boldsymbol{\Omega}^\top \in \mathbb{R}^{r \times r} \},\\ \mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2} &= \{ \theta_{({\mathbf{U}},{\mathbf{B}})} =[\theta_U^\top \quad \theta_B^\top]^\top: \theta_U = {\mathbf{U}}_\perp {\mathbf{D}}, \theta_B \in \mathbb{S}^{r \times r}, {\mathbf{D}} \in \mathbb{R}^{(p-r) \times r}\}. \end{split} \end{equation*} Here, $\dim(\mathcal{V}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}) = (r^2-r)/2$, $\dim(\mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}) = pr-(r^2-r)/2$. 
\end{Lemma} \begin{Remark}[$\mathcal{V}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$ and $\mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$ are not orthogonal] \label{rem: choice-horizontal-space} In the context of Lemma \ref{lm:BS10-Thm1}, $\mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$ is complementary but not orthogonal to $\mathcal{V}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$, which means $\mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$ is not a canonical horizontal space for the quotient manifold $\mathcal{M}_{r+}^{q_2}$. In fact, it is easier to find a correspondence between this non-canonical horizontal space $\mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$ and the tangent space $T_{{\mathbf{X}}}{\cal M}_{r+}^e$ under the embedded geometry (see later Proposition \ref{prop: psd-bijection2}). Such a correspondence is critical in establishing the landscape connections of embedded and quotient geometries in Riemannian optimization, as we illustrated in Step 2 of the general procedure in Section \ref{sec: general-strategy-for-connection}. \end{Remark} In Table \ref{tab: basic-prop-quotient-psd}, we summarize the basic properties of the full-rank factorization and polar factorization based quotient manifolds on fixed-rank PSD matrices.
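The non-orthogonality in the remark above is easy to see numerically (a sketch at a generic random point; all data below are arbitrary): the Stiefel parts of a vertical and a horizontal direction are always orthogonal, but the $\mathbb{S}_+(r)$ parts generically are not under $\bar{g}^{r+}_{({\mathbf{U}},{\mathbf{B}})}$.

```python
# Sketch (generic random point): a vertical direction (U Om, B Om - Om B) and a
# horizontal direction (U_perp D, S) from the lemma above. Under the metric
# tr(eta_U^T theta_U) + tr(B^{-1} eta_B B^{-1} theta_B), the Stiefel term vanishes
# exactly, but the S_+(r) term is generically nonzero: complementary, not orthogonal.
import numpy as np

rng = np.random.default_rng(1)
p, r = 6, 3
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
U, U_perp = Q[:, :r], Q[:, r:]                 # U in St(r, p) and its complement
A = rng.standard_normal((r, r)); B = A @ A.T + np.eye(r)   # B in S_+(r)
Binv = np.linalg.inv(B)

Om = rng.standard_normal((r, r)); Om = Om - Om.T           # skew-symmetric
vert_U, vert_B = U @ Om, B @ Om - Om @ B                   # vertical direction

D = rng.standard_normal((p - r, r))
S = rng.standard_normal((r, r)); S = S + S.T               # symmetric
horiz_U, horiz_B = U_perp @ D, S                           # horizontal direction

term_U = np.trace(vert_U.T @ horiz_U)                      # = tr(Om^T U^T U_perp D) = 0
term_B = np.trace(Binv @ vert_B @ Binv @ horiz_B)          # generically nonzero
assert abs(term_U) < 1e-12
assert abs(term_B) > 1e-8
```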
\begin{table}[ht] \centering \begin{tabular}{c | c | c } \hline & ${\cal M}_{r+}^{q_1}$ & ${\cal M}_{r+}^{q_2}$ \\ \hline Matrix representation & ${\mathbf{Y}}$ & $({\mathbf{U}},{\mathbf{B}})$ \\ \hline Equivalence classes & $[{\mathbf{Y}}] = \{{\mathbf{Y}} \O: \O \in \mathbb{O}_r \}$ & $[{\mathbf{U}}, {\mathbf{B}}]=\{({\mathbf{U}}\O, \O^\top {\mathbf{B}} \O), \O \in \mathbb{O}_r \} $ \\ \hline Total space $\widebar{{\cal M}}_{r+}$ &$ \mathbb{R}^{p \times r}_* $ & ${\rm St}(r,p)\times \mathbb{S}_{+} (r)$ \\ \hline Tangent space in & \multirow{2}{3em}{$T_{\mathbf{Y}} \mathbb{R}_*^{p \times r}$} & \multirow{2}{10em}{ $T_{\mathbf{U}} {\rm St}(r,p) \times T_{\mathbf{B}} \mathbb{S}_+(r)$ } \\ total space & & \\ \hline Metric $\bar{g}^{r+}$ on & \multirow{2}{7em}{${\rm tr}({\mathbf{W}}_{\mathbf{Y}} \eta_{\mathbf{Y}}^\top \theta_{\mathbf{Y}})$, ${\mathbf{W}}_{\mathbf{Y}} \in \mathbb{S}_+(r)$} & \multirow{2}{13em}{$ {\rm tr}(\eta_U^\top \theta_U) + {\rm tr}({\mathbf{B}}^{-1} \eta_B {\mathbf{B}}^{-1} \theta_B)$} \\ total space & & \\ \hline \end{tabular}\caption{Basic Properties of Quotient Manifolds ${\cal M}_{r+}^{q_1}$, ${\cal M}_{r+}^{q_2}$.} \label{tab: basic-prop-quotient-psd} \end{table} \subsubsection{Quotient Geometries for ${\cal M}_r$} \label{sec: quotient-general} In this section, we introduce three quotient structures for ${\cal M}_r$ based on three fixed-rank matrix factorizations. For ${\mathbf{X}} \in {\cal M}_r$, denote the SVD as ${\mathbf{X}} = {\mathbf{U}}' \boldsymbol{\Sigma}' {\mathbf{V}}^{'\top}$. \vskip.1cm {\bf (1) Full-rank Factorization $\mathcal{M}_{r}^{q_1}$.} In this factorization, we rearrange the SVD of ${\mathbf{X}}$ as ${\mathbf{X}} = ({\mathbf{U}}' \boldsymbol{\Sigma}^{'1/2}) (\boldsymbol{\Sigma}^{'1/2} {\mathbf{V}}^{'\top}) = {\mathbf{L}} {\mathbf{R}}^\top $, where ${\mathbf{L}} \in \mathbb{R}^{p_1 \times r}_*$, ${\mathbf{R}} \in \mathbb{R}^{p_2 \times r}_*$. 
The first quotient geometry for ${\cal M}_r$ results from the invariance mapping $({\mathbf{L}}, {\mathbf{R}}) \mapsto ({\mathbf{L}} {\mathbf{M}}, {\mathbf{R}}{\mathbf{M}}^{-\top})$ for ${\mathbf{M}} \in {\rm GL}(r)$, where ${\rm GL}(r) := \{ {\mathbf{M}} \in \mathbb{R}^{r \times r}: {\mathbf{M}} \text{ is invertible}\}$ denotes the degree $r$ general linear group. It is thus straightforward to consider the equivalence classes $[{\mathbf{L}}, {\mathbf{R}}]=\{ ({\mathbf{L}}{\mathbf{M}}, {\mathbf{R}} {\mathbf{M}}^{-\top}): {\mathbf{M}} \in {\rm GL}(r) \}$ as the search space. The set of equivalence classes forms the quotient manifold ${\cal M}_{r}^{q_1}:= \widebar{{\cal M}}_r^{q_1}/{\rm GL}(r)$, where $\widebar{{\cal M}}_r^{q_1} = \mathbb{R}_*^{p_1 \times r} \times \mathbb{R}_*^{p_2 \times r}$, as ${\rm GL}(r)$ is a Lie group \cite[Theorem 21.10]{lee2013smooth}. Suppose ${\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}$ and ${\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}$ are two $r$-by-$r$ positive definite matrices depending on ${\mathbf{L}}$ and ${\mathbf{R}}$. The metric we endow on $T_{({\mathbf{L}}, {\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}$ is $\bar{g}_{({\mathbf{L}},{\mathbf{R}})}^r ( \eta_{({\mathbf{L}},{\mathbf{R}})}, \theta_{({\mathbf{L}},{\mathbf{R}})} ) = {\rm tr}( {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}} \eta_L^\top \theta_L ) + {\rm tr}({\mathbf{V}}_{{\mathbf{L}}, {\mathbf{R}}} \eta_R^\top \theta_R)$ for $\eta_{({\mathbf{L}}, {\mathbf{R}})} = [ \eta_L^\top \quad \eta_R^\top ]^\top, \theta_{({\mathbf{L}}, {\mathbf{R}})} = [ \theta_L^\top \quad \theta_R^\top ]^\top \in T_{({\mathbf{L}}, {\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}$. In the following Lemma \ref{lm: general-quotient-manifold1-prop}, we show that, under suitable assumptions on ${\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}$ and ${\mathbf{V}}_{{\mathbf{L}}, {\mathbf{R}}}$, ${\cal M}_{r}^{q_1}$ is a Riemannian quotient manifold.
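As a quick numerical sketch (arbitrary dimensions and data, not part of the formal development), one can verify both the ${\rm GL}(r)$ fiber invariance of the factorization and, for the common choice ${\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}} = ({\mathbf{L}}^\top {\mathbf{L}})^{-1}$ discussed below, the metric invariance condition ${\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}} = {\mathbf{M}} {\mathbf{W}}_{{\mathbf{L}}{\mathbf{M}},{\mathbf{R}} {\mathbf{M}}^{-\top}} {\mathbf{M}}^\top$ of Lemma \ref{lm: general-quotient-manifold1-prop}(ii):

```python
# Sketch (arbitrary data): X = L R^T is invariant along the GL(r) fiber
# (L, R) -> (L M, R M^{-T}), and W_{L,R} = (L^T L)^{-1} satisfies the invariance
# relation W_{L,R} = M W_{LM, R M^{-T}} M^T.
import numpy as np

rng = np.random.default_rng(2)
p1, p2, r = 5, 4, 2
L = rng.standard_normal((p1, r))
R = rng.standard_normal((p2, r))
M = rng.standard_normal((r, r)) + 3 * np.eye(r)   # generically invertible
L2, R2 = L @ M, R @ np.linalg.inv(M).T

# (L2, R2) lies in the same equivalence class [L, R]: it gives the same X.
assert np.allclose(L2 @ R2.T, L @ R.T)

# Metric invariance for the choice W_{L,R} = (L^T L)^{-1}:
# M ((LM)^T LM)^{-1} M^T = M M^{-1} (L^T L)^{-1} M^{-T} M^T = (L^T L)^{-1}.
W = np.linalg.inv(L.T @ L)
W2 = np.linalg.inv(L2.T @ L2)
assert np.allclose(W, M @ W2 @ M.T)
```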
\begin{Lemma} \label{lm: general-quotient-manifold1-prop} (i) Suppose ${\mathbf{U}} \in {\rm St}(r,p_1)$ and $ {\mathbf{V}} \in {\rm St}(r,p_2)$ span the top $r$ left and right singular subspaces of ${\mathbf{L}}{\mathbf{R}}^\top$, respectively and $\P_1 = {\mathbf{U}}^\top {\mathbf{L}}, \P_2 = {\mathbf{V}}^\top {\mathbf{R}}$. Then the vertical and horizontal spaces of $T_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}$ are given as follows: \begin{equation*} \begin{split} \mathcal{V}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1} &= \left\{ \theta_{({\mathbf{L}},{\mathbf{R}})}=[\theta_L^\top \quad \theta_R^\top]^\top : \theta_L = {\mathbf{U}} \S \P_2^{-\top}, \theta_R = -{\mathbf{V}} \S^\top \P_1^{-\top}, \S \in \mathbb{R}^{r \times r} \right\},\\ \mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1} &= \left\{ \theta_{({\mathbf{L}},{\mathbf{R}})}= \begin{bmatrix} \theta_L\\ \theta_R \end{bmatrix}: \begin{array}{l} \theta_L = ({\mathbf{U}} \S \P_2{\mathbf{W}}_{{\mathbf{L}}, {\mathbf{R}}}^{-1} \P_2^\top + {\mathbf{U}}_\perp {\mathbf{D}}_1 ) \P_2^{-\top}, {\mathbf{D}}_1 \in \mathbb{R}^{(p_1-r)\times r}\\ \theta_R = ({\mathbf{V}} \S^\top \P_1 {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_1^\top + {\mathbf{V}}_\perp {\mathbf{D}}_2 ) \P_1^{-\top}, {\mathbf{D}}_2 \in \mathbb{R}^{(p_2-r)\times r} \end{array}, \S \in \mathbb{R}^{r \times r} \right\}, \end{split} \end{equation*} with $\dim(\mathcal{V}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}) = r^2$, $\dim(\mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}) = (p_1 + p_2-r)r$ and $\mathcal{V}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1} \perp \mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}$ with respect to $\bar{g}^{r}_{({\mathbf{L}},{\mathbf{R}})}$. 
(ii) Moreover, ${\cal M}_{r}^{q_1}$ is a Riemannian quotient manifold endowed with metric $g_{[{\mathbf{L}},{\mathbf{R}}]}^r$ induced from $\bar{g}_{({\mathbf{L}},{\mathbf{R}})}^r$ if and only if ${\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}} = {\mathbf{M}} {\mathbf{W}}_{{\mathbf{L}}{\mathbf{M}},{\mathbf{R}} {\mathbf{M}}^{-\top}} {\mathbf{M}}^\top$ and ${\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}} = {\mathbf{M}}^{-\top } {\mathbf{V}}_{{\mathbf{L}}{\mathbf{M}},{\mathbf{R}} {\mathbf{M}}^{-\top}} {\mathbf{M}}^{-1}$ hold for any ${\mathbf{M}} \in {\rm GL}(r)$. \end{Lemma} \begin{Remark} Similarly to ${\mathbf{W}}_{\mathbf{Y}}$ for the PSD case, we introduce ${\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}$ and $ {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}$ in $\bar{g}_{({\mathbf{L}},{\mathbf{R}})}^r$ to accommodate various metric choices considered in the literature. Common choices include: ${\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}} = ({\mathbf{L}}^\top {\mathbf{L}})^{-1}, {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}} = ({\mathbf{R}}^\top {\mathbf{R}})^{-1}$ \cite{meyer2011linear,mishra2014fixed} and ${\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}} = {\mathbf{R}}^\top {\mathbf{R}}, {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}} = {\mathbf{L}}^\top {\mathbf{L}}$ \cite{mishra2012riemannian}. Distinct from the quotient geometry on ${\cal M}^{q_1}_{r+}$, the flat metric, i.e., ${\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}} = {\mathbf{I}}_r$ and ${\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}} = {\mathbf{I}}_r$, is no longer proper, as it does not yield a valid Riemannian metric in the quotient space. \end{Remark} \vskip.1cm {\bf (2) Polar Factorization $\mathcal{M}_{r}^{q_2}$.} We consider another factorization: ${\mathbf{X}} = {\mathbf{U}}'\O (\O^\top \boldsymbol{\Sigma}' \O ) ({\mathbf{V}}' \O)^\top = {\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top$, where $\O \in \mathbb{O}_r, {\mathbf{U}} \in {\rm St}(r,p_1)$, ${\mathbf{B}} \in \mathbb{S}_+(r), {\mathbf{V}} \in {\rm St}(r,p_2)$.
The rotational invariance mapping here is $({\mathbf{U}}, {\mathbf{B}}, {\mathbf{V}}) \mapsto ({\mathbf{U}} \O, \O^\top {\mathbf{B}} \O, {\mathbf{V}} \O)$ for $\O \in \mathbb{O}_r$. This gives us the equivalence classes $[{\mathbf{U}}, {\mathbf{B}}, {\mathbf{V}}] = \{ ({\mathbf{U}} \O, \O^\top {\mathbf{B}} \O, {\mathbf{V}} \O): \O \in \mathbb{O}_r \}$ and the second quotient manifold ${\cal M}_r^{q_2} = \widebar{{\cal M}}_{r}^{q_2}/\mathbb{O}_r$, where $\widebar{{\cal M}}_{r}^{q_2} = {\rm St}(r,p_1) \times \mathbb{S}_+(r) \times {\rm St}(r,p_2)$. By picking natural metrics for ${\rm St}(r,p)$ and $\mathbb{S}_+(r)$ given in Table \ref{tab: basic-prop-simple-manifold}, we endow $T_{({\mathbf{U}},{\mathbf{B}}, {\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}$ with the product metric $\bar{g}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}^r ( \eta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}, \theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} ) = {\rm tr}( \eta_U^\top \theta_U ) + {\rm tr}({\mathbf{B}}^{-1} \eta_B {\mathbf{B}}^{-1} \theta_B ) + {\rm tr}(\eta_V^\top \theta_V) $ for $\eta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} = [ \eta_U^\top \quad \eta_B^\top \quad \eta_V^\top ]^\top, \theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} = [ \theta_U^\top \quad \theta_B^\top \quad \theta_V^\top ]^\top \in T_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}$. In the following Lemma \ref{lm: general-quotient-manifold2-prop}, we show ${\cal M}_{r}^{q_2}$ is a Riemannian quotient manifold and provide expressions for its vertical and horizontal spaces. 
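A minimal numerical sketch of this factorization (arbitrary data; the diagonal ${\mathbf{B}}$ below is the representative with $\O = {\mathbf{I}}_r$, and the subsequent rotation traverses the equivalence class):

```python
# Sketch (arbitrary data): polar-type factorization X = U B V^T from the thin SVD,
# and the O_r invariance (U, B, V) -> (U O, O^T B O, V O).
import numpy as np

rng = np.random.default_rng(3)
p1, p2, r = 6, 5, 2
X = rng.standard_normal((p1, r)) @ rng.standard_normal((r, p2))   # generic rank-r matrix

Us, s, Vt = np.linalg.svd(X)
U, B, V = Us[:, :r], np.diag(s[:r]), Vt[:r].T    # U, V orthonormal columns; B in S_+(r)
assert np.allclose(U @ B @ V.T, X)

O, _ = np.linalg.qr(rng.standard_normal((r, r)))  # a random element of O_r
assert np.allclose((U @ O) @ (O.T @ B @ O) @ (V @ O).T, X)
```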
\begin{Lemma} \label{lm: general-quotient-manifold2-prop} ${\cal M}_{r}^{q_2}$ endowed with the metric $g^{r}_{[{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}]}$ induced from $\bar{g}^{r}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}$ is a Riemannian quotient manifold with the following vertical and horizontal spaces: \begin{equation} \label{eq: vertical-horizontal-quotient-general-manifold2} \begin{split} \mathcal{V}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2} &= \left\{ \theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} =[\theta_U^\top \quad \theta_B^\top \quad \theta_V^\top]^\top: \begin{array}{c} \theta_U = {\mathbf{U}} \boldsymbol{\Omega},\theta_B = {\mathbf{B}} \boldsymbol{\Omega} - \boldsymbol{\Omega} {\mathbf{B}},\\ \theta_V = {\mathbf{V}} \boldsymbol{\Omega},\boldsymbol{\Omega} = - \boldsymbol{\Omega}^\top \in \mathbb{R}^{r \times r} \end{array} \right\},\\ \mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2} &= \left\{ \theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} =[\theta_U^\top \quad \theta_B^\top \quad \theta_V^\top]^\top: \begin{array}{c} \theta_U ={\mathbf{U}}_\perp {\mathbf{D}}_1 + {\mathbf{U}} \boldsymbol{\Omega}, \theta_B \in \mathbb{S}^{r \times r},\theta_V ={\mathbf{V}}_\perp {\mathbf{D}}_2 - {\mathbf{V}} \boldsymbol{\Omega}, \\ \boldsymbol{\Omega} = - \boldsymbol{\Omega}^\top \in \mathbb{R}^{r \times r} , {\mathbf{D}}_1 \in \mathbb{R}^{(p_1-r) \times r}, {\mathbf{D}}_2 \in \mathbb{R}^{(p_2-r) \times r} \end{array} \right\}, \end{split} \end{equation}with $\dim(\mathcal{V}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}) = (r^2-r)/2$, $\dim(\mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}) = (p_1 + p_2 - r)r$. 
\end{Lemma} \vskip.1cm {\bf (3) Subspace-projection Factorization $\mathcal{M}_{r}^{q_3}$.} The third quotient geometry is based on the factorization ${\mathbf{X}} = {\mathbf{U}}' (\boldsymbol{\Sigma}' {\mathbf{V}}^{'\top}) = {\mathbf{U}} {\mathbf{Y}}^\top$, where ${\mathbf{U}} \in {\rm St}(r,p_1)$ and ${\mathbf{Y}} \in \mathbb{R}^{p_2 \times r}_*$. This factorization is called the subspace-projection factorization \cite{mishra2014fixed} as ${\mathbf{U}}$ represents the column space of ${\mathbf{X}}$ and ${\mathbf{Y}}$ is the left projection coefficient matrix of ${\mathbf{X}}$ on ${\mathbf{U}}$. Here the rotational invariance mapping is $({\mathbf{U}}, {\mathbf{Y}}) \mapsto ({\mathbf{U}}\O, {\mathbf{Y}}\O)$ for $\O \in \mathbb{O}_r$ and the equivalence classes are $[{\mathbf{U}}, {\mathbf{Y}}] = \{ ({\mathbf{U}}\O, {\mathbf{Y}} \O): \O \in \mathbb{O}_r \}$. This results in the third quotient manifold we are interested in: ${\cal M}_r^{q_3}:= \widebar{{\cal M}}_{r}^{q_3}/\mathbb{O}_r$, where $\widebar{{\cal M}}_{r}^{q_3} = {\rm St}(r,p_1) \times \mathbb{R}_*^{p_2 \times r}$. By taking the canonical metrics on ${\rm St}(r,p_1)$ and $\mathbb{R}_{*}^{p_2 \times r}$, we endow $\widebar{{\cal M}}_{r}^{q_3}$ with the metric $\bar{g}_{({\mathbf{U}},{\mathbf{Y}})}^{r}(\eta_{({\mathbf{U}}, {\mathbf{Y}})}, \theta_{({\mathbf{U}}, {\mathbf{Y}})}) = {\rm tr}(\eta_U^\top \theta_U) + {\rm tr}({\mathbf{W}}_{\mathbf{Y}} \eta_Y^\top \theta_Y)$ for $\eta_{({\mathbf{U}}, {\mathbf{Y}})} = [\eta_U^\top \quad \eta_Y^\top]^\top, \theta_{({\mathbf{U}}, {\mathbf{Y}})} = [\theta_U^\top \quad \theta_Y^\top]^\top \in T_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}$. In the following Lemma \ref{lm: general-quotient-manifold3-prop}, we provide the vertical and horizontal spaces of $T_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}$ and show that, under suitable assumptions on ${\mathbf{W}}_{{\mathbf{Y}}}$, ${\cal M}_{r}^{q_3}$ is a Riemannian quotient manifold.
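The subspace-projection factorization also admits a short numerical sketch (arbitrary data): take ${\mathbf{U}}$ as an orthonormal basis of the column space of ${\mathbf{X}}$ and ${\mathbf{Y}} = {\mathbf{X}}^\top {\mathbf{U}}$, and note the $\mathbb{O}_r$-invariance of the pair.

```python
# Sketch (arbitrary data): X = U Y^T with U an orthonormal basis of col(X) and
# Y = X^T U the projection coefficients; [U, Y] is invariant under (U, Y) -> (U O, Y O).
import numpy as np

rng = np.random.default_rng(4)
p1, p2, r = 6, 5, 2
X = rng.standard_normal((p1, r)) @ rng.standard_normal((r, p2))   # generic rank-r matrix

U = np.linalg.svd(X)[0][:, :r]   # orthonormal basis of the column space of X
Y = X.T @ U                      # projection coefficients, Y in R^{p2 x r}, full rank
assert np.allclose(U @ Y.T, X)   # U U^T X = X since col(X) = span(U)

O, _ = np.linalg.qr(rng.standard_normal((r, r)))
assert np.allclose((U @ O) @ (Y @ O).T, X)   # same X along the O_r fiber
```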
\begin{Lemma} \label{lm: general-quotient-manifold3-prop} (i) The vertical and horizontal spaces of $T_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}$ are \begin{equation} \label{eq: vertical-horizontal-quotient-general-manifold3} \begin{split} \mathcal{V}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3} &= \{ \theta_{({\mathbf{U}},{\mathbf{Y}})} = [\theta_U^\top \quad \theta_Y^\top]^\top: \theta_U = {\mathbf{U}} \boldsymbol{\Omega}, \theta_Y = {\mathbf{Y}} \boldsymbol{\Omega}, \boldsymbol{\Omega} = - \boldsymbol{\Omega}^\top \in \mathbb{R}^{r \times r} \},\\ \mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3} &= \{ \theta_{({\mathbf{U}},{\mathbf{Y}})} =[\theta_U^\top \quad \theta_Y^\top]^\top: \theta_U = {\mathbf{U}}_\perp {\mathbf{D}}, \theta_Y \in \mathbb{R}^{p_2 \times r}, {\mathbf{D}} \in \mathbb{R}^{(p_1-r) \times r} \}. \end{split} \end{equation} Here, $\dim(\mathcal{V}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}) = (r^2-r)/2$, $\dim(\mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}) = (p_1 + p_2 - r)r$. (ii) Moreover, ${\cal M}_{r}^{q_3}$ is a Riemannian quotient manifold endowed with metric $g_{[{\mathbf{U}},{\mathbf{Y}}]}^r$ induced from $\bar{g}_{({\mathbf{U}},{\mathbf{Y}})}^r$ if and only if ${\mathbf{W}}_{\mathbf{Y}} = \O {\mathbf{W}}_{{\mathbf{Y}}\O} \O^\top$ holds for any $\O \in \mathbb{O}_r$. \end{Lemma} \begin{Remark} In Lemmas \ref{lm: general-quotient-manifold2-prop} and \ref{lm: general-quotient-manifold3-prop}, we introduce new horizontal spaces for ${\cal M}_r^{q_2}$ and ${\cal M}_r^{q_3}$ that are distinct from the canonical horizontal spaces used in the literature \cite{mishra2013low,mishra2014fixed,meyer2011linear,absil2014two}.
These new horizontal spaces admit closed-form expressions, which makes developing a correspondence between the embedded manifold tangent space $T_{\mathbf{X}} {\cal M}^e_r$ and the non-canonical horizontal spaces $\mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}$, $\mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}$ easier, and facilitates the later landscape analysis (see later in Propositions \ref{prop: general-bijection2} and \ref{prop: general-bijection3}). \end{Remark} In Table \ref{tab: basic-prop-quotient-general}, we summarize the basic properties of the full-rank factorization, polar factorization, and subspace-projection factorization based quotient manifolds of the general fixed-rank matrices. \begin{table}[ht] \centering \begin{tabular}{c | c | c| c} \hline & ${\cal M}_{r}^{q_1}$ & ${\cal M}_{r}^{q_2}$ & ${\cal M}_{r}^{q_3}$\\ \hline Matrix & \multirow{2}{3em}{$({\mathbf{L}},{\mathbf{R}})$} & \multirow{2}{4em}{$({\mathbf{U}}, {\mathbf{B}},{\mathbf{V}})$} & \multirow{2}{3em}{$({\mathbf{U}}, {\mathbf{Y}})$}\\ representation & & & \\ \hline \multirow{3}{4em}{Equivalence classes} & \multirow{3}{9em}{$[{\mathbf{L}},{\mathbf{R}}] = \{ ({\mathbf{L}}{\mathbf{M}}, {\mathbf{R}} {\mathbf{M}}^{-\top}): {\mathbf{M}} \in {\rm GL}(r) \}$} & \multirow{3}{14em}{$[{\mathbf{U}}, {\mathbf{B}}, {\mathbf{V}}] = \{ ({\mathbf{U}} \O, \O^\top {\mathbf{B}} \O, {\mathbf{V}} \O): \O \in \mathbb{O}_r \}$} & \multirow{3}{10em}{$[{\mathbf{U}}, {\mathbf{Y}}] = \{ ({\mathbf{U}}\O, {\mathbf{Y}} \O): \O \in \mathbb{O}_r \}$} \\ & & &\\ & & & \\ \hline Total space & \multirow{2}{9em}{$\mathbb{R}_*^{p_1 \times r} \times \mathbb{R}_*^{p_2 \times r}$} & \multirow{2}{14em}{$ {\rm St}(r,p_1) \times \mathbb{S}_+(r) \times {\rm St}(r,p_2)$} & \multirow{2}{9em}{$ {\rm St}(r,p_1) \times \mathbb{R}_*^{p_2 \times r}$} \\ $\widebar{{\cal M}}_r$& & &\\ \hline Tangent space & \multirow{2}{9em}{$T_{\mathbf{L}} \mathbb{R}^{p_1 \times r}_* \times 
T_{\mathbf{R}} \mathbb{R}^{p_2 \times r}_*$} & \multirow{2}{14em}{$ T_{\mathbf{U}}{\rm St}(r,p_1) \times T_{\mathbf{B}} \mathbb{S}_+(r) \times T_{\mathbf{V}} {\rm St}(r,p_2)$} & \multirow{2}{10em}{$T_{\mathbf{U}}{\rm St}(r,p_1) \times T_{\mathbf{Y}}\mathbb{R}^{p_2 \times r}_*$} \\ in total space & & & \\ \hline \multirow{4}{5em}{Metric $\bar{g}^r$ on total space} & \multirow{4}{10em}{$ {\rm tr}( {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}} \eta_L^\top \theta_L ) + {\rm tr}({\mathbf{V}}_{{\mathbf{L}}, {\mathbf{R}}} \eta_R^\top \theta_R)$, ${\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}} \in \mathbb{S}_+(r), {\mathbf{V}}_{{\mathbf{L}}, {\mathbf{R}}} \in \mathbb{S}_+(r)$} & \multirow{4}{14em}{$ {\rm tr}( \eta_U^\top \theta_U ) + {\rm tr}({\mathbf{B}}^{-1} \eta_B {\mathbf{B}}^{-1} \theta_B ) + {\rm tr}(\eta_V^\top \theta_V) $} & \multirow{4}{6em}{${\rm tr}(\eta_U^\top \theta_U) + {\rm tr}({\mathbf{W}}_{\mathbf{Y}} \eta_Y^\top \theta_Y)$, ${\mathbf{W}}_{\mathbf{Y}} \in \mathbb{S}_+(r)$}\\ & & & \\ & & & \\ & & & \\ \hline \end{tabular} \caption{Basic Properties for Quotient Manifolds ${\cal M}_{r}^{q_1}$, ${\cal M}_{r}^{q_2}$ and ${\cal M}_r^{q_3}$.} \label{tab: basic-prop-quotient-general} \end{table} \section{Geometric Connections of Embedded and Quotient Geometries in Fixed-rank PSD Matrix Optimization} \label{sec: connection-PSD} In this section, we apply the general procedure in Section \ref{sec: general-strategy-for-connection} to connect the landscape properties of optimization \eqref{eq: PSD-manifold-formulation} under the embedded and the quotient geometries. 
First, under the two quotient geometries introduced in Section \ref{sec: quotient-PSD}, the optimization problem \eqref{eq: PSD-manifold-formulation} can be reformulated as follows, \begin{subequations}\label{eq: PSD-opt-problem-quotient} \begin{align} \text{on } \widebar{{\cal M}}_{r+}^{q_1}: &\quad \min_{{\mathbf{Y}} \in \mathbb{R}^{p \times r}_*}\bar{h}_{r+}({\mathbf{Y}}):= f({\mathbf{Y}} {\mathbf{Y}}^\top), \label{eq: PSD-opt-problem-quotient-sub1}\\ \text{on } \widebar{{\cal M}}_{r+}^{q_2}: &\quad \min_{{\mathbf{U}} \in {\rm St}(r,p), {\mathbf{B}} \in \mathbb{S}_+(r)}\bar{h}_{r+}({\mathbf{U}}, {\mathbf{B}}):= f({\mathbf{U}} {\mathbf{B}} {\mathbf{U}}^\top) \label{eq: PSD-opt-problem-quotient-sub2}. \end{align} \end{subequations} Since $\bar{h}_{r+}({\mathbf{Y}})$ and $\bar{h}_{r+}({\mathbf{U}}, {\mathbf{B}})$ are invariant along the fibers of $\widebar{{\cal M}}_{r+}^{q_1}$ and $\widebar{{\cal M}}_{r+}^{q_2}$, they induce functions $h_{r+}([{\mathbf{Y}}])$ and $h_{r+}([{\mathbf{U}},{\mathbf{B}}])$ on quotient manifolds ${\cal M}_{r+}^{q_1}$ and ${\cal M}_{r+}^{q_2}$, respectively. Next, we provide the expressions for Riemannian gradients and Hessians of \eqref{eq: PSD-manifold-formulation} under both geometries (Step 1), construct the bijective maps between $T_{\mathbf{X}} {\cal M}_{r+}^e$ and $\mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}$, $\mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$ (Step 2), and give their spectrum bounds (Step 3). 
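As a quick sanity check of the lifted formulation \eqref{eq: PSD-opt-problem-quotient-sub1} (a sketch with the hypothetical choice $f({\mathbf{X}}) = \|{\mathbf{X}} - {\mathbf{A}}\|_{\rm F}^2/2$ for a symmetric ${\mathbf{A}}$, which is not from the paper), the Euclidean gradient of $\bar{h}_{r+}({\mathbf{Y}}) = f({\mathbf{Y}} {\mathbf{Y}}^\top)$ is $2 \nabla f({\mathbf{Y}} {\mathbf{Y}}^\top) {\mathbf{Y}}$, matching the horizontal lift $\overline{{\rm grad}\, h_{r+}([{\mathbf{Y}}])}$ in Proposition \ref{prop: gradient-hessian-exp-PSD} with the flat metric ${\mathbf{W}}_{\mathbf{Y}} = {\mathbf{I}}_r$; we verify it by central finite differences:

```python
# Sketch (hypothetical objective): f(X) = ||X - A||_F^2 / 2 with A symmetric, so
# grad f(X) = X - A and the lifted objective h(Y) = f(Y Y^T) has Euclidean gradient
# 2 (Y Y^T - A) Y. We verify the formula against central finite differences.
import numpy as np

rng = np.random.default_rng(5)
p, r = 5, 2
A = rng.standard_normal((p, p)); A = (A + A.T) / 2   # symmetric target
Y = rng.standard_normal((p, r))

h = lambda Z: 0.5 * np.linalg.norm(Z @ Z.T - A, "fro") ** 2
G = 2 * (Y @ Y.T - A) @ Y                             # analytic gradient

eps = 1e-6
G_fd = np.zeros_like(Y)
for i in range(p):
    for j in range(r):
        E = np.zeros_like(Y); E[i, j] = eps
        G_fd[i, j] = (h(Y + E) - h(Y - E)) / (2 * eps)

assert np.allclose(G, G_fd, atol=1e-4)
```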
\begin{Proposition}[Riemannian Gradients and Hessians of \eqref{eq: PSD-manifold-formulation}] \label{prop: gradient-hessian-exp-PSD} The Riemannian gradients and Hessians of \eqref{eq: PSD-manifold-formulation} under the embedded and the quotient geometries introduced in Section \ref{sec: embedded-quotient-fixed-rank-matrix} are: \begin{itemize}[leftmargin=*] \item On ${\cal M}^e_{r+}$: Suppose ${\mathbf{X}} \in {\cal M}^e_{r+}$, ${\mathbf{U}}$ spans the top $r$ eigenspace of ${\mathbf{X}}$, $\xi_{\mathbf{X}} = [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \S & {\mathbf{D}}^\top\\ {\mathbf{D}} & {\mathbf{0}} \end{bmatrix} [{\mathbf{U}} \quad {\mathbf{U}}_\perp]^\top \in T_{{\mathbf{X}}}{\cal M}^e_{r+}$. Then \begin{equation} \label{eq: embedded-gd-hessian-psd} \begin{split} {\rm grad} f({\mathbf{X}}) &= P_{{\mathbf{U}}} \nabla f({\mathbf{X}})P_{{\mathbf{U}}} + P_{{\mathbf{U}}_\perp} \nabla f({\mathbf{X}})P_{{\mathbf{U}}} + P_{{\mathbf{U}}} \nabla f({\mathbf{X}})P_{{\mathbf{U}}_\perp},\\ {\rm Hess} f({\mathbf{X}})[\xi_{\mathbf{X}}, \xi_{\mathbf{X}}] &= \nabla^2 f({\mathbf{X}})[\xi_{\mathbf{X}}, \xi_{\mathbf{X}}] + 2\langle \nabla f({\mathbf{X}}), {\mathbf{U}}_\perp {\mathbf{D}} \boldsymbol{\Sigma}^{-1} {\mathbf{D}}^\top {\mathbf{U}}_\perp^\top \rangle, \end{split} \end{equation} where $\boldsymbol{\Sigma} = {\mathbf{U}}^\top {\mathbf{X}} {\mathbf{U}}$. \item On ${\cal M}_{r+}^{q_1}$: Suppose ${\mathbf{Y}} \in \mathbb{R}^{p \times r}_*$ and $\theta_{{\mathbf{Y}}} \in \mathcal{H}_{{\mathbf{Y}}} \widebar{{\cal M}}_{r+}^{q_1}$. 
Then \begin{equation} \label{eq: quotient-gradient-Hessian-PSD1} \begin{split} \overline{{\rm grad}\, h_{r+}([{\mathbf{Y}}])} &= 2\nabla f({\mathbf{Y}} {\mathbf{Y}}^\top) {\mathbf{Y}} {\mathbf{W}}_{\mathbf{Y}}^{-1}, \\ \overline{{\rm Hess} \, h_{r+}([{\mathbf{Y}}])}[\theta_{\mathbf{Y}}, \theta_{\mathbf{Y}}] &= \nabla^2 f({\mathbf{Y}} {\mathbf{Y}}^\top)[{\mathbf{Y}}\theta_{\mathbf{Y}}^\top + \theta_{\mathbf{Y}} {\mathbf{Y}}^\top , {\mathbf{Y}}\theta_{\mathbf{Y}}^\top + \theta_{\mathbf{Y}} {\mathbf{Y}}^\top ] + 2\langle \nabla f({\mathbf{Y}} {\mathbf{Y}}^\top ), \theta_{\mathbf{Y}} \theta_{\mathbf{Y}}^\top \rangle\\ & +2\langle \nabla f({\mathbf{Y}} {\mathbf{Y}}^\top) {\mathbf{Y}} {\rm D} {\mathbf{W}}_{\mathbf{Y}}^{-1} [\theta_{\mathbf{Y}}], \theta_{\mathbf{Y}}{\mathbf{W}}_{\mathbf{Y}} \rangle + \langle {\rm D} {\mathbf{W}}_{\mathbf{Y}}\left[\overline{ {\rm grad} \, h_{r+}([{\mathbf{Y}}])}\right], \theta_{\mathbf{Y}}^\top \theta_{\mathbf{Y}} \rangle/2. \end{split} \end{equation} \item On ${\cal M}_{r+}^{q_2}$: Suppose ${\mathbf{U}} \in {\rm St}(r,p)$, ${\mathbf{B}} \in \mathbb{S}_+(r)$ and $\theta_{({\mathbf{U}},{\mathbf{B}})} = [\theta_U^\top \quad \theta_B^\top]^\top \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$. 
Then \begin{equation}\label{eq: quotient-gradient-Hessian-PSD2} \begin{split} &\overline{{\rm grad}\, h_{r+}([{\mathbf{U}},{\mathbf{B}}])} = \begin{bmatrix} \overline{{\rm grad}_{\mathbf{U}}\, h_{r+}([{\mathbf{U}},{\mathbf{B}}])}\\ \overline{{\rm grad}_{\mathbf{B}}\, h_{r+}([{\mathbf{U}},{\mathbf{B}}])} \end{bmatrix} = \begin{bmatrix} 2 P_{{\mathbf{U}}_\perp} \nabla f({\mathbf{U}} {\mathbf{B}} {\mathbf{U}}^\top) {\mathbf{U}} {\mathbf{B}} \\ {\mathbf{B}} {\mathbf{U}}^\top \nabla f({\mathbf{U}} {\mathbf{B}} {\mathbf{U}}^\top) {\mathbf{U}} {\mathbf{B}} \end{bmatrix},\\ &\overline{{\rm Hess} \, h_{r+}([{\mathbf{U}},{\mathbf{B}}])}[\theta_{({\mathbf{U}},{\mathbf{B}})}, \theta_{({\mathbf{U}},{\mathbf{B}})}]\\ &= \nabla^2 f({\mathbf{U}} {\mathbf{B}} {\mathbf{U}}^\top)[{\mathbf{U}} {\mathbf{B}} \theta_U^\top + {\mathbf{U}} \theta_B {\mathbf{U}}^\top + \theta_U {\mathbf{B}} {\mathbf{U}}^\top, {\mathbf{U}} {\mathbf{B}} \theta_U^\top + {\mathbf{U}} \theta_B {\mathbf{U}}^\top + \theta_U {\mathbf{B}} {\mathbf{U}}^\top] + 2 \langle \nabla f({\mathbf{U}} {\mathbf{B}} {\mathbf{U}}^\top), \theta_U {\mathbf{B}} \theta_U^\top \rangle\\ & \,+\langle \nabla f({\mathbf{U}} {\mathbf{B}} {\mathbf{U}}^\top) {\mathbf{U}}, 4 \theta_U \theta_B + {\mathbf{U}}\theta_B {\mathbf{B}}^{-1} \theta_B - 2 \theta_U {\mathbf{U}}^\top \theta_U {\mathbf{B}} - 2 {\mathbf{U}} {\mathbf{B}} \theta_U^\top \theta_U \rangle. \end{split} \end{equation} \end{itemize} \end{Proposition} \begin{Remark}[Quadratic Form of Riemannian Hessians] \label{rem: closed-form-riemannian-hessian-quotient} In Proposition \ref{prop: gradient-hessian-exp-PSD}, we only give the quadratic expressions of the Hessians as we use them exclusively throughout the paper. 
It is easy to obtain general bilinear expressions via the polarization identity ${\rm Hess} f({\mathbf{X}})[\xi_{\mathbf{X}},\theta_{\mathbf{X}}] =({\rm Hess} f({\mathbf{X}})[\xi_{\mathbf{X}}+\theta_{\mathbf{X}},\xi_{\mathbf{X}}+\theta_{\mathbf{X}}] - {\rm Hess} f({\mathbf{X}})[\xi_{\mathbf{X}}-\theta_{\mathbf{X}},\xi_{\mathbf{X}}-\theta_{\mathbf{X}}] )/4 $. We also note that the Riemannian Hessian expressions under the fixed-rank quotient geometries have been explicitly or implicitly developed in \cite{journee2010low,mishra2011low,meyer2011geometric,mishra2014fixed} for some specific problems. Most of these works only provide the linear form of the Riemannian Hessians, which often does not admit a closed-form expression due to the horizontal projection involved in \eqref{eq: quotient-hessian-linear-from}. Here we provide explicit formulas for the quadratic form of the Riemannian Hessians; these closed-form expressions are critical in establishing the landscape connections of embedded and quotient geometries in fixed-rank matrix optimization. \end{Remark} \begin{Proposition}[Bijection Between $\mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}$ and $T_{\mathbf{X}} {\cal M}_{r+}^e$] \label{prop: psd-bijection1} Suppose ${\mathbf{Y}} \in \mathbb{R}^{p \times r}_*$, ${\mathbf{X}} = {\mathbf{Y}} {\mathbf{Y}}^\top$, ${\mathbf{U}}$ spans the top $r$ eigenspace of ${\mathbf{X}}$, and $\P = {\mathbf{U}}^\top {\mathbf{Y}}$.
For any $\theta_{\mathbf{Y}} \in \mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}$ and $\xi_{\mathbf{X}} = [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \S & {\mathbf{D}}^\top\\ {\mathbf{D}} & {\mathbf{0}} \end{bmatrix} [{\mathbf{U}} \quad {\mathbf{U}}_\perp]^\top \in T_{{\mathbf{X}}}{\cal M}^e_{r+}$, define \begin{equation} \label{def: xi-theta-correspondence-PSD1} \begin{split} \xi^{\theta_{\mathbf{Y}}}_{{\mathbf{X}}}&:= [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \P \theta_{\mathbf{Y}}^\top {\mathbf{U}} + {\mathbf{U}}^\top \theta_{\mathbf{Y}} \P^\top & \P \theta_{\mathbf{Y}}^\top {\mathbf{U}}_\perp \\ {\mathbf{U}}_\perp^\top \theta_{\mathbf{Y}} \P^\top & {\mathbf{0}} \end{bmatrix}[{\mathbf{U}} \quad {\mathbf{U}}_\perp]^\top \in T_{{\mathbf{X}}}{\cal M}^e_{r+}, \\ \theta_{{\mathbf{Y}}}^{\xi_{\mathbf{X}}} &:= ({\mathbf{U}} \S' + {\mathbf{U}}_\perp {\mathbf{D}} )\P^{-\top} \in \mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}, \end{split} \end{equation} where $\S'$ in $\theta_{{\mathbf{Y}}}^{\xi_{\mathbf{X}}}$ is uniquely determined by the linear equation system $\S' + \S^{'\top} = \S$ and $\S' \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} = \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} \S^{'\top} $. 
Then we can find a linear bijective mapping $\mathcal{L}_{\mathbf{Y}}^{r+}$ between $\mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}$ and $T_{\mathbf{X}} {\cal M}_{r+}^e$, \begin{equation*} \mathcal{L}_{\mathbf{Y}}^{r+}: \theta_{\mathbf{Y}} \in \mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1} \longrightarrow \xi_{{\mathbf{X}}}^{\theta_{\mathbf{Y}}} \in T_{\mathbf{X}} {\cal M}_{r+}^e \quad \text{and} \quad (\mathcal{L}_{\mathbf{Y}}^{r+})^{-1}: \xi_{\mathbf{X}} \in T_{\mathbf{X}} {\cal M}_{r+}^e \to \theta_{\mathbf{Y}}^{\xi_{\mathbf{X}}} \in \mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}, \end{equation*} such that $\mathcal{L}_{\mathbf{Y}}^{r+}(\theta_{\mathbf{Y}}) = {\mathbf{Y}} \theta_{\mathbf{Y}}^\top + \theta_{\mathbf{Y}} {\mathbf{Y}}^\top$ holds for any $\theta_{\mathbf{Y}} \in \mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}$. Finally, $\mathcal{L}_{{\mathbf{Y}}}^{r+}$ satisfies the following spectrum bound: \begin{equation} \label{ineq: bijection-spectrum-psd1} 2\sigma_r(\P {\mathbf{W}}_{\mathbf{Y}}^{-1} \P^\top) \bar{g}^{r+}_{\mathbf{Y}}(\theta_{\mathbf{Y}}, \theta_{\mathbf{Y}}) \leq \|\mathcal{L}^{r+}_{\mathbf{Y}}(\theta_{\mathbf{Y}})\|_{\rm F}^2 \leq 4\sigma_1(\P {\mathbf{W}}_{\mathbf{Y}}^{-1} \P^\top) \bar{g}^{r+}_{\mathbf{Y}}(\theta_{\mathbf{Y}}, \theta_{\mathbf{Y}}), \quad \forall \theta_{\mathbf{Y}} \in \mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}. \end{equation} \end{Proposition} {\noindent \bf Proof of Proposition \ref{prop: psd-bijection1}.} The proof is divided into two steps: in Step 1, we show $\xi_{\mathbf{X}}^{\theta_{\mathbf{Y}}}$ and $\theta_{{\mathbf{Y}}}^{\xi_{\mathbf{X}}}$ are well defined for any $\theta_{\mathbf{Y}}$ and $\xi_{\mathbf{X}}$; in Step 2, we show $\mathcal{L}_{{\mathbf{Y}}}^{r+}$ is a bijection and prove its spectrum bounds. 
{\bf Step 1.} First, it is clear that for any $\theta_{\mathbf{Y}} \in \mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}$, $\xi_{\mathbf{X}}^{\theta_{\mathbf{Y}}}$ is well defined. To show $\theta_{{\mathbf{Y}}}^{\xi_{\mathbf{X}}}$ is well defined for any $\xi_{\mathbf{X}} \in T_{{\mathbf{X}}}{\cal M}^e_{r+}$, we need to show the linear system $\widebar{\S} + \widebar{\S}^{\top} = \S$, $\widebar{\S} \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} = \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} \widebar{\S}^{\top} $ has a unique solution with respect to $\widebar{\S}$. A simple calculation shows that \begin{equation} \label{eq: barS subseq Sp} \begin{split} &\{\widebar{\S}: \widebar{\S} + \widebar{\S}^{\top} = \S, \widebar{\S} \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} = \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} \widebar{\S}^{\top} \} \\ \subseteq& \{\widetilde{\S}: \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} \widetilde{\S} + \widetilde{\S} \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} = \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} \S \}. \end{split} \end{equation} Observing that \begin{equation} \label{eq:SpSylvester} \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} \widetilde{\S} + \widetilde{\S} \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} = \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} \S \end{equation} is a Sylvester equation with respect to $\widetilde{\S}$, and that $\P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1}$ and $-\P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1}$ have disjoint spectra, we know from \cite[Theorem VII.2.1]{bhatia2013matrix} that \eqref{eq:SpSylvester} has a unique solution, denoted $\S'$.
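This Sylvester-equation step is easy to confirm numerically (a sketch with an arbitrary positive definite stand-in for $\P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1}$): the unique solution $\S'$ of \eqref{eq:SpSylvester} indeed satisfies both $\S' + \S^{'\top} = \S$ and $\S' \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} = \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} \S^{'\top}$.

```python
# Sketch (arbitrary data): for positive definite A (a stand-in for P^{-T} W_Y P^{-1})
# and symmetric S, solve the Sylvester equation A X + X A = A S by vectorization:
# (I kron A + A^T kron I) vec(X) = vec(A S), with column-major vec.
import numpy as np

rng = np.random.default_rng(6)
r = 3
F = rng.standard_normal((r, r)) + 3 * np.eye(r)
A = F @ F.T                                   # symmetric positive definite
S = rng.standard_normal((r, r)); S = S + S.T  # symmetric right-hand data

K = np.kron(np.eye(r), A) + np.kron(A.T, np.eye(r))   # invertible since A is PD
Sp = np.linalg.solve(K, (A @ S).flatten(order="F")).reshape((r, r), order="F")

assert np.allclose(A @ Sp + Sp @ A, A @ S)    # Sp solves the Sylvester equation
assert np.allclose(Sp + Sp.T, S)              # S' + S'^T = S
assert np.allclose(Sp @ A, A @ Sp.T)          # S' A = A S'^T
```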
Since $\S$ is symmetric and $\P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1}$ is symmetric positive definite, we have \begin{equation} \label{eq: two-symmetric-equation} \begin{split} \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} \S' + \S' \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} - \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} \S &= {\mathbf{0}} \\ \S^{' \top} \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1}+\P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} \S^{'\top} - \S\P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} &= {\mathbf{0}}. \end{split} \end{equation} By summing the two equations in \eqref{eq: two-symmetric-equation}, we get $\P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} ( \S' + \S^{'\top} - \S) + ( \S' + \S^{'\top} - \S) \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} = {\mathbf{0}} $, i.e., $ \S' + \S^{'\top} - \S$ is a solution to the new Sylvester equation $\P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} \S_1 + \S_1 \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} = {\mathbf{0}}$ with respect to $\S_1$. Now, we know again by \cite[Theorem VII.2.1]{bhatia2013matrix} that ${\mathbf{0}}$ is the unique solution to the system $\P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} \S_1 + \S_1 \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} = {\mathbf{0}}$. Thus, $ \S' + \S^{'\top} = \S$, and from \eqref{eq: two-symmetric-equation}, it further holds that $ \S' \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} =\P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} \S^{'\top} $. This, together with \eqref{eq: barS subseq Sp} and the uniqueness of $ \S'$, shows that $\S'$ is the unique solution to the linear system $\widebar{\S} \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} = \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1}\widebar{\S}^\top $, $\widebar{\S} + \widebar{\S}^\top = \S$. This finishes the proof of this part. {\bf Step 2.} Note that both $\mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}$ and $T_{\mathbf{X}} {\cal M}_{r+}^e$ are of dimension $(pr - (r^2-r)/2)$.
Let $\mathcal{L}_{\mathbf{Y}}^{r+'} : \xi_{\mathbf{X}} \in T_{\mathbf{X}}{\cal M}^e_{r+} \longrightarrow \theta_{\mathbf{Y}}^{\xi_{\mathbf{X}}} \in \mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}$. Then, for any $\xi_{\mathbf{X}} = [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \S & {\mathbf{D}}^\top\\ {\mathbf{D}} & {\mathbf{0}} \end{bmatrix} [{\mathbf{U}} \quad {\mathbf{U}}_\perp]^\top\in T_{\mathbf{X}}{\cal M}^e_{r+}$, we have \begin{equation} \label{eq: bijection-PSD1} \mathcal{L}^{r+}_{\mathbf{Y}} ( \mathcal{L}_{\mathbf{Y}}^{r+'}(\xi_{\mathbf{X}}) ) = \mathcal{L}^{r+}_{\mathbf{Y}} (\theta_{\mathbf{Y}}^{\xi_{\mathbf{X}}}) = [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \P \theta_{\mathbf{Y}}^{\xi_{\mathbf{X}} \top} {\mathbf{U}} + {\mathbf{U}}^\top \theta_{\mathbf{Y}}^{\xi_{\mathbf{X}}} \P^\top & \P \theta_{\mathbf{Y}}^{\xi_{\mathbf{X}}\top} {\mathbf{U}}_\perp \\ {\mathbf{U}}_\perp^\top \theta_{\mathbf{Y}}^{\xi_{\mathbf{X}}} \P^\top & {\mathbf{0}} \end{bmatrix}[{\mathbf{U}} \quad {\mathbf{U}}_\perp]^\top = \xi_{\mathbf{X}}. \end{equation} Since $\mathcal{L}^{r+}_{\mathbf{Y}}$ and $\mathcal{L}_{\mathbf{Y}}^{r+'}$ are linear maps between spaces of the same dimension, \eqref{eq: bijection-PSD1} implies that $\mathcal{L}^{r+}_{\mathbf{Y}}$ is a bijection and $\mathcal{L}_{\mathbf{Y}}^{r+'} = (\mathcal{L}^{r+}_{\mathbf{Y}})^{-1}$. At the same time, it is easy to check that $ \mathcal{L}_{\mathbf{Y}}^{r+}$ satisfies $\mathcal{L}_{\mathbf{Y}}^{r+}(\theta_{\mathbf{Y}})={\mathbf{Y}} \theta_{\mathbf{Y}}^\top + \theta_{\mathbf{Y}} {\mathbf{Y}}^\top$ by observing ${\mathbf{Y}} = {\mathbf{U}} \P$. Next, we provide the spectrum bounds for $\mathcal{L}^{r+}_{\mathbf{Y}}$.
For any $\theta_{\mathbf{Y}} = ({\mathbf{U}}\S + {\mathbf{U}}_\perp {\mathbf{D}} )\P^{-\top} \in \mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}$, we have \begin{equation} \label{ineq: upper-bound-theta-norm} \begin{split} \bar{g}^{r+}_{\mathbf{Y}}(\theta_{\mathbf{Y}},\theta_{\mathbf{Y}}) = {\rm tr}({\mathbf{W}}_{\mathbf{Y}} \theta_{\mathbf{Y}}^\top \theta_{\mathbf{Y}}) = \| \theta_{\mathbf{Y}}{\mathbf{W}}_{\mathbf{Y}}^{1/2}\|_{\rm F}^2& \leq \sigma^2_1(\P^{-\top}{\mathbf{W}}_{\mathbf{Y}}^{1/2}) ( \|\S\|_{\rm F}^2 + \|{\mathbf{D}}\|_{\rm F}^2 ) \\ & = \sigma_1(\P^{-\top}{\mathbf{W}}_{\mathbf{Y}} \P^{-1}) ( \|\S\|_{\rm F}^2 + \|{\mathbf{D}}\|_{\rm F}^2 ) \\ & = ( \|\S\|_{\rm F}^2 + \|{\mathbf{D}}\|_{\rm F}^2 )/\sigma_r(\P {\mathbf{W}}_{\mathbf{Y}}^{-1} \P^\top), \end{split} \end{equation} and \begin{equation} \label{ineq: quadratic-S'-bound-PSD1} \begin{split} \langle \S^{\top}, \S \rangle &\overset{(a)}= \langle \P {\mathbf{W}}_{\mathbf{Y}}^{-1} \P^\top \S \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1}, \S \rangle \\ &= \langle ( \P {\mathbf{W}}_{\mathbf{Y}}^{-1} \P^\top)^{1/2} \S ( \P {\mathbf{W}}_{\mathbf{Y}}^{-1} \P^\top)^{-1/2}, ( \P {\mathbf{W}}_{\mathbf{Y}}^{-1} \P^\top)^{1/2} \S ( \P {\mathbf{W}}_{\mathbf{Y}}^{-1} \P^\top)^{-1/2} \rangle \geq 0, \end{split} \end{equation} where (a) holds because $ \S \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} = \P^{-\top} {\mathbf{W}}_{\mathbf{Y}} \P^{-1} \S^\top $ by the construction of $\mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}$.
Thus \begin{equation*} \begin{split} \|\mathcal{L}^{r+}_{\mathbf{Y}}(\theta_{\mathbf{Y}})\|_{\rm F}^2 = \|\xi^{\theta_{\mathbf{Y}}}_{{\mathbf{X}}}\|_{\rm F}^2 & \overset{ \eqref{def: xi-theta-correspondence-PSD1} }= \|\P \theta_{\mathbf{Y}}^\top {\mathbf{U}} + {\mathbf{U}}^\top \theta_{\mathbf{Y}} \P^\top \|_{\rm F}^2 + 2 \|{\mathbf{U}}_\perp^\top \theta_{\mathbf{Y}} \P^\top \|_{\rm F}^2\\ & = \|\S^\top + \S\|_{\rm F}^2 + 2\|{\mathbf{D}}\|_{\rm F}^2\\ & \overset{\eqref{ineq: quadratic-S'-bound-PSD1} } \geq 2 (\|\S\|_{\rm F}^2 + \|{\mathbf{D}}\|_{\rm F}^2) \overset{\eqref{ineq: upper-bound-theta-norm} } \geq 2\sigma_r(\P {\mathbf{W}}_{\mathbf{Y}}^{-1} \P^\top)\bar{g}^{r+}_{\mathbf{Y}}(\theta_{\mathbf{Y}},\theta_{\mathbf{Y}}), \end{split} \end{equation*} and \begin{equation*} \begin{split} \|\mathcal{L}^{r+}_{\mathbf{Y}}(\theta_{\mathbf{Y}})\|_{\rm F}^2 = \|\xi^{\theta_{\mathbf{Y}}}_{{\mathbf{X}}}\|_{\rm F}^2 & \overset{ \eqref{def: xi-theta-correspondence-PSD1} }= \|\P \theta_{\mathbf{Y}}^\top {\mathbf{U}} + {\mathbf{U}}^\top \theta_{\mathbf{Y}} \P^\top \|_{\rm F}^2 + 2 \|{\mathbf{U}}_\perp^\top \theta_{\mathbf{Y}} \P^\top \|_{\rm F}^2 \\ & \leq 4 \|{\mathbf{U}}^\top \theta_{\mathbf{Y}} \P^\top\|_{\rm F}^2 + 2 \|{\mathbf{U}}_\perp^\top \theta_{\mathbf{Y}} \P^\top \|_{\rm F}^2\\ & = 2 \|{\mathbf{U}}^\top \theta_{\mathbf{Y}} \P^\top\|_{\rm F}^2 + 2 \|\theta_{\mathbf{Y}} \P^\top\|_{\rm F}^2 \\ & \leq 4 \|\theta_{\mathbf{Y}} \P^{\top}\|_{\rm F}^2 = 4 \|\theta_{\mathbf{Y}} {\mathbf{W}}_{\mathbf{Y}}^{1/2} {\mathbf{W}}_{\mathbf{Y}}^{-1/2} \P^{\top}\|_{\rm F}^2 \\ &\overset{\eqref{ineq: upper-bound-theta-norm} }\leq 4\sigma_1(\P {\mathbf{W}}_{\mathbf{Y}}^{-1} \P^\top ) \bar{g}^{r+}_{\mathbf{Y}}(\theta_{\mathbf{Y}},\theta_{\mathbf{Y}}). \end{split} \end{equation*} This finishes the proof of this proposition. 
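The two bounds just derived can be sanity-checked numerically. The sketch below is our own construction (random stand-ins for ${\mathbf{U}}$, $\P$, ${\mathbf{W}}_{\mathbf{Y}}$, with a horizontal direction built via the Sylvester construction of Step 1; \texttt{numpy}/\texttt{scipy} assumed) and verifies $2\sigma_r(\P {\mathbf{W}}_{\mathbf{Y}}^{-1} \P^\top)\bar{g}^{r+}_{\mathbf{Y}}(\theta_{\mathbf{Y}},\theta_{\mathbf{Y}}) \leq \|\mathcal{L}^{r+}_{\mathbf{Y}}(\theta_{\mathbf{Y}})\|_{\rm F}^2 \leq 4\sigma_1(\P {\mathbf{W}}_{\mathbf{Y}}^{-1} \P^\top)\bar{g}^{r+}_{\mathbf{Y}}(\theta_{\mathbf{Y}},\theta_{\mathbf{Y}})$ on a random instance:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(1)
p, r = 8, 3
# Randomly generated stand-ins for U (orthonormal columns), P nonsingular,
# and a symmetric positive definite W_Y.
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
U, Uperp = Q[:, :r], Q[:, r:]
P = rng.standard_normal((r, r)) + 3 * np.eye(r)
M = rng.standard_normal((r, r))
W = M @ M.T + r * np.eye(r)
Pinv = np.linalg.inv(P)
A = Pinv.T @ W @ Pinv                        # P^{-T} W_Y P^{-1}

# Horizontal direction theta_Y = (U S + U_perp D) P^{-T} with S A = A S^T,
# where S comes from the Sylvester construction of Step 1.
S0 = rng.standard_normal((r, r))
S = solve_sylvester(A, A, A @ (S0 + S0.T))
D = rng.standard_normal((p - r, r))
theta = (U @ S + Uperp @ D) @ Pinv.T

Y = U @ P
xi = Y @ theta.T + theta @ Y.T               # L_Y^{r+}(theta_Y)
sq = np.linalg.norm(xi, 'fro') ** 2
gbar = np.trace(W @ theta.T @ theta)         # metric value at (theta, theta)

evals = np.linalg.eigvalsh(P @ np.linalg.inv(W) @ P.T)  # ascending order
assert 2 * evals[0] * gbar <= sq + 1e-8      # lower spectrum bound
assert sq <= 4 * evals[-1] * gbar + 1e-8     # upper spectrum bound
```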
\quad $\blacksquare$ \begin{Proposition}[Bijection Between $\mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$ and $T_{\mathbf{X}} {\cal M}_{r+}^e$] \label{prop: psd-bijection2} Suppose ${\mathbf{U}} \in {\rm St}(r,p)$, ${\mathbf{B}} \in \mathbb{S}_+(r)$ and ${\mathbf{X}} = {\mathbf{U}} {\mathbf{B}} {\mathbf{U}}^\top$. For any $\theta_{({\mathbf{U}},{\mathbf{B}})} = [\theta_U^\top \quad \theta_B^\top]^\top \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$ and $\xi_{\mathbf{X}} = [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \S & {\mathbf{D}}^\top\\ {\mathbf{D}} & {\mathbf{0}} \end{bmatrix} [{\mathbf{U}} \quad {\mathbf{U}}_\perp]^\top \in T_{{\mathbf{X}}}{\cal M}^e_{r+}$, define \begin{equation} \label{def: xi-theta-correspondence-PSD2} \begin{split} \xi^{\theta_{({\mathbf{U}},{\mathbf{B}})}}_{{\mathbf{X}}}&:= [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \theta_B & {\mathbf{B}} \theta_U^\top {\mathbf{U}}_\perp \\ {\mathbf{U}}_\perp^\top \theta_U {\mathbf{B}} & {\mathbf{0}} \end{bmatrix}[{\mathbf{U}} \quad {\mathbf{U}}_\perp]^\top \in T_{{\mathbf{X}}}{\cal M}^e_{r+}, \\ \theta_{({\mathbf{U}},{\mathbf{B}})}^{\xi_{\mathbf{X}}} &:= [({\mathbf{U}}_\perp {\mathbf{D}} {\mathbf{B}}^{-1})^\top \quad \S]^\top \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}. 
\end{split} \end{equation} Then we can find a linear bijective mapping $\mathcal{L}_{{\mathbf{U}},{\mathbf{B}}}^{r+}$ between $\mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$ and $T_{\mathbf{X}} {\cal M}_{r+}^e$, \begin{equation*} \begin{split} &\mathcal{L}_{{\mathbf{U}},{\mathbf{B}}}^{r+}: \theta_{({\mathbf{U}},{\mathbf{B}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2} \longrightarrow \xi_{{\mathbf{X}}}^{\theta_{({\mathbf{U}},{\mathbf{B}})}} \in T_{\mathbf{X}} {\cal M}_{r+}^e, \\ &(\mathcal{L}_{{\mathbf{U}},{\mathbf{B}}}^{r+})^{-1}: \xi_{\mathbf{X}} \in T_{\mathbf{X}} {\cal M}_{r+}^e \to \theta_{({\mathbf{U}},{\mathbf{B}})}^{\xi_{\mathbf{X}}} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2} \end{split} \end{equation*}such that $\mathcal{L}_{{\mathbf{U}},{\mathbf{B}}}^{r+}(\theta_{({\mathbf{U}},{\mathbf{B}})}) = {\mathbf{U}} {\mathbf{B}} \theta_U^\top + {\mathbf{U}} \theta_B {\mathbf{U}}^\top + \theta_U {\mathbf{B}} {\mathbf{U}}^\top$ holds for any $\theta_{({\mathbf{U}},{\mathbf{B}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$. Finally, $\mathcal{L}_{{\mathbf{U}},{\mathbf{B}}}^{r+}$ satisfies the following spectrum bound: $\forall \theta_{({\mathbf{U}},{\mathbf{B}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$, \begin{equation} \label{ineq: bijection-spectrum-psd2} \sigma^2_r({\mathbf{X}})\bar{g}^{r+}_{({\mathbf{U}},{\mathbf{B}})}(\theta_{({\mathbf{U}},{\mathbf{B}})}, \theta_{({\mathbf{U}},{\mathbf{B}})}) \leq \|\mathcal{L}^{r+}_{{\mathbf{U}},{\mathbf{B}}}(\theta_{({\mathbf{U}},{\mathbf{B}})})\|_{\rm F}^2 \leq 2 \sigma^2_1({\mathbf{X}})\bar{g}^{r+}_{({\mathbf{U}},{\mathbf{B}})}(\theta_{({\mathbf{U}},{\mathbf{B}})}, \theta_{({\mathbf{U}},{\mathbf{B}})}). 
\end{equation} \end{Proposition} {\noindent \bf Proof of Proposition \ref{prop: psd-bijection2}.} First, it is easy to see that $\xi^{\theta_{({\mathbf{U}},{\mathbf{B}})}}_{{\mathbf{X}}}$ and $\theta_{({\mathbf{U}},{\mathbf{B}})}^{\xi_{\mathbf{X}}}$ are well defined given any $\theta_{({\mathbf{U}},{\mathbf{B}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$ and $\xi_{\mathbf{X}} \in T_{{\mathbf{X}}}{\cal M}^e_{r+}$. Next, we show that $\mathcal{L}_{{\mathbf{U}},{\mathbf{B}}}^{r+}$ is a bijection. Notice that $\mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$ is of dimension $(pr - (r^2-r)/2)$, which is the same as that of $T_{\mathbf{X}} {\cal M}_{r+}^e$. Let $\mathcal{L}_{{\mathbf{U}},{\mathbf{B}}}^{r+'} : \xi_{\mathbf{X}} \in T_{\mathbf{X}}{\cal M}^e_{r+} \longrightarrow \theta_{({\mathbf{U}},{\mathbf{B}})}^{\xi_{\mathbf{X}}} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$. For any $\xi_{\mathbf{X}} = [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \S & {\mathbf{D}}^\top\\ {\mathbf{D}} & {\mathbf{0}} \end{bmatrix} [{\mathbf{U}} \quad {\mathbf{U}}_\perp]^\top \in T_{\mathbf{X}}{\cal M}^e_{r+}$, we have \begin{equation} \label{eq: bijection-PSD2} \mathcal{L}^{r+}_{{\mathbf{U}},{\mathbf{B}}} ( \mathcal{L}_{{\mathbf{U}},{\mathbf{B}}}^{r+'}(\xi_{\mathbf{X}}) ) = \mathcal{L}^{r+}_{{\mathbf{U}},{\mathbf{B}}} (\theta_{({\mathbf{U}},{\mathbf{B}})}^{\xi_{\mathbf{X}}}) = [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \theta_B^{\xi_{\mathbf{X}}} & {\mathbf{B}} \theta^{\xi_{\mathbf{X}} \top}_U {\mathbf{U}}_\perp \\ {\mathbf{U}}_\perp^\top \theta_U^{\xi_{\mathbf{X}}} {\mathbf{B}} & {\mathbf{0}} \end{bmatrix}[{\mathbf{U}} \quad {\mathbf{U}}_\perp]^\top = \xi_{\mathbf{X}}.
\end{equation} Since $\mathcal{L}^{r+}_{{\mathbf{U}},{\mathbf{B}}}$ and $\mathcal{L}_{{\mathbf{U}},{\mathbf{B}}}^{r+'}$ are linear maps, \eqref{eq: bijection-PSD2} implies $\mathcal{L}^{r+}_{{\mathbf{U}},{\mathbf{B}}}$ is a bijection and $\mathcal{L}_{{\mathbf{U}},{\mathbf{B}}}^{r+'} = (\mathcal{L}^{r+}_{{\mathbf{U}},{\mathbf{B}}})^{-1}$. At the same time, it is easy to check $\mathcal{L}_{{\mathbf{U}},{\mathbf{B}}}^{r+}(\theta_{({\mathbf{U}},{\mathbf{B}})}) = {\mathbf{U}} {\mathbf{B}} \theta_U^\top + {\mathbf{U}} \theta_B {\mathbf{U}}^\top + \theta_U {\mathbf{B}} {\mathbf{U}}^\top$ holds for any $\theta_{({\mathbf{U}},{\mathbf{B}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$. Next, we provide the spectrum bounds for $\mathcal{L}^{r+}_{{\mathbf{U}},{\mathbf{B}}}$. For any $\theta_{({\mathbf{U}},{\mathbf{B}})} = [({\mathbf{U}}_\perp {\mathbf{D}})^\top \quad \theta_B]^\top \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}$, we have $\bar{g}^{r+}_{({\mathbf{U}},{\mathbf{B}})}(\theta_{({\mathbf{U}},{\mathbf{B}})},\theta_{({\mathbf{U}},{\mathbf{B}})}) = \|{\mathbf{D}}\|_{\rm F}^2 + {\rm tr}({\mathbf{B}}^{-1} \theta_B {\mathbf{B}}^{-1} \theta_B) = \|{\mathbf{D}}\|_{\rm F}^2 + \|{\mathbf{B}}^{-1/2} \theta_B {\mathbf{B}}^{-1/2}\|_{\rm F}^2$. 
Thus, we have \begin{equation*} \begin{split} \|\mathcal{L}^{r+}_{{\mathbf{U}},{\mathbf{B}}}(\theta_{({\mathbf{U}},{\mathbf{B}})})\|_{\rm F}^2 \overset{ \eqref{def: xi-theta-correspondence-PSD2} } = \|\theta_B\|_{\rm F}^2 + 2 \|{\mathbf{B}} \theta_U^\top {\mathbf{U}}_\perp\|_{\rm F}^2 = \|\theta_B\|_{\rm F}^2 + 2\|{\mathbf{B}} {\mathbf{D}}^\top\|_{\rm F}^2 &\leq 2\sigma^2_1({\mathbf{B}}) \left( \|{\mathbf{B}}^{-1/2} \theta_B {\mathbf{B}}^{-1/2}\|_{\rm F}^2 + \|{\mathbf{D}}\|_{\rm F}^2 \right)\\ & = 2 \sigma^2_1({\mathbf{X}}) \bar{g}^{r+}_{({\mathbf{U}},{\mathbf{B}})}(\theta_{({\mathbf{U}},{\mathbf{B}})},\theta_{({\mathbf{U}},{\mathbf{B}})}), \end{split} \end{equation*} and \begin{equation*} \begin{split} \|\mathcal{L}^{r+}_{{\mathbf{U}},{\mathbf{B}}}(\theta_{({\mathbf{U}},{\mathbf{B}})})\|_{\rm F}^2 \overset{ \eqref{def: xi-theta-correspondence-PSD2} }= \|\theta_B\|_{\rm F}^2 + 2\|{\mathbf{B}} {\mathbf{D}}^\top\|_{\rm F}^2 &\geq \sigma^2_r({\mathbf{B}}) \left( \|{\mathbf{B}}^{-1/2} \theta_B {\mathbf{B}}^{-1/2}\|_{\rm F}^2 + \|{\mathbf{D}}\|_{\rm F}^2 \right)\\ & = \sigma^2_r({\mathbf{X}}) \bar{g}^{r+}_{({\mathbf{U}},{\mathbf{B}})}(\theta_{({\mathbf{U}},{\mathbf{B}})},\theta_{({\mathbf{U}},{\mathbf{B}})}). \quad \quad \quad \blacksquare \end{split} \end{equation*} Next, we present our first main result on the geometric landscape connections of Riemannian optimization \eqref{eq: PSD-manifold-formulation} under the embedded and the quotient geometries. \begin{Theorem}[Geometric Landscape Connections of \eqref{eq: PSD-manifold-formulation} on ${\cal M}_{r+}^e$ and ${\cal M}_{r+}^{q_1}$] \label{th: embedded-quotient-connection-PSD1} Suppose the conditions in Proposition \ref{prop: psd-bijection1} hold and the ${\mathbf{W}}_{\mathbf{Y}}$ in $\bar{g}_{\mathbf{Y}}^{r+}$ satisfies ${\mathbf{W}}_{\mathbf{Y}} = \O {\mathbf{W}}_{{\mathbf{Y}}\O} \O^\top$ for any $\O \in \mathbb{O}_r$. 
Then \begin{equation} \label{eq: gradient-connect-PSD1} \begin{split} {\rm grad} f({\mathbf{X}}) &= \left( \overline{{\rm grad}\, h_{r+}([{\mathbf{Y}}])} {\mathbf{W}}_{\mathbf{Y}} {\mathbf{Y}}^\dagger + \left(\overline{{\rm grad}\, h_{r+}([{\mathbf{Y}}])} {\mathbf{W}}_{\mathbf{Y}} {\mathbf{Y}}^\dagger\right)^\top({\mathbf{I}}_p - {\mathbf{Y}} {\mathbf{Y}}^\dagger) \right)/2,\\ \overline{{\rm grad}\, h_{r+}([{\mathbf{Y}}])} &= 2 {\rm grad} f({\mathbf{X}}) {\mathbf{Y}} {\mathbf{W}}_{\mathbf{Y}}^{-1}. \end{split} \end{equation} Furthermore, if $[{\mathbf{Y}}]$ is a Riemannian FOSP of $h_{r+}([{\mathbf{Y}}'])$ defined via \eqref{eq: PSD-opt-problem-quotient-sub1}, we have: \begin{equation} \label{eq: Hessian-connection-PSD1} \begin{split} \overline{{\rm Hess} \, h_{r+}([{\mathbf{Y}}])}[\theta_{\mathbf{Y}}, \theta_{\mathbf{Y}}] &= {\rm Hess} f({\mathbf{X}})[\mathcal{L}_{\mathbf{Y}}^{r+}(\theta_{\mathbf{Y}}),\mathcal{L}_{\mathbf{Y}}^{r+}(\theta_{\mathbf{Y}})], \quad \forall \theta_{\mathbf{Y}} \in \mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}. \end{split} \end{equation} Finally, $\overline{{\rm Hess} \, h_{r+}([{\mathbf{Y}}])}$ and ${\rm Hess} f({\mathbf{X}})$ have $(pr-(r^2-r)/2)$ eigenvalues; for $i = 1,\ldots, pr-(r^2-r)/2$, \begin{equation*} \begin{split} & \lambda_i(\overline{{\rm Hess} \, h_{r+}([{\mathbf{Y}}])}) \text{ is sandwiched between } \\ & 2\sigma_r(\P {\mathbf{W}}_{\mathbf{Y}}^{-1} \P^\top) \lambda_i({\rm Hess} f({\mathbf{X}})) \text{ and } 4\sigma_1(\P {\mathbf{W}}_{\mathbf{Y}}^{-1} \P^\top) \lambda_i({\rm Hess} f({\mathbf{X}})). \end{split} \end{equation*} \end{Theorem} {\noindent \bf Proof of Theorem \ref{th: embedded-quotient-connection-PSD1}.} First, notice that ${\mathbf{Y}}$ lies in the column space spanned by ${\mathbf{U}}$ and that ${\mathbf{Y}} {\mathbf{Y}}^\dagger = P_{\mathbf{U}}$. Then \eqref{eq: gradient-connect-PSD1} follows by direct calculation from the gradient expressions in Proposition \ref{prop: gradient-hessian-exp-PSD}.
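The two identities in \eqref{eq: gradient-connect-PSD1} can also be checked numerically. The sketch below uses our own illustrative assumptions, namely ${\mathbf{W}}_{\mathbf{Y}} = {\mathbf{I}}_r$ and the test function $f({\mathbf{X}}) = \|{\mathbf{X}} - {\mathbf{X}}_0\|_{\rm F}^2/2$ with symmetric ${\mathbf{X}}_0$, in which case the horizontal gradient reduces to the Euclidean gradient $2\nabla f({\mathbf{Y}} {\mathbf{Y}}^\top){\mathbf{Y}}$ of ${\mathbf{Y}} \mapsto f({\mathbf{Y}} {\mathbf{Y}}^\top)$:

```python
import numpy as np

rng = np.random.default_rng(3)
p, r = 8, 3
# Illustrative assumptions (ours, for the check only): W_Y = I_r and
# f(X) = ||X - X0||_F^2 / 2 with X0 symmetric, so nabla f(X) = X - X0.
Y = rng.standard_normal((p, r))
X = Y @ Y.T
X0 = rng.standard_normal((p, p))
X0 = X0 + X0.T
G = X - X0                                   # Euclidean gradient nabla f(X)

U, _ = np.linalg.qr(Y)                       # orthonormal basis of col(Y)
PU = U @ U.T
Pperp = np.eye(p) - PU
grad_f = PU @ G @ PU + Pperp @ G @ PU + PU @ G @ Pperp   # embedded gradient

# With W_Y = I_r the horizontal gradient of h_{r+} is the Euclidean
# gradient of Y -> f(Y Y^T), namely 2 * nabla f(Y Y^T) Y.
grad_h = 2 * G @ Y

Ydag = np.linalg.pinv(Y)
lhs = (grad_h @ Ydag + (grad_h @ Ydag).T @ (np.eye(p) - Y @ Ydag)) / 2
assert np.allclose(lhs, grad_f)              # first identity in the display
assert np.allclose(2 * grad_f @ Y, grad_h)   # second identity (W_Y = I_r)
```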
Next, we prove \eqref{eq: Hessian-connection-PSD1}. Since $[{\mathbf{Y}}]$ is a Riemannian FOSP of $h_{r+}([{\mathbf{Y}}'])$, we have \begin{equation} \label{eq: FOSP-condition-PSD1} \overline{ {\rm grad} \, h_{r+}([{\mathbf{Y}}])} = {\mathbf{0}}, \quad \nabla f({\mathbf{Y}} {\mathbf{Y}}^\top) {\mathbf{Y}} = {\mathbf{0}}, \end{equation} and hence ${\rm grad} f({\mathbf{X}}) = {\mathbf{0}}$ by \eqref{eq: gradient-connect-PSD1}. Since $\nabla f({\mathbf{X}})$ is symmetric and $\nabla f({\mathbf{X}}) {\mathbf{Y}} = {\mathbf{0}}$, it follows that $ \nabla f({\mathbf{X}}) = P_{{\mathbf{U}}_\perp} \nabla f({\mathbf{X}}) P_{{\mathbf{U}}_\perp}$. Recall $\P = {\mathbf{U}}^\top {\mathbf{Y}}$, ${\mathbf{X}} = {\mathbf{Y}} {\mathbf{Y}}^\top$ and let $\boldsymbol{\Sigma} = {\mathbf{U}}^\top {\mathbf{X}} {\mathbf{U}}$. Given any $\theta_{\mathbf{Y}} \in \mathcal{H}_{{\mathbf{Y}}} \widebar{{\cal M}}_{r+}^{q_1}$, we have \begin{equation} \label{eq: Hessian-con-gradient-1} \langle \nabla f({\mathbf{X}}), P_{{\mathbf{U}}_\perp} \theta_{\mathbf{Y}} \P^{\top} \boldsymbol{\Sigma}^{-1} \P \theta_{\mathbf{Y}}^\top P_{{\mathbf{U}}_\perp} \rangle = \langle \nabla f({\mathbf{X}}), \theta_{\mathbf{Y}} \theta_{\mathbf{Y}}^\top \rangle, \end{equation} where the equality holds because $\P$ is nonsingular, $\P \P^\top = \boldsymbol{\Sigma}$ and $ \nabla f({\mathbf{X}}) = P_{{\mathbf{U}}_\perp} \nabla f({\mathbf{X}}) P_{{\mathbf{U}}_\perp} $.
Then by Proposition \ref{prop: gradient-hessian-exp-PSD}: \begin{equation*} \begin{split} & \quad \overline{{\rm Hess} \, h_{r+}([{\mathbf{Y}}])}[\theta_{\mathbf{Y}}, \theta_{\mathbf{Y}}] \\ &= \nabla^2 f({\mathbf{Y}} {\mathbf{Y}}^\top)[{\mathbf{Y}}\theta_{\mathbf{Y}}^\top + \theta_{\mathbf{Y}} {\mathbf{Y}}^\top , {\mathbf{Y}}\theta_{\mathbf{Y}}^\top + \theta_{\mathbf{Y}} {\mathbf{Y}}^\top ] + 2\langle \nabla f({\mathbf{Y}} {\mathbf{Y}}^\top ), \theta_{\mathbf{Y}} \theta_{\mathbf{Y}}^\top \rangle\\ &\quad + 2\langle \nabla f({\mathbf{Y}} {\mathbf{Y}}^\top) {\mathbf{Y}} {\rm D} {\mathbf{W}}_{\mathbf{Y}}^{-1} [\theta_{\mathbf{Y}}], \theta_{\mathbf{Y}}{\mathbf{W}}_{\mathbf{Y}} \rangle + \langle {\rm D} {\mathbf{W}}_{\mathbf{Y}}\left[\overline{ {\rm grad} \, h_{r+}([{\mathbf{Y}}])}\right], \theta_{\mathbf{Y}}^\top \theta_{\mathbf{Y}} \rangle/2\\ & \overset{ \eqref{eq: FOSP-condition-PSD1} }= \nabla^2 f({\mathbf{X}})[{\mathbf{Y}}\theta_{\mathbf{Y}}^\top + \theta_{\mathbf{Y}} {\mathbf{Y}}^\top , {\mathbf{Y}}\theta_{\mathbf{Y}}^\top + \theta_{\mathbf{Y}} {\mathbf{Y}}^\top ] + 2\langle \nabla f({\mathbf{Y}} {\mathbf{Y}}^\top ), \theta_{\mathbf{Y}} \theta_{\mathbf{Y}}^\top \rangle \\ & \overset{ \text{Proposition } \ref{prop: psd-bijection1}, \eqref{eq: Hessian-con-gradient-1} }= \nabla^2 f({\mathbf{X}})[\mathcal{L}_{\mathbf{Y}}^{r+}(\theta_{\mathbf{Y}}),\mathcal{L}_{\mathbf{Y}}^{r+}(\theta_{\mathbf{Y}})] + 2\langle \nabla f({\mathbf{X}}), P_{{\mathbf{U}}_\perp} \theta_{\mathbf{Y}} \P^{\top} \boldsymbol{\Sigma}^{-1} \P \theta_{\mathbf{Y}}^\top P_{{\mathbf{U}}_\perp} \rangle\\ & = {\rm Hess} f({\mathbf{X}})[\mathcal{L}_{\mathbf{Y}}^{r+}(\theta_{\mathbf{Y}}),\mathcal{L}_{\mathbf{Y}}^{r+}(\theta_{\mathbf{Y}})], \end{split} \end{equation*} where the last equality follows from the expression of ${\rm Hess} f({\mathbf{X}})$ in \eqref{eq: embedded-gd-hessian-psd} and the definition of $\mathcal{L}_{\mathbf{Y}}^{r+}$. 
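The Hessian identity \eqref{eq: Hessian-connection-PSD1} can likewise be checked on a constructed FOSP. The sketch below again uses our illustrative assumptions ${\mathbf{W}}_{\mathbf{Y}} = {\mathbf{I}}_r$ and $f({\mathbf{X}}) = \|{\mathbf{X}} - {\mathbf{X}}_0\|_{\rm F}^2/2$, now with ${\mathbf{X}}_0 = {\mathbf{Y}}^* {\mathbf{Y}}^{*\top} + {\mathbf{N}}$ for an ${\mathbf{N}}$ supported on the orthogonal complement of the column space of ${\mathbf{Y}}^*$, so that $\nabla f({\mathbf{X}}){\mathbf{Y}}^* = {\mathbf{0}}$ and $[{\mathbf{Y}}^*]$ is a Riemannian FOSP:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(4)
p, r = 8, 3
# Illustrative assumptions (ours): W_Y = I_r and f(X) = ||X - X0||_F^2 / 2
# with X0 = Ystar Ystar^T + N, where N lives in the orthogonal complement
# of col(Ystar); then nabla f(X) Ystar = 0, so [Ystar] is a FOSP.
Ystar = rng.standard_normal((p, r))
Q, _ = np.linalg.qr(Ystar, mode='complete')
U, Uperp = Q[:, :r], Q[:, r:]
Nsmall = rng.standard_normal((p - r, p - r))
N = Uperp @ (Nsmall + Nsmall.T) @ Uperp.T
X = Ystar @ Ystar.T
X0 = X + N
G = X - X0                                   # nabla f(X) = -N

P = U.T @ Ystar                              # Ystar = U P, P nonsingular
Pinv = np.linalg.inv(P)
A = Pinv.T @ Pinv                            # P^{-T} P^{-1} (W_Y = I_r)

# Horizontal direction theta_Y = (U S + U_perp D) P^{-T} with S A = A S^T.
S0 = rng.standard_normal((r, r))
S = solve_sylvester(A, A, A @ (S0 + S0.T))
D = rng.standard_normal((p - r, r))
theta = (U @ S + Uperp @ D) @ Pinv.T

# Quotient Hessian quadratic form (nabla^2 f = identity, W_Y constant):
xi = Ystar @ theta.T + theta @ Ystar.T       # = L_Y^{r+}(theta_Y)
hess_h = np.linalg.norm(xi, 'fro') ** 2 + 2 * np.sum(G * (theta @ theta.T))

# Embedded Hessian quadratic form evaluated at xi:
Sigma = U.T @ X @ U
Dblk = Uperp.T @ xi @ U
hess_f = np.linalg.norm(xi, 'fro') ** 2 + 2 * np.sum(
    G * (Uperp @ Dblk @ np.linalg.inv(Sigma) @ Dblk.T @ Uperp.T))

assert np.allclose(G @ Ystar, 0)             # FOSP condition holds
assert np.allclose(hess_h, hess_f)           # the two quadratic forms agree
```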
Then, by \eqref{ineq: bijection-spectrum-psd1}, \eqref{eq: Hessian-connection-PSD1} and Theorem \ref{th: hessian-sandwich}, it follows that $\overline{{\rm Hess} \, h_{r+}([{\mathbf{Y}}])}$ and ${\rm Hess} f({\mathbf{X}})$ both have $(pr-(r^2-r)/2)$ eigenvalues and that $\lambda_i(\overline{{\rm Hess} \, h_{r+}([{\mathbf{Y}}])})$ is sandwiched between $2\sigma_r(\P {\mathbf{W}}_{\mathbf{Y}}^{-1} \P^\top) \lambda_i({\rm Hess} f({\mathbf{X}})) $ and $4\sigma_1(\P {\mathbf{W}}_{\mathbf{Y}}^{-1} \P^\top) \lambda_i({\rm Hess} f({\mathbf{X}}))$ for $i = 1,\ldots,pr-(r^2-r)/2$. \quad $\blacksquare$ \begin{Theorem}[Geometric Landscape Connections of \eqref{eq: PSD-manifold-formulation} on ${\cal M}_{r+}^e$ and ${\cal M}_{r+}^{q_2}$] \label{th: embedded-quotient-connection-PSD2} Suppose ${\mathbf{U}} \in {\rm St}(r,p)$, ${\mathbf{B}} \in \mathbb{S}_+(r)$ and ${\mathbf{X}} = {\mathbf{U}} {\mathbf{B}} {\mathbf{U}}^\top$. Then \begin{equation} \label{eq: gradient-connect-PSD2} \begin{split} {\rm grad} f({\mathbf{X}}) &= ( \overline{{\rm grad}_{\mathbf{U}}\, h_{r+}([{\mathbf{U}},{\mathbf{B}}])} {\mathbf{B}}^{-1} {\mathbf{U}}^\top )/2 + ( \overline{{\rm grad}_{\mathbf{U}}\, h_{r+}([{\mathbf{U}},{\mathbf{B}}])} {\mathbf{B}}^{-1} {\mathbf{U}}^\top )^\top/2 \\ & \quad + {\mathbf{U}} {\mathbf{B}}^{-1} \overline{{\rm grad}_{\mathbf{B}}\, h_{r+}([{\mathbf{U}},{\mathbf{B}}])} {\mathbf{B}}^{-1} {\mathbf{U}}^\top,\\ \overline{{\rm grad}\, h_{r+}([{\mathbf{U}},{\mathbf{B}}])} &= \begin{bmatrix} 2 P_{{\mathbf{U}}_\perp} {\rm grad} f({\mathbf{X}}) {\mathbf{U}} {\mathbf{B}} \\ {\mathbf{B}} {\mathbf{U}}^\top {\rm grad} f({\mathbf{X}}) {\mathbf{U}} {\mathbf{B}} \end{bmatrix}.
\end{split} \end{equation} Furthermore, if $[{\mathbf{U}},{\mathbf{B}}]$ is a Riemannian FOSP of $h_{r+}([{\mathbf{U}}',{\mathbf{B}}'])$ defined via \eqref{eq: PSD-opt-problem-quotient-sub2}, we have: \begin{equation} \label{eq: Hessian-connection-PSD2} \begin{split} \overline{{\rm Hess} \, h_{r+}([{\mathbf{U}},{\mathbf{B}}])}[\theta_{({\mathbf{U}},{\mathbf{B}})}, \theta_{({\mathbf{U}},{\mathbf{B}})}] &= {\rm Hess} f({\mathbf{X}})[\mathcal{L}_{{\mathbf{U}},{\mathbf{B}}}^{r+}(\theta_{({\mathbf{U}},{\mathbf{B}})}),\mathcal{L}_{{\mathbf{U}},{\mathbf{B}}}^{r+}(\theta_{({\mathbf{U}},{\mathbf{B}})})], \quad \forall \theta_{({\mathbf{U}},{\mathbf{B}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}})} \widebar{{\cal M}}_{r+}^{q_2}. \end{split} \end{equation} Finally, $\overline{{\rm Hess} \, h_{r+}([{\mathbf{U}},{\mathbf{B}}])}$ has $(pr-(r^2-r)/2)$ eigenvalues and for $i = 1,\ldots, pr-(r^2-r)/2$, $$\lambda_i(\overline{{\rm Hess} \, h_{r+}([{\mathbf{U}},{\mathbf{B}}])}) \text{ is sandwiched between }\sigma^2_r({\mathbf{X}}) \lambda_i({\rm Hess} f({\mathbf{X}})) \text{ and } 2\sigma_1^2({\mathbf{X}}) \lambda_i({\rm Hess} f({\mathbf{X}})).$$ \end{Theorem} {\noindent \bf Proof of Theorem \ref{th: embedded-quotient-connection-PSD2}.} The proof of this theorem is similar to the proof of Theorem \ref{th: embedded-quotient-connection-PSD1} and is postponed to Appendix \ref{sec: addition-proof-psd}. \quad $\blacksquare$ Theorems \ref{th: embedded-quotient-connection-PSD1} and \ref{th: embedded-quotient-connection-PSD2} immediately show the following equivalence of Riemannian FOSPs, SOSPs and strict saddles of optimization \eqref{eq: PSD-manifold-formulation} under the embedded and the quotient geometries. 
\begin{Corollary}[Equivalence on Riemannian FOSPs, SOSPs and strict saddles of \eqref{eq: PSD-manifold-formulation} Under Embedded and Quotient Geometries]\label{coro: landscape connection PSD} Suppose ${\mathbf{W}}_{\mathbf{Y}} = \O {\mathbf{W}}_{{\mathbf{Y}}\O} \O^\top$ holds for any $\O \in \mathbb{O}_r$. Then we have \begin{itemize} \item[(a)] given ${\mathbf{Y}} \in \mathbb{R}^{p \times r}_*, {\mathbf{U}} \in {\rm St}(r,p)$ and ${\mathbf{B}} \in \mathbb{S}_{+}(r)$, if $[{\mathbf{Y}}]$ ($[{\mathbf{U}},{\mathbf{B}}]$) is a Riemannian FOSP or SOSP or strict saddle of $h_{r+}([{\mathbf{Y}}'])$ ($h_{r+}([{\mathbf{U}}',{\mathbf{B}}'])$), then ${\mathbf{X}} = {\mathbf{Y}} {\mathbf{Y}}^\top$ (${\mathbf{X}} = {\mathbf{U}} {\mathbf{B}} {\mathbf{U}}^\top$) is a Riemannian FOSP or SOSP or strict saddle of \eqref{eq: PSD-manifold-formulation} under the embedded geometry; \item[(b)] if ${\mathbf{X}}$ is a Riemannian FOSP or SOSP or strict saddle of \eqref{eq: PSD-manifold-formulation} under the embedded geometry, then there is a unique $[{\mathbf{Y}}]$ ($[{\mathbf{U}},{\mathbf{B}}]$) such that ${\mathbf{Y}} {\mathbf{Y}}^\top = {\mathbf{X}}$ (${\mathbf{U}} {\mathbf{B}} {\mathbf{U}}^\top = {\mathbf{X}}$) and it is a Riemannian FOSP or SOSP or strict saddle of $h_{r+}([{\mathbf{Y}}'])$ ($h_{r+}([{\mathbf{U}}',{\mathbf{B}}'])$). \end{itemize} \end{Corollary} {\noindent \bf Proof of Corollary \ref{coro: landscape connection PSD}.} Here we prove the Riemannian FOSP, SOSP and strict saddle equivalence on ${\cal M}_{r+}^e$ and ${\cal M}_{r+}^{q_1}$; a similar proof applies to the equivalence of \eqref{eq: PSD-manifold-formulation} on ${\cal M}_{r+}^e$ and ${\cal M}_{r+}^{q_2}$. First, by the connection of Riemannian gradients in \eqref{eq: gradient-connect-PSD1}, the connection of Riemannian FOSPs under the two geometries clearly holds. Suppose $[{\mathbf{Y}}]$ is a Riemannian SOSP of $h_{r+}([{\mathbf{Y}}'])$ and let ${\mathbf{X}} = {\mathbf{Y}} {\mathbf{Y}}^\top$.
Given any $\xi_{\mathbf{X}} \in T_{{\mathbf{X}}}{\cal M}^e_{r+}$, we have ${\rm Hess} f({\mathbf{X}})[\xi_{\mathbf{X}}, \xi_{\mathbf{X}}] \overset{ \eqref{eq: Hessian-connection-PSD1} }=\overline{{\rm Hess} \, h_{r+}([{\mathbf{Y}}])}[(\mathcal{L}^{r+}_{\mathbf{Y}})^{-1}(\xi_{\mathbf{X}}), (\mathcal{L}^{r+}_{\mathbf{Y}})^{-1}(\xi_{\mathbf{X}})] \geq 0$, where the inequality is by the SOSP assumption on $[{\mathbf{Y}}]$. Combining this with the fact that ${\mathbf{X}}$ is a Riemannian FOSP, we conclude that ${\mathbf{X}} = {\mathbf{Y}} {\mathbf{Y}}^\top$ is a Riemannian SOSP under the embedded geometry. Next, let us show the other direction: if ${\mathbf{X}}$ is a Riemannian SOSP under the embedded geometry, then there is a unique $[{\mathbf{Y}}]$ such that ${\mathbf{Y}} {\mathbf{Y}}^\top = {\mathbf{X}}$ and it is a Riemannian SOSP of $h_{r+}([{\mathbf{Y}}'])$. To see this, first, the uniqueness of $[{\mathbf{Y}}]$ is guaranteed by the fact that $\ell: {\mathbf{Y}} \in \widebar{{\cal M}}_{r+}^{q_1} \to {\mathbf{Y}} {\mathbf{Y}}^\top \in {\cal M}_{r+}^e $ induces a diffeomorphism between ${\cal M}_{r+}^e$ and ${\cal M}_{r+}^{q_1}$ \cite[Proposition A.7]{massart2020quotient}. In addition, we have shown that $[{\mathbf{Y}}]$ is a Riemannian FOSP of $h_{r+}([{\mathbf{Y}}'])$. Then by \eqref{eq: Hessian-connection-PSD1}, we have for any $\theta_{\mathbf{Y}} \in \mathcal{H}_{\mathbf{Y}} \widebar{{\cal M}}_{r+}^{q_1}$, $\overline{{\rm Hess} \, h_{r+}([{\mathbf{Y}}])}[\theta_{\mathbf{Y}}, \theta_{\mathbf{Y}}] = {\rm Hess} f({\mathbf{X}})[\mathcal{L}_{\mathbf{Y}}^{r+}(\theta_{\mathbf{Y}}),\mathcal{L}_{\mathbf{Y}}^{r+}(\theta_{\mathbf{Y}})] \geq 0$. Finally, the equivalence on strict saddles also follows easily from the sandwich inequality in Theorem \ref{th: embedded-quotient-connection-PSD1} and the definition of strict saddle.
\quad $\blacksquare$ \begin{Remark}[Spectrum Connection of Riemannian Hessians] The sandwich inequalities in Theorems \ref{th: embedded-quotient-connection-PSD1} and \ref{th: embedded-quotient-connection-PSD2} provide a finer connection between the spectra of the Riemannian Hessians under the embedded and the quotient geometries at Riemannian FOSPs. These results are useful in transferring a few common geometric landscape properties from one geometry formulation to another. One such example is the so-called strict saddle property \cite{ge2015escaping,lee2019first}, which states that the function has a strict negative curvature at all stationary points except local minima. With this strict saddle property, various Riemannian gradient descent and trust region methods are guaranteed to escape all strict saddles and converge to a SOSP \cite{boumal2019global,criscitiello2019efficiently,lee2019first,sun2018geometric,sun2019escaping}. \end{Remark} \begin{Remark}[Effects of Riemannian Metric and Quotient Structure on Landscape Connection] \label{rem: effiect-of-metric-on-landscape} We can see from Corollary \ref{coro: landscape connection PSD} that the choices of the quotient structure and the ${\mathbf{W}}_{\mathbf{Y}}$ in the Riemannian metric $\bar{g}^{r+}_{\mathbf{Y}}$ do not affect the landscape connection of FOSPs and SOSPs under the two geometries. A similar phenomenon also occurs in general fixed-rank matrix optimization. On the other hand, the quotient structure and the ${\mathbf{W}}_{\mathbf{Y}}$ do affect the gaps of the sandwich inequalities in Theorems \ref{th: embedded-quotient-connection-PSD1} and \ref{th: embedded-quotient-connection-PSD2}.
For example, in Theorem \ref{th: embedded-quotient-connection-PSD1} the gap coefficients are $2\sigma^2_r({\mathbf{Y}})$ and $4\sigma^2_1({\mathbf{Y}})$ when ${\mathbf{W}}_{\mathbf{Y}} = {\mathbf{I}}_r$; however, they become the universal constants $1$ and $2$ if we choose ${\mathbf{W}}_{\mathbf{Y}} = 2{\mathbf{Y}}^\top {\mathbf{Y}}$. As we will discuss in Remark \ref{rem: algorithmic-connection}, when the gap coefficients are universal constants, there is a surprising algorithmic connection between adopting the embedded and the quotient geometries in Riemannian fixed-rank matrix optimization. \end{Remark} \begin{Remark}[Implications on Connections of Different Geometries for Riemannian Optimization] \label{rem: implication-on-connection-diff-approaches} Generally speaking, embedded and quotient geometries are the two most common choices in Riemannian optimization. Compared to the quotient geometry, the embedded geometry allows computing and interpreting many geometric objects straightforwardly. Theorems \ref{th: embedded-quotient-connection-PSD1} and \ref{th: embedded-quotient-connection-PSD2} establish a strong geometric landscape connection between the two geometries in fixed-rank PSD matrix optimization, and this provides an example in which two different geometries are indeed connected in treating the same constraint in Riemannian optimization. Finally, we note that although we focus on the geometric connections of \eqref{eq: PSD-manifold-formulation} under the embedded and the quotient geometries, it is also relatively easy to obtain the geometric connections under different quotient geometries based on our results. \end{Remark} \section{Geometric Connections of Embedded and Quotient Geometries in Fixed-rank General Matrix Optimization} \label{sec: connection-general} In this section, we present the geometric landscape connections of optimization problem \eqref{eq: general prob} under the embedded and the quotient geometries.
First, the problem \eqref{eq: general prob} can be reformulated under each of the three quotient geometries in Section \ref{sec: quotient-general} as follows: \begin{subequations}\label{eq: general-opt-problem-quotient} \begin{align} \text{on } \widebar{{\cal M}}_{r}^{q_1}:& \quad \min_{{\mathbf{L}} \in \mathbb{R}^{p_1 \times r}_*,{\mathbf{R}} \in \mathbb{R}^{p_2 \times r}_*}\bar{h}_{r}({\mathbf{L}},{\mathbf{R}}):= f({\mathbf{L}} {\mathbf{R}}^\top), \label{eq: general-opt-problem-quotient-sub1}\\ \text{on } \widebar{{\cal M}}_{r}^{q_2}:& \quad \min_{{\mathbf{U}} \in {\rm St}(r,p_1), {\mathbf{B}} \in \mathbb{S}_+(r), {\mathbf{V}} \in {\rm St}(r,p_2)}\bar{h}_{r}({\mathbf{U}}, {\mathbf{B}}, {\mathbf{V}}):= f({\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top), \label{eq: general-opt-problem-quotient-sub2}\\ \text{on } \widebar{{\cal M}}_{r}^{q_3}:& \quad \min_{{\mathbf{U}} \in {\rm St}(r,p_1), {\mathbf{Y}} \in \mathbb{R}^{p_2 \times r}_*}\bar{h}_{r}({\mathbf{U}}, {\mathbf{Y}}):= f({\mathbf{U}} {\mathbf{Y}}^\top).\label{eq: general-opt-problem-quotient-sub3} \end{align} \end{subequations} Since $\bar{h}_{r}({\mathbf{L}},{\mathbf{R}})$, $\bar{h}_{r}({\mathbf{U}}, {\mathbf{B}}, {\mathbf{V}})$ and $\bar{h}_{r}({\mathbf{U}}, {\mathbf{Y}})$ are invariant along the fibers of $\widebar{{\cal M}}_{r}^{q_1}$, $\widebar{{\cal M}}_{r}^{q_2}$ and $\widebar{{\cal M}}_{r}^{q_3}$, they induce functions $h_{r}([{\mathbf{L}},{\mathbf{R}}])$, $h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])$ and $h_r([{\mathbf{U}},{\mathbf{Y}}])$ on quotient manifolds ${\cal M}_{r}^{q_1}$, ${\cal M}_{r}^{q_2}$ and ${\cal M}_{r}^{q_3}$, respectively. In the following Proposition \ref{prop: gradient-hessian-exp-general}, we provide the Riemannian gradients and Hessians of \eqref{eq: general prob} under the embedded and the quotient geometries.
\begin{Proposition}[Riemannian Gradients and Hessians of \eqref{eq: general prob}] \label{prop: gradient-hessian-exp-general} The Riemannian gradients and Hessians of \eqref{eq: general prob} under the embedded and the quotient geometries introduced in Section \ref{sec: embedded-quotient-fixed-rank-matrix} are: \begin{itemize}[leftmargin=*] \item On ${\cal M}^e_{r}$: Suppose ${\mathbf{X}} \in {\cal M}_{r}^e$, ${\mathbf{U}},{\mathbf{V}}$ span the top $r$ left and right singular subspaces of ${\mathbf{X}}$, respectively and $\xi_{\mathbf{X}} = [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \S & {\mathbf{D}}_2^\top\\ {\mathbf{D}}_1 & {\mathbf{0}} \end{bmatrix} [{\mathbf{V}} \quad {\mathbf{V}}_\perp]^\top \in T_{{\mathbf{X}}}{\cal M}^e_{r}$. Then \begin{equation}\label{eq: embedded-gd-hessian-general} \begin{split} {\rm grad} f({\mathbf{X}}) &= P_{{\mathbf{U}}} \nabla f({\mathbf{X}})P_{{\mathbf{V}}} + P_{{\mathbf{U}}_\perp} \nabla f({\mathbf{X}})P_{{\mathbf{V}}} + P_{{\mathbf{U}}} \nabla f({\mathbf{X}})P_{{\mathbf{V}}_\perp},\\ {\rm Hess} f({\mathbf{X}})[\xi_{\mathbf{X}}, \xi_{\mathbf{X}}] &= \nabla^2 f({\mathbf{X}})[\xi_{\mathbf{X}}, \xi_{\mathbf{X}}] + 2\langle \nabla f({\mathbf{X}}), {\mathbf{U}}_\perp {\mathbf{D}}_1 \boldsymbol{\Sigma}^{-1} {\mathbf{D}}_2^\top {\mathbf{V}}_\perp^\top \rangle, \end{split} \end{equation} where $\boldsymbol{\Sigma} = {\mathbf{U}}^\top {\mathbf{X}} {\mathbf{V}}$. \item On ${\cal M}_{r}^{q_1}$: Suppose ${\mathbf{L}} \in \mathbb{R}^{p_1 \times r}_*$, ${\mathbf{R}} \in \mathbb{R}^{p_2 \times r}_*$ and $\theta_{({\mathbf{L}},{\mathbf{R}})} = [\theta_L^\top \quad \theta_R^\top]^\top \in \mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}$. 
Then \begin{equation} \label{eq: quotient-gradient-Hessian-general1} \begin{split} &\overline{{\rm grad}\, h_{r}([{\mathbf{L}},{\mathbf{R}}])} = \begin{bmatrix} \overline{{\rm grad}_{\mathbf{L}}\, h_{r}([{\mathbf{L}},{\mathbf{R}}])}\\ \overline{{\rm grad}_{\mathbf{R}}\, h_{r}([{\mathbf{L}},{\mathbf{R}}])} \end{bmatrix} = \begin{bmatrix} \nabla f({\mathbf{L}} {\mathbf{R}}^\top) {\mathbf{R}} {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \\ (\nabla f({\mathbf{L}} {\mathbf{R}}^\top))^\top {\mathbf{L}} {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \end{bmatrix}, \\ &\overline{{\rm Hess} \, h_{r}([{\mathbf{L}},{\mathbf{R}}])}[\theta_{({\mathbf{L}},{\mathbf{R}})}, \theta_{({\mathbf{L}},{\mathbf{R}})}] \\ &= \nabla^2 f({\mathbf{L}} {\mathbf{R}}^\top)[{\mathbf{L}} \theta_R^\top + \theta_L {\mathbf{R}}^\top,{\mathbf{L}} \theta_R^\top + \theta_L {\mathbf{R}}^\top] + 2 \langle \nabla f({\mathbf{L}} {\mathbf{R}}^\top), \theta_L \theta_R^\top \rangle \\ & \quad + \langle \nabla f({\mathbf{L}} {\mathbf{R}}^\top) {\mathbf{R}} {\rm D} {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}^{-1}[\theta_{({\mathbf{L}},{\mathbf{R}})}] , \theta_L {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}} \rangle + \langle \left(\nabla f({\mathbf{L}} {\mathbf{R}}^\top) \right)^\top {\mathbf{L}} {\rm D} {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1}[\theta_{({\mathbf{L}},{\mathbf{R}})}], \theta_R {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}} \rangle \\ & \quad + \langle {\rm D} {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}[ \overline{ {\rm grad} \, h_{r}([{\mathbf{L}},{\mathbf{R}}])} ], \theta_L^\top \theta_L \rangle /2 + \langle {\rm D} {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}[ \overline{ {\rm grad} \, h_{r}([{\mathbf{L}},{\mathbf{R}}])} ], \theta_R^\top \theta_R \rangle /2. 
\end{split} \end{equation} \item On ${\cal M}_{r}^{q_2}$: Suppose ${\mathbf{U}} \in {\rm St}(r,p_1)$, ${\mathbf{B}} \in \mathbb{S}_+(r)$, ${\mathbf{V}} \in {\rm St}(r,p_2)$ and $\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} = [\theta_U^\top \quad \theta_B^\top \quad \theta_V^\top]^\top \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_r^{q_2}$. Then \begin{equation}\label{eq: quotient-gradient-Hessian-general2} \begin{split} &\overline{{\rm grad}\, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])} = \begin{bmatrix} \overline{{\rm grad}_{\mathbf{U}}\, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])}\\ \overline{{\rm grad}_{\mathbf{B}}\, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])} \\ \overline{{\rm grad}_{\mathbf{V}}\, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])} \end{bmatrix} \\ & \quad \quad \quad \quad \quad \quad \quad \quad = \begin{bmatrix} P_{{\mathbf{U}}_\perp} \nabla f({\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top) {\mathbf{V}} {\mathbf{B}} + {\mathbf{U}}\left(\skew(\boldsymbol{\Delta} ){\mathbf{B}} + {\mathbf{B}} \skew(\boldsymbol{\Delta} ) \right)/2 \\ {\mathbf{B}} {\rm Sym}(\boldsymbol{\Delta}) {\mathbf{B}} \\ P_{{\mathbf{V}}_\perp} (\nabla f({\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top))^\top {\mathbf{U}} {\mathbf{B}} - {\mathbf{V}} \left( \skew(\boldsymbol{\Delta} ){\mathbf{B}} + {\mathbf{B}} \skew(\boldsymbol{\Delta} ) \right)/2 \end{bmatrix},\\ & \overline{{\rm Hess} \, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])}[\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}, \theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}] \\ =&\nabla^2 f({\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top)[\theta_U {\mathbf{B}} {\mathbf{V}}^\top + {\mathbf{U}} \theta_B {\mathbf{V}}^\top + {\mathbf{U}} {\mathbf{B}} \theta_V^\top, \theta_U {\mathbf{B}} {\mathbf{V}}^\top + {\mathbf{U}} \theta_B {\mathbf{V}}^\top + {\mathbf{U}} {\mathbf{B}} \theta_V^\top] + 2\langle \nabla f({\mathbf{U}}{\mathbf{B}}{\mathbf{V}}^\top), \theta_U {\mathbf{B}} 
\theta_V^\top \rangle\\ & + \left\langle \boldsymbol{\Delta}, {\rm Sym}({\mathbf{U}}^\top \theta_U {\mathbf{U}}^\top\theta_U) {\mathbf{B}} + {\mathbf{B}} {\rm Sym}({\mathbf{V}}^\top \theta_V {\mathbf{U}}^\top \theta_U) -2\theta_U^\top \theta_U {\mathbf{B}} \right\rangle/2\\ & + \left\langle \boldsymbol{\Delta}, {\mathbf{B}}{\rm Sym}({\mathbf{V}}^\top \theta_V {\mathbf{V}}^\top\theta_V) + {\rm Sym}({\mathbf{U}}^\top \theta_U {\mathbf{V}}^\top \theta_V) {\mathbf{B}} -2{\mathbf{B}}\theta_V^\top \theta_V + 2\theta_B {\mathbf{B}}^{-1} \theta_B \right\rangle/2\\ & + \langle \boldsymbol{\Delta}', 2\theta_B - {\mathbf{U}}^\top \theta_U {\mathbf{B}} - \theta_U^\top {\mathbf{U}} {\mathbf{B}}/2 -{\mathbf{V}}^\top \theta_V {\mathbf{B}}/2 \rangle + \langle \boldsymbol{\Delta}'', 2\theta_B - {\mathbf{B}}\theta_V^\top {\mathbf{V}} - {\mathbf{B}}{\mathbf{V}}^\top \theta_V/2 -{\mathbf{B}} \theta_U^\top {\mathbf{U}} /2 \rangle, \end{split} \end{equation} where $\boldsymbol{\Delta} = {\mathbf{U}}^\top\nabla f({\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top) {\mathbf{V}}$, $\boldsymbol{\Delta}' = \theta_U^\top\nabla f({\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top) {\mathbf{V}}$ and $\boldsymbol{\Delta}'' = {\mathbf{U}}^\top\nabla f({\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top) \theta_V$. \item On ${\cal M}_r^{q_3}$: Suppose ${\mathbf{U}} \in {\rm St}(r,p_1)$, ${\mathbf{Y}} \in \mathbb{R}^{p_2 \times r}_*$ and $\theta_{({\mathbf{U}},{\mathbf{Y}})} = [\theta_U^\top \quad \theta_Y^\top]^\top \in \mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}$. 
Then \begin{equation}\label{eq: quotient-gradient-Hessian-general3} \begin{split} &\overline{{\rm grad}\, h_{r}([{\mathbf{U}},{\mathbf{Y}}])} = \begin{bmatrix} \overline{{\rm grad}_{\mathbf{U}}\, h_{r}([{\mathbf{U}},{\mathbf{Y}}])}\\ \overline{{\rm grad}_{\mathbf{Y}}\, h_{r}([{\mathbf{U}},{\mathbf{Y}}])} \end{bmatrix} = \begin{bmatrix} P_{{\mathbf{U}}_\perp} \nabla f({\mathbf{U}} {\mathbf{Y}}^\top) {\mathbf{Y}} \\ (\nabla f({\mathbf{U}} {\mathbf{Y}}^\top))^\top {\mathbf{U}} {\mathbf{W}}_{\mathbf{Y}}^{-1} \end{bmatrix},\\ &\overline{{\rm Hess} \, h_{r}([{\mathbf{U}},{\mathbf{Y}}])}[\theta_{({\mathbf{U}},{\mathbf{Y}})}, \theta_{({\mathbf{U}},{\mathbf{Y}})}] \\ =& \nabla^2 f({\mathbf{U}} {\mathbf{Y}}^\top)[{\mathbf{U}} \theta_Y^\top + \theta_U {\mathbf{Y}}^\top, {\mathbf{U}} \theta_Y^\top + \theta_U {\mathbf{Y}}^\top] + 2 \langle \nabla f({\mathbf{U}} {\mathbf{Y}}^\top), \theta_U \theta_Y^\top \rangle - \langle {\mathbf{U}}^\top \nabla f({\mathbf{U}} {\mathbf{Y}}^\top) {\mathbf{Y}}, \theta_U^\top \theta_U \rangle \\ & + \langle (\nabla f({\mathbf{U}} {\mathbf{Y}}^\top))^\top {\mathbf{U}} {\rm D} {\mathbf{W}}_{\mathbf{Y}}^{-1} [\theta_Y], \theta_Y {\mathbf{W}}_{\mathbf{Y}} \rangle + \langle {\rm D} {\mathbf{W}}_{\mathbf{Y}}\left[\overline{ {\rm grad}_{\mathbf{Y}} \, h_{r}([{\mathbf{U}},{\mathbf{Y}}])}\right], \theta_Y^\top \theta_Y \rangle /2. \end{split} \end{equation} \end{itemize} \end{Proposition} Next, we construct bijective maps between $T_{\mathbf{X}} {\cal M}_{r}^e$ and $\mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}$, $\mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}$, $\mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}$, and give their spectrum bounds. 
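The first identity in \eqref{eq: embedded-gd-hessian-general} can be rewritten as ${\rm grad} f({\mathbf{X}}) = \nabla f({\mathbf{X}}) - P_{{\mathbf{U}}_\perp} \nabla f({\mathbf{X}}) P_{{\mathbf{V}}_\perp}$, i.e., the Riemannian gradient is the orthogonal projection of the Euclidean gradient onto $T_{\mathbf{X}} {\cal M}^e_{r}$. A minimal numerical sketch of this identity follows; the test cost $f({\mathbf{X}}) = \frac{1}{2}\|{\mathbf{X}} - {\mathbf{A}}\|_{\rm F}^2$ (so $\nabla f({\mathbf{X}}) = {\mathbf{X}} - {\mathbf{A}}$) and the dimensions are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
p1, p2, r = 7, 6, 3

# Rank-r point X; U, V span its top-r left/right singular subspaces.
X = rng.standard_normal((p1, r)) @ rng.standard_normal((r, p2))
U, _, Vt = np.linalg.svd(X, full_matrices=False)
U, V = U[:, :r], Vt[:r, :].T
PU, PV = U @ U.T, V @ V.T
I1, I2 = np.eye(p1), np.eye(p2)

# Illustrative cost f(X) = 0.5 * ||X - A||_F^2, Euclidean gradient G = X - A.
A = rng.standard_normal((p1, p2))
G = X - A

# Riemannian gradient via the proposition's formula:
# grad f(X) = P_U G P_V + P_{U_perp} G P_V + P_U G P_{V_perp}.
grad = PU @ G @ PV + (I1 - PU) @ G @ PV + PU @ G @ (I2 - PV)

# It should equal G minus its normal component P_{U_perp} G P_{V_perp},
# i.e. the orthogonal projection of G onto the tangent space.
proj = G - (I1 - PU) @ G @ (I2 - PV)
err_proj = np.linalg.norm(grad - proj)

# <grad f(X), xi> must match <G, xi> for any tangent vector
# xi = U S V^T + U_perp D1 V^T + U D2^T V_perp^T.
S = rng.standard_normal((r, r))
D1 = rng.standard_normal((p1, r))
D2 = rng.standard_normal((p2, r))
xi = U @ S @ V.T + (I1 - PU) @ D1 @ V.T + U @ D2.T @ (I2 - PV)
err_pair = abs(np.sum(grad * xi) - np.sum(G * xi))
print(err_proj, err_pair)
```

The pairing check reflects the defining property of the Riemannian gradient on an embedded submanifold: $\langle {\rm grad} f({\mathbf{X}}), \xi_{\mathbf{X}} \rangle = \langle \nabla f({\mathbf{X}}), \xi_{\mathbf{X}} \rangle$ for every $\xi_{\mathbf{X}} \in T_{\mathbf{X}} {\cal M}^e_{r}$.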
\begin{Proposition}[Bijection Between $\mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}$ and $T_{\mathbf{X}} {\cal M}_{r}^e$] \label{prop: general-bijection1} Suppose ${\mathbf{L}} \in \mathbb{R}^{p_1 \times r}_*$, ${\mathbf{R}} \in \mathbb{R}^{p_2 \times r}_*$, ${\mathbf{X}} = {\mathbf{L}} {\mathbf{R}}^\top$ with its top $r$ left and right singular subspaces spanned by ${\mathbf{U}}$ and ${\mathbf{V}}$, respectively, and $\P_1 = {\mathbf{U}}^\top {\mathbf{L}}$, $\P_2 = {\mathbf{V}}^\top {\mathbf{R}}$. For any $\theta_{({\mathbf{L}},{\mathbf{R}})} = [\theta_L^\top \quad \theta_R^\top]^\top \in \mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}$ and $\xi_{\mathbf{X}} = [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \S & {\mathbf{D}}_2^\top\\ {\mathbf{D}}_1 & {\mathbf{0}} \end{bmatrix} [{\mathbf{V}} \quad {\mathbf{V}}_\perp]^\top \in T_{{\mathbf{X}}}{\cal M}^e_{r}$, define \begin{equation} \label{def: xi-theta-correspondence-general1} \begin{split} \xi^{\theta_{({\mathbf{L}},{\mathbf{R}})}}_{{\mathbf{X}}}&:= [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \P_1 \theta_R^\top {\mathbf{V}} + {\mathbf{U}}^\top \theta_L \P_2^\top & \P_1 \theta_R^\top {\mathbf{V}}_\perp \\ {\mathbf{U}}_\perp^\top \theta_L \P_2^\top & {\mathbf{0}} \end{bmatrix}[{\mathbf{V}} \quad {\mathbf{V}}_\perp]^\top \in T_{{\mathbf{X}}}{\cal M}^e_{r}, \\ \theta_{{({\mathbf{L}},{\mathbf{R}})}}^{\xi_{\mathbf{X}}} &:= \begin{bmatrix} ({\mathbf{U}} \S' \P_2 {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_2^\top + {\mathbf{U}}_\perp {\mathbf{D}}_1 ) \P_2^{-\top}\\ ({\mathbf{V}} \S^{'\top} \P_1 {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_1^\top + {\mathbf{V}}_\perp {\mathbf{D}}_2 )\P_1^{-\top} \end{bmatrix} \in \mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}, \end{split} \end{equation} where $\S'$ in $\theta_{({\mathbf{L}},{\mathbf{R}})}^{\xi_{\mathbf{X}}}$ is uniquely determined by the Sylvester equation $\P_1
{\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_1^\top \S' + \S' \P_2 {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_2^\top = \S$. Then we can find a linear bijective mapping $\mathcal{L}_{{\mathbf{L}},{\mathbf{R}}}^{r}$ between $\mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}$ and $T_{\mathbf{X}} {\cal M}_{r}^e$, \begin{equation*} \mathcal{L}_{{\mathbf{L}},{\mathbf{R}}}^{r}: \theta_{({\mathbf{L}},{\mathbf{R}})} \in \mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1} \longrightarrow \xi_{{\mathbf{X}}}^{\theta_{({\mathbf{L}},{\mathbf{R}})}} \in T_{\mathbf{X}} {\cal M}_{r}^e \quad \text{and} \quad (\mathcal{L}_{{\mathbf{L}},{\mathbf{R}}}^{r})^{-1}: \xi_{\mathbf{X}} \in T_{\mathbf{X}} {\cal M}_{r}^e \to \theta_{({\mathbf{L}},{\mathbf{R}})}^{\xi_{\mathbf{X}}} \in \mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}, \end{equation*} such that $\mathcal{L}_{{\mathbf{L}},{\mathbf{R}}}^{r}(\theta_{({\mathbf{L}},{\mathbf{R}})}) = {\mathbf{L}} \theta_R^\top + \theta_L {\mathbf{R}}^\top$ holds for any $\theta_{({\mathbf{L}},{\mathbf{R}})} \in \mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}$. 
Finally, we have the following spectrum bounds for $\mathcal{L}_{{\mathbf{L}},{\mathbf{R}}}^{r}$: \begin{equation} \label{ineq: bijection-spectrum-general1} \gamma \cdot \bar{g}^{r}_{({\mathbf{L}},{\mathbf{R}})}(\theta_{({\mathbf{L}},{\mathbf{R}})}, \theta_{({\mathbf{L}},{\mathbf{R}})}) \leq \|\mathcal{L}^{r}_{{\mathbf{L}},{\mathbf{R}}}(\theta_{({\mathbf{L}},{\mathbf{R}})})\|_{\rm F}^2 \leq 2 \Gamma \cdot \bar{g}^{r}_{({\mathbf{L}},{\mathbf{R}})}(\theta_{({\mathbf{L}},{\mathbf{R}})}, \theta_{({\mathbf{L}},{\mathbf{R}})}), \quad \forall \theta_{({\mathbf{L}},{\mathbf{R}})} \in \mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}, \end{equation} where $\gamma := \sigma_r(\P_2 {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_2^\top) \wedge \sigma_r(\P_1 {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_1^\top)$ and $\Gamma := \sigma_1(\P_2 {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_2^\top) \vee \sigma_1(\P_1 {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_1^\top)$. \end{Proposition} {\noindent \bf Proof of Proposition \ref{prop: general-bijection1}.} First, the uniqueness of $\S'$ in $\theta_{({\mathbf{L}},{\mathbf{R}})}^{\xi_{\mathbf{X}}}$ is guaranteed by the fact that $\P_1 {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_1^\top$ and $-\P_2 {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_2^\top$ have disjoint spectra and by \cite[Theorem VII.2.1]{bhatia2013matrix}. Thus, $\xi^{\theta_{({\mathbf{L}},{\mathbf{R}})}}_{{\mathbf{X}}}$ and $\theta_{{({\mathbf{L}},{\mathbf{R}})}}^{\xi_{\mathbf{X}}}$ are well defined given any $\theta_{({\mathbf{L}},{\mathbf{R}})}$ and $\xi_{\mathbf{X}}$. Next, we show $\mathcal{L}^r_{{\mathbf{L}},{\mathbf{R}}}$ is a bijection. Notice that both $\mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}$ and $T_{\mathbf{X}} {\cal M}_{r}^e$ are of dimension $(p_1 + p_2-r)r$.
Suppose $\mathcal{L}_{{\mathbf{L}},{\mathbf{R}}}^{r'} : \xi_{\mathbf{X}} \in T_{\mathbf{X}}{\cal M}^e_{r} \longrightarrow \theta_{({\mathbf{L}},{\mathbf{R}})}^{\xi_{\mathbf{X}}} \in \mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}$. For any $\xi_{\mathbf{X}} = [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \S & {\mathbf{D}}_2^\top\\ {\mathbf{D}}_1 & {\mathbf{0}} \end{bmatrix} [{\mathbf{V}} \quad {\mathbf{V}}_\perp]^\top\in T_{\mathbf{X}}{\cal M}^e_{r}$, we have \begin{equation} \label{eq: bijection-general1} \mathcal{L}^{r}_{{\mathbf{L}},{\mathbf{R}}} ( \mathcal{L}_{{\mathbf{L}},{\mathbf{R}}}^{r'}(\xi_{\mathbf{X}}) ) = \mathcal{L}^{r}_{{\mathbf{L}},{\mathbf{R}}} (\theta_{({\mathbf{L}},{\mathbf{R}})}^{\xi_{\mathbf{X}}}) = [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \P_1 \theta_R^{\xi_{\mathbf{X}} \top} {\mathbf{V}} + {\mathbf{U}}^\top \theta_L^{\xi_{\mathbf{X}}} \P_2^\top & \P_1 \theta_R^{\xi_{\mathbf{X}}\top} {\mathbf{V}}_\perp \\ {\mathbf{U}}_\perp^\top \theta_L^{\xi_{\mathbf{X}}} \P_2^\top & {\mathbf{0}} \end{bmatrix}[{\mathbf{V}} \quad {\mathbf{V}}_\perp]^\top = \xi_{\mathbf{X}}. \end{equation} Since $\mathcal{L}^{r}_{{\mathbf{L}},{\mathbf{R}}}$ and $\mathcal{L}_{{\mathbf{L}},{\mathbf{R}}}^{r'}$ are linear maps, \eqref{eq: bijection-general1} implies $\mathcal{L}^{r}_{{\mathbf{L}},{\mathbf{R}}}$ is a bijection and $\mathcal{L}_{{\mathbf{L}},{\mathbf{R}}}^{r'} = (\mathcal{L}^{r}_{{\mathbf{L}},{\mathbf{R}}})^{-1}$. At the same time, it is easy to check $\mathcal{L}_{{\mathbf{L}},{\mathbf{R}}}^{r}$ satisfies $\mathcal{L}_{{\mathbf{L}},{\mathbf{R}}}^{r}(\theta_{({\mathbf{L}},{\mathbf{R}})})={\mathbf{L}} \theta_R^\top + \theta_L {\mathbf{R}}^\top$ by observing ${\mathbf{L}} = {\mathbf{U}} \P_1, {\mathbf{R}} = {\mathbf{V}} \P_2$. Next, we provide the spectrum bounds for $\mathcal{L}^{r}_{{\mathbf{L}},{\mathbf{R}}}$.
For any $\theta_{({\mathbf{L}},{\mathbf{R}})} = [\theta_L^\top \quad \theta_R^\top]^\top \in \mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}$, where $\theta_L = ({\mathbf{U}} \S \P_2{\mathbf{W}}_{{\mathbf{L}}, {\mathbf{R}}}^{-1} \P_2^\top + {\mathbf{U}}_\perp {\mathbf{D}}_1 ) \P_2^{-\top}$ and $\theta_R =({\mathbf{V}} \S^\top \P_1 {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_1^\top + {\mathbf{V}}_\perp {\mathbf{D}}_2 ) \P_1^{-\top}$, we have \begin{equation} \label{ineq: upper-bound-theta-norm-general} \begin{split} &\quad \bar{g}^{r}_{({\mathbf{L}},{\mathbf{R}})}(\theta_{({\mathbf{L}},{\mathbf{R}})}, \theta_{({\mathbf{L}},{\mathbf{R}})}) \\ &= {\rm tr}({\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}} \theta_L^\top \theta_L) + {\rm tr}({\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}} \theta_R^\top \theta_R) = \|\theta_L {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}^{1/2}\|_{\rm F}^2 + \|\theta_R {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{1/2}\|_{\rm F}^2 \\ & \leq (\|\S \P_2{\mathbf{W}}_{{\mathbf{L}}, {\mathbf{R}}}^{-1} \P_2^\top\|_{\rm F}^2 + \|{\mathbf{D}}_1\|_{\rm F}^2 ) \sigma_1^2(\P_2^{-\top} {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}^{1/2}) + (\|\S^\top \P_1 {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_1^\top\|_{\rm F}^2 + \|{\mathbf{D}}_2\|_{\rm F}^2 ) \sigma_1^2(\P_1^{-\top} {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{1/2}) \\ & \leq \frac{1}{ \sigma_r(\P_2 {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_2^\top ) \wedge \sigma_r(\P_1 {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_1^\top ) } (\|\S \P_2{\mathbf{W}}_{{\mathbf{L}}, {\mathbf{R}}}^{-1} \P_2^\top\|_{\rm F}^2 + \|{\mathbf{D}}_1\|_{\rm F}^2 + \|\S^\top \P_1 {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_1^\top\|_{\rm F}^2 + \|{\mathbf{D}}_2\|_{\rm F}^2 ), \end{split} \end{equation} and \begin{equation}\label{ineq: quadratic-S'-bound-general1} \begin{split} &\langle \P_1 {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_1^\top \S, \S \P_2{\mathbf{W}}_{{\mathbf{L}}, {\mathbf{R}}}^{-1} \P_2^\top
\rangle\\ =& \langle (\P_1 {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_1^\top)^{1/2} \S(\P_2{\mathbf{W}}_{{\mathbf{L}}, {\mathbf{R}}}^{-1} \P_2^\top)^{1/2}, (\P_1 {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_1^\top)^{1/2}\S (\P_2{\mathbf{W}}_{{\mathbf{L}}, {\mathbf{R}}}^{-1} \P_2^\top)^{1/2} \rangle \geq 0. \end{split} \end{equation} Thus \begin{equation*} \begin{split} \|\mathcal{L}^{r}_{{\mathbf{L}},{\mathbf{R}}}(\theta_{({\mathbf{L}},{\mathbf{R}})})\|_{\rm F}^2 = \|\xi^{\theta_{({\mathbf{L}},{\mathbf{R}})}}_{{\mathbf{X}}}\|_{\rm F}^2 & \overset{ \eqref{def: xi-theta-correspondence-general1} }= \|\P_1 \theta_R^\top {\mathbf{V}} + {\mathbf{U}}^\top \theta_L \P_2^\top \|_{\rm F}^2 + \|\P_1 \theta_R^\top {\mathbf{V}}_\perp \|_{\rm F}^2 + \|{\mathbf{U}}_\perp^\top \theta_L \P_2^\top\|_{\rm F}^2 \\ & = \| \P_1 {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_1^\top \S + \S \P_2{\mathbf{W}}_{{\mathbf{L}}, {\mathbf{R}}}^{-1} \P_2^\top\|_{\rm F}^2 + \|{\mathbf{D}}_1\|_{\rm F}^2 + \|{\mathbf{D}}_2\|_{\rm F}^2\\ & \overset{\eqref{ineq: quadratic-S'-bound-general1} } \geq \|\S \P_2{\mathbf{W}}_{{\mathbf{L}}, {\mathbf{R}}}^{-1} \P_2^\top\|_{\rm F}^2 + \|\S^\top \P_1 {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_1^\top\|_{\rm F}^2+ \|{\mathbf{D}}_1\|_{\rm F}^2 + \|{\mathbf{D}}_2\|_{\rm F}^2 \\ & \overset{\eqref{ineq: upper-bound-theta-norm-general} } \geq (\sigma_r(\P_2 {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_2^\top ) \wedge \sigma_r(\P_1 {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_1^\top )) \bar{g}^{r}_{({\mathbf{L}},{\mathbf{R}})}(\theta_{({\mathbf{L}},{\mathbf{R}})},\theta_{({\mathbf{L}},{\mathbf{R}})}), \end{split} \end{equation*} and \begin{equation*} \begin{split} \|\mathcal{L}^{r}_{{\mathbf{L}},{\mathbf{R}}}(\theta_{({\mathbf{L}},{\mathbf{R}})})\|_{\rm F}^2 = \|\xi^{\theta_{({\mathbf{L}},{\mathbf{R}})}}_{{\mathbf{X}}}\|_{\rm F}^2 & \overset{ \eqref{def: xi-theta-correspondence-general1} }= \|\P_1 \theta_R^\top {\mathbf{V}} + {\mathbf{U}}^\top \theta_L \P_2^\top \|_{\rm F}^2 + \|\P_1 \theta_R^\top {\mathbf{V}}_\perp \|_{\rm F}^2 + \|{\mathbf{U}}_\perp^\top \theta_L \P_2^\top\|_{\rm F}^2\\ & \leq 2( \|\P_1 \theta_R^\top {\mathbf{V}} \|_{\rm F}^2 + \| {\mathbf{U}}^\top \theta_L \P_2^\top \|_{\rm F}^2) + \|\P_1 \theta_R^\top {\mathbf{V}}_\perp \|_{\rm F}^2 + \|{\mathbf{U}}_\perp^\top \theta_L \P_2^\top\|_{\rm F}^2\\ & \leq 2( \|\P_1 \theta_R^\top \|_{\rm F}^2 + \| \theta_L \P_2^\top \|_{\rm F}^2 ) \\ & \overset{\eqref{ineq: upper-bound-theta-norm-general}} \leq 2( \sigma^2_1({\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}^{-1/2} \P_2^\top ) \vee \sigma^2_1({\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1/2} \P_1^\top ) )\bar{g}^{r}_{({\mathbf{L}},{\mathbf{R}})}(\theta_{({\mathbf{L}},{\mathbf{R}})},\theta_{({\mathbf{L}},{\mathbf{R}})})\\ & = 2( \sigma_1(\P_2{\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_2^\top ) \vee \sigma_1(\P_1{\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \P_1^\top ) )\bar{g}^{r}_{({\mathbf{L}},{\mathbf{R}})}(\theta_{({\mathbf{L}},{\mathbf{R}})},\theta_{({\mathbf{L}},{\mathbf{R}})}). \end{split} \end{equation*} This finishes the proof of this proposition. \quad $\blacksquare$ \begin{Proposition}[Bijection Between $\mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}$ and $T_{\mathbf{X}} {\cal M}_{r}^e$] \label{prop: general-bijection2} Suppose ${\mathbf{U}} \in {\rm St}(r,p_1)$, ${\mathbf{B}} \in \mathbb{S}_+(r)$, ${\mathbf{V}} \in {\rm St}(r,p_2)$ and ${\mathbf{X}} = {\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top$.
For any $\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} = [\theta_U^\top \quad \theta_B^\top \quad \theta_V^\top]^\top \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}$ and $\xi_{\mathbf{X}} = [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \S & {\mathbf{D}}_2^\top\\ {\mathbf{D}}_1 & {\mathbf{0}} \end{bmatrix} [{\mathbf{V}} \quad {\mathbf{V}}_\perp]^\top \in T_{{\mathbf{X}}}{\cal M}^e_{r}$, define \begin{equation} \label{def: xi-theta-correspondence-general2} \begin{split} \xi^{\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}}_{{\mathbf{X}}}&:= [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} {\mathbf{U}}^\top \theta_U {\mathbf{B}} + \theta_B + {\mathbf{B}} \theta_V^\top {\mathbf{V}} & {\mathbf{B}} \theta_V^\top {\mathbf{V}}_\perp \\ {\mathbf{U}}_\perp^\top \theta_U {\mathbf{B}} & {\mathbf{0}} \end{bmatrix}[{\mathbf{V}} \quad {\mathbf{V}}_\perp]^\top \in T_{{\mathbf{X}}}{\cal M}^e_{r}, \\ \theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}^{\xi_{\mathbf{X}}} &:= [({\mathbf{U}}_\perp {\mathbf{D}}_1 {\mathbf{B}}^{-1} + {\mathbf{U}} \boldsymbol{\Omega}')^\top \quad \S' \quad ({\mathbf{V}}_\perp {\mathbf{D}}_2 {\mathbf{B}}^{-1} - {\mathbf{V}} \boldsymbol{\Omega}')^\top]^\top \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}, \end{split} \end{equation} where $\S', \boldsymbol{\Omega}'$ are uniquely determined by the linear equation system: $\boldsymbol{\Omega}' {\mathbf{B}} + \S' - {\mathbf{B}} \boldsymbol{\Omega}^{'\top} = \S$, $\boldsymbol{\Omega}' = - \boldsymbol{\Omega}^{'\top}$, $\S' = \S^{'\top}$. 
Then we can find a linear bijective mapping $\mathcal{L}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}^{r}$ between $\mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}$ and $T_{\mathbf{X}} {\cal M}_{r}^e$, \begin{equation*} \begin{split} &\mathcal{L}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}^{r}: \theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2} \longrightarrow \xi_{{\mathbf{X}}}^{\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}} \in T_{\mathbf{X}} {\cal M}_{r}^e, \\ &(\mathcal{L}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}^{r})^{-1}: \xi_{\mathbf{X}} \in T_{\mathbf{X}} {\cal M}_{r}^e \to \theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}^{\xi_{\mathbf{X}}} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2} \end{split} \end{equation*}such that $\mathcal{L}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}^{r}(\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}) = \theta_U {\mathbf{B}} {\mathbf{V}}^\top + {\mathbf{U}} \theta_B {\mathbf{V}}^\top + {\mathbf{U}} {\mathbf{B}} \theta_V^\top$ holds for any $\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}$. 
Finally, we have the following spectrum bounds for $\mathcal{L}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}^{r}$: for all $\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}$, \begin{equation} \label{ineq: bijection-spectrum-general2} \sigma^2_r({\mathbf{X}})\bar{g}^{r}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}(\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}, \theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}) \leq \|\mathcal{L}^{r}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}(\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})})\|_{\rm F}^2 \leq 2 \sigma^2_1({\mathbf{X}})\bar{g}^{r}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}(\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}, \theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}). \end{equation} \end{Proposition} {\noindent \bf Proof of Proposition \ref{prop: general-bijection2}.} The proof is divided into two steps: in Step 1, we show $\xi_{\mathbf{X}}^{\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}}$ and $\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}^{\xi_{\mathbf{X}}}$ are well defined for any $\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}$ and $\xi_{\mathbf{X}}$; in Step 2, we show $\mathcal{L}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}^{r}$ is a bijection and prove its spectrum bounds. {\bf Step 1.} First, it is clear that for any $\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}$, $\xi_{\mathbf{X}}^{\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}}$ is well defined.
To show $\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}^{\xi_{\mathbf{X}}}$ is well defined given any $\xi_{\mathbf{X}} \in T_{{\mathbf{X}}}{\cal M}^e_{r}$, we need to show that the equation system (i) $\boldsymbol{\Omega}_1 {\mathbf{B}} + \S_1 - {\mathbf{B}} \boldsymbol{\Omega}_1^\top = \S$, (ii) $\boldsymbol{\Omega}_1 = - \boldsymbol{\Omega}_1^\top$, (iii) $\S_1 = \S_1^{\top}$ has a unique solution with respect to $(\S_1, \boldsymbol{\Omega}_1)$. By (i) we have $\S_1 = \S - \boldsymbol{\Omega}_1 {\mathbf{B}} + {\mathbf{B}} \boldsymbol{\Omega}_1^\top$. Plugging this into (iii), we have ${\mathbf{B}} \boldsymbol{\Omega}_1^\top - \boldsymbol{\Omega}_1 {\mathbf{B}} = (\S^\top - \S)/2$. Combining this with (ii) yields ${\mathbf{B}} \boldsymbol{\Omega}_1 + \boldsymbol{\Omega}_1 {\mathbf{B}} = (\S - \S^\top)/2$. So we conclude \begin{equation*} \begin{split} &\{ (\S_1, \boldsymbol{\Omega}_1): \boldsymbol{\Omega}_1 {\mathbf{B}} + \S_1 - {\mathbf{B}} \boldsymbol{\Omega}_1^\top = \S, \boldsymbol{\Omega}_1 = - \boldsymbol{\Omega}_1^\top, \S_1 = \S_1^{\top} \} \\ \subseteq & \{(\S_1, \boldsymbol{\Omega}_1): {\mathbf{B}} \boldsymbol{\Omega}_1 + \boldsymbol{\Omega}_1 {\mathbf{B}} = (\S - \S^\top)/2, \S_1 = \S - \boldsymbol{\Omega}_1 {\mathbf{B}} + {\mathbf{B}} \boldsymbol{\Omega}_1^\top \}. \end{split} \end{equation*} Note that ${\mathbf{B}} \boldsymbol{\Omega}_1 + \boldsymbol{\Omega}_1 {\mathbf{B}} = (\S - \S^\top)/2$ is a Sylvester equation with respect to $\boldsymbol{\Omega}_1$. Since ${\mathbf{B}}$ and $-{\mathbf{B}}$ have disjoint spectra, this equation has a unique solution \cite[Theorem VII.2.1]{bhatia2013matrix}, which we denote by $\boldsymbol{\Omega}'$. Let $\S' = \S - \boldsymbol{\Omega}' {\mathbf{B}} + {\mathbf{B}} \boldsymbol{\Omega}^{'\top}$.
If we can show $\boldsymbol{\Omega}' = - \boldsymbol{\Omega}^{'\top}$ and $\S' = \S^{'\top}$, then we can conclude $(\S',\boldsymbol{\Omega}')$ is the unique solution of the linear equation system $\boldsymbol{\Omega}_1 {\mathbf{B}} + \S_1 - {\mathbf{B}} \boldsymbol{\Omega}_1^\top = \S, \boldsymbol{\Omega}_1 = - \boldsymbol{\Omega}_1^\top, \S_1 = \S_1^{\top}$. Let us first show $\boldsymbol{\Omega}' = - \boldsymbol{\Omega}^{'\top}$. We know $\boldsymbol{\Omega}'$ satisfies ${\mathbf{B}} \boldsymbol{\Omega}' + \boldsymbol{\Omega}' {\mathbf{B}} = (\S - \S^\top)/2$ and $\boldsymbol{\Omega}^{'\top}{\mathbf{B}} + {\mathbf{B}}\boldsymbol{\Omega}^{'\top} = (\S^\top - \S)/2$. By summing these two equations we have ${\mathbf{B}} (\boldsymbol{\Omega}' + \boldsymbol{\Omega}^{'\top}) + (\boldsymbol{\Omega}' + \boldsymbol{\Omega}^{'\top}) {\mathbf{B}} = {\mathbf{0}}$. This is a new Sylvester equation ${\mathbf{B}} \widebar{\boldsymbol{\Omega}} + \widebar{\boldsymbol{\Omega}} {\mathbf{B}} = {\mathbf{0}}$ with respect to $\widebar{\boldsymbol{\Omega}}$, and we know again by \cite[Theorem VII.2.1]{bhatia2013matrix} that ${\mathbf{0}}$ is the unique solution to this system. So we have $\boldsymbol{\Omega}' + \boldsymbol{\Omega}^{'\top} = {\mathbf{0}}$, i.e., $\boldsymbol{\Omega}' = - \boldsymbol{\Omega}^{'\top}$. Then \begin{equation*} \begin{split} \S' = \S - \boldsymbol{\Omega}' {\mathbf{B}} + {\mathbf{B}} \boldsymbol{\Omega}^{'\top} &= \S + \boldsymbol{\Omega}^{'\top} {\mathbf{B}} + {\mathbf{B}} \boldsymbol{\Omega}^{'\top} = \S + (\S^\top - \S)/2 \\ &= (\S^\top + \S)/2 = \S^\top + {\mathbf{B}} \boldsymbol{\Omega}^{'} + \boldsymbol{\Omega}'{\mathbf{B}} = \S^\top - {\mathbf{B}} \boldsymbol{\Omega}^{'\top} + \boldsymbol{\Omega}'{\mathbf{B}} = \S^{'\top}.
\end{split} \end{equation*} So we have shown $\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}^{\xi_{\mathbf{X}}}$ is well defined for any $\xi_{\mathbf{X}} \in T_{{\mathbf{X}}}{\cal M}^e_{r}$. {\bf Step 2.} Notice $\mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}$ is of dimension $(p_1 + p_2 -r)r$, which is the same as $\dim(T_{\mathbf{X}} {\cal M}_{r}^e)$. Suppose $\mathcal{L}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}^{r'} : \xi_{\mathbf{X}} \in T_{\mathbf{X}}{\cal M}^e_{r} \longrightarrow \theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}^{\xi_{\mathbf{X}}} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}$. For any $\xi_{\mathbf{X}} = [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \S & {\mathbf{D}}_2^\top\\ {\mathbf{D}}_1 & {\mathbf{0}} \end{bmatrix} [{\mathbf{V}} \quad {\mathbf{V}}_\perp]^\top \in T_{\mathbf{X}}{\cal M}^e_{r}$, we have \begin{equation} \label{eq: bijection-general2} \begin{split} \mathcal{L}^{r}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}} ( \mathcal{L}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}^{r'}(\xi_{\mathbf{X}}) ) &= \mathcal{L}^{r}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}} (\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}^{\xi_{\mathbf{X}}}) \\ &= [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} {\mathbf{U}}^\top \theta_U^{\xi_{\mathbf{X}}} {\mathbf{B}} + \theta_B^{\xi_{\mathbf{X}}} + {\mathbf{B}} \theta_V^{\xi_{\mathbf{X}} \top} {\mathbf{V}} & {\mathbf{B}} \theta^{\xi_{\mathbf{X}} \top}_V {\mathbf{V}}_\perp \\ {\mathbf{U}}_\perp^\top \theta_U^{\xi_{\mathbf{X}}} {\mathbf{B}} & {\mathbf{0}} \end{bmatrix}[{\mathbf{V}} \quad {\mathbf{V}}_\perp]^\top = \xi_{\mathbf{X}}.
\end{split} \end{equation} Since $\mathcal{L}^{r}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}$ and $\mathcal{L}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}^{r'}$ are linear maps, \eqref{eq: bijection-general2} implies $\mathcal{L}^{r}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}$ is a bijection and $\mathcal{L}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}^{r'} = (\mathcal{L}^{r}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}})^{-1}$. At the same time, it is easy to check $\mathcal{L}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}^{r}(\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}) = \theta_U {\mathbf{B}} {\mathbf{V}}^\top + {\mathbf{U}} \theta_B {\mathbf{V}}^\top + {\mathbf{U}} {\mathbf{B}} \theta_V^\top$ holds for any $\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}$. Next, we provide the spectrum bounds for $\mathcal{L}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}^r$. Consider any $\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} = [\theta_U^\top \quad \theta_B^\top \quad \theta_V^\top]^\top \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}$ with $\theta_U = {\mathbf{U}}_\perp {\mathbf{D}}_1 + {\mathbf{U}} \boldsymbol{\Omega}$, $\theta_V = {\mathbf{V}}_\perp {\mathbf{D}}_2 - {\mathbf{V}} \boldsymbol{\Omega}$, where $\boldsymbol{\Omega} = - \boldsymbol{\Omega}^\top$ and $\theta_B \in \mathbb{S}^{r \times r}$.
We have \begin{equation} \label{eq: upper-bound-theta-norm-general2} \begin{split} \bar{g}^{r}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}(\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}, \theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}) &= \|\theta_U\|_{\rm F}^2 + \|\theta_V\|_{\rm F}^2 + {\rm tr}({\mathbf{B}}^{-1} \theta_B {\mathbf{B}}^{-1} \theta_B) \\ & = \|\theta_U\|_{\rm F}^2 + \|\theta_V\|_{\rm F}^2 + \|{\mathbf{B}}^{-1/2} \theta_B {\mathbf{B}}^{-1/2}\|_{\rm F}^2\\ & = \|{\mathbf{D}}_1\|_{\rm F}^2 + \|{\mathbf{D}}_2\|_{\rm F}^2 + 2 \|\boldsymbol{\Omega}\|_{\rm F}^2 + \|{\mathbf{B}}^{-1/2} \theta_B {\mathbf{B}}^{-1/2}\|_{\rm F}^2, \end{split} \end{equation} and \begin{equation} \label{ineq: quadratic-S'-bound-general2} \begin{split} \langle \boldsymbol{\Omega} {\mathbf{B}} - {\mathbf{B}} \boldsymbol{\Omega}^\top, \theta_B \rangle = \langle {\mathbf{B}} \boldsymbol{\Omega}^\top - \boldsymbol{\Omega} {\mathbf{B}}, \theta_B^\top \rangle \overset{\theta_B = \theta_B^\top}= - \langle \boldsymbol{\Omega} {\mathbf{B}} - {\mathbf{B}} \boldsymbol{\Omega}^\top, \theta_B \rangle \Longrightarrow\langle \boldsymbol{\Omega} {\mathbf{B}} - {\mathbf{B}} \boldsymbol{\Omega}^\top, \theta_B \rangle = 0. 
\end{split} \end{equation} Thus, \begin{equation} \label{ineq: bijection-general2-ineq1} \begin{split} \|\mathcal{L}^r_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}(\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})})\|_{\rm F}^2 = \|\xi_{\mathbf{X}}^{\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}}\|_{\rm F}^2 &\overset{ \eqref{def: xi-theta-correspondence-general2}} = \|{\mathbf{U}}^\top \theta_U {\mathbf{B}} + \theta_B + {\mathbf{B}} \theta_V^\top {\mathbf{V}}\|_{\rm F}^2 + \|{\mathbf{B}} \theta_V^\top {\mathbf{V}}_\perp\|_{\rm F}^2 + \|{\mathbf{U}}_\perp^\top \theta_U {\mathbf{B}}\|_{\rm F}^2 \\ &= \|\boldsymbol{\Omega} {\mathbf{B}} + \theta_B - {\mathbf{B}} \boldsymbol{\Omega}^\top\|_{\rm F}^2 + \|{\mathbf{B}} {\mathbf{D}}_2^\top\|_{\rm F}^2 + \|{\mathbf{D}}_1 {\mathbf{B}}\|_{\rm F}^2\\ & \overset{ \eqref{ineq: quadratic-S'-bound-general2} } = \|\boldsymbol{\Omega} {\mathbf{B}} - {\mathbf{B}} \boldsymbol{\Omega}^\top\|_{\rm F}^2 + \|\theta_B\|_{\rm F}^2 + \|{\mathbf{B}} {\mathbf{D}}_2^\top\|_{\rm F}^2 + \|{\mathbf{D}}_1 {\mathbf{B}}\|_{\rm F}^2 \\ & \overset{ \boldsymbol{\Omega} = -\boldsymbol{\Omega}^\top } = 2 \|\boldsymbol{\Omega} {\mathbf{B}} \|_{\rm F}^2 + 2\|{\mathbf{B}}^{1/2} \boldsymbol{\Omega} {\mathbf{B}}^{1/2} \|_{\rm F}^2 + \|\theta_B\|_{\rm F}^2 + \|{\mathbf{B}} {\mathbf{D}}_2^\top\|_{\rm F}^2 + \|{\mathbf{D}}_1 {\mathbf{B}}\|_{\rm F}^2 \\ & \overset{(a)} \geq \sigma^2_r({\mathbf{X}}) \bar{g}^{r}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}(\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}, \theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}), \end{split} \end{equation} where in (a) we use the fact $\sigma_r({\mathbf{B}}) = \sigma_r({\mathbf{X}})$, and \begin{equation*} \begin{split} \|\mathcal{L}^r_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}(\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})})\|_{\rm F}^2 = \|\xi_{\mathbf{X}}^{\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}}\|_{\rm F}^2 &\overset{ \eqref{ineq: bijection-general2-ineq1} } = 2
\|\boldsymbol{\Omega} {\mathbf{B}} \|_{\rm F}^2 + 2\|{\mathbf{B}}^{1/2} \boldsymbol{\Omega} {\mathbf{B}}^{1/2} \|_{\rm F}^2 + \|\theta_B\|_{\rm F}^2 + \|{\mathbf{B}} {\mathbf{D}}_2^\top\|_{\rm F}^2 + \|{\mathbf{D}}_1 {\mathbf{B}}\|_{\rm F}^2 \\ & \leq 2 \sigma^2_1({\mathbf{X}}) \bar{g}^{r}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}(\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}, \theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}). \end{split} \end{equation*} This finishes the proof of this proposition. \quad $\blacksquare$ \begin{Proposition}[Bijection Between $\mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}$ and $T_{\mathbf{X}} {\cal M}_{r}^e$] \label{prop: general-bijection3} Suppose ${\mathbf{U}} \in {\rm St}(r,p_1)$, ${\mathbf{Y}} \in \mathbb{R}^{p_2 \times r}_*$ and ${\mathbf{X}} = {\mathbf{U}} {\mathbf{Y}}^\top$ with top $r$ right singular subspace spanned by ${\mathbf{V}}$. For any $\theta_{({\mathbf{U}},{\mathbf{Y}})} = [\theta_U^\top \quad \theta_Y^\top]^\top \in \mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}$ and $\xi_{\mathbf{X}} = [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \S & {\mathbf{D}}_2^\top\\ {\mathbf{D}}_1 & {\mathbf{0}} \end{bmatrix} [{\mathbf{V}} \quad {\mathbf{V}}_\perp]^\top \in T_{{\mathbf{X}}}{\cal M}^e_{r}$, define \begin{equation} \label{def: xi-theta-correspondence-general3} \begin{split} \xi^{\theta_{({\mathbf{U}},{\mathbf{Y}})}}_{{\mathbf{X}}}&:= [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \theta_Y^\top {\mathbf{V}} & \theta_Y^\top {\mathbf{V}}_\perp \\ {\mathbf{U}}_\perp^\top \theta_U {\mathbf{Y}}^\top {\mathbf{V}} & {\mathbf{0}} \end{bmatrix}[{\mathbf{V}} \quad {\mathbf{V}}_\perp]^\top \in T_{{\mathbf{X}}}{\cal M}^e_{r}, \\ \theta_{({\mathbf{U}},{\mathbf{Y}})}^{\xi_{\mathbf{X}}} &:= [({\mathbf{U}}_\perp {\mathbf{D}}_1 ({\mathbf{Y}}^\top {\mathbf{V}})^{-1})^\top \quad ({\mathbf{V}} \S^\top + {\mathbf{V}}_\perp {\mathbf{D}}_2)^\top]^\top \in 
\mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}. \end{split} \end{equation}Then we can find a linear bijective mapping $\mathcal{L}_{{\mathbf{U}},{\mathbf{Y}}}^{r}$ between $\mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}$ and $T_{\mathbf{X}} {\cal M}_{r}^e$, \begin{equation*} \begin{split} &\mathcal{L}_{{\mathbf{U}},{\mathbf{Y}}}^{r}: \theta_{({\mathbf{U}},{\mathbf{Y}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3} \longrightarrow \xi_{{\mathbf{X}}}^{\theta_{({\mathbf{U}},{\mathbf{Y}})}} \in T_{\mathbf{X}} {\cal M}_{r}^e, \quad (\mathcal{L}_{{\mathbf{U}},{\mathbf{Y}}}^{r})^{-1}: \xi_{\mathbf{X}} \in T_{\mathbf{X}} {\cal M}_{r}^e \to \theta_{({\mathbf{U}},{\mathbf{Y}})}^{\xi_{\mathbf{X}}} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3} \end{split} \end{equation*}such that $\mathcal{L}_{{\mathbf{U}},{\mathbf{Y}}}^{r}(\theta_{({\mathbf{U}},{\mathbf{Y}})}) = {\mathbf{U}} \theta_Y^\top + \theta_U {\mathbf{Y}}^\top$ holds for any $\theta_{({\mathbf{U}},{\mathbf{Y}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}$. 
Finally, we have the following spectrum bounds for $\mathcal{L}_{{\mathbf{U}},{\mathbf{Y}}}^{r}$: for all $\theta_{({\mathbf{U}},{\mathbf{Y}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}$, \begin{equation} \label{ineq: bijection-spectrum-general3} ( \sigma^2_r({\mathbf{Y}}) \wedge \frac{1}{ \sigma_1({\mathbf{W}}_{\mathbf{Y}})} )\bar{g}^{r}_{({\mathbf{U}},{\mathbf{Y}})}(\theta_{({\mathbf{U}},{\mathbf{Y}})}, \theta_{({\mathbf{U}},{\mathbf{Y}})}) \leq \|\mathcal{L}^{r}_{{\mathbf{U}},{\mathbf{Y}}}(\theta_{({\mathbf{U}},{\mathbf{Y}})})\|_{\rm F}^2 \leq ( \sigma^2_1({\mathbf{Y}}) \vee \frac{1}{ \sigma_r({\mathbf{W}}_{\mathbf{Y}})} )\bar{g}^{r}_{({\mathbf{U}},{\mathbf{Y}})}(\theta_{({\mathbf{U}},{\mathbf{Y}})}, \theta_{({\mathbf{U}},{\mathbf{Y}})}). \end{equation} \end{Proposition} {\noindent \bf Proof of Proposition \ref{prop: general-bijection3}.} First, it is easy to see $\xi^{\theta_{({\mathbf{U}},{\mathbf{Y}})}}_{{\mathbf{X}}}$ and $\theta_{({\mathbf{U}},{\mathbf{Y}})}^{\xi_{\mathbf{X}}}$ are well defined given any $\theta_{({\mathbf{U}},{\mathbf{Y}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}$ and $\xi_{\mathbf{X}} \in T_{{\mathbf{X}}}{\cal M}^e_{r}$. Next, we show $\mathcal{L}_{{\mathbf{U}},{\mathbf{Y}}}^{r}$ is a bijection. Notice $\mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}$ is of dimension $(p_1+p_2-r)r$, which is the same as the dimension of $T_{\mathbf{X}} {\cal M}_{r}^e$. Define $\mathcal{L}_{{\mathbf{U}},{\mathbf{Y}}}^{r'} : \xi_{\mathbf{X}} \in T_{\mathbf{X}}{\cal M}^e_{r} \longrightarrow \theta_{({\mathbf{U}},{\mathbf{Y}})}^{\xi_{\mathbf{X}}} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}$.
For any $\xi_{\mathbf{X}} = [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \S & {\mathbf{D}}_2^\top\\ {\mathbf{D}}_1 & {\mathbf{0}} \end{bmatrix} [{\mathbf{V}} \quad {\mathbf{V}}_\perp]^\top \in T_{\mathbf{X}}{\cal M}^e_{r}$, we have \begin{equation} \label{eq: bijection-general3} \mathcal{L}^{r}_{{\mathbf{U}},{\mathbf{Y}}} ( \mathcal{L}_{{\mathbf{U}},{\mathbf{Y}}}^{r'}(\xi_{\mathbf{X}}) ) = \mathcal{L}^{r}_{{\mathbf{U}},{\mathbf{Y}}} (\theta_{({\mathbf{U}},{\mathbf{Y}})}^{\xi_{\mathbf{X}}}) = [{\mathbf{U}} \quad {\mathbf{U}}_\perp] \begin{bmatrix} \theta_Y^{\xi_{\mathbf{X}} \top}{\mathbf{V}} & \theta^{\xi_{\mathbf{X}} \top}_Y {\mathbf{V}}_\perp \\ {\mathbf{U}}_\perp^\top \theta_U^{\xi_{\mathbf{X}}} {\mathbf{Y}}^\top {\mathbf{V}} & {\mathbf{0}} \end{bmatrix}[{\mathbf{V}} \quad {\mathbf{V}}_\perp]^\top = \xi_{\mathbf{X}}. \end{equation} Since $\mathcal{L}^{r}_{{\mathbf{U}},{\mathbf{Y}}}$ and $\mathcal{L}_{{\mathbf{U}},{\mathbf{Y}}}^{r'}$ are linear maps, \eqref{eq: bijection-general3} implies $\mathcal{L}^{r}_{{\mathbf{U}},{\mathbf{Y}}}$ is a bijection and $\mathcal{L}_{{\mathbf{U}},{\mathbf{Y}}}^{r'} = (\mathcal{L}^{r}_{{\mathbf{U}},{\mathbf{Y}}})^{-1}$. At the same time, it is easy to check $\mathcal{L}_{{\mathbf{U}},{\mathbf{Y}}}^{r}(\theta_{({\mathbf{U}},{\mathbf{Y}})}) = {\mathbf{U}} \theta_Y^\top + \theta_U {\mathbf{Y}}^\top$ holds for any $\theta_{({\mathbf{U}},{\mathbf{Y}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}$ by observing $P_{{\mathbf{U}}_\perp} \theta_U = \theta_U$ and ${\mathbf{Y}}$ lies in the column space of ${\mathbf{V}}$. Next, we provide the spectrum bounds for $\mathcal{L}^{r}_{{\mathbf{U}},{\mathbf{Y}}}$. 
For any $\theta_{({\mathbf{U}},{\mathbf{Y}})} = [({\mathbf{U}}_\perp {\mathbf{D}})^\top \quad \theta_Y^\top]^\top \in \mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}$, we have $\bar{g}^{r}_{({\mathbf{U}},{\mathbf{Y}})}(\theta_{({\mathbf{U}},{\mathbf{Y}})},\theta_{({\mathbf{U}},{\mathbf{Y}})}) = \|{\mathbf{D}}\|_{\rm F}^2 + {\rm tr}({\mathbf{W}}_{\mathbf{Y}} \theta_Y^\top \theta_Y) = \|{\mathbf{D}}\|_{\rm F}^2 + \|\theta_Y {\mathbf{W}}_{\mathbf{Y}}^{1/2} \|_{\rm F}^2$. Thus, we have \begin{equation*} \begin{split} \|\mathcal{L}^{r}_{{\mathbf{U}},{\mathbf{Y}}}(\theta_{({\mathbf{U}},{\mathbf{Y}})})\|_{\rm F}^2 &= \|\theta_Y^\top {\mathbf{V}}\|_{\rm F}^2 + \|\theta_Y^\top {\mathbf{V}}_\perp\|_{\rm F}^2 + \|{\mathbf{U}}_\perp^\top \theta_U {\mathbf{Y}}^\top {\mathbf{V}}\|_{\rm F}^2 \\ & = \|\theta_Y\|_{\rm F}^2 + \|{\mathbf{D}} {\mathbf{Y}}^\top {\mathbf{V}}\|_{\rm F}^2 \overset{(a)}\geq \|{\mathbf{D}}\|_{\rm F}^2 \sigma^2_r({\mathbf{Y}}) + \|\theta_Y {\mathbf{W}}_{\mathbf{Y}}^{1/2} \|_{\rm F}^2/\sigma_1({\mathbf{W}}_{\mathbf{Y}}) \\ &\geq ( \sigma^2_r({\mathbf{Y}}) \wedge \frac{1}{ \sigma_1({\mathbf{W}}_{\mathbf{Y}})} )\bar{g}^{r}_{({\mathbf{U}},{\mathbf{Y}})}(\theta_{({\mathbf{U}},{\mathbf{Y}})}, \theta_{({\mathbf{U}},{\mathbf{Y}})}), \end{split} \end{equation*} where (a) is because ${\mathbf{Y}}$ lies in the column space of ${\mathbf{V}}$, and \begin{equation*} \begin{split} \|\mathcal{L}^{r}_{{\mathbf{U}},{\mathbf{Y}}}(\theta_{({\mathbf{U}},{\mathbf{Y}})})\|_{\rm F}^2 & = \|\theta_Y\|_{\rm F}^2 + \|{\mathbf{D}} {\mathbf{Y}}^\top {\mathbf{V}}\|_{\rm F}^2 \leq (\sigma^2_1({\mathbf{Y}}) \vee \frac{1}{ \sigma_r({\mathbf{W}}_{\mathbf{Y}})})\bar{g}^{r}_{({\mathbf{U}},{\mathbf{Y}})}(\theta_{({\mathbf{U}},{\mathbf{Y}})}, \theta_{({\mathbf{U}},{\mathbf{Y}})}).
\end{split} \end{equation*} \quad $\blacksquare$

Now, we are ready to present our main results on the geometric landscape connection of Riemannian fixed-rank matrix optimization \eqref{eq: general prob} under the embedded and quotient geometries. \begin{Theorem}[Geometric Landscape Connections of \eqref{eq: general prob} on ${\cal M}_{r}^e$ and ${\cal M}_{r}^{q_1}$] \label{th: embedded-quotient-connection-general1} Suppose the conditions in Proposition \ref{prop: general-bijection1} hold and the ${\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}$ and ${\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}$ in $\bar{g}^r_{({\mathbf{L}},{\mathbf{R}})}$ satisfy ${\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}} = {\mathbf{M}} {\mathbf{W}}_{{\mathbf{L}}{\mathbf{M}},{\mathbf{R}} {\mathbf{M}}^{-\top}} {\mathbf{M}}^\top$, ${\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}} = {\mathbf{M}}^{-\top } {\mathbf{V}}_{{\mathbf{L}}{\mathbf{M}},{\mathbf{R}} {\mathbf{M}}^{-\top}} {\mathbf{M}}^{-1}$ for any ${\mathbf{M}} \in {\rm GL}(r)$. Then \begin{equation} \label{eq: gradient-connect-general1} \begin{split} {\rm grad} f({\mathbf{X}}) &= \overline{{\rm grad}_{\mathbf{L}}\, h_{r}([{\mathbf{L}},{\mathbf{R}}])} {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}} {\mathbf{R}}^\dagger + ( \overline{{\rm grad}_{\mathbf{R}}\, h_{r}([{\mathbf{L}},{\mathbf{R}}])} {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}} {\mathbf{L}}^\dagger )^\top ({\mathbf{I}}_{p_2} - {\mathbf{R}} {\mathbf{R}}^\dagger) \\ \overline{{\rm grad}\, h_{r}([{\mathbf{L}},{\mathbf{R}}])} &= \begin{bmatrix} {\rm grad} f({\mathbf{X}}) {\mathbf{R}} {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \\ ({\rm grad} f({\mathbf{X}}))^\top {\mathbf{L}} {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1} \end{bmatrix}.
\end{split} \end{equation} Furthermore, if $[{\mathbf{L}},{\mathbf{R}}]$ is a Riemannian FOSP of $h_{r}([{\mathbf{L}}',{\mathbf{R}}'])$ defined via \eqref{eq: general-opt-problem-quotient-sub1}, we have: \begin{equation} \label{eq: Hessian-connection-general1} \begin{split} \overline{{\rm Hess} \, h_{r}([{\mathbf{L}},{\mathbf{R}}])}[\theta_{({\mathbf{L}},{\mathbf{R}})}, \theta_{({\mathbf{L}},{\mathbf{R}})}] &= {\rm Hess} f({\mathbf{X}})[\mathcal{L}_{{\mathbf{L}},{\mathbf{R}}}^{r}(\theta_{({\mathbf{L}},{\mathbf{R}})}),\mathcal{L}_{{\mathbf{L}},{\mathbf{R}}}^{r}(\theta_{({\mathbf{L}},{\mathbf{R}})})], \quad \forall \theta_{({\mathbf{L}},{\mathbf{R}})} \in \mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}. \end{split} \end{equation} Finally, $\overline{{\rm Hess} \, h_{r}([{\mathbf{L}},{\mathbf{R}}])}$ and ${\rm Hess} f({\mathbf{X}})$ have $(p_1 + p_2-r)r$ eigenvalues and for $i = 1,\ldots, (p_1 + p_2-r)r$, we have $\lambda_i(\overline{{\rm Hess} \, h_{r}([{\mathbf{L}},{\mathbf{R}}])})$ is sandwiched between $\gamma \lambda_i({\rm Hess} f({\mathbf{X}})) $ and $2\Gamma \lambda_i({\rm Hess} f({\mathbf{X}})) $, where $\gamma$ and $\Gamma$ are given in \eqref{ineq: bijection-spectrum-general1}. \end{Theorem} {\noindent \bf Proof of Theorem \ref{th: embedded-quotient-connection-general1}.} First, recall ${\mathbf{U}}$ and ${\mathbf{V}}$ span the top $r$ left and right singular subspaces of ${\mathbf{L}}{\mathbf{R}}^\top$, respectively, and we have ${\mathbf{L}} {\mathbf{L}}^\dagger = P_{\mathbf{U}}$, ${\mathbf{R}} {\mathbf{R}}^\dagger = P_{\mathbf{V}}$. Hence \eqref{eq: gradient-connect-general1} follows by direct calculation from the gradient expressions in Proposition \ref{prop: gradient-hessian-exp-general}. Next, we prove \eqref{eq: Hessian-connection-general1}.
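As a quick numerical sanity check of the gradient correspondence \eqref{eq: gradient-connect-general1} (a sketch, not part of the formal development; the NumPy setup and variable names are ours), take the admissible metric choice ${\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}} = {\mathbf{R}}^\top {\mathbf{R}}$, ${\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}} = {\mathbf{L}}^\top {\mathbf{L}}$: computing $(\overline{{\rm grad}_{\mathbf{L}}}, \overline{{\rm grad}_{\mathbf{R}}})$ from a tangent ${\rm grad} f({\mathbf{X}})$ via the second line of the display and substituting into the first line should recover ${\rm grad} f({\mathbf{X}})$ exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
p1, p2, r = 7, 6, 3
L = rng.standard_normal((p1, r))   # full column rank a.s.
R = rng.standard_normal((p2, r))

# Orthogonal projectors onto col(L) = col(U) and col(R) = col(V).
P_U = L @ np.linalg.inv(L.T @ L) @ L.T
P_V = R @ np.linalg.inv(R.T @ R) @ R.T

# A generic element of the embedded tangent space: project an arbitrary G.
G = rng.standard_normal((p1, p2))
grad_f = P_U @ G + G @ P_V - P_U @ G @ P_V

# One admissible metric choice (an assumption of this check).
W = R.T @ R        # plays the role of W_{L,R}
V_m = L.T @ L      # plays the role of V_{L,R}

# Second line of the display: (grad_L, grad_R) from grad f(X).
grad_L = grad_f @ R @ np.linalg.inv(W)
grad_R = grad_f.T @ L @ np.linalg.inv(V_m)

# First line of the display: reconstruct grad f(X) from (grad_L, grad_R).
R_pinv, L_pinv = np.linalg.pinv(R), np.linalg.pinv(L)
recon = grad_L @ W @ R_pinv \
    + (grad_R @ V_m @ L_pinv).T @ (np.eye(p2) - R @ R_pinv)
assert np.allclose(recon, grad_f)
```

The reconstruction collapses to the tangent projection $P_{\mathbf{U}} M + M P_{\mathbf{V}} - P_{\mathbf{U}} M P_{\mathbf{V}}$ applied to ${\rm grad} f({\mathbf{X}})$, which is why the round trip is exact.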
Since $[{\mathbf{L}},{\mathbf{R}}]$ is a Riemannian FOSP of $h_{r}([{\mathbf{L}}',{\mathbf{R}}'])$, we have \begin{equation} \label{eq: FOSP-condition-general1} \overline{ {\rm grad} \, h_{r}([{\mathbf{L}},{\mathbf{R}}])} = {\mathbf{0}},\quad \nabla f({\mathbf{L}} {\mathbf{R}}^\top) {\mathbf{R}} = {\mathbf{0}} \quad \text{ and } \left(\nabla f({\mathbf{L}} {\mathbf{R}}^\top) \right)^\top {\mathbf{L}} = {\mathbf{0}} \end{equation} and have ${\rm grad} f({\mathbf{X}}) = {\mathbf{0}}$ by \eqref{eq: gradient-connect-general1}. So $ \nabla f({\mathbf{X}}) = P_{{\mathbf{U}}_\perp} \nabla f({\mathbf{X}}) P_{{\mathbf{V}}_\perp}$. Recall $\P_1 = {\mathbf{U}}^\top {\mathbf{L}}$, $\P_2 = {\mathbf{V}}^\top {\mathbf{R}}$, ${\mathbf{X}} = {\mathbf{L}} {\mathbf{R}}^\top$ and let $\boldsymbol{\Sigma} = {\mathbf{U}}^\top {\mathbf{X}} {\mathbf{V}}$. Given any $\theta_{({\mathbf{L}},{\mathbf{R}})} = [\theta_L^\top \quad \theta_R^\top]^\top \in \mathcal{H}_{({\mathbf{L}},{\mathbf{R}})} \widebar{{\cal M}}_{r}^{q_1}$, we have \begin{equation} \label{eq: Hessian-con-gradient-general1} \langle \nabla f({\mathbf{X}}), P_{{\mathbf{U}}_\perp} \theta_L \P_2^{\top} \boldsymbol{\Sigma}^{-1} \P_1 \theta_R^\top P_{{\mathbf{V}}_\perp} \rangle = \langle \nabla f({\mathbf{X}}), \theta_L \theta_R^\top \rangle, \end{equation} where the equality is because $\P_1,\P_2$ are nonsingular, $\P_1 \P_2^\top = \boldsymbol{\Sigma}$ and $ \nabla f({\mathbf{X}}) = P_{{\mathbf{U}}_\perp} \nabla f({\mathbf{X}}) P_{{\mathbf{V}}_\perp} $. 
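The algebraic fact behind \eqref{eq: Hessian-con-gradient-general1} is that $\P_2^\top \boldsymbol{\Sigma}^{-1} \P_1 = \P_2^\top (\P_1 \P_2^\top)^{-1} \P_1 = {\mathbf{I}}_r$. This can be checked numerically (a sketch with our own setup, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
p1, p2, r = 7, 6, 3
L = rng.standard_normal((p1, r))
R = rng.standard_normal((p2, r))
X = L @ R.T                         # rank r a.s.

# U, V: top-r left/right singular subspaces of X = L R^T.
U_full, _, Vt = np.linalg.svd(X)
U, V = U_full[:, :r], Vt[:r, :].T

P1 = U.T @ L                        # r x r, nonsingular since col(U) = col(L)
P2 = V.T @ R                        # r x r, nonsingular since col(V) = col(R)
Sigma = U.T @ X @ V                 # equals P1 @ P2.T

# P2^T Sigma^{-1} P1 = P2^T (P1 P2^T)^{-1} P1 = I_r.
assert np.allclose(P2.T @ np.linalg.solve(Sigma, P1), np.eye(r))
```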
Then by Proposition \ref{prop: gradient-hessian-exp-general}: \begin{equation*} \begin{split} &\quad \overline{{\rm Hess} \, h_{r}([{\mathbf{L}},{\mathbf{R}}])}[\theta_{({\mathbf{L}},{\mathbf{R}})}, \theta_{({\mathbf{L}},{\mathbf{R}})}] \\ &= \nabla^2 f({\mathbf{L}} {\mathbf{R}}^\top)[{\mathbf{L}} \theta_R^\top + \theta_L {\mathbf{R}}^\top,{\mathbf{L}} \theta_R^\top + \theta_L {\mathbf{R}}^\top] + 2 \langle \nabla f({\mathbf{L}} {\mathbf{R}}^\top), \theta_L \theta_R^\top \rangle \\ & \quad + \langle \nabla f({\mathbf{L}} {\mathbf{R}}^\top) {\mathbf{R}} {\rm D} {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}^{-1}[\theta_{({\mathbf{L}},{\mathbf{R}})}] , \theta_L {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}} \rangle + \langle \left(\nabla f({\mathbf{L}} {\mathbf{R}}^\top) \right)^\top {\mathbf{L}} {\rm D} {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}^{-1}[\theta_{({\mathbf{L}},{\mathbf{R}})}], \theta_R {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}} \rangle \\ & \quad + \langle {\rm D} {\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}}[ \overline{ {\rm grad} \, h_{r}([{\mathbf{L}},{\mathbf{R}}])} ], \theta_L^\top \theta_L \rangle /2 + \langle {\rm D} {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}}[ \overline{ {\rm grad} \, h_{r}([{\mathbf{L}},{\mathbf{R}}])} ], \theta_R^\top \theta_R \rangle /2\\ & \overset{ \eqref{eq: FOSP-condition-general1} } = \nabla^2 f({\mathbf{L}} {\mathbf{R}}^\top)[{\mathbf{L}} \theta_R^\top + \theta_L {\mathbf{R}}^\top,{\mathbf{L}} \theta_R^\top + \theta_L {\mathbf{R}}^\top] + 2 \langle \nabla f({\mathbf{L}} {\mathbf{R}}^\top), \theta_L \theta_R^\top \rangle \\ & \overset{ \text{Proposition } \ref{prop: general-bijection1}, \eqref{eq: Hessian-con-gradient-general1} } = \nabla^2 f({\mathbf{X}})[\mathcal{L}^r_{{\mathbf{L}},{\mathbf{R}}}(\theta_{({\mathbf{L}},{\mathbf{R}})}),\mathcal{L}^r_{{\mathbf{L}},{\mathbf{R}}}(\theta_{({\mathbf{L}},{\mathbf{R}})})] + 2 \langle \nabla f({\mathbf{X}}), P_{{\mathbf{U}}_\perp} \theta_L \P_2^{\top} \boldsymbol{\Sigma}^{-1} \P_1 \theta_R^\top 
P_{{\mathbf{V}}_\perp} \rangle\\ & = {\rm Hess} f({\mathbf{X}})[\mathcal{L}^r_{{\mathbf{L}},{\mathbf{R}}}(\theta_{({\mathbf{L}},{\mathbf{R}})}),\mathcal{L}^r_{{\mathbf{L}},{\mathbf{R}}}(\theta_{({\mathbf{L}},{\mathbf{R}})})], \end{split} \end{equation*} where the last equality follows from the expression of ${\rm Hess} f({\mathbf{X}})$ in \eqref{eq: embedded-gd-hessian-general} and the definition of $\mathcal{L}^r_{{\mathbf{L}},{\mathbf{R}}}$. Then, by \eqref{ineq: bijection-spectrum-general1}, \eqref{eq: Hessian-connection-general1} and Theorem \ref{th: hessian-sandwich}, we have $\overline{{\rm Hess} \, h_{r}([{\mathbf{L}},{\mathbf{R}}])}$ and ${\rm Hess} f({\mathbf{X}})$ have $(p_1+p_2- r)r$ eigenvalues and $\lambda_i(\overline{{\rm Hess} \, h_{r}([{\mathbf{L}},{\mathbf{R}}])})$ is sandwiched between $\gamma \lambda_i({\rm Hess} f({\mathbf{X}})) $ and $2\Gamma \lambda_i({\rm Hess} f({\mathbf{X}})) $ for $i = 1,\ldots,(p_1+p_2- r)r$, where $\gamma$ and $\Gamma$ are given in \eqref{ineq: bijection-spectrum-general1}.\quad $\blacksquare$ \begin{Theorem}[Geometric Landscape Connections of \eqref{eq: general prob} on ${\cal M}_{r}^e$ and ${\cal M}_{r}^{q_2}$] \label{th: embedded-quotient-connection-general2} Suppose ${\mathbf{U}} \in {\rm St}(r,p_1)$, ${\mathbf{B}} \in \mathbb{S}_+(r)$, ${\mathbf{V}} \in {\rm St}(r,p_2)$ and ${\mathbf{X}} = {\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top$.
Then \begin{equation} \label{eq: gradient-connect-general2} \begin{split} {\rm grad} f({\mathbf{X}}) &= P_{{\mathbf{U}}_\perp} \overline{{\rm grad}_{\mathbf{U}}\, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])} {\mathbf{B}}^{-1} {\mathbf{V}}^\top + {\mathbf{U}} \boldsymbol{\Delta}_1 {\mathbf{V}}^\top + (P_{{\mathbf{V}}_\perp} \overline{{\rm grad}_{\mathbf{V}}\, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])} {\mathbf{B}}^{-1} {\mathbf{U}}^\top)^\top \end{split} \end{equation} where $\boldsymbol{\Delta}_1$ is uniquely determined by the equation system: \begin{equation*} \begin{split} &{\mathbf{B}}\skew^\top(\boldsymbol{\Delta}_1 ) + \skew^\top(\boldsymbol{\Delta}_1 ) {\mathbf{B}} = 2 \overline{{\rm grad}_{\mathbf{U}}\, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])}^\top {\mathbf{U}},\\ &{\rm Sym}(\boldsymbol{\Delta}_1) = {\mathbf{B}}^{-1} \overline{{\rm grad}_{\mathbf{B}}\, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])} {\mathbf{B}}^{-1}, \end{split} \end{equation*} and \begin{equation} \label{eq: gradient-connect-general2-secondone} \begin{split} \overline{{\rm grad}\, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])} &= \begin{bmatrix} P_{{\mathbf{U}}_\perp}{\rm grad} f({\mathbf{X}}) {\mathbf{V}} {\mathbf{B}} + {\mathbf{U}} ( \skew(\boldsymbol{\Delta}_2) {\mathbf{B}} + {\mathbf{B}} \skew(\boldsymbol{\Delta}_2) )/2 \\ {\mathbf{B}} {\rm Sym}(\boldsymbol{\Delta}_2) \\ ({\mathbf{B}} {\mathbf{U}}^\top {\rm grad} f({\mathbf{X}}) P_{{\mathbf{V}}_\perp} )^\top - {\mathbf{V}} ( \skew(\boldsymbol{\Delta}_2) {\mathbf{B}} + {\mathbf{B}} \skew(\boldsymbol{\Delta}_2) )/2 \end{bmatrix}, \end{split} \end{equation} where $\boldsymbol{\Delta}_2 = {\mathbf{U}}^\top {\rm grad} f({\mathbf{X}}) {\mathbf{V}}$.
Furthermore, if $[{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}]$ is a Riemannian FOSP of $h_{r}([{\mathbf{U}}',{\mathbf{B}}',{\mathbf{V}}'])$ defined via \eqref{eq: general-opt-problem-quotient-sub2}, we have: for any $ \theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}$, \begin{equation} \label{eq: Hessian-connection-general2} \begin{split} \overline{{\rm Hess} \, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])}[\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}, \theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}] &= {\rm Hess} f({\mathbf{X}})[\mathcal{L}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}^{r}(\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}),\mathcal{L}_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}^{r}(\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})})]. \end{split} \end{equation} Finally, $\overline{{\rm Hess} \, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])}$ has $(p_1 + p_2-r)r$ eigenvalues and for $i = 1,\ldots, (p_1 + p_2-r)r$, we have $\lambda_i(\overline{{\rm Hess} \, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])})$ is sandwiched between $\sigma^2_r({\mathbf{X}})\lambda_i({\rm Hess} f({\mathbf{X}})) $ and $2\sigma^2_1({\mathbf{X}}) \lambda_i({\rm Hess} f({\mathbf{X}})) $. 
\end{Theorem} {\noindent \bf Proof of Theorem \ref{th: embedded-quotient-connection-general2}.} First, recall $\boldsymbol{\Delta} = {\mathbf{U}}^\top \nabla f({\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top) {\mathbf{V}}$ and notice ${\mathbf{B}}\skew^\top(\boldsymbol{\Delta}_1 ) + \skew^\top(\boldsymbol{\Delta}_1 ) {\mathbf{B}} = 2 \overline{{\rm grad}_{\mathbf{U}}\, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])}^\top {\mathbf{U}} = {\mathbf{B}}\skew^\top(\boldsymbol{\Delta} ) + \skew^\top(\boldsymbol{\Delta} ) {\mathbf{B}}$ is a Sylvester equation with respect to $\skew^\top(\boldsymbol{\Delta}_1)$, which has a unique solution since ${\mathbf{B}}$ and $-{\mathbf{B}}$ have disjoint spectra \cite[Theorem VII.2.1]{bhatia2013matrix}. Since $\boldsymbol{\Delta}_1 = \skew(\boldsymbol{\Delta}_1) + {\rm Sym}(\boldsymbol{\Delta}_1)$, $\boldsymbol{\Delta}_1$ is uniquely determined by the equation system $ {\mathbf{B}}\skew^\top(\boldsymbol{\Delta}_1 ) + \skew^\top(\boldsymbol{\Delta}_1 ) {\mathbf{B}} = 2 \overline{{\rm grad}_{\mathbf{U}}\, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])}^\top {\mathbf{U}}, {\rm Sym}(\boldsymbol{\Delta}_1) = {\mathbf{B}}^{-1} \overline{{\rm grad}_{\mathbf{B}}\, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])} {\mathbf{B}}^{-1}$. Finally, because $\boldsymbol{\Delta}$ is a solution to this equation system, we have $\boldsymbol{\Delta}_1 = \boldsymbol{\Delta} = {\mathbf{U}}^\top \nabla f({\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top) {\mathbf{V}}$. The remaining claims in \eqref{eq: gradient-connect-general2} and \eqref{eq: gradient-connect-general2-secondone} follow by direct calculation from the gradient expressions in Proposition \ref{prop: gradient-hessian-exp-general}. Next, we prove \eqref{eq: Hessian-connection-general2}.
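The uniqueness step in the Sylvester argument above can be illustrated numerically (a sketch with our own setup, not part of the proof): for symmetric positive definite ${\mathbf{B}}$, the spectra of ${\mathbf{B}}$ and $-{\mathbf{B}}$ are disjoint, so the map ${\mathbf{Z}} \mapsto {\mathbf{B}} {\mathbf{Z}} + {\mathbf{Z}} {\mathbf{B}}$ is invertible and a planted skew-symmetric solution is recovered exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
r = 4
A = rng.standard_normal((r, r))
B = A @ A.T + r * np.eye(r)    # symmetric positive definite

S = rng.standard_normal((r, r))
S = (S - S.T) / 2              # planted skew-symmetric solution
C = B @ S + S @ B              # right-hand side of the Sylvester equation

# vec form of Z -> B Z + Z B (column-stacking): I (x) B + B^T (x) I.
# This matrix is invertible exactly because spec(B), spec(-B) are disjoint.
K = np.kron(np.eye(r), B) + np.kron(B.T, np.eye(r))
Z = np.linalg.solve(K, C.reshape(-1, order="F")).reshape((r, r), order="F")
assert np.allclose(Z, S)
```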
Since $[{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}]$ is a Riemannian FOSP of $h_{r}([{\mathbf{U}}',{\mathbf{B}}',{\mathbf{V}}'])$, we have \begin{equation} \label{eq: FOSP-condition-general2} \overline{ {\rm grad} \, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])} = {\mathbf{0}},\quad {\rm grad} f({\mathbf{X}}) \overset{\eqref{eq: gradient-connect-general2}}= {\mathbf{0}}, \quad \nabla f({\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top) {\mathbf{V}} = {\mathbf{0}} \,\text{ and }\, {\mathbf{U}}^\top\nabla f({\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top) = {\mathbf{0}}. \end{equation} So $ \nabla f({\mathbf{X}}) = P_{{\mathbf{U}}_\perp} \nabla f({\mathbf{X}}) P_{{\mathbf{V}}_\perp}$. Given any $\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} = [\theta_U^\top \quad \theta_B^\top \quad \theta_V^\top]^\top \in \mathcal{H}_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})} \widebar{{\cal M}}_{r}^{q_2}$, we have \begin{equation} \label{eq: Hessian-con-gradient-general2} \langle \nabla f({\mathbf{X}}), P_{{\mathbf{U}}_\perp} \theta_U {\mathbf{B}} {\mathbf{B}}^{-1} {\mathbf{B}} \theta_V^\top P_{{\mathbf{V}}_\perp} \rangle = \langle \nabla f({\mathbf{X}}), \theta_U {\mathbf{B}} \theta_V^\top \rangle, \end{equation} where the equality is because $ \nabla f({\mathbf{X}}) = P_{{\mathbf{U}}_\perp} \nabla f({\mathbf{X}}) P_{{\mathbf{V}}_\perp} $. 
Then by Proposition \ref{prop: gradient-hessian-exp-general} and recall $\boldsymbol{\Delta}' = \theta_U^\top\nabla f({\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top) {\mathbf{V}}$ and $\boldsymbol{\Delta}'' = {\mathbf{U}}^\top\nabla f({\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top) \theta_V$, we have \begin{equation*} \begin{split} &\quad \overline{{\rm Hess} \, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])}[\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}, \theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}] \\ &= \nabla^2 f({\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top)[\theta_U {\mathbf{B}} {\mathbf{V}}^\top + {\mathbf{U}} \theta_B {\mathbf{V}}^\top + {\mathbf{U}} {\mathbf{B}} \theta_V^\top, \theta_U {\mathbf{B}} {\mathbf{V}}^\top + {\mathbf{U}} \theta_B {\mathbf{V}}^\top + {\mathbf{U}} {\mathbf{B}} \theta_V^\top] + 2\langle \nabla f({\mathbf{U}}{\mathbf{B}}{\mathbf{V}}^\top), \theta_U {\mathbf{B}} \theta_V^\top \rangle\\ & + \left\langle \boldsymbol{\Delta}, {\rm Sym}({\mathbf{U}}^\top \theta_U {\mathbf{U}}^\top\theta_U) {\mathbf{B}} + {\mathbf{B}} {\rm Sym}({\mathbf{V}}^\top \theta_V {\mathbf{U}}^\top \theta_U) -2\theta_U^\top \theta_U {\mathbf{B}} \right\rangle/2\\ & + \left\langle \boldsymbol{\Delta}, {\mathbf{B}}{\rm Sym}({\mathbf{V}}^\top \theta_V {\mathbf{V}}^\top\theta_V) + {\rm Sym}({\mathbf{U}}^\top \theta_U {\mathbf{V}}^\top \theta_V) {\mathbf{B}} -2{\mathbf{B}}\theta_V^\top \theta_V + 2\theta_B {\mathbf{B}}^{-1} \theta_B \right\rangle/2\\ & + \langle \boldsymbol{\Delta}', 2\theta_B - {\mathbf{U}}^\top \theta_U {\mathbf{B}} - \theta_U^\top {\mathbf{U}} {\mathbf{B}}/2 -{\mathbf{V}}^\top \theta_V {\mathbf{B}}/2 \rangle + \langle \boldsymbol{\Delta}'', 2\theta_B - {\mathbf{B}}\theta_V^\top {\mathbf{V}} - {\mathbf{B}}{\mathbf{V}}^\top \theta_V/2 -{\mathbf{B}} \theta_U^\top {\mathbf{U}} /2 \rangle,\\ & \overset{ \eqref{eq: FOSP-condition-general2} } = \nabla^2 f({\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top)[\theta_U {\mathbf{B}} {\mathbf{V}}^\top + {\mathbf{U}} 
\theta_B {\mathbf{V}}^\top + {\mathbf{U}} {\mathbf{B}} \theta_V^\top, \theta_U {\mathbf{B}} {\mathbf{V}}^\top + {\mathbf{U}} \theta_B {\mathbf{V}}^\top + {\mathbf{U}} {\mathbf{B}} \theta_V^\top] + 2\langle \nabla f({\mathbf{U}}{\mathbf{B}}{\mathbf{V}}^\top), \theta_U {\mathbf{B}} \theta_V^\top \rangle \\ & \overset{ \text{Proposition } \ref{prop: general-bijection2}, \eqref{eq: Hessian-con-gradient-general2} } = \nabla^2 f({\mathbf{X}})[\mathcal{L}^r_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}(\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}),\mathcal{L}^r_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}(\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})})] + 2\langle \nabla f({\mathbf{X}}), P_{{\mathbf{U}}_\perp} \theta_U {\mathbf{B}} {\mathbf{B}}^{-1} {\mathbf{B}} \theta_V^\top P_{{\mathbf{V}}_\perp} \rangle\\ & = {\rm Hess} f({\mathbf{X}})[\mathcal{L}^r_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}(\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})}),\mathcal{L}^r_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}(\theta_{({\mathbf{U}},{\mathbf{B}},{\mathbf{V}})})], \end{split} \end{equation*} where the last equality follows from the expression of ${\rm Hess} f({\mathbf{X}})$ in \eqref{eq: embedded-gd-hessian-general} and the definition of $\mathcal{L}^r_{{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}}$. Then, by \eqref{ineq: bijection-spectrum-general2}, \eqref{eq: Hessian-connection-general2} and Theorem \ref{th: hessian-sandwich}, we have $\overline{{\rm Hess} \, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])}$ has $(p_1+p_2- r)r$ eigenvalues and $\lambda_i(\overline{{\rm Hess} \, h_{r}([{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}])})$ is sandwiched between $\sigma^2_r({\mathbf{X}})\lambda_i({\rm Hess} f({\mathbf{X}})) $ and $2\sigma^2_1({\mathbf{X}}) \lambda_i({\rm Hess} f({\mathbf{X}})) $ for $i = 1,\ldots,(p_1+p_2- r)r$.
\quad $\blacksquare$ \begin{Theorem}[Geometric Landscape Connections of \eqref{eq: general prob} on ${\cal M}_{r}^e$ and ${\cal M}_{r}^{q_3}$] \label{th: embedded-quotient-connection-general3} Suppose the conditions in Proposition \ref{prop: general-bijection3} hold and the ${\mathbf{W}}_{\mathbf{Y}}$ in $\bar{g}^r_{({\mathbf{U}},{\mathbf{Y}})}$ satisfies ${\mathbf{W}}_{\mathbf{Y}} = \O {\mathbf{W}}_{{\mathbf{Y}}\O} \O^\top$ for any $\O \in \mathbb{O}_r$. Then \begin{equation} \label{eq: gradient-connect-general3} \begin{split} {\rm grad} f({\mathbf{X}}) &= \overline{{\rm grad}_{\mathbf{U}}\, h_{r}([{\mathbf{U}},{\mathbf{Y}}])}{\mathbf{Y}}^\dagger + \left(\overline{{\rm grad}_{\mathbf{Y}}\, h_{r}([{\mathbf{U}},{\mathbf{Y}}])} {\mathbf{W}}_{\mathbf{Y}} {\mathbf{U}}^\top \right)^\top \\ \overline{{\rm grad}\, h_{r}([{\mathbf{U}},{\mathbf{Y}}])} &= \begin{bmatrix} P_{{\mathbf{U}}_\perp}{\rm grad} f({\mathbf{X}}) {\mathbf{Y}} \\ ({\rm grad} f({\mathbf{X}}))^\top {\mathbf{U}} {\mathbf{W}}_{\mathbf{Y}}^{-1} \end{bmatrix}. \end{split} \end{equation} Furthermore, if $[{\mathbf{U}},{\mathbf{Y}}]$ is a Riemannian FOSP of $h_{r}([{\mathbf{U}}',{\mathbf{Y}}'])$ defined via \eqref{eq: general-opt-problem-quotient-sub3}, we have: \begin{equation} \label{eq: Hessian-connection-general3} \begin{split} \overline{{\rm Hess} \, h_{r}([{\mathbf{U}},{\mathbf{Y}}])}[\theta_{({\mathbf{U}},{\mathbf{Y}})}, \theta_{({\mathbf{U}},{\mathbf{Y}})}] &= {\rm Hess} f({\mathbf{X}})[\mathcal{L}_{{\mathbf{U}},{\mathbf{Y}}}^{r}(\theta_{({\mathbf{U}},{\mathbf{Y}})}),\mathcal{L}_{{\mathbf{U}},{\mathbf{Y}}}^{r}(\theta_{({\mathbf{U}},{\mathbf{Y}})})], \quad \forall \theta_{({\mathbf{U}},{\mathbf{Y}})} \in \mathcal{H}_{({\mathbf{U}},{\mathbf{Y}})} \widebar{{\cal M}}_{r}^{q_3}. 
\end{split} \end{equation} Finally, $\overline{{\rm Hess} \, h_{r}([{\mathbf{U}},{\mathbf{Y}}])}$ has $(p_1 + p_2-r)r$ eigenvalues and for $i = 1,\ldots, (p_1 + p_2-r)r$, we have $\lambda_i(\overline{{\rm Hess} \, h_{r}([{\mathbf{U}},{\mathbf{Y}}])})$ is sandwiched between $ ( \sigma^2_r({\mathbf{Y}}) \wedge \frac{1}{ \sigma_1({\mathbf{W}}_{\mathbf{Y}})} ) \lambda_i({\rm Hess} f({\mathbf{X}})) $ and $( \sigma^2_1({\mathbf{Y}}) \vee \frac{1}{ \sigma_r({\mathbf{W}}_{\mathbf{Y}})} ) \lambda_i({\rm Hess} f({\mathbf{X}})) $. \end{Theorem} {\noindent \bf Proof of Theorem \ref{th: embedded-quotient-connection-general3}.} The proof is similar to the proof of Theorem \ref{th: embedded-quotient-connection-general1} and is postponed to Appendix \ref{sec: additional-proofs-general}. \quad $\blacksquare$ By Theorems \ref{th: embedded-quotient-connection-general1}, \ref{th: embedded-quotient-connection-general2} and \ref{th: embedded-quotient-connection-general3}, we have the following Corollary \ref{coro: landscape connection general case} on the equivalence of Riemannian FOSPs, SOSPs and strict saddles of optimization \eqref{eq: general prob} under the embedded and the quotient geometries. The proof is given in Appendix \ref{sec: additional-proofs-general}. \begin{Corollary}[Equivalence on Riemannian FOSPs, SOSPs and strict saddles of \eqref{eq: general prob} Under Embedded and Quotient Geometries] \label{coro: landscape connection general case} Suppose ${\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}} = {\mathbf{M}} {\mathbf{W}}_{{\mathbf{L}}{\mathbf{M}},{\mathbf{R}} {\mathbf{M}}^{-\top}} {\mathbf{M}}^\top$, ${\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}} = {\mathbf{M}}^{-\top } {\mathbf{V}}_{{\mathbf{L}}{\mathbf{M}},{\mathbf{R}} {\mathbf{M}}^{-\top}} {\mathbf{M}}^{-1}$ hold for any ${\mathbf{M}} \in {\rm GL}(r)$ and ${\mathbf{W}}_{\mathbf{Y}} = \O {\mathbf{W}}_{{\mathbf{Y}}\O} \O^\top$ holds for any $\O \in \mathbb{O}_r$. 
Then we have \begin{itemize} \item[(a)] given ${\mathbf{L}} \in \mathbb{R}^{p_1 \times r}_*, {\mathbf{R}} \in \mathbb{R}^{p_2 \times r}_*, {\mathbf{U}} \in {\rm St}(r,p_1)$, ${\mathbf{B}} \in \mathbb{S}_{+}(r)$, ${\mathbf{V}} \in {\rm St}(r,p_2)$ and ${\mathbf{Y}} \in \mathbb{R}^{p_2 \times r}_*$, if $[{\mathbf{L}},{\mathbf{R}}]$ ($[{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}]$ or $[{\mathbf{U}},{\mathbf{Y}}]$) is a Riemannian FOSP or SOSP or strict saddle of $h_{r}([{\mathbf{L}}',{\mathbf{R}}'])$ ($h_{r}([{\mathbf{U}}',{\mathbf{B}}',{\mathbf{V}}'])$ or $h_r([{\mathbf{U}}',{\mathbf{Y}}'])$), then ${\mathbf{X}} = {\mathbf{L}} {\mathbf{R}}^\top$ (${\mathbf{X}} = {\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top$ or ${\mathbf{X}} = {\mathbf{U}} {\mathbf{Y}}^\top$) is a Riemannian FOSP or SOSP or strict saddle of \eqref{eq: general prob} under the embedded geometry; \item[(b)] if ${\mathbf{X}}$ is a Riemannian FOSP or SOSP or strict saddle of \eqref{eq: general prob} under the embedded geometry, then there is a unique $[{\mathbf{L}},{\mathbf{R}}]$ ($[{\mathbf{U}},{\mathbf{B}},{\mathbf{V}}]$ or $[{\mathbf{U}},{\mathbf{Y}}]$) such that ${\mathbf{L}} {\mathbf{R}}^\top = {\mathbf{X}}$ (${\mathbf{U}} {\mathbf{B}} {\mathbf{V}}^\top = {\mathbf{X}}$ or ${\mathbf{U}} {\mathbf{Y}}^\top ={\mathbf{X}}$) and it is a Riemannian FOSP or SOSP or strict saddle of $h_{r}([{\mathbf{L}}',{\mathbf{R}}'])$ ($h_{r}([{\mathbf{U}}',{\mathbf{B}}',{\mathbf{V}}'])$ or $h_r([{\mathbf{U}}',{\mathbf{Y}}'])$). \end{itemize} \end{Corollary} \begin{Remark}[Algorithmic Connection of Embedded and Quotient Geometries] \label{rem: algorithmic-connection} In contrast to the geometric connection between the embedded and quotient geometries in fixed-rank matrix optimization, the algorithmic connection between the two geometries is more subtle. First, the algorithms under the quotient geometry are performed in the horizontal space, and they depend on the quotient structure and the Riemannian metric we pick.
Thus, it is hard to expect a universal algorithmic connection between the two geometries. On the other hand, we find that by taking some specific metrics under the quotient geometry, there are indeed some algorithmic connections. This is particularly true when the metrics are chosen such that the sandwich gap coefficients in the geometric connection are universal constants (see Remark \ref{rem: effiect-of-metric-on-landscape}). For example, if we pick ${\mathbf{W}}_{\mathbf{Y}} = 2{\mathbf{Y}}^\top {\mathbf{Y}}$ in $\bar{g}_{\mathbf{Y}}^{r+}$ and ${\mathbf{W}}_{{\mathbf{L}},{\mathbf{R}}} = {\mathbf{R}}^\top {\mathbf{R}}, {\mathbf{V}}_{{\mathbf{L}},{\mathbf{R}}} = {\mathbf{L}}^\top {\mathbf{L}}$ in $\bar{g}^r_{({\mathbf{L}},{\mathbf{R}})}$, then we have the following gradient flows of \eqref{eq: PSD-manifold-formulation} under ${\cal M}_{r+}^e$, ${\cal M}_{r+}^{q_1}$ and \eqref{eq: general prob} under ${\cal M}_{r}^e$, ${\cal M}_{r}^{q_1}$: \begin{itemize}[leftmargin=*] \item (PSD case) \begin{equation*} \begin{split} \text{under } {\cal M}_{r+}^e: &\quad \frac{d {\mathbf{X}}}{d t} = -{\rm grad} f({\mathbf{X}}) = -P_{{\mathbf{U}}} \nabla f({\mathbf{X}}) - \nabla f({\mathbf{X}})P_{{\mathbf{U}}} + P_{{\mathbf{U}}} \nabla f({\mathbf{X}})P_{{\mathbf{U}}},\\ \text{under } {\cal M}_{r+}^{q_1}:& \quad \frac{d {\mathbf{X}}}{d t} = -\overline{{\rm grad}\, h_{r+}([{\mathbf{Y}}])}{\mathbf{Y}}^\top - {\mathbf{Y}} \left(\overline{{\rm grad}\, h_{r+}([{\mathbf{Y}}])}\right)^\top = -P_{{\mathbf{U}}} \nabla f({\mathbf{X}}) - \nabla f({\mathbf{X}})P_{{\mathbf{U}}}; \end{split} \end{equation*} \item (general case) \begin{equation*} \begin{split} \text{under } {\cal M}_{r}^e: &\quad \frac{d {\mathbf{X}}}{d t} = -{\rm grad} f({\mathbf{X}}) = -P_{{\mathbf{U}}} \nabla f({\mathbf{X}}) - \nabla f({\mathbf{X}})P_{{\mathbf{V}}} + P_{{\mathbf{U}}} \nabla f({\mathbf{X}})P_{{\mathbf{V}}},\\ \text{under } {\cal M}_{r}^{q_1}: & \quad \frac{d {\mathbf{X}}}{d t} = -\overline{{\rm grad}_{\mathbf{L}}\, h_{r}([{\mathbf{L}},{\mathbf{R}}])}{\mathbf{R}}^\top - {\mathbf{L}} \left(\overline{{\rm grad}_{\mathbf{R}}\, h_{r}([{\mathbf{L}},{\mathbf{R}}])}\right)^\top = -P_{{\mathbf{U}}} \nabla f({\mathbf{X}}) - \nabla f({\mathbf{X}})P_{{\mathbf{V}}}. \end{split} \end{equation*} \end{itemize} We can see that the gradient flows of \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob} under these embedded and quotient geometries differ by only one term, whose magnitude is smaller than that of the other terms. Empirical evidence showing the remarkably similar algorithmic performance under these embedded and quotient geometries was provided in \cite{mishra2012riemannian}, and our geometric connection results provide more theoretical insight into this empirical observation. \end{Remark} \begin{Remark}[Geometric Connection of Non-convex Factorization and Quotient Manifold Formulations for \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob}] As we have discussed in the Introduction, another popular approach for handling the rank constraint in \eqref{eq: PSD-manifold-formulation} or \eqref{eq: general prob} is to factorize ${\mathbf{X}}$ into ${\mathbf{Y}} {\mathbf{Y}}^\top$ or ${\mathbf{L}} {\mathbf{R}}^\top$ and then treat the new problem as unconstrained optimization in the Euclidean space. In the recent work \cite{luo2021nonconvex}, it was shown that for both \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob} the geometric landscapes under the factorization and embedded submanifold formulations are almost equivalent. By combining their results and the results in this paper, we also obtain a geometric landscape connection of \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob} under the factorization and quotient manifold formulations.
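The one-term difference between the embedded and quotient gradient flows displayed in Remark \ref{rem: algorithmic-connection} can also be checked numerically. Below is a minimal sketch (not code from the paper): a random matrix stands in for $\nabla f({\mathbf{X}})$, and random orthonormal factors stand in for bases of the column and row spaces of ${\mathbf{X}}$.

```python
import numpy as np

# Numerical check of the one-term difference between the embedded (M_r^e)
# and quotient (M_r^{q_1}) gradient flows in the general case.
# Random placeholders: G stands in for nabla f(X); U, V span col(X), row(X).
rng = np.random.default_rng(0)
p1, p2, r = 8, 6, 3
U, _ = np.linalg.qr(rng.standard_normal((p1, r)))
V, _ = np.linalg.qr(rng.standard_normal((p2, r)))
G = rng.standard_normal((p1, p2))

P_U, P_V = U @ U.T, V @ V.T
flow_embedded = -(P_U @ G + G @ P_V - P_U @ G @ P_V)   # dX/dt under M_r^e
flow_quotient = -(P_U @ G + G @ P_V)                   # dX/dt under M_r^{q_1}

diff = flow_embedded - flow_quotient                   # equals P_U @ G @ P_V
assert np.allclose(diff, P_U @ G @ P_V)
# the extra term is a two-sided projection of nabla f, hence never larger:
assert np.linalg.norm(diff) <= np.linalg.norm(G) + 1e-12
```

On such an instance the difference term is exactly the doubly projected part $P_{{\mathbf{U}}} \nabla f({\mathbf{X}}) P_{{\mathbf{V}}}$ of the Euclidean gradient, consistent with the formulas above.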
\end{Remark} \section{Conclusion and Discussions} \label{sec: conclusion} In this paper, we propose a general procedure for establishing geometric connections of a Riemannian optimization problem under different geometries. By applying it to problems \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob} under the embedded and quotient geometries, we establish an exact Riemannian gradient connection under the two geometries at every point on the manifold and sandwich inequalities between the spectra of the Riemannian Hessians at Riemannian FOSPs. These results immediately imply an equivalence on the sets of Riemannian FOSPs, SOSPs and strict saddles of \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob} under embedded and quotient geometries. There are many interesting extensions of the results in this paper to be explored in the future. First, as mentioned in Example \ref{ex: example-1} in Section \ref{sec: general-strategy-outline}, our results on the connection of Riemannian Hessians under the embedded and the quotient geometries are established at FOSPs. It is interesting to explore whether it is possible to connect the landscapes under the two geometries at non-stationary points. Second, although we have a unified treatment of various Riemannian metrics in the quotient geometry, different quotient structures are still treated case-by-case in the theoretical analysis. It is an interesting future work to unify that part as well. Third, other than the geometries covered in this paper, there are many other embedded and quotient geometries for fixed-rank matrices, such as the ones in \cite{absil2009geometric,grubivsic2007efficient,vandereycken2013riemannian} for ${\cal M}_{r+}$; it will be interesting to study the geometric landscape connection of \eqref{eq: PSD-manifold-formulation} and \eqref{eq: general prob} under these geometries.
Fourth, another common manifold that has both embedded and quotient representations is the Stiefel manifold \cite{edelman1998geometry}. We believe our general procedure in Section \ref{sec: general-strategy-for-connection} can also be used to establish the geometric connection in that setting. Finally, our ultimate goal is to better understand the connections and comparisons of different Riemannian geometries and give some guidelines on how to choose them given a Riemannian optimization problem. Some progress and discussions on how to choose Riemannian metrics in quotient geometries can be found in \cite{mishra2016riemannian,vandereycken2013riemannian}; however, there has been little study on how to choose among different quotient structures and manifold classes. It is an important future work to develop a general theory to connect any two different Riemannian geometries on a given Riemannian optimization problem from either an algorithmic or a geometric point of view. \section*{Acknowledgement} Y. Luo would like to thank Nicolas Garcia Trillos for helpful discussions during the project.
\section{Introduction} Topological semimetals\cite{armitage2017} host Dirac or Weyl fermions in the bulk and have attracted considerable theoretical and experimental interest in recent years. The Dirac or Weyl fermions found in condensed matter physics are quasiparticles which do not have to obey Lorentz invariance, so the band structure in momentum space can be anisotropic. Type-II Dirac or Weyl fermions\cite{soluyanov2015, Xu2015prl} are obtained when the Dirac or Weyl cones are tilted so strongly in momentum space that electron and hole pockets co-exist with the Dirac or Weyl nodes. Type-II Weyl fermions are predicted to exist in many materials, such as $\text{WTe}_2$\cite{soluyanov2015}, $\text{MoTe}_2$\cite{Sunyan2015,Wang2016}, $\text{Ta}_3\text{S}_2$ and $\text{LaAlGe}$\cite{XuSY2017}. More recently, it has been reported that $\text{PdTe}_2$\cite{Noh2017, Fei2017} and $\text{PtTe}_2$\cite{yan2017} are type-II Dirac semimetals which host tilted Dirac cones in three dimensions. Besides the type-II topological semimetals mentioned above, one can also obtain tilted Dirac or Weyl cones in two dimensions \cite{Chiu2017, Nagaosa2016}. Tilted anisotropic Dirac cones have been found in $8-pmmn$ borophene\cite{Lopez2016} and the organic semiconductor $\alpha$-$\text{(BEDT-TTF)}_2\text{I}_3$\cite{Goerbig2008,Hirata2017}. In particular, it has been proposed that crystal symmetries can give rise to type-II Dirac surface states\cite{Chiu2017}, which are characterized by tilted Dirac cones with helical spin polarization and open electron and hole pockets touching at the Dirac point. The purpose of this paper is to investigate the properties of the Kondo screening in two-dimensional (2D) tilted Dirac surface states with helical spin polarization.
The Kondo problem is an important issue in condensed matter physics and has been widely studied by various methods \cite{Krishna1980,tsvelick1984,Andrei1984,Zhang1983,Coleman1984,read1983,Kuramoto1983,Gunnarsson1983,affleck1990}. The Kondo problem as well as the RKKY interactions in systems with isotropic Dirac cones have been studied intensively since the discoveries of graphene and topological insulators\cite{Chang2015,Ulloa2016,Shun2016,Zheng2016}. At half-filling, the density of states (DOS) of the Dirac fermions vanishes, and the problem of a magnetic impurity in such systems falls into the category of the pseudo-gap Kondo problem \cite{Gonzalez1998,Fritz2004,Vojta2004}. There exists a critical value of the hybridization for the impurity and the conduction electrons to form a bound state\cite{Feng2010, shirakawa2014}. For tilted Dirac surface states, due to the coexistence of spin-orbit coupling and the anisotropy of the band structure, the spin-spin correlations in both the spin and coordinate spaces show rich features and are much more interesting than those in normal metals. In this paper, we systematically study the binding energy and real-space spin-spin correlations of a magnetic impurity in tilted Dirac surface states. We use the variational method, and compare the results with those obtained in conventional 2D helical metals. The variational method we apply has been used to study the ground state of the Kondo problem in normal metals \cite{Gunnarsson1983,Varma1976}, antiferromagnets \cite{Aji2008}, 2D helical metals \cite{Feng2010}, 3D Weyl semimetals\cite{Jinhua2015}, and the Fermi arc surface states of Weyl semimetals\cite{Ma2017}. The paper is organized as follows. We present the model and dispersion relation in Sec. \ref{Sec:Hamiltonian}. In Sec. \ref{Sec:selfconsist}, we apply the variational method to study the binding energy. In Sec.
\ref{Sec:sscorr}, we investigate the spin-spin correlation between the magnetic impurity and the conduction electrons in tilted Dirac surface states. Two cases are mainly studied: (1) $v_x=v_y$, $v_t\neq 0$ and (2) $v_x\neq v_y$, $v_t \neq 0$, where $v_x$, $v_y$ are the velocities along the $k_x$- and $k_y$-axis and $v_t$ is the tilting term. The results are compared with the counterparts in a two-dimensional helical metal ($v_x=v_y$, $v_t=0$). Finally, the discussions and conclusions are given in Sec. \ref{conclusion}. \section{Hamiltonian}\label{Sec:Hamiltonian} We use the Anderson impurity model to study the Kondo screening of a spin-1/2 magnetic impurity in tilted Dirac surface states. The model Hamiltonian contains three parts: the kinetic energy term $H_0$ of the tilted Dirac cone, the impurity Hamiltonian $H_d$, and the hybridization between the magnetic impurity and the tilted Dirac surface states, $H_V$. The Hamiltonian reads \begin{equation}\label{Eq:total_Hamiltonian} \begin{aligned} H=H_0 + H_d + H_V. \end{aligned} \end{equation} The Hamiltonian of a tilted Dirac cone in a 2D plane is given by \cite{Zabolotskiy2016,SK2017} \begin{equation}\label{Eq:tilt_Dirac} \begin{aligned} H_0 = \sum_{\mathbf{k}} h_0(\mathbf{k})=\sum_\mathbf{k} \Psi_{\mathbf{k}}^\dagger \left(v_x k_x \sigma_x + v_y k_y \sigma_y + v_t k_y \sigma_0\right)\Psi_{\mathbf{k}}, \end{aligned} \end{equation} where $\sigma_x$, $\sigma_y$ are the spin Pauli matrices and $\sigma_0$ is the identity matrix. $\Psi_{\mathbf{k}}\equiv \{c_{\mathbf{k}\uparrow}, c_{\mathbf{k}\downarrow}\}^T$ and $\Psi_{\mathbf{k}}^\dagger = \{c_{\mathbf{k}\uparrow}^\dagger, c_{\mathbf{k}\downarrow}^\dagger\}$, where $c_{\mathbf{k}\sigma}^\dagger$ ($c_{\mathbf{k}\sigma}$) creates (annihilates) a spin-$\sigma$ electron with momentum $\mathbf{k}$. $v_x$ and $v_y$ are the velocities along the $k_x$ and $k_y$ axes, respectively.
When $v_t = 0$ and $v_x = v_y$, the dispersion relation is exactly the same as that of a single Dirac cone in graphene or in a 2D helical metal. A non-zero $v_t$ tilts the Dirac cone, and if $v_x \neq v_y$ extra anisotropy is induced in the system, such that the real-space spin-spin correlation between a magnetic impurity and the conduction electrons will be affected accordingly. \\ The single-particle eigenenergies read \begin{equation}\label{Eq:tilt_dispersion} \begin{aligned} \epsilon_{ks} = k_y v_t - s \sqrt{k_x^2v_x^2 + k_y^2v_y^2}, \end{aligned} \end{equation} where $s=+$ and $s=-$ refer to the valence and the conduction bands, respectively. The dispersion relation for $v_x=v_y=1.0$ and $v_t=0.5$ is shown in Fig. \ref{Fig:0_dispersion.pdf}. If $v_t=0$, the spectrum is isotropic in the 2D plane. A non-zero $v_t$ tilts the Dirac cone along the $k_y$-axis. The DOS at half-filling is still zero for small $v_t$ ($v_t<v_y$), but as $v_t$ increases further, the DOS at half-filling becomes finite. In the present paper, we study the case of relatively small $v_t$, such that the DOS at half-filling is still zero while the spectrum becomes anisotropic due to the tilting term. The eigenstates are given (up to normalization) by $\{\{ -e^{i\theta_\mathbf{k}}, 1\}, \{e^{i\theta_\mathbf{k}},1\}\}$, where $\theta_\mathbf{k} \equiv \text{arctan}(-k_yv_y/k_xv_x)$.
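As a quick sanity check of Eq. \ref{Eq:tilt_dispersion}, the spectrum of $h_0(\mathbf{k})$ can be compared with the formula above numerically. The following sketch is illustrative only (it is not part of the paper's calculations; the velocity values simply match those quoted above):

```python
import numpy as np

# Compare the eigenvalues of h_0(k) with eps_{ks} = k_y*v_t - s*sqrt(kx^2 vx^2 + ky^2 vy^2).
vx, vy, vt = 1.0, 1.0, 0.5           # illustrative values as in the dispersion figure
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
s0 = np.eye(2, dtype=complex)

rng = np.random.default_rng(1)
for _ in range(5):
    kx, ky = rng.uniform(-1, 1, size=2)
    h0 = vx * kx * sx + vy * ky * sy + vt * ky * s0
    rho = np.hypot(vx * kx, vy * ky)
    eps = sorted(ky * vt - s * rho for s in (+1, -1))   # s=+1: valence, s=-1: conduction
    assert np.allclose(np.linalg.eigvalsh(h0), eps)     # eigvalsh returns ascending order
```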
Then one can define a unitary matrix to diagonalize $h_0(\mathbf{k})$ as \begin{equation} \begin{aligned} U =\frac{1}{\sqrt{2}}\left( \begin{array}{cc} e^{-\frac{i\theta_\mathbf{k}}{2}} & -e^{\frac{i\theta_\mathbf{k}}{2}} \\ e^{-\frac{i\theta_\mathbf{k}}{2}} & e^{ \frac{i\theta_\mathbf{k}}{2}} \end{array} \right) .\\ \end{aligned} \end{equation} The eigenstates of the tilted Dirac cone are then given by \begin{equation} \begin{aligned} \{\gamma_{\mathbf{k}+}, \gamma_{\mathbf{k}-}\}^T = U\{c_{\mathbf{k}\uparrow}, c_{\mathbf{k}\downarrow}\}^T,\\ \end{aligned} \end{equation} and $H_0$ in its diagonal basis reads \begin{equation} \begin{aligned} H_0 = \sum_\mathbf{k} h_0(\mathbf{k}) = \sum_{\mathbf{k}s} \epsilon_{\mathbf{k}s}\gamma_{\mathbf{k}s}^\dagger \gamma_{\mathbf{k}s}, \ \ (s=\{+, -\}). \\ \end{aligned} \end{equation} \begin{figure}[htpb] \begin{center} \includegraphics[width=8cm]{0_dispersion.eps} \end{center} \caption{(Color online). The band structure of the tilted Dirac cone for $v_x = v_y = 1.0$ and $v_t = 0.5$. $\epsilon_d$ is the impurity energy level, which is below the Fermi surface. The dispersion relation is tilted along the $k_y$-axis due to the non-zero $v_t$ term. } \label{Fig:0_dispersion.pdf} \end{figure} The local impurity Hamiltonian is given by \begin{equation}\label{Eq:Hamil_Impu} \begin{aligned} H_{d} &=(\epsilon_d -\mu) \sum _{\sigma }d_{\sigma }^\dagger d_{\sigma }+ U d_{\uparrow }^\dagger d_{\uparrow }d_{\downarrow }^\dagger d_{\downarrow }, \end{aligned} \end{equation} where $d_{\uparrow(\downarrow)}^\dagger$ and $d_{\uparrow(\downarrow)}$ are the creation and annihilation operators of the spin-up (spin-down) state on the impurity site, $\epsilon_d$ is the impurity energy level, and $U$ is the on-site Coulomb repulsion.
We assume that $\epsilon_d$ is slightly below the chemical potential and $U$ is finite but very large, such that $\epsilon_d<\mu\ll \epsilon_d+U$; then the impurity is always singly occupied with a local moment, and the impurity energy is $\epsilon_d-\mu$. The hybridization between the electrons on the magnetic impurity site and in the tilted Dirac cone is described by \begin{equation}\label{Eq:Hv} \begin{aligned} H_{V}&=\sum _{\mathbf{k}\sigma}V_\mathbf{k}\left(c_{\mathbf{k}\sigma}^\dagger d_{\sigma} + d_{\sigma}^\dagger c_{\mathbf{k}\sigma} \right) =\sum_{\mathbf{k}s} V_{\mathbf{k}}(\gamma_{\mathbf{k}s}^\dagger d_{\mathbf{k}s} + \gamma_{\mathbf{k}s}d_{\mathbf{k}s}^\dagger), \end{aligned} \end{equation} where $V_\mathbf{k}$ is the hybridization strength, and we assume that the electrons on the magnetic impurity are equally coupled to the conduction and valence bands. The momentum-space impurity operators $d_{\mathbf{k}s}$ are connected to the original ones $d_{\sigma}$ through the unitary transformation \begin{equation} \begin{aligned} \{d_{\mathbf{k}+}, d_{\mathbf{k}-}\}^T = U\{d_\uparrow, d_{\downarrow}\}^T.\\ \end{aligned} \end{equation} We assume that the hybridization only occurs between the magnetic impurity and the conduction electrons at the same location in coordinate space. Hence, in the following, the hybridization strength $V_{\mathbf{k}}$ is in fact momentum-independent. \section{The self-consistent calculation}\label{Sec:selfconsist} First, we assume $H_V=0$, the simplest case, in which the magnetic impurity and the host material are completely decoupled from each other. The ground state of $H_0$ is given by \begin{equation} \begin{aligned} |\Psi_0 \rangle =\prod _{\mathbf{k}s} \gamma _{\mathbf{k}s}^{\dagger}|0\rangle , \end{aligned} \end{equation} where the product runs over all the states below the Fermi surface, and $s=\{+,-\}$ refer to the valence and the conduction bands of the tilted Dirac cone.
If we consider a singly occupied impurity and ignore the hybridization between the conduction electrons and the magnetic impurity, the total energy of the system is just the sum of the bare impurity energy and the total energy of the tilted Dirac cone, \begin{equation} \begin{aligned} E_0=\epsilon _d -\mu +\sum _{\mathbf{k}s}(\epsilon_{\mathbf{k}s}-\mu). \end{aligned} \end{equation} To investigate the eigenstate properties, we utilize a trial wavefunction approach. The Coulomb repulsion $U$ is assumed to be finite but very large, and $\epsilon_d$ is below the chemical potential, such that the impurity site is always singly occupied. When the hybridization is taken into account, the band electron states and the localized states are combined. According to the rightmost side of Eq. \ref{Eq:Hv}, the hybridization term only involves the band states and the impurity states with the same indices $\{\mathbf{k}s\}$, such that the trial wave function for the ground state can be written in the form diagonal in $\{\mathbf{k}s\}$ as \begin{equation}\label{Eq:newwavef} \begin{aligned} |\Psi \rangle =\left(a_0+\sum _{\mathbf{k}s}a_{\mathbf{k}s}d_{\mathbf{k}s}^{\dagger}\gamma_{\mathbf{k}s}\right)|\Psi_0\rangle, \end{aligned} \end{equation} where $a_{0}$ and $a_{\mathbf{k}s}$ are the variational parameters to be determined through self-consistent calculations. The energy of the total Hamiltonian in the variational state $|\Psi \rangle $ is \begin{equation}\label{Eq:energy} \begin{aligned} E=\frac{\langle\Psi |H|\Psi\rangle}{\langle\Psi|\Psi\rangle}, \end{aligned} \end{equation} where $\langle\Psi|\Psi\rangle=a_0^2+\sum_{\mathbf{k}s}a_{\mathbf{k}s}^2=1$ according to the wavefunction normalization condition. \begin{figure}[t] \begin{center} \includegraphics[scale=0.4, bb=280 60 400 517]{5_binding.eps} \end{center} \caption{(Color online). The results of the binding energy for $v_x=v_y=1.0$ at $\mu=0$ with various values of $v_t$.
The impurity energy level is chosen as $\epsilon_d = -0.01\Gamma_d$. When $v_t=0$, the magnetic impurity and the conduction electrons form bound states only if $2\pi(V_\mathbf{k}/\Gamma_d)^2 > \frac{|\epsilon_d|}{\Gamma_d}$\cite{Feng2010}. Thus the critical value of the hybridization is $V_c = \sqrt{|\epsilon_d|\Gamma_d/(2\pi)}$. As $v_t$ increases, there still exists a critical value of hybridization $V_c$, and it decreases as the Dirac cone is more and more strongly tilted. } \label{Fig:5_binding} \end{figure} The total energy of the tilted Dirac system with a magnetic impurity in the trial state $|\Psi \rangle$ then reads \begin{equation}\label{Eq:totalE} \begin{aligned} E&=\frac{\sum_{\mathbf{k}s} \left[ (E_0 - \epsilon_{\mathbf{k}s} +\mu)a_{\mathbf{k}s}^2 + 2V_\mathbf{k}a_0a_{\mathbf{k}s} +(\epsilon_{\mathbf{k}s}-\mu)a_0^2 \right]} {a_0^2+\sum_{\mathbf{k}s}a_{\mathbf{k}s}^2}. \end{aligned} \end{equation} The variational principle requires that $\partial E/\partial a_0=\partial E/\partial a_{\mathbf{k}s}=0$, which leads to the two equations below: \begin{equation}\label{Eq:threeab} \begin{aligned} & (E-\sum_{\mathbf{k}s}(\epsilon_{\mathbf{k}s}-\mu))a_0 = \sum_{\mathbf{k}s}V_\mathbf{k} a_{\mathbf{k}s},\\ & (E-E_0+(\epsilon_{\mathbf{k}s}-\mu))a_{\mathbf{k}s} = V_\mathbf{k} a_0.\\ \end{aligned} \end{equation} We then obtain the self-consistent equation \begin{equation}\label{Eq:selfConsis} \begin{aligned} \epsilon_d - \mu - \Delta_b = \sum_{\mathbf{k}s} \frac{V_\mathbf{k}^2}{\epsilon_{\mathbf{k}s} -\mu- \Delta_b }, \\ \end{aligned} \end{equation} where $\Delta_b=E_0-E$ is the binding energy. If $\Delta_b>0$, the hybridized state has lower energy and is more stable than the bare state. $\Delta_b$ can be obtained by numerically solving Eq.
\ref{Eq:selfConsis}, and $a_0$ and $a_{\mathbf{k}s}$ can be calculated according to the relations \begin{equation}\label{Eq:a0ak} \begin{aligned} a_0^2 + \sum_{\mathbf{k}s}a_{\mathbf{k}s}^2=1,\\ a_{\mathbf{k}s} = \frac{V_\mathbf{k}}{\epsilon_{\mathbf{k}s}-\mu-\Delta_b}a_0. \end{aligned} \end{equation} If $v_x= v_y= 1.0$ and $v_t =0$, the Dirac cone is not tilted at all; the band structure given in Eq. \ref{Eq:tilt_dispersion} is isotropic in momentum space, and the binding energy is exactly the same as that in a 2D helical metal\cite{Feng2010}. If $\mu=0$, the DOS vanishes, such that the hybridization has a critical value $V_c$, below which the system has no positive binding energy. The results for the binding energy for $v_x=v_y=1.0$ with various $v_t$ values are given in Fig. \ref{Fig:5_binding}. The impurity energy level is chosen as $\epsilon_d = -0.01\Gamma_d$. When $v_t=0$, the magnetic impurity and the conduction electrons form bound states only if $2\pi(V_\mathbf{k}/\Gamma_d)^2 > \frac{|\epsilon_d|}{\Gamma_d}$\cite{Feng2010}. Thus the critical value of the hybridization is $V_c = \sqrt{|\epsilon_d|\Gamma_d/(2\pi)}$. As $v_t$ increases, there still exists a critical value of hybridization $V_c$, since the DOS at the Fermi energy still vanishes for $v_t<v_y$. However, $V_c$ decreases as the Dirac cone is more strongly tilted, indicating that the tilted Dirac system forms a bound state more easily than the Dirac cones which are not tilted. For the more complicated case when $v_x \neq v_y$, if the DOS at $\mu=0$ is still zero, there should exist a critical value of hybridization $V_c$, since the existence of a critical value merely depends on the DOS at the Fermi energy. The value of $V_c$ is determined by the velocities $v_i$ ($i=x,y,t$). When $\mu\neq 0$, the DOS at the Fermi energy becomes finite, so there exists a positive binding energy for arbitrary $V_\mathbf{k}$ values.
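Eq. \ref{Eq:selfConsis} can be solved numerically by replacing the momentum sum with an integral over a discretized grid up to the cutoff and bisecting on $\Delta_b$. The sketch below is illustrative only: the grid, the normalization of the momentum sum, and the parameter values (chosen so that $V>V_c$ on this discretization) are placeholder choices, not the paper's.

```python
import numpy as np

# Solve eps_d - mu - Delta = sum_{ks, occupied} V^2/(eps_{ks} - mu - Delta)
# for the binding energy Delta_b by bisection. Illustrative normalization:
# the k-sum is taken as (1/(2*pi)^2) * integral over [-Gamma, Gamma]^2.
Gamma = 1.0                                      # energy cutoff Gamma_d (v = 1 units)
vx, vy, vt = 1.0, 1.0, 0.5
mu, eps_d, V = 0.0, -0.01 * Gamma, 0.4 * Gamma   # placeholder parameter values

n = 300
k = np.linspace(-Gamma, Gamma, n)
KX, KY = np.meshgrid(k, k)
rho = np.hypot(vx * KX, vy * KY)
w = (k[1] - k[0]) ** 2 / (2 * np.pi) ** 2        # integration weight per grid point

def f(delta):
    total = 0.0
    for s in (+1, -1):                           # s=+1: valence, s=-1: conduction
        eps = vt * KY - s * rho
        occ = eps < mu                           # only the Fermi sea contributes
        total += np.sum(V**2 / (eps[occ] - mu - delta)) * w
    return eps_d - mu - delta - total

lo, hi = 1e-6, 2.0 * Gamma                       # f(lo) > 0 > f(hi) when V > V_c
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
Delta_b = 0.5 * (lo + hi)
assert Delta_b > 0 and abs(f(Delta_b)) < 1e-3
```

Lowering $V$ below the critical value makes $f(\Delta)<0$ for all $\Delta>0$, so the bracket fails and no positive binding energy is found, in line with the discussion above.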
When $v_t> 0$, the band structure of the Dirac cone is tilted along the $k_y$-axis; if $v_t$ is larger than $v_y$, the Dirac cone is so strongly tilted that the DOS at the Fermi energy for $\mu=0$ becomes finite. In this case, the magnetic impurity and the conduction electrons always form a bound state. \section{Spin-spin correlation}\label{Sec:sscorr} In this section, we study the spin-spin correlation between the magnetic impurity and the conduction electrons. The spin operators of the magnetic impurity and the conduction electrons are defined as $\mathbf{S_d}=\frac{1}{2} d^{\dagger}\sigma d$, $\mathbf{S_c}=\frac{1}{2} c^{\dagger}\sigma c$, where $\sigma$ denotes the vector of Pauli matrices. The Fourier transformations of the conduction electron operators read $c_{\sigma }(\mathbf{r})=\frac{1}{\sqrt{N}}\sum_{\mathbf{q}} e^{i\mathbf{q}\cdot\mathbf{r}}c_{\mathbf{q}\sigma}$ and $c_{\sigma }^{\dagger }(\mathbf{r})=\frac{1}{\sqrt{N}}\sum _{\mathbf{q}} e^{-i\mathbf{q}\cdot\mathbf{r}}c_{\mathbf{q}\sigma}^{\dagger }$. We place the magnetic impurity at $\mathbf{r}=0$, and consider the spin-spin correlation $\mathbf{J}_{uv}(\mathbf{r})= \langle S_{c}^u (\mathbf{r})S_d^v(0)\rangle$ in the $x$-$y$ plane, where $\mathbf{r}$ is the location of the conduction electron. Here $u,v=x,y,z$ and $\langle \cdots \rangle$ denotes the ground-state average. \begin{figure}[t] \begin{center} \includegraphics[scale=0.53, bb=180 100 400 775]{1_110.eps} \end{center} \caption{(Color online). The results of $J_{uv}(\mathbf{r})\times r^2$ for $v_x = v_y = 1.0$ and $v_t=0$. (a) $r^2 J_{zz}(\mathbf{r})$, (b) $r^2 J_{xx}(\mathbf{r})$, (c) $r^2 J_{yy}(\mathbf{r})$, (d) $r^2 J_{xz}(\mathbf{r})$, (e) $r^2 J_{yz}(\mathbf{r})$, (f) $r^2 J_{xy}(\mathbf{r})$. } \label{Fig:1_110} \end{figure} The spin-spin correlation function is evaluated for relatively small $v_t$ and for $\mu \neq 0$.
In this case, the DOS at half-filling still vanishes, but the DOS at the Fermi energy is finite when $\mu\neq 0$, such that the binding energy $\Delta_b$ is always positive. This means that the magnetic impurity and the conduction electrons always form a bound state. The diagonal terms and the nonzero off-diagonal terms of the spin-spin correlation in coordinate space are given by \begin{equation}\label{Eq:sscorr} \begin{aligned} \mathbf{J}_{zz}(\mathbf{r})&= -\frac{1}{8} \left| \mathcal{A}(\mathbf{r}) \right|^2 + \frac{1}{16} \left| \mathcal{B}(\mathbf{r}) \right|^2 +\frac{1}{16} \left| \mathcal{C}(\mathbf{r}) \right|^2 ,\\ \mathbf{J}_{xx}(\mathbf{r})&= -\frac{1}{8} \left| \mathcal{A}(\mathbf{r}) \right|^2 - \frac{1}{8} \text{Re}\left[ \mathcal{B}^*(\mathbf{r}) \mathcal{C}(\mathbf{r}) \right], \\ \mathbf{J}_{yy}(\mathbf{r})&= -\frac{1}{8} \left| \mathcal{A}(\mathbf{r}) \right|^2 + \frac{1}{8} \text{Re}\left[ \mathcal{B}^*(\mathbf{r}) \mathcal{C}(\mathbf{r}) \right], \\ \mathbf{J}_{xz}(\mathbf{r})&= -\frac{1}{8}\text{Re}\left[ \mathcal{A}^*(\mathbf{r}) \mathcal{B}(\mathbf{r}) \right]+ \frac{1}{8}\text{Re}\left[ \mathcal{A}^*(\mathbf{r}) \mathcal{C}(\mathbf{r}) \right], \\ \mathbf{J}_{yz}(\mathbf{r})&= \frac{1}{8}\text{Im}\left[ \mathcal{A}^*(\mathbf{r}) \mathcal{B}(\mathbf{r}) \right]+ \frac{1}{8}\text{Im}\left[ \mathcal{A}^*(\mathbf{r}) \mathcal{C}(\mathbf{r}) \right], \\ \mathbf{J}_{xy}(\mathbf{r})&= \frac{1}{8}\text{Im}\left[ \mathcal{B}^*(\mathbf{r}) \mathcal{C}(\mathbf{r}) \right], \\ \end{aligned} \end{equation} where $\mathcal{A}(\mathbf{r})= \sum_{\mathbf{k}s} e^{i\mathbf{k\cdot r}}a_{\mathbf{k}s}$, $\mathcal{B}(\mathbf{r})= \sum_{\mathbf{k}s} \text{sgn}(s) e^{i(\mathbf{k\cdot r}+\theta_{\mathbf{k}})}a_{\mathbf{k}s}$, $\mathcal{C}(\mathbf{r})= \sum_{\mathbf{k}s} \text{sgn}(s) e^{i(\mathbf{k\cdot r}-\theta_{\mathbf{k}})}a_{\mathbf{k}s}$. In Fig. \ref{Fig:1_110} - Fig.
\ref{Fig:4_10805} we show the patterns of the spatial spin-spin correlation between the magnetic impurity and the conduction electrons in the $x$-$y$ plane, for different values of $v_x$, $v_y$ and $v_t$. For all the cases, the Dirac cone is weakly tilted, such that the DOS at the Dirac point is still zero. The various components of the spin-spin correlation show spatial oscillations and decay with respect to the displacement $\mathbf{r}$. In order to investigate the patterns more clearly, we show $J_{uv}(\mathbf{r})\times r^2$ instead of $J_{uv}(\mathbf{r})$, and the length unit is $1/\Gamma_d$, where $\Gamma_d$ is the energy cut-off. The spin-spin correlation between the magnetic impurity and a conduction electron at distance $\mathbf{r}$ follows a power-law decay $1/r^{d}$ if $r<\xi_K$, and $1/r^{d+1}$ if $r>\xi_K$, with $\xi_K$ the Kondo coherence length and $d$ the dimensionality of the host material\cite{ishii1978,Barzykin1998,Borda2007}. In fact, the binding energy $\Delta_b$ takes different values as $v_x$, $v_y$ or $v_t$ changes. Here, for simplicity, we fix $\Delta_b$ at a constant value, since the change of the spatial spin-spin correlation is our major concern. The parameters we use in this section are $V_\mathbf{k} = 0.05 \Gamma_d$, $\Delta_b=0.1\Gamma_d$, and $\mu=-0.1\Gamma_d$. One finds through a simple calculation that the off-diagonal terms of $J_{uv}(\mathbf{r})$ satisfy $J_{xz}(\mathbf{r})=-J_{zx}(\mathbf{r})$, $J_{yz}(\mathbf{r})=-J_{zy}(\mathbf{r})$ and $J_{xy}(\mathbf{r})=J_{yx}(\mathbf{r})$, so only $J_{xz}(\mathbf{r})$, $J_{yz}(\mathbf{r})$ and $J_{xy}(\mathbf{r})$ are explicitly given in Fig. \ref{Fig:1_110} - Fig. \ref{Fig:4_10805}. Shown in Fig. \ref{Fig:1_110} are the results of $J_{uv}(\mathbf{r})$ ($u,v=x,y,z$) for $v_x = v_y = 1.0$ and $v_t=0$. The spatial patterns of all six components of the spin-spin correlation are exactly the same as those in a 2D helical metal\cite{Feng2010}. $J_{zz}(\mathbf{r})$ shown in Fig.
\ref{Fig:1_110} (a) is antiferromagnetic around the magnetic impurity, and is isotropic in coordinate space. Both $J_{xx}(\mathbf{r})$ in Fig. \ref{Fig:1_110} (b) and $J_{yy}(\mathbf{r})$ in Fig. \ref{Fig:1_110} (c) are also dominated by antiferromagnetic correlation around the impurity location, but are spatially anisotropic along the $x$- or $y$-axis. $J_{xz}(\mathbf{r})$ plotted in Fig. \ref{Fig:1_110} (d) shows more interesting behavior. Around the magnetic impurity, the correlation is antiferromagnetic for $y>0$ and ferromagnetic for $y<0$, and is zero along the $y$-axis. $J_{yz}(\mathbf{r})$ in Fig. \ref{Fig:1_110} (e) shows the same behavior as $J_{xz}(\mathbf{r})$ if we exchange the real-space coordinates $x\rightarrow y$ and $y\rightarrow x$. $J_{xy}(\mathbf{r})$ is plotted in Fig. \ref{Fig:1_110} (f). It is ferromagnetic for $xy>0$ and antiferromagnetic for $xy<0$, and is zero along both the $x$- and $y$-axes. When $v_x=v_y$ and $v_t=0$, the dispersion relation of the Dirac cone is isotropic in momentum space, and hence the various components of the spin-spin correlation between the magnetic impurity and the conduction electrons show highly symmetric patterns. However, when the $v_t$ term becomes finite, the Dirac cone is tilted along the $y$-axis, and accordingly the $J_{uv}(\mathbf{r})$ ($u,v=x,y,z$) become highly anisotropic in the $x$-$y$ plane. Shown in Fig. \ref{Fig:2_1105.eps} are the results of $J_{uv}(\mathbf{r})\times r^2$ for $v_x = v_y = 1.0$ and $v_t=0.5$. The band structure of the tilted Dirac cone is given in Fig. \ref{Fig:0_dispersion.pdf}; the symmetry between the $k_x$- and $k_y$-axes is broken by the non-zero $v_t$ term. The broken symmetry in momentum space also affects the patterns of the spin-spin correlation in real space, and the results are shown in Fig. \ref{Fig:2_1105.eps}.
We can see that all components of the spin-spin correlation oscillate faster along the $y$-axis, and more slowly along the $x$-axis, in comparison to those given in Fig. \ref{Fig:1_110}. The spatial spin-spin correlation shows clear interference patterns at large $r$. $J_{zz}(\mathbf{r})$ shown in Fig. \ref{Fig:2_1105.eps} (a) becomes strongly anisotropic in real space. Around the magnetic impurity, $J_{zz}(\mathbf{r})$ is still antiferromagnetic, but the correlations along the $x$- and $y$-axes oscillate with different periods. $J_{xx}(\mathbf{r})$ and $J_{yy}(\mathbf{r})$, given in Fig. \ref{Fig:2_1105.eps} (b) and (c), are both squeezed along the $y$-axis and show interference patterns when $r$ is away from the magnetic impurity location. For both of these spin-spin correlation components, the antiferromagnetic behavior around the magnetic impurity remains unchanged, but the oscillations along the $x$- and $y$-axes become slightly different. In Fig. \ref{Fig:2_1105.eps} (d) and (e), we show $J_{xz}(\mathbf{r})$ and $J_{yz}(\mathbf{r})$, which show markedly different patterns in comparison with those given in Fig. \ref{Fig:1_110} (d) and (e). $J_{xz}(\mathbf{r})$ and $J_{yz}(\mathbf{r})$ are both squeezed along the $y$-axis, and show clear interference patterns near the $x$-axis when $r$ is large. $J_{xy}(\mathbf{r})$ given in Fig. \ref{Fig:2_1105.eps} (f) is the most interesting one. Besides the interference patterns at large $r$, it also shows a different symmetry. When $v_t=0$, as shown in Fig. \ref{Fig:1_110} (f), $J_{xy}(\mathbf{r})$ is always zero along the $x$- and $y$-axes, and its absolute value has a 4-fold rotational symmetry. However, when $v_t\neq 0$, $J_{xy}(\mathbf{r})$ is still zero along the $x$-axis, but becomes non-zero along the $y$-axis. The 4-fold rotational symmetry of the absolute value is also broken by the tilting term.
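The isotropy of $J_{zz}$ for $v_t=0$ and its breaking by the tilting term can be checked directly from Eq. \ref{Eq:sscorr} by evaluating $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$ on a discretized momentum grid. The sketch below is illustrative only: the grid and cutoff treatment are placeholder choices, and the overall normalization factor $a_0$ is omitted since only signs and symmetries are probed.

```python
import numpy as np

# Evaluate A(r), B(r), C(r) of the correlation formulas on a k-grid and form J_zz.
# a_{ks} = V/(eps_{ks} - mu - Delta_b), up to the overall factor a_0, which
# drops out of sign and symmetry checks. Parameters as quoted in the text.
Gamma = 1.0
V, Delta_b, mu = 0.05 * Gamma, 0.1 * Gamma, -0.1 * Gamma

k = np.linspace(-Gamma, Gamma, 201)          # grid symmetric under kx <-> ky
KX, KY = np.meshgrid(k, k)

def J_zz(r, vx, vy, vt):
    rho = np.hypot(vx * KX, vy * KY)
    theta = np.arctan2(-vy * KY, vx * KX)    # branch-safe theta_k
    A = B = C = 0.0
    for s in (+1, -1):
        eps = vt * KY - s * rho
        occ = eps < mu                        # Fermi-sea states only
        a = V / (eps[occ] - mu - Delta_b)     # variational amplitudes a_{ks}
        phase = np.exp(1j * (KX[occ] * r[0] + KY[occ] * r[1]))
        A = A + np.sum(phase * a)
        B = B + s * np.sum(phase * np.exp(1j * theta[occ]) * a)
        C = C + s * np.sum(phase * np.exp(-1j * theta[occ]) * a)
    return -abs(A)**2 / 8 + abs(B)**2 / 16 + abs(C)**2 / 16

d = 2.0                                       # distance in units of 1/Gamma_d
assert np.isclose(J_zz((d, 0), 1.0, 1.0, 0.0), J_zz((0, d), 1.0, 1.0, 0.0))
assert abs(J_zz((d, 0), 1.0, 1.0, 0.5) - J_zz((0, d), 1.0, 1.0, 0.5)) > 1e-8
assert J_zz((0, 0), 1.0, 1.0, 0.5) < 0        # antiferromagnetic at the impurity
```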
\begin{figure}[t] \begin{center} \includegraphics[scale=0.53, bb=180 100 400 775]{2_1105.eps} \end{center} \caption{(Color online). The results of $J_{uv}(\mathbf{r})\times r^2$ for $v_x = v_y = 1.0$ and $v_t=0.5$. (a) $r^2 J_{zz}(\mathbf{r})$, (b) $r^2 J_{xx}(\mathbf{r})$, (c) $r^2 J_{yy}(\mathbf{r})$, (d) $r^2 J_{xz}(\mathbf{r})$, (e) $r^2 J_{yz}(\mathbf{r})$, (f) $r^2 J_{xy}(\mathbf{r})$. } \label{Fig:2_1105.eps} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=0.53, bb=180 100 400 775]{4_10805.eps} \end{center} \caption{(Color online). The results of $J_{uv}(\mathbf{r})\times r^2$ for $v_x =1.0$, $v_y = 0.8$ and $v_t=0.5$. (a) $r^2 J_{zz}(\mathbf{r})$, (b) $r^2 J_{xx}(\mathbf{r})$, (c) $r^2 J_{yy}(\mathbf{r})$, (d) $r^2 J_{xz}(\mathbf{r})$, (e) $r^2 J_{yz}(\mathbf{r})$, (f) $r^2 J_{xy}(\mathbf{r})$. } \label{Fig:4_10805} \end{figure} In Fig. \ref{Fig:4_10805}, we show the spin-spin correlation components for $v_x=1.0$, $v_y=0.8$ and $v_t=0.5$. In 8-$pmmn$ borophene\cite{Lopez2016}, the typical parameter values are $v_x=0.89$, $v_y=0.67$ and $v_t=0.32$, so this choice of $v_i$ ($i=x,y,t$) yields spin-spin correlation patterns very close to those in 8-$pmmn$ borophene. Besides the $v_t$ term, which tilts the Dirac cone along the $k_y$-axis, the velocities along the $k_x$- and $k_y$-axes are now distinct, which adds extra anisotropy in momentum space. In general, the components of the spatial spin-spin correlation are more strongly squeezed than those in Fig. \ref{Fig:2_1105.eps}. Here we set $v_x > v_y$, so that the anisotropy along the $x$- and $y$-axes is enhanced by the velocity terms. We can see that the spin-spin correlation decays and oscillates much faster along the $y$-axis and slower along the $x$-axis. $J_{zz}(\mathbf{r})$, shown in Fig. \ref{Fig:4_10805} (a), becomes more strongly anisotropic in the coordinate space.
Around the magnetic impurity, $J_{zz}(\mathbf{r})$ is still antiferromagnetic, but the correlation along the $x$-axis oscillates much more slowly than that along the $y$-axis. This is caused by the distinct velocities along the $k_x$- and $k_y$-axes. In contrast, if we choose $v_x<v_y$, the velocity difference compensates the anisotropy caused by the tilting term, and the spin-spin correlation patterns become closer to those given in Fig. \ref{Fig:1_110}. $J_{xx}(\mathbf{r})$ and $J_{yy}(\mathbf{r})$, given in Fig. \ref{Fig:4_10805} (b) and (c), are more strongly squeezed along the $y$-axis, and show clearer interference patterns when $r$ is away from the magnetic impurity location. The antiferromagnetic nature remains unchanged, but the oscillations along the $x$- and $y$-axes show completely distinct patterns. When $v_x=v_y$ and $v_t=0$, as given in Fig. \ref{Fig:1_110}, $J_{xx}(\mathbf{r})$ and $J_{yy}(\mathbf{r})$ map onto each other if we rotate the coordinate space by $90^\circ$. However, this symmetry is completely broken by the tilting term and the distinct velocities along the $x$- and $y$-axes. In Fig. \ref{Fig:4_10805} (d) and (e), we show $J_{xz}(\mathbf{r})$ and $J_{yz}(\mathbf{r})$, which are more strongly squeezed along the $y$-axis and show clear interference patterns near the $x$-axis for large $r$. $J_{xy}(\mathbf{r})$ is given in Fig. \ref{Fig:4_10805} (f). We can see that the 4-fold rotational symmetry of the absolute value is completely destroyed. The $x$-$y$ spin-spin correlation is still zero along the $x$-axis, but is clearly non-zero along the $y$-axis. \section{Conclusions}\label{conclusion} In this paper we utilize the variational method to study the Kondo screening of a spin-1/2 magnetic impurity in tilted Dirac surface states in the large-$U$ limit. The host material is described by a tilted Dirac cone in two dimensions. The Kondo screening in topological semimetals was studied using the same trial-wavefunction method in Ref. \cite{Jinhua2015}.
In order to see the spatial changes of the spin-spin correlation, we choose two sets of $v_i$ ($i=x,y,t$) parameters, namely (1) $v_x=v_y=1.0$, $v_t=0.5$ and (2) $v_x = 1.0$, $v_y=0.8$, $v_t=0.5$, and compare the results with their counterparts in a 2D helical metal with $v_x=v_y=1.0$, $v_t=0$\cite{Feng2010}. When the Dirac cone is slightly tilted ($v_t<v_x$, $v_y$), the DOS at the charge-neutrality point still vanishes, as in graphene, so there exists a critical value of the hybridization, $V_c$. The magnetic impurity and the conduction electrons form a bound state only if $V_\mathbf{k} > V_c$. If the Fermi surface is tuned away from the Dirac point, then the magnetic impurity and the conduction electrons always form a bound state for arbitrary $V_\mathbf{k}$. If a finite $v_t$ term is added, the Dirac cone is tilted along the $k_y$-axis. The components of the spatial spin-spin correlation then oscillate with different periods along the $x$- and $y$-axes, and show more anisotropic patterns. The tilting of the Dirac cone does not change the signs of the correlation close to the magnetic impurity, but interference patterns show up when $r$ is large. So far, we have only studied the effect of a single magnetic impurity in tilted Dirac fermion systems with spin-orbit coupling in two dimensions. 3D tilted Dirac/Weyl fermion systems should exhibit similar behaviors to those of the 2D tilted Dirac systems. However, the spin-spin correlation is expected to show richer patterns due to the extra dimension, and these will be investigated in our future work. \section{Acknowledgement} This work is supported in part by NSFC (under Grant No. 11604166) and the K.C. Wong Magna Fund in Ningbo University. L. Li is supported by the NSFC (under Grant No. 11604138), D.-H. Xu is supported by the NSFC (under Grant No. 11704106) and the Chutian Scholars Program in Hubei Province. \bibliographystyle{apsrev4-1}
\section{Introduction} Over the past decade, there has been an increasing demand for accurate indoor localization solutions to support a multitude of applications, ranging from providing location-based advertising to users in a shopping mall \cite{Steiniger_et_al_2006}, to better solutions for tracking inventory in a warehouse \cite{select}, to assisted-living applications \cite{Witrisal_et_al_2016}. {\color{black}In many of these applications, the availability of location information enhances communication protocols (e.g., in a warehouse, communication with a tag can take place once it is within range of a reader) and the importance of localization to such communications systems has led to standardization efforts such as IEEE 802.15.4a \cite{ieee_802.15.4a}, which is a standard for joint localization and communication.} While localization of cellular devices falls under the category of active localization (where the target transmits a ranging signal), the current paper will focus on the important case of passive localization, where the target only reflects radio-frequency (RF) signals (i.e., radar). In the above-mentioned applications, there are usually multiple targets present that have nearly identical radar signatures and hence, cannot be distinguished on that basis alone. The typical localization architecture involves the deployment of multiple transmitters (TXs) and receivers (RXs) and hence, can be modeled using the distributed MIMO (multiple-input multiple-output) radar framework \cite{haimovich_blum_et_al_2008}. Due to cost and space constraints (e.g., in sensor applications), each TX and RX may be equipped with a single antenna only\footnote{\color{black}Since a major contributor to the cost is the amplifier in a RF chain, a cheaper distributed MIMO architecture can be realized by deploying a single RF chain each for the TX and RX antennas and switching between them.}. 
Hence, the radar cannot exploit the information contained in the angles of arrival/departure of the target-reflected signal, as direction finding requires not just multiple antennas, but also careful calibration of the antenna elements, which might be cost-prohibitive in the case of sensor nodes and similar devices. This motivates the study of multi-target localization (MTL) without angular information in an indoor distributed MIMO radar setting. In such a setting, a direct path (DP) is one that propagates directly from TX to target to RX. Each DP gives rise to an ellipse-shaped ambiguity region passing through the target location, with the TX and RX at the foci, and the intersection of three or more such curves unambiguously determines the target location. For indoor localization, the following additional challenges arise: (i) targets can be blocked by non-target scatterers such as walls, furniture, etc.; (ii) the scatterers can also give rise to indirect paths (IPs), which need to be distinguished from DPs; (iii) in the presence of multiple targets, yet another challenge is to \emph{match} the DPs to the right targets\footnote{This process is also referred to as data association \cite{bar1995multitarget, Mah_2007}. Throughout this work, we shall use the term matching to refer to data association.}. An incorrect matching would result in ghost targets \cite{Shen_and_Molisch_2013_2}. There has been a considerable amount of literature on MIMO radar over the last decade. The fundamental limits of localization in MIMO radar networks were studied in \cite{Godrich_et_al_2008}. A number of works have dealt with MTL using co-located antenna arrays at the TX and RX \cite{Zheng_et_al_2012, li.etal_2013, yan.etal_2008, Liu_2010, Gorji_2010_et_al, Duofang_et_al_2008, Jinli_et_al_2008, Jin_et_al_2009, Chen_et_al_2010, Miao_et_al_2008, Gorji_et_al_2012, xia.etal_2007, xu.li_2007}.
The single-target localization problem using widely-spaced antenna arrays was investigated in \cite{He_et_al_2010, Niu_et_al_2012} and the multi-target case in \cite{YueAi_2014}. None of these works address the issues of blocking and multipath common in an indoor environment. The works closest to ours are \cite{Kim_Lee_2014} and \cite{Shen_and_Molisch_2013_2}, where MTL in a distributed MIMO radar setting is addressed. The experiments and the system model in \cite{Kim_Lee_2014} do not consider the effect of blocking, and a brute-force method is used for matching the DPs to targets, which is computationally infeasible for a large number of targets, as shown in Section~\ref{sec:algo}. On the other hand, \cite{Shen_and_Molisch_2013_2} considers the effect of blocking, but relies on the assumption of a constant and independent blocking probability for all DPs. In reality however, the DP blocking events in any environment are not mutually independent. As shown in Fig. \ref{fig:1}, the location of the two TXs is such that if one of them has line-of-sight (LoS) to the target, it is highly \emph{likely} that the other does as well. Similarly, if one of them is blocked with respect to the target, it is highly \emph{likely} that the other is as well. In other words, the DP blocking events are, in general, correlated and the extent of correlation depends on the network geometry. In this work, we investigate how correlated blocking can be exploited to obtain better location estimates for the targets. \begin{figure} \centering \includegraphics[scale=0.25]{intro_to_corr_block} \caption{Correlated blocking: An example} \label{fig:1} \end{figure} Intuitively, our approach works as follows: when three or more ellipses intersect at a point, we first assume that they are DPs. We then compute the joint probability that LoS exists to the TXs and RXs in question at the point of intersection. If the probability is sufficiently high, then we conclude that a target is present.
The main contributions of this work are as follows: \begin{itemize} \item The general problem of localizing all targets and scatterers in an unknown environment is cast as a Bayesian estimation problem. Such a fundamental formulation of the problem goes beyond the description in \cite{Shen_and_Molisch_2013_2} (and other papers, to the best of our knowledge). \item We show this problem to be ill-posed and proceed to derive a tractable approximation called the Bayesian MTL problem, where the objective is to localize only the targets, but not the scatterers. This is also a Bayesian estimation problem where the joint DP blocking distribution plays the role of a prior. \item We propose a polynomial time approximate algorithm to solve the Bayesian MTL problem, which can be used even when only empirical blocking statistics, obtained via measurements or simulations, are available. \end{itemize} This paper consists of six sections. In the system model in Section \ref{sec:sysmodel}, we define decision variables to decide if a multipath component (MPC) is a DP, IP or a noise peak. The generalized problem of localizing all targets and scatterers is formulated as a Bayesian estimation problem in Section \ref{sec:BLP}, where along with the target and scatterer locations, the aforementioned decision variables are the estimation parameters. Furthermore, this problem is shown to be ill-posed and a more tractable approximation called the Bayesian MTL problem is derived, where the objective is to localize only the targets. In Section \ref{sec:algo}, the brute force solution to the Bayesian MTL problem is shown to have exponential complexity in the number of targets and TX-RX pairs (TRPs). As a result, a sub-optimal polynomial time algorithm taking correlated blocking into account is proposed instead. Simulation as well as experimental results for the proposed algorithm are presented in Section \ref{sec:simres} and finally, Section \ref{sec:summary} concludes the paper. 
\textbf{Notation:} Vectors and scalars are represented using bold (e.g., $\mathbf{x}$) and regular (e.g., $x$) lowercase letters, respectively. In particular, $\mathbf{1}$ denotes the all-one vector. For a collection of scalars $\{ a_{ij} : i\in J_1, j \in J_2 \}$, where $J_1$ and $J_2$ are discrete index sets, $vec(a_{ij})$ denotes the column vector containing all $a_{ij}$, ordered first according to index $i$, followed by $j$ and so on. Similarly, $\sum\limits_{i,j}$ and $\prod\limits_{i,j}$ respectively denote summation and product over index $i$, followed by $j$ and so on. For positive integers $a$ and $b$, $a \mbox{ \rm mod } b$ denotes the modulo operator, i.e., the remainder of the operation $a/b$. The set of real numbers is denoted by $\mathbb{R}$. For $x\in \mathbb{R}$, $\lfloor x \rfloor$ denotes the greatest integer less than or equal to $x$. For continuous random variables $X$ and $Y$, $f(X,Y)$ denotes their joint probability density function (pdf), $f(X)$ the marginal pdf of $X$, and $f(X|Y)$ the conditional pdf of $X$, given $Y$. $\mathbb{P}(.)$ and $\mathbb{E}[.]$ denote the probability and expectation operators, respectively. \section{System Model} \label{sec:sysmodel} In this section, we introduce the formal notation to decide the identity of a MPC and model the correlated blocking of DPs. This lays the groundwork for the fundamental problem formulation that will then be solved in Sections \ref{sec:BLP} and \ref{sec:algo}. Consider a distributed MIMO radar with $M_{\rm TX}$ TXs and $M_{\rm RX}$ RXs, each equipped with a single omni-directional antenna and deployed in an unknown environment. An unknown number of stationary point targets\footnote{The point target assumption simplifies the analysis of the problem, as each target can give rise to at most one DP. The impact of real (i.e., non-point) targets can be seen in our experimental results, presented in Section~\ref{sec:exp}.} are present and the objective is to localize all of them.
We assume that the environment has non-target scatterers too, which can block the LoS between some target(s) and some TX(s) and/or RX(s), and/or give rise to IPs. All TX and RX locations are assumed to be known. The number of TRPs, denoted by $I$, equals $M_{\rm TX} M_{\rm RX}$. Unless otherwise mentioned, the convention throughout this work is that the $i$-th TRP $(i=1, \cdots, I)$ comprises the $i_T$-th TX and $i_R$-th RX, where $i_T = 1+(i-1) \mbox{ \rm mod }M_{\rm TX}$ and $i_R = \lfloor (i-1)/M_{\rm TX} \rfloor + 1$ (Table \ref{tab:TRPeg}). \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline TRP No. & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline TX No. & 1 & 2 & 3 & 1 & 2 & 3 & 1 & 2 & 3 \\ \hline RX No. & 1 & 1 & 1 & 2 & 2 & 2 & 3 & 3 & 3 \\ \hline \end{tabular} \caption{TRP indexing notation for the 3 TX, 3 RX case} \label{tab:TRPeg} \end{table} For ease of notation, we restrict our attention to the two-dimensional (2D) case, where all the TXs, RXs, scatterers and targets are assumed to lie in the horizontal plane. The extension to the 3D case is straightforward. Let the TX and the RX for the $i$-th TRP be located at $(c_i,d_i)$ and $(a_i,b_i)$, respectively. We assume that the TXs use orthogonal signals so that the RXs can distinguish between signals sent from different TXs. For each TRP, the RX extracts the channel impulse response from the measured receive signal; an MPC is assumed to exist at a particular delay (within the resolution limit of the RX) when the amplitude of the impulse response at that delay bin exceeds a threshold; alternately, a maximum likelihood (ML) estimator or other high-resolution algorithms can be used to extract the amplitudes and delays of all the MPCs \cite{CLEAN},\cite{Richter_2005}. All MPCs that do not involve a reflection off a target (e.g., TX$\rightarrow$scatterer$\rightarrow$RX) are assumed to be removed by a background cancellation technique.
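As a small illustrative sketch (the helper name is ours, not from the paper), the TRP indexing convention above reproduces Table \ref{tab:TRPeg} directly:

```python
def trp_to_tx_rx(i, m_tx):
    """Map the 1-indexed TRP number i to its (TX, RX) pair.

    Implements i_T = 1 + (i-1) mod M_TX and i_R = floor((i-1)/M_TX) + 1.
    """
    i_t = 1 + (i - 1) % m_tx
    i_r = (i - 1) // m_tx + 1
    return i_t, i_r

# Reproduce the 3 TX, 3 RX indexing of Table 1: the TX index cycles
# fastest, the RX index increments once per M_TX TRPs.
pairs = [trp_to_tx_rx(i, 3) for i in range(1, 10)]
```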
For stationary or even slow-moving targets, a simple way to achieve this is to measure the impulse responses for all the TRPs when no targets are present. This set of template signals can then be subtracted from the signals obtained when the target(s) are introduced, which would remove MPCs of the form TX$\rightarrow$scatterer$\rightarrow$RX since they appear twice \cite{salmi_molisch_2011}. Other background subtraction techniques for localization and tracking applications are described in \cite{Chen_Leung_Tian_2014},\cite{Zet_Esch_Jova_thoma_2015}. An MPC involving more than two reflections is assumed to be too weak to be detected. Finally, two or more MPCs could have their delays so close to one another that they are unresolvable due to the finite bandwidth. Under this model, each extracted MPC could be one or more of the following: \begin{itemize} \item[1.] A DP to one or more targets, which occurs when a target has LoS to both the TX and RX in question. \item[2.] An IP of the first kind, which is of the form TX$\rightarrow$target$\rightarrow$scatterer$\rightarrow$RX. \item[3.] An IP of the second kind, having the form TX$\rightarrow$scatterer$\rightarrow$target$\rightarrow$RX. \item[4.] A noise peak. \end{itemize} Each MPC gives rise to a time-of-arrival (ToA) estimate which, in turn, corresponds to a range estimate. If only additive white Gaussian noise (AWGN) is present at the RXs, then each ToA estimate is approximately perturbed by zero-mean Gaussian errors whose variance depends on the SNR via the Cram\'er-Rao lower bound (CRLB) and the choice of estimator \cite{Shen_et_al_2012}. For simplicity, it is assumed that all ToA estimation errors have the same variance $\hat{\sigma}^2$. The extension to the general case where the variance is different for each MPC is straightforward. Thus, for a DP, the true range of the target from its TRP is corrupted by AWGN of variance $\sigma^2=c^2\hat{\sigma}^2$, where $c$ is the speed of light in the environment.
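A minimal numerical sketch of the template-subtraction idea described above; all delay bins, amplitudes, the noise level, and the detection threshold below are illustrative assumptions rather than values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 200

# Template: impulse response measured with no targets present,
# containing only static clutter (TX -> scatterer -> RX paths).
template = np.zeros(n_bins)
template[[30, 75, 120]] = [1.0, 0.6, 0.4]

# Measurement with a target present: clutter + target echo + noise.
target_bin = 95
measured = template.copy()
measured[target_bin] += 0.8
measured += 0.01 * rng.standard_normal(n_bins)

# Subtracting the template removes the static clutter; the remaining
# target-related MPC is detected by simple thresholding.
residual = measured - template
detected = np.flatnonzero(np.abs(residual) > 0.3)
```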
Suppose the $i$-th TRP has $N_i$ MPCs extracted from its received signal. Let $r_{ij}$ denote the range of the $j$-th extracted MPC at the $i$-th TRP and let $\mathbf{r}_i=[r_{i1}, \hspace*{1mm} r_{i2}, \hspace*{1mm} \cdots, \hspace*{1mm} r_{iN_i}] \in \mathbb{R}^{N_i \times 1}$ denote the vector of range estimates from the $i$-th TRP. Similarly, let $\mathbf{r}=[\mathbf{r}_1, \hspace*{1mm} \mathbf{r}_2, \hspace*{1mm} \cdots, \hspace*{1mm} \mathbf{r}_I] \in \mathbb{R}^{(N_1+N_2+\cdots+N_I) \times 1}$ denote the stacked vector of range estimates from all TRPs. If $r_{ij}$ is a DP corresponding to a target at $(x_t,y_t)$, then the conditional pdf of $r_{ij}$, given $(x_t,y_t)$, is Gaussian and denoted by $f_{\rm DP}(r_{ij}|x_t,y_t)$ and has the following expression: \begin{align} \label{eq:f_DP} f_{\rm DP}(r_{ij}|x_t,y_t) &= \frac{1}{\sqrt{2\pi}\sigma}\exp\left[-\frac{(r_{ij}- r_i(x_t,y_t))^2}{2\sigma^2}\right] \\ \mbox{where,} \hspace{1mm} r_i(x_t,y_t) &= \sqrt{(x_t-a_i)^2+(y_t-b_i)^2} \notag \\ & ~ + \sqrt{(x_t-c_i)^2+(y_t-d_i)^2} \notag \end{align} $r_i(x_t,y_t)$ denotes the range of a target at $(x_t,y_t)$ from the $i$-th TRP. Similarly, let $f_{\rm IP,1}(r_{ij}|x_t,y_t,u_m,v_m)$ and $f_{\rm IP,2}(r_{ij}|x_t,y_t,u_m,v_m)$ denote the conditional IP pdfs of the first and second kind, respectively, given a target at $(x_t,y_t)$ and a scatterer at $(u_m,v_m)$.
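For concreteness, the bistatic range $r_i(x_t,y_t)$ and the DP likelihood in (\ref{eq:f_DP}) can be evaluated as follows (the TX/RX positions, target location, and $\sigma$ are arbitrary example values):

```python
import math

def dp_range(x, y, tx, rx):
    """Bistatic range r_i(x, y): TX -> target -> RX path length."""
    return math.dist((x, y), tx) + math.dist((x, y), rx)

def f_dp(r_ij, x, y, tx, rx, sigma=0.1):
    """Gaussian DP likelihood f_DP(r_ij | x, y)."""
    mu = dp_range(x, y, tx, rx)
    return math.exp(-((r_ij - mu) ** 2) / (2 * sigma**2)) / (
        math.sqrt(2 * math.pi) * sigma
    )

# Target at (2, 1), TX at the origin, RX at (4, 0).
tx, rx = (0.0, 0.0), (4.0, 0.0)
r_true = dp_range(2.0, 1.0, tx, rx)
```

As expected, the likelihood is maximized when the measured range $r_{ij}$ equals the true bistatic range, i.e., when the ellipse with the TX and RX at its foci passes through the target location.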
These pdfs are also Gaussian, \begin{align} f_{\rm IP,1}(r_{ij}|x_t,y_t,u_m,v_m) &= \notag \\ \frac{1}{\sqrt{2\pi}\sigma}\exp&\left[-\frac{(r_{ij}- l_i(x_t,y_t,u_m,v_m))^2}{2\sigma^2}\right] \\ f_{\rm IP,2}(r_{ij}|x_t,y_t,u_m,v_m) &= \notag \\ \frac{1}{\sqrt{2\pi}\sigma}\exp&\left[-\frac{(r_{ij}- m_i(x_t,y_t,u_m,v_m))^2}{2\sigma^2}\right] \\ \mbox{where}~ l_i(x_t,y_t,u_m,v_m) &= \sqrt{(c_i-x_t)^2+(d_i-y_t)^2} \notag \\ +\sqrt{(x_t-u_m)^2+(y_t-v_m)^2} &+ \sqrt{(u_m-a_i)^2 +(v_m-b_i)^2} \notag\\ m_i(x_t,y_t,u_m,v_m) &= \sqrt{(c_i-u_m)^2+(d_i-v_m)^2} \notag \\ +\sqrt{(u_m-x_t)^2+(v_m-y_t)^2} &+ \sqrt{(x_t-a_i)^2+(y_t-b_i)^2} \notag \end{align} $l_i(x_t,y_t,u_m,v_m)$ and $m_i(x_t,y_t,u_m,v_m)$ respectively denote the path length between the $i$-th TRP, a target at $(x_t,y_t)$ and a scatterer at $(u_m,v_m)$ for an IP of the first and second kind. Finally, the range of a noise peak is modelled as a uniform random variable in the interval $[0,R_{\rm obs}]$, where $R_{\rm obs}$ denotes the maximum observable range in the region of interest. Let the number of targets and scatterers be denoted by $T$ and $M$, respectively. To determine all the unknowns, every MPC needs to be accounted for. 
Hence, we define the following variables, \begin{align} \label{eq:ki} k_{it}&=\begin{cases} 1, &\mbox{if $t$-th target is NOT blocked to the $i$-th TRP} \\ 0, &\mbox{else} \end{cases}\\ \label{eq:gij} g_{imt}&=\begin{cases} 1, &\minibox{if $\exists$ an IP of the first kind between the $i$-th \\ TRP, $m$-th scatterer and $t$-th target} \\ 0, &\mbox{else} \end{cases}\\ \label{eq:hij} h_{imt}&=\begin{cases} 1, &\minibox{if $\exists$ an IP of the second kind between the $i$-th \\ TRP, $m$-th scatterer and $t$-th target}\\ 0, &\mbox{else} \end{cases} \end{align} The values of $k_{it}$, $g_{imt}$ and $h_{imt}$ ($i\in \{1, \cdots, I\}$, $m\in\{1, \cdots, M\}$, $t\in\{1, \cdots, T\}$) capture the ground truth regarding the existence of DPs and IPs and depend on the map of the environment, which is unknown. Therefore, these quantities need to be estimated from $\mathbf{r}$. To do this, we define the following decision variables to determine if an MPC $r_{ij}$ is a DP, IP or noise peak, \begin{align} \tilde{k}_{ijt} &= \begin{cases} 1, &\mbox{if $r_{ij}$ is a DP to the $t$-th target} \\ 0, &\mbox{else} \end{cases}\\ \tilde{g}_{ijmt} &= \begin{cases} 1, &\minibox{if $r_{ij}$ is an IP of the first kind between the \\ $m$-th scatterer and $t$-th target} \\ 0, &\mbox{else} \end{cases}\\ \tilde{h}_{ijmt} &= \begin{cases} 1, &\minibox{if $r_{ij}$ is an IP of the second kind between the \\ $m$-th scatterer and $t$-th target} \\ 0, &\mbox{else} \end{cases}\\ \tilde{z}_{ij} &= \begin{cases} 1, &\mbox{if $r_{ij}$ is a noise peak} \\ 0, &\mbox{else} \end{cases} \end{align} Since two or more resolvable MPCs cannot be DPs to the same target or IPs of a particular kind between a given target-scatterer pair, it follows that the estimates of $k_{it}$, $g_{imt}$ and $h_{imt}$, denoted by $\hat{k}_{it}$, $\hat{g}_{imt}$ and $\hat{h}_{imt}$, respectively, are given by: \begin{align} \label{eq:constraints_k} \hat{k}_{it}=\displaystyle\sum\limits_{j=1}^{N_i} \tilde{k}_{ijt} \\ 
\label{eq:constraints_g} \hat{g}_{imt}=\displaystyle\sum\limits_{j=1}^{N_i} \tilde{g}_{ijmt} \\ \label{eq:constraints_h} \hat{h}_{imt}=\displaystyle\sum\limits_{j=1}^{N_i} \tilde{h}_{ijmt} \end{align} Before concluding this section, we define the following vectors, which shall be useful when the Bayesian MTL problem is defined in the next section: \begin{align} \label{eq:reqvecs} \mbox{Ground truth:}~ \mathbf{k} &= vec(k_{it})\\ \mathbf{g} &= vec(g_{imt}) \\ \mathbf{h} &= vec(h_{imt}) \\ \mbox{DP/IP/noise peak decisions:}~ \tilde{\mathbf{k}} &= vec(\tilde{k}_{ijt})\\ \tilde{\mathbf{g}}&= vec(\tilde{g}_{ijmt})\\ \tilde{\mathbf{h}} &= vec(\tilde{h}_{ijmt})\\\tilde{\mathbf{z}} &= vec(\tilde{z}_{ij}) \\ \mbox{Estimates of ground truth:}~ \hat{\mathbf{k}}&=vec(\hat{k}_{it}) \\\hat{\mathbf{g}}&=vec(\hat{g}_{imt}) \\\hat{\mathbf{h}}&=vec(\hat{h}_{imt}) \end{align} \section{Bayesian MTL} \label{sec:BLP} Using the notation from the previous section, the MTL problem in multipath environments with correlated blocking is formulated as a Bayesian estimation problem in this section. We first show that the scatterer locations cannot be determined uniquely, in general, as they are not point objects. Then, we show that the distribution of $\mathbf{k}$ in (\ref{eq:reqvecs}) captures correlated blocking in its entirety and acts as a prior. We also assume a single error at most between the entries of $\hat{\mathbf{k}}_t$ and $\mathbf{k}_t$ in order to obtain a tractable algorithm for the MTL problem in Section~\ref{sec:algo}. Let $\Theta_{\rm tar}=\{(x_t,y_t):t=1,\cdots,T\}$ and $\Theta_{\rm sc}=\{(u_m,v_m):1,\cdots,M\}$ denote the collection of target and scatterer locations, respectively, and let $\tilde{\mathbf{p}}_{\rm dec}=[\tilde{\mathbf{k}},\tilde{\mathbf{g}},\tilde{\mathbf{h}}, \tilde{\mathbf{z}}]$ denote the vector of decision variables.
Using the terminology defined in Section \ref{sec:sysmodel}, determining the location of all targets and scatterers can be formulated as a Bayesian estimation problem in the following manner, \begin{align} \underset{\substack{T, M, \Theta_{\rm tar}, \Theta_{\rm sc}, \\\tilde{\mathbf{p}}_{\rm dec}, \mathbf{k},\mathbf{g},\mathbf{h}}} {\mbox{maximize}} ~ f(\mathbf{r}|\tilde{\mathbf{p}}_{\rm dec},\Theta_{\rm tar}&,\Theta_{\rm sc},\mathbf{k},\mathbf{g},\mathbf{h}) \times f(\Theta_{\rm tar},\Theta_{\rm sc}) \notag \\ \label{eq:Bayes} \times \mathbb{P}(\hat{\mathbf{k}},\hat{\mathbf{g}},\hat{\mathbf{h}}|\Theta_{\rm tar},\Theta_{\rm sc}&,\mathbf{k},\mathbf{g},\mathbf{h}) \times \mathbb{P}(\mathbf{k},\mathbf{g},\mathbf{h}|\Theta_{\rm tar},\Theta_{\rm sc}) \\ \mbox{subject to}~ (\ref{eq:constraints_k}), (\ref{eq:constraints_g}), (\ref{eq:constraints_h})& \notag \\ \label{eq:con2} \sum_{j,t} \tilde{k}_{ijt} + \sum_{j,m,t} (\tilde{g}_{ijmt}+&\tilde{h}_{ijmt})+ \sum_{j} \tilde{z}_{ij} \geq N_i , ~ \forall i \\ \label{eq:con3} \tilde{k}_{ijt}, \tilde{g}_{ijmt}, \tilde{h}_{ijmt}, \hat{k}_{it}, \hat{g}_{imt}, &\hat{h}_{imt} \in \{0,1\}, ~ \forall i,j,t,m \end{align} where, the first term in the objective (\ref{eq:Bayes}) denotes the likelihood function and the remaining three terms denote the prior. A detailed explanation of all the terms and constraints in (\ref{eq:Bayes})-(\ref{eq:con3}) is provided below: \begin{itemize} \item[(a)] The term $f(\Theta_{\rm tar},\Theta_{\rm sc})$ denotes the \emph{prior} joint distribution of the target and scatterer locations. It is reasonable to assume that the target and scatterer locations are independent of each other. Hence, $f(\Theta_{\rm tar},\Theta_{\rm sc})=f(\Theta_{\rm tar})f(\Theta_{\rm sc})$. In addition, $f(\Theta_{\rm tar})$ and $f(\Theta_{\rm sc})$ are both assumed to be uniform pdfs over the region of interest. 
\item[(b)] The discrete distribution $\mathbb{P}(\mathbf{k},\mathbf{g},\mathbf{h}|\Theta_{\rm tar},\Theta_{\rm sc})$ represents the geometry of the environment, such as the blocked DPs for each TRP, the IPs (if any) between a target-scatterer pair etc. Let $\Theta_{\rm TX}=\{(c_i,d_i):i=1,\cdots,I\}$ and $\Theta_{\rm RX}=\{(a_i,b_i):i=1,\cdots,I\}$ denote the collection of TX and RX locations, respectively. $\Theta_{\rm TX}$ and $\Theta_{\rm RX}$ are known quantities and for a given set of values for $\Theta_{\rm tar}$ and $\Theta_{\rm sc}$, the set $\Theta_{\rm env}=\{\Theta_{\rm TX}, \Theta_{\rm RX}, \Theta_{\rm tar}, \Theta_{\rm sc}\}$ completely describes all the propagation paths in the environment and the values of $\mathbf{k}$, $\mathbf{g}$ and $\mathbf{h}$ are deterministic functions of $\Theta_{\rm env}$, denoted by $\mathbf{k}^{\rm(det)}(\Theta_{\rm env})$, $\mathbf{g}^{\rm(det)}(\Theta_{\rm env})$ and $\mathbf{h}^{\rm(det)}(\Theta_{\rm env})$, respectively\footnote{This is akin to ray-tracing}. Hence, \begin{align} \label{eq:env_det} \mathbb{P}(\mathbf{k},\mathbf{g},\mathbf{h}|\Theta_{\rm tar},\Theta_{\rm sc}) &= \mathbbm{1}_{\mathbf{k}^{\rm (det)}(\Theta_{\rm env})}(\mathbf{k})\times \mathbbm{1}_{\mathbf{g}^{\rm (det)}(\Theta_{\rm env})}(\mathbf{g}) \notag \\ &\hspace{5mm}\times\mathbbm{1}_{\mathbf{h}^{\rm (det)}(\Theta_{\rm env})}(\mathbf{h}) \end{align} where $\mathbbm{1}_{\mathbf{y}}(\mathbf{x})$ equals 1 if $\mathbf{x}=\mathbf{y}$ and 0, otherwise. \item[(c)] The estimates $\hat{k}_{it}$, $\hat{g}_{imt}$ and $\hat{h}_{imt}$ may differ from their respective ground truths, $k_{it}$, $g_{imt}$ and $h_{imt}$ due to noise or IPs. 
Assuming that $\hat{k}_{it}$ (or $\hat{g}_{imt}$, $\hat{h}_{imt}$) is conditionally independent of other estimates, given $k_{it}$ (or $g_{imt}$, $h_{imt}$), we get \begin{align} \label{eq:env_det2} &\hspace{5mm} \mathbb{P}(\hat{\mathbf{k}},\hat{\mathbf{g}},\hat{\mathbf{h}}|\Theta_{\rm tar},\Theta_{\rm sc},\mathbf{k},\mathbf{g},\mathbf{h}) \notag \\ &= \mathbb{P}(\hat{\mathbf{k}},\hat{\mathbf{g}},\hat{\mathbf{h}}|\mathbf{k}^{\rm (det)}(\Theta_{\rm env}), \mathbf{g}^{\rm (det)}(\Theta_{\rm env}), \mathbf{h}^{\rm (det)}(\Theta_{\rm env})) \\ &= \displaystyle\prod\limits_{i,t,m} \mathbb{P}(\hat{k}_{it}|k^{\rm (det)}_{it}(\Theta_{\rm env})) \times \mathbb{P}(\hat{g}_{imt}|g^{\rm (det)}_{imt}(\Theta_{\rm env})) \notag\\ &\hspace{25mm} \times \mathbb{P}(\hat{h}_{imt}|h^{\rm (det)}_{imt}(\Theta_{\rm env})) \end{align} where (\ref{eq:env_det2}) follows from (\ref{eq:env_det}). \item[(d)] $\tilde{\mathbf{p}}_{\rm dec}$ is a sufficient statistic for estimating $\mathbf{k}$, $\mathbf{g}$ and $\mathbf{h}$. Hence, the likelihood function, $f(\mathbf{r}|\tilde{\mathbf{p}}_{\rm dec},\Theta_{\rm tar},\Theta_{\rm sc},\mathbf{k},\mathbf{g},\mathbf{h})$, equals $f(\mathbf{r}|\tilde{\mathbf{p}}_{\rm dec},\Theta_{\rm tar},\Theta_{\rm sc})$. Further, $f(\mathbf{r}|\tilde{\mathbf{p}}_{\rm dec},\Theta_{\rm tar},\Theta_{\rm sc})$ decomposes into product form as the noise terms on each $r_{ij}$ are mutually independent. 
Thus, \begin{align} f(\mathbf{r}|\tilde{\mathbf{p}}_{\rm dec},\Theta_{\rm tar},\Theta_{\rm sc},\mathbf{k},\mathbf{g},\mathbf{h}) &= f(\mathbf{r}|\tilde{\mathbf{p}}_{\rm dec},\Theta_{\rm tar},\Theta_{\rm sc}) \notag\\ &= \displaystyle\prod\limits_{i,j} f(r_{ij}|\tilde{\mathbf{p}}_{\rm dec},\Theta_{\rm tar},\Theta_{\rm sc}) \notag\\ \mbox{where,}~ f(r_{ij}|\tilde{\mathbf{p}}_{\rm dec},\Theta_{\rm tar},\Theta_{\rm sc}) &= \displaystyle\prod\limits_{t,m} (f_{\rm DP}(r_{ij}|x_t,y_t))^{\tilde{k}_{ijt}} \notag \\ \times (f_{\rm IP,1}&(r_{ij}|x_t,y_t,u_m,v_m))^{\tilde{g}_{ijmt}} \notag\\ \times (f_{\rm IP,2}&(r_{ij}|x_t,y_t,u_m,v_m))^{\tilde{h}_{ijmt}} \notag\\ &\times \left(\frac{1}{R_{\rm obs}}\right)^{\tilde{z}_{ij}} \end{align} \item[(e)] Finally, constraint (\ref{eq:con2}) ensures that the number of DPs, IPs and noise peaks received at the $i$-th TRP is at least $N_i$, the number of resolvable MPCs extracted at the $i$-th TRP. \end{itemize} After taking natural logarithms, (\ref{eq:Bayes}) can be re-written as follows to obtain problem $P1$, where $\mathbf{k}$, $\mathbf{g}$ and $\mathbf{h}$ are no longer unknowns due to (\ref{eq:env_det}): \begin{align} \label{eq:logBayes} &P1:\underset{T,M,\tilde{\mathbf{p}}_{\rm dec},\Theta_{\rm tar},\Theta_{\rm sc}}{\mbox{minimize}} ~ \frac{1}{\sigma^2}\left[\displaystyle\sum\limits_{i,j,t} \tilde{k}_{ijt}(r_{ij} -r_i(x_t,y_t))^2 \right] \notag \\ & + \frac{1}{\sigma^2}\left[ \displaystyle\sum\limits_{i,j,t,m} \tilde{g}_{ijmt}(r_{ij}-l_i(x_t,y_t,u_m,v_m))^2 \right] \notag \\ & + \frac{1}{\sigma^2}\left[ \displaystyle\sum\limits_{i,j,t,m} \tilde{h}_{ijmt}(r_{ij}-m_i(x_t,y_t,u_m,v_m))^2\right]\notag\\ &+ \left[\displaystyle\sum\limits_{i,j,t} \tilde{k}_{ijt} + \displaystyle\sum\limits_{i,j,m,t} (\tilde{g}_{ijmt}+\tilde{h}_{ijmt})\right] \log\sqrt{2\pi}\sigma \notag \\ & + \left(\displaystyle\sum\limits_{i,j} \tilde{z}_{ij}\right) \log R_{\rm obs} - \sum_{i,t} \log\mathbb{P}(\hat{k}_{it}|k^{\rm (det)}_{it}(\Theta_{\rm env})) \notag\\ &
- \sum_{i,m,t} \log\mathbb{P}(\hat{g}_{imt}|g^{\rm (det)}_{imt}(\Theta_{\rm env})) \notag \\ &- \sum_{i,m,t} \log\mathbb{P}(\hat{h}_{imt}|h_{imt}^{\rm (det)}(\Theta_{\rm env})) \\ & \mbox{subject to}~ (\ref{eq:constraints_k}), (\ref{eq:constraints_g}), (\ref{eq:constraints_h}), (\ref{eq:con2}), (\ref{eq:con3}) \notag \end{align} Typically, $\Theta_{\rm sc}$ represents a finite collection of points belonging to \emph{distributed} non-point objects (e.g., a wall), where reflection takes place. A minimum of three reflections is needed at each $(u_m,v_m)$ for uniquely determining $\Theta_{\rm sc}$, a condition that need not be satisfied in all circumstances. Hence, $P1$ is ill-posed if the map of the environment is unknown\footnote{If the map of the environment is known, then $P1$ is not ill-posed and the IPs can be re-cast as \emph{virtual} DPs, obtained from virtual TXs and RXs \cite{setlur.etal_2012}.}. To make $P1$ tractable, we restrict ourselves to localizing only the targets by retaining those terms and constraints involving just the DPs in (\ref{eq:Bayes})-(\ref{eq:con3}). This gives rise to the following approximation, $P2$, which is also a Bayesian estimation problem that accounts for all the DPs: \begin{align} \label{eq:Bayesobj_approx1} &P2:\underset{T,\tilde{\mathbf{k}},\Theta_{\rm tar}}{\mbox{minimize}} ~ \left(\displaystyle\sum\limits_{i,j,t} \tilde{k}_{ijt} \right) \log\sqrt{2\pi}\sigma - \log \mathbb{P}(\mathbf{k}|\Theta_{\rm tar}) \notag \\ &+ \frac{1}{\sigma^2}\left[\displaystyle\sum\limits_{i,j,t} \tilde{k}_{ijt}(r_{ij} -r_i(x_t,y_t))^2 \right] - \sum_{i,t} \log\mathbb{P}(\hat{k}_{it}|k_{it}) \\ \label{eq:con1_approx1} & \mbox{subject to} ~ \tilde{k}_{ijt}, \hat{k}_{it} \in \{0,1\}, ~ \forall i,j,t \\ \label{eq:con2_approx1} & \hspace{15mm}\sum_{j} \tilde{k}_{ijt} = \hat{k}_{it}, ~ \forall i,t \end{align} The joint DP blocking distribution $\mathbb{P}(\mathbf{k}|\Theta_{\rm tar})$ in $P2$ is no longer a discrete-delta function, like (\ref{eq:env_det}).
Instead, it depends on the distribution of scatterer locations in the environment. From (\ref{eq:ki}), $k_{it}=0$ if either the TX or the RX of the $i$-th TRP does not have LoS to $(x_t,y_t)$; hence, $k_{it}$ can be expressed as a product of two terms in the following manner: \begin{align} k_{it} &= v_{i_T,t} \times w_{i_R,t}, \\ \mbox{where,}~ v_{i_T, t} &= \begin{cases} & 1 , ~\mbox{if the $i_T$-th TX has LoS to $(x_t,y_t)$} \\ & 0 , ~\mbox{else} \end{cases}\notag\\ w_{i_R, t} &= \begin{cases} & 1 , ~\mbox{if the $i_R$-th RX has LoS to $(x_t,y_t)$} \\ & 0 , ~\mbox{else} \end{cases}\notag \end{align} $k_{it}$ can be interpreted as a Bernoulli random variable when considering an ensemble of settings in which the scatterers are placed at random. For vectors $\mathbf{k}_t = [k_{1t}, \hspace{1mm}\cdots, \hspace{1mm} k_{It}]$, $\mathbf{v}_t = [v_{1,t}, \hspace{1mm}\cdots, \hspace{1mm} v_{M_{\rm TX}, t}]$ and $\mathbf{w}_t = [w_{1,t}, \hspace{1mm}\cdots, \hspace{1mm} w_{M_{\rm RX}, t}]$, it can be seen that $\mathbf{k}_t = \mathbf{w}_t \otimes \mathbf{v}_t$, where $\otimes$ denotes the Kronecker product. $\mathbf{k}_t$ is a vector of dependent Bernoulli random variables (Fig. \ref{fig:1}) and shall henceforth be referred to as the blocking vector at $(x_t,y_t)$. Note that $\mathbf{k}={\rm vec}(k_{it})=[\mathbf{k}_1, \hspace{1mm}\cdots, \hspace{1mm} \mathbf{k}_T]$ and therefore, $\mathbb{P}(\mathbf{k}|\Theta_{\rm tar})=\mathbb{P}(\mathbf{k}_1; \cdots; \mathbf{k}_T)$. In general, two or more blocking vectors may also be dependent as nearby targets can experience similar blocking. Thus, the joint distribution $\mathbb{P}(\mathbf{k}_1; \cdots; \mathbf{k}_T)$ captures correlated blocking in its entirety. Consequently, target-by-target localization is not optimal, in general.
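As a quick numerical check of the Kronecker structure (a minimal sketch, with a hypothetical $3$-TX, $3$-RX arrangement; not the paper's implementation), the blocking vector can be built with NumPy and the consistency of an arbitrary binary vector tested by brute-force factorization:

```python
import itertools
import numpy as np

def blocking_vector(w, v):
    """k_t = w_t (x) v_t: per-TRP LoS indicators from the per-RX vector w
    and the per-TX vector v, via the Kronecker product."""
    return np.kron(w, v)

def is_consistent(k, n_tx, n_rx):
    """A binary vector is a physically realizable blocking vector iff it
    factors as w (x) v for some binary w, v (brute-force search)."""
    for w in itertools.product((0, 1), repeat=n_rx):
        for v in itertools.product((0, 1), repeat=n_tx):
            if np.array_equal(np.kron(w, v), np.asarray(k)):
                return True
    return False

# 3 TXs, 3 RXs: the third TX is blocked at the target, all RXs have LoS.
k = blocking_vector([1, 1, 1], [1, 1, 0])
print(k.tolist())                                         # [1, 1, 0, 1, 1, 0, 1, 1, 0]
print(is_consistent([1, 1, 0, 1, 1, 1, 1, 0, 0], 3, 3))   # False: no factorization
```

Enumerating all $\mathbf{w}\otimes\mathbf{v}$ products for $M_{\rm TX}=M_{\rm RX}=3$ yields exactly $(2^3-1)(2^3-1)+1=50$ distinct vectors, in agreement with the count of consistent blocking vectors.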
However, for ease of computation, we resort to such an approach in this paper, thereby implicitly assuming independent blocking vectors at distinct locations, i.e., $\mathbb{P}(\mathbf{k}|\Theta_{\rm tar}) \approx \displaystyle\prod\limits_t \mathbb{P}(\mathbf{k}_t)$. The generalization to joint-target localization will be described in a future work. Among the $2^I$ possible values, $\mathbf{k}_t$ can only take on $(2^{M_{\rm TX}}-1)(2^{M_{\rm RX}}-1)+1$ physically realizable values, which can be expressed in the form $\mathbf{w}_t \otimes \mathbf{v}_t$ (e.g., $\mathbf{k}_t=[1~1~0~1~1~0~1~1~0]=[1~1~1]\otimes[1~1~0]$ for the TRP indexing notation in Table~\ref{tab:TRPeg}). These are referred to as \emph{consistent} blocking vectors while the remaining values are \emph{inconsistent} (e.g., $\mathbf{k}_t=[1~1~0~1~1~1~1~0~0]$). If $\mathbf{k}_t$ is inconsistent, then $\mathbb{P}(\mathbf{k}_t)=0$. To characterize $\mathbb{P}(\hat{k}_{it}|k_{it})$, a distinction between two kinds of estimation errors needs to be made: \begin{itemize} \item[a)] The DP corresponding to the $t$-th target at the $i$-th TRP may not be detected if the noise pushes the range estimate far away from the true value. As a result, $\hat{k}_{it}=0$ when $k_{it}=1$. If the noise is independent and identically distributed (i.i.d.) for all TRPs, we may assume that $\mathbb{P}(\hat{k}_{it}=0|k_{it}=1)=\rho_{01}$ ($\forall \hspace{2mm} i,t$), where $\rho_{01}$ is determined by the SNR (signal-to-noise ratio) and the ToA estimator. \item[b)] If the DP for the $t$-th target at the $i$-th TRP is blocked, but a noise peak or IP is mistaken for a DP because it has the right range, then $\hat{k}_{it}=1$ and $k_{it}=0$. $\mathbb{P}(\hat{k}_{it}=1|k_{it}=0)$ depends on the scatterer distribution and varies according to TX, RX and target locations. However, in the absence of IP statistics, we make the simplifying assumption that $\mathbb{P}(\hat{k}_{it}=1|k_{it}=0)=\rho_{10}$, for all $i,t$.
The availability of empirical IP statistics would obviously improve localization performance. \end{itemize} Let $\hat{\mathbf{k}}_t = [\hat{k}_{1t}, \hspace{1mm} \cdots, \hspace{1mm} \hat{k}_{It}]$ denote the estimated blocking vector at $(x_t,y_t)$. While $\hat{\mathbf{k}}_t$ can, in principle, take on all $2^I$ values, a false alarm is less likely if $\hat{\mathbf{k}}_t$ is a short Hamming distance away from a consistent vector having high probability. Let $\mathcal{K}$ denote the set of consistent blocking vectors. We restrict $\hat{\mathbf{k}}_t$ to be at most a unit Hamming distance from some element in $\mathcal{K}$. This assumption is reasonable when the number of scatterers is small and the SNR at all RXs is sufficiently high. Given $\hat{\mathbf{k}}_t$, let $\mathcal{K}_{t} \subseteq \mathcal{K}$ denote the set of consistent vectors that are at most a unit Hamming distance away from $\hat{\mathbf{k}}_t$. Then, \begin{align} \mathbb{P}(\hat{\mathbf{k}}_t)&= \displaystyle\sum\limits_{\mathbf{k}_t \in \mathcal{K}_t} \mathbb{P}(\hat{\mathbf{k}}_t|\mathbf{k}_t)\mathbb{P}(\mathbf{k}_t) \notag\\ \label{eq:hatk} &= \displaystyle\sum\limits_{\mathbf{k}_t \in \mathcal{K}_t} \left(\displaystyle\prod\limits_i \mathbb{P}(\hat{k}_{it}|k_{it}) \right) \mathbb{P}(\mathbf{k}_t) \\ \label{eq:kerror} &\approx \begin{cases} \displaystyle\sum_{\mathbf{k}_t \in \mathcal{K}_{t}} \rho_{01}^{\eta_{01}} (1-\rho_{01})^{\eta_{11}} \rho_{10}^{\eta_{10}} (1-\rho_{10})^{\eta_{00}} \mathbb{P}(\mathbf{k}_t), \\ \hspace{30mm} \mbox{if $\mathcal{K}_{t}$ is non-empty} \\ 0 , \hspace{26mm} \mbox{otherwise} \\ \end{cases} \\ \mbox{where,} ~ \eta_{01} &= |\{ i: \hat{k}_{it}=0; k_{it} = 1 \}| \notag\\ \eta_{11} &= |\{ i: \hat{k}_{it}=1; k_{it} = 1 \}|\notag\\ \eta_{10} &= |\{ i: \hat{k}_{it}=1; k_{it} = 0 \}| \notag\\ \eta_{00} &= |\{ i: \hat{k}_{it}=0; k_{it} = 0 \}| \notag \end{align} Using (\ref{eq:hatk}) and assuming independent blocking vectors at distinct points (i.e., target-by-target 
detection), $P2$ can be reduced to the Bayesian MTL problem $P3$, given below: \begin{align} \label{eq:Bayesobj_approx2} &P3: \underset{T,\tilde{\mathbf{k}},\Theta_{\rm tar}}{\mbox{minimize}}~ \left[ \frac{1}{\sigma^2}\left(\displaystyle\sum\limits_{i,j,t} \tilde{k}_{ijt}(r_{ij} -r_i(x_t,y_t))^2 \right) \right. \notag\\ &\left. + \left(\displaystyle\sum\limits_{i,j,t} \tilde{k}_{ijt} \right) \log\sqrt{2\pi}\sigma \right] - \displaystyle\sum\limits_t \log \mathbb{P}(\hat{\mathbf{k}}_t) \\ & \mbox{subject to} ~ (\ref{eq:con1_approx1}), (\ref{eq:con2_approx1}) \notag \end{align} A \emph{matching} $q_t(\mathbf{r})=\{r_{ij} \in \mathbf{r} | \tilde{k}_{ijt}=1 \}$ is the set of DPs corresponding to the $t$-th target. Given $q_t(\mathbf{r})$ and a point $(x_t,y_t)$, the term in square brackets in (\ref{eq:Bayesobj_approx2}) determines whether the ellipses corresponding to the MPCs in $q_t(\mathbf{r})$ pass through $(x_t,y_t)$ or not. The other term in (\ref{eq:Bayesobj_approx2}) plays the role of a prior by determining the probability of the blocking vector, $\hat{\mathbf{k}}_t$, obtained from $q_t(\mathbf{r})$, at $(x_t,y_t)$. The objective in (\ref{eq:Bayesobj_approx2}) is minimized only when both these quantities are small. To solve $P3$, a mechanism for detecting DPs is required. Since the IP distribution is unknown, none of the conventional tools such as Bayesian, minimax or Neyman-Pearson hypothesis testing can be used for this purpose. In the next section, we describe our DP detection technique and propose a polynomial-time algorithm to solve $P3$. \section{MTL Algorithm using Blocking Statistics} \label{sec:algo} In this section, we define a likelihood function for identifying DPs that enables us to obtain the matchings required for solving $P3$ in a tractable manner.
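Before turning to the algorithm, note that the prior term $\mathbb{P}(\hat{\mathbf{k}}_t)$ of (\ref{eq:hatk})-(\ref{eq:kerror}) is straightforward to evaluate. The sketch below is illustrative only: the prior over consistent blocking vectors and the error rates $\rho_{01}$, $\rho_{10}$ are placeholder values, not measured statistics.

```python
def p_khat(k_hat, prior, rho01, rho10):
    """P(k_hat_t): mixture over consistent blocking vectors k_t within unit
    Hamming distance of k_hat, weighted by the per-TRP error probabilities."""
    total = 0.0
    for k, p_k in prior.items():          # prior: {consistent k_t: P(k_t)}
        if sum(a != b for a, b in zip(k_hat, k)) > 1:
            continue                      # k_t lies outside the set K_t
        n01 = sum(h == 0 and t == 1 for h, t in zip(k_hat, k))  # missed DPs
        n11 = sum(h == 1 and t == 1 for h, t in zip(k_hat, k))
        n10 = sum(h == 1 and t == 0 for h, t in zip(k_hat, k))  # false DPs
        n00 = sum(h == 0 and t == 0 for h, t in zip(k_hat, k))
        total += rho01**n01 * (1 - rho01)**n11 * rho10**n10 * (1 - rho10)**n00 * p_k
    return total

# Hypothetical prior over two consistent vectors (I = 4 TRPs).
prior = {(1, 1, 1, 1): 0.8, (1, 0, 1, 0): 0.2}
print(p_khat((0, 1, 1, 1), prior, rho01=0.01, rho10=0.01))  # one missed DP
print(p_khat((0, 0, 0, 0), prior, rho01=0.01, rho10=0.01))  # 0.0: no k_t within distance 1
```

When no consistent vector lies within a unit Hamming distance, the function returns zero, which is exactly the degenerate branch of (\ref{eq:kerror}).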
The number of matchings possible for $T$ targets, $M$ scatterers and $I$ TRPs is $({I\choose 3} N^3 + {I\choose 4} N^4 + \cdots + {I\choose I} N^I)^T$, where $N=(2M+1)T$ is an upper bound on the number of MPCs extracted at each TRP, ignoring noise peaks. The computational complexity of a brute-force search over all possible matchings for solving $P3$ is $O(N^{IT})$, which is intractable for a large number of TRPs and/or targets. To obtain accurate matchings in a tractable manner, we employ an iterative approach. Consider, without loss of generality, a matching $q_t^{(i-1)}(\mathbf{r})$ for the $t$-th target consisting of MPCs from the first $i-1$ TRPs ($3 \leq i \leq I$). The size of $q_t^{(i-1)}(\mathbf{r})$ is at most $i-1$. Let $(\hat{x}_t^{(i-1)},\hat{y}_t^{(i-1)})$ denote the estimate of the target location obtained from $q_t^{(i-1)}(\mathbf{r})$ (e.g., using the two-step estimation method \cite{Shen_et_al_2012}). For an MPC $r_{ij_i}$ from the $i$-th TRP, let $q_{t,\rm temp}^{(i)}(\mathbf{r}) = q_{t}^{(i-1)}(\mathbf{r}) \cup r_{ij_i}$ and let $\hat{\mathbf{k}}_t^{(i)}$ denote the $i$-length partial blocking vector at $(\hat{x}_t^{(i-1)},\hat{y}_t^{(i-1)})$, obtained from $q_{t,\rm temp}^{(i)}(\mathbf{r})$. If $q_{t,\rm temp}^{(i)}(\mathbf{r})$ consists entirely of DPs from $(x_t,y_t)$, then (i) the ellipses corresponding to its constituent MPCs should pass \emph{close} to $(x_t,y_t)$, and (ii) the blocking vector $\hat{\mathbf{k}}_t^{(i)}$ should have high probability.
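Conditions (i) and (ii) translate directly into an accept/reject test for a candidate MPC. The sketch below is a hedged illustration: the bistatic-range helper, the threshold values and the coordinates are assumptions, and the blocking likelihood is passed in as a precomputed number.

```python
import math

def bistatic_range(tx, rx, p):
    """TX -> target -> RX path length for one TRP (an ellipse of constant range)."""
    return math.dist(tx, p) + math.dist(p, rx)

def accept_mpc(r_ij, tx, rx, p_est, sigma_L, neglog_p_khat, delta, mu):
    """Accept r_ij into the matching iff (i) its ellipse passes close to the
    current location estimate p_est and (ii) the updated blocking vector is likely."""
    L_E = r_ij - bistatic_range(tx, rx, p_est)   # ellipse residual
    if abs(L_E / sigma_L) > delta:
        return False                             # condition (i) fails
    return neglog_p_khat <= mu                   # condition (ii)

# Hypothetical geometry: an MPC only 5 mm off the predicted bistatic range.
tx, rx, p_est = (0.0, 5.0), (4.0, 5.0), (2.0, 0.0)
r_true = bistatic_range(tx, rx, p_est)
print(accept_mpc(r_true + 0.005, tx, rx, p_est, 0.01, 0.4, delta=3.0, mu=1.0))  # True
print(accept_mpc(r_true + 0.100, tx, rx, p_est, 0.01, 0.4, delta=3.0, mu=1.0))  # False
```

An MPC whose ellipse misses the estimate by ten standard deviations is rejected regardless of how probable the resulting blocking vector is.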
This motivates the definition of a blocking-aware vector likelihood function, $\mathbf{L_B}(q_{t,\rm temp}^{(i)}(\mathbf{r}))$, defined as follows: \begin{align} \label{eq:newLfnc} \mathbf{L_B}(q_{t,\rm temp}^{(i)}(\mathbf{r}))&= \left(\left|\frac{L_{\rm E}(q_{t,\rm temp}^{(i)}(\mathbf{r}))}{\sigma(q_{t,\rm temp}^{(i)}(\mathbf{r}))}\right|, -\log \mathbb{P} (\hat{\mathbf{k}}^{(i)}_t)\right) \\ \label{eq:oldLfnc} \mbox{where,}~ L_{\rm E}(q_{t,\rm temp}^{(i)}(\mathbf{r}))&= r_{ij_i}-r_i(\hat{x}_t^{(i-1)},\hat{y}_t^{(i-1)}) \end{align} and $\sigma(q_{t,\rm temp}^{(i)}(\mathbf{r}))$ is the standard deviation of $L_{\rm E}(q_{t,\rm temp}^{(i)}(\mathbf{r}))$. If the ellipses corresponding to the MPCs in $q_{t,\rm temp}^{(i)}(\mathbf{r})$ pass through the vicinity of $(x_t,y_t)$, then $L_{\rm E}(q_{t,\rm temp}^{(i)}(\mathbf{r}))$ should be very small in magnitude. Under this condition, it can be shown by a Taylor's series approximation that $L_{\rm E}(q_{t,\rm temp}^{(i)}(\mathbf{r}))$ is a zero-mean Gaussian random variable \cite{Shen_and_Molisch_2013_2}. Hence, if $|L_{\rm E}(q_{t,\rm temp}^{(i)}(\mathbf{r}))/\sigma(q_{t,\rm temp}^{(i)}(\mathbf{r}))| \leq \delta$, where $\delta$ is an \emph{ellipse intersection threshold}, then we conclude that $r_{ij_i}$ passes through $(\hat{x}_t^{(i-1)},\hat{y}_t^{(i-1)})$. If the above ellipse intersection condition is satisfied, then the term $-\log \mathbb{P} (\hat{\mathbf{k}}^{(i)}_t)$, which denotes the \emph{blocking likelihood} of $q_{t,\rm temp}^{(i)}(\mathbf{r})$ at $(\hat{x}_t^{(i-1)},\hat{y}_t^{(i-1)})$, needs to be small as well. The following cases are of interest: \begin{itemize} \item[1.] 
If $\hat{\mathbf{k}}_t^{(i)}$ is consistent and $-\log \mathbb{P} (\hat{\mathbf{k}}^{(i)}_t) \leq \mu$, where $\mu (>0)$ is a \emph{blocking threshold}, then we define $q_t^{(i)}(\mathbf{r})=q_{t,\rm temp}^{(i)}(\mathbf{r})$ and compute a refined target location estimate $(\hat{x}_t^{(i)},\hat{y}_t^{(i)})$ from $q_t^{(i)}(\mathbf{r})$. \item[2.] If $\hat{\mathbf{k}}^{(i)}_t$ is inconsistent, then let $\mathcal{K}_t^{(i)}$ denote the set of consistent $i$-length partial blocking vectors that are at most a unit Hamming distance away from $\hat{\mathbf{k}}^{(i)}_t$. Two cases are then of interest: \begin{itemize} \item[(a)] If $\mathcal{K}_t^{(i)}$ is empty, then $\mathbb{P}(\hat{\mathbf{k}}^{(i)}_t)=0$ (from (\ref{eq:hatk}) and (\ref{eq:kerror}), which hold for partial blocking vectors as well) and $-\log \mathbb{P}(\hat{\mathbf{k}}^{(i)}_t) = \infty$. Hence, we conclude that a target is not present at the estimated location. \item[(b)] If $\mathcal{K}_t^{(i)}$ is not empty, then each element of $\mathcal{K}_t^{(i)}$ is a feasible ground truth. In particular, an element in $\mathcal{K}_t^{(i)}$ whose Hamming weight is lower than that of $\hat{\mathbf{k}}_t^{(i)}$ represents a ground truth where exactly one MPC in $q_{t,\rm temp}^{(i)}(\mathbf{r})$ is not a DP. For each such element, a new matching can be derived by removing the corresponding \emph{non-DP} from $q_{t,\rm temp}^{(i)}(\mathbf{r})$ and a new blocking likelihood evaluated. On the other hand, an element of $\mathcal{K}_t^{(i)}$ with a higher Hamming weight compared to $\hat{\mathbf{k}}_t^{(i)}$ represents a ground truth where one DP is absent from $q_{t,\rm temp}^{(i)}(\mathbf{r})$ due to noise. Unlike the previous case, no modification of the matching is possible and the blocking likelihood of $\hat{\mathbf{k}}_t^{(i)}$ is computed according to (\ref{eq:hatk})-(\ref{eq:kerror}).
In this manner, multiple matchings may exist for a single potential target location, each corresponding to a different ground truth. All the matchings whose blocking likelihood satisfies the threshold $\mu$ are retained, since it is premature to determine the most likely ground truth until all TRPs are considered. After the $I$-th TRP has been processed, if multiple matchings still exist for the $t$-th target, then the one that minimizes the objective function in (\ref{eq:Bayesobj_approx2}) is declared the true matching and the corresponding $(\hat{x}_t^{(I)},\hat{y}_t^{(I)})$ is the location estimate for the $t$-th target. \end{itemize} \end{itemize} Otherwise, if no MPC from the $i$-th TRP satisfies the ellipse intersection condition (i.e., $|L_{\rm E}(q_{t,\rm temp}^{(i)}(\mathbf{r}))/\sigma(q_{t,\rm temp}^{(i)}(\mathbf{r}))|$ $>$ $\delta$, for all $r_{ij_i}$), then $q_t^{(i)}(\mathbf{r}) = q_t^{(i-1)}(\mathbf{r})$ and $(\hat{x}_t^{(i)},\hat{y}_t^{(i)})=(\hat{x}_t^{(i-1)},\hat{y}_t^{(i-1)})$. For the resulting $\hat{\mathbf{k}}_t^{(i)}$, inconsistencies are handled as stated above in point 2. If $-\log \mathbb{P}(\hat{\mathbf{k}}_t^{(i)})> \mu$, then we conclude that a target is not present at the estimated location. This motivates an algorithmic approach that is divided into stages, indexed by $i$. In general, let $(z_1,z_2,\cdots,z_I)$, a permutation of $(1, 2, \cdots, I)$, be the order in which TRPs are processed. At the beginning of the $i$-th stage $(3\leq i \leq I)$, each $q_t^{(i-1)}(\mathbf{r})$ has at most $i-1$ entries. During the $i$-th stage, all the DPs among the MPCs of the $z_i$-th TRP are identified to obtain a set of matchings $\{q_t^{(i)}(\mathbf{r})\}$ for each target $t$. A matching is consistent (inconsistent) if the corresponding blocking vector is consistent (inconsistent). By construction, the only inconsistent matchings are due to missing DPs (see case 2(b) in the previous paragraph).
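The search for consistent neighbours of an inconsistent partial blocking vector, which underlies cases 2(a) and 2(b), can be sketched as follows; the $3$-TX, $3$-RX setting is an assumed example:

```python
import itertools
import numpy as np

def consistent_set(n_tx, n_rx):
    """All physically realizable blocking vectors of the form w (x) v."""
    return {tuple(np.kron(w, v))
            for w in itertools.product((0, 1), repeat=n_rx)
            for v in itertools.product((0, 1), repeat=n_tx)}

def unit_hamming_repairs(k_hat, n_tx, n_rx):
    """The set K_t: consistent vectors within unit Hamming distance of k_hat."""
    return sorted(k for k in consistent_set(n_tx, n_rx)
                  if sum(a != b for a, b in zip(k, k_hat)) <= 1)

# One flip away from all-ones: the only feasible ground truth has a higher
# Hamming weight, i.e., a single DP was missed due to noise.
print(unit_hamming_repairs((1, 1, 1, 1, 1, 1, 1, 1, 0), 3, 3))
# -> [(1, 1, 1, 1, 1, 1, 1, 1, 1)]

# No consistent vector within distance 1: P(k_hat) = 0, so no target is declared.
print(unit_hamming_repairs((1, 1, 0, 1, 1, 1, 1, 0, 0), 3, 3))  # -> []
```

The second example shows that an inconsistent vector can be more than one flip away from every consistent vector, in which case the candidate location is pruned.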
A finite value of $\mu$ ensures that an inconsistent $\hat{\mathbf{k}}^{(i)}_t$ is always a unit Hamming distance away from consistency, due to (\ref{eq:kerror}). Since the blocking likelihood $-\log \mathbb{P}(\hat{\mathbf{k}}_t^{(i)})$ is non-decreasing in $i$, a matching and its corresponding target location can be removed from consideration if at any stage its blocking likelihood exceeds the blocking threshold, $\mu$. Let $P(a,b,j_a,j_b)$ denote the points of intersection of the ellipses corresponding to the $j_a$-th MPC of the $a$-th TRP and the $j_b$-th MPC of the $b$-th TRP. For the initial set of matchings (i.e., $i=3$), $P(z_1,z_2,j_{z_1},j_{z_2})$ is computed for all $j_{z_1}, j_{z_2} (1 \leq j_{z_1} \leq N_{z_1}, 1 \leq j_{z_2} \leq N_{z_2})$. There can be at most four points in any $P(z_1,z_2,j_{z_1},j_{z_2})$ and each such point is an ML estimate of the target location for the matching $q_t^{(2)}(\mathbf{r})=\{r_{z_1,j_{z_1}}, r_{z_2,j_{z_2}}\}$. Hence, the target location estimate $(\hat{x}_t^{(2)},\hat{y}_t^{(2)})$ need not be unique. Furthermore, in the $i$-th stage ($i\geq 4$) we also compute $P(z_u,z_i,j_{z_u},j_{z_i})$ for all $j_{z_u},j_{z_i} (\forall u<i)$ to identify previously blocked targets. In summary, any intersection of two ellipses is a potential target location to begin with. At each such location, the likelihood of a target being present is updated depending on the number of other ellipses passing around its vicinity. Unlikely target locations, corresponding to matchings whose likelihood (given by $\mathbf{L}_{\mathbf{B}}(.)$) does not satisfy the thresholds $\delta$ and $\mu$, are eliminated at each stage. The number of targets that remain at the end is the estimate of $T$. Algorithm \ref{algo} lists the pseudocode of the Bayesian MTL algorithm. \subsection{Complexity of Bayesian MTL Algorithm} Let $\hat{T}(i)$ denote the number of targets identified at the end of stage $i$. 
The following relations hold: \begin{align} \label{eq:T_est} \hat{T}(i) &\leq \hat{T}(i-1) + {i-1 \choose 2}N^3, \hspace{2mm} (i=4, \cdots, I) \\ \mbox{and,} \hspace{3mm} \hat{T}(3) &\leq N^3 \end{align} At the end of the $(i-1)$-th stage, each target can have at most $(i-1)$ matchings. Hence, $O(i\hat{T}(i-1)N)$ likelihood computations are carried out in the $i$-th stage due to existing targets. The second term in (\ref{eq:T_est}) is an upper bound on the number of new targets that can be identified in the $i$-th stage and the number of likelihood computations due to these is $O(i^2 N^3)$. At each stage, the number of potential targets increases at most polynomially in $N$ and $I$, by (\ref{eq:T_est}). Hence, the number of likelihood computations is also polynomial in $N$ and $I$. The reduction in complexity occurs because target locations are determined by `grouping' pair-wise ellipse intersections that are close together. Since there are only $O(I^2N^2)$ ellipse intersections to begin with, it is intuitive that the proposed algorithm terminates in polynomial time. \subsection{Limitations of Bayesian MTL Algorithm} The Bayesian MTL algorithm assumes complete knowledge of the distribution of $\mathbf{k}_t$ at all locations $(x_t,y_t)$. This would have to be obtained either from very detailed theoretical models or exhaustive measurements, neither of which might be feasible in practice. A sub-optimal, but more practical, alternative could involve the use of second-order statistics of $\mathbf{k}_t$. In particular, the Mahalanobis distance, defined as $\sqrt{(\hat{\mathbf{k}}_t-\mathbf{m}_t)^T\mathbf{C}_t^{-1}(\hat{\mathbf{k}}_t-\mathbf{m}_t)}$, where $\mathbf{m}_t$ and $\mathbf{C}_t$ respectively denote the mean vector and covariance matrix of $\mathbf{k}_t$, and $(.)^T$ and $(.)^{-1}$ denote the matrix transpose and inverse operations, respectively, can be compared to a threshold $\mu_2$ as the basis for a blocking likelihood decision.
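The second-order alternative is straightforward to implement. In the sketch below, the mean vector, covariance matrix and threshold are illustrative placeholders rather than measured blocking statistics:

```python
import numpy as np

def mahalanobis(k_hat, m, C):
    """sqrt((k_hat - m)^T C^{-1} (k_hat - m)) for an estimated blocking vector."""
    d = np.asarray(k_hat, dtype=float) - np.asarray(m, dtype=float)
    return float(np.sqrt(d @ np.linalg.solve(C, d)))

# Hypothetical second-order blocking statistics at one grid point (I = 3 TRPs).
m = np.array([0.9, 0.9, 0.8])              # mean blocking vector m_t
C = np.array([[0.09, 0.02, 0.00],
              [0.02, 0.09, 0.01],
              [0.00, 0.01, 0.16]])         # covariance matrix C_t
mu2 = 3.0                                  # decision threshold

print(mahalanobis([1, 1, 1], m, C) <= mu2)  # True: full LoS is plausible here
```

Using `np.linalg.solve` avoids forming $\mathbf{C}_t^{-1}$ explicitly, which is numerically preferable when the covariance matrix is nearly singular.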
Even in this simplified case, one still needs the mean blocking vector and the covariance matrix at each point. In practice, these can be measured only at a fixed set of grid points. Hence, the accuracy of the algorithm would depend on the grid resolution of the measured data. \begin{algorithm}[t] \caption{Bayesian MTL algorithm} \begin{algorithmic} \State Obtain the TRP processing order $(z_1,z_2,\cdots,z_I)$ \cite{Shen_and_Molisch_2013_2} \State $t=0$ \Comment {{\color{red} (Initial set of matchings)}} \For {each $j_{z_1}, j_{z_2}$} \For {each ellipse intersection $(x,y)$ corresponding to $r_{z_1,j_{z_1}}$ and $r_{z_2,j_{z_2}}$} \If {$\mathbf{L_B}(\{r_{z_1,j_{z_1}},r_{z_2,j_{z_2}}\}) \leq (\delta,\mu)$} \State $t=t+1$ \State $q_t^{(2)}(\mathbf{r})=\{r_{z_1,j_{z_1}},r_{z_2,j_{z_2}}\}$ \State $(\hat{x}_t^{(2)},\hat{y}_t^{(2)})=(x,y)$ \EndIf \EndFor \EndFor \State $\hat{T}(2)=t$ \Comment {{\color{red} $\hat{T}(i)$ denotes the number of estimates at the end of the $i$-th stage}} \For {$i=3$ to $I$} \For {$t=1$ to $\hat{T}(i-1)$} \Comment {{\color{red} (Updating existing matchings)}} \If {$\exists$ any $r_{z_i,j_{z_i}}$ such that $|L_{\rm E}(q_{t}^{(i-1)}(\mathbf{r})\cup r_{z_i,j_{z_i}})/\sigma(\cdot)|\leq\delta$} \State $q_{t,\rm temp}^{(i)}(\mathbf{r})=q_t^{(i-1)}(\mathbf{r}) \cup r_{z_i,j_{z_i}}$ \Else \State $q_{t,\rm temp}^{(i)}(\mathbf{r})=q_t^{(i-1)}(\mathbf{r})$ \EndIf \State Derive $\mathcal{K}_t^{(i)}$ from $q_{t,\rm temp}^{(i)}(\mathbf{r})$ \For {each $\hat{\mathbf{k}}_t^{(i)} \in \mathcal{K}_t^{(i)}$} \If {$-\log \mathbb{P}(\hat{\mathbf{k}}_t^{(i)}) \leq \mu$} \State Derive $q_t^{(i)}(\mathbf{r})$ from $q_{t,\rm temp}^{(i)}(\mathbf{r})$ according to $\hat{\mathbf{k}}_t^{(i)}$ \EndIf \EndFor \EndFor \State Update $\hat{T}(i)$ and set $t=\hat{T}(i)$ \For {each $j_{z_i}, j_{z_u}$ ($u=1,\cdots,i-1$)} \Comment {{\color{red} (New targets, previously unidentified due to blocking)}} \For {each ellipse intersection $(x,y)$ corresponding to $r_{z_i,j_{z_i}}$ and $r_{z_u,j_{z_u}}$} \If
{$\mathbf{L_B}(\{r_{z_i,j_{z_i}},r_{z_u,j_{z_u}}\}) \leq (\delta,\mu)$} \State $t=t+1$ \State $q_t^{(i)}(\mathbf{r})=\{r_{z_i,j_{z_i}},r_{z_u,j_{z_u}}\}$ \State $(\hat{x}_t^{(i)},\hat{y}_t^{(i)})=(x,y)$ \EndIf \EndFor \EndFor \State $\hat{T}(i)=t$ \EndFor \end{algorithmic} \label{algo} \end{algorithm} \section{Simulation and Experimental Results} \label{sec:simres} In this section, we present our simulation and experimental results for the Bayesian MTL algorithm introduced in Section~\ref{sec:algo}. In Section~\ref{sec:priorart}, the algorithm is validated by reproducing the results described in prior art for independent blocking, which is a special instance of $P3$. The importance of considering correlated blocking and the accuracy of the matchings obtained by the Bayesian MTL algorithm are discussed in Sections~\ref{sec:mainsim} and \ref{sec:genie}, respectively. Finally, experimental results which provide insights into the impact of non-point targets and imperfect background subtraction are presented in Section~\ref{sec:exp}. Unless otherwise mentioned, we use the following settings for our simulation results: $G = [-10{\rm m}, 10{\rm m}] \times [-10{\rm m}, 10{\rm m}]$ is the region of interest. Scatterers are modelled as balls of diameter $L$; obviously, the blocking correlation increases with $L$. The standard deviation of the ranging error, $\sigma$, is assumed to be $0.01{\rm m}$. Two or more MPCs that are within a distance of $2\sigma$ apart are considered to be unresolvable; in that case, the earliest arriving peak is retained and the other peaks are discarded. For a given $\delta$, $\rho_{01}=\rho_{10}=2Q(\delta)$ was assumed, where $Q(x)=\displaystyle\int\limits_x^{\infty} \frac{e^{-u^2/2}}{\sqrt{2\pi}}\, du$. A target is considered to be missed if there is no location estimate lying within a radius of $3\sigma$ from the actual coordinates.
Similarly, a false alarm is declared whenever there is no target within a radius of $3\sigma$ from an estimated target location. For a given network realization, let $\hat{T}_D$ and $\hat{T}_F$ denote the number of detections and false alarms, respectively. Then, the detection and false alarm probabilities, denoted by $P_D$ and $P_F$, respectively, are calculated as follows: \begin{align} P_D &= \mathbb{E}[\hat{T}_D/T] \\ P_F &= \mathbb{E}[\hat{T}_F/(\hat{T}_D+\hat{T}_F)] \end{align} where the expectation is over the ensemble of network realizations. \subsection{Comparison to Prior Art} \label{sec:priorart} In \cite{Shen_and_Molisch_2013_2}, the probability of \emph{any} DP being blocked was assumed to be constant throughout $G$ and independent of other blocking probabilities. Target detection was achieved if there existed a matching of size at least $I-\Phi$, where $\Phi$ denotes the maximum number of \emph{undetected} DPs permitted, regardless of consistency. We now proceed to demonstrate how this criterion is a special case of the Bayesian MTL Algorithm, obtained by assuming independent blocking with constant blocking probabilities (henceforth referred to as the i.c.b assumption). Let $p_{\rm los}$ denote the probability that LoS exists between any two points in $G$. The probability that a DP is blocked is then given by $p_{\rm b}=1-p_{\rm los}^2$. Taking into account both blockage and missed detection by noise, the probability that a DP is undetected (denoted by $p_{\rm dp}$) is given by \begin{align} \label{eq:pdp} p_{\rm dp} = (1-p_{\rm b})\cdot 2Q(\delta)+p_{\rm b} \end{align} The blocking likelihood of a matching with $\Phi$ undetected DPs equals $-\log ((1-p_{\rm dp})^{I-\Phi} p_{\rm dp}^{\Phi})$. If $p_{\rm dp} < 1/2$, then the blocking likelihood monotonically increases with $\Phi$.
Hence, for a given $\Phi$, the corresponding blocking threshold, $\mu(\Phi)$, can be set as follows: \begin{align} \label{eq:mu(phi)} \mu (\Phi)= -\log ((1-p_{\rm dp})^{I-\Phi} p_{\rm dp}^{\Phi}) \end{align} which ensures that the detected targets have matchings of size at least $I-\Phi$. To validate the Bayesian MTL algorithm, we compared it with the prior art proposed in \cite{Shen_and_Molisch_2013_2}, under the i.c.b assumption. The comparison was done on the network shown in Fig. \ref{fig:contrived_network}. To model the i.c.b condition, the values for $L$ and $p_{\rm los}$ were chosen to be $0.001{\rm m}$ and 0.9, respectively. With probability $1-p_{\rm los}$, a scatterer was placed independently and uniformly along each line segment between a node (TX/RX) and a target. \begin{figure} \includegraphics[scale=0.55]{example_network} \caption{A network consisting of 3 TXs at $(-8{\rm m},7{\rm m})$, $(-7{\rm m},8{\rm m})$ and $(7{\rm m},7{\rm m})$, 3 RXs at $(-7{\rm m},7{\rm m})$, $(8{\rm m},7{\rm m})$ and $(7{\rm m},8{\rm m})$ (i.e., $I=9$ TRPs) and 2 targets at $(0{\rm m},0{\rm m})$ and $(0{\rm m},5{\rm m})$. The TX and RX locations are such that the LoS blocking probabilities are independent only if $L$ is very small. For $L=0.001{\rm m}$, the independent blocking assumption holds.} \label{fig:contrived_network} \end{figure} The two algorithms were evaluated over 100 realizations for three values of $\delta$ $(=1, 2 \mbox{ and } 3)$ and $\Phi$ $(=1, 3 \mbox{ and } 6)$. For each value of $\Phi$, the threshold $\mu(\Phi)$ for the Bayesian MTL algorithm was chosen according to (\ref{eq:mu(phi)}). The receiver operating characteristic (ROC) curves, plotting $P_D$ versus $P_F$, for both algorithms are shown in Fig. \ref{fig:priorart}. As expected, they yield identical missed-detection and false alarm rates.
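Equations (\ref{eq:pdp}) and (\ref{eq:mu(phi)}) are easy to evaluate numerically. The sketch below uses the standard identity $Q(x)=\tfrac{1}{2}\,{\rm erfc}(x/\sqrt{2})$ and the i.c.b parameters above ($p_{\rm los}=0.9$, $I=9$):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def p_dp(p_los, delta):
    """Probability that a DP is undetected, eq. (pdp)."""
    p_b = 1 - p_los**2                        # DP blocked on the TX or RX side
    return (1 - p_b) * 2 * Q(delta) + p_b

def mu_phi(phi, I, p_los, delta):
    """Blocking threshold mu(Phi), eq. (mu(phi))."""
    p = p_dp(p_los, delta)
    return -math.log((1 - p)**(I - phi) * p**phi)

print(round(p_dp(0.9, 3.0), 4))                          # 0.1922
print(mu_phi(1, 9, 0.9, 3.0) < mu_phi(6, 9, 0.9, 3.0))   # True
```

Since $p_{\rm dp}<1/2$ in this setting, $\mu(\Phi)$ is increasing in $\Phi$, so a larger $\Phi$ indeed corresponds to a looser blocking threshold.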
Increasing $\delta$ loosens the compactness constraint on the ellipse intersections around a potential target location, while increasing $\mu$ relaxes the constraint on the probability of a blocking vector/matching. Hence, both $P_F$ and $P_D$ are non-decreasing in $\delta$ and $\mu$, as seen in Fig. \ref{fig:priorart}. In the special case where only three ellipse intersections are sufficient to declare the presence of a target $(\Phi=6)$, the false alarm rates are very high. This is in agreement with the results reported in \cite{Aditya_and_Molisch_2015}. \begin{figure} \includegraphics[scale=0.5]{comparison_prior_art} \caption{Receiver operating characteristic (ROC) curves plotting $P_D$ versus $P_F$ under the i.c.b condition. The prior art in \cite{Shen_and_Molisch_2013_2} is a special case of the Bayesian MTL algorithm.} \label{fig:priorart} \end{figure} \subsection{Effect of Correlated Blocking} \label{sec:mainsim} To highlight the effect of correlated blocking, the value of $L$ was increased to $5{\rm m}$ and the scatterer centers were distributed according to a homogeneous Poisson point process (PPP) of intensity $\lambda= 0.0075{\rm m}^{-2}$, which amounts to three scatterers in $G$, on average, per realization. The blocking distribution for the PPP scatterer model is derived in Appendix \ref{app:blockmodel}. A total of 100 network realizations were considered, with $M_{\rm TX}=M_{\rm RX}=3$ and $T=2$, which corresponds to $N=T(2M+1)=2(2\times 3+1)=14$ MPCs per TRP, on average. Let $S_{\rm sc}\subseteq G$ denote the region occupied by scatterers in a given realization. The TX, RX and target locations were uniformly and independently distributed over the region $G\setminus S_{\rm sc}$, where `$\setminus$' denotes the set difference operator. Under the i.c.b assumption for the above settings, $p_{\rm los}=\exp(-\lambda L d_{\rm avg})$, where $d_{\rm avg}=10.1133 {\rm m}$ is the average distance between a target and a node.
Hence, from (\ref{eq:pdp}), $p_{\rm dp}=0.5329 > 1/2$ for $\delta=3$. The distribution of the average number of DPs at a point is tabulated in Table \ref{tab:sumk_pmf} for both the true blocking distribution and the i.c.b assumption. As per the true blocking distribution, a target has LoS to all TXs and RXs (i.e., 9 DPs) over $66\%$ of the time and the probability that a target has only 3 DPs is a little over $1\%$. As a result, a matching of size 3 is more likely to be a false alarm. However, since $p_{\rm dp} > 1/2$, a matching of size 3 is more probable than a matching of size 9 (which occurs with less than $1\%$ probability) under the i.c.b assumption. As a result, false alarms are identified first, followed by detections, as the value of $\mu$ increases. This is reflected in the ROC curves plotted in Fig. \ref{fig:ROC_justify}, where the i.c.b assumption gives rise to very high false alarm rates. \begin{table*} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Blocking Distribution & $< 3$ & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline True & 0.0700 & 0.0150 & 0.0750 & 0 & 0.1750 & 0 & 0 & 0.6650 \\ i.c.b assumption & 0.3367 & 0.1961 & 0.2578 & 0.2259 & 0.1320 & 0.0496 & 0.0109 & 0.0011 \\ \hline \end{tabular} \caption{Distribution of the average number of DPs at a point for $L=5{\rm m}$ and $\lambda=0.0075{\rm m^{-2}}$.} \label{tab:sumk_pmf} \end{table*} \begin{figure} \includegraphics[scale=0.6]{correlation_matters} \caption{Ignoring correlated blocking can result in false alarms being more likely to occur than detections.} \label{fig:ROC_justify} \end{figure} \subsection{Comparison with genie-aided method} \label{sec:genie} In many radar applications, a missed-detection is more costly than a false alarm. As a benchmark, the missed-detection probability of the Bayesian MTL algorithm is compared with that of a genie-aided method, which involves running the Bayesian MTL Algorithm on the true target matchings, in Fig. \ref{fig:genie_dia5}. 
It can be seen that the proposed algorithm performs as well as the genie-aided method. \begin{figure} \includegraphics[scale=0.6]{genie_5_all_mu} \caption{Comparison with genie-aided method.} \label{fig:genie_dia5} \end{figure} \subsection{Experimental Results} \label{sec:exp} We now present some experimental results that further validate the performance of the Bayesian MTL algorithm. We chose a portion of UltraLab at USC, a cluttered indoor environment, for our measurements. The floor was paved with square tiles of side $0.61\rm m$ which provided a natural Cartesian coordinate system, as shown in Fig.~\ref{fig:pic_env2}. The measurement setup is shown in Fig.~\ref{fig:setup}. For the $i$-th TRP, the frequency response of the ultrawideband (UWB) channel over 6--8~{\rm GHz} was measured twice at 1601 frequency points: once without the targets (i.e., the background measurement, denoted by $H^{\rm back}_i(f)$) and once with the targets present (denoted by $H^{\rm tar}_i(f)$), using a pair of horn antennas with beamwidth $60^\circ$, connected to a vector network analyzer (VNA). This corresponds to $\sigma=0.15{\rm m}$. Horn antennas were preferred over omnidirectional antennas to restrict the background clutter to a narrow sector. The antennas were maintained at the same height from the ground in order to create a 2D localization scenario, and were oriented to face the targets. Two identical, foil-wrapped cylindrical poles were chosen as the targets. Although the height of the cylinders exceeded that of the TX and RX antennas, the portion of the cylinder that was in the plane of the antennas was wrapped in foil to maintain the 2D nature of the problem. Let $h_i(t)$ denote the channel impulse response for the $i$-th TRP due to the targets alone (i.e., after background subtraction).
Then, $h_i(t)$ is given by the following expression: \begin{align} h_i(t)={\rm IFFT}(H^{\rm tar}_i(f) - H_i^{\rm back}(f)). \end{align} The noise floor corresponding to the $i$-th TRP was determined by computing the average power in the last 100 samples of $h_i(t)$. These delay bins correspond to a signal run length in excess of $200\rm m$, well beyond the ranges encountered in our measurement scenario (less than $10\rm m$). Hence, it is reasonable to assume that the energy in these delay bins is due to thermal noise alone. After determining the noise power, MPCs were extracted from $h_i(t)$ whenever the SNR was greater than $10{\rm dB}$. A distributed virtual MIMO radar was implemented by moving the TX and RX antennas to different locations, as shown in Fig.~\ref{fig:lay_env2}. Six TRPs were considered, which are indexed in Table~\ref{tab:lookup_scenario2}. LoS was present between all TXs, RXs and targets (i.e., $\mathbb{P}(\mathbf{k}_t)=\mathbf{1}, ~t\in\{1,2\}$). \begin{figure} \centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[scale=0.45]{pic_env2} \caption{The cluttered indoor measurement environment.} \label{fig:pic_env2} \end{subfigure} \\ \vspace{5mm} \begin{subfigure}{0.5\textwidth} \includegraphics[scale=0.3]{vna_prototype} \caption{Measurement setup using a VNA.} \label{fig:setup} \end{subfigure} \caption{The experimental setup.} \end{figure} \begin{figure} \includegraphics[scale=0.6]{env_2} \caption{Layout of TXs, RXs and targets in the cluttered environment of Fig.~\ref{fig:pic_env2}.} \label{fig:lay_env2} \end{figure} \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline TRP & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline TX & 1 & 2 & 3 & 3 & 5 & 4 \\ \hline RX & 1 & 2 & 2 & 3 & 3 & 3 \\ \hline \end{tabular} \caption{Look-up table mapping the TRP index with the corresponding TX and RX IDs, corresponding to Figs.~\ref{fig:pic_env2} and \ref{fig:lay_env2}.} \label{tab:lookup_scenario2} \end{table} The estimated target
locations are plotted in Fig.~\ref{fig:res_env2}, from which the following inferences can be drawn: \begin{figure} \includegraphics[scale=0.6]{localization_results_v2} \caption{Position estimates for the targets obtained from the Bayesian MTL algorithm in Measurement Scenario 2.} \label{fig:res_env2} \end{figure} \begin{itemize} \item[(i)] Setting $\mu=1$ ensures that only those points at which all six ellipses (corresponding to the six TRPs) intersect are detected as target locations. It can be seen that both targets are localized. \item[(ii)] The Bayesian MTL algorithm was formulated under the assumption of point targets. Since the targets are not point objects, multiple DPs are possible, in general, for each target. As a result, we obtain a \emph{cluster} of location estimates for each target location. \item[(iii)] The DPs to the targets are at least 10 dB above the noise floor post background cancellation, even in a cluttered environment, which can be attributed to the targets being strong reflectors and having a sufficiently large radar cross-section, due to the foil wrapping. \item[(iv)] The steel pillar to the left of Target 1, which is a part of the clutter, was still `localized' in spite of background subtraction. In terms of range, Target 1 and the pillar are closely spaced for all the TRPs. Hence, some of the energy from the DP to Target 1 spills over into the delay bin corresponding to the pillar location. As a result, the pillar cannot be perfectly canceled out during background subtraction. The residual energy manifests itself as a DP to the pillar, leading to its localization. An implication of this is that the background in the immediate vicinity of a target cannot be subtracted completely. \end{itemize} \section{Summary and Conclusions} \label{sec:summary} In this paper, we considered the impact of environment-induced correlated blocking on localization performance.
We first provided a theoretical framework for MTL using a distributed MIMO radar by formulating the general problem of localizing all the targets and scatterers in an unknown environment as a Bayesian estimation problem. We then proceeded to derive a more tractable approximation, known as the Bayesian MTL problem, where the objective was to localize only the targets, but not the scatterers. To solve this problem, we proposed a polynomial-time approximation algorithm, at the heart of which was a blocking-aware vector likelihood function that took correlated blocking into account. The algorithm relies on two thresholds, $\delta$ and $\mu$, to detect targets and works with either theoretical or empirical blocking statistics that may be obtained via measurements or simulations. Our simulations showed that ignoring correlated blocking can lead to very poor detection performance, with false alarms being more likely to occur than detections, and our experiments yielded encouraging results, even in the presence of non-idealities such as improper background subtraction and non-point targets. \section{Acknowledgments} The authors would like to thank C. Umit Bas, O. Sangodoyin and R. Wang for their assistance in carrying out the measurements. \appendices \section{Blocking model} Let the scatterers be represented by balls of diameter $L$, whose centers are distributed according to a homogeneous PPP with intensity $\lambda$. For LoS to exist between two points separated by a distance $d$, no scatterer center should lie within a rectangle of sides $L$ and $d$ (Fig. \ref{fig:PPPbasics}). Therefore, the LoS probability is $\exp(-\lambda L d)$. \begin{figure} \centering \includegraphics[scale=0.45]{PPPbasics} \caption{LoS is obstructed if there exists at least one scatterer center within a distance of $L/2$ from the LoS path.} \label{fig:PPPbasics} \end{figure} Consider a consistent blocking vector $\mathbf{k}_t=\mathbf{w}_t \otimes \mathbf{v}_t$ at $(x_t,y_t)$.
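The void-probability expression $\exp(-\lambda L d)$ can be sanity-checked numerically. The sketch below is a minimal Monte-Carlo simulation of the scatterer PPP, not part of the algorithm itself; the parameter values mirror those used in the simulation section ($L=5{\rm m}$, $\lambda=0.0075{\rm m^{-2}}$), while the window size, trial count and seed are arbitrary choices.

```python
import math
import random

def los_probability(lam, L, d):
    # Void probability of the L x d blocking rectangle under a PPP of intensity lam
    return math.exp(-lam * L * d)

def sample_poisson(rng, mean):
    # Knuth's multiplication method; adequate for the small means used here
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def mc_los_probability(lam, L, d, trials=50_000, seed=7):
    # Drop a Poisson number of scatterer centers in a window containing the
    # blocking rectangle and count the runs in which the rectangle is empty.
    rng = random.Random(seed)
    W, H = d + 2 * L, 4 * L        # simulation window around the TX-RX link
    clear = 0
    for _ in range(trials):
        n = sample_poisson(rng, lam * W * H)
        blocked = False
        for _ in range(n):
            # Link placed along the x-axis: rectangle is 0 <= x <= d, |y| <= L/2
            x = rng.uniform(-L, d + L)
            y = rng.uniform(-2 * L, 2 * L)
            if 0.0 <= x <= d and abs(y) <= L / 2:
                blocked = True
                break
        clear += not blocked
    return clear / trials
```

For $d=10{\rm m}$ the closed form gives $\exp(-0.375)\approx 0.687$, and the simulated estimate agrees to within Monte-Carlo error.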
The set of nodes that are blocked/unblocked at $(x_t,y_t)$ is determined by $\mathbf{v}_t$ and $\mathbf{w}_t$. For each unblocked node, there exists a rectangle which cannot contain any scatterer center. The \emph{LoS polygon}, $S_{\rm los}$, is the union of such rectangles (shaded grey in Fig. \ref{fig:pmf}). In contrast, for each blocked node $n$, there exists an NLoS polygon $S_n$ - the portion of its rectangle not contained in $S_{\rm los}$ - which must contain at least one scatterer center. Let $N_{\rm bl}$ denote the number of blocked nodes. Then, \begin{align} \label{eq:prob_lbound} \mathbb{P}(\mathbf{k}_t) &\geq \exp(-\lambda {\rm Ar}(S_{\rm los})) \prod_{n=1}^{N_{\rm bl}}(1-\exp(-\lambda {\rm Ar}(S_n))) \end{align} where ${\rm Ar}(\cdot)$ denotes the area operator, acting on sets in $\mathbb{R}^2$. The expression in (\ref{eq:prob_lbound}) is a lower bound since it ignores overlapping NLoS polygons which may share scatterer centers (e.g., TX 2 and TX 3 in Fig. \ref{fig:pmf}). The bound is met with equality when none of the NLoS polygons overlap. \label{app:blockmodel} \begin{figure} \centering \includegraphics[scale=0.28]{blockingpmf} \caption{$\mathbf{k}_t=[0, 0, 0, 1, 0, 0, 1, 0, 0]=[0, 1, 1] \otimes [1, 0, 0]$} \label{fig:pmf} \end{figure} \bibliographystyle{IEEEtran}
\section{Introduction} Artificial General Intelligence (AGI) is the bold endeavor of reaching beyond the narrow focus of much of the current artificial intelligence research community, aiming to build agents that encompass the whole breadth of human intellectual faculties and more. In contrast to narrow AI, the focus of AGI research lies on the breadth of the range of environments in which an agent behaves intelligently, rather than the performance in a particular environment. Ultimately, that range should cover all environments that humans can act in, and more. While artificial intelligence originally had this broad vision (\cite{Schmidhuber:06ai75}), many practitioners have taken to specializing on particular, more manageable subproblems. While this trend has greatly advanced the field and been highly successful for many practical applications, many find it philosophically unsatisfactory. Not surprisingly therefore, the pendulum has started to swing back, and there is a resurgent interest in the big questions on how artificial \emph{general} intelligence can come about. The recent development of a solid theoretical framework for AGI by \cite{hutter05universal} has played a major role in this rekindling. Alongside this development there has been an increased effort toward designing an objective and practical benchmark for measuring general intelligence, because it would allow for a better comparability between the very diverse approaches to AGI and homogenize the field. While the debate is slowly converging around the recently proposed formal measure of \emph{universal intelligence} by \cite{Legg2007}, no truly general yet practical benchmark has been established. A general benchmark will necessarily need to evaluate an agent on a large set of environments, and in order to form a single composite score, each environment must have an associated weight. In the best case, the set of environments includes \emph{all} well-defined, practically evaluable environments. 
In practice, however, we will have to restrict this set, thereby introducing a bias and making the benchmark less general. Concretely, this paper proposes a benchmark limited to the subset of \emph{game} environments. We will argue for this particular choice, showing that it preserves a high level of generality, while at the same time being practically useful. We will start by introducing measures of general intelligence, how they can be altered to include resource constraints and how they implicitly determine a weighting of the set of environments (section~\ref{intelligence}). We then discuss the suitability of games as a class of environments (section~\ref{gamebench}), before connecting the dots and defining a benchmark for measuring general game intelligence (section~\ref{gameintel}). Finally, we tackle the issue of game description languages, and how existing ones could be used (section~\ref{gamelang}). \section{Defining Intelligence} \label{intelligence} Intelligence is one of those interesting concepts that everyone has an opinion about, but few people are able to give a definition for -- and when they do, their definitions tend to disagree with each other. And curiously, the consensus opinions change over time: consider for example a number of indicators for human intelligence like arithmetic skills, memory capacity, chess playing, theorem proving -- all of which were commonly employed in the past, but since machines now outperform humans on those tasks, they have fallen into disuse. We refer the interested reader to a comprehensive treatment of the subject matter in~\cite{Legg2008}. The current artificial intelligence literature features a panoply of benchmarks, many of which, unfortunately, are very narrow, applicable only on a small class of tasks. This is not to say that they cannot be useful for advancing the field, but in retrospect it often becomes clear how little an advance on a narrow task contributed to the general field. 
For example, researchers used to argue that serious progress on a game as complex as chess would necessarily generate many insights, and the techniques employed in the solution would be useful for real-world problems -- well, no. All this highlights the need for a very general definition that goes beyond an aggregation of a handful of tasks. We now introduce the most general definition to date (section~\ref{univintel}), which unfortunately is only of theoretical use, as the quantities it relies on are not computable. In section~\ref{time} then, we delineate a more practical version that takes computation time into account. \subsection{Universal Intelligence} \label{univintel} Building upon Solomonoff's theory of universal induction~\cite{Solomonoff:64}, and extending it to handle agents that act in their environment (in contrast to just passively pondering upon observations), \cite{hutter05universal} recently developed a formal framework for universal artificial intelligence. Within the very general reinforcement learning setting, he formally describes an optimal agent (called AIXI) that maximizes the expected reward for all possible unknown environments -- the only caveat being its incomputability. As a dual of this framework, \cite{Legg2007} define a formal measure of universal intelligence, for which AIXI, per definition, achieves the highest score. Interestingly, their resulting definition coincides well with many informal, mainstream ones. Here we summarize their results, as they will form the theoretical basis for much of this paper. 
\begin{verse} \emph{Intelligence measures an agent's ability to achieve goals in a wide range of environments.} \end{verse} Formally, the intelligence measure $\Upsilon$ of an agent $\pi$ in a class of (computable) environments $E$ is defined as \[ \Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)}V_{\mu}(\pi) \] where $V_{\mu}({\pi})$ is the expected total reward of $\pi$ when acting in a particular environment $\mu$, and $K(\mu)$ is a complexity measure of the environment (satisfying the technical condition $\sum_{\mu \in E} 2^{-K(\mu)} < \infty$). We call $\Upsilon(\pi)$ the \emph{universal intelligence} of $\pi$ when $E$ is the set of all computable environments, and $K(\mu)$ is the Kolmogorov complexity of $\mu$, i.e. the length of the shortest program that fully describes $\mu$. Very informally, this equation can be read as ``the universal intelligence of an agent is the sum of how well it performs in all computable environments, logarithmically weighted by their complexity so that simpler environments count more''. For a more intuitive understanding, it may be useful to expand upon a number of aspects of this definition: \begin{itemize} \item {\bf Environments}: We take the term environment to encompass not only the dynamics that define what happens for each possible action (which might in turn include the reaction of an adversary), but also the rules according to which the agent is rewarded. \item {\bf Goals}: According to the most well-received theory of the development of life on earth (Darwin's), all living beings can be said to have a goal (maximization of evolutionary fitness), but as this is not necessarily true, i.e. not true in all possible worlds, it is debatable whether goals are an existential requirement for intelligence. While we do not claim that intelligence in the broad sense cannot in part be purely `contemplative', we cannot conceive of how the intelligence of an agent that is devoid of any goal-driven behavior could be measured. 
For practicality therefore, we assume that each environment produces a numeric `reward' value, and maximizing it is the goal of an intelligent agent. \item {\bf Acting in the environment}: In order for intelligence to be measurable, an agent must act, as only actions can be evaluated objectively, not its internal processes (like awareness). \item {\bf Space of environments}: The presented definition can be seen as an intelligence measure, also if the set $E$ is more restrictive, e.g.~limited to a particular domain of interest, tasks where humans excel, say. This is related to the notion of pragmatic general intelligence by \cite{Goertzel2010}. \item {\bf Weighting by complexity}: We need some way to assign relative importance to different environments, which is done through a complexity measure $K$, traditionally measured in bits\footnote{Note that there is no way to avoid non-uniform weighting: There exists no uniform probability distribution on the integers.}. All environments are computable, therefore can be concisely represented as a shortest piece of code -- the length of this code is the environment complexity. \end{itemize} Legg and Hutter propose their definition as a basis for any test of artificial general intelligence. Among the advantages they list are its wide range of applicability (from random to super-human), its objectivity, its universality, and the fact that it is formally defined. Unfortunately however, it suffers from two major limitations: \begin{inparaenum}[\itshape a\upshape)] \item {\em Incomputability}: Universal intelligence is incomputable, because the Kolmogorov complexity is incomputable for any environment (due to the halting problem). \item {\em Unlimited resources}: The authors deliberately do not include any consideration of time or space resources in their definition. 
This means that two agents that act identically in theory will be assigned the exact same intelligence $\Upsilon$, even if one of them requires infinitely more computational resources to choose its action (i.e. would never get to do any action in practice) than the other. \end{inparaenum} There have been a number of attempts to overcome the limitations of AIXI. \begin{enumerate} \item We may replace Solomonoff's incomputable prior by the {\em Speed Prior} (\cite{Schmidhuber:02colt}), which assigns high probability to quickly computable environments (instead of those with the shortest descriptions / lowest Kolmogorov complexity favored by Solomonoff's prior). This yields a computable agent AIS which can predict expected reward with arbitrary accuracy in finite time. \item We may use a G\"{o}del Machine with a fixed limit of computational resources per action (\cite{Schmidhuber:03gm,Schmidhuber:09gm}). \item We may use a Monte-Carlo approximation of AIXI (\cite{Veness2009}). This already yielded promising practical results on an ad-hoc portfolio of simple maze-tasks and games (including Tic-Tac-Toe and Pac-Man): the same AIXI-approximating agent learned to act reasonably well in all of them. \end{enumerate} While it is clearly a useful direction to derive practical, scaled-down variants of uncomputable, universally optimal agents, here we are concerned with the dual case: Rendering the \emph{definition} practical. One way of doing this would be to rephrase the definitions of Legg and Hutter in the context of the Speed Prior (\cite{Schmidhuber:02colt}) instead of Solomonoff's prior. In the next section, however, we will follow an even more pragmatic approach: we will greatly limit the class of environments further such that each member of the class is not only quickly computable, but also of obvious interest to a wide community of people, namely, gamers. 
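To make the weighted-sum structure of $\Upsilon$ concrete, here is a toy numerical sketch. The two stand-in environments and their complexity values $K$ are invented purely for illustration; they are fixed integers, not Kolmogorov complexities, and the agents are trivial policies rather than anything resembling AIXI.

```python
# Toy illustration of the weighted-sum structure of the intelligence
# measure: Upsilon(pi) = sum over mu of 2^(-K(mu)) * V_mu(pi).

def upsilon(agent, environments):
    # environments: list of (K_bits, evaluate) pairs, where evaluate(agent)
    # returns the expected total reward V_mu(agent), normalized to [0, 1].
    return sum(2.0 ** -k * evaluate(agent) for k, evaluate in environments)

# Two stand-in environments: a trivial one (low K, so it counts more)
# and a slightly harder one (higher K, so it counts less).
def env_always_reward(agent):
    return 1.0                      # any behavior is rewarded

def env_guess_parity(agent):
    return 1.0 if agent("parity?") == 0 else 0.0  # reward only answer 0

environments = [(2, env_always_reward), (5, env_guess_parity)]

good_agent = lambda obs: 0          # answers the parity question correctly
bad_agent = lambda obs: 1           # answers it incorrectly

# good_agent scores 2**-2 + 2**-5 = 0.28125; bad_agent scores only 0.25.
```

The key point the sketch illustrates is the logarithmic weighting: succeeding in the simple environment contributes eight times as much to the score as succeeding in the (three bits) more complex one.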
\subsection{Intelligence and Limited Time} \label{time} Clearly, any practically useful measure of general intelligence (i.e.~that yields a good result in finite time) needs to take computation time of the agent and environment into account. Also, we posit that all agents are computable, and all environments are episodic (i.e.~finish after a finite number of interactions). The episodic assumption should be rather uncontroversial, as all known organisms have finite life spans, and it is unclear what it would mean to behave well in a task that might never finish. The computability assumption is necessary for anything running on standard computers. Although other resources, like memory, may be of practical importance as well, in the following we limit ourselves to only time. (Most of the arguments remain applicable to other limited resources.) The two aspects of computation time to consider are: \begin{enumerate} \item The time the \emph{environment} requires to generate the next state/observation, after each action. We propose to incorporate the expected time into the environment's complexity measure. A standard formulation for this would be the equivalent of Levin complexity: \[ K(\mu) = l(\mu) + \log(\tau(\mu)), \] where $l(\mu)$ is the (not necessarily shortest) length of the description of the environment $\mu$ and $\tau(\mu)$ is the expected computation time of $\mu$ (for one episode)\footnote{Usually, and for asynchronous environments in particular, $\tau(\mu)$ is an unproblematic value to define, it could even be set to the total time budget $T$, see below. In some cases however, the computation time of the environment may depend on the agent's actions (e.g. updating a physical simulation when an agent jumps into a liquid should take longer to compute than if the agent stands still). We sidestep this issue by taking $\tau(\mu)$ with respect to a randomly acting agent. 
This may be reasonable, however, because as the inherent complexity of the environment increases, the actions of the agent arguably account for a relatively smaller variation in $\tau(\mu)$ (which in turn contributes relatively less to $K(\mu)$), thus becoming irrelevant in the limit. }. This corresponds to a trade-off that weights length exponentially more heavily than time. Many other trade-offs are possible (see e.g.~\cite{sun2010frontier} for an extensive discussion). \item The time the \emph{agent} requires to decide upon an action. We propose to incorporate this into the definition of intelligence itself. \end{enumerate} There are two ways of integrating resource limitations into a general intelligence measure: Either the agent has an unlimited budget of time, but its performance is somehow weighted by the resources it consumes (see e.g.~\cite{Goertzel2010} for a simple variant of this), or the agent has a limited budget, and the intelligence measure is based on the (cumulative or average) reward it can achieve within that limit (e.g.~\cite{Hernandez-Orallo2010,Schmidhuber:09gm}). The first of these is seriously flawed, as it can be exploited by `hyperactive' agents that act extremely fast and randomly, with the effect of multiplying the low reward by a large number. In addition, resource consumption can be handled through the reward function: if completing an episode faster is valued more, for example, this can be reflected in the rewards directly. \subsection{Two-phase Evaluation} Given an environment $\mu$ and a total time budget $T$\footnote{As far as possible, the time units should be invariant with respect to hardware or implementation details, e.g. it could be the number of CPU instructions executed.} for the agent, we propose to set up the measurement as follows. The time is split into a \emph{learning phase} and an \emph{evaluation phase}, and the agent itself chooses when to (irreversibly) switch to the latter.
Rewards gained during the learning phase are disregarded, and the environment is reset at that point (otherwise an agent could exploit this by switching just before a predicted success and stopping afterward). The final reward $V_{\mu, T}(\pi)$ is the average reward of the completed episodes during the evaluation phase (we need to take the average instead of the sum, here again, to handle `hyperactive' agents), the accumulated reward so far if no episode was completed, or zero if the agent never switched. The motivation for having two phases is the issue of \emph{learning}. Given that a task is potentially attempted more than once, it is debatable whether the average performance is what defines intelligence, or instead the capability to improve between early trials and later ones. Applying the latter naively is prone to exploitation: the agent judged most intelligent would be the one that can best hide its true skill in the first episodes (deliberately acting as badly as possible) while acting normally in the end. Requiring the agent to switch itself removes this moral hazard. On the other hand, using average performance over all attempts is a relatively harsh setting for the exploration-exploitation trade-off, which may force the agent to act overly carefully, not learning enough about the environment, and therefore leading to an underestimation of its intelligence, a problem alleviated by the learning phase. Importantly, this setup allows us to compare the intelligence of very different types of agents, all on the same scale: a reasoning-based agent (that does not learn) can use all available resources for that, skipping the learning phase, while an evolution-based agent could employ most of those resources to evolve a good policy during many quick episodes and have itself be evaluated on the best one encountered. In between those extremes, a good reinforcement learning agent could handle the exploration-exploitation trade-off explicitly, including the switching action.
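A minimal harness for this two-phase protocol could look as follows. The agent and episode interfaces are invented for illustration, and the fallback of returning the accumulated partial reward when no evaluation episode completes is omitted for brevity (the sketch returns zero in that case).

```python
def evaluate_two_phase(agent, run_episode, time_budget):
    """Two-phase measurement sketch. `run_episode(agent)` plays one episode
    and returns (reward, time_spent); `agent.wants_to_switch()` may fire at
    most once, irreversibly. Learning-phase rewards are discarded; the score
    is the average reward over evaluation-phase episodes completed within
    the budget, or zero if the agent never switches."""
    t, switched = 0.0, False
    eval_rewards = []
    while t < time_budget:
        if not switched and agent.wants_to_switch():
            switched = True                    # environment reset happens here
        reward, dt = run_episode(agent)
        t += dt
        if switched and t <= time_budget:      # count only completed episodes
            eval_rewards.append(reward)
    if not switched or not eval_rewards:
        return 0.0
    return sum(eval_rewards) / len(eval_rewards)   # average, not sum

# Stand-in agents: one switches to evaluation immediately, one never does.
class SwitchNow:
    def wants_to_switch(self):
        return True

class NeverSwitch:
    def wants_to_switch(self):
        return False

def run_episode(agent):
    return 1.0, 1.0    # dummy episode: unit reward, unit duration
```

Note how the `not switched and ...` guard makes the switch irreversible, and how averaging over completed episodes removes the incentive to play as many hasty episodes as possible.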
Interactions could be synchronous (i.e.~the environment is paused until the agent chooses an action), but should preferably be asynchronous (`pass' actions until the agent has made a decision), as this entails a more natural and environment-specific penalization of slowly acting agents. \subsection{Anytime-measure} In the best case, a practical intelligence measure should be an `anytime'-measure, i.e.~one that can be stopped at any time and gives more accurate results the longer it runs. A simple but effective way to achieve this is to use a Monte-Carlo estimate of $\Upsilon$, sampling more and more environments (that form a set $\hat{E}$), where the probability of $\mu$ being sampled is proportional to $2^{-K(\mu)}$. Like most Monte-Carlo-based methods, this is intrinsically easy to parallelize. \[ \widehat{\Upsilon}(\pi) := \frac{1}{|\hat{E}|} \sum_{\mu \in \hat{E} \subset E} V_{\mu, T}(\pi) \] Note that this assumes that the time limit $T$ is specified within the environment description itself, and thereby incorporated in the task complexity (larger total budgets corresponding to more complex environments). If $T$ is not part of the complexity, the following iterative scheme can be shown to be an equivalent alternative. Start with $T=T_0$, $i=0$. In each iteration $i$, evaluate $2^i$ environments, half of which for $T=2^{0}T_0$, a quarter for $T=2^{1}T_0$, and so forth, and one for $T=2^{i}T_0$. To summarize, we adapted the definition of universal intelligence in three places to make it practically useful: \begin{enumerate} \item We replaced Kolmogorov complexity with a computable complexity measure that penalizes heavy computational requirements for the environments, \item we incorporated resource usage into the performance measure of an agent to favor efficient ones while avoiding some well-known pitfalls, \item we formalized a Monte-Carlo approximation that can handle the infinite set of environments, while at the same time forming an anytime-measure.
\end{enumerate} The next sections will add the final missing piece, namely a suitably biased domain of environments, which are at once meaningful, easy to evaluate and easy to sample from: Games. \section{Games as Testbeds} \label{gamebench} Games and artificial intelligence have close and long-standing ties to each other. Turing himself, who proposed the first test for machine intelligence, also invented the MiniMax algorithm for perfect information two-player games, and considered chess an important domain for future computer science research (\cite{turing50computing}). What is arguably the world's first reinforcement learning algorithm (a precursor to modern temporal difference learning) was invented by Samuel in the context of building an automatic checkers playing program, thus kickstarting a major strand of modern AI research. More recently, several high-profile AI researchers have proposed games as good benchmarks for AI (\cite{laird01human,Masum2003}). At least part of the argument is that the technical development of modern computer games has now definitely overtaken custom-built benchmarks and robot simulators, and the commercial game industry provides a huge amount of high-quality, well-tuned problems and environments for AI research as a side effect of the commercial pressure for better games. \subsection{What are games?} There is not one definition of what a game is, but plenty. In fact, Wittgenstein used the concept of a game in a number of thought-experiments designed to show that it was impossible to correctly define any concept in terms of sufficient and necessary conditions; instead, concepts are implicitly defined by those things that they refer to, and which are related to each other through family likeness. Learning to use a concept is learning to play the language-game that the concept forms part of, yet another example of a game (\cite{wittgenstein53philosophical}). 
The impossibility of defining a naturally occurring concept does not mean you should not try; a working definition of games would be great, even if we acknowledge that it's not all-encompassing. Game designer Sid Meier defines a game as ``a series of meaningful choices''. Others, such as \cite{salen04rules}, emphasize conflicts as central to games: A game is ``a system in which players engage in an artificial conflict, defined by rules, that results in a quantifiable outcome''. \cite{juul05halfreal} provides a more formal definition: ``A game is a rule-based formal system with a variable and quantifiable outcome, where different outcomes are assigned different values, the player exerts effort in order to influence the outcome, the player feels attached to the outcome, and the consequences of the activity are optional and negotiable''. The above definitions are not without their critics. For example, game designer Raph Koster remarks that none of them contain the word ``fun'' (\cite{koster05a}). As fun seems to be so central to games, he then devotes a whole book to understanding what makes games fun. For our purposes, a more relevant criticism of the above definitions is that they refer to a ``player'' who can ``exert effort'' and ``engage'' in ``meaningful choices''. Obviously, including such a player in a definition of a game which is used to test artificial intelligence would beg the question of artificial intelligence as a whole. We therefore choose to adapt Juul's definition to fit our purposes: \begin{verse} \emph{A game is a rule-based formal system with a variable and quantifiable outcome, where different outcomes are assigned different values, and an agent can affect the outcome through taking actions during the game based on full or partial knowledge of the game state.} \end{verse} \subsection{Types of Games} There are countless types of games, and more different taxonomies of games than we can do justice to here. 
Even if we restrict ourselves to games that can be played on a computer (which excludes football, but not football simulations) there are numerous genres, which are sometimes so different that it's remarkable that the same word is used for all of them: card games, board games, mathematical games, first-person shooter games, real-time strategy games, flight simulator games, quiz games, role-playing games, puzzle games, rhythm games, virtual world games and so on. Instead of attempting another taxonomy, we can draw up a few dimensions along which games can vary, together with short comments on how the requirements on the agent vary with these dimensions: \begin{itemize} \item {\bf Observability}: Perfect information games like chess allow the agent to see the whole game state at any time, whereas a game with very low observability like Battleships starts with the agent in complete darkness and drip-feeds information about the game state. A perfect information game can be played by a reactive agent, whereas a game with low observability requires the agent to make hypotheses about the game state and remember past information. \item {\bf Input dimensionality and representation}: The agent's knowledge of the state space in Poker can be represented as a short sequence of integers representing the cards at hand, whereas the high-resolution 3-dimensional image which constitutes an observation in Halo requires several megabytes of data to represent accurately. The latter representation requires the agent to perform sophisticated processing of visual input, something which some researchers consider an integral part of intelligence (e.g. many researchers in the adaptive behaviour community claim that it's meaningless to study intelligence except when grounded in a complete sensorimotor loop) and other researchers consider a peripheral distraction.
\item {\bf Single-, duo- or multiplayer}: Single-player games are those where you only play against the game itself, but many games have two or more players; some have many millions of players (e.g. Farmville or World of Warcraft). The extent to which the players compete directly against each other varies from those games where only one player can win, to those where players mostly collaborate. The presence of other players in the game will usually require the agent to model and predict the other players' actions in order to do well in the game. \end{itemize} \subsection{Arguments for Games as AI Testbeds} \label{gameargs} So why would games be better as testbeds for AI than other problems, say, theorem proving, image recognition, robot navigation or natural language interaction? A large number of arguments have been put forward, some of which only apply to some types of games and some aspects of intelligence, while others are more general. \begin{itemize} \item {\it Simplicity}: Many games can be concisely described using unambiguous mathematical notation, and have surprising depth despite their apparently bare-bone dynamics. A good example of this is the ancient board game Go, which offers enormous depth despite the rules being so simple they can be written in a few lines. \item {\it Natural reward function}: Games are typically constructed so that they have a very natural reward function: the score. Score-based reward functions are often smooth and fine-grained, as many different actions and events are scored. This means that the player's proficiency can be accurately quantitatively measured for both high-skilled and low-skilled players. \item {\it Scalability}: Most games are made to be played by humans, and to be learned by humans while playing. This means that they typically possess a long, smooth, human-friendly learning curve, ideal for reinforcement learning.
Agents of very low skill can make some progress (better than agents of no skill) while agents of higher skill can reach much higher performance. \item {\it Proven performance and remaining challenge}: Games are a domain where, on some instances (chess, checkers, backgammon, etc.), computers have already reached or surpassed human-level performance, suggesting that AI approaches can learn to beat humans at other games. At the same time, there remain considerable challenges, as computer programs are nowhere near competitive with the best players of e.g. Go or StarCraft. \item {\it Understandability}: Playing games is an activity humans understand, making it easy to see what the agent is doing and how well it is doing it. We can also judge the performance of an agent by playing directly against it. \item {\it Fun}: People like games: playing them, watching them and talking about them. This goes for researchers, students and ordinary people. Therefore, it's often easier to find students for game-related research than for other topics. \item {\it Public awareness}: Most people know what games are about, unlike more abstract tasks (like theorem-proving), which makes it easier to explain what a research breakthrough is about. \item {\it Industrial applicability}: There's money in games, as the game industry is already the largest of the entertainment industries, and keeps growing. Games and virtual worlds are more and more used for training, planning and education purposes. Therefore, it can be easier to get funding for game-related research than for research on AI applications in other domains. \item {\it Availability and cost}: Game developers and hobbyists have made implementations of all types of games widely available, either for free or for a very modest sum. Some games come with extensive interfaces for plugging in AI in various roles and for modifying the games in various ways (e.g. Civilization IV and Unreal Tournament 2004). For some older games (e.g.
Quake) and many hobbyist-developed games the source code is freely available. \item {\it Speed}: Most games are relatively quick to play even when played in ``real-time''. Many video games can be sped up to be played thousands of times faster than real-time on current hardware; e.g. the current Mario AI Competition uses a version of Super Mario Bros where a couple of hundred games can be played per second. Board games and mathematical games have no ``real-time'', and the simplicity of evaluating the game mechanics means that millions of games can be played per second and virtually the only limiting factor is the computational complexity of the player. \item {\it Diversity}: As noted above, there are innumerable genres of games; arguably, there exist games related to virtually every human cognitive task. For example, many games require cognitive skills such as visual and auditory perception, communication, cooperation and competition, planning and reasoning, navigation and mapping, or prediction and model-building. \end{itemize} Note that the most commonly touted classes of benchmark problems/environments fall far short of games on several of the dimensions outlined above. For example (listing, for each benchmark, the dimensions on which it falls short): \begin{itemize} \item Physical robotics: simplicity, natural reward function, scalability, proven performance, availability and cost, speed, diversity. \item Simulated robotics: simplicity, natural reward function, scalability, proven performance, public awareness, diversity. \item Theorem proving: scalability, proven performance, understandability, fun, public awareness, industrial applicability, diversity. \item Natural language understanding/production: simplicity, natural reward function, understandability, fun, public awareness, availability and cost (of data), diversity. \item The original Turing test: scalability, proven performance, availability and cost (of test participants), speed, diversity.
\end{itemize} \subsection{Competitions} \label{compintro} Game challenges for artificial intelligence are often posed in the form of competitions. Probably the world's most famous AI event was a games-based competition: the match between former chess world champion Garry Kasparov and the IBM Deep Blue integrated software/hardware. The victory of machine over flesh in this competition prompted an uncommon but vigorous and certainly welcome public discussion about the nature of intelligence and whether it could be embedded in a machine. A near-consensus among commentators was that because this seemingly very complicated problem (playing chess) could be solved with just a simple search algorithm and a massive database, it was clearly not necessary to be intelligent in order to be able to play chess. (This is an example of the phenomenon of the ever-moving goalpost for AI: as soon as AI techniques are shown to solve a problem, the problem is deemed not to require intelligence, and its solution becomes ``mere computer science''.) Since then, game AI competitions have diversified significantly. A number of games-based AI competitions are currently running and enjoying healthy numbers of submissions from academic AI researchers with various specialties. In particular, the IEEE computational intelligence society sponsors a number of competitions associated with its Congress on Evolutionary Computation and Conference on Computational Intelligence and Games. These competitions are based on submitting agents that play particular games well; the very diverse collection of games used includes Othello, Go, Pac-Man, Super Mario Bros, Unreal Tournament and a car racing game (\cite{loiacono08the,loiacono10the,togelius10the,togelius08the,hingston09a}).
The point of these competitions is to focus researchers' efforts on a single problem, which has not been crafted in order to favour a particular algorithm, resulting in a reasonably fair and reliable comparison of competing AI algorithms. One competition we will find reason to return to in more depth later on is the Stanford General Game Playing Competition, which differs from the above competitions in that agents are judged not on their ability to play a single game, but on a number of unseen games. These games are described in a Game Description Language, which will be discussed further in section~\ref{gamelang}. \section{General Game Intelligence} \label{gameintel} Motivated by sections~\ref{gameargs} and~\ref{compintro}, we now proceed to combine the practical measure of intelligence from section~\ref{time} with a space of environments that is restricted to games, leading us to a practical measure of general game intelligence. In particular, we put the following restrictions on the set of environments: \begin{itemize} \item The total number of interactions is guaranteed to be finite (all games end eventually). \item The sum of rewards achievable in an episode (i.e.~the game-score) is bounded. \item The agent-environment interface is simple: the environment sends a string of symbols as observation, and the agent sends back a string of symbols as its action. \item Each game is encoded in a game description language (see section~\ref{gamelang} for details), and the length of this encoding, together with the computational resources required to run the game, defines its complexity $K$ (as described in section~\ref{time}). The assumption is that short encodings correspond to simpler games. \item When the game allows for a fixed adversary (e.g.~Deep Blue), the encoding length of the adversary, as well as its computation time, are incorporated in the complexity measure of the game, as if they were part of the environment.
This automatically makes games with stronger opponents more complex, and adjusts their weight in the total intelligence measure accordingly. \end{itemize} We distinguish two classes of game environments: those that interface to a single agent, for which we then define an \emph{absolute} measure of general game intelligence, and those with a higher number of players (typically 2) for which we similarly define a \emph{relative} measure. This relative measure of general game intelligence can then be used to establish a ladder system or a unique ranking of all participating agents (e.g.~Elo). This may give a richer description of the capabilities of an agent than the single number generated by the absolute measure, but is not quite as objective. \section{Game Description Languages} \label{gamelang} A game description language (GDL) is a language in which games can be described. More formally, each GDL is accompanied by an interpreter which transforms GDL strings into games, and a valid GDL string is one which can be transformed into a game by this interpreter. One could argue that any programming language constitutes a game description language, as would a universal Turing machine. However, a game in the sense we consider it here needs to have specified channels for input, output and reward signals, which is not true of programs in general. If we arbitrarily assign these channels to e.g. parts of memory or positions on the program tape, vanishingly few of all valid programs would also be games in any meaningful sense. Existing GDLs are much more limited in what they can express, in that they are not Turing-complete, and cannot even express all possible games. Even within the space of games which they can express, they are biased towards particular types of games (sampling all valid strings of a particular length will yield some types of games more often than others).
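To make the measure sketched in the previous section concrete, the following toy sketch (entirely our own illustration: the \texttt{ToyGame} interface, the sampler and all names are invented, not an existing system) samples descriptions, interprets the valid ones into string-in/string-out games with bounded scores and finite episodes, weights each game by $2^{-K}$ with $K$ the description length, and accumulates the agent's scores into an anytime estimate:

```python
import random

class ToyGame:
    """Toy game for illustration: echo each symbol of a hidden digit string.

    The reset/over/step/score interface is our own sketch of the
    string-in/string-out, bounded-score, finite-episode environments
    described above -- not an existing library.
    """
    def __init__(self, target):
        self.target, self.i, self.hits = target, 0, 0

    def reset(self):
        return self.target[0]               # observation: a string of symbols

    def over(self):
        return self.i >= len(self.target)   # all games end eventually

    def step(self, action):
        if action == self.target[self.i]:
            self.hits += 1
        self.i += 1
        return "" if self.over() else self.target[self.i]

    def score(self):
        return self.hits / len(self.target)  # bounded game-score in [0, 1]

def interpret(desc):
    """A valid description (here: any digit string) maps to a game."""
    return ToyGame(desc) if desc and desc.isdigit() else None

def intelligence_estimate(act, sample_desc, n_games=100):
    """Anytime sketch of the measure: sample games, weight by 2^-length."""
    total, weight = 0.0, 0.0
    for _ in range(n_games):
        game = interpret(desc := sample_desc())
        if game is None:                # invalid string: not a game
            continue
        w = 2.0 ** (-len(desc))        # shorter encodings count for more
        obs = game.reset()
        while not game.over():
            obs = game.step(act(obs))
        total += w * game.score()
        weight += w
    return total / weight if weight else 0.0

rng = random.Random(0)
sample = lambda: "".join(rng.choice("0123456789")
                         for _ in range(rng.randint(1, 6)))
print(intelligence_estimate(lambda obs: obs, sample))  # perfect echo agent
```

The more games are sampled, the more accurate the estimate; a perfect echo agent scores 1.0 by construction, weaker agents score lower, and invalid strings simply drop out of the sample.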
The following are some existing game description languages: \begin{itemize} \item The Stanford GDL is the language used in the Stanford General Game Playing Competition (\cite{Love2008}). This language is based on defining objects, actions and relations (representing legality of moves, effects etc.), and could in principle define a very large number of different games, though it is biased towards board games. It is limited to perfect information games with a discrete and finite state space, though it is claimed that it could be extended to imperfect information games. As a result of being so low-level, the Stanford GDL is not particularly compact, even when defining the type of games which it is biased towards; the example definition of Tic-Tac-Toe runs to two pages. \item The Ludi language was invented by Browne and Maire and used in their work on automatically generating games using evolutionary algorithms (\cite{Browne2009}). This language is restricted to ``recombination games'' (essentially a subset of board games) and is structured into a tree form, similar to LISP. Each branch of the tree is a {\it Ludeme} which describes a particular aspect of the game, such as the shape of the board or number of units. Due to being more specialized it is often quite compact, e.g. Tic-Tac-Toe can be defined in six lines. \item Another GDL with a narrower domain is the language used by \cite{togelius08experiment} in their experiment on evolving Pac-Man-like games. This language only admits fixed-length strings, where each string position has a particular meaning, e.g. the number of entities (or ``things'') of a particular colour, the movement logic of some entity, and what happens when a particular type of entity collides with another. The language is limited to predator-prey-like games in a discrete space of a given size. \end{itemize} What, then, would be desirable properties of a GDL used for testing for artificial general intelligence?
The following are some (potentially conflicting) suggestions: \begin{itemize} \item {\it Expressiveness}: The language should be able to express a large variety of different games, in order to test as many aspects of general intelligence as possible. It is desirable that the GDL should be able to express finite state as well as continuous state games, large as well as small input spaces, single-player games as well as multi-player games, perfect information games as well as partial information games, deterministic games as well as those with noisy state transitions, etc. \item {\it Compactness}: The representation of any particular game should be as short as possible. This also entails a language that is easy to sample from: many random strings will be valid games. \item {\it Meaningfulness}: As high a fraction as possible of the expressible games should be neither trivial nor impossible, but show significant skill differentiation in their outcome. \end{itemize} It is worth noting that there are likely to be partial conflicts between these properties, so that e.g. a more compact language would likely be less expressive; however, clever design efforts will probably be able to find languages that satisfy all properties to a reasonable degree. It is also worth noting that it is perfectly possible to devise languages which describe games with much more complex state spaces and input/output representations than the type of games described by current GDLs. There is no restriction on the complexity of the interpreter which generates games from descriptions; complete game engines, such as {\it Unity} or the {\it Unreal} engine could be included along with component artwork. This would allow the description of e.g. first-person shooter games and real-time strategy games.
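As a toy illustration of the fixed-length-string style of GDL surveyed above, the following sketch decodes a six-digit description into game parameters. The field layout and value tables are entirely invented for illustration; such a language is trivially compact and easy to sample (any digit string of the right length is valid), at the cost of expressiveness:

```python
# Hypothetical value tables: each digit indexes a movement logic or a
# collision effect (invented for illustration, not from any existing GDL).
MOVEMENT = ["still", "random", "chase", "flee"]
COLLISION = ["nothing", "die", "score", "teleport"]

def decode(desc):
    """Turn a 6-digit description into game parameters (hypothetical layout)."""
    if len(desc) != 6 or not desc.isdigit():
        raise ValueError("not a valid game description")
    return {
        "n_red":        int(desc[0]),                # number of red things
        "n_green":      int(desc[1]),                # number of green things
        "red_moves":    MOVEMENT[int(desc[2]) % 4],  # movement logic
        "green_moves":  MOVEMENT[int(desc[3]) % 4],
        "on_red_hit":   COLLISION[int(desc[4]) % 4], # collision effect
        "on_green_hit": COLLISION[int(desc[5]) % 4],
    }

print(decode("231203"))
```

Sampling this language uniformly makes the bias explicit: every description is a predator-prey-like game, and nothing else can be expressed.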
\section{AGI and Game Competitions} If intelligence tests along these lines are to be realized, one natural way to do it (especially for determining relative measures of general intelligence) would be in the form of public competitions. As discussed above, a number of competitions are currently ongoing (and many more have run in the past) where AI techniques are tested using games of various types. However, with the exception of the Stanford General Game Playing Competition, they are all measuring performance on a single benchmark only. The Stanford competition has met with rather limited participation, and is very different in setup from the ideas proposed here in several ways -- most importantly, the agents are given a complete description of the environment, favoring reasoning over learning approaches. It is important to state that we are not advocating an immediate transition to a unique intelligence test and a single competition based on it. Rather, the ideas in this paper could form the basis for a number of separate competitions, possibly based on already existing competitions. These competitions would be based on different GDLs that are more or less strongly biased towards particular types of games. One way to achieve a smooth transition from a game-specific competition to a more general competition could be to take an existing benchmark game, and break it down into a number of components and parameters, which can be used to build a game description language. This GDL could then describe the original game, subsets of the original game and variations of it. For example, one of the currently ongoing competitions is about constructing an AI controller that plays Super Mario Bros\footnote{http://www.marioai.org, ~\cite{togelius10the}}. This classic game could perhaps be decomposed into ``ludemes'' describing e.g.
what happens when Mario runs into enemies of various types, rules determining behavior of the various NPCs, effects that power-ups have, the physics of the game, movement capabilities of Mario, sizes and topologies of levels, rules about scoring, winning and losing, etc. This would allow a number of games somewhat similar to Mario, but differing greatly in game mechanics, to be expressed in a language that could be sampled according to the ideas expressed in this paper. \section{Objections} In this section, we gather a few conceivable objections to the central proposition in this paper, along with our responses to these objections. \begin{itemize} \item {\it ``This test will have enormous computational requirements. Playing many episodes of so many different games is just not feasible.''} Measuring the performance on all possible games is certainly impossible. But our test is a sample-based anytime measure, meaning that the more time we have, the more games we can test on and therefore the more accurate our measure becomes. If we have extremely limited time and/or computer power available we can test an agent on only a handful of games. If we want to compare the intelligence of two agents in very limited time, however, it is important that we test them on the same games in order to counteract the effects of having chosen this particular subset of games. Limiting the evaluation time may however bias the test toward particular aspects of intelligence, e.g. when it is too short for learning to be effective. \item {\it ``Most games drawn at random will be meaningless, and winning or losing them will not be indicative of any interesting form of intelligence.''} This partly depends on the game description language chosen; some GDLs produce a higher proportion of games with good skill differentiation than others.
But even if a large majority of sampled games are trivial or impossible, or only test very narrow abilities, this will not bias the test significantly. All agents will fail on the impossible games, and all agents of minimal intelligence will learn to solve the trivial or narrow games. Therefore, the ranking of various agents will be decided by their performance on the more interesting games. At most, the results of the test may be renormalized w.r.t.~the proportion of trivial and impossible games, for readability. It might also be possible to automatically pre-test the games for basic playability and learnability before using them as part of the test. \item {\it ``Is it not counter-intuitive to assign higher weight to simple environments, which presumably require less intelligence to tackle?''} The reason behind this is two-fold. First, there are many more complex games than simple ones, so taken together, the weight of the complex ones is substantial. Second, this helps avoid over-specialization, as each general agent is required to handle most simple environments -- also intuitively, we can question the \emph{general} intelligence of the clich\'{e} math genius who is unable to tie his shoes. And as stated above, if any game is so easy that all agents can play it, the difference between the agents will be decided by the more difficult games. Note that a short description does not necessarily imply a simple environment: complexity can emerge from simple rules (e.g. fractals, the game of Go). \item {\it ``Your test will only measure very narrow aspects of intelligence, e.g. combinatorial reasoning. You completely miss out on the most important aspects of intelligence, namely (insert property here).''} There is some substance to this objection, depending on the game description language chosen. If the GDL is only capable of describing games within a rather narrow domain, the skills needed to win at games within this domain will of course be more thoroughly tested.
However, note that even games with very simple mechanics appear to require sophisticated skills; cf. the difficulties faced by AI researchers in building bots that outperform humans in both Go and Poker. Bear in mind that these seem to be very different problems requiring different approaches: An agent that could play both games (which could certainly be represented by the same GDL) at a high level would arguably be more generally intelligent than anything we have now. For more on how this sort of test measures supposedly human capacities like intuition and creativity, see the extensive discussion in the original paper on universal intelligence (\cite{Legg2007}); the basic argument is that to the extent these capabilities help the agent solve the problem, they are implicitly tested. \item {\it ``Your test is completely disembodied. But intelligence is a property of an embodied agent situated in an environment, grounding its symbols in direct interaction with the environment through sensors and actuators.''} Whether embodiment is necessary is a debated topic, with differing views among researchers and philosophers from different camps. If we restrict the GDL to combinatorial or similar games, we are certainly leaving embodiment out. But to the extent that an agent can be said to be embodied in a virtual world, embodiment can be accommodated within this framework by simply including a 3D game engine in the GDL interpreter. This would make it possible to describe games taking place in rich 3D environments, forcing the agent to interpret high-dimensional visual input and map its own movements between body-space and world-space. \item {\it ``You're not saying anything new. This is all implicit in the Stanford General Game Playing Competition.''} There is, somewhat surprisingly, very little theoretical justification to be found for the Stanford GGP, at least in published form.
In this paper, we lay out a theoretical framework for that competition and similar competitions, and connect it to a well-known theoretical contribution from the AGI community. We also discuss the severe limitations of the GDL used in that competition, and propose a principled way of sampling which games to play. Finally, a crucial difference between the Stanford GGP and the test we are proposing is that the Stanford GGP provides agents with the GDL specification of the game they are playing (arguably making the competition more about parsing and internal simulation than learning based on experience) whereas we propose to let the agents explore the game by interacting with it, which is more general. \item {\it ``You're not saying anything new. This is all implicit in Legg and Hutter's definition of universal intelligence.''} Universal intelligence is incomputable and does not take finite resources into account; it is expressly meant to be approximated, as a basis for more practical tests (a call which this paper is answering). The handling of finite resources through the agent managing a time budget and deciding when to switch between training and evaluation phases is new, as far as we know, as is the idea to sample the space of a GDL based on description length. \end{itemize} \section{Conclusions} In this paper we discussed why games should -- and how they can -- be used for research in Artificial General Intelligence. We note that games are in many ways ideal for AI research, but that current research which focuses on testing algorithms on particular games fails to test for general intelligence: A general AI agent instead needs to be tested on an appropriate, broad selection of unseen games.
To this effect, we have derived a practical measure of general game intelligence from the Legg-Hutter definition of universal intelligence, which elegantly incorporates the usage of computational resources of both the agent and the game engine, in addition to being an easily approximable anytime-measure. The central idea then is to use length- and resource-weighted sampling of strings from a game description language and evaluate the agent on the corresponding games. As game description languages are inherently limited and biased, we discussed some existing GDLs, desirable properties of GDLs for AGI testing, and how existing competitions could be turned into more general competitions. We hope that this paper can spark and sustain interest in addressing the general AI problem directly within the Game AI and Computational Intelligence and Games communities, and in developing new challenging game-based AI competitions. \section*{Acknowledgments} This work was funded in part by SNF grant number 200020-122124/1.
\section{Introduction} \enlargethispage{2\baselineskip} As is well-known from semiconductor physics, the energy band structure of materials can be substantially modified by an external periodic potential, resulting in unusual transport and optical properties. That is why the energy band structure of graphene under a periodic potential (graphene superlattice) was extensively studied from the early days of graphene physics. For single layer graphene superlattices (SLGSLs), the energy band structure has been examined in detail in a number of works for periodic potentials of different natures (electric \cite{park,brey,barbier,huy} or magnetic \cite{ghosh,masir,snyman,dellan,lequi}) and different shapes (Kronig-Penney \cite{park,huy,masir,lequi}, cosine \cite{brey} or square \cite{barbier}). Interesting findings have been reported, such as a strongly anisotropic renormalization of the carrier group velocity and an emergence of extra Dirac points (DPs) in the band structure of electric SLGSLs \cite{park,brey,barbier,huy} or an emergence of finite-energy DPs in the band structure of magnetic ones \cite{snyman,dellan,lequi}. There are fewer works concerning the energy band structure of bilayer graphene superlattices (BLGSLs) and they are all devoted to the case of electric potentials \cite{peeters,killi,tan}. The most impressive features observed in the band structure of the electric BLGSLs studied (with different potential shapes: $\delta$-function \cite{peeters}, rectangular \cite{killi}, or sine \cite{tan}) are an emergence of a pair of new zero-energy DPs or an opening of a direct band gap, depending on the potential strength. These unusual features have been observed only in the band structure of electric BLGSLs and, moreover, they are assumed common for all electric BLGSLs with any potential shape, provided the average potential is zero.
Then, a question certainly arises of whether periodic magnetic potentials could bring about similar features in the energy band structure of bilayer graphene (BLG). Unfortunately, to our knowledge, no works on the band structure of BLGSLs with magnetic potentials have been reported. Note that while sharing with single layer graphene many properties important for electronics applications, BLG offers additional advantages, including the ability to open a band gap in the energy spectrum and to tune it flexibly with an external electric field \cite{castro,cann}. On the other hand, the effects of a magnetic potential and those of an electric one on the energy band structure of graphene might be related to each other \cite{louie}. The purpose of the present work is to study the energy band structure of BLGSLs with a magnetic potential (magnetic BLGSLs - MBLGSLs) which arise from an infinitely flat Bernal-stacked BLG in a periodic magnetic field with zero average magnetic flux as schematically described in Fig.1$(a)$. The magnetic field is assumed to be uniform in the $y$-direction and staggered as $\delta$-function barriers of alternate signs, $\vec{B}_0$ and $- \vec{B}_0$, along the $x$-direction. The corresponding vector potential $A(x)$ can be then described as the standard Kronig-Penney potential with $d_B$ the barrier width, $d_W$ the well width, $d = d_B + d_W$ the superlattice period, and $A_0 = B_0 l_B$ the potential strength ($l_B = \sqrt{\hbar / eB_0}$ the magnetic length and $e$ the elementary charge) [Fig.1$(b)$]. For such periodic magnetic potentials the only way of breaking the symmetry is associated with a difference between $d_B$ and $d_W$. So the parameter $q = d_W / d_B$ is introduced to describe the asymmetric effects and the MBLGSLs with $q = 1$ ($q \neq 1$) will be referred to as symmetric (asymmetric) MBLGSLs.
Thus, within the model discussed the periodic magnetic potential is entirely characterized by the three parameters: $A_0$, $d$, and $q$. Such a $\delta$-function model will hold as long as the de Broglie wavelength of quasi-particles is much larger than the typical width of magnetic barriers \cite{ghosh}. Actually, this magnetic potential model is the same as that used for SLGSLs in Refs.\cite{ghosh,barbier,lequi}, but the structure studied here is BLG. To justify the consideration, we will ignore intervalley scattering assuming that the widths $d_{B(W)}$ are much larger than the lattice constant in graphene. All spin-related effects are also neglected. Besides, potentials on both graphene layers are assumed to be the same at a given $(x,y)$-point. Under these conditions the low-energy excitations near one original Dirac point (say, $K$) in the energy band structure can be generally described in the four-band continuum nearest-neighbor, tight-binding model with the Hamiltonian \begin{equation} H \ = \ \left( \begin{array}{cccc} 0 & \ v_F \hat{\pi} & \ t_\perp & \ 0 \\ v_F \hat{\pi}^+ & \ 0 & \ 0 & \ 0 \\ t_\perp & \ 0 & \ 0 & \ v_F \hat{\pi}^+ \\ 0 & \ 0 & \ v_F \hat{\pi} & \ 0 \end{array} \right) \ , \end{equation} where $\hat{\pi} = p_x + i p_y$, $v_F = \sqrt{3} t a / (2 \hbar ) \approx 10^6 \ m/s$ is the Fermi velocity, $t \approx 3 \ eV$ is the intralayer nearest-neighbor hopping energy, $a = 2.46$~\AA\ is the lattice constant of graphene, and $t_\perp \approx 0.39 \ eV$ is the interlayer nearest-neighbor hopping energy. The magnetic field effect is here accounted for by the momentum operator $\vec{p} = (p_x , p_y ) \equiv - i \hbar \vec{\nabla} + e \vec{A}$. The Hamiltonian of eq.(1) is limited to the case of symmetric on-site energies \cite{cann}.
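As a quick numerical sanity check of the Hamiltonian in eq.(1) (an illustrative sketch of ours, not part of the analysis of this paper), one can diagonalize it at zero magnetic field and verify the well-known low-energy parabolic band of pristine BLG, $E = \pm \hbar^2 k^2 / 2m$ with $m = t_\perp / 2 v_F^2$; in units of $t_\perp$ for energies and $t_\perp / \hbar v_F$ for momenta this reads simply $E \approx \pm k^2$:

```python
import numpy as np

def h_blg(kx, ky):
    """Dimensionless four-band BLG Hamiltonian of eq.(1) at zero field
    (energies in t_perp, momenta in t_perp/(hbar v_F))."""
    pi = kx + 1j * ky
    return np.array([[0,           pi, 1, 0],
                     [np.conj(pi), 0,  0, 0],
                     [1,           0,  0, np.conj(pi)],
                     [0,           0,  pi, 0]], dtype=complex)

k = 0.05
bands = np.linalg.eigvalsh(h_blg(k, 0.0))
e_low = min(e for e in bands if e > 0)   # lowest conduction band
# exact result: E = sqrt(1/4 + k^2) - 1/2, i.e. E ~ k^2 near k = 0
print(e_low, np.sqrt(0.25 + k**2) - 0.5, k**2)
```

At $k = 0$ the four eigenvalues are $0, 0, \pm 1$ (in units of $t_\perp$), i.e. two bands touch at zero energy while the other two are split off by the interlayer hopping, consistent with the single zero-energy DP of pristine BLG.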
\begin{figure} [t] \begin{center} \includegraphics[trim = 0in 1in 0in 0in,width=9.0cm,height=4.0cm]{fig1.eps} \caption{ (color online) Model of MBLGSLs under study: $(a)$ Periodic $\delta$-function magnetic barriers of alternate signs, $\vec{B}_0$ and $- \vec{B}_0$ [red arrows] and $(b)$ Corresponding 1D periodic vector potential $\vec{A}(x)$ [blue curve] with $A_0$ the potential strength, $d_B$ the barrier width, and $d_W$ the well width [the period $d = d_B + d_W$]. The dashed-line box in $(a)$ describes the unit cell in $T$-matrix calculations. } \end{center} \end{figure} Due to the periodicity of the potential $A(x)$ the time-independent Schr\"{o}dinger equation $H \Psi = E \Psi$ for the Hamiltonian $H$ of eq.(1) could be most conveniently solved using the transfer matrix method \cite{peeters,chau}, which generally reduces the energy spectrum problem to solving the equation (see Supplementary Material \cite{suppl}): \begin{equation} det \ [ \ T \ - \ e^{i k_x d} R^{-1}_I (d) \ ] \ = \ 0 , \end{equation} where $k_x$ is the Bloch wave vector and $T$ and $R_I$ are matrices, depending on the Hamiltonian under study. In the case of SLGSLs, when the Hamiltonian $H$ and, therefore, $T$ and $R_I$ are $2 \times 2$ matrices, equation (2) can be solved analytically, which straightaway gives a general expression for the dispersion relation $E(\vec{k})$ \cite{lequi}. For MBLGSLs in the four-band model of eq.(1), in general, equation (2) with $(4 \times 4)$-matrices $T$ and $R_I$ is too complicated to be solved analytically, so we have solved it numerically and show in Figs.2 and 3, for example, the structure of several of the most important minibands. {\sl Zero-energy DP}.- To examine the zero-energy DPs, Fig.2 presents the lowest conduction and the highest valence minibands for the MBLGSLs with potentials of $A_0 = 0.5$ and $d = 4$ in three cases: $(a) \ q = 1$ (symmetric potential), $(c) \ q = 1.5$, and $(e) \ q = 0.5$ (asymmetric potentials).
The boxes $(b)$, $(d)$, and $(f)$ present the contour plots of the lowest conduction miniband for the energy spectra shown in $(a)$, $(c)$, and $(e)$, respectively. (Due to the symmetry of the spectra with respect to the $(E = 0)$-plane, the analysis hereafter concentrates on the positive-energy part.) For comparison, recall that in the energy band structure of pristine BLG there is a single zero-energy DP located at $\vec{k} = 0$, in the vicinity of which the dispersion has the parabolic double-cone shape $E = \pm \hbar^2 k^2 / 2 m$ with the isotropic mass $m = t_\perp / 2 v_F^2$ \cite{cann}. Hereafter, for convenience, dimensionless quantities are introduced: energies in units of $t_\perp$, $x$ (or $d$) in $ (\hbar v_F / t_\perp )$, and $k_{x(y)}$ in $(t_\perp / \hbar v_F )$ with $t_\perp$ and $v_F$ given above. \begin{figure} [t] \begin{center} \includegraphics[trim = 0in 0.5in 0in 0in,width=9.0cm,height=7.0cm]{fig2.eps} \caption{ (color online) Zero-energy DP. Lowest conduction and highest valence minibands [$(a)$,$(c)$, and $(e)$] and corresponding contour plots [$(b)$ to $(a)$, $(d)$ to $(c)$, and $(f)$ to $(e)$] are shown in three cases: $q = 1$ [$(a,b)$], $q = 1.5$ [$(c,d)$], and $q = 0.5$ [$(e,f)$]. In all the cases: $d = 4$ and $A_0 = 0.5$. DP is located at $(E, k_x , k_y ) = (0, 0, 0)$ in $(a,b)$; $(0, 0, 0.1)$ in $(c,d)$; and $(0, 0, - 0.167)$ in $(e,f)$. All contour plots show isotropic dispersions. } \end{center} \end{figure} In the case of $q = 1$ (symmetric potential), Fig.2$(a)$ shows a clear zero-energy DP at $\vec{k} = 0$ with an isotropic dispersion [see contour plot in $(b)$], like the original zero-energy DP of pristine BLG.
In the case of $q \neq 1$, however, the asymmetric magnetic potential moves the zero-energy DP along the $(k_x = 0)$-direction, either in the positive [if $q > 1$, see Figs.2$(c,d)$] or in the negative direction of $k_y$ [if $q < 1$, see Figs.2$(e,f)$], while the related dispersions all remain isotropic [see contour plots in $(d)$ and $(f)$]. Actually, the effects of the periodic magnetic potential on the zero-energy DP depend sensitively on the potential parameters $A_0$, $d$, and $q$ and can be quantitatively understood from eq.(2) (see Supplementary Material \cite{suppl}). First, concerning the location of the DP, we have found that the magnetic potential [along the $x$-direction] changes only the $k_y$-coordinate of the zero-energy DP, moving it from $(E = 0, k_x = 0, k_y = 0)$ [in the absence of the potential] to $(E = 0, k_x = 0, k_y = k_y^{(a)} = [(q - 1)/(q + 1)] A_0 )$ [in the presence of the potential with $A_0$ and $q$]. Numerical solutions of eq.(2) support this estimate well (see Fig.2, where the $(E, k_x )$-coordinates of all the DPs observed are $(0, 0)$, while their $k_y$-coordinates depend on $q$: $k_y = k_y^{(a)} = 0, \ 0.1$, and $\approx - 0.167$ in $(a,b) [q = 1]$, $(c,d) [q = 1.5]$, and $(e,f) [q = 0.5]$, respectively). Note that $k_y^{(a)}$ does not depend on the period $d$. Next, to check the dispersion relation associated with the zero-energy DP identified, we expand eq.(2) to the lowest order in $E$, $k_y$, and $k_x$ in the vicinity of its location, $(E = 0, k_x = 0, k_y = k_y^{(a)})$, which leads to \begin{equation} E \ = \ \pm \frac{\hbar^2}{ 2 m^* } [ \ k_x^2 + (k_y - k_y^{(a)})^2 \ ] , \end{equation} with the mass $m^*$ depending on $A_0$, $d$, and $q$: \begin{equation} m^* \ = \ m \ \frac{2A_0 d q}{ (q + 1)^2 \sinh (2A_0 d q / (q + 1)^2)} .
\end{equation} \begin{figure} [t] \begin{center} \includegraphics[trim = 0in 0.5in 0in 0in,width=8.0cm,height=4.5cm]{fig3.eps} \caption{ (color online) Finite-energy DPs. Two lowest conduction minibands [$(a)$ and $(c)$] and corresponding contour plots [$(b)$ to $(a)$ and $(d)$ to $(c)$] are shown in two cases: $q = 1$ [$(a,b)$] and $q = 1.5$ [$(c,d)$]. In both cases: $d = 4$ and $A_0 = 0.5$. Clearly, there is always a DP at the edge of the Brillouin zone, $k_x = \pi /d$ [or, equivalently, $k_x = - \pi /d$], which is located at $(E, k_y ) = (E_1 \approx 0.37 \ t_\perp , 0)$ in $(a)$ or $(E_1 \approx 0.375 \ t_\perp, k_y^{(f)} \approx 0.09)$ in $(c)$. Contour plots $(b)$ and $(d)$ show anisotropic dispersions. } \end{center} \end{figure} While the obtained relation of eq.(3) shows a parabolic dispersion with an isotropic double-cone shape like that of the original zero-energy DP of pristine BLG, the isotropic mass $m^*$ does, however, depend on the potential parameters ($A_0$, $d$, and $q$). It is clear from eq.(4) that $(i)$ $m^*$ is always less than $m$ ($= t_\perp / 2 v_F^2$ for pristine BLG), $(ii)$ $m^* / m$ monotonically decreases with increasing $A_0$ and/or $d$, and $(iii)$ the $m^* (q)$-dependence has a single minimum at $q = 1$ (i.e., for symmetric potentials). Thus, the shift of the $k_y$-coordinate of the location and the dependence of the mass on the potential parameters are the two effects a periodic magnetic potential can induce on the zero-energy DP in the energy band structure of BLG. For comparison, recall again that periodic electric potentials generate either a pair of new zero-energy DPs or a direct band gap \cite{peeters,killi,tan}. {\sl Finite-energy DPs}.- Fig.3 shows the two lowest conduction minibands [$(a)$ and $(c)$] and the corresponding contour plots [$(b)$ and $(d)$] for the same MBLGSLs as those studied in Figs.2$(a,b)$ and $(c,d)$, respectively. (Note again the symmetry of the energy spectra with respect to the $(E = 0)$-plane.)
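Both analytic results for the zero-energy DP, the shift $k_y^{(a)} = [(q - 1)/(q + 1)] A_0$ and the mass ratio of eq.(4), are easy to check numerically; a minimal sketch (with the parameter values of Fig.2; an illustration only):

```python
import numpy as np

def ky_shift(A0, q):
    # Shift of the zero-energy DP: k_y^(a) = [(q - 1)/(q + 1)] A0.
    return (q - 1.0) / (q + 1.0) * A0

def mass_ratio(A0, d, q):
    # m*/m from eq.(4); with u = 2 A0 d q / (q + 1)^2 it equals u/sinh(u) < 1.
    u = 2.0 * A0 * d * q / (q + 1.0)**2
    return u / np.sinh(u)

A0, d = 0.5, 4.0
shifts = [ky_shift(A0, q) for q in (1.0, 1.5, 0.5)]   # 0, 0.1, -1/6
r = mass_ratio(A0, d, 1.0)                            # 1/sinh(1), about 0.85
```

The shifts reproduce the DP locations of Fig.2, and the three properties $(i)$-$(iii)$ of $m^*/m$ follow since $u/\sinh u < 1$ decreases in $u$ while $u(q)$ is maximal at $q = 1$.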
In both cases, $q = 1$ $(a,b)$ and $q = 1.5$ $(c,d)$, the touching points are clearly seen at the edges of the Brillouin zone, $(k_x = \pm \pi / d, k_y = k_y^{(f)})$, where $k_y^{(f)}$ depends on $A_0$ and $q$ and equals zero for symmetric potentials [Fig.3$(a,b)$]. Similar finite-energy touching points also exist at higher energies (not shown). A quantitative description of the observed touching points is generally beyond our ability, except in the case of symmetric potentials, when the locations of these points as well as the related dispersions can be found in the same way as that used above for the zero-energy DP. \begin{figure} [t] \begin{center} \includegraphics[trim = 0in 1.0in 0in 0in,width=9.0cm,height=5.0cm]{fig4.eps} \caption{ (color online) The three lowest energies $E_n$ determined from eq.(5) are plotted versus $A_0$ [$(a)$ for $d = 4$] or $d$ [$(b)$ for $A_0 = 0.5$], $n = 1, 2$, and 3 (from bottom). In $(c)$ and $(d)$, the velocities $v_{1x}$ [red solid line] and $v_{1y}$ [blue dashed line] of eq.(6) are plotted versus $A_0$ [$(c)$ for $d = 4$] or $d$ [$(d)$ for $A_0 = 0.5$], respectively. } \end{center} \end{figure} Indeed, in the case of symmetric potentials, $q = 1$, as can be seen in Figs.3$(a,b)$ [and can be directly checked using eq.(2)], the finite-energy touching points are located at ($k_x = \pm \pi / d, k_y = 0$). Substituting these wave numbers into eq.(2), we obtain the relation \begin{eqnarray} & & \cos (k_1 d /2) \cos (k_2 d/2) - \nonumber \\ & & (A_0^2 / k_1 k_2 ) \sin (k_1 d/2) \sin (k_2 d/2) \ = \ 0 , \end{eqnarray} where $k_{1(2)} = \sqrt{E^2 \pm E - A_0^2}$. Given $A_0$ and $d$, solving this equation fixes the energy coordinates $E_n$ of all finite-energy touching points of interest (which come in pairs). For example, in Figs.4$(a)$ and $(b)$ the three lowest energies $E_n$ are plotted as functions of $A_0$ [for $d = 4$] and $d$ [for $A_0 = 0.5$], respectively.
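Eq.(5) is transcendental, so the energies $E_n$ have to be found by a numerical root search. A possible sketch (an illustration; the scan window and grid are our own choices, with the lower bound placed above the zero of $k_1$ to avoid a numerical $0/0$):

```python
import numpy as np

def f5(E, A0, d):
    # Left-hand side of eq.(5); k1 and k2 may be imaginary, so the
    # square roots are taken in the complex plane.
    k1 = np.sqrt(E**2 + E - A0**2 + 0j)
    k2 = np.sqrt(E**2 - E - A0**2 + 0j)
    val = (np.cos(k1 * d / 2) * np.cos(k2 * d / 2)
           - A0**2 / (k1 * k2) * np.sin(k1 * d / 2) * np.sin(k2 * d / 2))
    return val.real  # the combination is real for real E

def lowest_E(A0, d, Emin=0.25, Emax=1.0, n=2000, iters=60):
    # Locate the first sign change of f5 above Emin, then refine by bisection.
    Es = np.linspace(Emin, Emax, n)
    vals = np.array([f5(E, A0, d) for E in Es])
    i = int(np.argmax(vals[:-1] * vals[1:] < 0))
    a, b = Es[i], Es[i + 1]
    for _ in range(iters):
        m = 0.5 * (a + b)
        if f5(a, A0, d) * f5(m, A0, d) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

E1 = lowest_E(0.5, 4.0)  # lowest finite-energy DP for A0 = 0.5, d = 4
```

For $A_0 = 0.5$ and $d = 4$ this reproduces $E_1 \approx 0.37 \ t_\perp$, the value quoted in Fig.3$(a)$.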
Except for the lowest energy $E_1$ in Fig.4$(a)$, which decreases with increasing $A_0$, all the calculated energies $E_n$ increase with the potential strength $A_0$ [two upper curves in Fig.4$(a)$], but decrease with increasing potential period $d$ [Fig.4$(b)$]. In particular, Fig.3$(a)$ shows $E_1 \approx 0.37 \ t_\perp$. Thus, eq.(5) gives the energy positions $E_n$ of all finite-energy touching points, which are located at $(k_x = \pm \pi / d, k_y = 0)$ in the case of symmetric magnetic potentials. \begin{figure} [t] \begin{center} \includegraphics[trim = 0in 1.0in 0in 0in,width=9.0cm,height=5.0cm]{fig5.eps} \caption{ (color online) The DOS for the MBLGSL with the band structure presented in Fig.3$(a)$ [red solid line] and that of pristine BLG [blue dashed line] are compared. Arrows indicate the energy positions $E_1$ and $E_2$ of the lowest finite-energy DPs. } \end{center} \end{figure} Further, by expanding eq.(2) in the vicinity of the touching points identified, $(E = E_n , k_x = \pi / d, k_y = 0)$, we arrive at the linear dispersion relation \begin{equation} E - E_n \ = \ \pm \sqrt{ v_{nx}^2 ( k_x - \pi /d )^2 + v_{ny}^2 k_y^2 } , \end{equation} where $v_{nx}$ and $v_{ny}$ are carrier group-velocity components depending on $A_0$ and $d$. Due to the double-cone shape of these dispersions, the identified finite-energy touching points can simply be referred to as finite-energy DPs. Unfortunately, we are unable to derive analytical expressions for $v_{nx}$ and $v_{ny}$, so, for instance, we show in Figs.4$(c)$ and $(d)$ the numerical values of $v_{1x}$ [solid line] and $v_{1y}$ [dashed line] for the lowest (and most important) finite-energy DP ($n = 1$) plotted against $A_0$ or $d$, respectively. At small $A_0$ and/or $d$ a large difference between the two velocities, $v_{1x} \gg v_{1y}$, demonstrates a strongly anisotropic dispersion.
Given $d$ (or $A_0$), there exists a single value $A_0 = A_0^{(c)}$ (or $d = d^{(c)}$) at which $v_{1x} = v_{1y}$, indicating an isotropic dispersion [$A_0^{(c)} \approx 1.5 $ in Fig.4$(c)$ or $d^{(c)} \approx 10.435 $ in Fig.4$(d)$]. Beyond this point the anisotropy of the dispersion is recovered, but it is much weaker than in the region of small $A_0$ and/or $d$. Returning to Figs.3$(a,b)$ with $A_0 = 0.5$ and $d = 4$, we have $v_{1x} / v_{1y} \approx 2.1 $. For higher finite-energy DPs ($n = 2, 3, ...$), calculations show much more complicated $A_0$- and $d$-dependences of the velocities (not shown), demonstrating a strongly anisotropic dispersion at small as well as large values of $A_0 / d$, except at a single point where $v_{nx} = v_{ny}$. In the opposite case of asymmetric potentials, $q \neq 1$ [Figs.3$(c,d)$], we can only comment qualitatively that the finite-energy DPs with linear dispersion should still be generated at $k_x = \pm \pi / d$, but at non-zero $k_y = k_y^{(f)}$ and at energies which depend on the potential parameters in a much more complicated way than eq.(5) [in Figs.3$(c,d)$, $k_y^{(f)} \approx 0.09$ and $E_1 \approx 0.375 \ t_\perp $]. Note that for BLGSLs with electric $\delta$-function potentials the finite-energy touching points are also generated at $(k_x = 0, k_y = 0)$; the related dispersions are, however, direction-dependent in the sense that they are linear or parabolic in the $k_x$- or $k_y$-direction, respectively \cite{huy2}. Finally, Fig.5 compares the density of states (DOS) for the band structure of the MBLGSL studied in Fig.3$(a)$ [red solid line] with that of pristine BLG [blue dashed line]. Clearly, the periodic magnetic potential makes the DOS fluctuate considerably (compared to that of pristine BLG). The central minimum in the solid curve (at $E = 0$) is related to the zero-energy DP.
The local dips at finite energy in this DOS are associated with the finite-energy DPs at $E_1$ and $E_2$ [indicated by arrows], whereas the peaks are located at the bending points between these DPs. Such potential-induced behavior of the DOS should manifest itself in the transport properties of the structure. In summary, we have studied the effects of a periodic $\delta$-function magnetic field with zero average flux (say, along the $x$-direction) on the energy band structure of BLGs. It was shown that the magnetic potential $(i)$ may move the location of the original zero-energy DP along the $(k_x = 0)$-direction to a finite $k_y$-coordinate, keeping the double-cone dispersion isotropic and parabolic, with a carrier mass that, however, depends on the potential; and $(ii)$ generates finite-energy DPs with a linear and anisotropic dispersion at the edges of the Brillouin zone. In the case of symmetric magnetic potentials the position and the dispersion are exactly determined for all the DPs of interest, while in the case of asymmetric potentials this can be done for the zero-energy DP only. We expect these findings to be robust against changes in the shape of the magnetic potential, provided the average flux is zero. \\ {\sl Acknowledgments}.- This work was financially supported by the Vietnam National Foundation for Science and Technology Development under Grant No. 103.02-2013.17.
\section{Introduction} \label{intro} Stochastic methods for physical systems coupled to external baths have a long history dating back to Einstein \cite{Einstein1905} and Langevin \cite{Langevin1908}. The idea behind these approaches is that the many degrees of freedom of the bath induce random motion in the system \cite{Kloeden1999,Gardiner2000,Breuer2002,Razavy2006,vanKampen,Weiss2007}. Classically, this is due to collisions between the particles of the bath and of the system and can be described by a Langevin equation for certain system variables. Quantum-mechanically, the randomness is introduced by transitions between different states of the system induced by the bath and can be described by a stochastic Schr\"odinger equation (SSE) \cite{Ghirardi1990,Diosi1997,Gaspard1999,Yu1999,Strunz2000,vanKampen}. Alternatively, one can derive statistical descriptions averaging over many realisations of the stochastic process, leading to the Fokker-Planck equation for the distribution function of a classical system and the master equation for the reduced density operator of a quantum system \cite{Kloeden1999,Gardiner2000,Breuer2002,Razavy2006,vanKampen,Weiss2007}, respectively. Assuming the equivalence between the master equation for the density matrix $\hat{\rho}(t)$ and the SSE \cite{Yu1999,vanKampen}, which might not always hold \cite{DAgosta2008a}, the latter has sometimes been seen as a ``quick and dirty'' way to obtain the solution of the former. The numerical solution of the master equation scales poorly with the number of states kept in the calculation since it is an equation of motion for a \emph{matrix} in state space, whereas a Schr\"odinger equation is an equation for a \emph{vector}. This strongly limits the applicability of the master equation to complex systems. 
In particular, if the quantum system consists of interacting particles, in which case the state space is the Fock space, the scaling of the density matrix strongly limits the numerical calculations and approximate methods are needed \cite{Pershin2008}. Alternatively, one can establish a time-dependent density functional theory \cite{Marques2006} of open quantum systems \cite{Burke2005,DiVentra2007,DAgosta2008a}. In general, the dynamics of an open system is non-Markovian, i.e., the change of the state of the system at a certain time does not only depend on its present state but also on its state at all previous times. Understandably, the solution of a non-Markovian master equation \cite{Nak58,Zwa60} is difficult because it involves the evaluation of a convolution integral which depends on the history of the system. Therefore, one often employs a Markov approximation, which replaces the memory kernel in this convolution integral by a $\delta$-function. In doing so, however, one loses the connection with the exact dynamics and the ability to reproduce the correct steady state, unless one is capable of constructing effective bath-system operators that recover the exact behaviour \cite{Wichterich2007}. It should be noted that an exact time-convolutionless master equation can be derived \cite{ToM76,STH77,Breuer2002,Timm2011}. However, this does not usually reduce the numerical complexity since one needs to evaluate the generator of the time-convolutionless master equation describing the history of the system at each time step of the numerical integration. In order to study the non-Markovian dynamics it would be advantageous to have a SSE that is local in time but is nevertheless able to reproduce the dynamics induced by a non-Markovian master equation (NMME).
Such an equation has been proposed by Strunz and coworkers: first mentioned as a byproduct in \cite{Strunz2004}, it has been applied to the spin-boson model and compared to non-linear SSEs in \cite{DeVega2005} and also to a more realistic two-level model immersed in a photonic band-gap material in \cite{DeVega2005b}. Here, we arrive at the same time-convolutionless SSE (TCLSSE) of Strunz and coworkers starting from a non-Markovian SSE obtained by Gaspard and Nagaoka \cite{Gaspard1999}. We show how the TCLSSE and the NMME coincide up to third order in the coupling parameter between the system and the bath. One of the applications of the formalism is the investigation of the bath-induced relaxation of the open system towards a steady state. In contact with a single bath at a constant temperature, the system will approach an equilibrium state with that temperature. It can be shown that in the non-Markovian case there exists an exact condition that the memory kernel must satisfy for the system to reach thermal equilibrium, i.e., $\hat \rho(t\rightarrow \infty)\propto \exp(-\beta \hat H)$ \cite{Breuer2002,Biele2011a}. (It is well known that the Hamiltonian $\hat H$ appearing here might be different from the one describing the system dynamics, due for example to the Stark and Lamb shifts. For simplicity, we here assume that these effects can be neglected, since they are normally proportional to $\lambda^4$, where $\lambda$ is the coupling parameter between the system and the bath.) This condition is known as \emph{detailed balance} since it relates the absorption and emission probabilities. The detailed-balance condition is usually no longer satisfied if the Markov approximation is made \cite{Breuer2002,Biele2011a}. Hence, the history dependence of the equation of motion is an essential ingredient for thermal relaxation. This begs the question of whether a TCLSSE is also able to describe thermal relaxation dynamics correctly.
To answer this question, we study the relaxation of a three-level system employing the TCLSSE and compare its dynamics to the one obtained from the NMME. The next natural step is to test the formalism for a case with two baths at different temperatures, which leads to energy transport. Hence, we also address the question of how to apply the TCLSSE to study energy transport in spin chains. The numerical solution of the TCLSSE requires the generation of complex coloured noise, essential to mimic the correlation functions of the non-Markovian baths \cite{Luczka2005}. Here we introduce a portable and fast algorithm to generate any coloured noise whose power spectrum is a positive function. The algorithm relies on the ability to perform a Fast Fourier Transform and is therefore easily optimised. Other algorithms have been presented in the past to solve this problem or at least that of generating real coloured noise \cite{Rice1944,Billah1990,Barrat2011}. In section \ref{noisegeneration}, we compare our algorithm to some of them and show that it performs better than these routines while having a broader range of applicability. \section{A time-convolutionless stochastic Schr\"odinger equation} Our starting point is a standard second-order NMME \cite{Nak58,Zwa60,Gardiner2000,Breuer2002,Weiss2007}. The coupling between the system and the bath is taken to be bilinear, \begin{equation} \hat H_\mathrm{int}=\lambda\sum_a \hat S_a \otimes \hat B_a, \label{H_int} \end{equation} in the operators $\hat S_a$ and $\hat B_a$ from the system and the bath, respectively. If any operator of the system commutes or anticommutes with any operator of the bath, one can always expand any coupling operator in this form. In the following we assume that the bath and the system do not exchange \emph{fermions}, i.e., $\hat S_a$ and $\hat B_a$ commute with each other.
We further restrict ourselves to the case that $\hat S_a$ and $\hat B_a$ are hermitian operators; the extension to the more general case where only $\hat H_\mathrm{int}$ is hermitian is straightforward. Under the assumptions of weak system-bath interaction, factorisation of the full density operator at the initial time $t=0$ and vanishing averages of bath operators to first order, the equation of motion for the reduced density operator $\hat\rho$ of the system is given by \cite{Gardiner2000,Breuer2002,vanKampen,Weiss2007} \begin{equation} \frac{d\hat \rho(t)}{dt}= -i\big[\hat H,\hat \rho(t)\big] +\lambda ^2 \sum_{a}\, [\hat S_a, \hat M^{\dagger}_a (t)-\hat M_a (t)] , \label{nmme} \end{equation} up to second order in the coupling parameter $\lambda$. We have set $\hbar=1$ and defined \begin{equation} \hat M_a (t) \equiv \sum_b \int_0^{t} d \tau\, c_{a b}(t,\tau)\, e^{-i\hat H (t-\tau)}\, \hat S_b\, \hat \rho(\tau)\, e^{i \hat H(t-\tau)}. \label{nmme_M} \end{equation} In this NMME, $\hat H$ is the Hamiltonian of the system and the correlation kernel is given by \begin{equation} c_{a b}(t,\tau) \equiv \mbox{Tr}_B[\hat \rho_B^{\mathrm{eq}}\, \hat B_{a}(t)\, \hat B_{ b}(\tau)] , \label{cBB} \end{equation} where the trace is over the bath degrees of freedom, $\hat B_a(t) \equiv e^{i\hat H_B t} \hat B_a\, e^{-i\hat H_B t}$ and $\hat H_B$ is the Hamiltonian of the bath. Here, $\hat \rho_B^{\mathrm{eq}}$ is the statistical operator of the bath. If $\hat \rho_B^{\mathrm{eq}}$ describes a single bath in thermal equilibrium, $\hat \rho_B^{\mathrm{eq}}\propto \exp(-\beta \hat H_B)$, where $\beta$ is the inverse temperature, the system should relax towards thermal equilibrium, $\hat \rho(t\rightarrow \infty)\propto \exp(-\beta \hat H)$, with the same temperature as the bath. The property that the steady state is thermal equilibrium must be encoded in the correlation kernel $c_{a b}(t,\tau)$.
Indeed, one can show that the system relaxes towards thermal equilibrium if $c_{ab}(t,\tau)=c_{ab}(t-\tau)$ and the power spectrum $C_{ab}(\omega)\equiv\int_{-\infty}^\infty dt\, c_{a b}(t)\,e^{-i\omega t}$ satisfies the detailed-balance condition \cite{Breuer2002,Biele2011a} \begin{equation} C_{ab}(-\omega) = e^{\beta \omega}\, C_{b a}(\omega). \label{detailed_balance} \end{equation} Gaspard and Nagaoka \cite{Gaspard1999} have shown that the dynamics introduced by the NMME can be obtained not only by a numerical integration of (\ref{nmme}) but also by the solution of a SSE for a state $|\Psi(t)\rangle$, \begin{eqnarray} \frac{d}{dt} |\Psi(t)\rangle &=& \hat H |\Psi(t)\rangle + \lambda \sum_a \gamma_a(t)\, \hat S_a|\Psi(t)\rangle \nonumber \\ &&{} -i\, \lambda^2\sum_{a, b} \hat S_{a} \int_0^t dt'\,c_{a b}(t')\, e^{-i\hat H t'}\hat S_b |\Psi(t-t')\rangle. \label{non-markovian} \end{eqnarray} In this non-Markovian SSE (NMSSE), the complex noises $\gamma_{a}(t)$ have the properties \begin{equation} \overline{\gamma_a(t)}=0,\quad \overline{\gamma_{a}(t)\gamma_{ b}(t')}=0,\quad \overline{\gamma_{a}^*(t)\gamma_{ b}(t')}=c_{a b}(t-t') \label{color_noise} \end{equation} and one can obtain the dynamics of the open quantum system by taking the average over realisations of the stochastic process, indicated by the overline. In particular, the reduced density operator is obtained as $\hat \rho(t) = \overline{|\Psi(t)\rangle \langle \Psi(t)|}$. However, any attempt to solve the NMSSE (\ref{non-markovian}) requires a large numerical effort due to the time integral, which needs to be evaluated at every time step and for every realisation. This begs the question of whether there exists a simpler SSE that reproduces on average the dynamics induced by the NMME. This is the case, as Strunz and Yu have shown \cite{Strunz2004}. 
Indeed, the TCLSSE \begin{equation} \frac{d}{dt} |\Psi(t)\rangle=\bigg(\hat H + \lambda\sum_a \gamma_{a}(t)\,\hat S_a - i\,\lambda^2\, \hat T(t)\bigg)|\Psi(t)\rangle , \label{non-markovian2} \end{equation} with \begin{equation} \hat T(t) \equiv \sum_{a, b}\hat S_a\int_0^t dt'\,c_{a b}(t')\,e^{-i\hat H t'}\hat S_b\, e^{i\hat H t'} \end{equation} reproduces on average the dynamics induced by the NMME (\ref{nmme}) up to third order in $\lambda$ \cite{Strunz2004}. To prove this, we write (\ref{non-markovian2}) in the interaction picture, $|\Psi_I(t)\rangle= e^{i \hat H t}\,|\Psi(t)\rangle$ and $\hat S_a(t)=e^{i \hat H t} \hat S_a\, e^{-i \hat H t}$ and expand the time-evolution operator up to second order in $\lambda$, \begin{eqnarray} |\Psi_I(t)\rangle &\cong& \left[ \mbox{$1 \hspace{-1.0mm} {\bf l}$} -i \lambda\sum_{a} \int_0^t d t_1 \,\gamma_{a}(t_1)\, \hat S_{a}(t_1) \right.\nonumber \\ &&-\lambda^2\sum_{a,b} \int_0^t d t_1 \int_0^{t_1} d t_2\, c_{a b}(t_2)\, \hat S_{a}(t_1)\, \hat S_{b}(t_1-t_2) \nonumber \\ &&\left.-\lambda^2 \sum_{a,b} \int_0^t d t_1 \int_0^{t_1} d t_2\, \gamma_{a}(t_1)\, \hat S_{a}(t_1)\, \gamma_{b}(t_2)\, \hat S_b (t_2)\right] |\Psi_I(0)\rangle\nonumber\\ &&+\mathcal{O}(\lambda^3). \end{eqnarray} This expansion is inserted into the expression for the reduced density operator $\hat \rho_I(t) =\overline{|\Psi_I(t)\rangle \langle \Psi_I(t)|}$. 
By performing the average, using the properties given in (\ref{color_noise}) and the identity $c_{ab}(\tau,t)=c^{\ast}_{b a}(t,\tau)$, and differentiating with respect to $t$, we arrive at \begin{eqnarray} \frac{d}{dt} \hat \rho_I (t) &=& \lambda^2 \sum_{a,b} \int_0^{t} d \tau\, \big[ c_{a b}(t,\tau)\, \hat S_b(\tau)\, \hat{\rho}_I(0)\, \hat S_a(t) \nonumber \\ && {}- c_{a b}(t,\tau)\, \hat S_a(t)\, \hat S_b(\tau)\, \hat \rho_I(0) \nonumber \\ && {}+ c^{\ast}_{a b}(t,\tau)\, \hat S_a(t)\, \hat{\rho}_I(0)\, \hat S_b(\tau) \nonumber \\ && {}- c^{\ast}_{a b}(t,\tau)\, \hat{\rho}_I(0)\, \hat S_b(\tau)\, \hat S_a(t) \big] + \mathcal{O}(\lambda^4). \label{derivation_step} \end{eqnarray} Note that the averages of the terms in $\lambda^3$ vanish. Furthermore, replacing $\rho_I(0)$ by $\rho_I(\tau)+\mathcal{O}(\lambda^2)$ does not change the equation up to terms of order $\lambda^3$. Finally, by returning to the Schr\"odinger picture we arrive at the NMME (\ref{nmme}) up to terms of order $\lambda^3$, i.e., higher than the order up to which these equations are valid anyway. Indeed, the NMME and the SSE are usually derived as a second-order expansion in the coupling parameter $\lambda$. This is remarkable since one might expect a more complex time-non-local SSE to be required for reproducing the dynamics of the NMME (\ref{nmme}). Still, the TCLSSE is local in time, i.e., the operator $\hat T(t)$ does not depend on the state of the system at previous times and can thus be calculated once before the numerical integration and be used for each realisation of the stochastic process. Hence, the numerical cost of solving each realisation of the TCLSSE is comparable to that of a Markovian SSE \cite{Gaspard1999,vanKampen}. We note that at the same level of approximation, $\lambda^3$, we can derive a time-convolutionless master equation instead of the non-local (\ref{nmme}). 
Indeed, in (\ref{derivation_step}) we could replace $\rho_I(0)$ by $\rho_I(t)+\mathcal{O}(\lambda^2)$, arriving at \begin{eqnarray} \frac{d}{dt} \hat \rho_I (t) &=& \lambda^2 \sum_{a,b} \int_0^{t} d \tau\, \big[ c_{a b}(t,\tau)\, \hat S_b(\tau)\, \hat{\rho}_I(t)\, \hat S_a(t) \nonumber \\ && {}- c_{a b}(t,\tau)\, \hat S_a(t)\, \hat S_b(\tau)\, \hat \rho_I(t) \nonumber \\ && {}+ c^{\ast}_{a b}(t,\tau)\, \hat S_a(t)\, \hat{\rho}_I(t)\, \hat S_b(\tau) \nonumber \\ && {}- c^{\ast}_{a b}(t,\tau)\, \hat{\rho}_I(t)\, \hat S_b(\tau)\, \hat S_a(t) \big] + \mathcal{O}(\lambda^4). \label{tclme} \end{eqnarray} However, since in general we expect the density matrix and the operators $\hat S_a$ not to commute, the integral over $\tau$ still contains the density matrix in a complicated manner. From a numerical point of view, the solution of this equation is therefore not simpler than that of (\ref{nmme}). The equivalence of (\ref{nmme}) and (\ref{tclme}) is a generalization of the result that a time-convolutionless Pauli master equation, i.e., a master equation for the diagonal components of the density matrix only, can be proven to be equivalent to a Nakajima-Zwanzig-Markov Pauli master equation to second order in $\lambda$ \cite{KGL10,Timm2011}. The TCLSSE (\ref{non-markovian2}) can also be seen as an intermediate step towards the derivation of a Markovian SSE following, for example, the technique proposed by Gaspard and Nagaoka \cite{Gaspard1999}. In section \ref{examples}, we will show how to use the TCLSSE to evaluate the dynamics of some simple systems and compare them to the time evolution described by the NMME. \section{Generation of coloured noise} \label{noisegeneration} The TCLSSE requires the generation of coloured noise and thus the method will only be practicable if an efficient algorithm for the generation of this noise is available.
Such an algorithm indeed exists, as we show below, where we extend an algorithm presented by Rice \cite{Rice1944,Billah1990} to the complex noise required here. We consider only a single bath operator; the generalisation to several bath operators is straightforward. Some of the existing algorithms for the generation of coloured noise rely on the numerical solution of a stochastic differential equation that has to produce noise with the given target correlation function $c(t)$ \cite{Luczka2005,Mannella1992}. However, such an equation is a piece of information that is rarely available, since even the analytic expression for $c(t)$ may not be known. Except for a few simple models, it is more common to have access to the power spectrum $C(\omega)=\int_{-\infty}^\infty dt\, c(t)\,e^{-i\omega t}$. Indeed, $C(\omega)$ is connected to the quantum transitions in the bath. On the other hand, the algorithm presented in \cite{Barrat2011} does not require the knowledge of a stochastic differential equation. However, besides the power spectrum $C(\omega)$, it does require the inverse Fourier transform of its square root. This quantity is then convolved with white noise to generate the target real coloured noise. We will introduce an algorithm that directly uses $\sqrt{C(\omega)}$ as input, thereby reducing the numerical cost compared to the algorithm of \cite{Barrat2011}, and that generates complex coloured noise with the properties given in (\ref{color_noise}). Indeed, one can easily prove that the noise $\gamma(t)$ can be generated by \begin{equation} \gamma(t)=\int_{-\infty}^\infty \frac{d\omega}{\sqrt{2\pi}}\, \sqrt{C(\omega)}\: x(\omega)\, e^{i\omega t}, \label{generate_color_noise} \end{equation} where $x(\omega)$ is a white-noise process in the frequency domain satisfying \begin{equation} \overline{x(\omega)}=0,\quad \overline{x(\omega)x(\omega')}=0,\quad \overline{x^*(\omega)x(\omega')}=\delta(\omega-\omega').
\label{white_noise} \end{equation} By substituting the definition (\ref{generate_color_noise}) into $\overline{\gamma^*(t)\gamma(t')}$ and using (\ref{white_noise}), we immediately arrive at the third relation of (\ref{color_noise}). The other relations are proven in a similar way. From a numerical point of view, the generation of this coloured noise requires the calculation of the Fourier transform in (\ref{generate_color_noise}). A similar algorithm restricted to real noise has been proposed in the past \cite{Rice1944,Billah1990}. In order to compare our algorithm with the two from \cite{Barrat2011} and \cite{Rice1944,Billah1990}, we choose a test function for which we know $c(t)$ and $C(\omega)$ analytically, namely $c(t)=(2\pi\sigma^2)^{-1/4} e^{-t^2/2\sigma^2}$ and $C(\omega)=(2\pi\sigma^2)^{1/4}\,e^{-\omega^2\sigma^2/2}$. We fix $\sigma=1$ as our unit of time and choose the interval $t\in[-25,25]$ for the numerical Fourier transform. To quantify the agreement between the target $c(t)$ and the noise generated by the three algorithms, we use the statistical variance $\delta_c=\int_{-\infty}^\infty dt\,|c(t)-\overline{\gamma(0)\gamma^*(t)} |^2/\int_{-\infty}^\infty dt\,|c(t)|^2$. In principle, these algorithms produce an exact representation of the target correlation function. Discrepancies arise from the finite mesh on which the Fourier transform is evaluated, the finite number of independent realisations of the noise that we generate and the limitations of the white noise generation. \begin{figure}[ht!] \includegraphics[width=8cm]{figure1} \caption{Statistical variance $\delta_c$ versus the number of independent realisations of the coloured noise, calculated using a 16384 point mesh in time and frequency. 
The red (dashed) line represents the optimised version of the algorithm presented in \cite{Barrat2011}, the green (dotted) line the algorithm proposed by Rice \cite{Rice1944,Billah1990} and the black (solid) line the results obtained from (\ref{generate_color_noise}).} \label{deltac} \end{figure} In figure \ref{deltac}, we report the variance $\delta_c$ as a function of the number of independent realisations of the noise. We see that the algorithm (\ref{generate_color_noise}) performs better for a large number of runs (at least $2\times10^5$), while being close to the other two for a small number of runs. The algorithm proposed in \cite{Barrat2011} suffers from the need to perform a double Fourier transform, although for a large number of runs its performance improves consistently. On the other hand, we can consider the total computation time to generate a given number of realisations. Taking the time needed by algorithm (\ref{generate_color_noise}) as a reference, the algorithm of \cite{Rice1944, Billah1990} is about 7\% slower and the algorithm of \cite{Barrat2011} is about 50\% slower. However, we stress that the main advantage of the algorithm does not lie in the moderate numerical improvement but in the simplification it brings about by only requiring the power spectrum as input. For illustration, we show in figure \ref{single_run} a single realisation of the noise (\ref{generate_color_noise}) with 16384 mesh points. The noise appears as an analytic function of time because time enters $\gamma(t)$ only via the oscillating exponential on the right-hand side of (\ref{generate_color_noise}). Notice also that due to the use of the Fast Fourier Transform, the noise is periodic over the simulation time. \begin{figure}[ht!] \includegraphics[width=8.0cm]{figure2} \caption{The real (black, continuous line) and imaginary (orange, dotted line) parts of a single realisation of the coloured noise, (\ref{generate_color_noise}), for a mesh in time of 16384 points.
The function $\gamma(t)$ appears smooth as a function of time. Notice also that due to the use of the Fast Fourier Transform, the noise is periodic with period given by the largest simulation time. Similar behaviours are obtained with the other two algorithms.} \label{single_run} \end{figure} \section{Examples of application} \label{examples} We would like to illustrate our results by discussing how the TCLSSE is able to reproduce the dynamics induced by the NMME and how it describes thermal relaxation and energy transfer for specific systems. These particular examples serve to validate our approach and show the wide range of application the TCLSSE might have. \subsection{A three-level system} We consider the coupling of an electronic system to the electromagnetic field in a three-dimensional cavity. In the dipole approximation, one can derive from first principles the power spectrum for this system (we set the speed of light to unity), \begin{equation} C_{\mathrm{cav}}(\omega)=\dfrac{|\omega|^3}{\pi V \epsilon_0} \big[n_B(\beta|\omega|)+\theta(-\omega)\big]\quad \mbox{for}\quad |\omega|<\omega_{c}, \label{cavity-power-spectrum} \end{equation} where $n_B(\beta\omega)\equiv 1/(e^{\beta \omega}-1)$ is the Bose-Einstein distribution function, $\theta(\omega)$ is the Heaviside step function, $V$ is the volume of the cavity and $\omega_c$ is a cutoff frequency determined by the dimensions of the system. This cutoff is necessitated by the assumption made in the dipole approximation that the electromagnetic field is uniform in the region of space occupied by the system. The derivation of this power spectrum can be found in the appendix. For $|\omega|>\omega_c$, the power spectrum is set to vanish. Note that increasing $\omega_c$ does not change the relaxation dynamics as long as $\omega_c$ is larger than the energy differences in the system and hence does not exclude any transitions. 
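The generation prescription (\ref{generate_color_noise}) can be discretised directly on a frequency grid for this power spectrum. The following Python sketch is only illustrative: the grid sizes and number of realisations are far coarser than the production runs quoted in the text, the parameters $\beta=V\epsilon_0=\omega_c=1$ are taken for convenience, and all function names are our own. It draws realisations of the noise and verifies the defining correlations numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

def C_cav(w, beta=1.0, V_eps0=1.0, wc=1.0):
    """Cavity power spectrum |w|^3/(pi V eps0) [n_B(beta|w|) + theta(-w)],
    with a sharp cutoff at |w| = wc (not meant to be evaluated at w = 0)."""
    aw = np.abs(np.asarray(w, dtype=float))
    nB = 1.0 / np.expm1(beta * aw)               # Bose-Einstein function
    theta = (np.asarray(w) < 0).astype(float)    # Heaviside theta(-w)
    return np.where(aw < wc, aw**3 / (np.pi * V_eps0) * (nB + theta), 0.0)

# Midpoint frequency grid in (-wc, wc); avoids the w = 0 singularity of n_B
dw = 0.01
w = np.arange(-1.0 + dw / 2, 1.0, dw)
sqC = np.sqrt(C_cav(w))

def gamma(t, nreal):
    """nreal discretised realisations of the generation formula:
    gamma(t) = sqrt(dw / 2 pi) * sum_k sqrt(C(w_k)) x_k exp(i w_k t),
    with complex white noise x_k = (xi_k + i eta_k)/sqrt(2), xi, eta ~ N(0,1)."""
    x = (rng.standard_normal((nreal, w.size))
         + 1j * rng.standard_normal((nreal, w.size))) / np.sqrt(2)
    phases = np.exp(1j * np.outer(w, t))         # shape (n_w, n_t)
    return np.sqrt(dw / (2 * np.pi)) * (x * sqC) @ phases

t = np.array([0.0, 1.0, 2.0])
g = gamma(t, 20000)
corr = (g[:, :1].conj() * g).mean(axis=0)        # estimate of c(t)
c_target = dw / (2 * np.pi) * (C_cav(w)[:, None]
                               * np.exp(1j * np.outer(w, t))).sum(axis=0)
assert np.allclose(corr, c_target, rtol=0.2, atol=2e-3)
# second relation: the "anomalous" average vanishes for complex white noise
assert np.all(np.abs((g[:, :1] * g).mean(axis=0)) < 3e-3)
```

On this grid the ensemble average reproduces the discretised $c(t)$ up to statistical fluctuations of order $1/\sqrt{N}$, while $\overline{\gamma(t)\gamma(t')}$ vanishes because the frequency-domain white noise is complex.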
One can show that the detailed-balance condition (\ref{detailed_balance}) is satisfied by this power spectrum. Since for this model system the correlation function $c(t)$ is not given in analytical form, we will use (\ref{generate_color_noise}) to generate the noise. In order to quantify the agreement between the noise generated by (\ref{generate_color_noise}) and the power spectrum (\ref{cavity-power-spectrum}), we have performed a Fourier transform of the time-domain signal and compared it to our target. Figure \ref{cofomega} shows that the agreement is excellent. \begin{figure}[ht!] \includegraphics[width=8cm]{figure3} \caption{Comparison between the target, (\ref{cavity-power-spectrum}), (solid lines) and the Fourier transform of the correlation function obtained from (\ref{generate_color_noise}) by averaging over $90000$ realisations of the noise (dashed lines). } \label{cofomega} \end{figure} For the electronic system we consider a three-site spinless tight-binding chain described by the Hamiltonian \begin{equation} \hat H =-T\,\big(\hat c^{\dagger}_1 \hat c_2+\hat c^{\dagger}_2\hat c_1 + \hat c^{\dagger}_2\hat c_3 + \hat c^{\dagger}_3 \hat c_2 \big), \label{tight-binding} \end{equation} where the operator $\hat c^{\dagger}_i$ creates an electron at site $i$, and assume a single electron to be present. This system is coupled to the electromagnetic field inside the cavity by the operator \begin{equation} \hat S = - q \sum_{l,p} \vec{u} \cdot \langle W_l | \vec{r} | W_p \rangle\, \hat c_l^{\dagger} \hat c_p , \label{coupling} \end{equation} where $q$ is the charge of the electron and $|W_i\rangle$ is the single-particle state localized at site $i$. For simplicity, we assume that each relevant mode of the cavity has the same polarization direction $\vec{u}$, parallel to the tight-binding chain. Note that the form of this operator should be immaterial for the establishment of thermal equilibrium, which is only determined by the power spectrum. 
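For reference, the eigenenergies of this system and the Boltzmann weights that thermal equilibrium assigns to them are easily computed. The following Python sketch (our own, using the parameters $T=\beta=1$ of the text) diagonalises the one-electron matrix of the Hamiltonian (\ref{tight-binding}) in the site basis:

```python
import numpy as np

T, beta = 1.0, 1.0
# One-electron matrix of the three-site tight-binding Hamiltonian (site basis)
H = -T * np.array([[0., 1., 0.],
                   [1., 0., 1.],
                   [0., 1., 0.]])
eps, U = np.linalg.eigh(H)            # eigenenergies, eps[0] <= eps[1] <= eps[2]

# Boltzmann weights: the stationary probabilities the dynamics should approach
weights = np.exp(-beta * eps)
p_eq = weights / weights.sum()

assert np.allclose(eps, [-np.sqrt(2) * T, 0.0, np.sqrt(2) * T])
assert np.isclose(p_eq.sum(), 1.0) and p_eq[0] > p_eq[1] > p_eq[2]
```

With $T=1$ the spectrum is $\{-\sqrt{2},0,\sqrt{2}\}$, so the ground state carries the largest equilibrium weight.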
Indeed, we can check whether the detailed-balance condition is necessary for the system to reach thermal equilibrium. To that end, we use the operator in (\ref{coupling}) within a Markov approximation for the correlation function, $c(t)\propto \delta(t)$. We find that a steady state is approached that does not correspond to thermal equilibrium \cite{Biele2011a}. \begin{figure}[ht!] \includegraphics[width=8cm]{figure4} \caption{Dynamics of the occupation probabilities $p_1$, $p_2$, $p_3$ of the eigenstates of the Hamiltonian (\ref{tight-binding}) in the one-electron sector calculated from the evolution of the TCLSSE (dashed lines) and the NMME (solid lines) with the power spectrum given by (\ref{cavity-power-spectrum}). The eigenstates are labeled such that the eigenenergies satisfy $\epsilon_1\leq\epsilon_2\leq\epsilon_3$. The red dots represent the thermal-equilibrium probabilities calculated from (\ref{eq_prob}). The time $t$ is measured in units of the inverse energy constant, $1/T$.} \label{probability3x3} \end{figure} In figure \ref{probability3x3} we show the occupation probabilities of the three eigenstates of the Hamiltonian in the one-electron sector as a function of time calculated using the TCLSSE (dashed lines) and the NMME (solid lines), respectively. For the TCLSSE, the results have been obtained by averaging over $90000$ independent realisations of the noise. We have used the parameters $\beta=1$, $\omega_c=1$, $T=1$, $V\epsilon_0=1$ and $\lambda=0.1$ and we have employed the Euler algorithm \cite{Press1992,Kloeden1997} with time step $\Delta t=0.005$ to numerically solve the equations. As the establishment of thermal equilibrium is independent of the choice of the initial state, we have chosen an arbitrary pure state, $| \Psi (0)\rangle = 0.94\, |1\rangle +0.2\,|2\rangle+0.28\,|3\rangle$, where $|i\rangle$ represents the \textit{i}-th eigenstate of the Hamiltonian, with the eigenenergies ordered as $\epsilon_1\leq\epsilon_2\leq\epsilon_3$.
The dynamics induced by the NMME and the TCLSSE are in good agreement: the small discrepancies in the numerical solutions are due to the finite number of realisations we have used; the solution of the TCLSSE still contains some noise, as expected. For long times, both formalisms converge to the thermal-equilibrium probabilities \begin{equation} p_i = \frac{e^{-\beta\epsilon_i}} {e^{-\beta\epsilon_1} + e^{-\beta\epsilon_2} + e^{-\beta\epsilon_3}}. \label{eq_prob} \end{equation} If we were only interested in the long-time limit, we could have averaged over all times after some equilibration time $t_\mathrm{min}$ to obtain better statistics, using the ergodic theorem to replace the average over many realisations by an average over time of a single realisation. \subsection{A spin chain} To show that the TCLSSE can be used to investigate energy transport in open quantum systems, we consider a spin chain in contact with two baths at different temperatures. The baths are locally connected to the terminal spins of the chain \cite{Wichterich2007,Monasterio2007}. Energy is transferred from the high-temperature bath, via the spin chain, to the low-temperature bath. Here we assume the baths to be represented by an ensemble of harmonic oscillators with a continuous spectrum. In the long-time regime, we expect the appearance of a steady state of constant energy flow. The total Hamiltonian of a spin-$1/2$ chain coupled to two baths \textit{L} and \textit{R} reads \begin{equation} \hat H_T = \hat H_S + \sum_{i=L,R} \big(\hat H^{(i)}_B + \hat H_{SB}^{(i)} \big), \end{equation} where the system Hamiltonian is given by \begin{equation} \hat H_S = \frac{\Omega}{2} \sum_{\mu=1}^n \sigma_z^{(\mu)} + \Gamma \sum_{\mu=1}^{n-1} \vec{\sigma}^{(\mu)}\cdot \vec{\sigma}^{(\mu+1)}, \label{eq_systemH} \end{equation} with $\vec{\sigma}=(\sigma_x,\sigma_y,\sigma_z)$ and the index $\mu$ indicating the spin site.
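The many-spin operators $\sigma_j^{(\mu)}$ defined next, the local Hamiltonians and the energy-current operators introduced below can all be assembled as Kronecker products, and the continuity relation (\ref{energy-continuity}) can then be verified numerically. A minimal Python sketch for $n=3$ (illustrative couplings; the helper names are ours):

```python
import numpy as np

# Pauli matrices and the 2x2 identity
sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sy = np.array([[0., -1j], [1j, 0.]])
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, mu, n):
    """sigma_j^(mu): op acting on site mu (1-based) of an n-site chain."""
    out = np.array([[1.]], dtype=complex)
    for site in range(1, n + 1):
        out = np.kron(out, op if site == mu else I2)
    return out

def heis(mu, n):
    """Heisenberg term sigma^(mu) . sigma^(mu+1)."""
    return sum(site_op(s, mu, n) @ site_op(s, mu + 1, n) for s in (sx, sy, sz))

def comm(A, B):
    return A @ B - B @ A

n, Omega, Gamma = 3, 1.0, 0.01
HS = 0.5 * Omega * sum(site_op(sz, m, n) for m in range(1, n + 1)) \
     + Gamma * sum(heis(m, n) for m in range(1, n))

# Local Hamiltonian of the middle site and the adjacent current operators
h2 = 0.5 * Omega * site_op(sz, 2, n) + 0.5 * Gamma * (heis(1, n) + heis(2, n))
j12 = 0.25j * comm(Omega * (site_op(sz, 1, n) - site_op(sz, 2, n)), Gamma * heis(1, n))
j23 = 0.25j * comm(Omega * (site_op(sz, 2, n) - site_op(sz, 3, n)), Gamma * heis(2, n))

# Continuity equation at the middle site: -i [H_S, h^(2)] = j^(2,3) - j^(1,2)
assert np.allclose(-1j * comm(HS, h2), j23 - j12)
```

The final assertion reproduces the continuity equation for the middle site, which is the only site of the three-spin chain not coupled to a bath.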
The Pauli matrices are given by \begin{equation} \sigma_x= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\quad \sigma_y= \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix},\quad \sigma_z= \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \end{equation} Hence, the spin operators for the \textit{n}-site chain are \begin{equation} \begin{array}{ccccccccc} & 1 & & 2& &\mu& &n \\ \sigma^{(\mu)}_{j} =& \mbox{$1 \hspace{-1.0mm} {\bf l}$} & \otimes & \mbox{$1 \hspace{-1.0mm} {\bf l}$} & \otimes\cdots\otimes &\sigma_j& \otimes\cdots\otimes &\mbox{$1 \hspace{-1.0mm} {\bf l}$}. \end{array} \end{equation} In (\ref{eq_systemH}), $\Omega$ is the energy associated with a uniform magnetic field aligned along the $z$ direction and $\Gamma$ is the spin-spin Heisenberg interaction. The baths are coupled to the spins at the ends of the chain, \begin{equation} \hat H^{(i)}_{SB} = \lambda\, \hat S^{(i)} \otimes \hat B^{(i)} = \lambda\, \sigma^{(i)}_x \otimes \hat B^{(i)}, \end{equation} where $\sigma_x^{(i=L)}=\sigma_x^{(1)}$, $\sigma_x^{(i=R)}=\sigma_x^{(n)}$, and $\lambda$ is the coupling strength. According to (\ref{nmme_M}), we need to assign the correlation function $c_{ab}(\tau)$. Here we use a bath correlation function describing the electromagnetic field of a one-dimensional cavity of length $l$, \begin{eqnarray}\label{eq:bath_corr_split} c^{(i)}(\tau)=& \frac{\pi}{2 l \epsilon_0}\int^{\omega_c}_{0} d\omega\,\omega \left\{ \cos(\omega \tau)\, \coth\left(\dfrac{\beta^{(i)} \omega}{2} \right)\right. \nonumber\\ &\left.\phantom{\frac{\beta}{2}} -i \sin (\omega \tau ) \right\} , \end{eqnarray} where $\beta^{(i)}$ is the inverse temperature of bath $i=L,R$. Accordingly, one can calculate the power spectrum of this bath correlation function as \begin{equation} \label{eq:bath_corr_final} C^{(i)}(\omega) = \frac{\pi^2|\omega|}{l\epsilon_0} \left[ n_B(\beta^{(i)}|\omega|) + \theta(-\omega) \right] \quad \mbox{for}\quad |\omega|<\omega_{c}. 
\end{equation} This is the one-dimensional analogue of (\ref{cavity-power-spectrum}). One can immediately prove that this power spectrum does fulfil the detailed-balance relation and therefore we expect the system to be driven towards thermal equilibrium if the temperatures of the two baths are the same. To investigate the energy transport, we identify the energy current according to a continuity equation for the local energy. We define a local Hamiltonian according to \begin{equation} \hat h^{(\mu)} = \frac{\Omega}{2}\, \sigma_z^{(\mu)}+\frac{\Gamma}{2}\, \big(\vec{\sigma}^{(\mu)}\cdot \vec{\sigma}^{(\mu+1)} + \vec{\sigma}^{(\mu-1)}\cdot \vec{\sigma}^{(\mu)}\big) \end{equation} if $\mu$ is different from $n$ and $1$. We also define \begin{equation} \hat h^{(1)} = \frac{\Omega}{2}\,\sigma_z^{(1)}+\frac{\Gamma}{2}\,\big( \vec{\sigma}^{(1)}\cdot \vec{\sigma}^{(2)}\big) \end{equation} and \begin{equation} \hat h^{(n)}=\frac{\Omega}{2}\,\sigma_z^{(n)}+\frac{\Gamma}{2}\,\big( \vec{\sigma}^{(n-1)}\cdot \vec{\sigma}^{(n)}\big) \end{equation} so that $\hat H_S=\sum_\mu \hat h^{(\mu)}$. The time evolution of this local Hamiltonian is given by \begin{equation} -\frac{d \hat h^{(\mu)}}{d t} = -i\, [\hat H_S,\hat h^{(\mu)}] =\hat j^{(\mu),(\mu+1)} - \hat j^{(\mu-1),(\mu)} , \label{energy-continuity} \end{equation} where energy-current operators have been defined as \begin{eqnarray} \hat j^{(\mu),(\mu+1)} = \frac{i}{4}\, \big[\Omega\, (\sigma_z^{(\mu)}-\sigma_z^{(\mu+1)}),\: \Gamma\, \vec{\sigma}^{(\mu)}\cdot \vec{\sigma}^{(\mu+1)}\big]. \end{eqnarray} Equation (\ref{energy-continuity}) has the form of a continuity equation for the energy at site $\mu$ and is valid for sites inside the spin chain that are not coupled to a bath. \begin{figure}[ht!]
\includegraphics[width=8cm]{figure5.eps} \caption{Dynamics of the energy current of a three-site spin chain coupled locally to two baths for the cases of equal and unequal temperatures, calculated with the NMME (solid lines) and the TCLSSE (dashed and dot-dashed lines). The agreement between the two sets of lines is excellent, in particular at short times. The time $t$ is measured in units of the inverse energy constant, $1/\Omega$.} \label{spin-chain} \end{figure} In figure \ref{spin-chain} we report the energy current flowing from the second to the third spin of a three-site spin chain. In the equal-temperature case ($\beta^L=\beta^R=5$, green solid and grey dashed lines in figure \ref{spin-chain}), a steady state is reached for long times that coincides with thermal equilibrium and hence no current flows through the system. On the other hand, for the case of unequal temperatures ($\beta^L=2$ and $\beta^R=5$), the steady state shows a non-zero energy current from the warmer to the colder bath, as expected (black solid and red dot-dashed lines in figure \ref{spin-chain}). For the TCLSSE we have averaged over 100000 independent realisations of the noise and for both calculations we have used the parameter values $\Omega = 1$, $\Gamma = 0.01$, $\lambda = 0.1$, $l\epsilon_0=1$ and $\omega_c=6$. Both for the equal-temperature case and for the case with a thermal gradient, we have chosen an initial state populated with the probabilities determined by the equilibrium distribution at the lower temperature. \section{Conclusions} In conclusion, we have investigated a time-local (time-convolutionless) version of a \emph{non-Markovian} stochastic Schr\"odinger equation, which correctly describes the approach to thermal equilibrium and energy transport as obtained from a non-Markovian master equation. These two examples show that the TCLSSE is a viable alternative for obtaining the exact dynamics of a non-Markovian open quantum system.
Moreover, in contrast to other approximations, e.g., the Born-Markov approximation to the Redfield equation \cite{Breuer2002}, this stochastic equation reproduces the full dynamics of the non-Markovian master equation. This TCLSSE can be integrated with moderate numerical cost, comparable to that of a Markovian system. It also shows more advantageous scaling with the number of states compared to the master equation, which is particularly useful for large systems. We have also introduced an efficient and portable numerical algorithm for the generation of the coloured complex noise necessary to solve the time-convolutionless stochastic Schr\"odinger equation. Our numerical algorithm is moderately faster than other available algorithms and requires only the power spectrum, $C(\omega)$, of the bath-correlation function as input. \ack R. B. and R. D'A. acknowledge support from MICINN (FIS2010-21282-C02-01 and PIB2010US-00652), the Grupos Consolidados UPV/EHU del Gobierno Vasco (IT-319-07) and ACI-Promociona (ACI2009-1036) and the financial support of the CONSOLIDER-INGENIO 2010 ``NanoTherm'' (CSD2010-00044). R. B. acknowledges financial support from IKERBASQUE, Basque Foundation for Science and the Ministerio de Educaci\'on, Cultura y Deporte (FPU12/01576). C. T. acknowledges financial support by the Deutsche Forschungsgemeinschaft, in part through Research Unit FOR 1154 ``Towards Molecular Spintronics''. R.D'A. acknowledges the support from the Diputacion Foral de Gipuzkoa via the grant number Q4818001B and thanks the Physics Department of King's College London for its hospitality.
\section{Introduction} In this work we address only polynomial rings and their quotient rings. Therefore, all definitions pertaining to a ring are meant to apply to \emph{commutative} rings. As a consequence, all our modules are two-sided modules and all our ideals are two-sided ideals. In fact, we require more structure of our objects: we require that they should be graded objects. This is made precise by the first two definitions. \begin{definition} A graded ring $R$ is a ring that has a direct sum decomposition into abelian additive groups $R= \displaystyle \bigoplus _{n\in \mathbb{Z}^{\geq 0}} R_n=R_0 \oplus R_1 \oplus R_2 \oplus R_3 \oplus \cdots$, such that $R_{s} R_{r} \subseteq R_{s+r}$ for all $r, s \geq 0$. \end{definition} There is also the closely related concept of a graded module. \begin{definition} A graded module $M$ over any graded ring $R$ is a module that can be written as a direct sum $M= \displaystyle \bigoplus _{i\in \mathbb{Z}^{\geq 0}} M_{i}$ satisfying $R_{i} M_{j} \subseteq M_{i+j}$ for all $i, j \geq 0$. \end{definition} Both concepts of a graded object are standard; see for example \cite{villarreal2015monomial} pages 12 and 13. An example of a graded ring and also of a graded module is the polynomial ring $k[x_1,x_2,...,x_a]$ over a field $k$. The direct decomposition in this case is $R=k[x_1,x_2,...,x_a]= \displaystyle \bigoplus _{b\in \mathbb{Z}^{\geq 0}}R_b$, where each $R_b={\rm span}_{k} \left\{{\rm monomials\ of\ degree}\ b\right\}$. This means that each $R_b$ is a $k$-vector space. Moreover, since every ideal $I$ of a ring $R$ is an $R$-module one can easily prove the following result. \begin{lemma}\label{L:characterization_graded_ideal} An ideal $I$ is a graded ideal of a graded ring $R=\displaystyle \bigoplus _{n\in \mathbb{Z}^{\geq 0}} R_n$ if and only if it can be written as a direct sum of ideals such that each summand corresponds to $I \cap R_n$ for $n \in \mathbb{Z}^{\geq 0}$.
\end{lemma} \begin{definition} An ideal $I$ of $k[x_1,x_2,...,x_a]$ is homogeneous if and only if every homogeneous component of every polynomial $p(\bar {{\rm x}}) \in I$ is itself in $I$, where $\bar {{\rm x}}$ denotes the $a$-tuple $(x_1,x_2,....x_a)$ {\rm (see for example \cite{dummit2004abstract} page 299)}. \end{definition} Here are some easy-to-prove facts relevant to the present discussion about monomial ideals. \begin{itemize} \item A monomial ideal in $k[x_1,x_2,...,x_a]$ is, by definition, one generated by monomials (see \cite{dummit2004abstract} page 318). Therefore, it is a homogeneous ideal since every monomial is a homogeneous polynomial. A monomial ideal is also a graded ideal because $k[x_1,x_2,...,x_a]$ is a graded ring. Hence we may apply Lemma \ref{L:characterization_graded_ideal}. \item $R/I$ is a graded module since it has the following direct sum decomposition \begin{equation} R/I= \displaystyle \bigoplus _{b\in \mathbb{Z}^{\geq 0}} (R_{b}+I)/I, \end{equation} where $b$ is the grading and $I$ is a monomial ideal in the polynomial ring $R$. Observe that every summand is also a module over the base field $k$ of the polynomial ring $R$. \end{itemize} Our objects of study are the graded modules $R/I$, where $R$ is a polynomial ring in finitely many variables over a field $k$ and $I$ is a finitely generated monomial ideal in $R$. In this setting, we have that for each $b \geq 0$ the summand $(R_{b}+I)/I$ is indeed a vector space since it is a module over a field. Furthermore, since the number of variables is finite each such summand is a finite-dimensional vector space. This brings up a natural question: Given a summand with grading $b$, what is its dimension as a vector space over the base field $k$? This is in fact how the Hilbert function for the graded module is defined. This definition can be illustrated by considering $R=k[x_1,x_2,x_3,x_4]$ and $I=\langle x_2^{4},x_{1}x_{4},x_3^2\rangle$.
Then $R/I=\bigoplus_{i=0}^{\infty}R_i$, where $R_i=\left\{{\rm all\ equivalence\ classes\ in\ }R/I\ {\rm with\ homogeneous\ representatives\ of\ degree\ i }\right\}$. Each $R_i$ is no longer a ring on its own but it is a $k$-vector space. The dimensions of these vector spaces are $\dim R_0 = 1$, $\dim R_1 = 4$, $\dim R_2 =8$, $\dim R_3 =12$, $\dim R_4=15$, and $\dim R_i=16$ for all $i \geq 5$. In general, we define the Hilbert function of $M$ as ${\rm HF}(M,b)={\rm dim}_k M_b$ for any graded module $M= \displaystyle \bigoplus _{i\in \mathbb{N}} M_{i}$. In particular, a basic result facilitating our computations is the ``rank-nullity'' theorem. \begin{theorem}\label{rank-nullity Hilbert} ${\rm HF}(R,b)={\rm HF}(R/I,b)+{\rm HF}(I,b)$, where $R=k[x_1,x_2,...,x_a]$ and $I$ is a monomial ideal. \end{theorem} \begin{proof} For each degree $b$, the quotient map $\pi_b \colon R_b \longrightarrow (R_b+I)/I$ is a surjective linear transformation of $k$-vector spaces with kernel $I_b=I\cap R_b$, so we obtain the exact sequence \begin{center} $0\longrightarrow I_b \overset{i}\longrightarrow R_b \overset{\pi_b}\longrightarrow (R_b+I)/I \longrightarrow 0$. \end{center} The rank-nullity theorem then gives ${\rm dim}_k R_b={\rm dim}_k I_b+{\rm dim}_k \big((R_b+I)/I\big)$, which is precisely the claimed identity. \end{proof} \begin{definition} An exact sequence of modules is either a finite or an infinite sequence of modules and homomorphisms between them such that the image of one homomorphism equals the kernel of the next homomorphism {\rm (see \cite{dummit2004abstract} page 378)}. \end{definition} An example of an exact sequence is the sequence in the next lemma (see \cite{villarreal2015monomial} page 98) and the free resolution used in the Hilbert Syzygy theorem below (see \cite{eisenbud2013commutative} page 3). We shall refer to the exact sequence in the next lemma as \emph{the short exact sequence}. Bookkeeping often requires a shift in the grading.
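As a check, the dimensions quoted in the example above can be verified by direct enumeration: a monomial survives in $R/I$ exactly when it is divisible by none of the generators of $I$. A short Python sketch (the function names are ours):

```python
from itertools import product

# Generators of I = <x2^4, x1*x4, x3^2> as exponent vectors in k[x1,x2,x3,x4]
gens = [(0, 4, 0, 0), (1, 0, 0, 1), (0, 0, 2, 0)]

def hf_quotient(d, nvars=4):
    """dim_k of the degree-d component of R/I: count exponent vectors of
    total degree d that are divisible by no generator of I."""
    count = 0
    for e in product(range(d + 1), repeat=nvars):
        if sum(e) != d:
            continue
        # e is divisible by g iff e >= g componentwise
        if any(all(ei >= gi for ei, gi in zip(e, g)) for g in gens):
            continue  # this monomial lies in I
        count += 1
    return count

assert [hf_quotient(d) for d in range(7)] == [1, 4, 8, 12, 15, 16, 16]
```

The sequence $1,4,8,12,15,16,16,\dots$ stabilises at $16$, in agreement with the values listed above.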
If $M= \displaystyle \bigoplus _{i=0}^{\infty} M_i$ is a finitely generated $\mathbb{Z}^{\geq 0}$-graded module over $R$, then we denote by $M(-d)$ the regrading of $M$ obtained by shifting the grading of $M$. In this case, the graded component $M_i$ of $M$ becomes the graded component of degree $i+d$ of $M(-d)$; that is, $M(-d)_i=M_{i-d}$. \begin{lemma} Let $M$ be a graded $R$-module. If $x_n \in R_d$, that is, ${\rm deg}(x_n)=d$, then there is a degree preserving exact sequence \begin{center} $0 \rightarrow (0:x_n)(-d) \rightarrow M(-d) \overset{x_n}{\rightarrow} M \overset{\phi}{\rightarrow} M/x_{n}M \rightarrow 0$, \end{center} where $\phi(m)=m+x_{n}M$ and $(0:x_n)=\{m \in M |x_{n}m=0\}$. \end{lemma} The drawback of this sequence is that not all objects are necessarily free $R$-modules. Free $R$-modules are isomorphic to a direct sum of copies of $R$. The traditional approach (see \cite{W:B&W}) to compute the Hilbert function of a finitely generated graded $R$-module $M$ (of which our quotient polynomial rings are examples) is based on the following theorem (see page 45 of \cite{eisenbud2013commutative}). \begin{theorem}{\rm (Hilbert Syzygy Theorem)}\\ Any finitely generated module $M$ over the ring $R=k[x_1,x_2,...,x_a]$ has a finite graded free resolution \begin{center} $0\rightarrow P_n \overset{\phi_n} {\rightarrow} P_{n-1} \rightarrow ... {\rightarrow} P_1 \overset{\phi_1}\rightarrow P_0$. \end{center} Here each $P_i$ is a finitely generated free $R$-module and $M \cong P_0/\,{\rm im}\, \phi_1$. Furthermore, $n \leq a$. \end{theorem} This exact sequence can also be written as \begin{center} $0\rightarrow P_n \overset{\phi_n} {\rightarrow} P_{n-1} \rightarrow ... {\rightarrow} P_1 \overset{\phi_1}\rightarrow P_0 \rightarrow M \rightarrow 0$ \end{center} where the map $P_0 \rightarrow M$ is the projection induced by $M \cong P_0/\,{\rm im}\, \phi_1$.
If $R$ is a graded ring, the sequence above is in fact an exact sequence of graded free modules and graded homomorphisms, where each term in the free resolution is of the form $P_i=R_{1_i}(- d_{1_i}) \oplus R_{2_i}(- d_{2_i}) \oplus \dots \oplus R_{l_i}(- d_{l_i})$. Then by applying Theorem \ref{rank-nullity Hilbert} in an inductive argument one obtains the following method for computing ${\rm HF}(M,t)$: \[ {\rm HF}(M,t)=\sum_{i=0}^n (-1)^i \left( {\rm HF}(R_{1_i}(- d_{1_i}),t) + {\rm HF}(R_{2_i}(- d_{2_i}),t) + \dots + {\rm HF}(R_{l_i}(- d_{l_i}),t)\right). \] Another standard approach to compute the Hilbert function is via the Hilbert series. \begin{definition} Let $R= \displaystyle \bigoplus R_n$ be a graded ring. The Hilbert series of $R$ is defined to be the generating function \begin{center} ${\rm HS}(R,t)=\displaystyle\sum_{n=0}^{\infty} {\rm HF}(R,n)t^{n}$. \end{center} \end{definition} Similarly, if $I$ is a homogeneous ideal of $R$, then the Hilbert series of $I$ is the formal power series \begin{center} ${\rm HS}(I,t)=\displaystyle\sum_{n=0}^{\infty} {\rm HF}(I,n)t^{n}$. \end{center} Convergence is not an issue since we are working with formal power series. For the Hilbert series we have a counterpart to our result derived from the ``rank-nullity'' theorem. \begin{theorem} Let $R= \displaystyle \bigoplus _{n\geq 0} R_n$ be a graded ring and $I= \displaystyle \bigoplus _{n\geq 0} I_n$ be a graded ideal. Then \begin{center} ${\rm HS}(R/I,t)={\rm HS}(R,t)-{\rm HS}(I,t)$. \end{center} \end{theorem} \begin{proof} Theorem \ref{rank-nullity Hilbert} implies that ${\rm HF}(R/I,n)={\rm HF}(R,n)-{\rm HF}(I,n)$ and by summing over all values of $n$ the theorem follows. \end{proof} In other words, for computing the dimension of $R_n/I_n$, we count the number of monomials in $R_n$ and we subtract the number of monomials spanning $I_n$; this is because the monomials spanning $R_n$ form a basis for $R_n$ as a vector space over $k$.
Similarly the monomials spanning $I_n$ form a basis for $I_n$ as a vector space over $k$. \\ To build on this result we need the following notation for the Hilbert function of a module $M$ shifted by degree $d$ \begin{center} ${\rm HF}\{M(-d)\}:={\rm HF}(M,t-d)$. \end{center} \begin{lemma}\label{HF for principal ideal} A principal ideal has the Hilbert function of a polynomial ring shifted by the degree of the generator. If $I=\langle p \rangle$, where $p$ is a monomial of degree $n$ in $k[\bar {\rm x}]$ and $\bar {\rm x}$ represents the $a$-tuple $(x_1,x_2,x_3,...,x_a)$ then \begin{center} ${\rm HF}(I,t)={\rm HF}\{k[\bar {\rm x}](-n)\}$. \end{center} \end{lemma} \begin{proof} By definition ${\rm HF(I,t)}$ is the dimension of the vector space spanned by all polynomials in $I$ of uniform degree $t$. A basis for such a vector space can be chosen to be all monomials in $I$ of degree $t$. These are of the form $f \cdot p $, where $f$ is a monomial with ${\rm deg}(f)=t-{\rm deg}(p)$, so there are as many such monomials as there are monomials of degree $t-n$ in $k[\bar {\rm x}]$, which is exactly ${\rm HF}\{k[\bar {\rm x}](-n)\}$. \end{proof} Before working through our first example, it would be helpful to record the following corollary to our last lemma. \begin{corollary}\label{corollary to HF for principal ideal} For a principal ideal $I=\langle p \rangle$ we have that \begin{center} ${\rm HF}(R/I,t)={\rm HF}(R,t)-{\rm HF}(R(- {\rm deg}(p)),t)$ \end{center} \end{corollary} \begin{proof} Apply the above lemma together with Theorem \ref{rank-nullity Hilbert}. \end{proof} \begin{example} Find the Hilbert function of $M=k[x,y,z]/\langle x^5 \rangle$. \end{example} Let $R=k[x,y,z]$. By Corollary \ref{corollary to HF for principal ideal}, the Hilbert function of the module $M$ can be written as \begin{center} ${\rm HF}(M,t)={\rm HF}(R,t)-{\rm HF}(R(- {\rm deg}(x^5)),t)={\rm HF}(R,t)-{\rm HF}(R(- 5),t)$.
\end{center} Therefore, \begin{center} \begin{tabular}{ l | l | l } ${\rm HF}\{R\}$ & $-{\rm HF}\{R(-5)\}$ & ${\rm HF}\{M\}$\\ \hline ~1 & ~~~~~0 &~~~ $\bf 1$ \\ \hline ~3 & ~~~~~0 &~~~ $\bf 3$\\ \hline ~6 & ~~~~~0 &~~~ $\bf 6$ \\ \hline 10 & ~~~~~0 &~~ $\bf 10$ \\ \hline 15 & ~~~~~0 &~~ $\bf 15$ \\ \hline 21 & ~~~~-1 &~~ $\bf 20$ \\ \hline 28 & ~~~~-3 &~~ $\bf 25$ \\ \hline 36 & ~~~~-6 &~~ $\bf 30$ \\ \hline 45 & ~~~-10 &~~ $\bf 35$ \\ \hline 55 & ~~~-15 &~~ $\bf 40$ \\ \hline ~.. & ~~~~.. &~~~~~.. \\ \hline ~.. & ~~~~.. &~~~~~.. \\ \end{tabular} \end{center} Regardless of our approach to the Hilbert function of polynomial quotient rings, it is clear that computing the Hilbert function of rings of the form $k[x_1,x_2,...,x_a]$ is essential. \section{Hilbert Function tables, motivating applications and examples} We study Hilbert functions by placing them into families. The simplest such family will be the Hilbert functions corresponding to the indexed set $\{k[x_1, x_2, \ldots, x_a]\, : \, a \geq 1\}$. Then we generalize the idea of the Pascal table to construct the Hilbert Function tables. To motivate this generalization, we use the Stanley--Reisner ring of a complex which we gradually build in a form that is analogous to the way the corresponding Hilbert Function table would be generated. Finally, one must address the difficulties that arise in generating a row of the Hilbert Function table when one or more monomials are introduced into the ideal defining the quotient ring corresponding to that row. We illustrate the difficulties at the end of this section and develop a different method of solving this problem in each of the next two sections. \subsection{Pascal Table and more general Hilbert Function Tables} Consider the indexed set $\{k[x_1, x_2, \ldots, x_a]\, : \, a \geq 1\}$ of polynomial rings. We use the index value $a$ to determine the row and the degree $b$ of the monomials being counted to determine the column in the table below.
\begin{center} \small{ \begin{tabular}{ l | l l l l l l l l l} HF of $k[x_1]$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 &...\\ [-3pt] \hline HF of $k[x_1,x_2]$ & 1 & 2& 3 & 4 & 5 & 6 & 7 & 8 &... \\[-3pt] \hline HF of $k[x_1,x_2,x_3]$ & 1 & 3 & 6 & 10 & 15 & 21 & 28 & 36 &...\\[-3pt] \hline HF of $k[x_1,x_2,x_3,x_4]$ & 1 & 4 & 10 & 20 & 35 & 56 & 84 & 120&... \\[-3pt] \hline HF of $k[x_1,x_2,x_3,x_4,x_5]$ & 1 & 5 & 15 & 35 & 70 & 126 & 210 & 330&... \\[-3pt] \hline HF of $k[x_1,x_2,x_3,x_4,x_5,x_6]$ & 1 & 6 & 21 & 56 & 126 & 252 & 462 & 792&... \\[-3pt] \hline HF of $k[x_1,x_2,x_3,x_4,x_5,x_6,x_7]$ & 1 & 7 & 28 & 84 & 210 & 462 & 924 & 1716&... \\[-3pt] \hline HF of $k[x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8]$ & 1 & 8 & 36 & 120 & 330 & 792 & 1716 & 3432&... \\[-3pt] \hline . & . & . & . & . & . & . & . & .&... \\[-3pt] \end{tabular}} \end{center} The reader would have undoubtedly noticed that the number patterns displayed in the above table are those of the Pascal triangle. For this reason, we refer to the above table as the Pascal table. These numerical patterns lead us to the following proposition. \begin{proposition}\label{Pascal_table_recurrence_formula} $F(a,b)= F(a-1,b)+F(a,b-1)$, where $F(a,b)$ denotes the number of monomials of degree $b$ in $k[x_1,x_2,...,x_a]$. \end{proposition} \begin{proof} Let $S$ be the set of monomials in $k[x_1,x_2,....,x_a]$ of degree $b$. Then $S$ can be written as the union of the set $S_1$ of monomials of degree $b$ in the variables $x_1,x_2,....,x_{a-1}$ and the complementary set $S_2=S\setminus S_1$. Observe $|S_1|=F(a-1,b)$. Now consider any element of $S_2$. Notice that such an element must have a factor $x_a$. So if $p(\bar {{\rm x}}) \in S_2$, then there is a unique $\hat {p}(\bar {{\rm x}})$ such that $p(\bar{{\rm x}})=\hat{p}(\bar{{\rm x}})\cdot x_a$ and $\rm {deg(\hat{p}(\bar{{\rm x}}))}=b-1$. On the other hand, if $\hat {q}(\bar{{\rm x}})$ is a monomial of degree $b-1$ in $k[x_1,x_2,....,x_a]$, then $\hat {q}(\bar{{\rm x}}) \cdot x_a \in S_2$.
Therefore, there is a bijection from the set of monomials of degree $b-1$ in $k[x_1,x_2,....,x_a]$ to the set $S_2$. Consequently, $|S|=F(a-1,b)+F(a,b-1)$. \end{proof}
Now we prove by induction that each entry of the table is given by the following proposition. Please be aware that the row count starts with $1$ but the column count starts with zero. This is because the row count matches the number of variables used and the column count corresponds to the common degree of the set of monomials being counted. \begin{proposition}\label{Pascal_table_combinatorial_formula} $F(a,b)=\frac{(a-1+b)!}{(a-1)!b!}$, where $F(a,b)$, $a \geq 1, b \geq 0$, denotes the entry that lies in the $a^{\rm th}$ row and the $b^{\rm th}$ column of the Pascal table. \end{proposition} \begin{proof} We have that $F(1,b)$ is the number of monomials of degree $b$ in a single variable. Since $x_1 ^{b}$ is the only monomial of degree $b$ in $k[x_1]$, we have $F(1,b)=1$ for all $b\geq 0$. Also $F(a,0)=1$ for all $a\geq 1$ because in the ring $k[x_1,x_2,....,x_a]$ there is only one monomial of degree zero, namely $x_1^{0}\cdot x_2^{0} \cdot ....\cdot x_a^{0}$.\\ {\bf Inductive Step}\\ Suppose $a>1$ and $b>0$. Then given that $$F(a-1,b)=\frac{(a-1+b-1)!}{(a-2)!b!}$$ and $$F(a,b-1) = \frac{(a-1+b-1)!}{(a-1)!(b-1)!}$$ we have \vspace{-2mm} \begin{eqnarray} F(a,b)&=&F(a-1,b)+F(a,b-1) \nonumber \\ & = &\frac{(a-1+b-1)!}{(a-2)!b!}+\frac{(a-1+b-1)!}{(a-1)!(b-1)!} \nonumber \\ & = & \frac{(a-1+b)\cdot (a-2+b)!}{(a-1)!b!}\nonumber \\ & = &\frac{(a-1+b)!}{(a-1)!b!}.\nonumber \ \end{eqnarray} \end{proof}
Both meanings assigned to $F(a,b)$ are equivalent. Thus, for example, by choosing $a=2$ we may regard $F(2,b)$ as the value in the $2^{\rm nd}$ row and $b^{\rm th}$ column of the table or as the number of monomials of degree $b$ that can be written with two distinct variables.
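Both the recurrence and the closed form above are easy to verify computationally. The following short Python sketch (an illustration added here, not part of the mathematical development) builds the rows of the Pascal table from the recurrence and compares them with the binomial closed form:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def F(a, b):
    """Number of monomials of degree b in k[x_1, ..., x_a], computed by the
    recurrence F(a,b) = F(a-1,b) + F(a,b-1) with F(1,b) = F(a,0) = 1."""
    if a == 1 or b == 0:
        return 1
    return F(a - 1, b) + F(a, b - 1)

# Compare each row with the closed form F(a,b) = (a-1+b)! / ((a-1)! b!).
for a in range(1, 9):
    row = [F(a, b) for b in range(8)]
    assert row == [comb(a - 1 + b, b) for b in range(8)]
    print(row)
```

Running this prints exactly the eight rows displayed in the Pascal table above.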
Also observe that Proposition \ref{Pascal_table_combinatorial_formula} together with corollary \ref{corollary to HF for principal ideal} gives a concrete formula for the Hilbert function of a quotient by a principal ideal. So for $R=k[\bar {\rm x}]$, $p \in R$ and $I=\langle p \rangle$ we can write \begin{equation}\label{principal ideal Pascal} {\rm HF}(R/I,b)= {\rm F}(a,b)-{\rm F}(a,b-{\rm deg}(p))=\frac{(a-1+b)!}{(a-1)!b!}-\frac{(a-1+b-\deg (p))!}{(a-1)!(b-\deg (p))!}. \end{equation} Proposition \ref{Pascal_table_recurrence_formula} is also valid for generating some rows of more general families of Hilbert functions. We can prove this using either a counting argument or some homological algebra machinery. We prefer the latter in order to avoid delicate counting procedures. Moreover, Proposition \ref{Pascal_table_recurrence_formula} allows for an inductive construction of other expressions for computing values of the Pascal table. Let us illustrate this by expressing $F(a,b)$ in terms of the ascending factorial $[a]^{n}=a\cdot(a+1)\cdot(a+2)\cdot......\cdot(a+n-1)$ with the convention $[a]^{0}=1$. \begin{proposition}\label{combinatorial_sum_formula} The Hilbert function $F(a,b)$ defined as above can be computed by either one of the following formulas $$F(a,b)=\displaystyle\sum_{i=0}^{a-1} \frac{1}{i!} [b]^i \quad \text{ or } \quad F(a,b)=\displaystyle\sum_{j=0}^{b} \frac{1}{j!} [a-1]^j.$$ \end{proposition} \begin{proof} To prove the first formula we observe that $F(1,b)=1$ for all $b \geq 0$, and this is precisely $F(1,b)=\displaystyle\sum_{i=0}^{1-1} \frac{1}{i!} [b]^i$.\\ We proceed by induction on the first parameter $a \geq 2$ of $F(a,b)$. Suppose \begin{center} $F(a-1,b)=\displaystyle\sum_{i=0}^{a-2} \frac{1}{i!} [b]^i$.
\end{center} Now we use the result that \[ F(a,b)= \begin{cases} F(a-1,b)+0, & \text {for}\ b=0 \\ F(a-1,b)+F(a,b-1), & \text {for}\ b>0 \end{cases} \] Observe that $F(a-1,0)=1$, for all $a\geq 1$.\\ Therefore, \begin{eqnarray} F(a,b) & = & \displaystyle F(a-1,b)+F(a,b-1) \nonumber \\ & = & \displaystyle\sum_{i=0}^{a-2} \frac{1}{i!} [b]^i + \frac{(a-1+b-1)!}{(a-1)!(b-1)!}\nonumber \\ & = & \displaystyle\sum_{i=0}^{a-2} \frac{1}{i!} [b]^i + \frac{1}{(a-1)!} \cdot (b\cdot (b+1)\cdot.......\cdot (b+a-2))\nonumber \\ & = & \displaystyle\sum_{i=0}^{a-2} \frac{1}{i!} [b]^i + \frac{1}{(a-1)!} \cdot [b]^{(a-1)}\nonumber\\ & = &\displaystyle\sum_{i=0}^{a-1} \frac{1}{i!} [b]^i.\nonumber \end{eqnarray} The second formula follows immediately from the first formula, since $F(a,b)=\frac{(a-1+b)!}{(a-1)!\,b!}$ is symmetric under interchanging $a-1$ and $b$. Therefore, we have that \begin{center} $F(a,b)=\displaystyle\sum_{j=0}^{b} \frac{1}{j!} [a-1]^j$. \end{center} \end{proof} As an example, take the graded module $k[x_1,x_2,x_3]$; then $F(3,b)=[b]^0 + \frac{1}{1!}[b]^1 + \frac{1}{2!}[b]^2=1+b+ \frac{1}{2} (b^2 + b)$, where $b=0,1,2,....$ Now we proceed to create a more robust tool for computing the Hilbert function of a quotient ring by introducing the notion of the Hilbert function table. \begin{definition} A Hilbert function table associated to a quotient ring $k[x_1,x_2,...,x_d]/I$, where $I$ is a monomial ideal in $k[x_1,x_2,...,x_d]$, is an array whose entry indexed by $(a,b)$ is the value of ${\rm HF}(k[x_1,x_2,...,x_a]/I_a,b)$, where $I_a$ is the ideal generated by the generators of $I$ that involve only the set of variables $\{x_1,x_2,...,x_a\}$. \end{definition} As a result of the above definition, the Pascal table is a Hilbert function table for graded modules of the form $k[\bar {\rm x}]$, where $\bar {\rm x}=(x_1,x_2,....,x_a)$ and $a \in \mathbb{Z}^{>0}$. We can also observe that if $a \geq d$ then $I_a=I$.
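A Hilbert function table as just defined can also be generated by brute force for any monomial ideal: represent monomials by exponent vectors, and in row $a$ keep only the generators supported on the first $a$ variables. The Python sketch below (illustrative only; the helper names are ours) does exactly this:

```python
from itertools import combinations_with_replacement

def monomials(nvars, deg):
    """All degree-`deg` monomials in `nvars` variables, as exponent vectors."""
    for picks in combinations_with_replacement(range(nvars), deg):
        e = [0] * nvars
        for i in picks:
            e[i] += 1
        yield tuple(e)

def divides(g, m):
    """Monomial g divides monomial m iff every exponent of g is <= that of m."""
    return all(gi <= mi for gi, mi in zip(g, m))

def hf_table_row(a, gens, ncols):
    """Row a of the Hilbert function table: HF(k[x_1,...,x_a]/I_a, b) for
    b = 0,...,ncols-1, where I_a keeps only the generators of I that involve
    just the first a variables."""
    I_a = [g[:a] for g in gens if all(e == 0 for e in g[a:])]
    return [sum(1 for m in monomials(a, b)
                if not any(divides(g, m) for g in I_a))
            for b in range(ncols)]

# Test case: the monomial ideal <x1*x2> in k[x1, x2, x3, x4].
gens = [(1, 1, 0, 0)]
for a in range(1, 5):
    print(hf_table_row(a, gens, 8))
```

For this test ideal the first row is constant $1$, the second stabilizes at $2$, and the fourth row gives the perfect squares $(b+1)^2$.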
Moreover, the order of the variables $x_1,x_2,...,x_d$ will affect the Hilbert function table. In fact, two different Hilbert function tables for the same quotient ring need not have the same rows for $1 \leq a < d$. This is because altering the order of $x_1,x_2,...,x_d$ will alter the sequence of ideals $I_1,I_2,...I_{d-1}$. However, two Hilbert function tables for $k[x_1,x_2,...,x_d]/I$ will agree in rows $d$ and higher because $I_a=I$ for $a \geq d$. After the $d^{\rm th}$ row, a new variable never introduces a new monomial into the ideal. Therefore, producing the rows after the $d^{\rm th}$ row is a straightforward application of the following result. \begin{theorem}\label{no_monomial_added_to_ideal} Let ${\rm HF}(j,b)$ denote the Hilbert function of $k[x_1,x_2,....,x_d,x_{d+1},....,x_{d+j}]/I$, where $j > 0$ and $I$ is a monomial ideal in the fixed set of variables $\{x_1,.....,x_d \}$. Then for $b \geq 0$ we have $ {\rm HF}(j,b)= {\rm HF}(j-1,b) + {\rm HF}(j,b-1)$, with the convention ${\rm HF}(j,-1)=0$. \end{theorem} \begin{proof} For $j\geq1$, let $M_j=k[x_1,x_2,....x_d,x_{d+1},\dots,x_{d+j}]/I$ and let $z=x_{d+j}$. We use the short exact sequence \begin{equation}\label{E: short exact sequence} 0\longrightarrow (0:z)(-1) \xrightarrow{\rm incl} M(-1) \xrightarrow{\ \cdot z\ } M \longrightarrow M/zM \longrightarrow 0 \end{equation} found in \cite{villarreal2015monomial}. In this short exact sequence we take $M=M_j$.
Applying what are commonly known as the $2^{\rm nd}$ and $3^{\rm rd}$ isomorphism theorems or Proposition 2.1 in \cite{atiyah2018introduction}, \begin{align} \begin{split} zM_j &= z\left(k[x_1,x_2,....x_d,x_{d+1},\dots,x_{d+j}]/I\right)\nonumber\\ &\cong z\left(k[x_1,x_2,....x_d,x_{d+1},\dots,x_{d+j}]/(I \cap \langle z\rangle)\right)\nonumber\\ &\cong (z\,k[x_1,x_2,....x_d,x_{d+1},\dots,x_{d+j}] + I)/I\nonumber \end{split} \end{align} thus, \begin{align} \begin{split} M_j/zM_j&=( k[x_1,x_2,....x_d,x_{d+1},\dots,x_{d+j}]/I)/\left((z\, k[x_1,x_2,....x_d,x_{d+1},\dots,x_{d+j}]+I)/I \right)\nonumber\\ &\cong M_{j-1}=k[x_1,x_2,....x_d,x_{d+1},\dots,x_{d+j-1}]/I.\nonumber \end{split} \end{align} Since $z$ does not appear in any generator of the monomial ideal $I$, the only element $x \in M_j$ such that $zx=0$ is $x=0$. In other words, the annihilator of multiplication by $z$ is zero. This implies that in the short exact sequence $0\longrightarrow (0:z)(-1) \xrightarrow{\rm incl} M_j(-1) \longrightarrow M_j \longrightarrow M_j/zM_j \longrightarrow 0$ the first term vanishes, and the corresponding alternating sum gives ${\rm HF}\{M_j/zM_j\}-{\rm HF}\{M_j\}+ {\rm HF}\{M_j(-1)\}=0$; that is, ${\rm HF}(j,b)={\rm HF}(j-1,b)+{\rm HF}(j,b-1)$. \end{proof} \subsection{Motivating example: the Stanley-Reisner Ring} The Stanley-Reisner ring is a polynomial quotient ring assigned to a finite simplicial complex. First, we must bring to the attention of the reader what is meant by a \emph{finite simplicial complex}. \begin{definition} A finite simplicial complex $\Delta$ consists of a finite set $V$ of vertices and a collection $\Delta$ of subsets of $V$ called faces such that \begin{itemize} \item{(i)} If $u \in V$, then $\{u\} \in \Delta$. \item{(ii)} If $F \in \Delta$ and $G \subset F$, then $G \in \Delta$. \end{itemize} \end{definition} {\bf Note:} The empty set is a face of every simplicial complex. Let $\Delta$ be a simplicial complex and let $F$ be a face of $\Delta$. Define the dimensions of $F$ and $\Delta$ by ${\rm dim}\, F=|F|-1$ and ${\rm dim}\, \Delta={\rm sup} \{{\rm dim}\, F \,|\, F \in \Delta\}$ respectively.
A face of dimension $q$ is called a $q$-face or a $q$-simplex. Associate a distinct variable $x_i$ to each distinct vertex in the set $V$. If $F$ is a face of $\Delta$ then the product of all corresponding $x_i$ is a square-free monomial associated with $F$. This is due to the fact that at most one $q$-face can exist for a given $(q+1)$-set of vertices. The Stanley-Reisner ring can be written in the following form: \begin{center} $K[x_1,x_2,.......,x_n]/I $, \end{center} where $I$ is the ideal generated by the square-free monomials in the variables $x_1,x_2,.......,x_n$ corresponding to the non-faces of $\Delta$. For convenience let us denote the Stanley-Reisner ring associated with $\Delta$ by $k[\Delta]$. This is a standard construction, the details of which can be found on page 5 of \cite{miller2004combinatorial}. By definition, a simplicial complex $\Delta$ is a set-theoretic construct, but it is often the case that we work with its geometric realization; that is, we associate with $\Delta$ a topological space that is a subspace of $\mathbb{R}^{\dim \Delta}$ and is a union of simplices corresponding to the faces of $\Delta$. Since $\Delta$ can be written as a disjoint union of its $i$-dimensional components $\Delta = \bigcup_{i=0}^{\dim \Delta}\Delta_i$, the Stanley-Reisner ring of $\Delta$ admits a direct sum decomposition \begin{center} $k[\Delta]= \displaystyle \bigoplus _{i=0}^{\dim \Delta} k[\Delta_i]$ \end{center} whose summands $k[\Delta_i]$ are vector spaces with a basis of monomials (not necessarily square-free) supported on the $i$-dimensional faces of $\Delta$. \begin{example} We illustrate how the construction of the complex with Stanley--Reisner ring \begin{center} $M=k[x,\hat{x},y,z,w]/\langle x\hat{x},yzw \rangle$ \end{center} mirrors the generation of the corresponding Hilbert Function table by adding one variable at a time and including all relevant monomials in the ideal used in the quotient. \end{example} \begin{figure}[h!]
\centering \includegraphics[width=0.2\textwidth]{graph3} \caption{4-vertices, 3-edges, 0-faces} \end{figure} \newpage Starting with the complex $C_0$ corresponding to the point $x$, we have the polynomial ring $k[x]$. Bringing in the next variable $\hat{x}$, we have a new complex $C_1$ corresponding to the points $x,\hat{x}$, so we have $k[x,\hat{x}]/\langle x\hat{x} \rangle$. When the next variable $y$ appears we have the complex $C_2$ corresponding to the points $x,\hat{x}, y$ and the edges $xy$ and $\hat{x}y$. In the same way, when $z$ appears we have the complex $C_3$ corresponding to the points $x,\hat{x}, y, z$, the edges $xy,xz,yz,y\hat{x},z\hat{x}$ and the faces $xyz$ and $y\hat{x}z$. To generate the table below we invoke Theorem \ref{no_monomial_added_to_ideal}.\\[12pt] \begin{center} \scalebox{1.2}{ \begin{tabular}{ l | l l l l l l l l l} ${\rm HF}\{k[x]\}$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & ... \\ \hline ${\rm HF} \{k[x,\hat{x}]/\langle x\hat{x} \rangle\}$ & 1 & 2 & 2 & 2 & 2 & 2 & 2 & 2 &...\\ \hline ${\rm HF} \{k[x,\hat{x}, y]/\langle x\hat{x} \rangle\}$ & 1 & 3 & 5 & 7 & 9 & 11 & 13 & 15 &...\\ \hline ${\rm HF} \{k[x,\hat{x}, y,z]/\langle x\hat{x} \rangle\}$ & 1 & 4 & 9 & 16 & 25 & 36 & 49 & 64 &...\\ \end{tabular}} \end{center} \vskip 0.5cm Let $M_1=k[x,\hat{x}, y,z]/\langle x\hat{x} \rangle.$ Then by using the short exact sequence (s.e.s.)
for $M$ we have \begin{center} \footnotesize{ \begin{tabular}{l*{10}{c}r} {0} & $\longrightarrow$ & $b_3$ & $\xrightarrow {\text {inclusion}}$ & $b_2$ & $\xrightarrow {{\text{multiply}}\ {\text{by}}\ w}$ & $b_1$ & $\longrightarrow$ & $b_0$ & $\longrightarrow$ & $0$\\ {0} & $\longrightarrow$ & $(0:w)_{M}(-1)$ & $\xrightarrow {\text {inclusion}}$ & ${M}(-1)$ & $\xrightarrow {{\text{multiply}}\ {\text{by}}\ w}$ & $M$ & $\longrightarrow$ & $M/w {M} \cong {M_1}$ & $\longrightarrow$ & $0$\\ {0} & &{0} & &{0} & & $\bf 1$ & & {1} & & {0}\\ {0} & &{0} & & {1} & & $\bf 5$ & &{4} & & {0} \\ {0} & &{0} & &{5} & & $\bf 14$ & & {9} & & {0}\\ {0} & &{1} & &{14} & & $\bf 29$ & & {16} & & {0}\\ {0} & &{4} & &{29} & & $\bf 50$ & & {25} & & {0}\\ {0} & &{9} & &{50} & & $\bf 77$ & & {36} & & {0}\\ {0} & &{16} & &{77} & & $\bf 110$ & & {49} & & {0}\\ {0} & &{25} & &{110} & & $\bf 149$ & & {64} & & {0}\\ {...} & & {...} & & {...} & &{...} & &{...} & & {...} \\ {...} & & {...} & & {...} & & {...} & &{...} & & {...} \end{tabular} } \end{center} The justification for the values in the leftmost column is based on the annihilator \begin{center} $(0:w)=\{q \in M:qw=0 \in M\}$ \end{center} associated with the map which is multiplication by $w$. A basis for the $b$-graded component of the module $(0:w)$ is the following set: \begin{align} B&=\left\{{\rm nonzero}\ p\in M\ {\rm of}\ {\rm degree}\ b:yz\mid p\right\}\nonumber\\ &=\left\{(yz)(r):{\rm nonzero}\ r\in M_1\ {\rm of}\ {\rm degree}\ b-2\ {\rm and}\ (x{\hat x})\nmid yzr\right\}\nonumber\\ &=\left\{(yz)(r):{\rm nonzero}\ r\in M_1 \ {\rm of}\ {\rm degree}\ b-2\right\}.\nonumber \end{align} Thus, ${\rm HF}\{(0:w)\}=|B|={\rm HF}\{M_1(-2)\}$. Having accounted for all annihilator elements and using the fact that $b_2=b_1+b_3-b_0$ we find the Hilbert function for $M$. \newpage \begin{example} We are looking for the Hilbert function of the module \begin{center} $M=k[x,y,z]/\langle x^2yz^3,x^3z,y^2z^2 \rangle$.
\end{center} \end{example} By rearranging the variables in our example we have that $M=k[y,z,x]/\langle y^2z^2,x^2yz^3,x^3z \rangle$ and based on Theorem \ref{no_monomial_added_to_ideal} we have: \begin{center} \begin{tabular}{ l | l l l l l l l l l} ${\rm HF}\{k[y]\}$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 &... \\ \hline ${\rm HF}\{k[y,z]/\langle y^2z^2\rangle\}={\rm HF}\{M_1\}$ & 1 & 2& 3 & 4 & 4 & 4 & 4 & 4 &... \end{tabular} \end{center} Therefore, based on the short exact sequence we have \begin{center} \footnotesize{ \begin{tabular}{l*{10}{c}r} {0}\hspace{-6mm} & $\longrightarrow$\hspace{-6mm} & $(0:x)(-1)$\hspace{-6mm} & $\longrightarrow$\hspace{-4mm} & $M(-1)$\hspace{-4mm} & $\longrightarrow$\hspace{-6mm} & $M$\hspace{-4mm} & $\longrightarrow$\hspace{-6mm} & $M_1$\hspace{-6mm} & $\longrightarrow$\hspace{-4mm} & $0$\\ {0} & &{0} & &{0} & & ~~~$\bf 1$ & & ${1}=\{1\}$ & & {0}\\ {0} & &{0} & & {1} & & ~~~$\bf 3$ & &${2}=\{y,z\}$ & & {0} \\ {0} & &{0} & &{3} & & ~~~$\bf 6$ & & ${3}=\{y^2,yz,z^2\}$ & & {0}\\ {0} & &{0} & &{6} & & ~~~$\bf 10$ & & ${4}=\{y^3,y^2z,yz^2,z^3\}$ & & {0}\\ {0} & &${1}=\{x^2z\}$ & &{10} & & ~~~$\bf 13$ & & ${4}=\{y^4,y^3z,yz^3,z^4\}$ & & {0}\\ {0} & &${2}=\{x^2yz,x^2z^2\}$ & &{13} & & ~~~$\bf 15$ & & ${4}=\{y^5,y^4z,yz^4,z^5\}$ & & {0}\\ {0} & &${4}=\{xyz^3,x^2y^2z,x^2yz^2,x^2z^3\}$ & &{15} & & ~~~$\bf 15$ & & {4} & & {0}\\ {...} & & {...} & & {...} & &{...} & &{...} & & {...} \\ {...} & & {...} & & {...} & & {...} & &{...} & & {...} \end{tabular} } \end{center} In order to compute the Hilbert function of the annihilator module we need to find all the elements that are nonzero in $M$ and are sent into $I$ by multiplication by $x$. Since $x^3z/x=x^2z$ and $x^2yz^3/x=xyz^3$, such an element must be a multiple of $x^2z$ or of $xyz^3$; moreover, it must not itself lie in $I$, so in particular it cannot be a multiple of $y^2z^2$. In other words, the generators $x^3z$ and $x^2yz^3$ contribute annihilator elements, while multiples of the remaining generator $y^2z^2$ are already zero in $M$. In this way, and using the fact that the alternating sum is zero, we create the above table.
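The values in the $M$ column can be double-checked by direct enumeration, independently of the exact-sequence bookkeeping. The following Python sketch (a sanity check we add for illustration, not part of the method itself) counts, for each degree $b$, the monomials of $k[x,y,z]$ divisible by none of the generators:

```python
from itertools import product

def hf(gens, nvars, b):
    """HF(k[x_1,...,x_nvars]/<gens>, b): count the degree-b monomials
    (as exponent vectors) divisible by none of the generators."""
    total = 0
    for e in product(range(b + 1), repeat=nvars):
        if sum(e) != b:
            continue
        if not any(all(g[i] <= e[i] for i in range(nvars)) for g in gens):
            total += 1
    return total

# Generators y^2 z^2, x^2 y z^3, x^3 z as exponent vectors over (x, y, z).
gens = [(0, 2, 2), (2, 1, 3), (3, 0, 1)]
print([hf(gens, 3, b) for b in range(8)])
```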
In this example, we can observe the drawback that computing the Hilbert function of the annihilator module requires counting. In the next examples, we illustrate basic approaches to avoid counting. \subsection{Examples} Now we use the basic results found earlier in this section to compute the Hilbert function of some key examples. These will provide the motivation for the techniques we develop in sections 3 and 4. For our convenience, we group the examples based on the number of monomials generating the ideal used to produce the quotient ring. \subsubsection{The ideal used to produce the quotient polynomial ring is a principal ideal} Consider $M=k[\bar{x}]/\langle u \rangle$ where $\deg u = d$. Using equation (\ref{principal ideal Pascal}) we obtain the following: \begin{equation}\label{principal ideal case} {\rm HF}(M,b)= \begin{cases} F(a,b), & \text {for}\ 0\leq b \leq d-1 \\ F(a,b)-F(a,b - d), &\text{for } b\geq d \end{cases} \end{equation} This approach combined with the result in Proposition \ref{combinatorial_sum_formula} immediately yields \begin{equation}\label{matrix} {\rm HF}(M,b)= \begin{cases} F(a,b)=\displaystyle\sum_{j=0}^{b} \frac{1}{j!} [a-1]^j, & \text {for}\ 0\leq b \leq d-1 \\ F(a,b)-F(a,b - d)=\displaystyle\sum_{j=b-(d-1)}^{b} \frac{1}{j!} [a-1]^j, &\text{for } b\geq d \end{cases} \end{equation} Next, (\ref{matrix}) can be encoded as matrix multiplication using an infinite matrix and infinite column vectors corresponding to the right-hand side of the above equation. \[\left( \begin{array}{ccccccccc} \frac{1}{0!} & 0 & 0 & 0 & 0 & 0 &0 & 0 &... \\ \frac{1}{0!} & \frac{1}{1!} & 0 & 0 &0 & 0 & 0 & 0 &... \\ \frac{1}{0!} & \frac{1}{1!} & \frac{1}{2!} & 0 &0 & 0 & 0 & 0 &...\\ \vdots & \vdots &\vdots & \vdots & \vdots & \vdots & \vdots &\vdots & \vdots \\ \frac{1}{0!} & \frac{1}{1!} & \frac{1}{2!} & \frac{1}{3!} & ... & \frac{1}{(d -1)!} & 0 &0 &...\\ 0 & \frac{1}{1!} & \frac{1}{2!} & \frac{1}{3!} & ...
& \frac{1}{(d-1)!} & \frac{1}{d!} &0 &...\\ 0 & 0 & \frac{1}{2!} & \frac{1}{3!} & ... & \frac{1}{(d-1)!} & \frac{1}{d!} &\frac{1}{(d+1)!} &...\\ ... &... & ... & ... & ... & ... & ... & ... & ... \\ ... & ... &... & ... & ... & ... & ... & ... & ... \\ \end{array} \right) \cdot \left( \begin{array}{c} \left[a-1\right]^{0}\\ \left[a-1\right]^1\\ \left[a-1\right]^2\\ \left[a-1\right]^3\\ \left[a-1\right]^4\\ \left[a-1\right]^5\\ \left[a-1\right]^6\\ ...\\ ...\\ \end{array} \right) = \left( \begin{array}{c} {\bf {\rm HF}(M,0)}\\ {\bf {\rm HF}(M,1)}\\ {\bf {\rm HF}(M,2)}\\ {\bf \vdots}\\ {\bf {\rm HF}(M,d-1)}\\ {\bf {\rm HF}(M,d)}\\ {\bf {\rm HF}(M,d+1)}\\ {\bf ....}\\ {\bf ....}\\ \end{array} \right) \hspace{1cm} .\] In what follows, we concentrate our efforts on finding ways to compute the Hilbert function of a polynomial quotient ring as finite sums and differences of the Pascal table row corresponding to the number of variables in our polynomial ring. In each such case, one can produce a matrix multiplication approach similar to the above. We'll leave this for the reader to try using the methods in section 4 as a starting point. Here are two concrete examples to illustrate the above computations. \vskip 0.5cm \begin{example} We are looking for the Hilbert function of the module $M=k[x,y,z]/\langle xy^{2} \rangle$. \end{example} Equation (\ref{principal ideal case}) gives the following formula for this quotient ring \[ {\rm HF}(M,b)= \begin{cases} F(a,b), & \text {for}\ 0\leq b\leq 2 \\ F(a,b)-F(a,b-3), & \text {for}\ b\geq 3 \end{cases} \] where $a=3$ and the values of $F(3,b)$ and $F(3,b-3)$ are read off the third row of the Pascal table.
Therefore, the Hilbert function of the module $M=k[x,y,z]/\langle xy^{2} \rangle $ is expressed by the following sequence of numbers \begin{center} \begin{tabular}{ l l l l l l l l l} ${\rm HF}(M,b)$: {\bf 1} & {\bf 3} & {\bf 6} & {\bf 9} & {\bf 12} & {\bf 15} & {\bf 18} & {\bf ...} \\ \end{tabular} \end{center} Finally, by (\ref{matrix}) we can rewrite the second part of the above function as \begin{center} $ F(a,b)-F(a,b-3)=\displaystyle\sum_{j=b-2}^{b} \frac{1}{j!} [a-1]^j$, for $b\geq 3$. \end{center} An alternative way to express the Hilbert function of $M$ is via the following matrix product \[\left( \begin{array}{cccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 &... \\ 1 & \frac{1}{1!} & 0 & 0 & 0 & 0 & 0 &... \\ 1 & \frac{1}{1!} & \frac{1}{2!} & 0 & 0 & 0 & 0 &...\\ 0 & \frac{1}{1!} & \frac{1}{2!} & \frac{1}{3!} & 0 & 0 & 0 &...\\ 0 & 0 & \frac{1}{2!} & \frac{1}{3!} & \frac{1}{4!} & 0 & 0 &...\\ 0 & 0 & 0 & \frac{1}{3!} & \frac{1}{4!} & \frac{1}{5!} & 0 &...\\ 0 & 0 & 0 & 0 & \frac{1}{4!} & \frac{1}{5!} & \frac{1}{6!} &...\\ ... & ... & ... & ... & ... & ... & ... & ... \\ ... & ... & ... & ... & ... & ... & ... & ... \\ \end{array} \right) \cdot \left( \begin{array}{c} \left[2\right]^{0}\\ \left[2\right]^{1}\\ \left[2\right]^{2}\\ \left[2\right]^{3}\\ \left[2\right]^{4}\\ \left[2\right]^{5}\\ \left[2\right]^{6}\\ ...\\ ...\\ \end{array} \right) = \left( \begin{array}{c} {\bf 1}\\ {\bf 3}\\ {\bf 6}\\ {\bf 9}\\ {\bf 12}\\ {\bf 15}\\ {\bf 18}\\ {\bf ...}\\ {\bf ...}\\ \end{array} \right) \hspace{1cm} .\] \subsubsection{The ideal used in the quotient polynomial ring consists of two monomials} Suppose $M=k[\bar{x}]/ \langle u,v \rangle$ with ${\rm deg}(u)=d_u$ and ${\rm deg}(v)=d_v$. Again, the key to computing ${\rm HF}(M,b)$ is finding a way to account exactly once for the monomials of degree $b$ belonging to the ideal $\langle u,v \rangle$.
There are three possibilities, which the reader may visualize as the Venn diagram of two overlapping regions: one corresponding to $\langle u \rangle$ and the other to $\langle v \rangle$. In fact, $q \in \langle u \rangle$ and $q \in \langle v \rangle$ iff $q \in \langle{\rm lcm }(u,v) \rangle$. To see this, observe that $u \mid q$ and $v \mid q \Leftrightarrow {\rm lcm }(u,v) \mid q$. Then by using the inclusion-exclusion principle we have \begin{center} ${\rm HF}(\langle u,v \rangle , b)= {\rm HF} ( \langle u \rangle ,b) + {\rm HF} ( \langle v \rangle ,b) - {\rm HF} ( {\rm lcm}(u,v),b).$ \end{center} Since every ideal on the right-hand side is a principal ideal, we can apply lemma \ref{HF for principal ideal} and the ``rank--nullity'' reasoning from section 1 to get \[ {\rm HF}(M,b)= \begin{cases} F(a,b), & \text {for } 0\leq b< d_{\min} \\ F(a,b) -F(a,b-d_{\min}), &\text{for } d_{\min} \leq b < d_{\max}\\ F(a,b) -F(a,b-d_u) -F(a,b-d_v), &\text{for } d_{\max} \leq b < d_{\rm lcm}\\ F(a,b) -F(a,b-d_u) -F(a,b-d_v) +F(a,b- d_{\rm lcm}), &\text{for } b \geq d_{\rm lcm} \\ \end{cases} \] where $d_{\rm lcm} = {\rm deg}({\rm lcm}(u, v))$, $d_{\min}=\min(d_u, d_v)$ and $d_{\max}=\max(d_u, d_v)$. \begin{example}\label{two monomial ideal case} We are looking for the Hilbert function of the module \begin{center} $M=k[x,y,z]/\langle x^{2}y,xz^{2} \rangle$. \end{center} \end{example} Combining equation (\ref{principal ideal case}) with inclusion-exclusion gives the following formula for this quotient ring \[ {\rm HF}(M,b)= \begin{cases} F(a,b), & \text {for}\ 0\leq b\leq 2 \\ F(a,b)-2F(a,b-3)+F(a,b-5), & \text {for}\ b\geq 3 \end{cases} \] Notice here that ${\rm lcm}(x^{2}y, xz^{2})=x^{2}yz^{2}$.
Therefore, the Hilbert function of the module $M=k[x,y,z]/\langle x^{2}y,xz^{2} \rangle$ is expressed by the following sequence of numbers \begin{center} \begin{tabular}{ l l l l l l l l l} ${\rm HF}(M,b)$:& {\bf 1} & {\bf 3} & {\bf 6} & {\bf 8} & {\bf 9} & {\bf 10} & {\bf 11} & {\bf ...} \\ \end{tabular} \end{center} In the next section, we make full use of the \emph{Principle of Inclusion and Exclusion} to develop what we will call the \emph{lcm--lattice method} to handle any monomial ideal with a finite number of monomials. Before moving to the next section, let us take advantage of this example to illustrate an alternative which accounts for the monomials of degree $b$ in the ideal only once. The principle of inclusion-exclusion is a sequence of corrections for alternating over-counts and under-counts corresponding to the regions of the Venn diagram where two, three, four, etc.\ sets overlap. Our goal here is to partition the union of all sets in the Venn diagram into disjoint sets so as to avoid alternating inclusions with exclusions. This is accomplished by ordering our sets $E_1, E_2, E_3, \ldots$ and then letting $F_1 = E_1, F_2=E_2 \setminus F_1, F_3=E_3 \setminus (F_1 \cup F_2), \ldots$. This approach is conceptually similar to the Gram-Schmidt process in linear algebra. Let $u=x^2y$ and $v=xz^2$. Let also $E_1= \langle u \rangle$ and $E_2= \langle v \rangle$; then $F_1=E_1$ and $F_2=\{ \text{all monomials which are multiples of $v$ but not of $u$}\}$. Since $E_1$ and $E_2$ are graded modules, $F_1$ and $F_2$ are graded sets. To illustrate this further: in degree $4$, $E_1$ and $E_2$ are disjoint, so no monomials of degree $4$ need to be excluded from $F_2$. In degree $5$, however, we have for example $uz^2=vxy$. In this case, we want to count $x^2yz^2$ as a multiple of $u$ (i.e. belonging to $F_1$) but prevent it from being counted as a multiple of $v$.
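The disjoint sets $F_1$ and $F_2$ are straightforward to enumerate by machine. The sketch below (illustrative Python; the helper names are ours) counts $|F_1|$ and $|F_2|$ for $u=x^2y$, $v=xz^2$ and recovers ${\rm HF}(M,b)$ as $F(3,b)-|F_1|-|F_2|$:

```python
from itertools import product

def monomials_of_degree(nvars, b):
    """Degree-b monomials in nvars variables, as exponent vectors."""
    return [e for e in product(range(b + 1), repeat=nvars) if sum(e) == b]

def divides(g, m):
    return all(gi <= mi for gi, mi in zip(g, m))

u = (2, 1, 0)  # x^2 y
v = (1, 0, 2)  # x z^2

hf_values = []
for b in range(7):
    all_b = monomials_of_degree(3, b)
    F1 = [m for m in all_b if divides(u, m)]                        # multiples of u
    F2 = [m for m in all_b if divides(v, m) and not divides(u, m)]  # E_2 \ F_1
    hf_values.append(len(all_b) - len(F1) - len(F2))
print(hf_values)
```

The printed list reproduces the sequence $1, 3, 6, 8, 9, 10, 11$ obtained above by inclusion-exclusion.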
Observe that $\frac{ {\rm lcm}(x^2y,xz^2)}{xz^2}=\frac{x^2yz^2}{xz^2}=xy$, which is known as a syzygy. Thus, in example \ref{two monomial ideal case}, we can generate the table below \vspace{5mm} \begin{center} \begin{tabular}{l|l|l|l|l|l|l|l|l} { Degree $b$} & 0 & 1 & 2 & 3 & 4 &5 &6 & ...\\ \hline {{\rm HF}\{k[x,y,z]\}} & 1 & 3 & 6 & 10 &15 &21 &28 & ... \\ \hline {\bf $F_1$} & & & & $u$ & $ux, uy, uz,$ & $ux^2, uy^2, uz^2, $ & $ux^3, uy^3, uz^3, uxyz,$ & ...\\ & & & & & &$uxy, uyz, uxz$&$ux^{2}y, uxy^{2}, ux^{2}z,$&...\\ & & & & & & &$uy^{2}z, uxz^2, uyz^2$&...\\ \hline {\bf $F_2$} & & & & $v$ & $vx, vy, vz$ & $vx^2, vy^2, vz^2, $ & $vx^3, vy^3, vz^3, $ & ... \\ & & & & & & $vxz, vyz$ &$vx^{2}z ,vy^{2}z, vxz^2, vyz^2$&...\\ \hline {\bf $|F_1|$} & & & & 1 &3 &6 &10 & ... \\ \hline {\bf $|F_2|$} & & & & 1 &3 &5 &7 & ... \\ \hline {\bf {\rm G(a,b)} } & 1 & 3 & 6 & 8 &9 &10 &11 & ... \\ \end{tabular} \end{center} \vspace{8mm} where $G(a,b)={\rm HF} \{M\}={\rm HF}\{k[x,y,z]\}-|F_1|-|F_2|$. \newpage \section{LCM--Lattice Method} As discussed at the end of the previous section, the challenge remains to find the Hilbert function of a monomial ideal with more than one monomial generator. In this section, we first start with some basic theory and then use the well known \emph{Principle of Inclusion and Exclusion} (which the reader will find in the standard reference \cite{van2001course}) to validate the method developed. \begin{proposition} $\langle u \rangle \cap \langle v \rangle=\langle {\rm lcm}(u,v) \rangle $ \end{proposition} \begin{proof} $p \in \langle u \rangle \cap \langle v \rangle \Leftrightarrow u\mid p$ and $v \mid p \Leftrightarrow {\rm lcm }(u,v) \mid p$ \end{proof} \begin{corollary} $\langle p_1 \rangle \cap \langle p_2 \rangle \cap \langle p_3 \rangle \cap ...\cap \langle p_r \rangle =\langle {\rm lcm}(p_1,p_2,p_3,...,p_r) \rangle $ \end{corollary} \begin{proof}(By Induction) \begin{itemize} \item The base case $r=2$ is the proposition above.
\item Suppose $\displaystyle \bigcap _{i=1}^{r-1} \langle p_i \rangle=\langle {\rm lcm}(p_1,p_2,p_3,...,p_{r-1}) \rangle $ then \begin{center} $\displaystyle \bigcap _{i=1}^{r} \langle p_i \rangle=\displaystyle \bigcap _{i=1}^{r-1} \langle p_i \rangle \cap \langle p_r \rangle= \langle {\rm lcm}({\rm lcm }(p_1,p_2,p_3,...,p_{r-1}),p_r) \rangle= \langle {\rm lcm }(p_1,p_2,p_3,...,p_r) \rangle.$ \end{center} \end{itemize} \end{proof} Further use of inclusion-exclusion, this time with $n$ monomials and writing $|\langle q \rangle|$ for the number of monomials of degree $b$ in $\langle q \rangle$, gives \begin{eqnarray} {\rm HF}( \langle p_1,p_2,p_3,...,p_n \rangle ,b)&=& |\{ \text{monomials of degree }b\text{ in }\langle p_1,p_2,p_3,...,p_n \rangle \}| \nonumber \\ & = &|\{ \text {monomials of degree } b \text{ in } \langle p_1 \rangle \text{ OR } \langle p_2 \rangle \text{ OR}...\text{ OR } \langle p_n \rangle \}| \nonumber \\ &=& \displaystyle\sum_{1 \leq j_1 \leq n} |\langle p_{j_1} \rangle | - \displaystyle\sum_{1 \leq j_1 <j_2 \leq n} |\langle {\rm lcm}(p_{j_1},p_{j_2})\rangle | \nonumber\\ & & + \displaystyle\sum_{1 \leq j_1 <j_2 <j_3 \leq n} |\langle {\rm lcm}(p_{j_1},p_{j_2},p_{j_3}) \rangle|\nonumber\\ & & +............................................................. \nonumber\\ & & + (-1)^{r-1} \cdot \displaystyle\sum_{1 \leq j_1 <j_2<...<j_r \leq n} |\langle {\rm lcm}(p_{j_1},p_{j_2},...,p_{j_r})\rangle | \nonumber\\ & & +............................................................. \nonumber\\ & & + (-1)^{n-1} \cdot |\langle {\rm lcm}(p_{1},p_{2},...,p_{n})\rangle | \nonumber\\ &=& \displaystyle\sum_{r=1}^{n}\left((-1)^{r-1} \cdot \displaystyle\sum_ {1 \leq j_1 <j_2<...<j_r \leq n} | \langle {\rm lcm} (p_{j_1},p_{j_2},...,p_{j_r})\rangle|\right ).\nonumber \ \end{eqnarray} Set $d_{{j_1}{j_2}{j_3}...{j_r}}= {\rm deg} ( {\rm lcm}(p_{j_1},p_{j_2},p_{j_3},...,p_{j_r}))$, where $1 \leq r \leq n$ and $1 \leq j_1 < j_2 <j_3<...<j_r \leq n$. To facilitate expressing the Hilbert function, let us extend $F$ by setting $F(a,b)=0$ if $ b<0$.
Then $${\rm HF}( \langle p_1,p_2,p_3,...,p_n \rangle ,b) =\displaystyle \sum_{r=1}^{n}\left((-1)^{r-1} \cdot \displaystyle\sum_ {1 \leq j_1 <j_2<...<j_r \leq n} F(a,b-d_{{j_1}{j_2}{j_3}...{j_r}})\right)$$ \begin{center} and \end{center} $${\rm HF}(k[ \bar{x}] / \langle p_1,p_2,p_3,...,p_n \rangle ,b)= F(a,b)- \displaystyle \sum_{r=1}^{n}\left((-1)^{r-1} \cdot \displaystyle\sum_ {1 \leq j_1 <j_2<...<j_r \leq n} F(a,b-d_{{j_1}{j_2}{j_3}...{j_r}})\right).$$ The lcm--lattice method described below is based on the above argument. The starting point of building up the lcm--lattice is what we call layer 1: a row containing all the monomials generating the given ideal. Taking the lcm of all the pairs we create the $2^{\rm nd}$ layer. Next, we find the lcm of all the triples in layer 1 and we call this layer 3. Following the same pattern, we create as many layers as the number of monomials generating the ideal. The last layer will contain the lcm of all the monomials given in the ideal. If the ideal $I$ has $n$ monomial generators then the numbers of monomials in layers 1, 2, 3, ..., n of the lcm-lattice will be $\dbinom{n}{1},\,\dbinom{n}{2},\,\dbinom{n}{3},\,\ldots, \dbinom{n}{n}$ respectively. These values are those found in the $n^{\rm th}$ row of the Pascal triangle from $\dbinom{n}{1}$ onward. The following examples give a nice view of the above description. \begin{example} Find the Hilbert function of $M=R/\langle x^2,y^3 \rangle$ where $R=k[x,y,z]$. \end{example} In the case that we have two monomials in the ideal, the lcm lattice is simple. Start by building up the lcm lattice. Layer 1 is the row that has all the monomials of the ideal.
Afterwards, we take the lcm of the two monomials and we have the following \begin{center} \begin{tabular}{l c l } $x^2$&$y^3$& layer 1\\ ~~~~~$x^2y^3$&& layer 2 \end{tabular} \end{center} According to the above lcm lattice, we are left with a lattice of monomials on which we use inclusion-exclusion at each row to produce the alternating sum that computes the Hilbert function \begin{center} \scalebox{0.85}{ \begin{tabular}{ l | l | l | l } ${\rm HF}\{R\}$ & layer 1(-) & layer 2($+$) & ${\rm HF} \{M\}$\\ \hline ~1 & ~~~~0 ~~~~~0 & ~~~0 & ~~~ $\bf 1$ \\ \hline ~3 & ~~~~0 ~~~~~0 & ~~~0 & ~~~ $\bf 3$\\ \hline ~6 & ~~~-1 ~~~~~0 & ~~~0 & ~~~ $\bf 5$ \\ \hline 10 & ~~~-3 ~~~~-1 & ~~~0 & ~~~ $\bf 6$ \\ \hline 15 & ~~~-6 ~~~~-3 & ~~~0 & ~~~ $\bf 6$ \\ \hline 21 & ~~~-10 ~~~-6 & ~~~1 & ~~~ $\bf 6$ \\ \hline 28 & ~~~-15 ~~~-10 & ~~~3 & ~~~ $\bf 6$ \\ \hline 36 & ~~~-21 ~~~-15 & ~~~6 & ~~~ $\bf 6$ \\ \hline 45 & ~~~-28 ~~~-21 & ~~~10 & ~~~ $\bf 6$ \\ \hline 55 & ~~~-36 ~~~-28 & ~~~15 & ~~~ $\bf 6$ \\ \hline ~.. & ~~~~.. ~~~~~.. & ~~~.. & ~~~~.. \\ \end{tabular}} \end{center} \begin{example} Given the quotient ring $M=k[x,y,z]/\langle xz,yz,x^2y \rangle$, find its Hilbert function. \end{example} Start by building up the lcm lattice. \begin{center} \begin{tabular}{c c c c} ${\bf x^2y}$&$xz$&$yz$&layer 1\\ ${\bf{x^2yz}}$&$x^2yz$&${\bf xyz}$&layer 2\\ $$&${\bf{x^2yz}}$&&layer 3\\ \end{tabular} \end{center} So we are left with the above lattice of monomials on which we use inclusion-exclusion at each row to produce the alternating sum that computes the Hilbert function. Let $R=k[x,y,z]$. Since there are monomials of the same degree in adjacent rows of the lcm-lattice, we exclude these pairs of monomials from the alternating sum in our table. The cancellation is justified because the net contribution of such a pair to the alternating sum is zero. The monomials that are canceled are displayed in bold face.
\begin{center} \begin{tabular}{ c|c|c|c } ${\rm HF}\{R\}$ & layer 1(-) & layer 2($+$) & ${\rm HF}\{M\}$\\ \hline 1 &0 &0 & $\bf 1$ \\ \hline 3 & 0 &0 & $\bf 3$\\ \hline 6 &-2 & 0 & $\bf 4$ \\ \hline 10 & -6 & 0 & $\bf 4$ \\ \hline 15 & -12 & 1 & $\bf 4$ \\ \hline 21 & -20 & 3 & $\bf 4$ \\ \hline 28 & -30 & 6 & $\bf 4$ \\ \hline 36 & -42 & 10 & $\bf 4$ \\ \hline 45 & -56 &15 & $\bf 4$ \\ \hline 55 & -72 &21 & $\bf 4$ \\ \hline ..&.. & .. & .. \\ \hline .. & .. & ..& .. \end{tabular} \end{center} \newpage \begin{example} Compute the Hilbert function of the module \begin{center} $M=k[x,y,z]/\langle x^2y^3z,xz^3,xy^4z,x^2z^2 \rangle$. \end{center} As before, for the sake of simplicity, we will let $R=k[x,y,z]$. \end{example} Start by building up the lcm lattice and we have \begin{center} \begin{tabular}{c c c c c c c} $ $&$x^2y^3z$&$xz^3$&$xy^4z$&$x^2z^2$&$$& layer 1\\ ${\bf x^2y^3z^3}$&$x^2y^4z$&$x^2y^3z^2$&$xy^4z^3$&$x^2z^3$&${\bf x^2y^4z^2}$& layer 2\\ $ $&${\bf x^2y^4z^3}$&${\bf x^2y^3z^3}$&$x^2y^4z^3$&${\bf x^2y^4z^2}$&$$& layer 3\\ $$&$$&$$&${\bf x^2y^4z^3}$&$$&$$&layer 4\\ \end{tabular} \end{center} Again, we typeset in bold face the monomials that are canceled and obtain the following table \begin{center} \begin{tabular}{ l | l | l | l | l } ${\rm HF}\{R\}$ & ~~~layer 1& ~~~~layer 2 &layer 3&${\rm HF}\{M\}$\\ \hline ~1 & ~~~~0 ~~~~~0 & ~~~0 ~~~0 ~~~0 & ~~~ 0&~~~ $\bf 1$ \\ \hline ~3 & ~~~~0 ~~~~~0 & ~~~0 ~~~0 ~~~0 &~~~ 0& ~~~ $\bf 3$\\ \hline ~6 & ~~~~0 ~~~~~0 & ~~~0 ~~~0 ~~~0 &~~~ 0& ~~~ $\bf 6$ \\ \hline 10 & ~~~~0 ~~~~~0 & ~~~0 ~~~0 ~~~0 &~~~ 0& ~~ $\bf 10$ \\ \hline 15 & ~~~-2 ~~~~~0 & ~~~0 ~~~0 ~~~0&~~~ 0& ~~~$\bf 13$ \\ \hline 21 & ~~~-6 ~~~~~0 & ~~~1 ~~~0 ~~~0 &~~~ 0& ~~ $\bf 16$ \\ \hline 28 & ~~-12 ~~~-2 & ~~~3 ~~~0 ~~~0 &~~~ 0& ~~ $\bf 17$ \\ \hline 36 & ~~-20 ~~~-6 & ~~~6 ~~~2 ~~~0&~~~ 0& ~~ $\bf 18$ \\ \hline 45 & ~~-30 ~~-12 & ~~10 ~~~6 ~~1 &~~~ 0& ~~ $\bf 20$ \\ \hline 55 & ~~-42 ~~-20 & ~~15 ~~12 ~~3 &~~~ -1& ~~ $\bf 22$ \\ \hline ~.. & ~~~~.. 
~~~~~.. & ~~~..~~~.. ~~~..& ~~~~..& ~~~~.. \\ \hline ~.. & ~~~~.. ~~~~~.. & ~~~..~~~..~~~.. & ~~~~..& ~~~~.. \\ \end{tabular} \end{center} \section{The Syzygy Method} In this section we extend the second approach from Example \ref{two monomial ideal case} to handle ideals with finitely many monomial generators. When implemented as a recursive algorithm, this method breaks down a Hilbert function computation into a sum--difference expression of Hilbert functions, all of which involve principal ideals. The computation is finished by invoking Corollary \ref{principal ideal case}. Unlike the lcm-lattice method, the principal ideals used are generated by always taking syzygies of pairs of monomials (we never consider three or more of the given monomials in a computational step). The key recursive step is given by the following theorem. \begin{theorem}{\rm (Syzygy method)}\label{Syzygy method} Let $M=k[\bar {\rm x}]/ I $, where $I=\langle p_1, p_2, p_3,..., p_r \rangle$ is a monomial ideal generated by $p_1,p_2,p_3,...,p_r \in k[\bar {\rm x}]$. Then, using the notation $d_j={\rm deg}(p_j)$ and $m_{ij}=\frac{{\rm lcm}(p_i,p_j)}{p_j} \in k[\bar {\rm x}]$ with $i<j$, we have \begin{center} \small{${\rm HF}(M,t)=F(a,t)-F(a,t-d_1)-\displaystyle\sum_{j=2}^{r} {\rm HF}(k[\bar {\rm x}]/\langle m_{1j}, m_{2j}, m_{3j},..., m_{(j-1)j}\rangle, t-d_{j})$}. \end{center} \end{theorem} \begin{proof} (By Induction) \begin{itemize} \item Base case: for $r=1$ the claim holds by Corollary \ref{principal ideal case}.
\item Suppose $r>1$ and \vspace{-1mm} \begin{eqnarray} {\rm HF}(k[\bar {\rm x}]/\langle p_1, p_2, p_3,..., p_{r-1}\rangle,t)&=&F(a,t)-F(a,t-d_1)\nonumber\\ & &-\displaystyle\sum_{j=2}^{r-1} {\rm HF}(k[\bar {\rm x}]/\langle m_{1j}, m_{2j}, m_{3j},..., m_{(j-1)j}\rangle, t-d_{j}).\nonumber \end{eqnarray} \item We show that \vspace{-1mm} \begin{eqnarray} {\rm HF}(k[\bar {\rm x}]/\langle p_1, p_2, p_3,..., p_{r} \rangle,t)&=&{\rm HF}(k[\bar {\rm x}]/\langle p_1, p_2, p_3,..., p_{r-1} \rangle,t)\nonumber\\ & &- {\rm HF}(k[\bar {\rm x}]/\langle m_{1r}, m_{2r}, m_{3r},..., m_{(r-1)r}\rangle, t-d_{r}).\nonumber \end{eqnarray} A monomial $q \in k[\bar {\rm x}]$ of degree $t$ represents a nonzero element in $k[\bar {\rm x}]/\langle p_1, p_2, p_3,..., p_{r-1} \rangle$ and is zero in $k[\bar {\rm x}]/\langle p_1, p_2, p_3,..., p_{r} \rangle$ if and only if $p_i \nmid q$ for all $1\leq i<r$ and $p_r|q$. If we call the set of all such monomials $\Gamma (t)$, then we have that \begin{center} \small{${\rm HF}(k[\bar {\rm x}]/\langle p_1, p_2, p_3,..., p_{r} \rangle,t)={\rm HF}(k[\bar {\rm x}]/\langle p_1, p_2, p_3,..., p_{r-1} \rangle,t)-|\Gamma (t)|$}. \end{center} A monomial $q \in k[\bar {\rm x}]$ satisfies $q \in \Gamma (t)\Leftrightarrow q=a \cdot p_r$, where $a$ is a monomial in $k[\bar {\rm x}]$ of degree $t-d_r$ and $p_i \nmid a \cdot p_r$ for all $1\leq i<r $. This is equivalent to $m_{ir} \nmid a$ for all $1\leq i<r $.
Since $a$ is a monomial we have that \vspace{-5mm} \begin{align} a \notin \langle m_{ir} \rangle, &\text{ for all } 1\leq i \leq r-1 \nonumber\\ &\Leftrightarrow a \notin \langle m_{1r}, m_{2r}, m_{3r},..., m_{(r-1)r}\rangle \nonumber\\ &\Leftrightarrow a \in k[\bar {\rm x}]/\langle m_{1r}, m_{2r}, m_{3r},..., m_{(r-1)r}\rangle.\nonumber \end{align} Finally, to finish the proof and establish that \begin{center} {$|\Gamma (t)|= {\rm HF}(k[\bar {\rm x}]/\langle m_{1r}, m_{2r}, m_{3r},..., m_{(r-1)r}\rangle, t-d_{r})$} \end{center} we only need to observe that $a$ is uniquely determined by $q \in\Gamma (t)$ and every $a \in k[\bar {\rm x}]/\langle m_{1r}, m_{2r}, m_{3r},..., m_{(r-1)r}\rangle$ uniquely determines a monomial $q$. \end{itemize} \vspace{-5mm} \end{proof} Both the lcm-lattice method and the Syzygy method produce similar formulas for computing the Hilbert function. Next we apply the Syzygy method to establish that the lcm-lattice method holds for a monomial ideal with three monomials. The reader should observe that this confirms that result without the use of inclusion-exclusion. Consider $I$ generated by three (not necessarily distinct) monomials $p_1,p_2,p_3$ with degrees $d_1,d_2,d_3$ respectively.
We need to show that \begin{eqnarray} HF\{k[\bar {\rm x}]/\langle p_1,p_2,p_3 \rangle\}& = &F(a,t)-F(a,t-{\rm deg}(p_1))\nonumber \\ & & -F(a,t-{\rm deg}(p_2))-F(a,t-{\rm deg}(p_3))\nonumber \\ & & +F(a,t-{\rm deg}({\rm lcm}(p_1,p_2)))+F(a,t-{\rm deg}({\rm lcm}(p_2,p_3)))\nonumber\\ & & +F(a,t-{\rm deg}({\rm lcm}(p_1,p_3)))-F(a,t-{\rm deg}({\rm lcm}(p_1,p_2,p_3))).\nonumber \end{eqnarray} By the syzygy method, we obtain the following equality which we call the syzygy equality \begin{eqnarray} HF\{k[\bar {\rm x}]/\langle p_1,p_2,p_3 \rangle\}&=&F(a,t)-F(a,t-d_1)-HF\{k[\bar {\rm x}]/\langle m_{12} \rangle(-d_2)\}\nonumber\\ & &-HF\{k[\bar {\rm x}]/\langle m_{13},m_{23} \rangle(-d_3)\}\nonumber \end{eqnarray} Applying the syzygy method to the third and fourth summands on the right hand side we have \begin{eqnarray} HF\{k[\bar {\rm x}]/\langle m_{12} \rangle(-d_2)\}& = &F(a,t-d_2)-F(a,t-d_2-{\rm deg}(m_{12}))\nonumber \\ & = & F(a,t-d_2)-F(a,t-{\rm deg}({\rm lcm}(p_1,p_2)))\nonumber \end{eqnarray} and \begin{eqnarray} HF\{k[\bar {\rm x}]/\langle m_{13},m_{23} \rangle(-d_3)\}& = &F(a,t-d_3)-F(a,t-d_3-{\rm deg}(m_{13}))\nonumber \\ & &- HF\left\{k[\bar {\rm x}]/\left\langle {\rm lcm}(m_{13},m_{23})m_{23}^{-1} \right\rangle\left(-d_3-{\rm deg}(m_{23})\right)\right\}. 
\nonumber \end{eqnarray} and \vspace{-5mm} \begin{eqnarray} &HF&\{k[\bar{\rm x}]/\langle\frac{{\rm lcm}(m_{13},m_{23})}{m_{23}} \rangle(-d_3-{\rm deg}(m_{23}))\}=\nonumber\\ &= & F(a,t-d_3-{\rm deg}(m_{23}))-F \left(a,t-d_3-{\rm deg}(m_{23})-{\rm deg}\left(\frac{{\rm lcm}(m_{13},m_{23})}{m_{23}}\right)\right)\nonumber \\ &= &F(a,t-{\rm deg}({\rm lcm}(p_2,p_3)))-F (a,t-d_3-{\rm deg}({\rm lcm}(m_{13},m_{23}))) \nonumber \\ &= &F(a,t-{\rm deg}({\rm lcm}(p_2,p_3)))-F(a,t-(d_3+{\rm deg}({\rm lcm}(m_{13},m_{23})))) \nonumber\\ &= &F(a,t-{\rm deg}({\rm lcm}(p_2,p_3)))-F\left(a,t-\left({\rm deg}(p_3)+{\rm deg} \left ({\rm lcm}\left(\frac{{\rm lcm}(p_1,p_3)}{p_3},\frac{{\rm lcm}(p_2,p_3)}{p_3}\right)\right)\right)\right) \nonumber\\ &= &F(a,t-{\rm deg}({\rm lcm}(p_2,p_3)))-F\left(a,t-\left({\rm deg}(p_3)+{\rm deg}\left(\frac{{\rm lcm}(p_1,p_2,p_3)}{p_3}\right)\right)\right) \nonumber\\ &= &F(a,t-{\rm deg}({\rm lcm}(p_2,p_3)))-F(a,t-{\rm deg}({\rm lcm}(p_1,p_2,p_3))). \nonumber \end{eqnarray} Back-substituting the iterated results of the Syzygy method into the syzygy equality produces the same alternating sum as the lcm-lattice method. Thus, we have proved that the lcm-lattice method is valid in this case. \newpage The following examples are based on the Syzygy method. \begin{example} Find the Hilbert function of $M=R/\langle x^2,y^3 \rangle$, where $R=k[x,y,z]$. \end{example} We will only need the syzygy $m_{12}=\frac{{\rm lcm}(x^2,y^3)}{y^3}=\frac{x^2y^3}{y^3}=x^2$. Computing the Hilbert function in this case requires only one use of Theorem \ref{Syzygy method}, which yields the following: \begin{eqnarray}\label{syzygy equality} {\rm HF}\{M\}&=&{\rm HF}\{R\}-{\rm HF}\{R({\rm -deg}(x^2))\}-{\rm HF}\{R/{\langle m_{12} \rangle} ({\rm -deg}(y^3))\}\nonumber\\ &=&{\rm HF}\{R\}-{\rm HF}\{R(-2)\}-{\rm HF}\{R/{\langle x^2 \rangle} (-3)\}.
\end{eqnarray} By Corollary \ref{HF for principal ideal}, the last term in (\ref{syzygy equality}) is shifted by 3, so we have \begin{eqnarray}\label{to sub into syzygy equality} {\rm HF}\{R/{\langle x^2 \rangle} (-3)\}&=&{\rm HF}\{R(-3)\}-{\rm HF}\{R ({\rm -deg}(x^2))(-3)\}\nonumber\\ &=&{\rm HF}\{R(-3)\}-{\rm HF}\{R(-5)\}. \end{eqnarray} Therefore, by substituting equation (\ref{to sub into syzygy equality}) into equation (\ref{syzygy equality}), we have \begin{eqnarray} {\rm HF}\{M\}&=&{\rm HF}\{R\}-{\rm HF}\{R(-2)\}-[{\rm HF}\{R(-3)\}-{\rm HF}\{R(-5)\}]\nonumber\\ &=&{\rm HF}\{R\}-{\rm HF}\{R(-2)\}-{\rm HF}\{R(-3)\}+{\rm HF}\{R(-5)\}.\nonumber \end{eqnarray} The Hilbert function of $M$ is presented in the last column of the following row-generating table \begin{center} \scalebox{1.1}{ \begin{tabular}{ l | l | l | l | l} ${\rm HF}\{R\}$ & ${\rm -HF}\{R(-2)\}$ & ${\rm -HF}\{R(-3)\}$ & ${\rm HF}\{R(-5)\}$ & ${\rm HF}\{M\}$\\ \hline ~1 & ~~~~~0 & ~~~~~0 & ~~~~~~~~0 &~~~ $\bf 1$ \\ \hline ~3 & ~~~~~0 & ~~~~~0 & ~~~~~~~~0 &~~~ $\bf 3$\\ \hline ~6 & ~~~~-1 & ~~~~~0 & ~~~~~~~~0 &~~~ $\bf 5$ \\ \hline 10 & ~~~~-3 & ~~~~-1 & ~~~~~~~~0 &~~~ $\bf 6$ \\ \hline 15 & ~~~~-6 & ~~~~-3 & ~~~~~~~~0 &~~~ $\bf 6$ \\ \hline 21 & ~~~-10 & ~~~~-6 & ~~~~~~~~1 &~~~ $\bf 6$ \\ \hline 28 & ~~~-15 & ~~~-10 & ~~~~~~~~3 &~~~ $\bf 6$ \\ \hline 36 & ~~~-21 & ~~~-15 & ~~~~~~~~6 &~~~ $\bf 6$ \\ \hline 45 & ~~~-28 & ~~~-21 & ~~~~~~~10 &~~~ $\bf 6$ \\ \hline 55 & ~~~-36 & ~~~-28 & ~~~~~~~15 &~~~ $\bf 6$ \\ \hline 66 & ~~~-45 & ~~~-36 & ~~~~~~~21 &~~~ $\bf 6$ \\ \hline 78 & ~~~-55 & ~~~-45 & ~~~~~~~28 &~~~ $\bf 6$ \\ \hline ~.. & ~~~~.. & ~~~~~.. & ~~~~~~~~..&~~~~.. \\ \hline ~.. & ~~~~.. & ~~~~~.. & ~~~~~~~~..&~~~~.. \\ \end{tabular}} \end{center} \vspace{8mm} In the next example, we apply the Syzygy method to quotient rings whose monomial ideal has more than two monomial generators. \begin{example} Denote by $R=k[x,y,z]$.
Find the Hilbert function of the quotient ring $M=R/\langle xz,yz,x^2y \rangle$. Observe that in this example we have the following syzygies. \end{example} \begin{center} $m_{12}=\frac{{\rm lcm}(xz,yz)}{yz}=\frac{xyz}{yz}=x$,\\ $m_{13}=\frac{{\rm lcm}(xz,x^2y)}{x^2y}=\frac{x^2yz}{x^2y}=z$,\\ $m_{23}=\frac{{\rm lcm}(yz,x^2y)}{x^2y}=\frac{x^2yz}{x^2y}=z$. \end{center} \vspace{-2mm} Using the syzygy method, we can express the Hilbert function of $M$ as follows \begin{eqnarray}\label{syzygy equality2} {\rm HF}\{M\}&=&{\rm HF}\{R\}-{\rm HF}\{R({\rm -deg}(xz))\}-{\rm HF}\{R/{\langle m_{12} \rangle} ({\rm -deg}(yz))\}\nonumber\\ & & -{\rm HF}\{R/{\langle m_{13},m_{23} \rangle} ({\rm -deg}(x^2y))\}\nonumber\\ &=&{\rm HF}\{R\}-{\rm HF}\{R(-2)\}-{\rm HF}\{R/{\langle x \rangle} (-2)\}-{\rm HF}\{R/{\langle z \rangle} (-3)\}. \end{eqnarray} The last two terms of (\ref{syzygy equality2}) equal the second row of the Pascal table, suitably shifted, since ${\rm HF}\{R/{\langle x \rangle} (-2)\}\cong {\rm HF}\{k[y,z](-2)\}$\quad and \quad ${\rm HF}\{R/{\langle z \rangle} (-3)\}\cong {\rm HF}\{k[x,y](-3)\}$. Thus we obtain the Hilbert function of $M$ shown in the last column of the table below. \begin{center} \scalebox{0.85}{ \begin{tabular}{ c|c|c|c|c} ${\rm HF}\{R\}$ & ${\rm -HF}\{R(-2)\}$ & ${\rm -HF}\{R/{\langle x \rangle} (-2)\}$ & ${\rm -HF}\{R/{\langle z \rangle} (-3)\}$ & ${\rm HF}\{M\}$\\ \hline ~1 & 0 & 0 & 0 &$\bf 1$ \\ \hline ~3 & 0 & 0 & 0 & $\bf 3$\\ \hline ~6 & -1 & -1 & 0 & $\bf 4$ \\ \hline 10 & -3 & -2 &-1 & $\bf 4$ \\ \hline 15 &-6 &-3 &-2 & $\bf 4$ \\ \hline 21 & -10 & -4 & -3 & $\bf 4$ \\ \hline 28 & -15 &-5 &-4 & $\bf 4$ \\ \hline 36 & -21 & -6 &-5 & $\bf 4$ \\ \hline 45 & -28 &-7 & -6 & $\bf 4$ \\ \hline 55 & -36 & -8 & -7 &$\bf 4$ \\ \hline 66 & -45 & -9 & -8 &$\bf 4$ \\ \hline 78 & -55 & -10 & -9 & $\bf 4$ \\ \hline .. & .. & .. &..&..
\\ \end{tabular}} \end{center} \begin{example} Compute the Hilbert function of the module \begin{center} $M=k[x,y,z]/\langle x^2z^2,xz^3,xy^4z,x^2y^3z \rangle$. \end{center} \end{example} We denote $R=k[x,y,z]$, as before. Then we compute the following list of relevant syzygies: \begin{align} m_{12}=&\frac{x^2z^3}{xz^3}=x,\nonumber\\ m_{13}=&\frac{x^2y^4z^2}{xy^4z}=xz,\nonumber\\ m_{23}=&\frac{xy^4z^3}{xy^4z}=z^2,\nonumber\\ m_{14}=&\frac{x^2y^3z^2}{x^2y^3z}=z,\nonumber\\ m_{24}=&\frac{x^2y^3z^3}{x^2y^3z}=z^2,\nonumber\\ m_{34}=&\frac{x^2y^4z}{x^2y^3z}=y.\nonumber \end{align} Based on the syzygy method we have \begin{eqnarray}\label{syzygy equality3} {\rm HF}\{M\}&=&{\rm HF}\{R\}-{\rm HF}\{R({\rm -deg}(x^2z^2))\}-{\rm HF}\{R/{\langle x \rangle} (-{\rm deg}(xz^3))\}\nonumber\\ & &-{\rm HF}\{R/{\langle xz,z^2 \rangle} (-{\rm deg}(xy^4z))\}-{\rm HF}\{R/{\langle z,z^2,y \rangle} (-{\rm deg}(x^2y^3z))\}\nonumber\\&=&{\rm HF}\{R\}-{\rm HF}\{R(-4)\}-{\rm HF}\{R/{\langle x \rangle} (-4)\}-{\rm HF}\{R/{\langle xz,z^2 \rangle} (-6)\}\nonumber\\ & &-{\rm HF}\{R/{\langle z,z^2,y \rangle} (-6)\}. \end{eqnarray} From (\ref{syzygy equality3}) we can see that \begin{equation}\label{syzygy term1} {\rm HF}\{R/{\langle x \rangle} (-4)\}\cong {\rm HF}\{k[y,z](-4)\} \end{equation} and \begin{equation}\label{syzygy term2} {\rm HF}\{R/{\langle z,z^2,y \rangle} (-6)\}\cong {\rm HF}\{R/{\langle y,z \rangle} (-6)\}\cong {\rm HF}\{k[x](-6)\}. \end{equation} Therefore, equation (\ref{syzygy term1}) is given by the $2^{\rm nd}$ row of the Pascal table shifted down by four, and equation (\ref{syzygy term2}) is given by the $1^{\rm st}$ row of the Pascal table shifted down by six. Moreover, in order to find the Hilbert function of $M$ we need to find ${\rm HF}\{R/{\langle xz,z^2 \rangle} (-6)\}$. Applying again the syzygy method to the fourth summand on the right hand side of (\ref{syzygy equality3}), we have \begin{center} $m_{12}=\frac{xz^2}{z^2}=x$.
\end{center} Observe that the shifting is equally distributed in all the terms as follows \begin{eqnarray} \label{syzygy term5} {\rm HF}\{R/\langle xz,z^2 \rangle(-6)\}& = &{\rm HF}\{R(-6)\}-{\rm HF}\{R({\rm -deg}(xz))(-6)\}-{\rm HF}\{R/{\langle x \rangle} (-{\rm deg}(z^2))(-6)\}\nonumber\\&=&{\rm HF}\{R(-6)\}-{\rm HF}\{R(-2)(-6)\}-{\rm HF}\{R/{\langle x \rangle} (-2)(-6)\}\nonumber\\&=&{\rm HF}\{R(-6)\}-{\rm HF}\{R(-8)\}-{\rm HF}\{R/{\langle x \rangle} (-8)\}\nonumber\\&=&{\rm HF}\{R(-6)\}-{\rm HF}\{R(-8)\}-{\rm HF}\{k[y,z] (-8)\}\nonumber \end{eqnarray} This way we obtain the following row-generating table \begin{center} \scalebox{0.9}{ \begin{tabular}{ c|c|c|c} ${\rm HF}\{R(-6)\}$ & ${\rm -HF}\{R(-8)\}$ & ${\rm -HF}\{k[y,z] (-8)\}$ & ${\rm HF}\{R/\langle xz,z^2 \rangle(-6)\}$\\[-3pt] \hline 0 & 0 & 0 & $0$ \\[-3pt] \hline 0 &0 &0 & $0$\\[-3pt] \hline 0 &0 & 0 & $ 0$ \\[-3pt] \hline 0 &0 &0 & $ 0$ \\[-3pt] \hline 0 & 0 &0 & $0$ \\[-3pt] \hline 0 & 0 & 0 & $0$ \\[-3pt] \hline 1 & 0 &0 & $1$ \\[-3pt] \hline 3 &0 &0 & $3$ \\[-3pt] \hline 6 &-1 &-1 & $4$ \\[-3pt] \hline 10 & -3 & -2 & $5$ \\[-3pt] \hline 15 & -6 & -3 & $6$ \\[-3pt] \hline .. & .. &.. &.. \\[-3pt] \hline .. & .. &.. &.. 
\end{tabular}} \end{center} \vspace{4mm} Substituting now (\ref{syzygy term1}), (\ref{syzygy term2}), as well as the last column of the above table, into (\ref{syzygy equality3}), we compute the Hilbert function of $M$ \\[7pt] \begin{center} \scalebox{0.82}{ \begin{tabular}{ c | c | c | c | c | c } ${\rm HF}\{R\}$ & ${\rm -HF}\{R(-4)\}$ & ${\rm -HF}\{k[y,z](-4)\}$ & ${\rm -HF}\{R/\langle xz,z^2\rangle(-6)\}$ & ${\rm -HF}\{k[x](-6)\}$ & ${\rm HF}\{M\}$\\ \hline 1 & 0 & 0 & 0 & 0 &$\bf 1$ \\ \hline 3 & 0 & 0 & 0 & 0 &$\bf 3$ \\ \hline 6 & 0 & 0 & 0 & 0 &$\bf 6$ \\ \hline 10 & 0 & 0 & 0 & 0 &$\bf 10$ \\ \hline 15 & -1 & -1 & 0 & 0 &$\bf 13$ \\ \hline 21& -3 & -2 & 0 & 0 &$\bf 16$ \\ \hline 28 & -6 & -3 & -1 & -1 &$\bf 17$ \\ \hline 36 &-10 & -4 & -3 & -1 &$\bf 18$ \\ \hline 45 & -15 & -5 & -4 & -1 &$\bf 20$ \\ \hline 55 & -21 & -6 & -5 & -1 &$\bf 22$ \\ \hline 66 & -28 & -7 & -6 & -1 &$\bf 24$ \\ \hline .. & .. & .. & ..&..&.. \\ \hline .. & .. & .. & ..&..&.. \end{tabular}}\\[15pt] \end{center} \section{Syzygy method via homological algebra} The short exact sequence that involves $\phi_{x_a}:=\text{multiplication by }x_a$ (see \cite{villarreal2015monomial}, page 98) works well with the row-by-row assembly of a Hilbert function table. That is because the key homomorphism in the short exact sequence is multiplication by a variable followed by the natural projection. Consequently, the last non-zero object of the short exact sequence is the cokernel of $\phi_{x_a}$. This cokernel, as we saw in section 2, turns out to be the quotient ring corresponding to the row in the Hilbert function table immediately preceding the introduction of the variable $x_a$. In other words, of the two Hilbert function sequences that the short exact sequence needs to generate the Hilbert function of $k[x_1,x_2,...,x_a]/I_a$, one of them (the right-most) is the Hilbert function of $k[x_1,x_2,...,x_{a-1}]/I_{a-1}$.
Therefore, any remaining difficulty is confined to finding the Hilbert function of the kernel of $\phi_{x_a}$. In this section, we use the same set up as in section 2. Let $S=\{p_1,p_2,p_3,...,p_r\}$, where $p_1,p_2,p_3,...,p_r$ are monomials in the variables $x_1,x_2,...,x_d$. Extend this set of variables to an infinite set of variables $x_1,x_2,...,x_d,x_{d+1},...$. For an integer value $a$, let $S_a=\{p_i \in S:p_i \in k[x_1,x_2,...,x_a]\}$. Re-index, if necessary, the set $S$ such that \begin{enumerate} \item{$S_{a} \subseteq S_{a'}$ if $a \leq a'$, with the elements of $S_a$ listed first.} \item{For $p_i,\,p_j \in S_a$, $j > i$ only if the highest power of $x_a$ dividing $p_i$ also divides $p_j$.} \end{enumerate} The reader should observe that the first requirement of this re-indexing of the generators of $I$ has the purpose of introducing the generators for the ideals $I_a$ in consecutive order as the variables $x_a$ are introduced one-by-one. The second criterion ensures that, as the set $S_{a-1}$ is enlarged to $S_a$, the new monomials are ordered in (non-strict) increasing order of the power of $x_a$; this guarantees that the variable $x_a$ does not appear in the syzygies we might need to compute as we generate the $a^{\rm th}$ row of the Hilbert table. Also, if $S_a=\emptyset$ then we set $I_a=0$; otherwise we set $I_a= \langle p_i \, | \, p_i \in S_a\rangle$. Let $M_a=k[x_1,x_2,...,x_a]/I_a$. Construct an infinite array whose $a^{\rm th}$ row is the sequence of Hilbert function values of $M_a$.
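The bookkeeping just described (encoding monomials as exponent tuples in the chosen variable order, forming the sets $S_a$ and $U_{x_a}$, and enforcing the two re-indexing criteria) can be sketched in Python; the helper names below are ours:

```python
def support_level(p):
    # Smallest a with p in k[x_1,...,x_a]: position of the last nonzero exponent.
    return max((i + 1 for i, e in enumerate(p) if e > 0), default=0)

def reindex(gens):
    # Criterion 1: generators lying in S_a precede those first appearing later.
    # Criterion 2: among generators first appearing in S_a, sort by the power of x_a.
    return sorted(gens, key=lambda p: (support_level(p), p[support_level(p) - 1]))

def S(gens, a):
    # S_a: generators involving only the first a variables.
    return [p for p in gens if support_level(p) <= a]

def U(gens, a):
    # U_{x_a}: generators first appearing in S_a, divided by x_a.
    return [p[:a - 1] + (p[a - 1] - 1,) + p[a:]
            for p in gens if support_level(p) == a]

# Example of the last section, variables ordered (y, x, z):
# I = <y^6, x^3y^5, x^2y^2z^2, x^3z, x^2yz^3> as listed in the text.
gens = reindex([(6, 0, 0), (5, 3, 0), (2, 2, 2), (0, 3, 1), (1, 2, 3)])
print(U(gens, 3))
# [(0, 3, 0), (2, 2, 1), (1, 2, 2)], i.e. U_z = {x^3, x^2y^2z, x^2yz^2}
```

Note that `reindex` swaps $x^2y^2z^2$ and $x^3z$, exactly as required by the second criterion in the example below.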
Consider the following \emph{short exact sequence}, where $\phi_{x_a}$ is multiplication by $x_a$, the module $M_a=k[x_1, x_2, \ldots, x_a]/I_a$, and $(0:x_a)_{M_a}=\ker \phi_{x_a}$, $$0 \rightarrow (0:x_a)_{M_a}(-1)\rightarrow M_a(-1)\overset{\phi_{x_a}}\rightarrow M_a\rightarrow M_a/x_aM_a\rightarrow 0.$$ Set $S_0=\emptyset$ and for $a \geq 1$, if $S_{a-1} \subsetneq S_a$ set $$U_{x_a}=\{q_i=\frac{p_i}{x_a} \, : \,p_i \in S_a \setminus S_{a-1}\}.$$ If $S_{a-1}=S_a$ then set $U_{x_a}=\emptyset$. \begin{lemma} $(0:x_a)_{M_a}=\langle q_i:q_i \in U_{x_a} \rangle _{M_a}$. \end{lemma} \begin{proof} We have $U_{x_a}=\emptyset \Leftrightarrow x_a \nmid p_i$ for all $p_i \in S_a \Leftrightarrow x_ag \neq 0$ for every nonzero $g \in M_a$.\\ If $ U_{x_a} \neq \emptyset$ the following equivalence holds: \begin{align} x_ag=0 \text { in } M_a &\Leftrightarrow p_i \mid x_ag \text{ for some }p_i \in S_a \setminus S_{a-1}\nonumber\\ &\Leftrightarrow q_i \mid g \text{ for some } q_i \in U_{x_a}\nonumber\\ &\Leftrightarrow g \in \langle q_i \,:\,q_i \in U_{x_a}\rangle\nonumber \end{align} \end{proof} \begin{remark} Observe that if $U_{x_a}= \emptyset$ then from the above lemma it follows that $(0:x_a)_{M_a}=0$. \end{remark} Using the same notation for syzygies as in the previous section, namely $m_{ij}=\frac{{\rm lcm} (p_i, p_j)}{p_j}$, we now state the following lemma. \begin{lemma} A non-zero monomial $g \in (0:x_a)_{M_a}$ can be written as follows for one and only one $q_i \in U_{x_a}$, \begin{enumerate} \item{$g=\alpha_1q_1$ if $q_1 \in U_{x_a}$} \item{$g=\alpha_jq_j$ if $q_j \in U_{x_a}$ and $m_{ij}\nmid \alpha_j$ for all $1 \leq i < j$} \end{enumerate} and conversely any $g$ satisfying one of the equations above belongs to $(0:x_a)_{M_a}$. \end{lemma} \begin{proof} By the previous lemma, all that is left to show is uniqueness.\\ Suppose $g \in (0:x_a)_{M_a}$, and let $i$ be the smallest index such that $q_i \mid g$.
Then for any $1 \leq i' < i$, $g$ cannot be written as $g=\alpha_{i'}q_{i'}$.\\ If $i < j$ and $q_j \mid g$ then \begin{align} \alpha_{j}=\frac{g}{q_j}\text{ but }q_i \mid g \text{ and }q_j \mid g &\Rightarrow {\rm lcm}(q_i, q_j) \mid g\nonumber\\ &\Leftrightarrow \frac{{\rm lcm} (q_i, q_j)}{q_j} \mid \frac{g}{q_j}\Leftrightarrow m_{ij} \mid \alpha_j.\nonumber \end{align} Therefore, $\alpha_j$ does not satisfy condition 2. \end{proof} \begin{theorem}\label{thm} With the notation of the two lemmas above, the Hilbert function of the annihilator $(0:x_a)_{M_a}=\ker\phi_{x_a}$ satisfies the following formula, \begin{align} {\rm HF}\{(0:x_a)_{M_a}\}=&\delta(a){\rm HF}\{k[x_1, x_2, \ldots, x_{a-1}](- \deg q_1)\}\nonumber\\ &+\sum_{1<j \in \text{ Index Set } U_{x_a}}\hspace{-5mm} {\rm HF}\{k[x_1, x_2, \ldots, x_{a-1}]/\langle m_{1j}, m_{2j}, \ldots, m_{(j-1)j}\rangle(- \deg q_j)\}\nonumber, \end{align} where $\delta(a)=0$ for $q_1 \notin U_{x_a}$, and $\delta(a)=1$ for $q_1 \in U_{x_a}$. \end{theorem} \begin{proof} If $q_1 \in U_{x_a}$ and $q_1 \mid g$ then $\alpha_1 \in k[x_1, x_2, \ldots, x_{a-1}]/I_{a-1}$ and $\deg(\alpha_1)=b - \deg(q_1)$. But $q_1 \in U_{x_a}$ means $p_1 \notin S_{a-1}$; since the generators are indexed so that the elements of $S_{a-1}$ come first, $S_{a-1}=\emptyset$ and hence $I_{a-1}=0$, which gives us the summand with $\delta(a)=1$.\\ If $q_1 \notin U_{x_a}$ then $\delta(a)=0$ and the first summand is irrelevant.\\ Moreover, for all $g \in (0:x_a)_{M_a}$ expressible as $g=\alpha_jq_j$ with $m_{ij} \nmid \alpha_j$, $1 \leq i < j$, we have \begin{align} \alpha_j \in &\left(k[x_1, x_2, \ldots, x_{a-1}]/I_{a-1} \right)/\langle m_{1j}, m_{2j}, \ldots, m_{(j-1)j}\rangle\nonumber\\ &\cong k[x_1, x_2, \ldots, x_{a-1}]/\langle m_{1j}, m_{2j}, \ldots, m_{(j-1)j}\rangle.\nonumber \end{align} The last isomorphism follows from the second and third isomorphism theorems. \end{proof} \begin{remark} Observe that if $U_{x_a}= \emptyset$ then the sum in the theorem is zero, i.e. ${\rm HF}\left((0:x_a)_{M_a},b\right)=0$ for all $b \geq 0$.
\end{remark} With the Hilbert function for the annihilator $(0:x_a)_{M_a}$ and the Hilbert function for $M_a/x_aM_a \cong M_{a-1}$ in hand, it is straightforward to implement the procedure outlined in section 2 to generate the Hilbert function of $M_a$. For that reason, we only show in the next example how to write the Hilbert function of the annihilator in terms of the Hilbert functions of simpler quotient rings. \begin{example} Use Theorem \ref{thm} to write sums equivalent to the non-trivial annihilators $(0:x_a)_{M_a}$, where $I=\langle y^6, x^3y^5, x^2y^2z^2, x^3z, x^2yz^3 \rangle$ and the variables are ordered $y, x, z, w_1, w_2, \ldots$. \end{example} Before embarking on the computations, we check whether the set of generators of $I$ needs re-indexing, given the order we have chosen to introduce the variables (this order is quirky in that the variable $y$ is introduced before the variable $x$; it was chosen to illustrate that we are free to select the order in which the variables are introduced). The criterion that $S_{a} \subseteq S_{a'}$ if $a \leq a'$, with the elements of $S_a$ listed first, is satisfied by the order in which the monomials generating $I$ are listed. However, the second criterion, namely that for $p_i,\,p_j \in S_a$, $j > i$ only if the highest power of $x_a$ dividing $p_i$ also divides $p_j$, requires that the order of the monomials $x^2y^2z^2$ and $x^3z$ be swapped. Observe that adjusting the indexing to satisfy the second criterion does not interfere with the first. In other words, after swapping the third and fourth monomials we get $I=\langle y^6, x^3y^5, x^3z, x^2y^2z^2, x^2yz^3 \rangle$, which satisfies both re-indexing criteria. The first annihilator is $(0:y)_{M_y}$, where $M_y = k[y]/\langle y^6 \rangle$. In this case, $U_y=\{y^5\}$ and ${\rm HF}\{(0:y)_{M_y}\}={\rm HF}\{k(-5)\}.$ The second annihilator is $(0:x)_{M_x}$, where $M_x=k[y, x]/\langle y^6, x^3y^5\rangle$ and $U_x=\{x^2y^5\}$. There is only the syzygy $m_{12}=y$ to consider.
Therefore, $${\rm HF}\{(0:x)_{M_x}\}={\rm HF}\{k[y]/\langle y\rangle(- 7)\}={\rm HF}\{k(- 7)\}.$$ The third and last non-trivial annihilator is $(0:z)_{M_z}$, where $$M_z=k[y, x, z]/\langle y^6, x^3y^5, x^3z, x^2y^2z^2, x^2yz^3 \rangle.$$ In this case, $U_z=\{x^3, x^2y^2z, x^2yz^2\}$. The syzygies to consider are \begin{align} &m_{13}=y^6 &&m_{23}=y^5 &&\phantom{m_{13}=y^4} &&\phantom{m_{23}=x^3y^3}\nonumber\\ &m_{14}=y^4 &&m_{24}=xy^3 &&m_{34}=x &&\phantom{m_{23}=x^3y^3}\nonumber\\ &m_{15}=y^5 &&m_{25}=xy^4 &&m_{35}=x &&m_{45}=y.\nonumber \end{align} Therefore, \begin{align} {\rm HF}\{(0:z)_{M_z}\}=&{\rm HF}\{k[y,x]/\langle y^6, y^5\rangle(- 3)\}\nonumber\\ &+{\rm HF}\{k[y,x]/\langle y^4, xy^3, x\rangle(- 5)\}\nonumber\\ &+{\rm HF}\{k[y,x]/\langle y^5, xy^4, x, y\rangle(- 5)\}\nonumber\\ =&{\rm HF}\{k[y,x]/\langle y^5\rangle(- 3)\}\nonumber\\ &+{\rm HF}\{k[y]/\langle y^4\rangle(- 5)\}+{\rm HF}\{k(- 5)\}.\nonumber \end{align} \section{Conclusion} As the reader can see, the Syzygy method via homological algebra is quite close in spirit to the Syzygy method discussed in the previous section; the only significant difference is in the tools used to prove it. In both approaches, all the information about the Hilbert function was obtained from the syzygies. Last but not least, we have developed two different approaches to computing the Hilbert function of a quotient ring: the lcm-lattice method and the syzygy method. \bibliographystyle{unsrt}
\section{Introduction} We seek to understand high multiplicity events in p+Pb collisions at the LHC for which flow like properties have recently been observed \cite{Chatrchyan:2013nka, Chatrchyan2013795}. The properties of events in the large \npart~tails of these multiplicity distributions resemble in many respects those of Pb+Pb events at the same multiplicity. We propose that the tail of the p+Pb multiplicity distribution arises from long-lived (on the collision time scale) quantum fluctuations in the colliding proton's wavefunction, as opposed to fluctuations in the Pb nucleus or fluctuations in the final state particle production process. Our argument is based on the hypothesis that the wave function of the nucleon includes configurations that are so spatially extended that their inelastic cross section is much larger than the average. These fluctuations correspond to relatively low energy excitations of the proton in the comoving frame, which are vastly time dilated in the reference frame of the Pb nucleus. As such they can be considered as approximately frozen during the entire p+Pb collision, except for perturbations caused by the interactions with nucleons in the Pb nucleus. Having a larger geometric size, it is natural to expect that the incident proton will have a much larger cross section with the nucleus when it finds itself in one of these configurations. As a result, more energy will be deposited and more particles will be produced. Such cross section fluctuations in hadron collisions have a relatively long history of study \cite{Good:1960ba, Kopeliovich:1981pz, Blaettel:1993ah, Bialas:2006qf, Frankfurt:2008vi}. 
What is most important for the interpretation of the observed collective flow-like properties of the high multiplicity events, however, is that the energy will be deposited over a much larger transverse area, which makes the validity of a hydrodynamical description \cite{Bozek:2011if,Bozek:2012gr,Bozek:2013df,Bozek:2013uha,Qin:2013bha} of the following expansion more credible. In the following, we will consider two alternative models for the spatial structure of the large-size configurations of a highly boosted nucleon. The first model is based on the flux-tube model of quark confinement (we call this the ``stringy'' nucleon). The second is a pion-cloud model, in which the nucleon is surrounded by one or several soft virtual pions (we call this the ``cloudy'' nucleon). We will argue on the basis of existing data for the antiquark distribution in the nucleon that the probability of finding the nucleon surrounded by a cloud of four pions is of the order of $P(4\pi) \sim 10^{-6}$ and thus should be abundantly sampled in the CMS experiment, which recorded an event sample corresponding to $6\times 10^{10}$ minimum bias events. We start with a discussion of multiplicity fluctuations induced by fluctuations in the nucleon-nucleon cross section, introduce two physical models for these fluctuations and finally develop models of the spatial eccentricities arising from them. \section{Multiplicity Fluctuations} In the recent papers \cite{Alvioli:2013vk, Rybczynski:2013mla} the authors consider fluctuations in the total nucleon-nucleon cross section $\sigma_{\rm NN}$ arising from color fluctuations in the initial nuclear densities along with the usual contributions from the varying number of participating nucleons. We reproduce some simple arguments which show that large geometric cross sections favor a large number of nucleon-nucleon interactions. 
We shall set aside impact parameter fluctuations in \npart~ and only consider contributions arising from a fluctuating nucleon cross section $\sigma$. Following the optical Glauber model, we consider the incident proton as a cookie-cutter punching out a tube of cross sectional area $\sigma$ from the target nucleus. We define \npart~ as the number of nucleons in this tube and take it to be Poisson distributed with mean \begin{equation} \bar{n}(\sigma) = \sigma \rho L, \end{equation} where $\rho = 0.138~ \fm^{-3}$ is the nucleon number density and $L \approx 10$~fm is the length of the nucleus as seen by the incident proton in a central p+Pb collision. Then the probability of observing a given \npart~ is \begin{equation} p(N_{\mathrm{part}} ) = \frac{(\sigma \rho L)^{N_{\mathrm{part}}} }{N_{\mathrm{part}}!}\exp(-\sigma \rho L ). \end{equation} Taking the average value $\langle \sigma \rangle = \sigma_{\rm NN} = 4.803\; \fm^2$ \cite{Totem2013}, we obtain $E[N_{\mathrm{part}}] = \bar{n}(\sigma_{\rm NN}) = 6.73$. \begin{figure}[htb!] \centering \includegraphics[width=0.4\textwidth]{sig-dist-plot.pdf} \caption{\label{fig:sig-dist-plot} (Color Online) Proposed probability distributions for fluctuations in the total cross section $\sigma_{\rm NN}$.} \end{figure} Let us consider distributional forms for $\sigma$, including the one presented in \cite{Guzey:2005tk, Alvioli:2013vk}; we fix the mean of the proposed distributions to the average $\langle \sigma \rangle = \sigma_{\rm NN} = 4.803\; \fm^2$. We pick two probability distributions to model the fluctuations of the cross section: a gamma distribution and a log-normal (see \figref{fig:sig-dist-plot}).
The densities are \begin{align} p_{\mathrm{Alvioli}}(\sigma) &= \rho \frac{\sigma}{(\sigma + \sigma_0)} \exp(-\frac{\left(\sigma/\sigma_0 - 1\right)^2 }{\Omega^2}),\\ &\rho= 0.363, \quad \Omega=0.69, \quad \sigma_0 = 4.80~\mathrm{fm}^2,\notag\\ p_{\mathrm{gamma}}(\sigma) &= \frac{\sigma^{k-1} \exp \left(-\frac{\sigma}{\theta}\right)}{\theta^k \Gamma (k)}, \\ &\theta = \frac{\langle \sigma_{\rm NN} \rangle}{k}, \quad k = 4.0, \notag\\ p_{\mathrm{log normal}}(\sigma) &= \frac{1}{\sigma \delta \sqrt{2 \pi}} \exp \left( - \frac{(\log\sigma - \mu)^2}{2\delta^2} \right),\\ &\delta = 0.428, \quad \mu = \log \sigma_{\rm NN} - \frac{\delta^2}{2}, \notag \end{align} where the value of $\mu$ fixes the mean of the log-normal distribution to $\sigma_{\rm NN}$. \begin{widetext} We fix the values of $k$ and $\delta$ in the gamma and log-normal distributions so that both of the proposed distributions have the same variance. The Miettinen--Pumplin relation \cite{Miettinen:1978jb} connects the scaled variance of $P(\sigma)$ to the ratio of the single diffractive and elastic cross sections at $t=0$: \begin{equation} \label{eqn-mpump} \int P(\sigma) \left( \frac{\sigma}{\sigma_{NN}} - 1 \right) ^2\; d \sigma \equiv \omega_{\sigma} = \left. \frac{ d\sigma( p + p \to X + p )/dt }{ d\sigma( p + p \to p + p )/dt } \right |_{t=0} \end{equation} \end{widetext} Our proposed distributions have $\omega_{\sigma} = 0.25$, which is consistent with current experimental results. \begin{figure*}[ht!] \centering \includegraphics[width=0.3\textwidth]{joint-alvioli-plot.pdf} \includegraphics[width=0.3\textwidth]{joint-gamma-plot.pdf} \includegraphics[width=0.3\textwidth]{joint-gauss-plot.pdf} \caption{\label{fig:joint-plots} Joint probability distributions for $\sigma$ and \npart~ at values of fixed \npart. From left to right, the Alvioli, gamma, and log-normal distributions are shown. These results do not include the effects of impact parameter fluctuations or nucleon-nucleon correlations. } \end{figure*} The joint probability distributions of \npart~ and $\sigma$ are shown for some fixed values of \npart~ in \figref{fig:joint-plots}.
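For the gamma distribution the scaled variance is exactly $1/k$, so $k=4$ gives $\omega_\sigma = 0.25$ directly. Moreover, because the gamma density is conjugate to Poisson counts, the mean of $\sigma$ conditioned on \npart~ grows linearly with \npart. A sketch in pure Python (parameter values from the text; the conjugacy shortcut is our own illustration, not a calculation from the paper):

```python
import math
import random
import statistics

k, sigma_nn = 4.0, 4.803        # gamma shape and mean cross section (fm^2)
theta = sigma_nn / k            # scale parameter fixing the mean to sigma_nn
rho_L = 0.138 * 10.0            # rho * L from the optical Glauber estimate (fm^-2)

# (1) scaled variance: for a gamma distribution Var/mean^2 = 1/k = 0.25
random.seed(1)
samples = [random.gammavariate(k, theta) for _ in range(200_000)]
mean = statistics.fmean(samples)
omega = statistics.pvariance(samples, mean) / mean**2
print(round(mean, 2), round(omega, 3))

# (2) conditioning on N_part: a gamma prior times a Poisson likelihood is again
# a gamma, so the conditional mean of sigma grows linearly with N_part
def sigma_hat(n_part):
    return (k + n_part) / (1.0 / theta + rho_L)

print([round(sigma_hat(n), 2) for n in (2, 7, 12, 20)])
```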
From these figures it is clear that large fluctuations in the cross section $\sigma$ are more likely to contribute at larger values of \npart. We compute the average cross section $\hat{\sigma}(N_{\mathrm{part}})$ for each of the proposed distributions: \begin{equation} \hat{\sigma}(N_{\mathrm{part}}) = \frac{\int_{0}^{\infty} \sigma\, p(\sigma, N_{\mathrm{part}})\; d\sigma }{\int_{0}^{\infty} p(\sigma, N_{\mathrm{part}})\; d\sigma}. \end{equation} These effective cross sections are shown in \figref{fig:sigma-averaged} as a function of the number of participants. The effective cross section grows roughly linearly with the number of participants: events with large \npart~ are more likely to be events with a large cross section and thus a large effective proton area. We show the influence of the variance of the proposed cross section distributions in \figref{fig:sigma-averaged-gamvar}; a larger variance enhances the effective cross section for a given number of participants. Having established that fluctuations in the cross section can be selected by requiring large fluctuations in the number of participants, let us now consider some simple models of these fluctuations. \begin{figure}[ht!] \centering \includegraphics[width=0.3\textwidth]{sigma-averaged-plot.pdf} \caption{\label{fig:sigma-averaged} The average cross section as a function of the number of participants for each of the proposed cross section distributions. These results do not include the effects of impact parameter fluctuations or nucleon-nucleon correlations. } \end{figure} \begin{figure}[ht!]
\centering \includegraphics[width=0.3\textwidth]{sigma-averaged-plot-gammaVar.pdf} \caption{\label{fig:sigma-averaged-gamvar} The average cross section as a function of the number of participants for the gamma cross section distribution with constant mean and increasing variance.} \end{figure} \section{The Stringy Model} In the stringy model we describe the fat proton as three valence quarks connected by color flux tubes. This phenomenological model is inspired by results from quenched lattice QCD, which show that even at relatively modest valence quark separations the gluon field in a nucleon localizes into flux tubes \cite{Bissey:2009gw, Bissey:2006bz}. In three-body problems it is often convenient to use Jacobi coordinates \begin{align} u &= x_2 - x_1, \quad p_{u} = \frac{1}{2} (p_1 - p_2),\notag\\ v &= (x_2 + x_1)/2 - x_3, \quad p_{v} = \frac{1}{3} (p_1 + p_2 - 2 p_3), \notag\\ w &= (x_1 + x_2 + x_3)/3, \quad p_{w} = p_1 + p_2 + p_3. \end{align} In the center-of-mass (CM) frame $p_{w} = w = 0$. Neglecting spin effects, we can write a wave equation for this system as \begin{equation} \left[p_1^2 + p_2^2 +p_3^2 + V(x_1, x_2, x_3)^2 \right] \Psi = E^2 \Psi, \end{equation} where $V(x_1,x_2,x_3)$ is the inter-quark potential \cite{Sakurai:1967}. Here we approximate this potential as a linear confinement potential with string constant $k$ in the limit of very spatially extended configurations. We assume a star-like configuration of flux tubes converging on the CM of the quark configuration: \begin{equation} V(x_1, x_2, x_3) = k \left( \left| \frac{u}{2}+\frac{v}{6} \right| + \left| \frac{u}{2}-\frac{v}{6} \right| + \left| \frac{2v}{3} \right| \right). \end{equation} Neglecting cross terms in the large spatial extent regime, we approximate the potential for convenience as \begin{equation} V(x_1, x_2, x_3)^2 = k^2(u^2+v^2).
\end{equation} The wave equation then takes the form \begin{equation} \left[ \left( 2 p_{u}^2 + \frac{3}{2}p_{v}^2\right) + k^2( u^2 + v^2) \right] \Psi = E^2 \Psi. \end{equation} The solution for the wave function is \begin{equation} \Psi(u,v) = N \exp \left( - \frac{ku^2}{2\sqrt{2}} - \frac{kv^2}{\sqrt{6}} \right). \end{equation} From the normalization requirement \begin{equation} 1 = \int_{0}^{\infty}\!\!\int_{0}^{\infty} \Psi^2\; u^2 v^2\, du\, dv, \end{equation} we obtain $N^2 = \frac{16}{\pi 3^{3/4}} k^3 $. The mass of the nucleon in this simple model is \begin{equation} E^2 = \left( \sqrt{2} + \sqrt{3/2} \right) k = 0.53\;\mbox{GeV}^2. \end{equation} The mean square radius of the system is \begin{align} \langle r^2 \rangle &= \int u^2 v^2 \Psi^2 \frac{1}{6}(3 u^2 + 4v^2) \; du dv, \notag\\ &= \frac{3^{1/4}\sqrt{6+\frac{7\sqrt{3}}{2}}}{2k} = \frac{2.28541}{k} . \end{align} Taking a string constant $k = 1~\mathrm{GeV}/\fm$, we obtain the root-mean-square (RMS) radius of these configurations, $\sqrt{\langle r^2 \rangle} = 0.674~\fm$. The distribution of the mean square radius can be computed from \begin{align} \rho(r^2) &= \int \Psi^2 \delta\left(r^2 - \left(\frac{u^2}{2} + \frac{2 v^2}{3}\right) \right) u^2 v^2 \; du dv, \end{align} and is plotted in \figref{fig:radius-dist}. \begin{figure}[htb!] \includegraphics[width=0.3\textwidth]{pradialPlot.pdf} \caption{\label{fig:radius-dist} (Color Online) The probability distribution for the mean square radius of the extended nucleon, $\rho(r^2)$.} \end{figure} \begin{figure}[htb!]
\centering \includegraphics[width=0.3\textwidth]{rho-r-plot.pdf} \caption{\label{fig:rho-r-plot} The probability distribution for the total flux tube length in the limit of a very extended proton.} \includegraphics[width=0.3\textwidth]{pgreaterPlot.pdf} \caption{\label{fig:pgreater-plot} The probability for the total flux tube length to be greater than $L$ in the limit of a very extended proton.} \end{figure} The fraction $\rho$ of configurations with total flux-tube length $L = u + v$, \begin{equation} \rho(L) = \int \Psi^2 \delta(u + v - L)\; u^2 v^2 du dv , \end{equation} is shown in \figref{fig:rho-r-plot}. The average total flux tube length is $\langle L \rangle = 1.155~\fm$. The probability of the total flux tube length exceeding a certain value of $L$, \begin{equation} P_>(L) = \int_L^\infty \rho(L') dL' , \end{equation} is shown in \figref{fig:pgreater-plot}. Configurations with very long flux tubes occur with observable frequency; for instance, we would expect approximately $10^4$ events with $L > 2.8~\fm$ in the CMS p+Pb data. \section{The Cloudy Model} There is a non-vanishing probability for a proton to produce a virtual pion via the transition $p \to n \pi^+$ or $p \to p \pi^0$. Isospin symmetry dictates that $P(p \to n \pi^+) = 2 P(p \to p \pi^0)$. The proton can also produce a virtual pion and simultaneously excite itself into one of the states of the $\Delta$-resonance: $p \to \Delta\pi$. As this transition requires an additional 300 MeV of energy, we neglect this contribution here, but it would need to be taken into account in a more complete treatment. Since the configuration with a single virtual pion contains either a neutron or a proton, it can spawn another virtual pion by the same mechanism.
Assuming that consecutive pion production processes are independent, the number of virtual pions $N_{\pi}$ accompanying the proton follows a Poisson distribution whose mean is the average number of virtual pions $\langle n_{\pi} \rangle$. The probability of finding the incident nucleon accompanied by a cloud of $N_{\pi}$ pions is thus: \begin{equation} \label{eqn:poisson-npi} P(N_{\pi}) = \frac{\langle n_{\pi} \rangle ^{N_{\pi}} \exp(-\langle n_{\pi} \rangle)} {N_\pi !}. \end{equation} Experimental information about the virtual pion cloud of the nucleon is obtained, e.g., from exclusive pion production in electron scattering off the nucleon, or the measurement of the isovector component of the antiquark distribution in the nucleon \cite{Horn:2006tm, DeTroconiz:2001wt, Amendolia:1984nz}. Here we focus on the second method. \subsection{The $\bar{d}/\bar{u}$ asymmetry} Parton distribution functions $f_i(x,Q^2)$ (PDFs) \cite{Brock:1993sz} give the number density of partons of species $i$ carrying momentum fraction $x$ in a proton probed at scale $Q$. The distributions are normalized so that \begin{align} \int_{0}^{1} f_{u} (x) - f_{\bar{u}}(x)\; dx &= 2,\\ \int_{0}^{1} f_{d} (x) - f_{\bar{d}}(x)\; dx &= 1,\\ \int_{0}^{1} x f_q(x) + x f_{\bar{q}}(x) + x f_{g}(x)\; dx &= 1. \end{align} The first two integrals fix the number of valence quarks of each flavor in the proton; the third ensures conservation of the total momentum of the proton. The Gottfried sum rule is given in terms of the second nucleon structure function $F_2^{N}(x,Q^2) = \sum_{a} x f_{a} (x, Q^2)$, \begin{align} \mathfrak{S}_{g} &= \int_0^{1} (F_2^{p} - F_{2}^{n}) \frac{dx}{x},\notag \\ &= \frac{1}{3} + \frac{2}{3} \int_0^{1} \left[ f_{\bar{u}}(x) - f_{\bar{d}}(x) \right]\; dx .
\end{align} The naive value for this sum would be $\mathfrak{S}_g = \frac{1}{3}$, based on the notion that sea quarks arise from the splitting of gluons, implying that the antiquark distribution functions in the nucleon are flavor symmetric. However, several experiments have found a non-vanishing net flavor asymmetry in the distribution of sea quarks \cite{Towell:2001nh}. A review of the theory of this asymmetry can be found in \cite{Kumano:1997cy}. We shall pursue the idea that the asymmetry is the consequence of the presence of a cloud of virtual pions as developed in \cite{PhysRevD.53.2586, PhysRevD.44.717}. The E866 results \cite{Towell:2001nh} give $\int_{0}^{1} \left[ f_{\bar{d}}(x) - f_{\bar{u}}(x) \right]\; dx = 0.118 \pm 0.012$. If we interpret the asymmetry as arising solely from the production of virtual pions we can set $P(p \to n \pi^+) = 0.118$. Considering isospin symmetry, this leads to the conjecture $\langle n_{\pi} \rangle = \frac{3}{2} \times 0.118 = 0.177$. \subsection{Antiquark distribution in the nucleon} Following \cite{PhysRevD.53.2586, PhysRevD.44.717} we can write down the contribution to the overall proton light antiquark PDF of a single virtual pion. This is given by the convolution of the light-cone momentum distribution of a virtual pion, $f_{\pi,N}(y)$ (the probability of finding a pion with momentum fraction $y$), with the pion antiquark PDF $g_{\bar{q}}(x/y, Q)$: \begin{equation} x f_{\bar{q}}^{(1)}(x,Q) = \mathcal{C}^2 \int_x^{1} dy f_{\pi,N}(y) \frac{x}{y} g_{\bar{q}}\left(\frac{x}{y}, Q\right). \end{equation} Here $\mathcal{C}$ is the associated isospin Clebsch--Gordan coefficient and \begin{equation} f_{\pi, N}(y) = \frac{g_{\pi NN}^2}{16 \pi^2}\, y \int_{-\infty}^{t_m(y)} \frac{ -t}{\left(t - m_{\pi}^2\right)^2} | F_{\pi NN}(t)|^2 \; dt, \end{equation} where $F_{\pi NN}(t)$ is the nucleon-nucleon-pion form factor and $t_{m}(y) = -M_{N}^2 \frac{y^2}{(1-y)}$ is the maximal invariant momentum transfer to the pion.
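The integral defining $f_{\pi,N}$ is straightforward to evaluate numerically. Below is a rough sketch, anticipating the dipole form factor discussed next; the numerical values of $m_\pi$ and $M_N$, the interpretation of the cutoff in GeV, and the crude Riemann integration are our assumptions, so this reproduces only the qualitative shape and an average pion momentum fraction in the neighborhood of the value quoted below:

```python
import math

m_pi, M_N, lam_d = 0.140, 0.939, 0.8   # GeV; assumed values for the masses and cutoff

def F_dipole(t):
    """Dipole pi-N-N form factor with cutoff lam_d (an assumed parameterization)."""
    return ((lam_d**2 - m_pi**2) / (lam_d**2 - t))**2

def f_piN(y, dt=0.01, t_floor=-60.0):
    """Unnormalized light-cone momentum distribution of a single virtual pion."""
    t_max = -M_N**2 * y**2 / (1.0 - y)   # maximal invariant momentum transfer t_m(y)
    s, t = 0.0, t_floor
    while t < t_max:
        s += (-t) / (t - m_pi**2)**2 * F_dipole(t)**2 * dt
        t += dt
    return y * s

dy = 0.005
ys = [dy * i for i in range(1, 200)]
fs = [f_piN(y) for y in ys]
norm = sum(fs) * dy              # fixes g_piNN so that f is normalized to one
mean_x = sum(y * f for y, f in zip(ys, fs)) * dy / norm
print(round(mean_x, 3))
```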
In the literature the following form factors are suggested \cite{Kumano:1997cy, PhysRevD.43.3067} \begin{align} F_{\pi NN}^{\mathrm{monopole}} &= \frac{\Lambda_m^2 - M_{\pi}^2}{\Lambda_{m}^2 -t}, \notag\\ F_{\pi NN}^{\mathrm{dipole}} &= \left(\frac{\Lambda_d^2 - M_{\pi}^2}{\Lambda_{d}^2 -t}\right)^2, \notag\\ F_{\pi NN}^{\mathrm{exp}} &= \exp\left(\frac{t-M_{\pi}^2}{\Lambda_{e}^2}\right). \end{align} Setting $\Lambda_d = 0.8$~GeV, $\Lambda_m = 0.62 \Lambda_d$, and $\Lambda_e = 1.28 \Lambda_m$, as suggested by Kumano \cite{Kumano:1997cy}, we obtain the pion distribution shown in \figref{fig:pion-dist}. Here we have chosen $g_{\pi NN}$ such that the distribution $f_{\pi, N}(y)$ is normalized to one. This allows us to interpret $f_{\pi, N}(y)$ as the probability of finding a pion at a given momentum fraction $y$ \emph{given that there is a pion present in the nucleon}, as opposed to setting the value from experimental data and interpreting it as the unconditional probability of finding a pion with momentum fraction $y$ in the nucleon. As can be seen, the choice of form factor does not have a significant influence on the pion momentum distribution. From here on we use the dipole form, as it is the median curve in \figref{fig:pion-dist}. The average pion momentum fraction is relatively independent of the form factor: \begin{equation} \langle x_{\pi} \rangle = \frac{\int_0^{1} x f_{\pi,N}(x)\; dx } {\int_0^{1} f_{\pi,N}(x)\; dx } = 0.234. \end{equation} \begin{figure}[ht!] \centering \includegraphics[width=0.4\textwidth]{pi-dens-plot.pdf} \caption{\label{fig:pion-dist} The virtual pion momentum distribution function $f_{\pi,N}$ computed for each of the form factors.} \end{figure} Let us consider the probability of observing a light antiquark conditioned on the number of pions present in the system.
The conditional probability of observing a light antiquark given that there are no pions, $P_{\bar{q}}(x,Q | N_{\pi} = 0)$, is \begin{align} x P_{\bar{q}}(x ,Q| N_{\pi} = 0) &= x f_{\bar{q}}(x,Q) P(N_{\pi} = 0), \end{align} where for simplicity we are taking the nucleon PDF $f_{\bar{q}}(x,Q)$ as being defined in the absence of virtual pions. The conditional probability for observing a light antiquark with momentum fraction $x$ given that there is a single pion accompanying the proton is \begin{align} & x P_{\bar{q}}(x,Q | N_\pi = 1) = \int_x^{1} dy\, f_{\pi,N}(y) \left\{ \frac{x}{y} g_{\bar{q}}\left(\frac{x}{y}, Q\right) \right. \notag \\ & \left. \qquad + \frac{x}{1-y} f_{\bar{q}}\left(\frac{x}{1-y}, Q\right) \right\} P(N_{\pi} = 1). \end{align} The probability for finding a light antiquark with momentum fraction $x$ and there being a single pion in the system is the sum of terms representing the probability of finding the light antiquark \emph{within the pion} and the probability of finding the light antiquark \emph{in the proton} given that the pion has taken away a fraction $y$ of the proton's total momentum. Similarly we can write down the conditional probabilities for configurations with more virtual pions. As an example, we give the result for $N_\pi = 2$: \begin{widetext} \begin{align} x P_{\bar{q}}(x,Q | N_\pi = 2) &= \int_{x}^{1}\; dy_{1} \int_{x}^{1-y_{1}}\; dy_{2} f_{\pi,N}(y_1) f_{\pi, N}\left(\frac{y_2}{1-y_1}\right) \notag\\ & \times \left\{ \frac{x}{y_1} g_{\bar{q}}\left(\frac{x}{y_1}, Q\right) + \frac{x}{y_2} g_{\bar{q}}\left(\frac{x}{y_2}, Q\right) + \frac{x}{1-y_1 - y_2} f_{\bar{q}}\left(\frac{x}{1-y_1 - y_2}, Q\right)\right\} P(N_{\pi} = 2). \end{align} \end{widetext} \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{pionpdf-plot.pdf} \caption{\label{fig:pipdf} The Gl\"uck {\em et al.} \cite{Gluck:1991ey} HO pion PDFs evaluated at $Q=10$~GeV, for valence and sea quarks and gluons. 
The average momentum fractions for each species are respectively $0.155, 0.023, 0.511$.} \end{figure} In our evaluation of these expressions we use the parameterization given by Gl\"uck {\em et al.} \cite{Gluck:1991ey} for the pion PDF $g_{\bar{q}}$. For these PDFs, at $Q=10$~GeV the average valence antiquark momentum fraction is $\langle x_{\bar{q}} \rangle_{\pi} = \int_{0}^{1} x g_{\bar{q}}(x,Q)\; dx = 0.155$. For reference we plot the valence, sea and gluon PDFs of the pion in \figref{fig:pipdf}. \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{modpdf-contrib-plot.pdf} \caption{\label{fig:modpdf-contrib} The contribution to the light antiquark parton distribution functions from configurations with a given definite number of pions. The number of pions ranges from 0 to 3, computed at $Q = 10$~GeV.} \end{figure} \begin{table} \centering \begin{tabular}{|l|r|c|} \hline $N_{\pi}$ & $P(N_{\pi})$ & $\int dx\, P_{\bar{q}}(x,Q | N_{\pi})$ \\ \hline 0 & 0.889~~~ & 2.292\\ 1 & 0.104~~~ & 0.747\\ 2 & 0.00618 & 0.068 \\ 3 & 0.00024 & 0.0027\\ 4 & $7.17 \times 10^{-6}$ & \\ \hline \end{tabular} \caption{\label{tab:modpdf} The probability of finding $n$ pions along with the integrated light antiquark PDF, computed at $Q = 10$~GeV. The integral over the PDF is cut off at $x_{\mathrm{min}} = 0.001$.} \end{table} We tabulate the probabilities of finding $n$ pions in the physical proton along with the integral of the modified PDF in \tabref{tab:modpdf}. The contributions from configurations with different numbers of virtual pions to the antiquark distribution are shown in \figref{fig:modpdf-contrib}. The contributions die off quickly with $N_{\pi}$; the higher-order terms contribute to successively smaller ranges in $x$ due to the conservation of the total momentum of the proton.
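The pion-number probabilities in \tabref{tab:modpdf} follow directly from the Poisson distribution \eqref{eqn:poisson-npi}; the tabulated values correspond to a Poisson mean of $0.118$:

```python
import math

def P_npi(n, mean):
    """Poisson probability of finding n virtual pions accompanying the proton."""
    return mean**n * math.exp(-mean) / math.factorial(n)

n_pi_mean = 0.118   # Poisson mean that reproduces the tabulated probabilities
probs = [P_npi(n, n_pi_mean) for n in range(5)]
for n, p in enumerate(probs):
    print(n, p)
```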
The modified PDF including effects from up to three pions, \begin{equation} \label{eqn:full-mod} x \tilde{f}_{\bar{q}}(x, Q) = \sum_{n=0}^{3} x P_{\bar{q}}(x, Q | N_{\pi} = n) , \end{equation} is shown in \figref{fig:modpdf}. \begin{figure}[htp] \centering \includegraphics[width=0.4\textwidth]{modpdf-plot.pdf} \includegraphics[width=0.4\textwidth]{ratpdf-plot.pdf} \caption{\label{fig:modpdf} The modified light antiquark PDF (top) plotted with the unmodified PDF. The ratio of the modified light antiquark PDF to the unmodified case (bottom), computed at $Q = 10$~GeV.} \end{figure} \section{Phenomenology} \subsection{Effects on Hard Scattering} An observable consequence of the stringy proton model would be an enhancement of gluon jet production over quark jets in high multiplicity p+Pb events. We expect that the gluon density in a ``fat'' nucleon will be enhanced at moderate to large $x$, as almost all of the energy in the proton now resides in the gauge field contained in the flux-tubes. This implies that the momentum fraction carried by the valence quarks must be shifted to smaller values of $x$. We expect that the localization of the valence quarks and the enhanced large-$x$ gluon distribution will have a nontrivial feed-down to saturation physics at small $x$. However, a more detailed calculation is needed to address the small-$x$ physics associated with a high multiplicity p+Pb event. We expect that the total cross-section fluctuations arising from this model would scale like fluctuations in the total area of the nucleon, which is set by $r^2= \frac{1}{6}\left(3 u^2 + 4v^2\right)$: \begin{equation} p\left(\frac{\sigma}{\sigma_{NN}}\right) \propto p\left(\frac{r^2}{\langle r^2 \rangle}\right), \end{equation} where the distribution of $r^2$ is plotted in \figref{fig:radius-dist}. In the case of the cloudy proton model, the presence of virtual pions in the ``fat'' proton serves to enhance the antiquark PDF at large values of $x$.
This enhancement must be accompanied by a shift of the light quark distribution to smaller values of $x$. This could lead to an observable enhancement of hard quark-antiquark annihilation, expressed as enhanced Drell--Yan pair production or as enhanced W-boson production in high multiplicity events relative to a minimum bias baseline. We note that in both models the valence quark distribution will be shifted to lower values of $x$, implying reduced production of very hard jets initiated by valence quark scattering. As a consequence, in the stringy nucleon model the gluon sea, and as a secondary effect the isoscalar quark sea, will be much enhanced, while in the cloudy nucleon model the isovector quark sea will be enhanced with little or no increase in the gluon sea. This difference should, in principle, serve as an observable distinction between the two models. Of course, in reality, both mechanisms may contribute to the ``fat'' proton configurations. To estimate the significance of these modifications we consider the cloudy nucleon model. We can compare the average momentum fraction carried by an antiquark in the modified and unmodified situations, \begin{equation} \label{eqn:meanx} \langle x_{\bar{q}} \rangle = \frac{\int_{x_{\mathrm{min}}}^{1} x f_{\bar{q}}(x)\; dx}{\int_{x_{\mathrm{min}}}^{1} f_{\bar{q}}(x)\; dx}, \end{equation} where we use a lower cut-off of $x_{\mathrm{min}} = 0.001$. In terms of the antiquark distribution inside the virtual pion we can estimate \begin{equation} \langle x_{\bar{q}}^{\pi} \rangle \simeq \langle x_{\pi} \rangle \langle x_{\bar{q}} \rangle_{\pi} = 0.234 \times 0.168 = 0.0393, \end{equation} using data from \figref{fig:pion-dist} and \figref{fig:pipdf}. Directly integrating the MSTW PDFs for nucleon sea antiquarks we find \begin{equation} \langle x_{\bar{q}}^{N} \rangle = 0.0119.
\end{equation} This means that the antiquarks contributed by virtual pions carry, on average, about three times the longitudinal momentum of the antiquarks contained in the parton sea of an average proton. It should be possible to observe this difference if the population of protons with a virtual pion can be significantly enhanced by selecting high-multiplicity p+Pb events. The fully modified PDF \eqref{eqn:full-mod} including the effects of up to three virtual pions gives \begin{equation} \langle x_{\bar{q}}^{N,\mathrm{Mod}} \rangle = 0.0173, \end{equation} a value about $3/2$ times that of the unmodified case. The virtual pions make a significant contribution to the nucleon PDF. We can expect some modification to hard processes as a result. We refrain here from making quantitative predictions for hard scattering phenomena accompanying high multiplicity p+Pb events, because these will certainly depend sensitively on the possible trigger conditions, which are not known to us. We are also concerned that the sophistication of the models of the ``fat'' proton explored here, especially the ``stringy'' proton model, is insufficient to make reliable quantitative predictions for the effective parton distributions associated with a given multiplicity window. \subsection{Eccentricity Distributions} How else can we physically distinguish between these two toy models? By considering their influence on the parton distributions, we have so far examined the fat proton in longitudinal section. We now attempt to build models of the transverse structure of the portly proton. We numerically sample the spatial eccentricity coefficients $\epsilon_2, \epsilon_3$ from density distributions generated in the spirit of each of the models. If the energy deposited in a proton-nucleus collision thermalizes and the tiny fireball expands hydrodynamically, these spatial eccentricities may reasonably be expected to be reflected in the Fourier coefficients $v_{n}$ of the final-state flow.
We compute the eccentricities for an event with a transverse density profile $\rho$ as \begin{equation} \epsilon_{n} = \frac{\int \rho(r,\phi) r^2 \cos(n \phi - n \Phi_n) r dr d\phi}{\int \rho(r,\phi) r^3 dr d\phi}, \end{equation} where the event plane angle $\Phi_{n}$ for the $n$-th moment is \begin{equation} \Phi_{n} = \frac{1}{n} \arctan \left( \frac{ \int \rho(r,\phi) r^2 \sin(n\phi) r dr d\phi}{ \int \rho(r,\phi) r^2 \cos(n\phi) r dr d\phi} \right). \end{equation} We generate events for the pion cloud model with $N$ pions as follows, where $N$ is drawn from the Poisson distribution \eqref{eqn:poisson-npi}. For each event we sample the radial locations of the $N$ pions about the proton from an exponential distribution; the pion angular positions are sampled uniformly. The exponential radial distribution is motivated by the Yukawa model, and we consider several values of the rate constant $\lambda$ for this distribution. In \cite{PhysRevD.53.2586} the authors carry out a more advanced calculation along the same lines as our cloudy model, including the effects of the $\Delta \pi$ channel. The dominant contribution of the pion cloud to the antiquark distribution arises from pions with an average momentum of $\langle P_{\pi} \rangle \simeq 0.8$~GeV; although this calculation is carried out at a slightly lower virtuality scale, $Q^2 = 1\;\mathrm{GeV}^2$, the result provides a reasonable estimate for the mean radial position of pions around the proton. We set the average pion radial position to $\lambda = \frac{1}{\langle P_{\pi} \rangle}$. A Gaussian kernel with width $\sigma_{\pi} = 1/\sqrt{6}$~fm is convolved with the resulting points. This kernel width is chosen so that $r_{\pi} = \sqrt{\frac{2}{3}} r_{p}$. We take the radius of the proton as defining the $2\sigma$ distance from the center, i.e., the probability of finding any density outside of this radius is $< 5\%$.
Finally a density representing the proton is placed at the origin with a smearing kernel width of $\sigma_{p} = 0.5$~fm. \begin{figure*}[h] \centering \includegraphics[width=0.8\textwidth]{pion-shape.png} \\ \includegraphics[width=0.8\textwidth]{pion-shape-npi4.png} \caption{\label{fig:pion-dists} Contour distributions of the proton and pion-cloud density (arb) in the transverse plane; the width of each plot is $3$~fm. Each plot is a single event sampled from the ensemble. The top row shows events with $\langle N_{\pi} \rangle = 0.1778$, the calculated value; the bottom row shows events with $\langle N_{\pi} \rangle = 4$.} \end{figure*} Density plots of a few typical events from the pion model are shown in \figref{fig:pion-dists}. The central proton tends to dominate the density, but the effects of the outlying pions are visible. We consider one ensemble with the average number of pions set to the physical value of $\langle N_{\pi} \rangle = 0.1778$ and one with $\langle N_{\pi} \rangle = 4$ to illustrate the effects of large fluctuations. For the stringy model we sample the absolute values of the Jacobi parameters $u,v$ from normal distributions with width $\sigma_{\rm string}$ such that the average total flux tube length is $\langle \rho \rangle = 1.009$~fm, to match the values we computed above. The angles made by $u,v$ in the transverse plane are sampled uniformly; the positions of the three quarks can then be reconstructed. \begin{figure*}[h] \centering \includegraphics[width=0.9\textwidth]{string-shape.png}\\ \includegraphics[width=0.9\textwidth]{string-shape-fat.png} \caption{\label{fig:string-dists} Distributions of the stringy density (arb) in the transverse plane; the width of each plot is $2$~fm. The top row shows strings with a width of $0.1$~fm, the bottom row strings with a width of $0.3$~fm.} \end{figure*} The flux tube density profile is generated by convolving the resulting line segments with a Gaussian profile.
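The cloudy-model event generation just described can be sketched compactly. The following Monte Carlo is a simplification (point sampling stands in for a true kernel convolution, the proton and each pion carry equal integrated weight, and the mean radial position $\lambda = \hbar c/\langle P_\pi\rangle \approx 0.25$~fm is used); it only illustrates how the $\epsilon_n$ respond to the pion cloud:

```python
import cmath
import math
import random

random.seed(7)

def poisson_sample(mean):
    """Knuth's method; adequate for small means."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def eps_n(pts, n):
    """Eccentricity |<r^2 e^{i n phi}>| / <r^2> about the centroid of the points."""
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    num, den = 0j, 0.0
    for x, y in pts:
        dx, dy = x - cx, y - cy
        r2 = dx * dx + dy * dy
        num += r2 * cmath.exp(1j * n * math.atan2(dy, dx))
        den += r2
    return abs(num) / den

def cloudy_event(mean_npi, lam=0.25, sig_p=0.5, sig_pi=1.0 / math.sqrt(6.0), npts=400):
    """One event: a Gaussian proton blob plus a Poisson number of Gaussian pions."""
    pts = [(random.gauss(0.0, sig_p), random.gauss(0.0, sig_p)) for _ in range(npts)]
    for _ in range(poisson_sample(mean_npi)):
        r = random.expovariate(1.0 / lam)            # exponential radial position
        phi = random.uniform(0.0, 2.0 * math.pi)
        cx, cy = r * math.cos(phi), r * math.sin(phi)
        pts += [(cx + random.gauss(0.0, sig_pi), cy + random.gauss(0.0, sig_pi))
                for _ in range(npts)]
    return pts

def mean_eps2(mean_npi, nev=200):
    return sum(eps_n(cloudy_event(mean_npi), 2) for _ in range(nev)) / nev

e2_physical = mean_eps2(0.1778)
e2_fat = mean_eps2(4.0)
print(round(e2_physical, 3), round(e2_fat, 3))
```

With the larger pion cloud the mean $\epsilon_2$ grows substantially, in line with the ensembles compared in the text.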
We consider two ensembles, a ``thin'' and ``fat'' set of events with widths $w_{\mathrm{string}} = 0.1, 0.3$~fm, respectively. Some typical events are shown in \figref{fig:string-dists}; long two-legged configurations tend to dominate. \begin{figure*}[ht!] \centering \includegraphics[width=0.3\textwidth]{pion-eps-dist.pdf} \includegraphics[width=0.3\textwidth]{pion-npi4-eps.pdf} \caption{\label{fig:eps-dists-pion} Distributions of $\epsilon_2, \epsilon_3$ for $500$ events generated from the pion-cloud model. The left figure shows the results for the physical value $\langle N_{\pi} \rangle = 0.1778$, the right figure shows the results for $\langle N_{\pi} \rangle = 4$. Note that impact parameter fluctuations are not included. } \end{figure*} \begin{figure*}[hbt!] \centering \includegraphics[width=0.3\textwidth]{string-eps-dist.pdf} \includegraphics[width=0.3\textwidth]{string-eps-dist-fat.pdf} \caption{\label{fig:eps-dists-string} Distributions of $\epsilon_2, \epsilon_3$ for $1000$ events generated from the stringy models. The string width is $0.1$~fm in the left figure (thin) and $0.3$~fm in the right figure (fat). Note that impact parameter fluctuations are not included.} \end{figure*} Histograms of the $\epsilon_2$ and $\epsilon_3$ distributions for the pion cloud and stringy models are shown in \figref{fig:eps-dists-pion} and \figref{fig:eps-dists-string}. The pion model with a realistic average number of pions per nucleon gives an appreciably non-zero eccentricity distribution; this is strongly enhanced for the large pion cloud case. Either choice of flux tube width leads to strong enhancements in the $\epsilon_2$ spectrum at large eccentricities and to a nontrivial $\epsilon_3$ spectrum; the wider string model shows less dramatic results, as the smearing reduces the geometric influence of the string profile.
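The stringy-model sampling underlying these histograms can be sketched in the same spirit. In this simplified Monte Carlo the width of the $|u|,|v|$ distribution is an assumed value, and point sampling along the three tube segments stands in for the kernel convolution:

```python
import cmath
import math
import random

random.seed(11)

SIG_STRING = 0.45   # width of the |u|,|v| sampling in fm (assumed value)
W_STRING = 0.1      # flux-tube smearing width in fm ("thin" ensemble)

def eps_n(pts, n):
    """Eccentricity |<r^2 e^{i n phi}>| / <r^2> about the centroid of the points."""
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    num, den = 0j, 0.0
    for x, y in pts:
        dx, dy = x - cx, y - cy
        num += (dx * dx + dy * dy) * cmath.exp(1j * n * math.atan2(dy, dx))
        den += dx * dx + dy * dy
    return abs(num) / den

def stringy_event(npts=300):
    """Sample Jacobi vectors, reconstruct the quarks, and fill the star-shaped tubes."""
    u = abs(random.gauss(0.0, SIG_STRING))
    v = abs(random.gauss(0.0, SIG_STRING))
    pu, pv = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    uvec = (u * math.cos(pu), u * math.sin(pu))
    vvec = (v * math.cos(pv), v * math.sin(pv))
    # transverse quark positions in the CM frame (x1, x2, x3 from u, v with w = 0)
    quarks = [(-uvec[0] / 2 + vvec[0] / 3, -uvec[1] / 2 + vvec[1] / 3),
              (uvec[0] / 2 + vvec[0] / 3, uvec[1] / 2 + vvec[1] / 3),
              (-2 * vvec[0] / 3, -2 * vvec[1] / 3)]
    pts = []
    for qx, qy in quarks:  # star configuration: one tube from each quark to the CM
        for _ in range(npts):
            s = random.random()
            pts.append((s * qx + random.gauss(0, W_STRING),
                        s * qy + random.gauss(0, W_STRING)))
    return pts

e2 = [eps_n(stringy_event(), 2) for _ in range(200)]
print(round(sum(e2) / len(e2), 3))
```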
\section{Summary and Outlook} Fluctuations in the nucleon-nucleon cross section can induce large fluctuations in the number of participants in a central p+Pb event. The apparent universality of the large \npart\ tails of p+Pb, Pb+Pb and p+p collisions suggests that these fluctuations arise from a spatially over-extended, or ``fat'', proton wave function. A natural consequence of this extended proton size and its concomitant large cross section is an enhanced collision volume in such p+Pb events. The larger volume reduces spatial density gradients and thus makes a hydrodynamical description of the evolution of the reaction more likely to be valid. We have proposed two phenomenological models for the large-size configurations of the proton, one based on color flux tubes and one on virtual pion production. Each model leads to modified large-$x$ physics in the initial state of the p+Pb collision relative to minimum bias events. Qualitatively, the stringy proton model predicts enhancement of the gluon PDF, while the cloudy proton model predicts an enhancement of the light antiquark PDFs. It would be interesting to view these models as different initial seeds for small-$x$ saturation physics. The stringy model's extended ``valence'' gluon configuration is likely to give rise to a substantially different color glass than that arising from the pion cloud, which effectively has many more valence (anti-)quarks. In proton-nucleus collisions the conjectured ``fat'' proton configurations have obvious consequences for the transverse energy density distribution in the initial state and its Fourier moments $\epsilon_{n}$. The much enhanced initial transverse extent of the fireball makes the application of hydrodynamical models for its expansion more credible, because it implies a smaller Knudsen number.
Since the distribution of eccentricities is significantly different for the two models considered here, measurements of final-state ``flow'' coefficients $v_{n}$ for high multiplicity p+Pb events will shed some light upon which of these models is more realistic. \begin{acknowledgments} We acknowledge support by DOE grant DE-FG02-05ER41367. CCS would like to thank D.~Velicanu and I.~C.~Kozyrkov for many helpful discussions. \end{acknowledgments} \bibliographystyle{h-physrev5}
\section{Introduction} \label{sec:intro} Clustering---the process by which a set of items are semantically-partitioned into finite groups--- has been employed in many domains of computing including machine learning, computer graphics, information retrieval, and bioinformatics. It has numerous applications including interpretability, compression, visualization, outlier detector, and zero-shot classification. While clustering methods have traditionally ingested raw features (\eg, pixels) as input, an alternative explored particularly in the face-verification literature is to use \emph{deep embeddings}. Deep embeddings are latent-feature vectors discovered by neural networks \cite{Guo2020,Lin2018,Nguyen2021,Otto2018,Shen2021,Shi2018,Wang2019Linkage,Yang2019,Yang2020,Ye2021,Zhan2018}. They form a natural space for clustering because they are learned with loss functions that encourage grouping of semantically-similar inputs (\eg, \cite{Boudiaf2020,Chopra2005,Deng2019,Hadsell2006,Snell2017,Weinberger2009,Sohn2016}) and explicit representation of task-relevant features (\eg, \cite{Ridgeway2016,Schroff2015}). Face verification systems rely on large backbone networks that produce highly-discriminative\footnote{The term \emph{discriminative} refers to the network's ability to separate embeddings that belong to the same identity or class from those that do not, as measured by a distance function.} embeddings of face images. Improvements to these systems come largely through increasing the available labeled data, but with an expensive annotation cost. To overcome the annotation challenge, recent work has successfully used clustering to pseudo-label \cite{He2018,Wang2019Linkage,Yang2019,Yang2020,Zhan2018,Shen2021,Nguyen2021}. The approach is cyclical: train a face-embedding model, embed and cluster unsupervised face images, pseudo-label the images via the clustering assignments, merge the pseudo-labeled data into the training set, and re-train the model. 
Since the crux of the approach is the clustering step, as improved clustering leads to better pseudo-labels and thus improved embedding quality, research has shifted from unsupervised, shallow, heuristic-based clustering methods \cite{Banerjee2005,Cheng1995,Ester1996,Frey2007,Gopal2014,Hornik2012,Jain2010,Kulis2012,Lin2018,Liu2014,Lloyd1982,Ng2001,Otto2018,Sculley2010,Shi2018,Sibson1973,Straub2015} to supervised, deep, inductive methods \cite{Guo2020,He2018,Nguyen2021,Shen2021,Wang2019Linkage,Yang2019,Yang2020,Ye2021,Zhan2018}, with claimed improvements upward of 20\% over $k$-means \cite{Lloyd1982} and hierarchical agglomerative clustering \cite{Sibson1973}, for example. Even though improvements from the deep methods are impressive, the prior work has limitations. First, by focusing primarily on face datasets, the clustering methods operate on embedding spaces that are highly discriminative and not representative of the many embedding spaces that could benefit from clustering. Deng \etal \cite{Deng2019} show that both verification and Recall@1 \cite{Musgrave2020} performance of the embeddings are well above 90\% and can even approach the ceiling across a range of face datasets, including MegaFace \cite{Ira2016}. We also confirm that the embeddings for DeepFashion \cite{Liu2016deepfashion}---one of the few non-face datasets explored for clustering---exhibit Recall@1 above 90\%. Second, the commonly-used face datasets (e.g., MS-Celeb1M \cite{Guo2016} and MegaFace \cite{Ira2016}) are no longer publicly available, with progress reliant on using unofficial, shared embeddings extracted from pretrained backbones. Third, similar to Musgrave \etal \cite{Musgrave2020}, we find methodological choices that may implicitly favor the recent, deep methods; these choices include the lack of a validation split, monitoring test performance for hyperparameter tuning and early stopping, and copying baseline results from previous work.
For the reasons above, we conducted a large-scale empirical study of methods for clustering pretrained embeddings. We focused on outlining an end-to-end, reproducible pipeline, including dataset curation, backbone training, and clustering. We then benchmarked 17 clustering methods---including two state-of-the-art, supervised, deep methods, GCN-VE \cite{Yang2020} and STAR-FC \cite{Shen2021}---on three datasets. The first two datasets are Cars196 \cite{Krause2013} and Stanford Online Products (SOP) \cite{Song2016}. We specifically chose to benchmark against Cars196 and SOP because they: (1) are popular in the embedding-learning literature \cite{Musgrave2020,Scott2021,Boudiaf2020}, (2) extend beyond the face-verification domain, and (3) produce embeddings that are significantly less discriminative, as discussed in \cref{sec:backbone_results}. We investigate a third dataset that has similar statistics, similar Recall@1 performance, and thus similar clustering results to those of the common face datasets (e.g., MS-Celeb1M) used solely in past research. The third dataset's purpose is simply to corroborate results from past research. To enable reproducibility, we include all necessary details for Cars196 and SOP in the appendices, and plan to release the code. Our results indicate that the deep methods are indeed superior when operating on highly-discriminative embeddings, but their performance drops otherwise, matching or even underperforming the shallow, heuristic-based methods. We conclude that the Recall@1 performance of the embeddings is generally an accurate proxy for predicting the benefits of deep clustering methods. Additionally, where the deep methods do improve performance, the improvements are of much smaller magnitude than previously reported, likely as a result of our systematic training pipeline and matched hyperparameter tuning.
Lastly, we find that $\ell_2$-normalization of embeddings prior to clustering leads to a stark improvement across all heuristic-based methods, exploiting the geometrical properties of embeddings learned with softmax cross-entropy \cite{Scott2021}. \section{Related Work} \label{sec:related_work} \subsection{Unsupervised Clustering} \label{sec:unsupervised_clustering} Clustering is normally characterized as an unsupervised learning task where methods leverage heuristics based on assumptions about the data-generation process or the structure of the data. Adopting the terminology of Jain \cite{Jain2010}, we discuss four common classes of heuristics: partition-based, density-based, hierarchical, and graph-theoretic. Methods using partition-based heuristics \cite{Frey2007,Hornik2012,Kulis2012,Lloyd1982,Sculley2010,Straub2015} maintain a representation of clusters and decide which data points belong to each cluster, in many cases using a similarity or distance function. The most popular method is $k$-means clustering \cite{Lloyd1982}, which estimates empirical cluster centers by minimizing within-cluster variance. Other variants make assumptions about the geometry of the data, e.g., spherical $k$-means \cite{Hornik2012}, make assumptions about the scale of the data, e.g., mini-batch $k$-means \cite{Sculley2010}, or explore alternative formulations that remove the need for a predefined number of clusters, e.g., Dirichlet-process (DP) $k$-means \cite{Kulis2012,Straub2015}. Density-based methods assume that clusters represent high-density regions in the feature space and are separated from other clusters by low-density regions.
Popular density-based methods attempt to estimate cluster densities via expectation-maximization, e.g., the Gaussian mixture model (GMM) \& the von Mises--Fisher mixture model (vMF-MM) \cite{Banerjee2005}, variational inference \cite{Gopal2014}, Parzen windows, e.g., DBSCAN \cite{Ester1996} \& deep density clustering \cite{Lin2018}, and kernel density estimation, e.g., MeanShift \cite{Cheng1995}. Hierarchical methods form clusters by starting either with a singleton cluster containing all data points and iteratively splitting it (i.e., top-down or divisive) or with a cluster for each data point and iteratively merging them (i.e., bottom-up or agglomerative). The split or merge is decided greedily using a linkage rule until a criterion is satisfied, such as reaching a target number of clusters. We experiment with hierarchical agglomerative clustering (HAC) \cite{Sibson1973} and several popular linkage rules (single, complete, average, and Ward linkage), as well as approximate rank-order (ARO) \cite{Otto2018} based on the rank-order linkage metric \cite{Zhu2011}. The final common class of heuristics is graph-theoretic. These methods represent data points as a graph with edges weighted by pairwise similarities. One of the most popular methods, and the one we employ in our experiments, is spectral clustering \cite{Ng2001}. Spectral clustering performs $k$-means clustering on the eigenvectors of the normalized Laplacian matrix constructed from the graph's affinity matrix. \subsection{Supervised Clustering} \label{sec:supervised_clustering} Recent research, instead of relying on heuristics, has proposed deep, supervised methods that cluster inductively. These methods rely on supervision indicating which data points belong to the same cluster. By \emph{inductive}, we refer to methods that learn a function, for example, a function that predicts if two data points should be clustered together, which can then be applied to unseen data.
This is in contrast to the unsupervised methods, which lack generalizability from one dataset to another. Generally, the idea is to leverage local and/or global structure in the feature space to learn what points belong to the same cluster or what pairs of points should be linked. Clusters can then be formed using simple algorithms such as connected-component labeling. Consensus-driven propagation (CDP) \cite{Zhan2018} uses an ensemble of backbone networks to produce statistics across many $k$-nearest-neighbor affinity graphs and then trains a \emph{mediator} network to predict links between data points from which clusters are formed. Wang \etal \cite{Wang2019Linkage} improve link prediction by capturing structure in local affinity graphs directly with graph convolutional networks (GCNs) \cite{Kipf2017}. A series of GCN-based methods followed that incorporate both local and global structure via density information \cite{Guo2020} and multi-scale views \cite{Yang2019}, for example. Alternatives beyond GCNs have been proposed, such as using self-attention via transformers \cite{Nguyen2021,Ye2021} and learning an agglomerative clustering policy with inverse reinforcement learning \cite{He2018}. Our experiments include results from CDP, as well as two state-of-the-art GCN approaches: GCN-VE \cite{Yang2020} and STAR-FC \cite{Shen2021}. GCN-VE is a supervised approach that trains two GCNs. The first, GCN-V, estimates a confidence for each data point based on a supervised density measure. The second, GCN-E, constructs subgraphs based on the estimated confidences and predicts pairwise links. The links are used in a \emph{tree-deduction} algorithm to construct final clusters. STAR-FC trains a single GCN to predict pairwise links, but uses a structure-preserving sampling strategy to train the GCN with both local and global graph structure. During inference, links are predicted followed by \emph{graph parsing and refinement} steps to construct clusters.
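As a concrete illustration of the cluster-formation step these methods share: once pairwise links are predicted, connected-component labeling is enough to turn them into cluster assignments. A minimal sketch, where the link list is a hypothetical stand-in for a link-prediction model's output:

```python
# Hypothetical predicted links among 6 data points; in the real pipeline
# these would come from a trained link-prediction model (e.g., a GCN).
links = [(0, 1), (1, 2), (3, 4)]
n_points = 6

# Union-find for connected-component labeling.
parent = list(range(n_points))

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

for i, j in links:
    parent[find(i)] = find(j)  # union the two components

# Relabel component roots to contiguous cluster ids.
roots = [find(i) for i in range(n_points)]
ids = {r: k for k, r in enumerate(dict.fromkeys(roots))}
labels = [ids[r] for r in roots]
```

Points 0--2 form one cluster, points 3--4 another, and the unlinked point 5 becomes its own singleton.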
Consistent with past work, we do not train an ensemble and mediator for CDP, but rather use just the unsupervised \emph{label-propagation} step to form clusters. Note that this removes all supervised components of CDP, which is why we classify it as an unsupervised method in \cref{sec:results}. \section{Methodology} \label{sec:methodology} Our pipeline for clustering pretrained embeddings involves three steps: dataset curation, backbone training, and clustering. We discuss the methodology for each of the steps below. \subsection{Dataset Curation} \label{sec:dataset_curation} \begin{table}[t] \begin{center} \small \begin{tabular}{@{}ccccccc@{}} \toprule & \multicolumn{3}{c}{Cars196} & \multicolumn{3}{c}{SOP} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} & $m$ & $n$ & $\frac{m}{n}$ & $m$ & $n$ & $\frac{m}{n}$ \\ \midrule Backbone Train & $3,963$ & $49$ & $81$ & $32,277$ & $5,658$ & $6$ \\ Clustering Train & $4,091$ & $49$ & $83$ & $27,265$ & $5,658$ & $5$ \\ Validation & $4,058$ & $49$ & $83$ & $32,734$ & $5,658$ & $6$ \\ Test & $4,073$ & $49$ & $83$ & $27,777$ & $5,660$ & $5$ \\ \bottomrule \end{tabular} \end{center} \caption{Dataset splits with number of instances, $m$, number of classes, $n$, and approximate number of instances per class, $\frac{m}{n}$, for Cars196 and SOP. The classes in each split are disjoint.} \label{tab:dataset_stats} \end{table} We consider three datasets for our experiments: Cars196 \cite{Krause2013}, Stanford Online Products (SOP) \cite{Song2016}, and a private, third dataset, \emph{Dataset 3}. Each dataset is partitioned into a \emph{backbone train} split, used to train the backbone, a \emph{clustering train} split, used only to train the deep clustering methods, a \emph{validation} split, used for hyperparameter tuning and early stopping for the backbone and deep clustering methods, and one or more \emph{test} splits, used to fit the unsupervised clustering methods and evaluate all clustering methods.
Cars196 and SOP have a single test split while Dataset 3 has five test splits that cumulatively grow in size, enabling investigation into the scalability of clustering methods. All splits, except for the test splits of Dataset 3, have disjoint sets of classes. \Cref{tab:dataset_stats} has per-split statistics on the number of instances, number of classes, and approximate number of instances per class for both Cars196 and SOP. Dataset 3 has $\mathcal{O}(100,000)$ instances, $\mathcal{O}(1,000)$ classes, and roughly $\mathcal{O}(100)$ instances per class for all splits. We acknowledge that the deep methods are provided an advantage through access to the clustering-train split, which is unavailable to the shallow methods. Ideally, the shallow methods would make use of the additional split of data; however, we chose to match the protocol of past research on clustering face embeddings. \subsection{Backbone Training} \label{sec:backbone_training} We use a ResNet50 \cite{He2016} for the backbone with an additional fully-connected layer added after global average pooling and prior to the classification head, which produces a 256D embedding. We chose a ResNet50 backbone to match prior research on face clustering \cite{Nguyen2021, Shen2021, Yang2019, Yang2020, Ye2021}, but note that consistent clustering results were found with an Inception v3 backbone \cite{He2018}. The model is trained with cosine softmax cross-entropy---based on findings from Scott \etal \cite{Scott2021}---using the backbone-train split of each dataset, with Recall@1 monitored on the validation split for early stopping and hyperparameter tuning. A hyperparameter search was performed for each of the datasets. Additional details on training the backbone, including specifics on the architecture, data augmentation, and hyperparameters are included in \cref{sec:backbone_details}.
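For reference, Recall@1 simply asks whether each embedding's nearest neighbor (itself excluded) shares its class label. A self-contained sketch on synthetic stand-in embeddings; the class means and noise scale are illustrative assumptions, not our actual data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated synthetic classes standing in for backbone embeddings.
labels = np.array([0, 0, 0, 1, 1, 1])
emb = rng.normal(size=(6, 4)) * 0.1
emb[labels == 1] += 5.0

# Recall@1: does each point's nearest neighbor (itself excluded) share its class?
d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)  # exclude self-matches
nn = d.argmin(axis=1)
recall_at_1 = float((labels[nn] == labels).mean())  # 1.0 for separated classes
```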
\subsection{Clustering} \label{sec:clustering} \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{cars_sop_cluster.pdf} \caption{Harmonic mean of Pairwise ($F_P$) and BCubed ($F_B$) F-scores across clustering methods for Cars196 (top) and SOP (bottom). The left and right panes contain results for unsupervised and supervised clustering methods, respectively, and the red and blue bars indicate clustering of unnormalized and $\ell_2$-normalized embeddings, respectively. GMM, vMF-MM, and Spectral could not be run on SOP due to runtime inefficiencies, indicated by the gray, hatched bars. The methods that omit a result for unnormalized embeddings assume $\ell_2$-normalized embeddings by default.} \label{fig:cars_sop_cluster} \end{figure*} The clustering methodology varies between the supervised, deep methods (\eg, GCN-VE and STAR-FC) and the unsupervised, shallow methods (\eg, $k$-means). We discuss each below. \Cref{sec:clustering_details} contains additional methodological details, architecture details for GCN-VE and STAR-FC, and an enumeration of all hyperparameters and their tuned values across all clustering methods. For supervised clustering, we compute the 256D embeddings for the clustering-train, validation, and test splits. Methods are trained using the clustering-train embeddings from each dataset with the loss monitored on the validation split for early stopping and hyperparameter tuning. A hyperparameter search was performed for each of the datasets and we report results using the set of hyperparameter values associated with maximal clustering performance on the validation split. The model is then applied, inductively, on the test embeddings and final performance is reported.
\begin{table}[b] \begin{center} \small \scalebox{0.93}{ \begin{tabular}{@{}cccc@{}} \toprule \multicolumn{4}{c}{Unnormalized Embeddings} \\ \midrule & Clustering & \multirow{2}{*}{Validation} & \multirow{2}{*}{Test} \\ & Train & & \\ \midrule Cars196 & $0.67$ & $0.76$ & $0.76$ \\ SOP & $0.72$ & $0.70$ & $0.73$ \\ Dataset 3 & $0.91$ & $0.91$ & $0.91$, $0.89$, $0.88$, $0.87$, $0.86$ \\ \bottomrule \end{tabular}} \end{center} \begin{center} \small \scalebox{0.93}{ \begin{tabular}{@{}cccc@{}} \toprule \multicolumn{4}{c}{$\ell_2$-Normalized Embeddings} \\ \midrule & Clustering & \multirow{2}{*}{Validation} & \multirow{2}{*}{Test} \\ & Train & & \\ \midrule Cars196 & $0.69$ & $0.80$ & $0.79$ \\ SOP & $0.75$ & $0.74$ & $0.77$ \\ Dataset 3 & $0.94$ & $0.95$ & $0.94$, $0.93$, $0.92$, $0.91$, $0.91$ \\ \bottomrule \end{tabular}} \end{center} \caption{Recall@1 on the clustering-train, validation, and test splits of each dataset for both unnormalized and $\ell_2$-normalized embeddings. The comma-separated values for the test column of Dataset 3 represent the five test splits.} \label{tab:backbone_recall} \end{table} In contrast, the unsupervised methods operate directly on the embeddings from the test split(s). A hyperparameter search was also conducted for the unsupervised methods; however, we report results using the set of values associated with maximal clustering performance on the test split(s). We admit that optimizing for test performance appears to provide an unfair advantage to the unsupervised methods; however, the supervised methods: (1) have access to a split of data unavailable to the unsupervised methods, (2) have thousands of learnable parameters, and (3) have 12 and 16 hyperparameters for STAR-FC and GCN-VE, respectively, compared to at most 4 hyperparameters for the unsupervised methods. There are no alterations to the embeddings other than an $\ell_2$-normalization step that we found to unequivocally improve clustering performance.
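The $\ell_2$-normalization step itself is a one-line projection onto the unit hypersphere; a minimal sketch with random stand-in embeddings:

```python
import numpy as np

rng = np.random.default_rng(1)
emb = rng.normal(size=(8, 256)) * rng.uniform(0.5, 3.0, size=(8, 1))  # varied norms

# Project every embedding onto the unit hypersphere before clustering.
emb_n = emb / np.linalg.norm(emb, axis=1, keepdims=True)

# On the sphere, squared Euclidean distance is a monotone function of
# cosine similarity: ||u - v||^2 = 2 - 2 * (u . v).
u, v = emb_n[0], emb_n[1]
sq_dist = float(np.sum((u - v) ** 2))
cos_sim = float(u @ v)
```

After normalization, Euclidean-distance-based methods such as $k$-means effectively cluster by angle, which is one way to view the improvement we observe.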
\Cref{sec:clustering_results} further discusses the benefits of $\ell_2$-normalization. \section{Experimental Results} \label{sec:results} \begin{table*}[t] \begin{center} \small \begin{tabular}{@{}ccccccccccc@{}} \toprule & \multicolumn{2}{c}{Test \#1} & \multicolumn{2}{c}{Test \#2} & \multicolumn{2}{c}{Test \#3} & \multicolumn{2}{c}{Test \#4} & \multicolumn{2}{c}{Test \#5} \\ \midrule & $F_P$ & $F_B$ & $F_P$ & $F_B$ & $F_P$ & $F_B$ & $F_P$ & $F_B$ & $F_P$ & $F_B$ \\ \midrule Minibatch $k$-Means & $0.64$ & $0.66$ & $0.60$ & $0.61$ & $0.57$ & $0.58$ & $0.55$ & $0.56$ & $0.54$ & $0.55$ \\ DP $k$-Means & $0.67$ & $0.68$ & $0.62$ & $0.60$ & $0.61$ & $0.59$ & \color{blue}$0.45$ & \color{blue}$0.53$ & \color{blue}$0.43$ & \color{blue}$0.49$ \\ DP vMF $k$-Means & $0.72$ & $0.70$ & $0.67$ & $0.65$ & $0.60$ & $0.58$ & $0.61$ & $0.59$ & $0.61$ & $0.59$ \\ HAC & $0.74$ & $0.75$ & $0.69$ & $0.72$ & $0.66$ & $0.69$ & \multicolumn{2}{c}{DNC} & \multicolumn{2}{c}{DNC} \\ ARO & $0.55$ & $0.59$ & $0.53$ & $0.59$ & $0.50$ & $0.56$ & $0.50$ & $0.51$ & $0.48$ & $0.49$ \\ DBSCAN & $0.32$ & $0.41$ & $0.21$ & $0.40$ & $0.20$ & $0.20$ & $0.20$ & $0.20$ & $0.19$ & $0.20$ \\ CDP & $0.62$ & $0.62$ & $0.59$ & $0.59$ & $0.56$ & $0.56$ & $0.54$ & $0.54$ & $0.52$ & $0.53$ \\ \midrule GCN-VE & $\boldsymbol{0.78}$ & $\boldsymbol{0.79}$ & $\boldsymbol{0.74}$ & $\boldsymbol{0.75}$ & $\boldsymbol{0.71}$ & $\boldsymbol{0.72}$ & $\boldsymbol{0.68}$ & $\boldsymbol{0.70}$ & $0.62$ & $\boldsymbol{0.65}$ \\ STAR-FC & $0.65$ & $0.75$ & $0.64$ & $0.72$ & $0.65$ & $0.69$ & $0.65$ & $0.66$ & $\boldsymbol{0.63}$ & $0.64$ \\ \bottomrule \end{tabular} \end{center} \begin{center} \small \begin{tabular}{@{}cccccc@{}} \toprule & Test \#1 & Test \#2 & Test \#3 & Test \#4 & Test \#5 \\ \midrule Minibatch $k$-Means & $0.2$h & $0.7$h & $1.3$h & $2.5$h & $4$h \\ DP $k$-Means & $13$h & $70$h & $127$h & \color{blue}$253$h & \color{blue}$324$h \\ DP vMF $k$-Means & $29$h & $98$h & $27$h & $9$h & $12$h \\ HAC & $6$h & $35$h 
& $97$h & DNC & DNC \\ ARO & $0.5$h & $1.8$h & $4$h & $3$h & $9$h \\ DBSCAN & $0.1$h & $0.4$h & $0.8$h & $1.4$h & $1.9$h \\ CDP & $0.4$h & $1.3$h & $3$h & $5$h & $8$h \\ \midrule GCN-VE & $132$h, $0.3$h & $132$h, $0.7$h & $132$h, $1.6$h & $132$h, $2.8$h & $37$h, $2.5$h \\ STAR-FC & $45$h, $0.2$h & $45$h, $0.6$h & $45$h, $0.9$h & $45$h, $1.3$h & $45$h, $1.8$h \\ \bottomrule \end{tabular} \end{center} \caption{Pairwise ($F_P$) and BCubed ($F_B$) F-scores (top) and time to cluster in hours (bottom) across clustering methods for all test splits of Dataset 3. \emph{DNC} indicates that the method \emph{did not converge} in a reasonable amount of time. The time to cluster for GCN-VE and STAR-FC contains the train time and test time separated by a comma. Boldface indicates the highest value for a metric. \color{blue}Blue \color{black} indicates that the particular run used unnormalized embeddings instead of $\ell_2$-normalized embeddings.} \label{tab:dataset3_cluster} \end{table*} Using the methodology outlined in \cref{sec:methodology}, we conduct extensive experiments on Cars196, Stanford Online Products (SOP), and Dataset 3 across 17 clustering methods. We begin by measuring the discriminability of the embeddings learned by the backbones and then move on to the clustering results. Details on compute resources are included in \cref{sec:compute_resources}. \subsection{Backbone Results} \label{sec:backbone_results} One connection we attempt to make in our experimentation is between embedding discriminability and clustering performance: the likelihood of deep clustering methods outperforming shallow methods increases as classes are better discriminated. Thus, we quantify embedding discriminability across all three datasets for reference in coming sections. \Cref{tab:backbone_recall} contains Recall@1 for the clustering-train, validation, and test splits of each dataset. 
The top table uses unnormalized embeddings (\ie, embeddings that lie in Euclidean space) and the bottom table uses $\ell_2$-normalized embeddings (\ie, embeddings projected onto the surface of a unit hypersphere). Consistent with Scott \etal \cite{Scott2021}, we find that $\ell_2$-normalization leads to strictly improved discrimination, as it removes the radial component of intra-class variance. We chose Cars196, SOP, and Dataset 3 as benchmarks because the discriminability of the embeddings varies significantly, with Cars196 and SOP performance between 70\% and 80\%, and Dataset 3 above 90\%, for $\ell_2$-normalized embeddings. We emphasize that this is a confound not considered in past work on face clustering. The datasets previously explored all have Recall@1 performance above 90\% and some even near ceiling \cite{Deng2019}. \subsection{Clustering Results} \label{sec:clustering_results} We measure clustering performance on the test splits of all datasets. The clustering results focus on two metrics: Pairwise ($F_P$) \cite{Shi2018} and BCubed ($F_B$) \cite{Amigo2009} F-scores. Because the metrics have different emphases---specifically, $F_P$ emphasizes fidelity of larger clusters and $F_B$ emphasizes fidelity of clusters proportional to their size---we report their harmonic mean when not reporting both individually. \Cref{fig:cars_sop_cluster} shows clustering performance for Cars196 (top) and SOP (bottom) across 12 unsupervised methods and the 2 supervised methods. \Cref{sec:cars_sop_clustering_results} contains tabulated results used to construct the figure, as well as additional metrics such as normalized mutual information. The red and blue bars represent performance on unnormalized and $\ell_2$-normalized embeddings, respectively.
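For concreteness, the Pairwise F-score follows the standard pairwise definition: a pair of points counts as a true positive if it shares both a ground-truth class and a predicted cluster. A small worked sketch with illustrative toy labelings:

```python
from itertools import combinations

truth = [0, 0, 0, 1, 1]  # ground-truth classes (toy example)
pred = [0, 0, 1, 1, 1]   # predicted clusters (toy example)

tp = fp = fn = 0
for i, j in combinations(range(len(truth)), 2):
    same_t = truth[i] == truth[j]
    same_p = pred[i] == pred[j]
    tp += same_t and same_p          # pair together in both
    fp += (not same_t) and same_p    # together only in prediction
    fn += same_t and (not same_p)    # together only in ground truth

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f_pairwise = 2 * precision * recall / (precision + recall)  # 0.5 here
```

When a single number is reported, it is the harmonic mean $2 F_P F_B / (F_P + F_B)$ of this score and its BCubed counterpart.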
There are two key takeaways: (1) using $\ell_2$-normalized embeddings consistently improves clustering performance, sometimes upward of 30\%, and (2) the supervised, deep methods are outperformed by unsupervised, shallow methods and are not even among the top-3 performers, contrary to past work reporting consistent state-of-the-art performance. Among the unsupervised methods, there is no clear best-performing method; however, HAC performs strongly on both datasets. As a point of comparison, HAC outperforms GCN-VE and STAR-FC by 16\% and 6\% for Cars196, respectively, and 3\% and 1\% for SOP, respectively. The results on Cars196 and SOP highlight a downside of the deep methods, namely that they become fragile in the presence of uncertainty (\ie, less discriminability) in the embedding space. To verify we can reproduce the benefits of GCN-VE and STAR-FC, we performed a similar clustering analysis on Dataset 3, where Recall@1 is above 90\% for all splits. \Cref{tab:dataset3_cluster} contains Pairwise and BCubed F-scores (top) and the time to cluster (bottom) across the five test splits. Notice that the results are inverted: GCN-VE and STAR-FC consistently outperform the shallow methods. However, the margin between HAC, for example, and GCN-VE is less than 5\%---for the test splits where HAC converged. In contrast, Yang \etal \cite{Yang2020} and Shen \etal \cite{Shen2021} report GCN-VE consistently outperforming HAC by upward of 20\%. In agreement with results reported above for Cars196 and SOP, $\ell_2$-normalized embeddings again outperform unnormalized embeddings, consistent with prior work on embedding-learning and face verification \cite{Scott2021,Deng2019,Liu2017,Ranjan2017,Ranjan2019,Wang2017normface,Wang2018,Wang2018a}. The only exception is for test splits \#4 and \#5 for DP $k$-means, where unnormalized embeddings were superior.
In addition to state-of-the-art performance, another benefit underlined in past work for GCN-VE and STAR-FC is their efficient inference time. We report the time to cluster the Dataset 3 test splits in the bottom of \cref{tab:dataset3_cluster}. We also find that GCN-VE and STAR-FC are among the most efficient methods at inference time, outpaced only by DBSCAN and minibatch $k$-means. However, one factor not presented in past work is the \emph{total} time to employ deep clustering methods, that is, including the additional time to train the method. We find that training can take tens of hours and, when factored in with the inference time, presents a trade-off for the deep methods. Depending on the application, GCN-VE and STAR-FC may be used for inference only once, whereas in a production system they may be used for inference hundreds of times. When considering the total time, in addition to their marginal improvements over much simpler methods, we encourage inspection into the amortized time (\ie, the training time spread across the expected number of inference runs). In the case of fewer inference runs, methods such as minibatch $k$-means may provide adequate performance while producing a more efficient amortized runtime.
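The amortized-time comparison reduces to simple arithmetic: spread the one-off training cost over the expected number of inference runs. A sketch using the GCN-VE timings for test split \#1 from \cref{tab:dataset3_cluster} ($132$h train, $0.3$h inference):

```python
def amortized_hours(train_h, infer_h, n_runs):
    """Per-run cost once training time is spread across n_runs inference runs."""
    return train_h / n_runs + infer_h

# One-off use vs. repeated use in a production system.
deep_once = amortized_hours(132.0, 0.3, n_runs=1)    # 132.3 h per run
deep_many = amortized_hours(132.0, 0.3, n_runs=100)  # 1.62 h per run
```

Under a single run the deep method's effective cost is dominated by training, while across hundreds of runs it approaches the raw inference time.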
\subsubsection{Investigation into GCN-VE} \label{sec:gcn_ve_investigation} \begin{table}[t] \begin{center} \small \begin{tabular}{@{}ccccc@{}} \toprule & \multicolumn{2}{c}{Cars196} & \multicolumn{2}{c}{SOP} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} & $F_P$ & $F_B$ & $F_P$ & $F_B$ \\ \midrule GCN-VE \cite{Yang2020} & $0.26$ & $0.37$ & $0.44$ & $0.63$ \\ GCN-VE (ours) & $0.22$ & $0.37$ & $0.47$ & $0.65$ \\ \bottomrule \end{tabular} \end{center} \caption{Comparison of Pairwise ($F_P$) and BCubed ($F_B$) F-scores for the open-source GCN-VE code and our reimplementation on Cars196 and SOP.} \label{tab:gcn_ve_verify} \end{table} \begin{table}[t] \begin{center} \small \scalebox{0.96}{ \begin{tabular}{@{}ccccccc@{}} \toprule & \multicolumn{2}{c}{\multirow{2}{*}{Cars196}} & \multicolumn{2}{c}{\multirow{2}{*}{SOP}} & \multicolumn{2}{c}{Dataset 3} \\ & & & & & \multicolumn{2}{c}{Test \#1} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} & $F_P$ & $F_B$ & $F_P$ & $F_B$ & $F_P$ & $F_B$ \\ \midrule Tree Deduction & $0.13$ & $0.36$ & $0.44$ & $0.63$ & $0.38$ & $0.41$ \\ + $\ell_2$-Norm & $\boldsymbol{0.25}$ & $\boldsymbol{0.37}$ & $0.44$ & $0.63$ & $0.62$ & $0.65$ \\ + GCN-E & $0.19$ & $\boldsymbol{0.37}$ & $0.44$ & $0.62$ & $0.62$ & $0.65$ \\ \midrule GCN-VE & $0.22$ & $\boldsymbol{0.37}$ & $\boldsymbol{0.47}$ & $\boldsymbol{0.65}$ & $\boldsymbol{0.78}$ & $\boldsymbol{0.79}$ \\ \bottomrule \end{tabular}} \end{center} \caption{Ablation study comparing GCN-VE to simpler variants. The reported results are Pairwise ($F_P$) and BCubed ($F_B$) F-scores on Cars196, SOP, and test split \#1 of Dataset 3. Boldface indicates the highest value for a metric.} \label{tab:gcn_ve_variants} \end{table} Due to the unexpected results of GCN-VE on Cars196 and SOP, we conducted further investigation of the method to verify our findings. First, we compared our reimplementation of GCN-VE to the open-source version. 
\Cref{tab:gcn_ve_verify} compares the two implementations on Cars196 and SOP. We confirm that our results, within reasonable variance, match the implementation from Yang \etal \cite{Yang2020}. Next, we ablated the components of GCN-VE to better understand which contributed to final performance. In general, GCN-VE works by: (1) estimating a confidence for each embedding via the GCN-V graph convolutional network, (2) constructing subgraphs using the estimated confidences, (3) estimating which edges in the subgraphs are valid connections via the GCN-E graph convolutional network, and (4) running a tree-deduction algorithm to form the final clusters. We consider three ablated variants of GCN-VE, described below. All that is needed to form clusters is to run tree deduction on a $k$-nearest-neighbors subgraph with edges pruned by a distance threshold. We refer to this first ablation as \emph{Tree Deduction}. Based on empirical analysis from Scott \etal \cite{Scott2021}, the embedding $\ell_2$-norm is a measure of confidence; thus, we can replace the GCN-V network with the $\ell_2$-norm of each embedding. Additionally, we can replace the GCN-E network simply by considering edges valid if they are connected to higher-confidence embeddings within a distance threshold, and run tree deduction as is. We refer to this second ablation as \emph{Tree Deduction + $\ell_2$-norm}. The third ablation, \emph{Tree Deduction + $\ell_2$-norm + GCN-E}, reintroduces GCN-E, but still uses $\ell_2$-norm instead of the GCN-V network. \Cref{tab:gcn_ve_variants} measures clustering performance of the three ablations against GCN-VE for Cars196, SOP, and test split \#1 of Dataset 3. Interestingly, one can recover the majority of GCN-VE performance with the Tree Deduction + $\ell_2$-norm variant, corroborating results from Scott \etal \cite{Scott2021} on $\ell_2$-norm conveying embedding confidence. Additionally, the critical component of GCN-VE is not the GCN-E network, but GCN-V.
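A minimal, self-contained sketch of the Tree Deduction + $\ell_2$-norm variant on toy data: confidence is the embedding's $\ell_2$-norm, each point links to its nearest strictly-higher-confidence neighbor within a distance threshold, and following links to their roots yields clusters. The blob means and threshold below are illustrative assumptions, not tuned values:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two toy blobs at different distances from the origin, so the l2-norm
# (the confidence proxy) varies across points.
emb = np.concatenate([rng.normal(0.0, 0.1, (4, 3)) + [2.0, 0.0, 0.0],
                      rng.normal(0.0, 0.1, (4, 3)) + [0.0, 3.0, 0.0]])

conf = np.linalg.norm(emb, axis=1)                     # confidence := l2-norm
d = np.linalg.norm(emb[:, None] - emb[None], axis=-1)  # pairwise distances
tau = 1.0                                              # distance threshold (assumed)

# Each point links to its nearest neighbor of strictly higher confidence
# within tau; points with no such neighbor become cluster roots.
parent = np.arange(len(emb))
for i in range(len(emb)):
    cand = np.where((conf > conf[i]) & (d[i] < tau))[0]
    if len(cand):
        parent[i] = cand[d[i, cand].argmin()]

def root(i):
    while parent[i] != i:  # chains strictly increase confidence, so this ends
        i = parent[i]
    return i

labels = np.array([root(i) for i in range(len(emb))])  # two clusters here
```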
Tree Deduction + $\ell_2$-norm performs as well as, if not better than, Tree Deduction + $\ell_2$-norm + GCN-E. \subsubsection{Degradation Study} \label{sec:degradation_study} \begin{table}[t] \begin{center} \small \begin{tabular}{@{}ccccc@{}} \toprule & \multirow{2}{*}{32D} & \multirow{2}{*}{64D} & \multirow{2}{*}{128D} & 256D \\ & & & & (original) \\ \midrule Dataset 3 & \multirow{2}{*}{$0.68$} & \multirow{2}{*}{$0.86$} & \multirow{2}{*}{$0.92$} & \multirow{2}{*}{$0.94$} \\ Clustering Train & & & & \\ \bottomrule \end{tabular} \end{center} \caption{Recall@1 on the clustering-train split of Dataset 3 as the embedding dimensionality increases.} \label{tab:dataset3_degradation_recall} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{dataset3_degradation.pdf} \caption{Harmonic mean between Pairwise ($F_P$) and BCubed ($F_B$) F-scores of GCN-VE and HAC on test split \#1 of Dataset 3 for decreasing embedding dimensionalities. The percentages above the GCN-VE (green) bars indicate its relative reduction in error compared to HAC.} \label{fig:dataset3_degradation} \end{figure} While the experimental results thus far support our claim that the benefits of deep clustering methods are tied to embedding discriminability, we admit there are confounds. First, Cars196 and SOP are non-face datasets and, second, they have very different dataset statistics compared to Dataset 3 and the historically-used face datasets, summarized in \cref{tab:dataset_stats}. To remove these confounds, we perform a degradation study on Dataset 3 where we randomly select a subset of embedding dimensions to ignore, thus decreasing the discriminability. We remove the same embedding dimensions for all splits of the data to ensure they are compatible. \Cref{tab:dataset3_degradation_recall} measures the Recall@1 performance of the clustering-train split of Dataset 3 for varying embedding dimensionalities.
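The degradation itself is just a fixed random subset of dimensions applied identically to every split; a sketch with stand-in embeddings:

```python
import numpy as np

rng = np.random.default_rng(3)
emb = rng.normal(size=(100, 256))  # stand-in for the 256D embeddings

# Sample ONE subset of dimensions and reuse it for every split so the
# degraded embedding spaces remain mutually compatible.
keep = rng.choice(256, size=128, replace=False)
emb_128 = emb[:, keep]
```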
As expected, the lower-dimensional spaces have reduced discriminability; for example, the 32D space has Recall@1 dropping 26\% compared to the original 256D embedding. \Cref{fig:dataset3_degradation} measures the harmonic mean of $F_P$ and $F_B$ for GCN-VE and HAC on test split \#1 of Dataset 3 as we decrease the embedding dimensionality. The tabulated results composing the figure are included in \cref{sec:dataset3_degradation}. The percentage above the GCN-VE (green) bars indicates the relative error reduction that GCN-VE achieves over HAC. Note that for the original 256D embedding space, GCN-VE has a 15\% relative error reduction over HAC, but when the embedding space is projected to 128D---decreasing embedding discriminability by a mere 2\%---GCN-VE loses all advantage over HAC. Thus, while the performance drop for GCN-VE relative to HAC is not as large as the one we observe for Cars196, the degradation study is consistent with our SOP results. \section{Discussion and Conclusions} \label{sec:discussion} Many of the innovations in supervised methods for clustering pretrained embeddings are tied to face verification. The methods were benchmarked primarily on face datasets and motivated by enlarging them via pseudo-labeling the vast amounts of unlabeled face data. Common face datasets (\eg, MS-Celeb1M \cite{Guo2016} and MegaFace \cite{Ira2016}) are no longer publicly available, with progress reliant on using unofficial, shared embeddings extracted from pretrained backbones. Additionally, the impressive improvements of supervised methods are bound to face embeddings, shown to have verification and Recall@1 performance well above 90\% and in some cases near ceiling \cite{Deng2019}. Finally, results of recent work may implicitly favor the novel, supervised methods based on methodological choices such as the lack of a validation split and hyperparameter tuning based on test performance.
Our goals in conducting a large-scale empirical study on clustering pretrained embeddings are to: (1) broaden the scope of supervised clustering methods beyond faces, (2) present an end-to-end pipeline including backbone training along with results for benchmarking future methods, and (3) benchmark the robustness of supervised, deep methods on embeddings with less discriminability. We do so by presenting benchmarks for 17 clustering methods on three datasets: Cars196, SOP, and a third dataset, Dataset 3. Cars196 and SOP are not only from diverse visual domains, but have very different dataset statistics (\ie, \cref{tab:dataset_stats}) and embedding discriminability (\ie, \cref{tab:backbone_recall}) compared to the historically-used face datasets. We emphasize that Dataset 3 was chosen to corroborate results from past embedding-clustering research. We discuss conclusions from our study below. \paragraph{Embedding discriminability vs.~supervised clustering performance.} The main hypothesis we verify is the existence of a relationship between how discriminative the embeddings are (\ie, Recall@1) and the benefits of supervised, deep clustering methods. We see for Cars196 and SOP---where embedding Recall@1 is between 70\% and 80\%---that the state-of-the-art supervised methods are not among the top 3 performers. They are outperformed by HAC and spectral clustering, for example. In contrast, for Dataset 3---where embedding Recall@1 is above 90\%---GCN-VE is a consistent state-of-the-art method followed by STAR-FC. To remove any confounds caused by the domain or the dataset statistics, we show that we can remove all benefits of the supervised methods by randomly projecting the Dataset 3 embeddings from 256D to 128D (\ie, decreasing Recall@1 by a mere 2\%). 
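The degradation itself is simple: fix a random subset of dimensions once, drop the rest in every split, and re-measure discriminability via Recall@1 (does each embedding's nearest neighbor share its label?). A minimal numpy sketch of this procedure on toy data (the function names are ours, not the benchmarked pipeline):

```python
import numpy as np

def degrade(embeddings, keep_dims, seed=0):
    """Keep a fixed random subset of dimensions (the same for every split)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(embeddings.shape[1], size=keep_dims, replace=False)
    return embeddings[:, idx]

def recall_at_1(embeddings, labels):
    """Fraction of samples whose (cosine) nearest neighbor shares their label."""
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = x @ x.T
    np.fill_diagonal(sims, -np.inf)        # exclude self-matches
    return float((labels[sims.argmax(axis=1)] == labels).mean())

# Toy 256D embeddings with clear class structure, degraded to 128D.
rng = np.random.default_rng(1)
labels = np.repeat(np.arange(10), 20)
emb = rng.normal(size=(200, 256)) + 3.0 * rng.normal(size=(10, 256))[labels]
r_full = recall_at_1(emb, labels)
r_128 = recall_at_1(degrade(emb, 128), labels)
```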
We leave it to future work to address the fragility of supervised, deep methods, but we hypothesize that it may be caused by these methods overfitting to spurious, non-generalizable correlations in the embedding space, and that simpler heuristics are more effective in cases of high embedding uncertainty. \paragraph{Consistency with results from past work.} We believe certain methodological choices such as not using a validation split and using the test split for hyperparameter tuning and early stopping may have implicitly favored the deep methods, which are more flexible and have significantly more hyperparameters than the shallow methods. Instead, we use a validation set for hyperparameter selection and early stopping for the deep methods, as well as reimplement and evaluate all baselines, and find that the unsupervised methods are actually competitive. For example, HAC only underperforms state-of-the-art on Dataset 3 by 5\%, whereas past work indicates it can underperform by 20\% or more consistently \cite{Yang2020,Shen2021}. While supervised methods are clearly superior on Dataset 3, the margin of improvement is of much smaller magnitude than previously reported. \paragraph{Benefits of $\ell_2$-normalization.} It has been shown that softmax cross-entropy and its variants discover the most-discriminative embedding spaces \cite{Scott2021,Musgrave2020,Boudiaf2020,Tian2020}. Due to the structure of the embeddings, the majority of intra-class variance extends outward from the origin. As a result, Scott \etal \cite{Scott2021} verify that $\ell_2$-normalization of the embeddings leads to robust Recall@1 improvements, which we find transfer directly to clustering performance as well (\ie, \cref{fig:cars_sop_cluster}). For clustering pretrained embeddings, particularly those trained with some form of softmax cross-entropy, we recommend $\ell_2$-normalization prior to clustering.
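Concretely, this recommendation is a single projection onto the unit sphere before any clustering method is run; a minimal sketch (our own helper, not library code):

```python
import numpy as np

def l2_normalize(embeddings, eps=1e-12):
    """Project each embedding onto the unit sphere before clustering."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    return embeddings / np.maximum(norms, eps)

emb = np.random.default_rng(0).normal(size=(1000, 256))
unit = l2_normalize(emb)
# Every row now has unit norm, so inner products equal cosine similarities.
assert np.allclose(np.linalg.norm(unit, axis=1), 1.0)
```

After this step, Euclidean-based methods such as $k$-means and HAC effectively operate on cosine geometry, since Euclidean distance on the unit sphere is a monotone function of cosine distance.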
\paragraph{Performance vs.~runtime tradeoff.} One benefit of the supervised methods is their efficient inference time. GCN-VE and STAR-FC are outpaced only by DBSCAN and minibatch $k$-means in \cref{tab:dataset3_cluster} for Dataset 3. However, the time to train supervised methods is non-negligible, and one should consider amortized runtime when choosing among methods. If the goal is to use clustering for visualization, for example, and inference is only going to run once, the total runtime may be too costly given the marginal improvements over much simpler and faster unsupervised methods. \paragraph{On the value of shallow methods.} Given recent trends, one might expect the supervised, deep methods to be strictly superior to the unsupervised, shallow methods that have become commonplace for clustering. However, broadening the scope beyond the face domain has underlined the fragility of the deep methods in the presence of embedding uncertainty and emphasized the value of shallow methods. We find that fundamental methods such as spectral clustering and HAC can outperform GCN-VE and STAR-FC despite having three times fewer hyperparameters and no learnable parameters. By proposing new benchmarks on Cars196 and SOP, we hope that our empirical study serves as a foundation upon which supervised methods can be further improved. \section{Ethical Considerations} \label{sec:ethical_considerations} A major goal of our research is to broaden the study of clustering pretrained embeddings beyond the facial verification domain, which we do by providing benchmarks on Cars196 and SOP. We appreciate the sensitivity associated with research on human data, including images of faces, and emphasize that clustering should only be employed in contexts that are responsible, socially beneficial, and fair, for example, in personalization of products for better user experiences, and not as part of harmful technologies or systems.
Additionally, we acknowledge that non-human datasets, including Cars196 and SOP, can exhibit bias through lack of representation. We chose to report only on the downstream clustering performance, so as not to reflect on or reinforce biases present in the data. {\small \bibliographystyle{ieee_fullname} \section{Compute Resources} \label{sec:compute_resources} Each backbone was trained using a single NVIDIA Tesla T4, 16 CPUs, and 64 GB of RAM on Google Cloud Platform. Code was implemented using PyTorch v1.6.0 and Python 3.8.2 on Ubuntu 18.04. For clustering, all methods had access to 16 CPUs, 96 GB of RAM, and a single NVIDIA Tesla T4, when appropriate. Code was implemented with PyTorch v1.6.0, scikit-learn v0.24.2, FAISS v1.7.1, and Python 3.8.2 on Ubuntu 18.04. Additionally, we note that both HAC and spectral clustering could be made faster by computing the nearest-neighbor subgraph on GPU instead of CPU. \section{Cars196 and SOP Tabulated Results} \label{sec:cars_sop_clustering_results} The tables below contain clustering results for Cars196 and SOP. The $F_P$ and $F_B$ values are used to compute the harmonic means for $\ell_2$-normalized embeddings presented in \cref{fig:cars_sop_cluster}. \begin{table}[H] \begin{center} \scalebox{0.96}{ \begin{tabular}{@{}cccccccc@{}} \toprule \multicolumn{8}{c}{Cars196} \\ \midrule & Adj.
Rand Index & NMI & AMI & $F_P$ & $F_B$ & Time to Cluster (s) & Number of Clusters \\ \midrule DBSCAN & $0.10$ & $0.63$ & $0.28$ & $0.12$ & $0.35$ & $3$ & $2,425$ \\ MeanShift & $0.18$ & $0.65$ & $0.42$ & $0.20$ & $0.38$ & $177$ & $1,407$ \\ ARO & $0.28$ & $0.69$ & $0.49$ & $0.29$ & $0.36$ & $3$ & $1,415$ \\ DP $k$-Means & $0.37$ & $0.69$ & $0.61$ & $0.38$ & $0.41$ & $10$ & $403$ \\ DP vMF $k$-Means & $0.36$ & $0.67$ & $0.61$ & $0.38$ & $0.40$ & $12$ & $155$ \\ $k$-Means & $0.40$ & $0.65$ & $0.62$ & $0.41$ & $0.42$ & $12$ & $50$ \\ Spherical $k$-Means & $0.40$ & $0.65$ & $0.62$ & $0.41$ & $0.42$ & $12$ & $50$ \\ GMM & $0.40$ & $0.66$ & $0.62$ & $0.41$ & $0.43$ & $62$ & $50$ \\ vMF-MM & $0.37$ & $0.65$ & $0.61$ & $0.38$ & $0.40$ & $267$ & $60$ \\ HAC & $0.42$ & $0.68$ & $0.65$ & $0.44$ & $0.45$ & $4$ & $55$ \\ CDP & $0.24$ & $0.68$ & $0.46$ & $0.26$ & $0.36$ & $2$ & $1,686$ \\ Spectral & $\boldsymbol{0.47}$ & $0.70$ & $\boldsymbol{0.68}$ & $\boldsymbol{0.48}$ & $\boldsymbol{0.50}$ & $99$ & $50$ \\ GCN-VE & $0.20$ & $0.65$ & $0.49$ & $0.22$ & $0.37$ & $1140$, $16$ & $905$ \\ STAR-FC & $0.36$ & $\boldsymbol{0.71}$ & $0.53$ & $0.37$ & $0.40$ & $341$, $1$ & $1,624$ \\ \bottomrule \end{tabular}} \end{center} \begin{center} \scalebox{0.96}{ \begin{tabular}{@{}cccccccc@{}} \toprule \multicolumn{8}{c}{SOP} \\ \midrule & Adj.
Rand Index & NMI & AMI & $F_P$ & $F_B$ & Time to Cluster (m) & Number of Clusters \\ \midrule DBSCAN & $0.29$ & $0.93$ & $0.49$ & $0.29$ & $0.58$ & $0.8$ & $17,801$ \\ MeanShift & $0.35$ & $0.93$ & $0.48$ & $0.35$ & $0.57$ & $54$ & $18,096$ \\ ARO & $0.52$ & $\boldsymbol{0.94}$ & $0.64$ & $0.52$ & $0.66$ & $0.2$ & $9,915$ \\ DP $k$-Means & $0.45$ & $0.93$ & $0.59$ & $0.45$ & $0.63$ & $13$ & $12,261$ \\ DP vMF $k$-Means & $0.45$ & $0.93$ & $0.59$ & $0.45$ & $0.63$ & $3$ & $11,282$ \\ $k$-Means & $0.46$ & $0.92$ & $0.58$ & $0.46$ & $0.61$ & $210$ & $7,750$ \\ Spherical $k$-Means & $0.45$ & $0.93$ & $0.59$ & $0.45$ & $0.62$ & $242$ & $8,250$ \\ HAC & $0.52$ & $0.93$ & $0.64$ & $0.52$ & $0.65$ & $4$ & $7,250$ \\ CDP & $\boldsymbol{0.56}$ & $\boldsymbol{0.94}$ & $\boldsymbol{0.67}$ & $\boldsymbol{0.56}$ & $\boldsymbol{0.69}$ & $0.1$ & $11,480$ \\ GCN-VE & $0.47$ & $0.93$ & $0.62$ & $0.47$ & $0.65$ & $20$, $13$s & $8,808$ \\ STAR-FC & $0.50$ & $0.93$ & $0.61$ & $0.50$ & $0.65$ & $8$, $2$s & $10,393$ \\ \bottomrule \end{tabular}} \end{center} \caption{Clustering results for Cars196 and SOP. NMI, AMI, $F_P$, and $F_B$ represent normalized mutual information, adjusted mutual information, Pairwise F-score, and BCubed F-score, respectively. The times to cluster for Cars196 and SOP are measured in seconds and minutes, respectively. For GCN-VE and STAR-FC, the time to cluster lists the train time and test time separated by a comma. Boldface indicates the highest value for a metric.} \end{table} \section{Comparison of $k$-Means Initialization Strategies} \label{sec:k_means_initializations} There are two common strategies for initializing clusters in $k$-means and spherical $k$-means. The simplest is to initialize clusters by randomly selecting embeddings from the dataset. The alternative is to use $k$-means++ \cite{Arthur2007}, which tries to pick initial centers that are generally distant from one another, leading to faster convergence and better expected performance.
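The seeding behind $k$-means++ can be sketched in a few lines: after a uniformly random first center, each subsequent center is sampled with probability proportional to its squared distance from the nearest already-chosen center (a sketch of the standard procedure; our experiments use the scikit-learn implementation):

```python
import numpy as np

def kmeanspp_init(x, k, rng):
    """k-means++ seeding: sample centers proportional to squared distance
    from the nearest already-chosen center (Arthur & Vassilvitskii, 2007)."""
    centers = [x[rng.integers(len(x))]]  # first center: uniform at random
    for _ in range(k - 1):
        # Squared distance from every point to its nearest chosen center.
        d2 = np.min(
            ((x[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1),
            axis=1,
        )
        centers.append(x[rng.choice(len(x), p=d2 / d2.sum())])
    return np.array(centers)

x = np.random.default_rng(0).normal(size=(500, 8))
centers = kmeanspp_init(x, 10, np.random.default_rng(1))
assert centers.shape == (10, 8)
```

Each of the $k-1$ rounds recomputes the nearest chosen center for all $n$ points, which is exactly the computation that dominates the runtime for large $k$.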
We find that $k$-means++ is indeed a better initialization strategy; however, for large $k$ it drastically increases the runtime. The increased runtime is caused by successively computing the nearest cluster-center for each embedding, and thus scales poorly as $k$ grows. For our experiments, we use the $k$-means++ implementation from scikit-learn and admit that performance could be improved by running nearest-neighbor computation on accelerated hardware or by using approximate nearest-neighbor methods. Based on our results summarized in the tables below, we find that the benefits of $k$-means++ initialization are not always significant (\eg, \cref{tab:k_means_dataset3}) and may not be worth the increased runtime, which for large $k$ can be upwards of 900x that of a random initialization (\eg, the SOP results in \cref{tab:k_means_cars_sop}). \begin{table}[H] \begin{center} \begin{tabular}{@{}ccccc@{}} \toprule \multicolumn{5}{c}{Cars196} \\ \midrule & Initialization & $F_P$ & $F_B$ & Time to Cluster (s) \\ \midrule $k$-Means & Random & $\boldsymbol{0.41}$ & $0.42$ & $1$ \\ Spherical $k$-Means & Random & $\boldsymbol{0.41}$ & $\boldsymbol{0.43}$ & $1$ \\ $k$-Means & $k$-Means++ & $\boldsymbol{0.41}$ & $0.42$ & $12$ \\ Spherical $k$-Means & $k$-Means++ & $\boldsymbol{0.41}$ & $0.42$ & $12$ \\ \bottomrule \end{tabular} \end{center} \begin{center} \begin{tabular}{@{}ccccc@{}} \toprule \multicolumn{5}{c}{SOP} \\ \midrule & Initialization & $F_P$ & $F_B$ & Time to Cluster (s) \\ \midrule $k$-Means & Random & $0.36$ & $0.53$ & $14$ \\ Spherical $k$-Means & Random & $0.36$ & $0.53$ & $31$ \\ $k$-Means & $k$-Means++ & $\boldsymbol{0.46}$ & $0.61$ & $12,591$ \\ Spherical $k$-Means & $k$-Means++ & $0.45$ & $\boldsymbol{0.62}$ & $14,491$ \\ \bottomrule \end{tabular} \end{center} \caption{Comparison of $k$-means initialization strategies on Cars196 and SOP. $F_P$ and $F_B$ represent Pairwise F-score and BCubed F-score, respectively.
Boldface indicates the highest value for a metric.} \label{tab:k_means_cars_sop} \end{table} \begin{table}[H] \begin{center} \begin{tabular}{@{}cccccccccccc@{}} \toprule & Initialization & \multicolumn{2}{c}{Test \#1} & \multicolumn{2}{c}{Test \#2} & \multicolumn{2}{c}{Test \#3} & \multicolumn{2}{c}{Test \#4} & \multicolumn{2}{c}{Test \#5} \\ \midrule & & $F_P$ & $F_B$ & $F_P$ & $F_B$ & $F_P$ & $F_B$ & $F_P$ & $F_B$ & $F_P$ & $F_B$ \\ \midrule Minibatch $k$-Means & Random & $\boldsymbol{0.64}$ & $0.65$ & $\boldsymbol{0.60}$ & $\boldsymbol{0.61}$ & $\boldsymbol{0.57}$ & $\boldsymbol{0.58}$ & $\boldsymbol{0.55}$ & $\boldsymbol{0.56}$ & $0.53$ & $0.54$ \\ Minibatch $k$-Means & $k$-Means++ & $\boldsymbol{0.64}$ & $\boldsymbol{0.66}$ & $\boldsymbol{0.60}$ & $\boldsymbol{0.61}$ & $\boldsymbol{0.57}$ & $\boldsymbol{0.58}$ & $\boldsymbol{0.55}$ & $\boldsymbol{0.56}$ & $\boldsymbol{0.54}$ & $\boldsymbol{0.55}$ \\ \bottomrule \end{tabular} \end{center} \begin{center} \begin{tabular}{@{}ccccccc@{}} \toprule & Initialization & Test \#1 & Test \#2 & Test \#3 & Test \#4 & Test \#5 \\ \midrule Minibatch $k$-Means & Random & $3$m & $7$m & $12$m & $18$m & $23$m \\ Minibatch $k$-Means & $k$-Means++ & $9$m & $41$m & $78$m & $147$m & $233$m \\ \bottomrule \end{tabular} \end{center} \caption{Comparison of $k$-means initialization strategies on Dataset 3. For each of the five Dataset 3 test splits, the top table contains Pairwise ($F_P$) and BCubed ($F_B$) F-scores, and the bottom table contains the time to cluster in minutes. Boldface indicates the highest value for a metric.} \label{tab:k_means_dataset3} \end{table} \section{Dataset 3 Degradation Study} \label{sec:dataset3_degradation} The table below contains clustering results for GCN-VE and HAC on test split \#1 of Dataset 3 where we reduce the dimensionality of the input embedding. We reduce the dimensionality as a method for degrading the embedding discriminability, measured via Recall@1 in \cref{tab:dataset3_degradation_recall}. 
These results are used to compute the harmonic means presented in \cref{fig:dataset3_degradation}. \begin{table}[H] \begin{center} \begin{tabular}{@{}ccccccccc@{}} \toprule \multicolumn{9}{c}{Dataset 3 Test Split \#1} \\ \midrule & \multicolumn{2}{c}{\multirow{2}{*}{32D}} & \multicolumn{2}{c}{\multirow{2}{*}{64D}} & \multicolumn{2}{c}{\multirow{2}{*}{128D}} & \multicolumn{2}{c}{256D} \\ & & & & & & & \multicolumn{2}{c}{(original)} \\ \midrule & $F_P$ & $F_B$ & $F_P$ & $F_B$ & $F_P$ & $F_B$ & $F_P$ & $F_B$ \\ \midrule HAC & $\boldsymbol{0.27}$ & $0.27$ & $\boldsymbol{0.53}$ & $0.54$ & $\boldsymbol{0.68}$ & $\boldsymbol{0.70}$ & $0.74$ & $0.75$ \\ GCN-VE & $0.26$ & $\boldsymbol{0.29}$ & $\boldsymbol{0.53}$ & $\boldsymbol{0.55}$ & $\boldsymbol{0.68}$ & $0.69$ & $\boldsymbol{0.78}$ & $\boldsymbol{0.79}$ \\ \bottomrule \end{tabular} \end{center} \caption{Pairwise ($F_P$) and BCubed ($F_B$) F-scores for HAC and GCN-VE on test split \#1 of Dataset 3 as the embedding dimensionality increases. Boldface indicates the highest value for a metric.} \label{tab:dataset3_degradation} \end{table} \section{Experimental Details} \label{sec:experimental_details} \subsection{Backbone} \label{sec:backbone_details} For Cars196 and SOP, the experimental details mimic Scott \etal \cite{Scott2021} unless explicitly noted. For completeness, we recount the details below. \subsubsection{Architecture} \label{sec:backbone_architecture} The backbone network architecture is a ResNet50 \cite{He2016}. The network is initialized using weights pretrained on ImageNet and all batch-normalization parameters are frozen. We remove the head of the architecture and add two fully-connected layers, with no activations, directly following the global-average-pooling layer. The first fully-connected layer maps from 2048 units to 256 units, and the second fully-connected layer maps from 256 units to $Y$ units, where $Y$ is the number of classes in the backbone-training split of the dataset. 
The embedding dimensionality is thus 256D. For Cars196 and SOP, $Y = 49$ and $Y = 5658$, respectively. \subsubsection{Dataset Augmentation} \label{sec:dataset_augmentation} \paragraph{Cars196.} Data augmentation during training includes: (1) resizing images to 256$\times$256, (2) random jittering of brightness, contrast, saturation, and hue with factors in [0.7, 1.3], [0.7, 1.3], [0.7, 1.3], and [-0.1, 0.1], respectively, (3) cropping at a random location with random size between 16\% and 100\% of the input size, (4) resizing the final cropped images to 224$\times$224, and (5) random horizontal flipping and $z$-score normalization. Data augmentation during validation and testing includes: (1) resizing images to 256$\times$256, (2) center cropping to 224$\times$224, and (3) $z$-score normalization. The mean and standard deviation for $z$-score normalization are the same values used for ImageNet. We match Boudiaf \etal \cite{Boudiaf2020} and sample batches randomly. \paragraph{SOP.} Data augmentation during training includes: (1) resizing images to 256$\times$256, (2) cropping at a random location with random size between 16\% and 100\% of the input size with the aspect ratio randomly selected in [0.75, 1.33], (3) resizing the final cropped images to 224$\times$224, and (4) random horizontal flipping and $z$-score normalization. Data augmentation during validation and testing includes: (1) resizing images to 256$\times$256, (2) center cropping to 224$\times$224, and (3) $z$-score normalization. The mean and standard deviation for $z$-score normalization are the same values used for ImageNet. We match Boudiaf \etal \cite{Boudiaf2020} and sample batches randomly. \subsubsection{Backbone Network Hyperparameters} \label{sec:backbone_hparams} Models are trained with SGD and either standard or Nesterov momentum. Based on results from Scott \etal \cite{Scott2021}, we use cosine softmax cross-entropy as the loss. 
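As a reference for this loss (a simplified numpy forward pass under our own naming, not the training code), cosine softmax cross-entropy scales the cosine similarity between the $\ell_2$-normalized embedding and $\ell_2$-normalized class weights by an inverse-temperature $\beta$, parameterized as $\beta = \exp(\tau)$ so that it stays positive:

```python
import numpy as np

def cosine_softmax_ce(z, w, tau, labels):
    """Cosine softmax cross-entropy with inverse-temperature beta = exp(tau).

    z: (n, d) embeddings, w: (c, d) class weights, labels: (n,) class ids."""
    zn = z / np.linalg.norm(z, axis=1, keepdims=True)
    wn = w / np.linalg.norm(w, axis=1, keepdims=True)
    logits = np.exp(tau) * (zn @ wn.T)            # beta * cosine similarity
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(z)), labels].mean()

rng = np.random.default_rng(0)
loss = cosine_softmax_ce(rng.normal(size=(32, 256)),
                         rng.normal(size=(49, 256)),  # Y = 49 classes (Cars196)
                         tau=0.0,
                         labels=rng.integers(0, 49, size=32))
assert np.isfinite(loss)  # for random inputs, roughly log(49)
```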
For Cars196 and SOP, the validation split's Recall@1 is monitored throughout training and if it does not improve for 15 epochs, the learning rate is cut in half. If the validation split's Recall@1 does not improve for 35 epochs, model training terminates early. The model parameters are saved for the epoch resulting in the highest validation Recall@1. The inverse-temperature, $\beta$, is parameterized as $\beta = \exp(\tau)$ where $\tau \in \mathbb{R}$ is an unconstrained network parameter learned automatically via gradient descent. Unlike Scott \etal \cite{Scott2021}, $\tau$ is optimized using the same learning rate as all other network parameters. The remaining hyperparameters for training the backbone network are: \begin{itemize} \itemsep0.1em \item Learning rate \item $\ell_2$ weight decay \item Momentum \item Nesterov momentum \item Initial value of $\tau$ \end{itemize} Hyperparameters were optimized with a grid search. The table below contains the hyperparameter values used for the model achieving the best validation Recall@1. \begin{table}[H] \begin{center} \begin{tabular}{@{}cccccc@{}} \toprule & Learning rate & $\ell_2$ weight decay & Momentum & Nesterov momentum & Initial value of $\tau$ \\ \midrule Cars196 & $0.005$ & $5\times10^{-5}$ & $0.0$ & False & $0.0$ \\ SOP & $0.005$ & $5\times10^{-5}$ & $0.9$ & True & $-2.773$ \\ \bottomrule \end{tabular} \end{center} \caption{Backbone hyperparameter values for Cars196 and SOP.} \end{table} \subsection{Clustering} \label{sec:clustering_details} For clustering, the inputs are the 256D embeddings from the penultimate output of the trained backbone network. The embeddings are $\ell_2$-normalized unless explicitly noted otherwise. To produce the embeddings for clustering, the images were augmented using the testing augmentation, which is: (1) resizing images to 256$\times$256, (2) center cropping to 224$\times$224, and (3) $z$-score normalization. 
\subsubsection{GCN-VE Details} \label{sec:gcn_ve_details} We match Yang \etal \cite{Yang2020} and use a one-layer graph convolutional network (GCN) \cite{Kipf2017} with mean aggregation \cite{Yang2019} and ReLU activation for GCN-V, and a similar four-layer graph convolutional network for GCN-E. GCN-V maps the 512D output of the GCN through a fully-connected layer to 512 units, then through a PReLU activation, followed by a final fully-connected layer to a single output unit. The model is trained with mean-squared-error to match an empirically-estimated confidence based on the density of an embedding's neighborhood. GCN-E has identical structure to GCN-V except that it contains a four-layer GCN, the GCN output is 256D, and the first fully-connected layer has 256 units. GCN-E is trained with softmax cross-entropy to predict the likelihood that a pair of nearby embeddings belong to the same cluster. The GCN layers are initialized using Xavier-uniform for the weights and zero for the biases. The fully-connected layers are initialized using Kaiming-uniform for both weights and biases. Both GCN-V and GCN-E are trained with SGD and standard momentum. If the validation split's loss does not decrease for 100 epochs, the learning rate is cut in half, and if it does not decrease for 250 epochs, model training terminates early. The model parameters are saved for the epoch resulting in the lowest validation loss. \subsubsection{STAR-FC Details} \label{sec:star_fc_details} We match Shen \etal \cite{Shen2021} and use a one-layer graph convolutional network (GCN) \cite{Kipf2017} with mean aggregation \cite{Yang2019} and ReLU activation. The GCN output is 1024D which is then passed through a fully-connected layer to 512 units, then through a PReLU activation, followed by a final fully-connected layer to a single output unit. The model is trained with softmax cross-entropy to predict the likelihood that a pair of embeddings belong to the same cluster. 
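The shared architectural pattern of these models (a shallow mean-aggregation GCN followed by a small fully-connected head, here shown with the per-vertex confidence output of GCN-V) can be sketched as follows; this is a structural sketch with our own weight naming, not the trained models:

```python
import numpy as np

def gcn_v_score(x, adj, w_gcn, w1, b1, w2, b2, alpha=0.25):
    """One-layer mean-aggregation GCN, then FC -> PReLU -> FC -> confidence."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    h = np.concatenate([x, adj @ x / deg], axis=1)  # self features + neighbor mean
    h = np.maximum(h @ w_gcn, 0)                    # GCN layer with ReLU
    h = h @ w1 + b1
    h = np.where(h > 0, h, alpha * h)               # PReLU
    return (h @ w2 + b2).ravel()                    # one confidence per vertex

rng = np.random.default_rng(0)
n, d = 20, 256
x = rng.normal(size=(n, d))
adj = (rng.random((n, n)) < 0.2).astype(float)      # toy nearest-neighbor graph
scores = gcn_v_score(x, adj,
                     0.01 * rng.normal(size=(2 * d, 512)),
                     0.01 * rng.normal(size=(512, 512)), np.zeros(512),
                     0.01 * rng.normal(size=(512, 1)), np.zeros(1))
assert scores.shape == (n,)
```

The pairwise heads of GCN-E and STAR-FC follow the same pattern but score pairs of nearby embeddings instead of single vertices.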
The GCN layers are initialized using Xavier-uniform for the weights and zero for the biases. The fully-connected layers are initialized using Kaiming-uniform for both weights and biases. STAR-FC is trained with SGD and standard momentum. If the validation split's loss does not decrease for 15 epochs, the learning rate is cut in half, and if it does not decrease for 35 epochs, model training terminates early. The model parameters are saved for the epoch resulting in the lowest validation loss. \subsubsection{Clustering Hyperparameters} \label{sec:clustering_hparams} For each clustering method, we list the hyperparameters and their values for Cars196 and SOP. All hyperparameters were optimized with a grid search. \paragraph{$k$-Means with Random Initialization \cite{Lloyd1982,Sculley2010}.} The only hyperparameters are $k$, the number of clusters, and the minibatch size. A minibatch size of -1 indicates the batch size is equal to the full dataset size. \begin{table}[H] \begin{center} \begin{tabular}{@{}ccc@{}} \toprule & $k$ & Minibatch size \\ \midrule Cars196 & $40$ & $-1$ \\ SOP & $8,250$ & $-1$ \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for $k$-means with random initialization.} \end{table} \paragraph{$k$-Means with $k$-Means++ Initialization \cite{Arthur2007,Sculley2010}.} The only hyperparameters are $k$, the number of clusters, and the minibatch size. A minibatch size of -1 indicates the batch size is equal to the full dataset size. \begin{table}[H] \begin{center} \begin{tabular}{@{}ccc@{}} \toprule & $k$ & Minibatch size \\ \midrule Cars196 & $50$ & $-1$ \\ SOP & $7,750$ & $-1$ \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for $k$-means with $k$-means++ initialization.} \end{table} \paragraph{Spherical $k$-Means with Random Initialization \cite{Hornik2012}.} The only hyperparameter is $k$, the number of clusters. 
Due to the performance of spherical $k$-means closely matching that of $k$-means with $\ell_2$-normalized embeddings, and the lack of a readily-available implementation of minibatch spherical $k$-means, we only ran spherical $k$-means for Cars196 and SOP, where the minibatch size could be -1. \begin{table}[H] \begin{center} \begin{tabular}{@{}cc@{}} \toprule & $k$ \\ \midrule Cars196 & $45$ \\ SOP & $8,000$ \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for spherical $k$-means with random initialization.} \end{table} \paragraph{Spherical $k$-Means with $k$-Means++ Initialization \cite{Hornik2012}.} The only hyperparameter is $k$, the number of clusters. Due to the performance of spherical $k$-means closely matching that of $k$-means with $\ell_2$-normalized embeddings, and the lack of a readily-available implementation of minibatch spherical $k$-means, we only ran spherical $k$-means for Cars196 and SOP, where the minibatch size could be -1. \begin{table}[H] \begin{center} \begin{tabular}{@{}cc@{}} \toprule & $k$ \\ \midrule Cars196 & $50$ \\ SOP & $8,250$ \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for spherical $k$-means with $k$-means++ initialization.} \end{table} \paragraph{Dirichlet-Process (DP) $k$-Means \cite{Kulis2012}.} The hyperparameters are: \begin{itemize} \itemsep0.1em \item $\lambda$, the cluster penalty \item Initialization strategy, either `Global Centroid' or `Random' \item Whether to do an online EM update \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{@{}cccc@{}} \toprule & $\lambda$ & Initialization strategy & Online EM \\ \midrule Cars196 & $0.9$ & Global Centroid & True \\ SOP & $0.85$ & Random & False \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for DP $k$-means.} \end{table} \paragraph{Dirichlet-Process (DP) von Mises--Fisher (vMF) $k$-Means \cite{Straub2015}.} The hyperparameters are: \begin{itemize} \itemsep0.1em \item $\lambda$, the cluster 
penalty \item Initialization strategy, either `Global Centroid' or `Random' \item Whether to do an online EM update \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{@{}cccc@{}} \toprule & $\lambda$ & Initialization strategy & Online EM \\ \midrule Cars196 & $-0.55$ & Random & True \\ SOP & $-0.38$ & Random & False \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for DP vMF $k$-means.} \end{table} \paragraph{Gaussian Mixture Model (GMM).} The hyperparameters are: \begin{itemize} \itemsep0.1em \item $k$, the number of clusters \item Initialization strategy, either `k-Means' or `Random' \item Type of covariance, either `Full' or `Diagonal' \end{itemize} GMM was run only on Cars196 due to runtime inefficiencies. \begin{table}[H] \begin{center} \begin{tabular}{@{}cccc@{}} \toprule & $k$ & Initialization strategy & Covariance type \\ \midrule Cars196 & $50$ & $k$-Means & Diagonal \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for GMM.} \end{table} \paragraph{von Mises--Fisher Mixture Model (vMF-MM) \cite{Gopal2014,Banerjee2005}.} The hyperparameters are: \begin{itemize} \itemsep0.1em \item $k$, the number of clusters \item Initialization strategy, one of `k-Means++', `Random', `Random-Class', or `Random-Orthonormal' \item Posterior type, either `Hard' or `Soft' \end{itemize} vMF-MM was run only on Cars196 due to runtime inefficiencies.
\begin{table}[H] \begin{center} \begin{tabular}{@{}cccc@{}} \toprule & $k$ & Initialization strategy & Posterior type \\ \midrule Cars196 & $50$ & Random-Class & Soft \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for vMF-MM.} \end{table} \paragraph{Hierarchical Agglomerative Clustering (HAC) \cite{Sibson1973}.} The hyperparameters are: \begin{itemize} \itemsep0.1em \item $k$, the number of clusters \item Affinity metric, one of `Euclidean' or `Cosine' \item Linkage criterion, one of `Ward', `Complete', `Average', or `Single' \item Number of neighbors for computing the nearest-neighbor affinity subgraph \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{@{}ccccc@{}} \toprule & $k$ & Affinity metric & Linkage criterion & Number of neighbors \\ \midrule Cars196 & $55$ & Euclidean & Ward & $40$ \\ SOP & $7,250$ & Euclidean & Ward & $2$ \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for HAC.} \end{table} \paragraph{Approximate Rank Order (ARO) \cite{Otto2018}.} The hyperparameters are: \begin{itemize} \itemsep0.1em \item Number of neighbors for computing the nearest-neighbors subgraph \item Threshold on rank-order distances \item Distance metric, one of `Euclidean' or `Cosine' \end{itemize} Note that for $\ell_2$-normalized embeddings, both Euclidean and cosine distance metrics produce identical results because the rank-order is unchanged. 
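This follows from $\lVert a - b \rVert^2 = 2 - 2\,a^{\top}b$ for unit vectors: Euclidean distance is a monotone function of cosine distance, so nearest-neighbor rankings coincide. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 32))
x /= np.linalg.norm(x, axis=1, keepdims=True)  # l2-normalize

q = x[0]
euclid = np.linalg.norm(x - q, axis=1)
cosine = 1.0 - x @ q
# On the unit sphere, ||a - b||^2 = 2 - 2 a.b, so the orderings coincide.
assert np.allclose(euclid ** 2, 2.0 * cosine)
assert np.array_equal(np.argsort(euclid), np.argsort(cosine))
```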
\begin{table}[H] \begin{center} \begin{tabular}{@{}cccc@{}} \toprule & Number of neighbors & Threshold & Distance metric \\ \midrule Cars196 & $30$ & $0.65$ & Euclidean \\ SOP & $5$ & $1.0$ & Euclidean \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for ARO.} \end{table} \paragraph{Density-Based Spatial Clustering of Applications with Noise (DBSCAN) \cite{Ester1996}.} The hyperparameters are: \begin{itemize} \itemsep0.1em \item $\epsilon$, the maximum distance between two embeddings in the same neighborhood \item Minimum number of samples in neighborhood for core point \item Distance metric, either `Euclidean' or `Cosine' \item Whether a sparse matrix is used for nearest-neighbor subgraph \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{@{}ccccc@{}} \toprule & $\epsilon$ & Minimum number of samples & Distance metric & Sparse matrix \\ \midrule Cars196 & $0.25$ & $5$ & Cosine & False \\ SOP & $0.66$ & $2$ & Euclidean & False \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for DBSCAN.} \end{table} \paragraph{MeanShift \cite{Cheng1995}.} The hyperparameters are: \begin{itemize} \itemsep0.1em \item Bandwidth for RBF kernel \item Minimum bin frequency \item Whether to cluster all embeddings including orphans \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{@{}cccc@{}} \toprule & Bandwidth & Minimum bin frequency & Cluster all embeddings \\ \midrule Cars196 & $0.75$ & $5$ & True \\ SOP & $0.65$ & 1 & False \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for MeanShift.} \end{table} \paragraph{Spectral Clustering \cite{Ng2001}.} The hyperparameters are: \begin{itemize} \itemsep0.1em \item $k$, the number of clusters \item Method for constructing the affinity matrix \item Number of neighbors for computing the nearest-neighbor affinity subgraph \item Number of components for the spectral embedding \end{itemize} Spectral clustering was run only on Cars196 due to runtime 
inefficiencies. \begin{table}[H] \begin{center} \begin{tabular}{@{}ccccc@{}} \toprule & $k$ & Affinity & Number of neighbors & Number of components \\ \midrule Cars196 & $50$ & Nearest neighbor subgraph & $20$ & $50$ \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for spectral clustering.} \end{table} \paragraph{Consensus-Driven Propagation (CDP) \cite{Zhan2018}.} The hyperparameters are: \begin{itemize} \itemsep0.1em \item Number of neighbors for computing the nearest-neighbor subgraph \item Threshold on cosine similarity in the nearest-neighbor subgraph for pruning edges \item Threshold step size \item Max cluster size \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{@{}ccccc@{}} \toprule & Number of neighbors & Threshold & Threshold step size & Max cluster size \\ \midrule Cars196 & $2$ & $0.6$ & $0.01$ & $320$ \\ SOP & $5$ & $-0.2$ & $0.05$ & $10$ \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for CDP.} \end{table} \paragraph{Tree Deduction \cite{Yang2020}.} The hyperparameters are the number of neighbors for computing the nearest-neighbor subgraph and a threshold on cosine similarity for pruning edges during tree deduction. \begin{table}[H] \begin{center} \begin{tabular}{@{}ccc@{}} \toprule & Number of neighbors & Threshold \\ \midrule Cars196 & $2$ & $0.75$ \\ SOP & $1$ & $0.6$ \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for tree deduction.} \end{table} \paragraph{Tree Deduction with Embedding $\ell_2$-Norm Confidence.} The hyperparameters are the number of neighbors for computing the nearest-neighbor subgraph and a threshold on cosine similarity for pruning edges during tree deduction. 
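As a reference for the deduction step (our reading of Yang \etal \cite{Yang2020}; a simplified sketch, not the benchmarked implementation): every embedding links to its most-similar neighbor of higher confidence, links whose cosine similarity falls below the threshold are pruned, and clusters are read off as the resulting trees. In the $\ell_2$-norm variant, the embedding norm plays the role of the confidence:

```python
import numpy as np

def tree_deduce(emb, k, threshold):
    """Link each sample to its most-similar neighbor of higher confidence
    (here: the embedding l2-norm), prune weak links, read off clusters."""
    conf = np.linalg.norm(emb, axis=1)             # l2-norm confidence
    x = emb / conf[:, None]
    sims = x @ x.T
    np.fill_diagonal(sims, -np.inf)
    neighbors = np.argsort(-sims, axis=1)[:, :k]   # k nearest neighbors
    parent = np.arange(len(emb))
    for i in range(len(emb)):
        for j in neighbors[i]:
            if conf[j] > conf[i] and sims[i, j] >= threshold:
                parent[i] = j                      # best qualifying link
                break
    labels = np.empty(len(emb), dtype=int)         # follow links to each root
    for i in range(len(emb)):
        r = i
        while parent[r] != r:
            r = parent[r]
        labels[i] = r
    return labels

rng = np.random.default_rng(0)
a = 3.0 * np.eye(8)[0] + 0.05 * rng.normal(size=(20, 8))  # toy cluster 1
b = 3.0 * np.eye(8)[1] + 0.05 * rng.normal(size=(20, 8))  # toy cluster 2
labels = tree_deduce(np.vstack([a, b]), k=5, threshold=0.5)
assert max(labels[:20]) < 20 and min(labels[20:]) >= 20    # clusters never mix
```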
\begin{table}[H] \begin{center} \begin{tabular}{@{}ccc@{}} \toprule & Number of neighbors & Threshold \\ \midrule Cars196 & $20$ & $0.6$ \\ SOP & $3$ & $0.65$ \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for tree deduction with embedding $\ell_2$-norm confidence.} \end{table} \paragraph{Embedding $\ell_2$-Norm Confidence + GCN-E + Tree Deduction.} The hyperparameters are: \begin{itemize} \itemsep0.1em \item Number of neighbors for computing the adjacency matrix (N) \item Threshold on cosine similarity for pruning edges in the adjacency matrix ($\tau_1$) \item Ignore ratio (IR) \item Threshold on cosine similarity for pruning edges during tree deduction ($\tau_2$) \item Number of units in the hidden layers of the network (H) \item Learning rate (LR) \item Momentum (MOM) \item $\ell_2$ weight decay ($\ell_2$) \item Dropout probability (D) \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{@{}cccccccccc@{}} \toprule & N & $\tau_1$ & IR & $\tau_2$ & H & LR & MOM & $\ell_2$ & D \\ \midrule Cars196 & $60$ & $0.0$ & $0.1$ & $0.6$ & $512$ & $0.01$ & $0.9$ & $1\times10^{-5}$ & $0.2$ \\ SOP & $5$ & $0.0$ & $0.7$ & $0.7$ & $512$ & $0.01$ & $0.9$ & $1\times10^{-5}$ & $0.0$ \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for embedding $\ell_2$-norm confidence + GCN-E + tree deduction.} \end{table} \paragraph{GCN-VE \cite{Yang2020}.} GCN-VE has two sub-networks, GCN-V and GCN-E. We present the hyperparameters associated with each sub-network independently. 
The hyperparameters for GCN-V are: \begin{itemize} \itemsep0.1em \item Number of neighbors for computing the adjacency matrix (N) \item Threshold on cosine similarity for pruning edges in the adjacency matrix ($\tau$) \item Number of units in the hidden layers of the network (H) \item Learning rate (LR) \item Momentum (MOM) \item $\ell_2$ weight decay ($\ell_2$) \item Dropout probability (D) \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{@{}cccccccc@{}} \toprule & N & $\tau$ & H & LR & MOM & $\ell_2$ & D \\ \midrule Cars196 & $10$ & $0.0$ & $512$ & $0.01$ & $0.9$ & $1\times10^{-5}$ & $0.0$ \\ SOP & $2$ & $0.0$ & $512$ & $0.1$ & $0.9$ & $1\times10^{-5}$ & $0.0$ \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for GCN-V.} \end{table} The hyperparameters for GCN-E are: \begin{itemize} \itemsep0.1em \item Number of neighbors for computing the adjacency matrix (N) \item Threshold on cosine similarity for pruning edges in the adjacency matrix ($\tau_1$) \item Ignore ratio (IR) \item Threshold on cosine similarity for pruning edges during tree deduction ($\tau_2$) \item Number of units in the hidden layers of the network (H) \item Learning rate (LR) \item Momentum (MOM) \item $\ell_2$ weight decay ($\ell_2$) \item Dropout probability (D) \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{@{}cccccccccc@{}} \toprule & N & $\tau_1$ & IR & $\tau_2$ & H & LR & MOM & $\ell_2$ & D \\ \midrule Cars196 & $200$ & $0.0$ & $0.0$ & $0.7$ & $512$ & $0.01$ & $0.9$ & $1\times10^{-5}$ & $0.0$ \\ SOP & $5$ & $0.0$ & $0.7$ & $0.8$ & $512$ & $0.01$ & $0.9$ & $1\times10^{-5}$ & $0.0$ \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for GCN-E.} \end{table} \paragraph{STAR-FC \cite{Shen2021}.} The hyperparameters are: \begin{itemize} \itemsep0.1em \item Number of neighbors for computing the adjacency matrix (N) \item Threshold on cosine similarity for pruning edges in the adjacency matrix ($\tau$) \item Number of seed
clusters (SC) \item Number of nearest clusters (NC) \item Number of random clusters (RC) \item Random node proportion (RNP) \item Prune threshold (P) \item Intimacy threshold (I) \item Number of units in the hidden layers of the network (H) \item Learning rate (LR) \item Momentum (MOM) \item $\ell_2$ weight decay ($\ell_2$) \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{@{}ccccccccccccc@{}} \toprule & N & $\tau$ & SC & NC & RC & RNP & P & I & H & LR & MOM & $\ell_2$ \\ \midrule Cars196 & $10$ & $0.0$ & $4$ & $6$ & $10$ & $0.9$ & $0.15$ & $0.6$ & $512$ & $0.1$ & $0.9$ & $1\times10^{-5}$ \\ SOP & $3$ & $0.0$ & $4$ & $500$ & $250$ & $0.9$ & $0.5$ & $0.7$ & $512$ & $0.1$ & $0.9$ & $1\times10^{-5}$ \\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameter values for STAR-FC.} \end{table}
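For convenience, the per-dataset settings tabulated throughout this appendix can be collected into a single lookup structure. A minimal sketch in Python follows; the dictionary layout and helper name are ours and purely illustrative, and only a few of the methods above are transcribed:

```python
# Per-dataset hyperparameters transcribed from the tables above.
# The dictionary layout and the lookup helper are illustrative only.
HYPERPARAMS = {
    "DBSCAN": {
        "Cars196": {"eps": 0.25, "min_samples": 5, "metric": "cosine", "sparse": False},
        "SOP": {"eps": 0.66, "min_samples": 2, "metric": "euclidean", "sparse": False},
    },
    "MeanShift": {
        "Cars196": {"bandwidth": 0.75, "min_bin_freq": 5, "cluster_all": True},
        "SOP": {"bandwidth": 0.65, "min_bin_freq": 1, "cluster_all": False},
    },
    "STAR-FC": {
        "Cars196": {"N": 10, "tau": 0.0, "lr": 0.1, "momentum": 0.9, "weight_decay": 1e-5},
        "SOP": {"N": 3, "tau": 0.0, "lr": 0.1, "momentum": 0.9, "weight_decay": 1e-5},
    },
}

def get_config(method, dataset):
    """Return the hyperparameter dict for one method/dataset pair."""
    return HYPERPARAMS[method][dataset]
```

Keeping the settings in one mapping of this kind makes it straightforward to sweep methods and datasets in a single loop.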
\section{Introduction} Gamma-ray bursts (GRBs) are objects emitting high-energy photons. The detection of a GeV photon from GRB 940217 was reported 17 years ago \citep{hurley94}. Recently, with the Large Area Telescope (LAT) on board the {\it Fermi} satellite, many more observational cases have become available for the study of GRB high-energy emission above 100 MeV. More importantly, the published multi-wavelength data of GRB 090510 and GRB 100728A have been provided by the simultaneous observations of {\it Swift} and {\it Fermi} \citep{depasquale10,abdo11}. Some radiation models can be constrained by these simultaneous data. It is hard to explain the high-energy emission from GRB 941017 with the simple synchrotron model \citep{gonzalez03}. In general, the photons produced by synchrotron radiation can be scattered by the relativistic electrons. Therefore, an inverse Compton process or synchrotron self-Compton (SSC) process is naturally proposed to explain the GRB emission above the GeV band \citep{meszaros93,dermer00,wang01,zhang01,granot03,fan08,zou09,corsi10}. In particular, it was suggested that the photons from X-ray flares can be scattered to the GeV band by relativistic electrons \citep{wang06,galli08}. From a theoretical point of view, \citet{meszaros94} first mentioned the physics of turbulent field growth in the study of GRB radiation. \citet{narayan09} proposed a model in which the GRB radiating fluid is relativistically turbulent. This turbulent process, combined with the inverse Compton mechanism, was applied to the study of radiation in GRB 080319B \citep{kumar09}. In the turbulent fluid, random, small emitters can produce short-time variabilities, accounting for the many pulses seen in the GRB prompt light curve \citep{lyutikov06,lazar09}. The key point of this ``jet-in-jet'' model is worth noting: these microemitters within the bulk jet of the GRB explosion also have a jet structure.
The turbulent scenario mentioned above is consistent with the principle of the jitter mechanism. Jitter radiation, which is the emission of relativistic electrons in a random and small-scale magnetic field, has been applied to GRB research \citep{medvedev00, medvedev06}. The random and small-scale magnetic field can be generated by the Weibel instability \citep{medvedev99}. Alternatively, we propose that the turbulent cascade process can also produce the random and small-scale magnetic field \citep{mao07,mao11}. As the magnetic field may have a sub-Larmor length scale, jitter radiation in a sub-Larmor-scale magnetic field was fully studied by \citet{med09} and \citet{med11}. The small-scale turbulent dynamo with large Reynolds numbers at a saturated state in a fluid flow was simulated by \citet{sch04}. The simulation identified a power-law turbulent energy spectrum. In our model, the complicated jitter radiation is simplified to a one-dimensional case to study GRB prompt emission. In this specific treatment of jitter radiation, the spectral index of a single electron's emission is directly related to the turbulent energy spectrum. In general, the electron energy distribution is assumed to be a power law. However, in the turbulent framework, stochastic acceleration may be effective. \citet{schlickeiser89a,schlickeiser89b} found a Maxwellian energy distribution of electrons. A similar quasithermal stationary solution of stochastic acceleration was given by \citet{ka06}. In a turbulent magnetic field, the stochastic acceleration of ultrarelativistic electrons was also discussed by \citet{stawarz08}. With the Maxwellian electron energy distribution, the radiative spectrum and light curve of the GRB afterglow were calculated by \citet{giannios09}. As suggested by \citet{kirk10}, if the jitter photons in the keV band are scattered by the relativistic electrons, the final output emission will be in the GeV band.
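The keV-to-GeV upscattering invoked above follows from the standard inverse-Compton frequency boost, $\nu\sim 4\gamma^{2}\nu_{0}$ in the Thomson regime. A quick order-of-magnitude sketch (the chosen Lorentz factor is illustrative, not a fitted value):

```python
# Order-of-magnitude check of the inverse-Compton boost nu ~ 4*gamma^2*nu0:
# jitter seed photons at ~1 keV scattered by electrons with gamma ~ 1e3
# emerge in the GeV band.
gamma = 1.0e3          # illustrative electron Lorentz factor
E_seed_keV = 1.0       # seed photon energy (keV)
E_out_keV = 4.0 * gamma**2 * E_seed_keV
E_out_GeV = E_out_keV * 1.0e-6   # 1 GeV = 1e6 keV
print(E_out_GeV)       # -> 4.0, i.e. a few GeV
```

Electrons with $\gamma\sim 10^{3}$ thus suffice to move keV jitter photons into the GeV band.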
In this work, we attempt to calculate the inverse Compton scattering of jitter radiation. By analogy with the SSC mechanism, this process can be called the ``jitter self-Compton'' (JSC) mechanism. At present, we have only two published GRB data sets (GRB 090510 and GRB 100728A) obtained by simultaneous observations from {\it Swift} and {\it Fermi}. In particular, the extremely powerful X-ray flares and GeV emission of GRB 100728A were observed by the {\it Swift}/X-ray telescope (XRT) and {\it Fermi}/LAT, respectively. Thus, the case of GRB 100728A provides us with an excellent opportunity to study the powerful GeV emission and the link between the keV and GeV emissions in the GRB multiwavelength spectrum. In this work, the multiwavelength spectral result of GRB 100728A is especially valuable for constraining our theoretical model of jitter/JSC radiation. Furthermore, following the clues of \citet{lyutikov06} and \citet{lazar09}, we expect that the observed gross emission from a bulk jet launched by the GRB explosion might be related to the emissions from the small-scale emitters with the minijet structure. Therefore, in this paper, we stress the following issues. (1) To test the jitter/JSC mechanism, which may work for GRB prompt/GeV-band emission, we use the multiwavelength spectrum of GRB 100728A \citep{abdo11} as an example. (2) In our former research \citep{mao11}, jitter was identified as a possible radiation mechanism for the GRB prompt emission, with turbulence as the dominant dynamic process. Here, further detailed calculations are required to fit the observational data of GRB 100728A. (3) The JSC mechanism can be examined because the multiwavelength spectrum of GRB 100728A has been given. (4) The final JSC result is dependent on the electron energy distribution.
(5) Since the turbulent dynamics has been applied in \citet{mao11}, as a subsequent step, the link between the microemitters with the minijet structure and the emission of the GRB bulk jet should be considered. (6) As calculated by \citet{mao11}, the cooling timescale of relativistic electrons has a typical value of about $10^{-8}$ s. The observed duration of GRB prompt emission is much longer than the cooling timescale, so further explanation is required. In Section 2.1, we first briefly describe our specific case of jitter radiation and then fully present the JSC process. In Section 2.2, we illustrate the ``jet-in-jet'' scenario and link the minijets to the bulk jet structure. In Section 2.3, combined with the ``jet-in-jet'' effect, our jitter/JSC model can reproduce the multiwavelength spectral properties of GRB 100728A. In particular, we focus on the GeV emission detected by {\it{Fermi}}/LAT and the JSC process. The observed GRB duration can also be estimated. Conclusions and discussion are given in Section 3. \section{JSC Process and Application to GRB 100728A} There are two important issues concerning the following jitter/JSC calculations: (1) Because stochastic acceleration is one of the key points in this work, the Maxwellian energy distribution of relativistic electrons should be applied in the calculation. (2) The gross temporal profile of GRB prompt emission can be the result of the superposition of a large number of short-timescale pulses. \subsection{JSC Process} Jitter radiation is the emission of relativistic electrons in a random and small-scale magnetic field. In a one-dimensional case, the simplified formulas of jitter radiation have been derived \citep{mao07,mao11}. We also propose that a random and small-scale magnetic field can be produced by turbulence. The radiative intensity of a single electron has a power-law shape, and the spectral index is related to the energy spectrum of the turbulent flow.
Here, we write the radiative intensity in units of $\rm{erg~s^{-1}~Hz^{-1}}$ as \begin{equation} I_{\nu,\rm{jitter}}=\frac{8e^4}{3m_e^2c^3}\nu^{-(\zeta_p-1)}, \end{equation} where $\zeta_p$ is the index determined by the turbulent energy cascade and $c$ is the speed of light. In this simplified case, we note that the jitter radiation of a single electron is not related to the electron Lorentz factor $\gamma$. The radiative flux of a single electron can be simply estimated by $I_{\nu,\rm{jitter}}ct_{\rm{cool}}$, where $t_{\rm{cool}}$ is the radiative cooling timescale of relativistic electrons. The process of inverse Compton scattering can be calculated by a standard recipe \citep{rybicki79}. The SSC radiation has been fully discussed as well \citep{chiang99}. In principle, our JSC process can follow the same SSC calculation procedure. The emission flux density in units of $\rm{erg~s^{-1}~cm^{-3}~Hz^{-1}}$ is \begin{equation} j_{\nu,\rm{jsc}}=8\pi r_0^2ch\int_{\nu_{0,\rm{min}}}^{\nu_{0,\rm{max}}}\int_{\gamma_{\rm{min}}}^{\gamma_{\rm{max}}} N(\gamma)f(\nu/4\gamma^2\nu_0)n_{\rm{ph}}(\nu_0)d\nu_0d\gamma, \end{equation} where $f(x)=x+2x^2\ln x+x^2-2x^3$ for $0<x<1$, $f(x)=0$ for $x>1$, and $x\equiv \nu/4\gamma^2\nu_0$. The Thomson scattering cross section is $\sigma_T=8\pi r_0^2/3=6.65\times 10^{-25}~\rm{cm^2}$. The number density of seed photons, $n_{\rm{ph}}(\nu_0)$, can be easily calculated from the jitter radiation as $n_{\rm{ph}}(\nu_0)=t_{\rm{cool}}\int (I_{\nu,\rm{jitter}}/h\nu) d\nu \int N(\gamma)d\gamma$, where $N(\gamma)$ is the electron energy distribution. $\nu_{0,\rm{min}}$ and $\nu_{0,\rm{max}}$ are the lower and upper limits of the jitter radiative frequency, respectively. $\gamma_{\rm{min}}$ and $\gamma_{\rm{max}}$ are the lower and upper limits of the relativistic electron Lorentz factor, respectively. In general, the electron energy distribution can be given as a power law: $N(\gamma)\propto \gamma^{-p}$, where $p=2.2$.
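As a sanity check, the scattering kernel $f(x)$ entering Equation (2) can be evaluated numerically: it vanishes at both kinematic endpoints and is non-negative in between. A standalone sketch (not the code used for the figures):

```python
import math

def f_kernel(x):
    """Inverse-Compton scattering kernel of Eq. (2):
    f(x) = x + 2*x^2*ln(x) + x^2 - 2*x^3 for 0 < x < 1, else 0."""
    if 0.0 < x < 1.0:
        return x + 2.0 * x**2 * math.log(x) + x**2 - 2.0 * x**3
    return 0.0

# Endpoint behaviour: f -> 0 as x -> 0 and f(x) = 0 at and beyond x = 1.
print(f_kernel(1.0))   # -> 0.0
```

The kernel peaks at intermediate $x$ and suppresses scatterings near the kinematic limit $x=1$, as expected for isotropic inverse Compton scattering in the Thomson regime.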
In this paper, the turbulent process is one of the vital points for GRB prompt emission and particle acceleration. As mentioned in Section 1, due to stochastic acceleration, a Maxwellian electron energy distribution can be obtained. Here, we follow the description of the electron energy distribution given by \citet{giannios09} as \begin{equation} N(\gamma)=C\gamma^2\exp(-\gamma/\Theta)/2\Theta^3 \end{equation} for $\gamma\le \gamma_{\rm{nth}}$ and \begin{equation} N(\gamma)=C[\gamma^2\exp(-\gamma/\Theta)/2\Theta^3](\gamma/\gamma_{\rm{nth}})^{-p} \end{equation} for $\gamma > \gamma_{\rm{nth}}$, where $C$ is the normalization constant, $\gamma_{\rm{nth}}$ is the Lorentz factor connecting the Maxwellian and power-law components, and $\Theta= kT/m_ec^2$ is a characteristic temperature. We use this mixed thermal-nonthermal electron energy distribution to calculate the jitter/JSC radiation. In the case of $\gamma_{\rm{min}}=\gamma_{\rm{nth}}$, the mixed thermal-nonthermal distribution reduces to a pure power-law distribution. \subsection{Jet-in-jet Scenario} We draw a sketch to illustrate the ``jet-in-jet'' scenario, as shown in Figure 1. The term ``jet-in-jet'' means that the microemitters radiating as minijets are within the bulk jet. \citet{giannios10} proposed an ``off-axis'' parameter $\alpha$ defined by $\theta_j=\alpha/\Gamma_j$, where $\Gamma_j$ is the bulk Lorentz factor of the jet launched by the GRB and $\theta_j$ is the corresponding viewing angle. The gross Lorentz factor can be derived as $\Gamma=2\Gamma_j\Gamma_e/\alpha^2$, where $\Gamma_e$ is the Lorentz factor of the minijet. In our work, because these minijets point in random directions within the bulk jet but all of them move with a general turbulent velocity, we use $\Gamma_e\sim \Gamma_t$, where $\Gamma_t=10$ is the turbulent Lorentz factor adopted by \citet{narayan09}. The probability of observing these minijets can be estimated by $P=2\pi\int_0^\theta \sin\theta' d\theta'/4\pi=\theta^2/4=1/4\Gamma^2$.
The observed flux is $\nu f(\nu)=P\delta^{2+w}\nu'f(\nu')$, where $\nu'f(\nu')$ is the flux calculated in the GRB shell frame, $w$ is the spectral index, and $\delta$ is the Doppler factor. Here, we take $w=1$ and $\delta\sim \Gamma$. In the GRB shell frame, the microemitter has a length scale of $l_s=\gamma ct_{\rm{cool}}$. The total number of microemitters within the fireball shell is $n=4\pi R^2\delta_s/l_s^3$, where $R\sim 10^{13}$ cm is the fireball radius and $\delta_s=ct_{\rm{cool}}$ is the thickness of the shell. The length scale of the turbulent eddy is $l_{\rm{eddy}}\sim R/\Gamma$ \citep{narayan09}. We can define a dimensionless scale as $n_l=l_{\rm{eddy}}/l_s$. Therefore, we sum up the contributions of the microemitters within the turbulent eddy and obtain the total observed duration of GRB emission as $T=n_lnP\Gamma t_{\rm{cool}}$. We calculate these parameters in the next subsection. \subsection{High-Energy Emission of GRB 100728A} To quantitatively study the jitter/JSC process, we use GRB 100728A as an example. GRB 100728A was detected by {\it Swift}/XRT (0.3-10 keV), {\it Swift}/Burst Alert Telescope (BAT, 15-150 keV), {\it Fermi}/Gamma-ray Burst Monitor (10-1000 keV), and {\it Fermi}/LAT (above 100 MeV). The spectrum observed by the BAT and XRT can be well fitted by the Band function with spectral indices $\alpha=-1.06$ and $\beta=-2.24$ and peak energy $E_{\rm{pk}}=1.0$ keV \citep{abdo11}. This spectral function can be extrapolated to the GeV band. The X-ray emission was dominated by a series of bright X-ray flares with a maximum rate above 800 counts~s$^{-1}$. In the time interval of strong X-ray flare activity, significant GeV emission was detected by {\it Fermi}/LAT. We take these observational data from \citet{abdo11} and plot them in Figure 2. We perform the jitter/JSC calculations to reproduce the multiwavelength spectrum of GRB 100728A and also plot the results in Figure 2.
We adopt $\Gamma_j=100$, $\Gamma_t=10$, and the ``off-axis'' parameter $\alpha=1$. $C=1.7\times 10^{10} ~\rm{cm^{-3}}$ is the value of the electron number density in the relativistic shock. Using the spectral index determined by the energy spectrum of the turbulent flow, we can reproduce the X-ray and prompt emissions of GRB 100728A through the jitter mechanism. Here, $\zeta_p=2.24$ is in the theoretical range of the turbulent energy cascade \citep{she94}. With the electron energy distribution presented by Equations (3) and (4), fixing $\gamma_{\rm{min}}=100$, $\gamma_{\rm{max}}=10^6$, and $\gamma_{\rm{nth}}=10^3$, we can obtain the JSC result. We adopt $\Theta=200$, which corresponds to a plasma temperature above $10^{12}$ K. The radiative cooling timescale $t_{\rm{cool}}$ is given as $2.2\times 10^{-8}$ s (see the calculation below). The maximum frequency $\nu_{0,\rm{max}}$, corresponding to $4.2$ keV, is also adopted. This indicates that the X-ray emission, which is dominated by X-ray flares in this case, provides enough target photons for the relativistic electron scattering. For comparison, using a pure power-law electron energy distribution ($\gamma_{\rm{min}}=\gamma_{\rm{nth}}=10^3$), we also calculate the JSC result, which is shown in Figure 2. From all of the results mentioned above, we confirm that the JSC mechanism with a mixed thermal-nonthermal electron energy distribution is one possible origin of the GRB 100728A GeV emission detected by {\it Fermi}/LAT. The jitter/JSC calculations in this work for reproducing the multiwavelength spectrum of GRB 100728A depend strongly on some of the parameters mentioned above. For example, the ``off-axis'' parameter $\alpha$ spans a wide range, $0<\alpha<\Gamma_j$. On the other hand, as shown in Figure 2, the two observational data points have large error bars. In addition to these uncertainties, some additional physical components can modify the hydrodynamics and radiative spectrum of the GRB.
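The adopted numbers can be cross-checked with a short script: the gross Lorentz factor $\Gamma=2\Gamma_j\Gamma_e/\alpha^2$ for the values above, and the continuity of the hybrid distribution of Equations (3) and (4) at $\gamma_{\rm{nth}}$ (the normalization constant is set to unity for this check):

```python
import math

# Jet-in-jet geometry with the adopted values
Gamma_j, Gamma_t, alpha = 100.0, 10.0, 1.0
Gamma = 2.0 * Gamma_j * Gamma_t / alpha**2   # gross Lorentz factor
P = 1.0 / (4.0 * Gamma**2)                   # probability of seeing a minijet

# Hybrid Maxwellian + power-law electron distribution, Eqs. (3)-(4),
# with C = 1 for this continuity check
Theta, gamma_nth, p = 200.0, 1.0e3, 2.2

def N(gamma):
    maxwell = gamma**2 * math.exp(-gamma / Theta) / (2.0 * Theta**3)
    if gamma <= gamma_nth:
        return maxwell
    return maxwell * (gamma / gamma_nth)**(-p)

print(Gamma)   # -> 2000.0
```

Since the power-law factor $(\gamma/\gamma_{\rm{nth}})^{-p}$ equals unity at $\gamma=\gamma_{\rm{nth}}$, the two branches join continuously there.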
For instance, we may further consider gamma-ray photon annihilation, $\gamma\gamma\rightarrow e^+e^-$, and the $e^+e^-$ cooling, $e^+e^-\rightarrow\gamma\gamma$. The $\gamma\gamma$ opacity implies a minimum bulk Lorentz factor \citep{nakar07}. \citet{hascoet11} gave a detailed study on the consequences of gamma-ray photon annihilation. Although these topics are beyond the scope of our paper, we note that the jitter/JSC prediction may differ slightly from the multi-wavelength observations. Because the random and small-scale magnetic field can be generated by the turbulent energy spectrum ($B^2=\int F(k)dk$, $F(k)\propto k^{-\zeta_p}$; see Mao \& Wang 2011), in this work, we obtain $B=1.0\times 10^6$ G. The cooling timescale of relativistic electrons for the jitter and JSC radiation is $t_{\rm{cool}}=3m_ec/4\sigma_T\gamma(U_B+U_{\rm{ph}})$, where $U_B$ is the energy density of the magnetic field and $U_{\rm{ph}}$ is the energy density of the radiation field. In the case of GRB 100728A, as $U_B\gg U_{\rm{ph}}$, the cooling timescale is dominated by the jitter radiation: \begin{equation} t_{\rm{cool}}=\frac{6\pi m_ec}{\sigma_T\gamma B^2}=2.2\times 10^{-8}(\frac{\gamma}{3.6\times 10^4})^{-1}(\frac{B}{1.0\times 10^6 ~\rm{G}})^{-2}~\rm{s}, \end{equation} where $\gamma=3.6\times 10^4$ is the average value of the electron Lorentz factor obtained from \citet{giannios09}. We use the reference value $t_{\rm{cool}}=2.2\times 10^{-8}$ s as the radiative cooling timescale, which is much shorter than the observed GRB duration. We expect that the fast-variability pulses seen in the GRB prompt emission/X-ray flares are produced by the extremely short-time activities of the microemitters. The long-duration gross profile \citep{norris05} of the prompt emission/X-ray flares is the superposition of those fast-variability pulses. We can further quantify the parameters mentioned in Section 2.2.
In the shell frame, the length scale of the microemitters is \begin{equation} l_s=\gamma ct_{\rm{cool}}=2.2\times10^6(\frac{B}{1.0\times 10^6~\rm{G}})^{-2}~\rm{cm}. \end{equation} The total number of microemitters within the shock shell of thickness $\delta_s=ct_{\rm{cool}}$ is \begin{equation} n=\frac{4\pi R^2\delta_s}{l_s^3}=7.8\times 10^{10}(\frac{R}{10^{13}~\rm{cm}})^2(\frac{\gamma}{3.6\times 10^4})^{-2}(\frac{B}{1.0\times 10^6~\rm{G}})^4. \end{equation} The length scale of the turbulent eddy can be estimated as \begin{equation} l_{\rm{eddy}}=\frac{R}{\Gamma}=5.0\times 10^{9}(\frac{\alpha}{1.0})^2(\frac{R}{1.0\times 10^{13}~\rm{cm}})(\frac{\Gamma_j}{100})^{-1} (\frac{\Gamma_t}{10})^{-1}~\rm{cm}. \end{equation} With the definition $n_l=l_{\rm{eddy}}/l_s$, we sum up the contributions from all of the microemitters in the turbulent eddy and obtain the duration of the GRB prompt emission: \begin{equation} T=n_lnP\Gamma t_{\rm{cool}}=460(\frac{\alpha}{1.0})^4(\frac{R}{1.0\times 10^{13}~\rm{cm}})^3(\frac{\gamma}{3.6\times 10^4})^{-3}(\frac{B}{1.0\times 10^6~\rm{G}})^4(\frac{\Gamma_j}{100})^{-2} (\frac{\Gamma_t}{10})^{-2}~\rm{s}. \end{equation} This calculated timescale is roughly consistent with the observed GRB duration. Finally, as \citet{honda05} and \citet{honda09} studied particle acceleration in the random and small-scale magnetic field, we adopt their result to calculate the acceleration timescale of relativistic electrons for the GRB prompt emission: \begin{equation} t_{\rm{acc}}=4.0\times 10^{-12}(\frac{E}{\rm{MeV}})^2(\frac{B}{1.0\times 10^6~\rm{G}})^{-2}(\frac{l_{\rm{eddy}}}{5.0\times 10^9~ \rm{cm}})^{-1}(\frac{U}{0.1c})^{-2}~\rm{s}, \end{equation} where the upstream speed is $U\sim 0.1c$. After comparing the cooling and acceleration timescales, and assuming $\gamma=3.6\times 10^4$ and $l_{\rm{eddy}}=5.0\times 10^9~\rm{cm}$, we obtain $t_{\rm{acc}}\le t_{\rm{cool}}$ below 100 MeV.
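The numerical estimates above can be reproduced with a short script in CGS units; the microemitter scale $l_s=2.2\times 10^{6}$ cm is taken from Equation (6) as quoted, and small differences from the quoted values reflect rounding:

```python
import math

# CGS constants
m_e = 9.109e-28      # electron mass (g)
c = 2.998e10         # speed of light (cm/s)
sigma_T = 6.65e-25   # Thomson cross section (cm^2)

# Adopted parameters
gamma = 3.6e4        # average electron Lorentz factor
B = 1.0e6            # turbulence-generated magnetic field (G)
R = 1.0e13           # fireball radius (cm)
Gamma = 2000.0       # gross Lorentz factor, 2*Gamma_j*Gamma_t/alpha^2

# Cooling timescale, Eq. (5): ~2e-8 s
t_cool = 6.0 * math.pi * m_e * c / (sigma_T * gamma * B**2)

# Duration, Eq. (9), using the quoted l_s = 2.2e6 cm from Eq. (6)
l_s = 2.2e6                                   # microemitter length scale (cm)
delta_s = c * t_cool                          # shell thickness (cm)
n = 4.0 * math.pi * R**2 * delta_s / l_s**3   # number of microemitters, Eq. (7)
l_eddy = R / Gamma                            # eddy length scale, Eq. (8)
n_l = l_eddy / l_s
P = 1.0 / (4.0 * Gamma**2)
T = n_l * n * P * Gamma * t_cool              # observed duration: a few hundred s

# Acceleration vs. cooling, Eq. (10): t_acc = 4e-12 (E/MeV)^2 s,
# so t_acc <= t_cool up to E ~ sqrt(t_cool / 4e-12) MeV (several tens of MeV)
E_cross_MeV = math.sqrt(t_cool / 4.0e-12)

print(f"t_cool = {t_cool:.2e} s, T = {T:.0f} s, E_cross ~ {E_cross_MeV:.0f} MeV")
```

The recovered duration of a few hundred seconds and the crossover energy of several tens of MeV are consistent with the values quoted in the text.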
This indicates that the particle acceleration is effective for the jitter mechanism. \section{Conclusions and Discussion} The gamma-ray emission of GRB 080319B provided evidence of the relativistic turbulent process \citep{kumar09}. It was already pointed out by \citet{nakar02} that the GRB temporal profile contains many fast-variability pulses. These observational issues point us toward the consideration of jitter radiation in a turbulence-generated magnetic field. Using particle-in-cell (PIC) simulations, \citet{nishikawa09} found that a partially developed hydrodynamic-like shock structure can be created when the jet propagates into an unmagnetized plasma. Synthetic radiation spectra were also extracted from PIC simulations \citep{sironi09, fre10}. Moreover, \citet{mizuno11} performed relativistic magnetohydrodynamic simulations of a relativistic shock propagating through an inhomogeneous medium. It was shown that the postshock region becomes turbulent and the magnetic field is strongly amplified. In our work, the magnetic field generated by relativistic turbulence has a sub-Larmor length scale. We further suggest that the ``jet-in-jet'' scenario gives rise to the relativistically counterstreaming plasma mentioned above. In particular, \citet{reynolds10} and \citet{reynolds11} comprehensively studied jitter radiation spectra that depend on the properties of anisotropic magnetic turbulence. The radiation spectra are also strongly affected by the spatial distribution of the magnetic field. In our work, the magnetic field and radiation are treated in the framework of the one-dimensional case. This prevents us from further investigating the topologies of the turbulent and magnetic fields in detail. However, we can compare our results with those of \citet{reynolds10} and \citet{med11} in the one-dimensional case.
For example, in the study of \citet{med11}, the isotropic turbulence has the form $f(k)\sim k^{-2\beta}$ after the nonlinear evolution and the magnetic field is $B^2=f(k)$, while it is $B^2=k^{-(\zeta_p-1)}$ in our work. Therefore, in the one-dimensional case, we obtain $\beta=(\zeta_p-1)/2$. Moreover, from the study of \citet{med11}, we know that $f(k)$ is strongly related to the topology of the turbulence and the viewing angle $\theta$. The jitter parameter is given as $K=eBl_{\rm{cor}}/mc^2$, where $l_{\rm{cor}}$ is the correlation scale, so we can clearly see that the jitter parameter is also strongly related to the topologies of the turbulent and magnetic fields. Although the GRB emission from about 10 keV up to 10 GeV can usually be fitted by a single radiation process (see Abdo et al. 2009 for the case of GRB 080916C and Ackermann et al. 2010a for the case of GRB 090217A), sometimes an additional component is required to completely explain the GRB GeV emission. At present, GRB 090510 and GRB 100728A are the two sources that have been studied using published data sets from the simultaneous observations of {\it Swift} and {\it Fermi}. Besides GRB 100728A, GRB 090510 is another interesting source with which to examine the jitter and JSC processes. {\it Fermi}/LAT detected its emission above 100 MeV \citep{ackermann10}. \citet{depasquale10} built the multiwavelength spectral energy distribution (SED) from simultaneous observations of {\it Swift} and {\it Fermi}. Synchrotron and SSC were proposed to be the origins of the keV-MeV emission and an additional component above the GeV band, respectively \citep{ackermann10b}, and a large bulk Lorentz factor $\Gamma>500-1000$ was required. However, it is difficult to apply the simple model presented in this paper to this burst for two reasons. (1) A hard power-law component dominates the emission both below 20 keV and above 100 MeV \citep{ackermann10b}.
The JSC process can be adopted to explain the emission above 100 MeV, but it cannot explain the emission below 20 keV. (2) There are no BAT data shown in the multiwavelength SED \citep{depasquale10}. The spectral data in the energy range of $0.3-10$ keV cannot well constrain the jitter slope if jitter radiation dominates from 1 keV to 100 MeV. Moreover, it is also difficult to use the {\it Fermi}/LAT spectral data above 100 MeV, plotted with a wide-ranging confidence interval (see the butterfly in Figure 3 of De Pasquale et al. 2010), to constrain the parameters of the JSC model. In this paper, we use the JSC mechanism to interpret the powerful emission of GRB 100728A above the GeV band. In particular, we note two extraordinary observational behaviors of GRB 100728A. (1) A series of powerful X-ray flares (with a maximum rate larger than 500 counts~s$^{-1}$) occurred about 800 s after the burst trigger. (2) Significant GeV-band emission was detected by {\it Fermi}/LAT in the same time interval as the X-ray flares. From this observational evidence, we were able to examine the jitter and JSC mechanisms. Using the jitter mechanism, we successfully reproduced the prompt and X-ray emissions of GRB 100728A. To use the JSC mechanism to explain the high-energy emission of GRB 100728A above the GeV band, the jitter photons in the X-ray band should be scattered to the GeV band by the mixed thermal-nonthermal electrons. Meanwhile, the ``jet-in-jet'' scenario is also considered. Therefore, our model, which includes all of the main points discussed above, such as turbulence, the random and small-scale magnetic field, radiation, and the geometric structure of the emission, is self-consistent. We confirm through our calculation that the jitter and JSC processes in the ``jet-in-jet'' scenario are valid for the multiwavelength emission of GRB 100728A without any further assumptions. However, as shown in Figure 2, the JSC result deviates slightly from the observational data above 100 MeV.
The effect of the $\gamma\gamma$ opacity on the output spectrum is mentioned in Section 2.3. Here, we briefly mention some other possible physical components. Although the bulk Lorentz factor of the GRB jet, $\Gamma_j=100$, and the Lorentz factor of the minijet, $\Gamma_e=10$, are given, the shock and emission regions may have a complex structure that can modify the radiation spectrum. The radiative efficiency is also an important issue. The relation between the cooling time of the internal shock and the shock Lorentz factor was investigated by \citet{med09}. Moreover, instead of the constant shock density in our case, a certain structured density profile may be involved. We speculate that the final output radiation is the superposition of multicomponent contributions. From Figure 2, we see that the JSC radiation above the GeV band is related to the electron energy distribution. The powerful emission above the GeV band can be obtained if the seed photons are scattered by electrons with a mixed thermal-nonthermal distribution. If the seed photons are instead scattered by electrons with a purely nonthermal power-law distribution, the weak GeV emission may have a flux below the detection threshold of the observing instrument. In our former research \citep{mao11}, the acceleration timescale is larger than the cooling timescale above 100 MeV. Thus, the jitter radiation does not work there, and we do not expect many GeV GRBs. In this paper, we further explore GRB detection above the GeV band by {\it Fermi}/LAT, which is also strongly dependent on particle acceleration. Some other interesting suggestions, such as the upscattered cocoon model \citep{toma09} and the external inverse Compton model \citep{he11}, may also be important in explaining GRB detections by {\it Fermi}/LAT.
We hope that more multiwavelength data sets can be accumulated so that more in-depth studies can be performed in the future\footnote{Simultaneous observations with {\it Swift} and {\it Fermi}/LAT were performed for GRB 110625A \citep{page11,gruber11,tam11} and GRB 110731A \citep{oates11,bregeon11,gruber11}. We hope that future published data can provide more constraints on our model.}. \acknowledgments We thank the referee for the instructive suggestions. This work is supported by the KASI Fellowship program, the National Natural Science Foundation of China 11173054, the National Basic Research Program of China (973 Program, 2009CB824800), and the Policy Research Program of the Chinese Academy of Sciences (KJCX2-YW-T24).
\section{Introduction} Single layer graphene,\cite{novo} graphane CH,\cite{graphan,hasanch} fluorographene CF,\cite{nair,hasancf} BN\cite{BN,mehmetbn} and MoS$_2$\cite{mos2,canmos2} have displayed unusual chemical and physical properties for future nanotechnology applications. Furthermore, the properties of these nanomaterials can be modified by creating excess electrical charge. For example, the linear crossing of the bands of graphene at the Fermi level gives rise to electron-hole symmetry, whereby under bias voltage the charge carriers can be tuned continuously between electrons and holes in significant concentrations.\cite{dirac} In this way, the conductivity of graphene can be monitored. A similar situation leading to excess electrons or holes can also be achieved through doping with foreign atoms.\cite{wang,wehling,sevincli,ataca,topsakal} Layered materials can be exfoliated under excessive charging, which is created by photoexcitation for a very short time.\cite{carbone,miyamoto} It is proposed that the femtosecond laser pulses rapidly generate a hot electron gas, which spills out leaving behind a positively charged graphite slab. Eventually, the charged outermost layers of graphite are exfoliated.\cite{miyamoto} Recently, the effects of charging of graphene have been treated in different studies. Ekiz \textit{et al.}\cite{ekiz} showed that oxidized graphene domains, which become insulating upon oxidation, can be changed back to the metallic state by electrical stimulation. 
Theoretically, based on first-principles calculations, it has been shown that the binding energy and magnetic moments of adatoms adsorbed on graphene can be modified through static charging.\cite{topsakal,cohen} The possibility of transforming the electronic structure of one species to another through gating modeled by charging has been pointed out.\cite{cohen2} It is argued that the diffusion of adsorbed oxygen atoms on graphene can be modified through charging.\cite{sofo} We found that pseudopotential plane wave calculations of charged surfaces using periodically repeating layers are sensitive to the vacuum spacing between adjacent cells and have limited applicability.\cite{topsakal} In this paper we investigate the effect of static charging on suspended (or free standing) single layer nanostructures, such as graphene, graphane (CH), fluorographene (fully fluorinated graphene) (CF), boron nitride (BN) and molybdenum disulfide (MoS$_2$). All these honeycomb nanostructures have a two-dimensional (2D) hexagonal lattice. First, we examine how the size of the vacuum spacing between layers affects the calculated properties of the \textit{negatively charged} single-layer nanostructures when treated using periodic boundary conditions. We then investigate the effect of charging on the electronic energy band structure and atomic structure. We show that the bond lengths and hence the 2D lattice constants increase as a result of electron removal from the single layer. Consequently, phonons soften and the frequencies of Raman active modes are lowered. As a result of electron removal, three-layer, wide band gap BN and MoS$_2$ sheets are metallized and excess positive charge accumulates mainly at the outermost atomic layers. Owing to the Coulomb force, these layers start to repel each other. Once it exceeds the weak van der Waals (vdW) attraction, the repulsive Coulomb force initiates the exfoliation. 
\section{Method} The present results are obtained by performing first-principles plane wave calculations carried out within spin-polarized and spin-unpolarized density functional theory (DFT) using projector-augmented wave potentials.\cite{paw} The exchange correlation potential is approximated by the generalized gradient approximation.\cite{PW91} For a better account of the weak interlayer attraction in layered crystals, the van der Waals (vdW) interaction is also taken into account.\cite{grimme} A plane-wave basis set with a kinetic energy cutoff of 500 eV is used. All atomic positions and lattice constants are optimized by using the conjugate gradient method, where the total energy and atomic forces are minimized. The convergence criterion for the energy is chosen as 10$^{-5}$ eV between two consecutive steps, and the maximum force allowed on each atom is less than 0.01 eV/\AA{}. The Brillouin zone (BZ) is sampled by (15$\times$15$\times$5) special \textbf{k}-points for the primitive unit cell. Calculations for neutral, as well as charged systems are carried out by using the VASP package.\cite{vasp} Two-dimensional single layers or slabs and a vacuum space $s$ between them are repeated periodically along the perpendicular $z$-direction. The amount of charging is specified as either positive charging, i.e. electron depletion ($Q >0$), or negative charging, i.e. excess electrons ($Q < 0$), in units of $\pm$ electron (e) per unit cell. The average surface charge density is specified as $\bar{\sigma}= Q/A$, i.e. the charge per unit area, with $A$ being the area of the unit cell. Normally, periodic boundary conditions realized by repeating charged supercells have a divergent electric potential energy and have drawbacks and limitations, which have been the subject matter of several studies in the past. 
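As a sanity check on the unit conventions above, the conversion from $Q$ (in electrons per unit cell) to $\bar{\sigma}=Q/A$ (in C/m$^2$) can be sketched numerically. The graphene lattice constant used below ($a\approx 2.46$ \AA) is an assumed value, not quoted in the text:

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge in C

def surface_charge_density(q_per_cell, a_angstrom):
    """sigma = Q/A (C/m^2) for a hexagonal 2D cell with lattice constant a.

    The cell area of a hexagonal lattice is A = (sqrt(3)/2) * a^2.
    """
    area_m2 = (math.sqrt(3.0) / 2.0) * (a_angstrom * 1e-10) ** 2
    return q_per_cell * E_CHARGE / area_m2

# Assumed graphene lattice constant a ~ 2.46 Angstrom (not quoted in the text).
# Q = +0.160 e/cell gives sigma ~ +0.49 C/m^2, consistent with the threshold
# values tabulated later for 3-layer graphene.
sigma = surface_charge_density(0.160, 2.46)
print(round(sigma, 2))  # 0.49
```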
To achieve the convergence of the electronic potential, an additional neutralizing background charge is applied.\cite{leslie,payne} Recently, error bars in computations due to the compensating charge have been estimated.\cite{cohen2} Dipole corrections can be carried out for cubic structures, if a finite electric dipole moment builds up in the unit cell.\cite{dabo,schefler} Monopole and dipole corrections are also treated self-consistently.\cite{gava} Various charged structures have also been treated by using different approaches and computational methods.\cite{fu,blochl,schutz,heinze,lozovoi,filhol,schnur} Owing to those theoretical advances, studies on charged systems can now reveal useful information, when treated carefully. \section{Charging of suspended single layers} The negative and positive charging of suspended single layer graphene, CH, CF, BN and MoS$_2$ is treated using the supercell method. In Fig.~\ref{figure-structure} (a) we describe MoS$_2$ single layers, which are periodically repeated along the $z$-direction and separated by a vacuum spacing $s$ between the adjacent outermost sulfur planes. In Fig.~\ref{figure-structure} (b) and (c) the self-consistent electronic potential energy, $V_{el}(\textbf{r})$, is averaged in the planes perpendicular to the $z$-axis to obtain the planarly averaged 1D potential energy $\bar{V}_{el}(z)$ for different values of $s$. \begin{figure} \includegraphics[width=8cm]{figure-structure.png} \caption{(Color online) (a) Description of the supercell geometry used to treat 2D single layer MoS$_2$. $c$ and $s$ are the supercell lattice constant and the vacuum spacing between adjacent layers. The $z$-axis is perpendicular to the layers. (b) Self-consistent potential energy of positively charged ($Q>0$ per cell), periodically repeating MoS$_{2}$ single layers, planarly averaged along the $z$-direction. $\bar{V}_{el}(z)$ is calculated using different vacuum spacings $\textit{s}$ as specified by the inset. 
The planarly averaged potential energy of a single and infinite MoS$_2$ layer is schematically shown by linear dashed lines in the vacuum region. The zero of energy is set at the Fermi level indicated by dash-dotted lines. (c) $\bar{V}_{el}(z)$ of negatively charged ($Q<0$ per cell) and periodically repeating MoS$_2$ single layers. The averaged potential energy of an infinite MoS$_2$ single layer is shown by a linear dashed line in the vacuum region. (d) Variation of $\bar{V}_{el}(z)$ and the total potential energy including electronic and exchange-correlation potentials, $\bar{V}_{el}(z)$ + $\bar{V}_{xc}(z)$, between two negatively charged MoS$_2$ layers corresponding to $Q$=-0.110 e/unitcell before the spilling of electrons into the vacuum. The spacing between MoS$_2$ layers is $s=20$ \AA. (e) Same as (d) but for $Q$=-0.114 e/unitcell, where the total potential energy dips below $E_F$ and hence excess electrons start to fill the states localized in the potential well between two MoS$_2$ layers. (f) Corresponding planarly averaged charge density $\bar{\lambda}$. The accumulation of charge at the center of $s$ is resolved on a fine scale. Arrows indicate the extremum points of $\bar{V}_{el}(z)$ in the vacuum region for the $Q>0$ and $Q<0$ cases.} \label{figure-structure} \end{figure} In the vacuum region, the electronic potential energy $\bar{V}_{el}(z)$ strongly depends on the vacuum spacing $s$. For an infinitely large single plane having excess charge $Q>0$ per cell, the potential energy in the vacuum region is linear, if the plane is not periodically repeated. Thus, as $z \rightarrow \infty$, $\bar{V}_{el}(z \rightarrow \infty) \rightarrow +\infty$, as schematically shown in Fig.~\ref{figure-structure} (b). However, for periodically repeating single layers (within the periodic boundary conditions) the potential energy is symmetric with respect to the center of the vacuum spacing and it passes through a maximum at $s$/2. 
The maximum value of the potential increases with increasing $s$ in Fig.~\ref{figure-structure}(b). In contrast, for a negatively ($Q<0$ per cell) charged and infinite MoS$_2$ single layer, the reverse situation occurs, as shown in Fig.~\ref{figure-structure} (c). Namely, $\bar{V}_{el}(z \rightarrow \infty) \rightarrow -\infty$ linearly, if the MoS$_2$ single layer is not periodically repeated. Notably, the energy of a finite size, single layer nanostructure (i.e. a flake) does not diverge, but has a finite value for large $z$ both for the $Q>0$ and $Q<0$ cases. On the other hand, for periodically repeating single layers within the periodic boundary conditions, the potential energies are symmetric with respect to the center of the vacuum spacing and they pass through a minimum at $s/2$. In this way a potential well is formed in the vacuum region between two adjacent layers. Normally, the depth of this well increases with increasing negative charging and $s$. At a critical value of negative charge, the self-consistent potential energy $V(\textbf{r})$ including electronic and exchange-correlation potential energies dips below the Fermi level (even if $\bar{V}_{el}(z) > E_F$) and eventually electrons start to occupy the states localized in the quantum well. Such a situation is described in Fig.~\ref{figure-structure} (d)-(f). Of course, this situation is an \textit{artifact} of the method using a plane wave basis set and repeating layers separated by the vacuum space $s$. Despite that, the method may provide a reasonable description of the negatively charged layers until the minimum of the well dips below the Fermi level. According to this picture, the escape of electrons out of the material is delayed for relatively small $s$. On the other hand, the interaction between layers prevents one from using too small $s$. This limitation of the method has usually been overlooked in earlier studies. The critical value of negative charge depends on the value of $s$. 
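The linear behavior of $\bar{V}_{el}(z)$ outside an isolated charged plane follows from elementary electrostatics: an infinite plane with surface charge density $\bar{\sigma}$ produces a uniform field $\bar{\sigma}/(2\epsilon_0)$, so the potential energy of a unit charge changes strictly linearly with $z$. A back-of-the-envelope sketch, where the value $\bar{\sigma}=0.5$ C/m$^2$ is an assumed, representative number rather than one taken from the text:

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def slope_eV_per_angstrom(sigma):
    """|dV/dz| (eV/Angstrom) of the potential energy of a unit charge
    outside an isolated, uniformly charged infinite plane.

    The field of such a plane is E = sigma / (2 * eps0), independent of z,
    so the planarly averaged potential is strictly linear in the vacuum.
    """
    e_field = sigma / (2.0 * EPS0)  # V/m
    return e_field * 1e-10          # eV per Angstrom for charge e

# Representative (assumed) surface charge density of 0.5 C/m^2:
print(round(slope_eV_per_angstrom(0.5), 2))  # 2.82 eV/Angstrom
```

The slope of a few eV per \AA{} makes clear why the well between periodically repeated, negatively charged layers deepens rapidly with increasing $s$.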
It should be noted that for $s$=20 \AA, electrons start to escape from the graphene layer for $Q=-4.03\times10^{13}$ e/cm$^{2}$, even though a larger doping of $4\times10^{14}$ e/cm$^{2}$ has been attained for graphene on a SiO$_{2}$ substrate.\cite{efetov} \begin{figure} \includegraphics[width=8.3cm]{figure-1D-se.png} \caption{(Color Online) (a) Energy eigenvalues of the occupied electronic states, $E_{i}$, and the corresponding $|\Psi_{i}(z)|^2$ are obtained by the numerical solution of the Schr\"{o}dinger equation for the planarly averaged, 1D electronic potential energy of single layer graphene for $s$=12.5 \AA{}, 25 \AA{} and 50 \AA{}, shown by dashed lines. (b) Same as (a) for 3-layer graphene. Zeros of $|\Psi_{i}(z)|^2$ at large $z$ are aligned with the corresponding energy eigenvalues.} \label{figure-schrodinger} \end{figure} In the case of positive charging, even if $\bar{V}_{el}(z)$ is not linear and does not increase to $+\infty$, the periodic boundary conditions using sufficiently large $s$ can provide a realistic description of charged systems, since the wave functions in the vacuum region rapidly decay under the high and wide potential barrier. Therefore, the calculated wave functions and electronic energies are not affected even if $\bar{V}_{el}(z)$ is smaller than the electronic potential corresponding to infinite vacuum spacing. We demonstrate our point of view by directly solving the Schr\"{o}dinger equation to obtain the wave functions and energy eigenvalues for the planarly averaged 1D potentials of single layer and 3-layer graphene corresponding to $s$=12.5, 25, 50 \AA{} in Fig.~\ref{figure-schrodinger}. 
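The 1D eigenvalue problems referred to above can be reproduced with a standard finite-difference diagonalization. The sketch below uses a model harmonic potential in atomic units rather than the actual planarly averaged DFT potential, which is not tabulated here; only the numerical procedure is illustrated:

```python
import numpy as np

def solve_1d_schrodinger(v_of_z, z_min, z_max, n=2000):
    """Eigenvalues of -(1/2) d^2/dz^2 + V(z) in atomic units (hbar = m = 1),
    discretized with a 3-point finite-difference Laplacian on a uniform grid
    with hard-wall boundary conditions."""
    z = np.linspace(z_min, z_max, n)
    h = z[1] - z[0]
    diag = 1.0 / h**2 + v_of_z(z)        # kinetic diagonal + potential
    off = -0.5 / h**2 * np.ones(n - 1)   # kinetic off-diagonal
    hmat = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(hmat)

# Sanity check on a harmonic well V(z) = z^2/2, whose exact levels are n + 1/2:
e = solve_1d_schrodinger(lambda z: 0.5 * z**2, -10.0, 10.0)
print(np.round(e[:3], 3))  # [0.5 1.5 2.5]
```

With the tabulated $\bar{V}_{el}(z)$ in place of the model well, the same routine yields the bound states and their sensitivity to the vacuum spacing $s$.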
One sees that the large difference, $\Delta \bar{V}_{el}(z)=\bar{V}_{el,s=50 \AA}(z)-\bar{V}_{el,s=12.5 \AA}(z)$, does not affect the occupied states at their tails in the vacuum spacing; the energy difference between the smallest and largest vacuum spacing $s$ is only 5 meV (which cannot be resolved from the figure), smaller than the accuracy limit of DFT calculations. As one expects, the dependence on the vacuum spacing increases for the excited states, which have relatively larger extension and hence are affected by $\Delta \bar{V}_{el}(z)$. \begin{figure} \includegraphics[width=8.3cm]{figure-bands.png} \caption{(Color Online) Energy band structures of 2D single layers of graphene C, fluorographene CF, graphane CH, BN and MoS$_2$ calculated for $Q =+0.2$ e/cell, $Q = 0$ (neutral) and $Q = -0.10$ e/cell. The zero of energy is set at the Fermi level indicated by dash-dotted lines. The band gap is shaded. Note that the band gap increases under positive charging. Parabolic bands descending and touching the Fermi level for $Q<0$ are free electron like bands. Band calculations are carried out for $s$=20 \AA.} \label{figure-bands} \end{figure} Taking the above limitations of the method for negative charging into account, we now examine the effect of charging of single layers of graphene, CF, CH, BN and MoS$_2$ on their electronic structure and bond lengths. As seen in Fig.~\ref{figure-bands}, the changes in the band structure with charging are significant within DFT. For example, the band gap (i.e. the energy gap between the top of the valence band and the minimum of the conduction band) of neutral single layer BN increases from 4.61 eV to 5.12 and to 5.54 eV as $Q$ increases from $Q$=0 to +0.2 e/cell and to +0.4 e/cell, respectively. The increase of the band gap occurs due to the fact that the electronic potential energy becomes deeper with increasing electron depletion. For $Q > 0$, the Fermi level dips into the valence band and creates holes. 
In contrast, the parabolic free electron like bands, which occur above the vacuum level in the neutral case, start to descend as a result of negative charging ($Q<0$) and eventually touch the Fermi level. Upon increasing $Q$ these parabolic bands start to be occupied and hence part of $Q$ is transferred to the quantum well in the vacuum region. In this way, the rate of accumulation of excess charge in the conduction band of the single layer nanostructure recedes. Even though these parabolic bands appear to touch the Fermi level in the same band structure in the ($k_{x},k_{y}$)-plane, they are physically separated from the states of the single layer honeycomb structure under study. As mentioned before, this situation is an artifact of periodic boundary conditions and can be viewed as the vanishing of the work function. We note that in the case of a negatively charged, finite-size single layer, the excess electrons are hindered from spilling out into the vacuum by a wide tunneling barrier, even if $\bar{V}_{el}(z)$ is lowered below the Fermi level in the vacuum for large $z$. Incidentally, for both $Q>0$ and $Q<0$, the spin-polarized calculations carried out for single layers of graphene, CH, CF, BN and MoS$_2$ did not yield any magnetic state as a result of charging. Another crucial effect of charging appears in the variation of the bond lengths with $Q$. As shown in Fig.~\ref{figure-lattice} (a), the bond lengths or lattice constants of single layer graphene, BN, CH, CF and MoS$_2$ increase with increasing positive charge density $\bar{\sigma}$. The elongation of the bond length is slow for small $\bar{\sigma}$, but increases quickly when $\bar{\sigma} \gtrsim 1$ C/m$^2$. The bonds get longer when electrons are removed from the system and hence the bonds become weaker. 
The contour plots of the total charge density in a plane passing through the C-C and B-N bonds of graphene and BN honeycomb structures in Fig.~\ref{figure-lattice} (b) and (c) show that the charge density between atoms becomes weaker with increasing electron depletion. The weakening of bonds can have crucial consequences, such as phonon softening, which is observable in the Raman spectrum. In fact, the Raman active mode of graphene calculated by using density functional perturbation theory is at 1524 cm$^{-1}$ and shifts down to 1510 cm$^{-1}$ for $Q$=+0.2 e/cell, and to 1311 cm$^{-1}$ for $Q$=+0.4 e/cell. To confirm whether the elongation of bonds dominates the Raman shift of graphene, we calculated the Raman active modes of neutral graphene having the same lattice constant as graphene charged by $Q$=+0.4 e/cell. In this case the Raman active mode shifted to 1274 cm$^{-1}$, which is close to the Raman active mode calculated for graphene charged with $Q$=+0.4 e/cell. We also note that excessive charging of the single layer materials considered in this paper leads to instability. This is revealed by phonon dispersion calculations. For example, neutral graphene, which normally has a stable planar structure and positive frequencies of the acoustical branches in the BZ, starts to attain imaginary frequencies of long wavelength acoustical modes under excessive charging. The weakening of the graphene layer is expected to be reflected in its elastic properties, in particular in its stiffness.\cite{stiffness} \begin{figure} \includegraphics[width=8cm]{figure-lattice.png} \caption{(Color Online) (a) Variation of the ratio of the lattice constants $a$ of positively charged single layer graphene, BN, CH, CF and MoS$_2$ to their neutral values $a_o$ with the average surface charge density, $\bar{\sigma}$. The unit cell and the lattice vectors are described by the inset. (b) The charge density contour plots in a plane passing through a C-C bond. 
(c) Same as (b) for a B-N bond.} \label{figure-lattice} \end{figure} \section{Exfoliation of layered BN and MoS$_2$} \begin{table} \caption{Dependence of the threshold charges on the vacuum spacing $s$ (\AA) between 3-layer slabs. The threshold charge, $Q_e$ (e/cell), where exfoliation sets in, and the corresponding threshold average surface charge density $\bar{\sigma}_e=Q_{e}/A$ (C/m$^2$) are calculated for positively charged 3-layer graphene, BN and MoS$_2$ sheets for $s$=50 \AA~ and $s$=20 \AA. The numbers of valence electrons per unit cell of the slab are also given in the second column.} \label{table3} \centering{} \begin{tabular}{|c|c|c|c|} \hline System & \# of e & $Q_{e}$ (e/cell) & $\bar{\sigma}_e$ (C/m$^{2}$) \tabularnewline \hline \hline 3-layer Graphene (s=50) & 24 & +0.160 & +0.49 \tabularnewline \hline 3-layer Graphene (s=20) & 24 & +0.205 & +0.62 \tabularnewline \hline 3-layer BN (s=50) & 24 & +0.225 & +0.66 \tabularnewline \hline 3-layer BN (s=20) & 24 & +0.320 & +0.94 \tabularnewline \hline 3-layer MoS$_2$ (s=50) & 54 & +0.322 & +0.57 \tabularnewline \hline 3-layer MoS$_2$ (s=20) & 54 & +0.480 & +0.86 \tabularnewline \hline \end{tabular} \end{table} We next investigate the exfoliation of single layer BN and MoS$_2$ from their layered bulk crystals through charging. We model a 3-layer slab (sheet) of BN and MoS$_2$ as part of its layered bulk crystal. We considered only 3-layer slabs in order to cut the computation time, since the model works also for thicker slabs consisting of 6-10 graphene layers.\cite{topsakal} Energy minimizations of the neutral sheets with respect to the stacking geometry are performed. The stacking of the 3-layer BN and MoS$_2$ slabs complies with the stacking of layers in the 3D layered BN\cite{mehmetbn} and MoS$_2$ crystals.\cite{canmos2} In these slabs, the layers are held together mainly by attractive vdW interactions of a few hundred meV, and any repulsive interaction overcoming them leads to exfoliation. 
When electrons are injected into or removed from the slab, the Fermi level shifts up or down and crosses the conduction or valence band of the insulator, attributing to it a metallic character. The excess charge then accumulates on the surfaces of the metallic slab, inducing a repulsive Coulomb interaction between the outermost layers of the slab. Here we consider positive charging only, since in the case of negative charging the excess charges quickly spill into the vacuum before the exfoliation sets in. \begin{figure} \includegraphics[width=7cm]{figure-bn-charge.png} \caption{(Color Online) Variation of the planarly averaged positive excess charge $\bar{\lambda}(z)$ along the $z$-axis perpendicular to the BN layers calculated for different $s$. As $s$ increases, more excess charge is transferred from the center region to the surface planes.} \label{figure-bn-charge} \end{figure} \begin{figure} \includegraphics[width=9cm]{figure-energy.png} \caption{(Color Online) Variation of the cohesive energy and the perpendicular force $F_{\perp}$ in a 3-layer BN slab as a function of the distance $L$ between the surfaces. Energies and forces are calculated for different levels of excess positive charge $Q$ (e/unit cell). The zero of energy corresponds to the energy as $L \rightarrow \infty$. } \label{figure-energy} \end{figure} The amount of charge in the unit cell which is necessary for the onset of exfoliation is defined as the threshold charge $Q_e$. Threshold charges are calculated for 3-layer slabs of graphene, BN and MoS$_2$ for $s$=20 \AA~ and $s$=50 \AA. The results presented in Tab.~\ref{table3} indicate that the amount of threshold charge decreases with increasing $s$. This confirms our argument in Sec. III that for positive charging a large vacuum spacing $s$ is favored. The mechanism underlying this finding is summarized in Fig.~\ref{figure-bn-charge}, where we show the linear charge density $\bar{\lambda}(z)$ calculated for different $s$ values of a 3-layer BN slab. 
For small $s$, the excess charge accumulates mainly at the surfaces of the slab, with some significant fraction also residing inside the slab. However, as $s$ increases some part of $Q$ is transferred from the inside to the outer surfaces, giving rise to an increase of the charge accumulation at the surface. As a result, for the same level of charging the induced Coulomb repulsion increases with increasing $s$. Accordingly, the same slab requires a relatively smaller threshold charge $Q_e$ to be exfoliated, if $s$ is taken large. \begin{figure*} \includegraphics[width=15cm]{figure-exfoliation.png} \caption{(Color Online) Exfoliation of the outermost layers from layered BN and MoS$_2$ slabs by positive charging of the slabs. (a) Turquoise isosurfaces of the excess positive charge density. (b) Change in the total energy with the excess surface charge density. (c) Variation of $L$ of the slabs with charging. } \label{figure-exfoliation} \end{figure*} In Fig.~\ref{figure-energy} we present the variation of the cohesive energy of the 3-layer BN slab relative to three free BN layers for the neutral ($Q=0$) and positively charged ($Q >0$) cases as a function of the distance $L$ between the outermost BN atomic planes of the 3-layer BN slab. The cohesive energy for a given $L$ is obtained from the following expression: $E_{C}= E_{T}$ [3-layer BN] - 3$E_{T}$ [single layer BN]. The total energy of the single layer BN, $E_{T}$ [single layer], is calculated in a smaller supercell to keep the density of the background charge the same. The cohesive energy of the neutral slab in equilibrium is $\sim$ 302 meV/cell. If the spacing between layers (i.e. $L$) starts to increase, an attractive force $F_{\perp}=-\partial E_{T}/\partial L$ acts to restore the equilibrium spacing. $F_{\perp}(L)$ first increases with increasing $L$, passes through a maximum and then decays to zero. In Fig.~\ref{figure-energy} we also show how the minimum of the cohesive energy decreases and moves to relatively larger spacings with increasing $Q$. 
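The qualitative behavior of $F_{\perp}=-\partial E_{T}/\partial L$ described here (zero at equilibrium, a maximum of the attractive strength, then decay to zero) can be reproduced with any short-range binding model. The sketch below uses a Morse form; only the well depth, $\sim$0.302 eV/cell, is taken from the text, while the range parameter and equilibrium spacing are illustrative assumptions:

```python
import numpy as np

# Morse model for the binding energy of the outermost layers:
#   E(L) = D * ((1 - exp(-a*(L - L0)))**2 - 1),  minimum -D at L = L0.
# D ~ 0.302 eV/cell is taken from the text; a and L0 are assumed values.
D, a, L0 = 0.302, 1.5, 6.6  # eV/cell, 1/Angstrom, Angstrom

def energy(L):
    return D * ((1.0 - np.exp(-a * (L - L0))) ** 2 - 1.0)

def force(L, dL=1e-5):
    """F_perp = -dE/dL by central finite differences (negative = attractive)."""
    return -(energy(L + dL) - energy(L - dL)) / (2.0 * dL)

L = np.linspace(L0, L0 + 6.0, 601)
F = force(L)
# F vanishes at L0; the attraction peaks at L0 + ln(2)/a and then decays:
print(round(L[F.argmin()] - L0, 2))  # ~ ln(2)/a ~ 0.46 Angstrom beyond L0
```

Charging can be mimicked in this picture by adding a repulsive term to $E(L)$, which lowers the maximum of $|F_{\perp}|$ until it vanishes, as in the DFT curves.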
Concomitantly, the maximum of the attractive force for a given $Q$, $F_{\perp,max}$, decreases with increasing $Q$ and eventually becomes zero. This gives rise to the exfoliation. We note that despite the limitations set by the neutralizing uniform charge on the total energy, the cohesive energies calculated for different charge levels reveal useful qualitative information on the effects of charging. In Fig.~\ref{figure-exfoliation} (a) we show isosurfaces of the excess positive charge densities of 3-layer BN and MoS$_2$ slabs. These slabs become metallic upon extracting electrons (i.e. upon positive charging) and the excess charges reside at both surfaces of the slabs. As shown in Fig.~\ref{figure-exfoliation} (b), the total energy rises with increasing charging or average charge density $\bar{\sigma}$. In compliance with Fig.~\ref{figure-energy}, the separation between the surface layers, $L$, increases. The sharp drop of $\Delta E$ at $Q_e$ or $\bar{\sigma}_e$ indicates the onset of exfoliation driven by the repulsive Coulomb force between the outermost layers. In Fig.~\ref{figure-exfoliation} (c), $L$ increases with increasing charging, as discussed in Fig.~\ref{figure-energy}. The increments of $L$ exhibit a stepwise behavior for BN. This is also an artifact of the method, where forces are calculated within preset threshold values. The variation of $L$ of the MoS$_2$ slab with $Q > 0$ displays a different behavior due to charge transfer from Mo to S atoms. The exfoliation due to static charging can be explained by a simple electrostatic model, where the outermost layers of the slab are modeled as uniformly charged planes, which yields a repulsive interaction independent of their separation distance, i.e. $F \propto Q^{2}/(A\cdot \epsilon_{0})$, where $\epsilon_{0}$ is the vacuum permittivity.\cite{topsakal} The calculated forces differ from the classical force due to screening effects of the excess charge residing inside the slabs. 
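The classical estimate $F \propto Q^{2}/(A\cdot\epsilon_{0})$ can be made concrete: two parallel planes, each carrying charge $Q$ on area $A$, repel with the distance-independent force $Q^{2}/(2A\epsilon_{0})$. A sketch using the threshold charge of 3-layer BN at $s$=50 \AA; the BN lattice constant $a\approx 2.51$ \AA{} is an assumed value, not quoted in the text:

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge, C
EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m

def plate_repulsion_eV_per_A(q_cell, a_angstrom):
    """Repulsive force per unit cell (eV/Angstrom) between two uniformly
    charged parallel planes, each carrying q_cell electrons per hexagonal
    cell of lattice constant a: F = Q^2 / (2 * A * eps0), independent of
    the separation between the planes."""
    area = (math.sqrt(3.0) / 2.0) * (a_angstrom * 1e-10) ** 2
    f_newton = (q_cell * E_CHARGE) ** 2 / (2.0 * area * EPS0)
    return f_newton * 1e-10 / E_CHARGE  # convert N to eV/Angstrom

# Threshold charge of 3-layer BN at s = 50 A, with assumed a ~ 2.51 A:
print(round(plate_repulsion_eV_per_A(0.225, 2.51), 2))  # 0.84 eV/A per cell
```

A force of this order acting over an interlayer distance of a few \AA{} readily exceeds a vdW binding of a few hundred meV per cell, consistent with the onset of exfoliation at the tabulated $Q_e$.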
\section{Discussions and conclusions} In this study, the threshold values of static charge, $Q_e$, to be implemented in the slabs to achieve exfoliation are quite high. Such high static charging of layers can be achieved locally through the tip of a scanning tunneling microscope or an electrolytic gate.\cite{efetov} The dissipation of locally created excess charge in materials may involve a decay time $\tau_D$. A relatively longer $\tau_D$ can induce a local instability and the desorption of atoms from nanoparticles. Experimentally, ultrafast graphene ablation was directly observed by means of electron crystallography.\cite{carbone} Carriers excited by an ultra-short laser pulse transfer energy to strongly coupled optical phonons. Graphite undergoes a contraction, which is subsequently followed by an expansion leading eventually to laser-driven ablation.\cite{carbone} More recently, an understanding of photoexfoliation has been proposed, where exposure to femtosecond laser pulses has led to athermal exfoliation of intact graphenes.\cite{miyamoto} Based on time dependent DFT (TD-DFT) calculations, it is proposed that the femtosecond laser pulse rapidly generates a hot electron gas at $\sim$20,000 K, while the graphene layers are vibrationally cold. The hot electrons spill out, leaving behind a positively charged graphite slab. The charge deficiency accumulated at the top and bottom surfaces leads to athermal exfoliation.\cite{miyamoto} The exfoliation under static charging described in Fig.~\ref{figure-exfoliation} is in compliance with the understanding of photoexcitation revealed by previous TD-DFT calculations,\cite{miyamoto} since the driving force which leads to the separation of graphenes from graphite is mainly related to electrostatic effects in both methods. In summary, the present study investigated the effects of charging on the structural and electronic properties of single layer graphene, graphene derivatives, BN and MoS$_2$, which have honeycomb structure. 
We concluded that while caution has to be exercised in studies involving negative charging with a large vacuum spacing, positive charging can be treated safely using a large vacuum spacing. We found that upon positive charging the band gaps of single layers of BN and MoS$_2$ increase and the unit cells are enlarged. Consequently, the phonons become softer. The charging of BN and MoS$_2$ slabs was also studied. While these slabs are wide band gap semiconductors, they become metallic upon positive charging. Consequently, excess charges accumulate on the surfaces of the slabs and induce a repulsive force between the outermost layers. With increasing positive charging the spacing between these layers increases, which eventually ends in exfoliation. \begin{acknowledgments} We thank S. Cahangirov for helpful discussions. We acknowledge partial financial support from The Academy of Science of Turkey (TUBA) and TUBITAK through Grant No: 108234. Part of the computational resources has been provided by TUBITAK ULAKBIM, High Performance and Grid Computing Center (TR-Grid e-Infrastructure). \end{acknowledgments}
\section{Introduction} In the seminal paper by \citet{ArtznerDelbaenEberHeath1999}, the authors proposed an axiomatic approach to defining risk measures, which are meant to give a numerical value for the riskiness of a given financial contract or portfolio. Alternatively, one can view risk measures as a tool that allows one to establish preference orders on the set of cashflows according to their riskiness. {Another seminal paper, \citet{ChernyMadan2009}, introduced and studied an axiomatic approach to defining performance measures, or acceptability indices, which are meant to provide an evaluation of the performance of a financial portfolio. In their most basic form, performance measures evaluate the trade-off between the return on the portfolio and the portfolio's risk. Both \citet{ArtznerDelbaenEberHeath1999} and \citet{ChernyMadan2009} were concerned with measures of risk and measures of performance in a static framework.} As shown in one of the first papers that studied risk measures in a dynamic framework, \citet{Riedel2004}, if one is concerned about making noncontradictory decisions (from the risk point of view) over time, then an additional axiom, called time consistency, is needed. Over the past decade significant progress has been made towards expanding the theory of dynamic risk measures and their time consistency. 
{For example, the so-called cocycle condition (for convex risk measures) was studied in ~\cite{FollmerPenner2006}, a recursive construction was exploited in~\cite{CheriditoKupper2006}, the relation to acceptance and rejection sets was studied in~\cite{Delbaen2006}, the concept of prudence was introduced in~\cite{Penner04}, connections to g-expectations were studied in~\cite{RosazzaGianin2006}, and the relation to Bellman's principle of optimality was shown in~\cite{ArtznerDelbaenEberHeathKu2007}.} For more details on dynamic cash-additive measures, also called dynamic monetary risk measures, we refer the reader to the comprehensive survey paper \cite{AcciaioPenner2010} and the references therein. Let us briefly recall the concept of strong time consistency of monetary risk measures, which is one of the most recognized forms of time consistency. Assume that $\rho_t(X)$ is the value of a dynamic monetary risk measure at time $t\in[0,T]$, corresponding to the riskiness, at time $t$, of the terminal cashflow $X$, with $X$ being an $\mathcal{F}_T$-measurable random variable. The monetary risk measure is said to be strongly time consistent if for any $t<s\leq T$, and any $\mathcal{F}_T$-measurable random variables $X,Y$, we have that \begin{equation}\label{eq:strong1} \rho_s(X) = \rho_s(Y) \quad \Rightarrow \quad \rho_t(X)= \rho_t(Y). \end{equation} The financial interpretation of strong time consistency is clear -- if $X$ is as risky as $Y$ at some future time $s$, then today, at time $t$, $X$ is also as risky as $Y$. One of the main features of strong time consistency is its connection to the dynamic programming principle. It is not hard to show that in the $L^{\infty}$ framework, a monetary risk measure is strongly time consistent if and only if \begin{equation}\label{eq:DPP1} \rho_t = \rho_t (-\rho_s), \quad 0\leq t<s\leq T. 
\end{equation} All other forms of time consistency for monetary risk measures, such as weak, acceptance consistent, and rejection consistent, are tied to this connection as well. In \cite{Tutsch2008}, the author proposed a general approach to time consistency for cash-additive risk measures by introducing so-called `test sets' or `benchmark sets.' Each form of time consistency was associated with a benchmark set of random variables, and larger benchmark sets correspond to stronger forms of time consistency. The first study of time consistency of dynamic performance measures is due to \citet{BCZ2010}, where the authors elevated the theory of coherent acceptability indices to a dynamic setup in discrete time. It was pointed out that none of the forms of time consistency for risk measures is suitable for acceptability indices. A recursive property similar to \eqref{eq:DPP1}, or the benchmark sets approach, essentially cannot be applied to scale invariant maps such as acceptability indices. One of the specific features of acceptability indices that needed to be accounted for in the study of their time consistency is that these measures of performance can take infinite values. In particular, this required extending the analysis beyond the $L^{\infty}$ framework. Consequently, one of the main challenges was to find an appropriate form of time consistency of acceptability indices that would be both financially reasonable and mathematically tractable. For the case of random variables (terminal cashflows), the proposed form of time consistency for a dynamic coherent acceptability index $\alpha$ reads as follows: for any $\mathcal{F}_t$-measurable random variables $m_t,\ n_t$, and any $t<T$, the following implications hold \begin{align} \alpha_{t+1}(X)\geq m_t &\quad \Rightarrow \quad \alpha_t(X)\geq m_t, \nonumber \\ \alpha_{t+1}(X)\leq n_t &\quad \Rightarrow \quad \alpha_t(X)\leq n_t.
\label{eq:timeConstAlpha1} \end{align} The financial interpretation is also clear: if tomorrow $X$ is acceptable at least at level $m_t$, then today $X$ is also acceptable at least at level $m_t$; a similar interpretation holds for the second part of \eqref{eq:timeConstAlpha1}. It is fair to say, we think, that dynamic acceptability indices and their time consistency properties play a critical role in the so-called conic approach to valuation and hedging of financial contracts \cite{BCIR2012,RosazzaGianinSgarra2012}. We recall that both risk measures and performance measures, in a nutshell, put preferences on the set of cashflows. While the corresponding forms of time consistency \eqref{eq:strong1} and \eqref{eq:timeConstAlpha1} for these classes of maps are, as argued above, different, we note that, generally speaking, both forms of time consistency link preferences between different times. The aim of this paper is to present a unified and flexible framework for time consistency of risk measures and performance measures that integrates the existing forms of time consistency. We consider a (large) class of maps that are postulated to satisfy only two properties, monotonicity and locality,\footnote{See Section~\ref{sec:prelims} for rigorous definitions along with a detailed discussion of each property.} and we study time consistency of such maps. We focus on these two properties since, in our opinion, they have to be satisfied by any reasonable dynamic risk measure or dynamic performance measure. We introduce the notion of an update rule that is meant to link preferences between different times.\footnote{It needs to be stressed that our notion of the update rule is different from the notion of update rule used in \cite{Tutsch2008}.} Time consistency is then defined in terms of an update rule. We should note that this paper is the first step we make towards a unified theory of time consistency of dynamic risk/performance measures.
To illustrate why our approach leads to such a unification, we show that almost all known concepts of weak time consistency can be reproduced and studied in terms of a single concept of an update rule, which is introduced in this paper and which is suitable both for dynamic risk measures and for dynamic performance measures. For a study of the relation of our update rule to other types of time consistency (e.g. middle time consistency, strong time consistency, or supermartingale time consistency) and their connections to various update rules, as well as for new concepts of time consistency, please see our survey paper \cite{BCP2014}. As mentioned earlier, part of this study hinges on some technical results, proved rigorously herein, about conditional expectation and conditional essential infimum/supremum for random variables that may take the values $\pm\infty$. Finally, we want to mention that traditionally the investigation of dynamic risk measures and dynamic performance indices is accompanied by robust representation type results, which are beyond the scope of this study given the generality of the classes of measures considered. Moreover, usually this is done in the context of convex analysis by exploiting the convexity (of risk measures) or quasi-concavity (of acceptability indices) properties of the relevant functions. In contrast, we describe time consistency without using convex analysis, and we consider functions that are only local and monotone, which makes our results quite general. The paper is organized as follows. In Section~\ref{sec:prelims} we introduce some necessary notation and present the main object of our study, the Dynamic LM-measure. In Section~\ref{sec:TimeCons} we set forth the main concepts of the paper: the notion of an update rule and the definition of time consistency of a dynamic LM-measure. We prove a general result about time consistency that can be viewed as a counterpart of the dynamic programming principle \eqref{eq:DPP1}.
Additionally, we show that there is a close relationship between the update rule approach to time consistency and the approach based on so-called benchmark sets. Section~\ref{S:types} is devoted to weak time consistency. In Appendix~\ref{A:cond} we provide a discussion of the extensions of the notions of conditional expectation and of conditional essential infimum/supremum to the case of random variables that take values in $[-\infty,\infty]$. To ease the exposition of the main concepts, all technical proofs are deferred to Appendix~\ref{A:proofs}, unless stated otherwise directly below the theorem or proposition. \section{Preliminaries}\label{sec:prelims} Let $(\Omega,\mathcal{F},\mathbb{F}=\{\mathcal{F}_{t}\}_{t\in\mathbb{T}},P)$ be a filtered probability space, with $\mathcal{F}_{0}=\{\Omega,\emptyset\}$ and $\mathbb{T}=\set{0,1,\ldots, T}$, for a fixed and finite time horizon $T\in\mathbb{N}$.\footnote{Most of the results hold true, or can be adjusted accordingly, in the case of an infinite time horizon. For the sake of brevity, we omit the discussion of this case here.} For $\mathcal{G}\subseteq\mathcal{F}$ we denote by $L^0(\Omega,\mathcal{G},P)$ and $\bar{L}^0(\Omega,\mathcal{G},P)$ the sets of all $\mathcal{G}$-measurable random variables with values in $(-\infty,\infty)$ and $[-\infty,\infty]$, respectively. In addition, we will use the notation $L^{p}(\mathcal{G}):=L^{p}(\Omega,\mathcal{G},P)$, $L^{p}_{t}:=L^{p}(\mathcal{F}_{t})$, and $L^{p}:=L_T^{p}$, for $p\in \set{0,1,\infty}$. Analogous definitions apply to $\bar{L}^{0}$. We will also use the notation $\mathbb{V}^{p}:=\{(V_{t})_{t\in\mathbb{T}}: V_{t}\in L^{p}_{t}\}$, for $p\in \{0,1,\infty\}$. Throughout this paper, $\mathcal{X}$ will denote either the space of random variables $L^{p}$ or the space of adapted processes $\mathbb{V}^p$, for $p\in\set{0,1,\infty}$. If $\mathcal{X}=L^{p}$, for $p\in\{0,1,\infty\}$, then the elements $X \in\mathcal{X}$ are interpreted as discounted terminal cash-flows.
On the other hand, if $\mathcal{X}=\mathbb{V}^{p}$, for $p\in\{0,1,\infty\}$, then the elements of $\mathcal{X}$ are interpreted as discounted dividend processes. It needs to be remarked that all concepts developed for $\mathcal{X}=\mathbb{V}^{p}$ can be easily adapted to the case of cumulative discounted value processes. The case of random variables can be viewed as a particular case of stochastic processes by considering cash-flows with only a terminal payoff, i.e. stochastic processes such that $V=(0,\ldots,0,V_T)$. Nevertheless, we treat this case separately for transparency. In both cases we consider the standard pointwise order, understood in the almost sure sense. In what follows, we will also make use of the multiplication operator, denoted by $\cdot_{t}$ and defined by \begin{align} m\cdot_{t}V &:=(V_{0},\ldots,V_{t-1},mV_{t},mV_{t+1},\ldots), \nonumber\\ m\cdot_{t}X &:= mX,\label{eq:conventionV} \end{align} for $V\in\Set{(V_t)_{t\in\mathbb{T}} \mid V_{t}\in L^{0}_{t}}$, $X\in L^{0}$ and $m\in L^{\infty}_{t}$. In order to ease the notation, if no confusion arises, we will drop $\cdot_t$ from the above product and simply write $mV$ and $mX$ instead of $m\cdot_{t}V$ and $m\cdot_{t}X$, respectively. \begin{remark} We note that the space $\mathbb{V}^{p}$, $p\in\{0,1,\infty\}$, endowed with the multiplication $\cdot_{t}$ does not define a proper $L^{0}$--module \cite{FilipovicKupperVogelpoth2009} (e.g. $0\cdot_{t} V\neq 0$ for some $V\in\mathbb{V}^{p}$). However, in what follows, we will adopt some concepts from $L^0$-module theory which naturally fit into our study. Moreover, in many cases we consider, if one additionally assumes {\it independence of the past} and replaces $V_{0},\ldots,V_{t-1}$ with $0$s in \eqref{eq:conventionV}, then $\mathcal{X}$ becomes an $L^{0}$--module. We refer the reader to \cite{BCDK2013,BCP2013} for a thorough discussion on this matter.
\end{remark} Throughout, we use the convention that $\infty-\infty=-\infty+\infty=-\infty$ and $0\cdot(\pm\infty)=0$. \noindent For $t\in\mathbb{T}$ and $X\in\bar{L}^{0}$ we define the (generalized) $\mathcal{F}_{t}$-conditional expectation of $X$ by $$ E[X|\mathcal{F}_{t}]:=\lim_{n\to\infty}E[(X^{+}\wedge n)|\mathcal{F}_{t}]-\lim_{n\to\infty}E[(X^{-}\wedge n)|\mathcal{F}_{t}], $$ where $X^{+}=(X\vee 0)$ and $X^{-}=(-X\vee 0)$. Note that, in view of our convention, we have $(-1)(\infty-\infty) = \infty \neq -\infty + \infty =-\infty$, which, in particular, implies that we might get $-E[X]\neq E[-X]$. Thus, the conditional expectation operator defined above is no longer linear on the space $\bar{L}^{0}$ (see Proposition \ref{pr:condexp} in Appendix~\ref{A:cond}). Similarly, for any $t\in\mathbb{T}$ and $X\in\bar{L}^{0}$, we define the (generalized) $\mathcal{F}_{t}$-conditional essential infimum by\footnote{Since both sequences $\Essinf_{t}(X^{+}\wedge n)$ and $\Esssup_{t}(X^{-}\wedge n)$ are monotone, the corresponding limits exist.} \begin{equation}\label{eq:essinf} \Essinf_{t}X:=\lim_{n\to\infty}\Big[\Essinf_{t}(X^{+}\wedge n)\Big]-\lim_{n\to\infty}\Big[\Esssup_{t}(X^{-}\wedge n)\Big], \end{equation} and, respectively, we put $\Esssup_{t}(X):=-\Essinf_{t}(-X)$. For basic properties of this operator and the definition of the conditional essential infimum on $L^{\infty}$ see Appendix~\ref{A:cond}. In particular, note that for any $X\in \bar{L}^{0}_{t}$ we get $\Essinf_{t}X=X$. Next, we introduce the main object of this study.
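Before doing so, the generalized operators above can be made concrete on a finite probability space, where the $\mathcal{F}_t$-conditional essential infimum is simply the atom-wise minimum and the truncation limits matter only when $X$ takes the values $\pm\infty$. The following sketch is our own illustration (all names and the toy space are ours, not part of the paper's formal development):

```python
# Our own toy illustration (not from the paper): the generalized
# F_t-conditional operators on a finite probability space.
# Omega = {0,1,2,3}; F_1 is generated by the partition {{0,1},{2,3}}.
# On a finite space the F_t-conditional essential infimum is the
# atom-wise minimum, and the generalized conditional expectation is the
# atom-wise weighted average (the truncations X+ ^ n and X- ^ n are
# only needed when X takes the values +/- infinity).

PARTITION_F1 = [[0, 1], [2, 3]]           # atoms generating F_1
P = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}  # outcome probabilities

def essinf(X, partition):
    """F_t-conditional essential infimum: constant on each atom."""
    out = {}
    for atom in partition:
        m = min(X[w] for w in atom)
        for w in atom:
            out[w] = m
    return out

def cond_exp(X, partition):
    """(Generalized) F_t-conditional expectation on a finite space."""
    out = {}
    for atom in partition:
        mass = sum(P[w] for w in atom)
        ex = sum(P[w] * X[w] for w in atom) / mass
        for w in atom:
            out[w] = ex
    return out

X = {0: 3.0, 1: -1.0, 2: 5.0, 3: 2.0}
print(essinf(X, PARTITION_F1))    # {0: -1.0, 1: -1.0, 2: 2.0, 3: 2.0}
print(cond_exp(X, PARTITION_F1))  # {0: 1.0, 1: 1.0, 2: 3.5, 3: 3.5}

# Essinf_t X = X whenever X is F_t-measurable:
Y = essinf(X, PARTITION_F1)
assert essinf(Y, PARTITION_F1) == Y
```

The final assertion illustrates the property $\Essinf_{t}X=X$ for $\mathcal{F}_t$-measurable $X$ noted above.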
\begin{definition} A family $\varphi=\{\varphi_{t}\}_{t\in\mathbb{T}}$ of maps $\varphi_{t}:\mathcal{X}\to\bar{L}^{0}_{t}$ is a {\it Dynamic LM-measure} if $\varphi$ satisfies \begin{enumerate}[1)] \item {\it (Locality)} $\1_{A}\varphi_{t}(X)=\1_{A}\varphi_{t}(\1_{A}\cdot_{t} X)$; \item {\it (Monotonicity)} $X\leq Y \Rightarrow \varphi_{t}(X)\leq \varphi_{t}(Y)$; \end{enumerate} for any $t\in\mathbb{T}$, $X,Y\in\mathcal{X}$ and $A\in\mathcal{F}_{t}$. \end{definition} We believe that locality and monotonicity are two properties that must be satisfied by any reasonable dynamic measure of performance and/or measure of risk. The monotonicity property is natural for any numerical representation of an order between the elements of $\mathcal{X}$. The locality property essentially means that the values of the LM-measure restricted to a set $A\in\mathcal{F}_{t}$ remain invariant with respect to the values of the arguments outside of the same set; in particular, events that will not happen in the future do not change the value of the measure today. \label{page:footnote} Dynamic LM-measures contain several important subclasses. Among the most recognized ones are dynamic risk measures\footnote{Formally, we will consider negatives of dynamic risk measures and call them monetary utility measures, as risk measures are typically assumed to be counter-monotone, rather than monotone.} and dynamic performance measures (dynamic acceptability indices). These classes of measures have been extensively studied in the literature over the past decade. Cash additivity is the key property that distinguishes risk measures from all other measures. This property means that adding $\$m$ to a portfolio today reduces its overall risk by the same amount $m$. From the regulatory perspective, the value of a risk measure is typically interpreted as the minimal capital requirement for a bank.
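Returning to the definition of a Dynamic LM-measure, both axioms can be verified directly for the prototypical example $\varphi_t(X)=E[X|\mathcal{F}_t]$. The following finite-space sketch is our own toy check (names and the toy space are ours):

```python
# Our own toy check (not from the paper): phi_t(X) = E[X | F_t] on a
# finite probability space satisfies the two axioms of a Dynamic
# LM-measure.  F_1 is generated by the partition {{0,1},{2,3}}.

PARTITION_F1 = [[0, 1], [2, 3]]
P = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}

def phi(X):
    """phi_1(X) = E[X | F_1]: weighted average on each atom of F_1."""
    out = {}
    for atom in PARTITION_F1:
        mass = sum(P[w] for w in atom)
        ex = sum(P[w] * X[w] for w in atom) / mass
        for w in atom:
            out[w] = ex
    return out

X = {0: 3.0, 1: -1.0, 2: 5.0, 3: 2.0}
Y = {0: 4.0, 1: 0.0, 2: 5.0, 3: 6.0}   # Y >= X pointwise
A = {0, 1}                              # an F_1-measurable event (an atom)

# Monotonicity: X <= Y  implies  phi_1(X) <= phi_1(Y).
assert all(phi(X)[w] <= phi(Y)[w] for w in P)

# Locality: 1_A * phi_1(X) == 1_A * phi_1(1_A * X) for A in F_1.
XA = {w: (X[w] if w in A else 0.0) for w in P}
assert all(phi(X)[w] == phi(XA)[w] for w in A)
print("locality and monotonicity hold for E[. | F_1]")
```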
For more details on coherent/convex/monetary risk measures, and for a formal definition of cash additivity, we refer the reader to the survey papers \cite{FollmerSchied2010,AcciaioPenner2010}. The distinctive property of performance measures is scale invariance: a rescaled portfolio or cashflow is accepted at the same level. Performance and acceptability indices were studied in~\cite{ChernyMadan2009,BCZ2010,CheriditoKromer2012,BCP2013}, and they are meant to provide an assessment of how good a financial position is. In particular,~\cite{CheriditoKromer2012} gives examples of performance indices that are not acceptability indices. It needs to be noted that the theory developed in this paper can also be applied to the sub-scale invariant dynamic performance indices studied in \cite{RosazzaGianinSgarra2012,BCC2014}. \section{Time consistency and update rules}\label{sec:TimeCons} In this section we introduce the main concept of this paper: the time consistency of dynamic risk measures and dynamic performance measures or, more generally, the time consistency of the dynamic LM-measures introduced in the previous section. We recall that these dynamic LM-measures are defined on $\mathcal{X}$, where $\mathcal{X}$ denotes either the space $L^{p}$ of random variables or the space $\mathbb{V}^{p}$ of stochastic processes, for $p\in\set{0,1,\infty}$, so our study of time consistency is done relative to such spaces. Nevertheless, the definition of time consistency can be easily adapted to more general spaces, such as Orlicz hearts (as studied in \cite{CheriditoLi2009}) or topological $L^{0}$-modules (see for instance~\cite{BCDK2013}). Assume that $\varphi$ is a dynamic LM-measure on $\mathcal{X}$. For an arbitrary fixed $X\in\mathcal{X}$ and $t\in \mathbb{T}$, the value $\varphi_{t}(X)$ represents a quantification (measurement) of preferences about $X$ at time $t$.
Clearly, it is reasonable to require that any such quantification (measurement) methodology be coherent as time passes. This is precisely the motivation behind the concepts of time consistency of dynamic LM-measures. There are various forms of time consistency proposed in the literature, some of them suitable for one class of measures, others for a different class of measures. For example, for dynamic convex (or coherent) risk measures, the various versions of time consistency surveyed in \cite{AcciaioPenner2010} can be seen as versions of the celebrated dynamic programming principle. On the other hand, as shown in \cite{BCZ2010}, the dynamic programming principle is essentially not suited for scale invariant measures such as dynamic acceptability indices, and the authors introduce a new type of time consistency tailored to these measures and provide a robust representation of them. Nevertheless, in all these cases the time consistency property connects, in a noncontradictory way, the measurements made at different times. Next, we introduce the notion of an update rule, which serves as the main tool in relating the measurements of preferences at different times and which is the main building block of our unified theory of the time consistency property. \begin{definition}\label{def:UMST} We call a family $\mu=\{\mu_{t,s}:\, t,s\in\mathbb{T},\, s>t\}$ of maps $\mu_{t,s}:\bar{L}^{0}_{s}\times\mathcal{X}\to\bar{L}^{0}_{t}$ an {\it update rule} if for any $s>t$, the map $\mu_{t,s}$ satisfies the following conditions: \begin{enumerate}[1)] \item (Locality) $\1_{A}\mu_{t,s}(m,X)=\1_{A}\mu_{t,s}(\1_{A}m,X)$; \item (Monotonicity) if $m\geq m'$, then $\mu_{t,s}(m,X)\geq \mu_{t,s}(m',X)$; \end{enumerate} for any $X\in\mathcal{X}$, $A\in\mathcal{F}_{t}$ and $m,m'\in\bar{L}^{0}_{s}$. \end{definition} Since LM-measures are local and monotone, properties with clear financial interpretations, the update rules are naturally assumed to be local and monotone too.
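As a quick sanity check (our own illustration, on an assumed finite toy space), the two update-rule axioms can be verified for the simplest candidate rule $\mu_{t,s}(m,X)=E[m|\mathcal{F}_t]$, this time as properties of the benchmark argument $m$:

```python
# Our own sanity check (not from the paper): the simple update rule
# mu_{t,s}(m, X) = E[m | F_t] satisfies the two axioms of the
# definition above -- locality and monotonicity in the benchmark m.

PARTITION_F1 = [[0, 1], [2, 3]]            # atoms of F_1
P = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}

def mu(m, X=None):
    """mu_{t,s}(m, X) = E[m | F_1]; ignores X (X-invariant)."""
    out = {}
    for atom in PARTITION_F1:
        mass = sum(P[w] for w in atom)
        ex = sum(P[w] * m[w] for w in atom) / mass
        for w in atom:
            out[w] = ex
    return out

m1 = {0: 1.0, 1: 2.0, 2: -3.0, 3: 4.0}
m2 = {0: 2.0, 1: 2.0, 2: 0.0, 3: 5.0}     # m2 >= m1 pointwise
A = {2, 3}                                 # an F_1-measurable event

# Monotonicity: m1 <= m2  implies  mu(m1) <= mu(m2).
assert all(mu(m1)[w] <= mu(m2)[w] for w in P)

# Locality: 1_A * mu(m1) == 1_A * mu(1_A * m1).
mA = {w: (m1[w] if w in A else 0.0) for w in P}
assert all(mu(m1)[w] == mu(mA)[w] for w in A)
print("update-rule axioms hold for mu = E[. | F_1]")
```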
The first argument $m\in\bar{L}^{0}_{s}$ in $\mu_{t,s}$ serves as a benchmark to which the measurement $\varphi_{s}(X)$ is compared. The presence of the second argument, $X\in\mathcal{X}$, in $\mu_{t,s}$ allows the update rule to depend on the objects (the $X$s) to which the preferences are applied. However, as we will see in the next section, there are natural situations in which the update rules are independent of $X\in\mathcal{X}$, and sometimes they do not even depend on the future times $s\in\mathbb{T}$. \begin{remark}\label{rem:super1} As we have mentioned, the update rule is used for updating preferences through time. This, for example, can be achieved in terms of the conditional expectation operator, i.e. we can consider an update rule $\mu$ given by \begin{equation}\label{eq:ur:exp} \mu_{t,s}(m,X)=E[m|\mathcal{F}_{t}]. \end{equation} Note that this particular update rule does not depend on $s$ and $X$. An update rule might also be used for discounting the preferences. Intuitively speaking, the risk of a loss in the far future might be preferred to an imminent risk of loss (see~\cite{Cherny2010} for a more detailed explanation of this idea). For example, the update rule $\mu$ of the form \begin{equation}\label{eq:ur:exp3} \mu_{t,s}(m,X)=\left\{ \begin{array}{ll} \alpha^{s-t} E[m|\mathcal{F}_{t}] & \textrm{on } \{E[m|\mathcal{F}_{t}] \geq 0\},\\ \alpha^{t-s} E[m|\mathcal{F}_{t}] & \textrm{on } \{E[m|\mathcal{F}_{t}] < 0\}, \end{array}\right. \end{equation} for a fixed $\alpha\in (0,1)$, would achieve this goal. Note that the `discounting' proposed here has nothing to do with ordinary discounting, as we already act on discounted values. \end{remark} Next, we define several particular classes of update rules, suited to our needs. \begin{definition}\label{UM} Let $\mu$ be an update rule.
We will say that $\mu$ is: \begin{enumerate}[1)] \item {\it $X$-invariant}, if $\mu_{t,s}(m,X)=\mu_{t,s}(m,0)$; \item {\it $sX$-invariant}, if there exists a family $\{\mu_{t}\}_{t\in\mathbb{T}}$ of maps $\mu_{t}:\bar{L}^{0}\to\bar{L}^{0}_{t}$ such that $\mu_{t,s}(m,X)=\mu_{t}(m)$; \item {\it projective}, if it is $sX$-invariant and $\mu_{t}(m_{t})=m_{t}$; \end{enumerate} for any $s,t\in\mathbb{T}$, $s>t$, $X\in\mathcal{X}$, $m\in\bar{L}^{0}_{s}$ and $m_{t}\in\bar{L}^{0}_{t}$. \end{definition} Examples of update rules satisfying 1) and 3) are given by \eqref{eq:ur:exp3} and \eqref{eq:ur:exp}, respectively. An update rule that satisfies 2) but not 3) can be constructed by replacing the factors $\alpha^{s-t}$ and $\alpha^{t-s}$ in \eqref{eq:ur:exp3} with a constant. Generally speaking, update rules for stochastic processes will not satisfy 1), as the information about the process in the time interval $(t,s)$ will affect $\mu_{t,s}$; see Subsection~\ref{S:semi.weak} for details. \begin{remark}\label{rem:UMnotation} If an update rule $\mu$ is $sX$-invariant, then it is enough to consider only the corresponding family $\{\mu_{t}\}_{t\in\mathbb{T}}$. Hence, with a slight abuse of notation, we will write $\mu=\{\mu_{t}\}_{t\in\mathbb{T}}$ and call it an update rule as well. \end{remark} We are now ready to introduce the general definition of time consistency. \begin{definition}$\!\!\!$\footnote{We introduce the concept of time consistency only for LM-measures, as this is the only class of measures used in this paper. However, the definition itself is suitable for any map acting from $\mathcal{X}$ to $\bar{L}^0$. For example, traditionally in the literature, time consistency is defined for dynamic risk measures (negatives of LM-measures), and the above definition of time consistency remains appropriate, although one has to flip `acceptance' with `rejection'.} Let $\mu$ be an update rule. We say that the dynamic LM-measure $\varphi$ is {\it $\mu$-acceptance (resp.
$\mu$-rejection) time consistent} if \begin{equation}\label{eq:atc} \varphi_{s}(X)\geq m_{s}\quad (\textrm{resp.} \leq) \quad \Longrightarrow\quad \varphi_{t}(X)\geq \mu_{t,s}(m_{s},X)\quad (\textrm{resp.} \leq), \end{equation} for all $s,t\in\mathbb{T}$, $s>t$, $X\in \mathcal{X}$ and $m_{s}\in \bar{L}^{0}_{s}$. If property \eqref{eq:atc} is satisfied only for $s,t\in\mathbb{T}$ such that $s=t+1$, then we say that $\varphi$ is {\it one step $\mu$-acceptance (resp. one step $\mu$-rejection) time consistent}. \end{definition} The financial interpretation of acceptance time consistency is straightforward: if $X\in\mathcal{X}$ is accepted at some future time $s\in\mathbb{T}$, at least at level $m$, then today, at time $t\in\mathbb{T}$, it is accepted at least at level $\mu_{t,s}(m,X)$. Similarly for rejection time consistency. Essentially, the update rule $\mu$ translates the preference levels at time $s$ into preference levels at time $t$. As it turns out, this simple and intuitive definition of time consistency, with an appropriately chosen $\mu$, covers various cases of time consistency for risk and performance measures that can be found in the existing literature (see \cite{BCP2014} for a survey). Next, we give an equivalent formulation of time consistency. While the proof of the equivalence is simple, the result itself will be used conveniently in the sequel. Moreover, it can be viewed as a counterpart of the dynamic programming principle, which is an equivalent formulation of dynamic consistency for convex risk measures. \begin{proposition}\label{prop:mart.ref} Let $\mu$ be an update rule and let $\varphi$ be a dynamic LM-measure. Then $\varphi$ is $\mu$-acceptance (resp. $\mu$-rejection) time consistent if and only if \begin{equation}\label{eq:accepTimeConsAlt} \varphi_{t}(X)\geq \mu_{t,s}(\varphi_{s}(X),X)\quad (\textrm{resp.} \leq), \end{equation} for any $X\in\mathcal{X}$ and $s,t\in\mathbb{T}$ such that $s>t$.
\end{proposition} \begin{remark}\label{rem:preservetc} It is clear, and also naturally desired, that a monotone transformation of an LM-measure will not change the preference order of the underlying elements. We want to emphasize that a monotone transformation will also preserve time consistency. In other words, the preference orders will also be preserved in time. Indeed, if $\varphi$ is $\mu$-acceptance time consistent, and $g:\bar{\mathbb{R}} \to\bar{\mathbb{R}}$ is a strictly increasing function, then the family $\{g\circ \varphi_{t}\}_{t\in\mathbb{T}}$ is $\widetilde{\mu}$-acceptance time consistent, where the update rule $\widetilde{\mu}$ is defined by $\widetilde{\mu}_{t,s}(m,X)=g(\mu_{t,s}(g^{-1}(m),X))$, for $t,s\in\mathbb{T}$, $s>t$, $X\in\mathcal{X}$ and $m\in \bar{L}^{0}_{s}$. \end{remark} In the case of random variables, $\mathcal{X}=L^p$, we will usually consider update rules that are $X$-invariant. The case of stochastic processes is more intricate. If $\varphi$ is a dynamic LM-measure and $V\in\mathbb{V}^p$, then in order to compare $\varphi_{t}(V)$ and $\varphi_{s}(V)$, for $s>t$, one also needs to take into account the cash-flows between times $t$ and $s$. Usually, for $\mathcal{X}=\mathbb{V}^{p}$ we consider update rules such that \begin{equation}\label{eq:prTOrv.f} \mu_{t,t+1}(m,V)=\mu_{t,t+1}(m,0)+f(V_{t}), \end{equation} where $f:\bar{\mathbb{R}}\to \bar{\mathbb{R}}$ is a Borel measurable function such that $f(0)=0$. We note that any such one step update rule $\mu$ can be easily adapted to the case of random variables. Indeed, upon setting $\widetilde{\mu}_{t,t+1}(m):=\mu_{t,t+1}(m,0)$ we get a one step $X$-invariant update rule $\widetilde{\mu}$, which is suitable for random variables. Moreover, $\widetilde{\mu}$ will define the corresponding type of one step time consistency for random variables. Of course, this correspondence between update rules for processes and for random variables is valid only in the `one step' setup.
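The one-step rule \eqref{eq:prTOrv.f} and its reduction to random variables can be sketched on a finite toy space as follows (our own illustration; the particular choice of $f$ below is arbitrary, subject only to $f(0)=0$):

```python
# Our own illustration of the one-step update rule for processes
# displayed above: mu_{t,t+1}(m, V) = mu_{t,t+1}(m, 0) + f(V_t), where
# f is Borel with f(0) = 0 (the particular f below is an arbitrary
# choice of ours), together with its reduction to the X-invariant rule
# tilde-mu_{t,t+1}(m) := mu_{t,t+1}(m, 0) for random variables.

PARTITION_FT = [[0, 1], [2, 3]]            # atoms of F_t
P = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}

def cond_exp(m):
    """E[m | F_t]: atom-wise weighted average; plays the role of
    mu_{t,t+1}(m, 0)."""
    out = {}
    for atom in PARTITION_FT:
        mass = sum(P[w] for w in atom)
        ex = sum(P[w] * m[w] for w in atom) / mass
        for w in atom:
            out[w] = ex
    return out

def f(v):
    """An arbitrary Borel function with f(0) = 0:
    penalize only negative dividends."""
    return min(v, 0.0)

def mu_process(m, V_t):
    """mu_{t,t+1}(m, V) = E[m | F_t] + f(V_t), outcome-wise."""
    base = cond_exp(m)
    return {w: base[w] + f(V_t[w]) for w in P}

m = {0: 1.0, 1: 3.0, 2: 2.0, 3: 6.0}
V_t = {0: -1.0, 1: -1.0, 2: 2.0, 3: 2.0}   # F_t-measurable dividend at t
zero = {w: 0.0 for w in P}

print(mu_process(m, V_t))                  # {0: 1.0, 1: 1.0, 2: 4.0, 3: 4.0}
assert mu_process(m, zero) == cond_exp(m)  # recovers tilde-mu, since f(0)=0
```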
Moreover, for update rules which admit the so-called nested composition property~(cf. \cite{Ruszczynski2010, Ruszczynski2006} and references therein), \begin{equation}\label{eq:RvToPr} \mu_{t,s}(m,V)=\mu_{t,t+1}(\mu_{t+1,t+2}(\ldots\mu_{s-2,s-1}(\mu_{s-1,s}(m,V),V)\ldots,V),V), \end{equation} we have that $\mu$-acceptance (resp. $\mu$-rejection) time consistency is equivalent to one step $\mu$-acceptance (resp. $\mu$-rejection) time consistency. \subsection{Relation between the update rule approach and the benchmark approach} As we show in this section, there is a close relationship between our update rule approach to time consistency and the approach based on so-called benchmark sets. The latter approach was initiated by Tutsch \cite{Tutsch2008}, where the author applied it in the context of dynamic risk measures. Essentially, a benchmark set is a collection of elements from $\mathcal{X}$ that satisfy some additional structural properties. For simplicity, we shall assume here that $\mathcal{X}=L^p$, for $p\in\{0,1,\infty\}$. The definition of time consistency in terms of benchmark sets is as follows: \begin{definition}\label{def:TC-benchSet} Let $\varphi$ be a dynamic LM-measure and let $\mathcal{Y}=\{\mathcal{Y}_{t}\}_{t\in\mathbb{T}}$ be a {\it family of benchmark sets}, that is, sets $\mathcal{Y}_t$ such that $\mathcal{Y}_{t}\subseteq \mathcal{X}$, $0\in\mathcal{Y}_{t}$ and $\mathcal{Y}_t+\mathbb{R}=\mathcal{Y}_t$. We say that $\varphi$ is {\it acceptance (resp. rejection) time consistent with respect to $\mathcal{Y}$} if \begin{equation}\label{eq:benchmark2} \varphi_{s}(X)\geq \varphi_{s}(Y)\quad (resp. \leq)\quad \Longrightarrow\quad \varphi_{t}(X)\geq \varphi_{t}(Y)\quad (resp. \leq), \end{equation} for all $s\geq t$, $X\in \mathcal{X}$ and $Y\in\mathcal{Y}_{s}$. \end{definition} Informally, the ``degree'' of time consistency with respect to $\mathcal{Y}$ is measured by the size of $\mathcal{Y}$.
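As a toy check of this definition (our own illustration; the toy space, measure, and benchmark family are assumptions of ours), take $\varphi_t(X)=E[X|\mathcal{F}_t]$ on a finite space and let each benchmark set consist of the deterministic cashflows; acceptance time consistency then follows from the tower property:

```python
# Our own toy check (not the paper's example) of acceptance time
# consistency with respect to a benchmark family: take
# phi_t(X) = E[X | F_t] on a finite space, with benchmark sets Y_s
# containing the deterministic cashflows (constants).  If
# phi_s(X) >= c everywhere, the tower property gives
# phi_t(X) = E[phi_s(X) | F_t] >= c.

P = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
F = {0: [[0, 1, 2, 3]],            # F_0: trivial sigma-algebra
     1: [[0, 1], [2, 3]],          # F_1
     2: [[0], [1], [2], [3]]}      # F_2: full information

def phi(X, t):
    """phi_t(X) = E[X | F_t]: atom-wise weighted average."""
    out = {}
    for atom in F[t]:
        mass = sum(P[w] for w in atom)
        ex = sum(P[w] * X[w] for w in atom) / mass
        for w in atom:
            out[w] = ex
    return out

X = {0: 3.0, 1: -1.0, 2: 5.0, 3: 2.0}
constants = [-2.0, -1.0, 0.0, 1.0, 2.0]    # benchmark cashflows Y = c

for c in constants:
    for t in (0, 1):
        for s in range(t + 1, 3):
            if all(phi(X, s)[w] >= c for w in P):         # accepted at s
                assert all(phi(X, t)[w] >= c for w in P)  # ... then at t
print("acceptance time consistency w.r.t. constant benchmarks holds")
```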
Thus, the larger the sets $\mathcal{Y}_{s}$ are, for each $s\in\mathbb{T}$, the stronger is the degree of time consistency of $\varphi$. We now have the following important proposition. \begin{proposition}\label{prop:benchmark} Let $\varphi$ be a dynamic LM-measure and let $\mathcal{Y}$ be a family of benchmark sets. Then, there exists an update rule $\mu$ such that $\varphi$ is acceptance (resp. rejection) time consistent with respect to $\mathcal{Y}$ if and only if it is $\mu$-acceptance (resp. $\mu$-rejection) time consistent. \end{proposition} \noindent The update rule $\mu$ is said to provide $\varphi$ with the same type of time consistency as $\mathcal{Y}$ does, and vice versa. Generally speaking, the converse implication does not hold true, i.e. given an LM-measure $\varphi$ and an update rule $\mu$, it may not be possible to construct $\mathcal{Y}$ so that it provides the same type of time consistency as $\mu$ does. In other words, the notion of time consistency given in terms of update rules is more general. \section{Weak time consistency}\label{S:types} In this section we discuss examples of update rules that relate to weak time consistency for random variables and for stochastic processes. This is meant to illustrate the framework developed earlier in this paper. For a thorough presentation of the application of our theory of update rules to other known types of time consistency, such as middle time consistency and strong time consistency, as well as for other related results and concepts, we refer to the survey paper \cite{BCP2014}. The notion of weak time consistency was introduced in~\cite{Tutsch2008}, and subsequently studied in \cite{AcciaioPenner2010,ArtznerDelbaenEberHeathKu2007,CheriditoDelbaenKupper2006a,DetlefsenScandolo2005,AcciaioFollmerPenner2010}.
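Before formalizing this notion, a numerical preview (our own toy check, on an assumed finite space): the prototypical weakly acceptance time consistent measure is conditional expectation, since $E[X|\mathcal{F}_t]=E[E[X|\mathcal{F}_s]|\mathcal{F}_t]\geq \Essinf_{t}E[X|\mathcal{F}_s]$ for $s>t$:

```python
# Our own numerical check (not from the paper): phi_t(X) = E[X | F_t]
# satisfies phi_t(X) >= Essinf_t phi_s(X) for all s > t, which is the
# weak acceptance form of time consistency.  Finite space, non-uniform P.

P = {0: 0.2, 1: 0.3, 2: 0.1, 3: 0.4}
F = {0: [[0, 1, 2, 3]],              # F_0: trivial sigma-algebra
     1: [[0, 1], [2, 3]],            # F_1
     2: [[0], [1], [2], [3]]}        # F_2: full information

def phi(X, t):
    """phi_t(X) = E[X | F_t]: atom-wise weighted average."""
    out = {}
    for atom in F[t]:
        mass = sum(P[w] for w in atom)
        ex = sum(P[w] * X[w] for w in atom) / mass
        for w in atom:
            out[w] = ex
    return out

def essinf(X, t):
    """F_t-conditional essential infimum: atom-wise minimum."""
    out = {}
    for atom in F[t]:
        m = min(X[w] for w in atom)
        for w in atom:
            out[w] = m
    return out

X = {0: 3.0, 1: -1.0, 2: 5.0, 3: 2.0}
for t in (0, 1):
    for s in range(t + 1, 3):
        lhs, rhs = phi(X, t), essinf(phi(X, s), t)
        assert all(lhs[w] >= rhs[w] for w in P)
print("phi_t(X) >= Essinf_t phi_s(X) for all s > t")
```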
The idea is that if `tomorrow', say at time $s$, we accept $X\in\mathcal{X}$ at a level $m_s\in\bar{L}^{0}_{s}$, then `today', say at time $t$, we would accept $X$ at least at any level lower than or equal to $m_s$, appropriately adjusted by the information $\mathcal{F}_t$ available at time $t$ (cf. \eqref{eq:inft}). Similarly, if tomorrow we reject $X$ at any level higher than or equal to $m_s\in\bar{L}^{0}_{s}$, then today we should also reject $X$ at any level higher than $m_s$, adjusted to the flow of information $\mathcal{F}_t$. This suggests that the update rules should be taken as the $\mathcal{F}_t$-conditional essential infimum and supremum, respectively. Towards this end, we first show that the $\mathcal{F}_t$-conditional essential infimum and supremum are projective update rules. \begin{proposition}\label{th:essinf} The family $\mu^{\inf}:=\{\mu_{t}^{\inf}\}_{t\in\mathbb{T}}$ of maps $\mu^{\inf}_{t}:\bar{L}^{0}\to\bar{L}^{0}_{t}$ given by \[ \mu^{\inf}_{t}(m)=\Essinf_{t}m, \] is a projective\footnote{See Remark~\ref{rem:UMnotation} for the comment about notation.\label{fn:notation}} update rule. A similar result holds true for the family $\mu^{\sup}:=\{\mu_{t}^{\sup}\}_{t\in\mathbb{T}}$ of maps $\mu^{\sup}_{t}:\bar{L}^{0}\to\bar{L}^{0}_{t}$ given by $\mu^{\sup}_{t}(m)=\Esssup_{t}m$. \end{proposition} \subsection{Weak time consistency for random variables} Recall that the case of random variables corresponds to $\mathcal{X}=L^{p}$, for a fixed $p\in\{0,1,\infty\}$. We proceed with the definition of weak acceptance and weak rejection time consistency (for random variables). \begin{definition}\label{type.of.cons.weak} Let $\varphi$ be a dynamic LM-measure. Then $\varphi$ is said to be {\it weakly acceptance (resp. weakly rejection) time consistent} if it is $\mu^{\inf}$-acceptance (resp. $\mu^{\sup}$-rejection) time consistent.
\end{definition} Definition~\ref{type.of.cons.weak} of time consistency is equivalent to many forms of time consistency studied in the current literature. Usually, weak time consistency is considered for dynamic monetary risk measures on $L^{\infty}$ (cf.~\cite{AcciaioPenner2010} and references therein); we refer to this case as the `classical weak time consistency.' It was observed in \cite{AcciaioPenner2010} that in the classical weak time consistency framework, weak acceptance (resp. weak rejection) time consistency is equivalent to the statement that for any $X\in\mathcal{X}$ and $s>t$, we get \begin{equation}\label{eq:classicWeak} \varphi_{s}(X)\geq 0 \Rightarrow \varphi_{t}(X)\geq 0\quad\quad \textrm{(resp. $\leq$)}. \end{equation} This observation was the motivation for our definition of weak acceptance (resp. weak rejection) time consistency, and the next proposition explains why. \begin{proposition}\label{pr:weak} Let $\varphi$ be a dynamic LM-measure. The following conditions are equivalent: \begin{enumerate}[1)] \item $\varphi$ is weakly acceptance time consistent, i.e. for any $X\in\mathcal{X}$, $t,s\in\mathbb{T}$, $s>t$, and $m_{s}\in \bar{L}^0_{s}$, \begin{equation}\label{eq:inft} \varphi_{s}(X)\geq m_{s} \Rightarrow \varphi_{t}(X) \geq \Essinf_t(m_{s}). \end{equation} \item For any $X\in \mathcal{X}$, $s,t\in\mathbb{T}$, $s>t$, $\varphi_{t}(X)\geq \Essinf_{t}\varphi_{s}(X)$. \item For any $X\in \mathcal{X}$, $s,t\in\mathbb{T}$, $s>t$, and $m_{t}\in \bar{L}^{0}_{t}$, \[ \varphi_{s}(X)\geq m_{t} \Rightarrow \varphi_{t}(X)\geq m_{t}.
\] \end{enumerate} If additionally $\varphi$ is a dynamic monetary utility measure\footnote{i.e. $\varphi_{t}(0)=0$ and $\varphi_{t}(X+c_{t})=\varphi_{t}(X)+c_{t}$ for any $t\in\mathbb{T}$, $X\in\bar{L}^{0}$ and $c_{t}\in L^{\infty}_{t}$.}, then the above conditions are equivalent to \begin{enumerate}[1)] \item[4)] For any $X\in \mathcal{X}$ and $s,t\in\mathbb{T}$, $s>t$, \[ \varphi_{s}(X)\geq 0 \Rightarrow \varphi_{t}(X)\geq 0. \] \end{enumerate} A similar result holds for weak rejection time consistency. \end{proposition} Property 3) in Proposition~\ref{pr:weak} was also suggested as the notion of (weak) acceptance and (weak) rejection time consistency in the context of scale invariant measures, called acceptability indices (cf.~\cite{BiaginiBion-Nadal2012,BCZ2010}). In many papers studying risk measurement theory (cf. \cite{DetlefsenScandolo2005} and references therein), the weak form of time consistency is defined using the dual approach to the measurement of risk. Rather than directly updating the level of preferences $m$, as in our approach, in the dual approach the level of preference is updated indirectly by manipulating probabilistic scenarios, and the update procedure is explained by means of the so-called {\it pasting property} (see e.g.~\cite[Def. 9]{DetlefsenScandolo2005}). As shown in the next result, our update rule related to the weak form of time consistency admits a dual representation, allowing us to link our definition with the dual approach. \begin{proposition}\label{pr:essinf.cond.aa} For any $m\in\bar{L}^{0}$ and $t\in\mathbb{T}$ we get \begin{equation}\label{eq:essinf.eq.pt} \mu^{\inf}_{t}(m)=\essinf_{Z\in P_{t}}E[Zm|\mathcal{F}_{t}], \end{equation} where $P_{t}:=\{Z\in L^{0} \mid Z\geq 0,\ E[Z|\mathcal{F}_{t}]=1\}$. A similar result holds for $\Esssup_{t}m$. \end{proposition} In \eqref{eq:essinf.eq.pt}, the random variables $Z\in P_{t}$ may be treated as Radon-Nikodym derivatives w.r.t. $P$ of some probability measures $Q$ such that $Q\ll P$ and $Q|_{\mathcal{F}_{t}}=P|_{\mathcal{F}_{t}}$. The family $P_{t}$ may thus be thought of as the family of all possible $\mathcal{F}_{t}$-conditional probabilistic scenarios. Accordingly, $\mu^{\inf}_{t}(m)$ represents the $\mathcal{F}_{t}$-conditional worst-case preference update with respect to all such scenarios. Note that, combining Propositions~\ref{prop:mart.ref}~and~\ref{pr:essinf.cond.aa}, we obtain that weak acceptance time consistency of $\varphi$ is equivalent to the condition \begin{equation}\label{eq:robust.expl} \varphi_{t}(X) \geq \essinf_{Z\in P_{t}}E[Z\varphi_{s}(X)|\mathcal{F}_{t}], \end{equation} which is in fact a starting point for almost all robust definitions of weak time consistency, for $\varphi$'s admitting a dual representation~\cite{DetlefsenScandolo2005}. As the next result shows, the weak time consistency is indeed one of the weakest forms of time consistency, being implied by any other concept of time consistency generated by a projective update rule. \begin{proposition}\label{pr:UDMprop} Let $\varphi$ be a dynamic LM-measure and let $\mu$ be a projective update rule. If $\varphi$ is $\mu$-acceptance (resp. $\mu$-rejection) time consistent, then $\varphi$ is weakly acceptance (resp. weakly rejection) time consistent. \end{proposition} In particular, recall that time consistency is preserved under monotone transformations; see Remark~\ref{rem:preservetc}. Thus, for any strictly monotone function $g:\bar{\mathbb{R}}\to\bar{\mathbb{R}}$, if $\varphi$ is weakly acceptance (resp. weakly rejection) time consistent, then $\{g\circ\varphi_{t}\}_{t\in\mathbb{T}}$ is also weakly acceptance (resp. weakly rejection) time consistent. \subsection{Weak and semi-weak time consistency for stochastic processes}\label{S:semi.weak} In this subsection we introduce and discuss the concept of semi-weak time consistency for stochastic processes.
Thus, we take $\mathcal{X}=\mathbb{V}^{p}$, for a fixed $p\in\{0,1,\infty\}$. As it turns out, in the case of random variables semi-weak time consistency coincides with weak time consistency; that is why we omitted the discussion of semi-weak consistency in the previous section. To provide a better perspective for the concept of semi-weak time consistency, we start with the definition of weak time consistency for stochastic processes, which transfers directly from the case of random variables using \eqref{eq:prTOrv.f}. \begin{definition}\label{type.of.cons.proc.weak} Let $\varphi$ be a dynamic LM-measure. We say that $\varphi$ is {\it weakly acceptance (resp. weakly rejection) time consistent for stochastic processes} if it is one step $\mu$-acceptance (resp. one step $\mu^{*}$-rejection) time consistent, where the update rule is given by $$ \mu_{t,t+1}(m,V)=\mu^{\inf}_{t}(m)+V_{t}\qquad (\textrm{resp. } \mu^{*}_{t,t+1}(m,V)=\mu^{\sup}_{t}(m)+V_{t}). $$ \end{definition} As mentioned earlier, the update rule, and consequently weak time consistency for stochastic processes, also depends on the value of the process (the dividend paid) at time $t$. If tomorrow, at time $t+1$, we accept $X\in\mathcal{X}$ at a level greater than $m_{t+1}\in\mathcal{F}_{t+1}$, then today, at time $t$, we will accept $X$ at least at level $\Essinf_t m_{t+1}$ (i.e. the worst level of $m_{t+1}$ adapted to the information $\mathcal{F}_t$) plus the dividend $V_t$ received today. For counterparts of Propositions~\ref{pr:weak} and~\ref{pr:UDMprop} for the case of stochastic processes, see the survey paper~\cite{BCP2014}. As shown in \cite{BCZ2010}, none of the forms of time consistency existing at that time were suitable for scale-invariant maps, such as acceptability indices. In fact, even the weak acceptance and the weak rejection time consistency for stochastic processes are too strong in the case of acceptability indices.
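On a finite sample space, the conditional essential infimum $\Essinf_t$ appearing in the update rules above reduces to a minimum over each atom of $\mathcal{F}_t$, and condition 2) of Proposition~\ref{pr:weak} becomes a componentwise check. The following is a minimal numerical sketch of this reduction; the sample space, the partition, and all values are hypothetical and chosen only for illustration.

```python
import numpy as np

# Hypothetical finite sample space with 4 equally likely states;
# F_t is generated by the partition {w1, w2} | {w3, w4} (atoms of F_t).
atoms = [np.array([0, 1]), np.array([2, 3])]

def essinf_t(x, atoms):
    """F_t-conditional essential infimum: on each atom of F_t,
    the (essential) infimum of x over that atom."""
    out = np.empty_like(x, dtype=float)
    for A in atoms:
        out[A] = x[A].min()
    return out

phi_s = np.array([1.0, 3.0, -2.0, 5.0])   # phi_s(X), F_s-measurable
phi_t = np.array([2.0, 2.0, -1.0, -1.0])  # phi_t(X), F_t-measurable

print(essinf_t(phi_s, atoms))                         # [ 1.  1. -2. -2.]
# Weak acceptance time consistency, condition 2) of Proposition pr:weak:
print(bool((phi_t >= essinf_t(phi_s, atoms)).all()))  # True
```

Here weak acceptance time consistency holds for the chosen pair $(\varphi_t,\varphi_s)$ precisely because $\varphi_t(X)$ dominates the atomwise minimum of $\varphi_s(X)$.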
Because of that we need an even weaker notion of time consistency, which we will refer to as semi-weak acceptance and semi-weak rejection time consistency. The notion of semi-weak time consistency for stochastic processes, introduced next, is suited precisely for acceptability indices, and we refer the reader to \cite{BCZ2010} for a detailed discussion on time consistency for acceptability indices and their dual representations\footnote{In \cite{BCZ2010} the authors combined both semi-weak acceptance and rejection time consistency into one single definition and called it time consistency.}. \begin{definition}\label{def:semi} Let $\varphi$ be a dynamic LM-measure (for processes). Then $\varphi$ is said to be: \begin{itemize} \item {\it Semi-weakly acceptance time consistent} if it is one step $\mu$-acceptance time consistent, where the update rule is given by $$ \mu_{t,t+1}(m,V) =\1_{\{V_{t}\geq 0\}}\mu^{\inf}_{t}(m)+\1_{\{V_{t}< 0\}}(-\infty). $$ \item {\it Semi-weakly rejection time consistent} if it is one step $\mu'$-rejection time consistent, where the update rule is given by $$ \mu'_{t,t+1}(m,V) =\1_{\{V_{t}\leq 0\}}\mu^{\sup}_{t}(m)+\1_{\{V_{t}> 0\}}(+\infty). $$ \end{itemize} \end{definition} It is straightforward to check that weak acceptance/rejection time consistency for stochastic processes always implies semi-weak acceptance/rejection time consistency. Next, we will show that the definition of semi-weak time consistency is indeed equivalent to the time consistency introduced in \cite{BCZ2010}, which was later studied in \cite{BiaginiBion-Nadal2012,BCC2014}. \begin{proposition}\label{pr:proc.weak.semi} Let $\varphi$ be a dynamic LM-measure on $\mathbb{V}^p$. The following conditions are equivalent: \begin{enumerate}[1)] \item $\varphi$ is semi-weakly acceptance time consistent, i.e.
for all $V\in \mathcal{X}$, $t\in\mathbb{T}$, $t<T$, and $m_{t+1}\in \bar{L}^{0}_{t+1}$, $$ \varphi_{t+1}(V)\geq m_{t+1} \Rightarrow \varphi_{t}(V) \geq \1_{\{V_{t}\geq 0\}}\Essinf_t(m_{t+1})+\1_{\{V_{t}< 0\}}(-\infty). $$ \item For all $V\in \mathcal{X}$ and $t\in\mathbb{T}$, $t<T$, $\varphi_{t}(V) \geq \1_{\{V_{t}\geq 0\}}\Essinf_t(\varphi_{t+1}(V))+\1_{\{V_{t}< 0\}}(-\infty)$. \item For all $V\in \mathcal{X}$, $t\in\mathbb{T}$, $t<T$, and $m_{t}\in \bar{L}^{0}_{t}$, if $V_{t}\geq 0$ and $\varphi_{t+1}(V)\geq m_{t}$, then $\varphi_{t}(V)\geq m_{t}$. \end{enumerate} A similar result holds for semi-weak rejection time consistency. \end{proposition} Property 3) in Proposition~\ref{pr:proc.weak.semi} best illustrates the financial meaning of semi-weak acceptance time consistency: if tomorrow we accept the dividend stream $V\in\mathcal{X}$ at level $m_t$, and if the dividend $V_t$ paid today at time $t$ is nonnegative, then today we accept the cash-flow $V$ at least at level $m_t$ as well. A similar interpretation is valid for semi-weak rejection time consistency. The next two results give an important (dual) connection between cash additive risk measures and acceptability indices. In particular, these results shed light on the relation between the time consistency property of dynamic acceptability indices, represented by the family $\{\alpha_{t}\}_{t\in\mathbb{T}}$ below, and the time consistency of the corresponding family $\{\varphi^{x}\}_{x\in\mathbb{R}_{+}}$, where $\varphi^{x}=\{\varphi^{x}_{t}\}_{t\in\mathbb{T}}$ is a dynamic risk measure (for any $x\in\mathbb{R}_+$). \begin{proposition}\label{prop:DCRMtoDAI} Let $\{\varphi^{x}\}_{x\in\mathbb{R}_{+}}$ be a decreasing family of dynamic LM-measures\footnote{A family, indexed by $x\in\mathbb{R}_{+}$, of maps $\{\varphi_{t}^{x}\}_{t\in\mathbb{T}}$ is called {\it decreasing} if $\varphi_{t}^{x}(X)\leq \varphi_{t}^{y}(X)$ for all $X\in\mathcal{X}$, $t\in\mathbb{T}$ and $x,y\in\mathbb{R}_{+}$ such that $x\geq y$.}.
Assume that for each $x\in\mathbb{R}_{+}$, $\{\varphi^{x}_{t}\}_{t\in\mathbb{T}}$ is weakly acceptance (resp. weakly rejection) time consistent. Then, the family $\{\alpha_{t}\}_{t\in\mathbb{T}}$ of maps $\alpha_{t}:\mathcal{X}\to \bar{L}^{0}_{t}$ defined by\footnote{Note that the map defined in \eqref{eq:DCRMtoDAI} is $\mathcal{F}_{t}$-measurable as the essential supremum over an uncountable family of $\mathcal{F}_{t}$-measurable random variables. See Appendix \ref{A:cond}.} \begin{equation}\label{eq:DCRMtoDAI} \alpha_{t}(V):=\esssup_{x\in \mathbb{R}_{+}}\{x \1_{\{\varphi_{t}^{x}(V)\geq0\}}\}, \end{equation} is a semi-weakly acceptance (resp. semi-weakly rejection) time consistent dynamic LM-measure. \end{proposition} Observe that \begin{equation}\label{eq:DCRMtoDAI2} \alpha_{t}(V)(\omega)=\sup\{x\in\mathbb{R}_{+}: \varphi_{t}^{x}(V)(\omega)\geq0\}. \end{equation} As the representation \eqref{eq:DCRMtoDAI2} is more convenient than \eqref{eq:DCRMtoDAI}, it will be used in the proofs given in the Appendix. \begin{proposition}\label{prop:DAItoDCRM} Let $\{\alpha_{t}\}_{t\in\mathbb{T}}$ be a dynamic LM-measure, which is independent of the past and translation invariant\footnote{We say that $\alpha$ is {\it translation invariant} if $\alpha_{t}(V+m1_{\set{t}})=\alpha_{t}(V+m1_{\set{s}})$, for any $m\in L^{p}_{t}$ and $V\in\mathcal{X}$, where $1_{\set{t}}$ corresponds to the process equal to $1$ at time $t$ and $0$ elsewhere; we say that $\alpha$ is {\it independent of the past} if $\alpha_{t}(V)=\alpha_{t}(V-0\cdot_{t}V)$, for any $V\in\mathcal{X}$.}. Assume that $\{\alpha_{t}\}_{t\in\mathbb{T}}$ is semi-weakly acceptance (resp. semi-weakly rejection) time consistent. Then, for any $x\in\mathbb{R}_{+}$, the family $\{\varphi_{t}^{x}\}_{t\in\mathbb{T}}$ defined by \begin{equation}\label{eq:DAItoDCRM} \varphi^{x}_{t}(V)=\essinf_{c\in\mathbb{R}}\{c\1_{\{\alpha_{t}(V-c1_{\{t\}})\leq x\}}\}, \end{equation} is a weakly acceptance (resp.
weakly rejection) time consistent dynamic LM-measure. \end{proposition} In the proofs given in the Appendix, we will use the representation \begin{equation}\label{eq:DAItoDCRM2} \varphi^{x}_{t}(V)(\omega)=\inf\{c\in\mathbb{R}: \alpha_{t}(V-c1_{\{t\}})(\omega)\leq x\}, \end{equation} rather than \eqref{eq:DAItoDCRM}, as it is more convenient. This type of dual representation, i.e. \eqref{eq:DCRMtoDAI} and \eqref{eq:DAItoDCRM}, first appeared in \cite{ChernyMadan2009}, where the authors studied static (one-period) scale invariant measures. Subsequently, in \cite{BCZ2010}, the authors extended these results to the case of stochastic processes, with special emphasis on the time consistency property. In contrast to \cite{BCZ2010}, we consider an arbitrary probability space, not just a finite one. We conclude this section by presenting two examples that illustrate the concept of semi-weak time consistency and show the connection between the maps introduced in Propositions~\ref{prop:DCRMtoDAI}~and~\ref{prop:DAItoDCRM}. For more examples we refer to the survey paper \cite{BCP2014}. \begin{example}[Dynamic Gain Loss Ratio] \label{ex:4} The Dynamic Gain Loss Ratio (dGLR) is a popular measure of performance, which improves on some drawbacks of the Sharpe Ratio (such as penalizing positive returns); it is equal to the ratio of the expected return to the expected losses. Formally, for $\mathcal{X}=\mathbb{V}^{1}$, dGLR is defined as \begin{equation}\label{e:dGLT} \varphi_{t}(V):=\left\{ \begin{array}{ll} \frac{E[\sum_{i=t}^{T}V_{i}|\mathcal{F}_{t}]}{E[(\sum_{i=t}^{T}V_{i})^{-}|\mathcal{F}_{t}]}, &\quad \textrm{if } E[\sum_{i=t}^{T}V_{i}|\mathcal{F}_{t}]>0,\\ 0, &\quad \textrm{otherwise}. \end{array}\right. \end{equation} For various properties and dual representations of dGLR see for instance \cite{BCZ2010,BCDK2013}.
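On a finite sample space, \eqref{e:dGLT} reduces to ratios of weighted averages over the atoms of $\mathcal{F}_t$. The following is a minimal numerical sketch of such a computation; the sample space, the partition, and the cash-flow data are hypothetical and serve only to illustrate the formula.

```python
import numpy as np

# Hypothetical setup: 4 equally likely states; F_t is generated by the
# partition {w1, w2} | {w3, w4}; D holds sum_{i=t}^T V_i in each state.
P = np.array([0.25, 0.25, 0.25, 0.25])
D = np.array([2.0, -1.0, 3.0, -4.0])
atoms = [np.array([0, 1]), np.array([2, 3])]

def dGLR(D, P, atoms):
    """dGLR at time t: on each atom A of F_t, the ratio
    E[D | A] / E[D^- | A] if E[D | A] > 0, and 0 otherwise."""
    out = np.zeros_like(D)
    for A in atoms:
        pA = P[A] / P[A].sum()                  # conditional probabilities on A
        num = float(pA @ D[A])                  # E[sum_{i=t}^T V_i | A]
        den = float(pA @ np.maximum(-D[A], 0))  # E[(sum_{i=t}^T V_i)^- | A]
        out[A] = num / den if num > 0 else 0.0
    return out

print(dGLR(D, P, atoms))  # [1. 1. 0. 0.]
```

On the first atom the conditional expected return is positive, so dGLR is the ratio of expected gain to expected loss; on the second atom it is nonpositive, so the convention $\varphi_t(V)=0$ applies.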
In \cite{BCZ2010}, the authors showed that dGLR is both semi-weakly acceptance and semi-weakly rejection time consistent, albeit under the assumption that $\Omega$ is finite. For the sake of completeness, we will show here that dGLR is semi-weakly acceptance time consistent; semi-weak rejection time consistency is left to the interested reader as an exercise. Assume that $t\in\mathbb{T}\setminus\{T\}$, and $V\in\mathcal{X}$. In view of Proposition~\ref{prop:mart.ref}, it is enough to show that \begin{equation}\label{eq:ex4.1} \varphi_{t}(V)\geq \1_{\{V_{t}\geq 0\}}\Essinf_{t}(\varphi_{t+1}(V))+\1_{\{V_{t}< 0\}}(-\infty). \end{equation} On the set ${\{V_{t}< 0\}}$ the inequality \eqref{eq:ex4.1} is trivial. Since $\varphi_{t}$ is non-negative and local, without loss of generality, we may assume that $\Essinf_{t}(\varphi_{t+1}(V))>0$. Moreover, $\varphi_{t+1}(V)\geq\Essinf_{t}(\varphi_{t+1}(V))$, which implies \begin{equation}\label{eq:ex4.2} E[\sum_{i=t+1}^{T}V_{i}|\mathcal{F}_{t+1}]\geq \Essinf_{t}(\varphi_{t+1}(V))\cdot E[(\sum_{i=t+1}^{T}V_{i})^{-}|\mathcal{F}_{t+1}]. \end{equation} Using \eqref{eq:ex4.2} we obtain \begin{align} \1_{\{V_{t}\geq 0\}}E[\sum_{i=t}^{T}V_{i}|\mathcal{F}_{t}] & \geq \1_{\{V_{t}\geq 0\}}E[E[\sum_{i=t+1}^{T}V_{i}|\mathcal{F}_{t+1}]|\mathcal{F}_{t}]\nonumber\\ & \geq \1_{\{V_{t}\geq 0\}}\Essinf_{t}(\varphi_{t+1}(V))\cdot E[\1_{\{V_{t}\geq 0\}}E[(\sum_{i=t+1}^{T}V_{i})^{-}|\mathcal{F}_{t+1}]|\mathcal{F}_{t}]\nonumber\\ & \geq \1_{\{V_{t}\geq 0\}}\Essinf_{t}(\varphi_{t+1}(V))\cdot E[(\sum_{i=t}^{T}V_{i})^{-}|\mathcal{F}_{t}].\label{eq:ex4.3} \end{align} Note that $\Essinf_{t}(\varphi_{t+1}(V))>0$ implies that $\varphi_{t+1}(V)>0$, and thus $E[\sum_{i=t+1}^{T}V_{i}|\mathcal{F}_{t+1}]>0$. Hence, on the set $\set{V_t\geq0}$, we have $$ \1_{\set{V_t\geq 0}}E[ \sum_{i=t}^T V_i |\mathcal{F}_t ] \geq \1_{\set{V_t\geq 0}} E[E[\sum_{i=t+1}^{T}V_{i}|\mathcal{F}_{t+1}]|\mathcal{F}_t]>0. $$ Combining this with \eqref{eq:ex4.3}, we conclude the proof. \end{example} \begin{example}[Dynamic RAROC for processes] \label{ex:3} Risk Adjusted Return On Capital (RAROC) is a popular scale invariant measure of performance; we refer the reader to \cite{ChernyMadan2009} for a study of static RAROC, and to \cite{BCZ2010} for its extension to the dynamic setup. We consider the space $\mathcal{X}=\mathbb{V}^{1}$ and we fix $\alpha\in (0,1)$. Dynamic RAROC, at level $\alpha$, is the family $\{\varphi_{t}\}_{t\in\mathbb{T}}$, with $\varphi_{t}$ given by \begin{equation}\label{eq:dRAROC} \varphi_{t}(V):=\left\{ \begin{array}{ll} \frac{E[\sum_{i=t}^{T}V_{i}|\mathcal{F}_{t}]}{-\rho^{\alpha}_{t}(V)}&\quad \textrm{if } E[\sum_{i=t}^{T}V_{i}|\mathcal{F}_{t}]>0,\\ 0 &\quad \textrm{otherwise}, \end{array}\right. \end{equation} where $\rho^{\alpha}_{t}(V)=\essinf\limits_{Z\in \mathcal{D}^{\alpha}_{t}}E[Z\sum_{i=t}^{T}V_{i}|\mathcal{F}_{t}]$, and where the family of sets $\{\mathcal{D}^{\alpha}_{t}\}_{t\in\mathbb{T}}$ is defined by\footnote{The family $\{\mathcal{D}^{\alpha}_{t}\}_{t\in\mathbb{T}}$ represents risk scenarios, which define the dynamic version of the conditional value at risk at level $\alpha$ (cf. \cite{Cherny2010}).} \begin{equation}\label{eq:det.sets.coh} \mathcal{D}^{\alpha}_{t}:=\{Z\in L^{1}: 0\leq Z\leq\alpha^{-1},\ E[Z|\mathcal{F}_{t}]=1\}. \end{equation} We use the convention $\varphi_{t}(V)=+\infty$ if $\rho^{\alpha}_{t}(V)\geq 0$. In \cite{BCZ2010} it was shown that dynamic RAROC is a dynamic acceptability index for processes. Moreover, it admits the following dual representation (cf. \eqref{eq:DCRMtoDAI2}): for any fixed $t\in\mathbb{T}$, $$ \varphi_{t}(V)=\sup\{x\in\mathbb{R}_{+}: \phi^{x}_{t}(V)\geq 0 \}, $$ where $\phi^{x}_{t}(V)=\essinf\limits_{Z\in \mathcal{B}^{x}_{t}}E[Z(\sum_{i=t}^{T}V_{i})|\mathcal{F}_{t}]$ with $$ \mathcal{B}_{t}^{x}=\{Z\in L^{1}: Z=\frac{1}{1+x}+\frac{x}{1+x}Z_{1},\ \textrm{for some } Z_{1}\in \mathcal{D}_{t}^{\alpha}\}.
$$ It is easy to check that the family $\{\phi^{x}_{t}\}_{t\in\mathbb{T}}$ is a dynamic coherent risk measure for processes; see \cite{BCZ2010} for details. Since $1\in \mathcal{D}_{t}^{\alpha}$, we also get that $\{\phi_{t}^{x}\}_{t\in\mathbb{T}}$ is decreasing in $x\in\mathbb{R}_{+}$. Moreover, it is known that $\{\phi_{t}^{x}\}_{t\in\mathbb{T}}$ is weakly acceptance time consistent but not weakly rejection time consistent, for any fixed $x\in\mathbb{R}_{+}$ (see \cite[Example 1]{BCP2014}). Thus, using Propositions~\ref{prop:DCRMtoDAI}~and~\ref{prop:DAItoDCRM}, we immediately conclude that $\{\varphi_{t}\}_{t\in\mathbb{T}}$ is semi-weakly acceptance time consistent and not semi-weakly rejection time consistent. \end{example} \begin{appendix} \section{Appendix} \subsection[Conditional expectation and essential supremum/infimum]{Conditional expectation and essential supremum/infimum on $\bar{L}^{0}$}\label{A:cond} First, we will present some elementary properties of the generalized conditional expectation. \begin{proposition}\label{pr:condexp} For any $X,Y\in\bar{L}^{0}$ and $s,t\in\mathbb{T}$, $s>t$, we have \begin{enumerate}[1)] \item $E[\lambda X|\mathcal{F}_{t}]\leq \lambda E[X|\mathcal{F}_{t}]$ for $\lambda\in L^{0}_{t}$, and $E[\lambda X|\mathcal{F}_{t}]=\lambda E[X|\mathcal{F}_{t}]$ for $\lambda\in L^{0}_{t}$, $\lambda\geq 0$; \item $E[X|\mathcal{F}_{t}]\leq E[E[X|\mathcal{F}_{s}]|\mathcal{F}_{t}]$, and $E[X|\mathcal{F}_{t}]=E[E[X|\mathcal{F}_{s}]|\mathcal{F}_{t}]$ for $X\geq 0$; \item $E[X|\mathcal{F}_{t}]+E[Y|\mathcal{F}_{t}]\leq E[X+Y|\mathcal{F}_{t}]$, and $E[X|\mathcal{F}_{t}]+E[Y|\mathcal{F}_{t}]=E[X+Y|\mathcal{F}_{t}]$ if $X,Y\geq 0$. \end{enumerate} \end{proposition} \begin{remark} All inequalities in Proposition~\ref{pr:condexp} can be strict. Assume that $t=0$ and $k,s\in\mathbb{T}$, $k>s > 0$, and let $\xi \in L^{0}_{k}$ be such that $\xi=\pm1$, $\xi$ is independent of $\mathcal{F}_s$, and $P(\xi=1)=P(\xi=-1)=1/2$.
We consider $Z\in L_{s}^{0}$ such that $Z\geq 0$ and $E[Z]=\infty$. By taking $\lambda=-1$, $X=\xi Z$ and $Y=-X$, we get strict inequalities in 1), 2) and 3). \end{remark} Next, we will discuss some important features of the conditional essential infimum and the conditional essential supremum, in the context of $\bar{L}^{0}$. Before that, we recall the definition of the conditional essential infimum for bounded random variables. For $X\in L^{\infty}$ and $t\in\mathbb{T}$, we denote by $\Essinf_{t}X$ the unique (up to a set of probability zero) $\mathcal{F}_{t}$-measurable random variable, such that for any $A\in\mathcal{F}_{t}$, the following equality holds true \begin{equation}\label{eq:essinf.b} \essinf_{\omega\in A}X=\essinf_{\omega\in A}(\Essinf_{t}X). \end{equation} We call this random variable the \textit{$\mathcal{F}_{t}$-conditional essential infimum of $X$}. We refer the reader to \cite{BarronCardaliaguetJensen2003} for a detailed proof of the existence and uniqueness of the conditional essential infimum. We call $\Esssup_{t}(X):=-\Essinf_{t}(-X)$ the \textit{$\mathcal{F}_{t}$-conditional essential supremum of $X\in L^\infty$}. As stated in the preliminaries, we extend these two notions to the space $\bar{L}^0$. For any $t\in\mathbb{T}$ and $X\in\bar{L}^{0}$, we define the $\mathcal{F}_{t}$-conditional essential infimum by \begin{equation}\label{eq:A22} \Essinf_{t}X:=\lim_{n\to\infty}\Big[\Essinf_{t}(X^{+}\wedge n)\Big]-\lim_{n\to\infty}\Big[\Esssup_{t}(X^{-}\wedge n)\Big], \end{equation} and, respectively, we put $\Esssup_{t}(X):=-\Essinf_{t}(-X)$. \begin{remark} Extending the function $\arctan$ to $[-\infty,\infty]$ by continuity, and observing that $\arctan X\in L^\infty$ for any $X\in \bar{L}^{0}$, one can naturally extend the conditional essential infimum to $\bar{L}^{0}$ by setting \[ \Essinf_{t}X= \arctan^{-1}[\Essinf_{t}(\arctan X)].
\] \end{remark} We proceed with the following result: \begin{proposition}\label{pr:essinf} For any $X,Y\in \bar{L}^{0}$, $s,t \in \mathbb{T}$, $s\geq t$, and $A\in\mathcal{F}_{t}$ we have \begin{enumerate}[1)] \item $\essinf_{\omega\in A}X=\essinf_{\omega\in A}(\Essinf_{t}X)$; \item If $\essinf_{\omega\in A}X=\essinf_{\omega\in A}U$ for some $U\in \bar{L}^{0}_t$, then $U=\Essinf_{t}X$; \item $X\geq \Essinf_{t}X$; \item If $Z\in\bar{L}^{0}_{t}$ is such that $X\geq Z$, then $\Essinf_{t}X\geq Z$; \item If $X\geq Y$, then $\Essinf_{t}X\geq \Essinf_{t}Y$; \item $\1_{A}\Essinf_{t}X=\1_{A}\Essinf_{t}(\1_{A}X)$; \item $\Essinf_{s}X\geq \Essinf_{t}X$. \end{enumerate} The analogous results are true for $\{\Esssup_{t}\}_{t\in\mathbb{T}}$. \end{proposition} The proof for the case $X,Y\in L^{\infty}$ can be found in~\cite{BarronCardaliaguetJensen2003}. Since for any $n\in\mathbb{N}$ and $X,Y\in \bar{L}^{0}$ we get $X^{+}\wedge n\in L^{\infty}$, $X^{-}\wedge n\in L^{\infty}$ and $X^{+}\wedge X^{-}=0$, the extension of the proof to the case $X,Y\in \bar{L}^{0}$ is straightforward, and we omit it here. \begin{remark}\label{rem:essinf.def2} Similarly to~\cite{BarronCardaliaguetJensen2003}, the conditional essential infimum $\Essinf_{t}(X)$ can alternatively be defined as the largest $\mathcal{F}_{t}$-measurable random variable that is smaller than or equal to $X$, i.e. properties 3) and 4) from Proposition~\ref{pr:essinf} are characteristic properties of the conditional essential infimum. \end{remark} Next, we define the generalized versions of $\essinf$ and $\esssup$ of a (possibly uncountable) family of random variables: for $\{X_{i}\}_{i\in I}$, where $X_{i}\in\bar{L}^{0}$, we let \begin{equation}\label{eq:cond222} \essinf_{i\in I}X_{i}:=\lim_{n\to\infty}\Big[\essinf_{i\in I}(X_{i}^{+}\wedge n)\Big]-\lim_{n\to\infty}\Big[\esssup_{i\in I}(X_{i}^{-}\wedge n)\Big].
\end{equation} Note that, in view of \cite[Appendix~A]{KaratzasShreve1998}, $\essinf_{i\in I}(X_{i}^{+}\wedge n)$ and $\esssup_{i\in I}(X_{i}^{-}\wedge n)$ are well defined, so that $\essinf_{i\in I}X_{i}$ is well defined. It needs to be observed that the operations on the right-hand side of \eqref{eq:cond222} preserve measurability. In particular, if $X_{i}\in\mathcal{F}_{t}$ for all $i\in I$, then $\essinf_{i\in I}X_{i}\in\mathcal{F}_{t}$. Furthermore, if for any $i,j\in I$ there exists $k\in I$ such that $X_{k}\leq X_{i}\wedge X_{j}$, then there exists a sequence $i_n\in I$, $n\in\mathbb{N}$, such that $\{X_{i_n}\}_{n\in\mathbb{N}}$ is nonincreasing and $\essinf_{i\in I}X_{i}=\inf_{n\in\mathbb{N}}X_{i_n}=\lim_{n\to\infty}X_{i_n}$. Analogous results hold true for $\esssup_{i\in I}X_{i}$. \subsection{Proofs}\label{A:proofs} \subsubsection*{Proof of Proposition~\ref{prop:mart.ref}.}\label{prop:mart.ref.a} \begin{proof} Let $\mu$ be an update rule. \smallskip \noindent 1) The implication ($\Rightarrow$) follows immediately by taking $m_s=\varphi_s(X)$ in the definition of acceptance time consistency. \smallskip \noindent ($\Leftarrow$) Assume that $\varphi_{t}(X)\geq \mu_{t,s}(\varphi_{s}(X),X)$, for any $s,t\in\mathbb{T}$, $s>t$, and $X\in\mathcal{X}$. Let $m_{s}\in\bar{L}^{0}_{s}$ be such that $\varphi_{s}(X)\geq m_{s}$. Using the monotonicity of $\mu$, we get $\varphi_{t}(X)\geq\mu_{t,s}(\varphi_{s}(X),X)\geq \mu_{t,s}(m_{s},X)$. \smallskip \noindent 2) The proof is similar to 1). \end{proof} \subsubsection*{Proof of Proposition~\ref{prop:benchmark}.}\label{prop:benchmark.a} \begin{proof} We do the proof only for acceptance time consistency. The proof for rejection time consistency is analogous.
\smallskip\noindent \textit{Step 1.} We will show that $\varphi$ is acceptance time consistent with respect to $\mathcal{Y}$ if and only if \begin{equation}\label{eq:bench3} \1_A\varphi_{s}(X)\geq \1_A\varphi_{s}(Y)\quad \Longrightarrow\quad \1_A\varphi_{t}(X)\geq \1_A\varphi_{t}(Y), \end{equation} for all $s\geq t$, $X\in \mathcal{X}$, $Y\in \mathcal{Y}_{s}$ and $A\in\mathcal{F}_{t}$. For sufficiency it is enough to take $A=\Omega$. For necessity let us assume that \begin{equation}\label{eq:bench.pr1} \1_A\varphi_s(X)\geq \1_A\varphi_s(Y). \end{equation} Using the locality of $\varphi$, we get that \eqref{eq:bench.pr1} is equivalent to \[ \1_A\varphi_s(\1_A X+\1_{A^c}Y)+\1_{A^c}\varphi_s(\1_A X+\1_{A^c}Y)\geq \1_A\varphi_s(Y)+\1_{A^c}\varphi_s(Y), \] and consequently to $\varphi_s(\1_A X+\1_{A^c}Y)\geq \varphi_s(Y)$. Thus, using \eqref{eq:benchmark2}, we get \[ \varphi_s(\1_A X+\1_{A^c}Y)\geq \varphi_s(Y) \quad \Longrightarrow\quad \varphi_t(\1_A X+\1_{A^c}Y)\geq \varphi_t(Y). \] By the same arguments we get that $\varphi_t(\1_A X+\1_{A^c}Y)\geq \varphi_t(Y)$ is equivalent to $\1_A\varphi_t(X)\geq \1_A\varphi_t(Y)$, which concludes this part of the proof. \smallskip\noindent {\it Step 2.} Now we demonstrate that $\varphi$ is acceptance time consistent with respect to $\mathcal{Y}$ if and only if $\varphi$ is acceptance time consistent with respect to the family $\widehat{\mathcal{Y}}=\{\widehat{\mathcal{Y}}_t\}_{t\in\mathbb{T}}$ of benchmark sets given by \begin{equation}\label{eq:bench.local} \widehat{\mathcal{Y}}_{t}:=\{\1_{A}Y_1+\1_{A^c}Y_2\,:\, Y_1,Y_2\in\mathcal{Y}_{t},\, A\in\mathcal{F}_t\}. \end{equation} Noting that for any $t\in\mathbb{T}$ we have $\mathcal{Y}_t\subseteq \widehat{\mathcal{Y}}_t$, we get the sufficiency part. For necessity let us assume that \begin{equation}\label{eq:bench.pr2} \varphi_s(X)\geq \varphi_s(Y) \end{equation} for some $Y\in\widehat{\mathcal{Y}}_s$.
From \eqref{eq:bench.local} we know that there exist $A\in\mathcal{F}_t$ and $Y_1,Y_2\in\mathcal{Y}_s$ such that $Y=\1_{A}Y_1+\1_{A^c}Y_2$. Consequently, using the locality of $\varphi$, and the fact that \eqref{eq:bench.pr2} is equivalent to \[ \1_A\varphi_s(X)+\1_{A^c}\varphi_s(X)\geq \1_{A}\varphi_s(\1_{A}Y_1+\1_{A^c}Y_2)+\1_{A^c}\varphi_s(\1_{A}Y_1+\1_{A^c}Y_2), \] we conclude that \eqref{eq:bench.pr2} is equivalent to \[ \1_A\varphi_s(X)+\1_{A^c}\varphi_s(X)\geq \1_{A}\varphi_s(Y_1)+\1_{A^c}\varphi_s(Y_2). \] As the sets $A$ and $A^c$ are disjoint, using \eqref{eq:bench3} twice, we get \[ \1_A\varphi_t(X)+\1_{A^c}\varphi_t(X)\geq \1_{A}\varphi_t(Y_1)+\1_{A^c}\varphi_t(Y_2). \] Using similar arguments as before, we get that the above inequality is equivalent to $\varphi_t(X)\geq \varphi_t(Y)$, which concludes this part of the proof. \smallskip\noindent \textit{Step 3.} For any $m_s\in \bar{L}^0_s$ we set \[ \mu_{t,s}(m_s):=\esssup_{A\in\mathcal{F}_{t}}\Big[\1_{A}\esssup_{Y\in \mathcal{Y}^{-}_{A,s}(m_s)}\varphi_{t}(Y)+\1_{A^{c}}(-\infty)\Big], \] where $\mathcal{Y}^{-}_{A,s}(m_s):=\{Y\in\widehat{\mathcal{Y}}_s: \1_A m_s\geq \1_A\varphi_s(Y)\}$, and show that the corresponding family of maps $\mu$ is a projective update rule. \smallskip \noindent \textit{Adaptiveness.} For any $m_s\in\bar{L}^0_s$, the $\esssup$ of the set of $\mathcal{F}_{t}$-measurable random variables $\{\varphi_{t}(Y)\}_{Y\in \mathcal{Y}^{-}_{A,s}(m_s)}$ is $\mathcal{F}_{t}$-measurable (see~\cite[Appendix~A]{KaratzasShreve1998}), which implies that $\mu_{t,s}(m_s)\in\bar{L}^{0}_{t}$. \smallskip \noindent \textit{Monotonicity.} If $m_s\geq m_s'$, then for any $A\in\mathcal{F}_{t}$ we get $\mathcal{Y}^{-}_{A,s}(m_s)\supseteq \mathcal{Y}^{-}_{A,s}(m_s')$, which implies $\mu_{t,s}(m_s)\geq\mu_{t,s}(m_s')$. \smallskip \noindent \textit{Locality.} Let $B\in\mathcal{F}_{t}$ and $m_s\in\bar{L}^0_s$.
It is enough to consider $A\in\mathcal{F}_{t}$ such that $\mathcal{Y}^{-}_{A,s}(m_s)\neq\emptyset$, as otherwise we get \[ \Big[\1_{A}\esssup_{Y\in \mathcal{Y}^{-}_{A,s}(m_s)}\varphi_{t}(Y)+\1_{A^{c}}(-\infty)\Big]\equiv -\infty. \] For any such $A\in\mathcal{F}_{t}$, we get \begin{equation}\label{eq:ppp1pp1} \1_{A\cap B}\esssup_{Y\in \mathcal{Y}^{-}_{A,s}(m_s)}\varphi_{t}(Y)=\1_{A\cap B}\esssup_{Y\in \mathcal{Y}^{-}_{A\cap B,s}(m_s)}\varphi_{t}(Y). \end{equation} Indeed, since $ \mathcal{Y}^{-}_{A,s}(m_s)\subseteq \mathcal{Y}^{-}_{A\cap B,s}(m_s)$, we have $$ \1_{A\cap B}\esssup_{Y\in \mathcal{Y}^{-}_{A,s}(m_s)}\varphi_{t}(Y)\leq \1_{A\cap B}\esssup_{Y\in \mathcal{Y}^{-}_{A\cap B,s}(m_s)}\varphi_{t}(Y). $$ On the other hand, for any $Y\in \mathcal{Y}^{-}_{A\cap B,s}(m_s)$ and for a fixed $Z\in \mathcal{Y}^{-}_{A,s}(m_s)$ we get, in view of \eqref{eq:bench.local}, that $$ \1_{B}Y+\1_{B^{c}}Z\in \mathcal{Y}^{-}_{A,s}(m_s). $$ Thus, using the locality of $\varphi_{t}$, we deduce $$ \1_{A\cap B}\esssup_{Y\in \mathcal{Y}^{-}_{A\cap B,s}(m_s)}\varphi_{t}(Y)= \1_{A\cap B}\esssup_{Y\in \mathcal{Y}^{-}_{A\cap B,s}(m_s)}\1_{B}\varphi_{t}(\1_{B}Y+\1_{B^{c}}Z)\leq\1_{A\cap B}\esssup_{Y\in \mathcal{Y}^{-}_{A,s}(m_s)}\varphi_{t}(Y), $$ which proves \eqref{eq:ppp1pp1}. Now, note that $ \mathcal{Y}^{-}_{A\cap B,s}(m_s)= \mathcal{Y}^{-}_{A\cap B,s}(\1_B m_s)$, and thus \begin{equation}\label{eq:ppp1pp2} \1_{A}\esssup_{Y\in \mathcal{Y}^{-}_{A\cap B,s}(m_s)}\varphi_{t}(Y)=\1_{A}\esssup_{Y\in \mathcal{Y}^{-}_{A\cap B,s}(\1_B m_s)}\varphi_{t}(Y).
\end{equation} Combining \eqref{eq:ppp1pp1}, \eqref{eq:ppp1pp2}, and the fact that $ \mathcal{Y}^{-}_{A,s}(m_s)\neq\emptyset$ implies $ \mathcal{Y}^{-}_{A,s}(\1_B m_s)\neq\emptyset$, we obtain the following chain of equalities \begin{align*} \1_{B}\mu_{t,s}(m_s) &= \1_{B}\esssup_{A\in\mathcal{F}_{t}}\Big[\1_{A}\esssup_{Y\in \mathcal{Y}^{-}_{A,s}(m_s)}\varphi_{t}(Y)+\1_{A^{c}}(-\infty)\Big]\\ & = \1_{B}\esssup_{A\in\mathcal{F}_{t}}\Big[\1_{A\cap B}\esssup_{Y\in \mathcal{Y}^{-}_{A,s}(m_s)}\varphi_{t}(Y)+\1_{A^{c}\cap B}(-\infty)\Big]\\ & = \1_{B}\esssup_{A\in\mathcal{F}_{t}}\Big[\1_{A\cap B}\esssup_{Y\in \mathcal{Y}^{-}_{A\cap B,s}(m_s)}\varphi_{t}(Y)+\1_{A^{c}\cap B}(-\infty)\Big]\\ & = \1_{B}\esssup_{A\in\mathcal{F}_{t}}\Big[\1_{A\cap B}\esssup_{Y\in \mathcal{Y}^{-}_{A\cap B,s}(\1_B m_s)}\varphi_{t}(Y)+\1_{A^{c}\cap B}(-\infty)\Big]\\ & =\1_{B}\esssup_{A\in\mathcal{F}_{t}}\Big[\1_{A}\esssup_{Y\in \mathcal{Y}^{-}_{A,s}(\1_B m_s)}\varphi_{t}(Y)+ \1_{A^{c}}(-\infty)\Big]\\ & = \1_{B}\mu_{t,s}(\1_B m_s). \end{align*} Thus, $\mu$ is an $X$-invariant update rule. \smallskip\noindent \textit{Step 4.} By the locality of $\varphi$ and \eqref{eq:bench3}, we note that acceptance time consistency with respect to $\mathcal{Y}$ is equivalent to \begin{equation}\label{eq:bench.pr5} \varphi_{t}(X)\geq \esssup_{A\in\mathcal{F}_{t}}\Big[\1_{A}\esssup_{Y\in \mathcal{Y}^{-}_{A,s}(\varphi_s(X))}\varphi_{t}(Y)+\1_{A^{c}}(-\infty)\Big]. \end{equation} Thus, using \eqref{eq:accepTimeConsAlt}, we deduce that $\varphi$ satisfies \eqref{eq:benchmark2} if and only if $\varphi$ is time consistent with respect to the update rule $\mu$. Since \eqref{eq:accepTimeConsAlt} is equivalent to \eqref{eq:bench.pr5}, we conclude the proof. \end{proof} \subsubsection*{Proof of Proposition~\ref{th:essinf}.}\label{th:essinf.a} \begin{proof} The monotonicity and locality of $\mu^{\inf}$ are a straightforward implication of Proposition~\ref{pr:essinf}. Thus, $\mu^{\inf}$ is an $X$-invariant update rule.
Projectivity follows directly from the definition (see Remark~\ref{rem:essinf.def2}). \end{proof} \subsubsection*{Proof of Proposition~\ref{pr:weak}.}\label{pr:weak.a} \begin{proof} We will only show the proof for acceptance consistency. The proof for rejection consistency is similar. Let $\{\varphi_{t}\}_{t\in\mathbb{T}}$ be a dynamic LM-measure. \smallskip \noindent $1)\Leftrightarrow 2)$. This is a direct application of Proposition~\ref{prop:mart.ref}. \smallskip \noindent $1)\Rightarrow 3)$. Assume that $\varphi$ is weakly acceptance consistent, and let $m_{t}\in \bar{L}^{0}_{t}$ be such that $\varphi_{s}(X)\geq m_{t}$. Then, using Proposition~\ref{prop:mart.ref}, we get $\varphi_{t}(X)\geq \Essinf_{t}(\varphi_{s}(X))\geq \Essinf_{t}(m_{t})=m_{t}$, and hence 3) is proved. \smallskip \noindent $3)\Rightarrow 1)$. By the definition of the conditional essential infimum, $\Essinf_{t}(\varphi_{s}(X))\in\bar{L}^{0}_{t}$, for any $X\in\mathcal{X}$ and $t,s\in\mathbb{T}$. Moreover, by Proposition~\ref{pr:essinf}.(3), we have that $\varphi_{s}(X)\geq\Essinf_{t}(\varphi_{s}(X))$. Using assumption 3) with $m_t=\Essinf_{t}(\varphi_{s}(X))$, we immediately obtain $\varphi_{t}(X)\geq\Essinf_{t}(\varphi_{s}(X))$. Due to Proposition~\ref{prop:mart.ref}, this concludes the proof. \smallskip \noindent $3)\Leftrightarrow 4)$. Clearly 3) $\Rightarrow$ 4). Thus, it remains to show the converse implication. Since $\varphi$ is a monetary utility measure, invoking locality of $\varphi$ we conclude that for any $m_{t}\in \bar{L}^{0}_{t}$ such that $\varphi_s(X)\geq m_t$, and for any $n\in\mathbb{N}$, we have \[ \varphi_s(\1_{\{m_{t}\in (-n,n)\}}(X-m_t))\geq 0. \] Now, in view of 4), we get that $\varphi_t(\1_{\{m_{t}\in (-n,n)\}}(X-m_t))\geq0$, and consequently \[ \1_{\{m_{t}\in (-n,n)\}}\varphi_t(X)\geq\1_{\{m_{t}\in (-n,n)\}}m_{t}. \] Thus, 3) is proved on the $\mathcal{F}_{t}$-measurable set $\{m_{t}\in(-\infty,\infty)\}=\bigcup_{n\in\mathbb{N}}\{m_t\in (-n,n)\}$.
On the set $\{m_t=-\infty\}$ the inequality $\varphi_{t}(X)\geq m_{t}$ is trivial. Finally, on the set $\{m_t=\infty\}$, in view of the monotonicity of $\varphi$, we have that $\varphi_{s}(X)=\varphi_{t}(X)=\infty$, which implies 3). This concludes the proof. \end{proof} \subsubsection*{Proof of Proposition~\ref{pr:essinf.cond.aa}.}\label{pr:essinf.cond.aa.a} \begin{proof} Let a family $\mu=\{\mu_{t}\}_{t\in\mathbb{T}}$ of maps $\mu_{t}:\bar{L}^{0}\to \bar{L}^{0}_{t}$ be given by \begin{equation}\label{eq:essinff} \mu_{t}(m)=\essinf_{Z\in P_{t}}E[Zm|\mathcal{F}_{t}]. \end{equation} Before proving \eqref{eq:essinf.eq.pt}, we will need to establish some facts about $\mu$. First, let us show that $\mu$ is local and monotone. Let $t\in\mathbb{T}$. Monotonicity is straightforward. Indeed, let $m,m'\in\bar{L}^{0}$ be such that $m\geq m'$. For any $Z\in P_{t}$, using the fact that $Z\geq 0$, we get $Zm\geq Zm'$. Thus, $E[Zm|\mathcal{F}_{t}]\geq E[Zm'|\mathcal{F}_{t}]$ and consequently $\essinf_{Z\in P_{t}}E[Zm|\mathcal{F}_{t}]\geq \essinf_{Z\in P_{t}}E[Zm'|\mathcal{F}_{t}]$. Locality follows since, for any $A\in\mathcal{F}_{t}$ and $m\in\bar{L}^{0}$, using Proposition~\ref{pr:condexp}, the convention $0\cdot\pm\infty=0$, and the fact that $\1_{A}Z_{1}+\1_{A^{c}}Z_{2}\in P_{t}$ for any $Z_{1},Z_{2}\in P_{t}$, we get \begin{align*} \1_{A}\mu_{t}(m) & = \1_{A}\essinf_{Z\in P_{t}}E[Zm|\mathcal{F}_{t}]\\ & =\1_{A}\essinf_{Z\in P_{t}}(E[(\1_{A}Z)m|\mathcal{F}_{t}]+E[(\1_{A^{c}}Z)m|\mathcal{F}_{t}])\\ &=\1_{A}\essinf_{Z\in P_{t}}E[(\1_{A}Z)m|\mathcal{F}_{t}]+\1_{A}\essinf_{Z\in P_{t}}E[(\1_{A^{c}}Z)m|\mathcal{F}_{t}]\\ &=\1_{A}\essinf_{Z\in P_{t}}E[Z(\1_{A}m)|\mathcal{F}_{t}]+\1_{A}\essinf_{Z\in P_{t}}\1_{A^{c}}E[Zm|\mathcal{F}_{t}]\\ &=\1_{A}\mu_{t}(\1_{A}m). \end{align*} Secondly, let us prove that \begin{equation}\label{eq:A0} m \geq \mu_{t}(m), \end{equation} for any $m\in \bar{L}^{0}$. Let $m\in L^{0}$.
For $\alpha\in (0,1)$, let\footnote{In the risk measure framework, it might be seen as the risk-minimizing scenario for the conditional $CV@R_{\alpha}$.} \begin{equation}\label{eq:Zalpha} Z_{\alpha}:=\1_{\set{m\leq q^{+}_{t}(\alpha)}}E[\1_{\set{m\leq q^{+}_{t}(\alpha)}}|\mathcal{F}_{t}]^{-1}, \end{equation} where $q^{+}_{t}(\alpha)$ is the $\mathcal{F}_{t}$-conditional (upper) $\alpha$-quantile of $m$, defined as $$ q^+_{t}(\alpha):= \esssup\set{ Y\in L_{t}^0 \mid E[\1_{\set{m\leq Y}}|\mathcal{F}_{t}]\leq\alpha }. $$ For $\alpha\in (0,1)$, noticing that $Z_{\alpha}<\infty$, due to the convention $0\cdot\infty=0$ and the fact that $$ \{E[\1_{\set{m\leq q^{+}_{t}(\alpha)}}|\mathcal{F}_{t}]=0\}\subseteq \{\1_{\set{m\leq q^{+}_{t}(\alpha)}}= 0\}\cup B,$$ for some $B$ such that $P[B]=0$, we conclude that $Z_{\alpha}\in P_{t}$. Moreover, by the definition of $q^+_t(\alpha)$, there exists a sequence $Y_n\in L_{t}^0$ such that $Y_n\nearrow q^+_t(\alpha)$ and $$ E[\1_{\set{m<Y_n}} \ |\ \mathcal{F}_t]\leq \alpha. $$ Consequently, by the monotone convergence theorem, we have $$ E[\1_{\set{m<q^+_t(\alpha)}} \ | \ \mathcal{F}_t] \leq \alpha. $$ Hence, we deduce $$P[m<q^{+}_{t}(\alpha)]=E[\1_{\set{m<q^{+}_{t}(\alpha)}}]\leq E[E[\1_{\set{m<q^{+}_{t}(\alpha)}}|\mathcal{F}_{t}]]\leq E[\alpha]=\alpha,$$ which implies that \begin{equation}\label{eq:A6} P[m\geq q^{+}_{t}(\alpha)]\geq (1-\alpha). \end{equation} On the other hand, \begin{align*} \1_{\set{m\geq q^{+}_{t}(\alpha)}}m & \geq \1_{\set{m\geq q^{+}_{t}(\alpha)}} q_{t}^{+}(\alpha)= \1_{\set{m\geq q^{+}_{t}(\alpha)}} q_{t}^{+}(\alpha)E[Z_{\alpha}|\mathcal{F}_{t}]\\ & \geq \1_{\set{m\geq q^{+}_{t}(\alpha)}} E[Z_{\alpha} q_{t}^{+}(\alpha)|\mathcal{F}_{t}]\geq \1_{\set{m\geq q^{+}_{t}(\alpha)}} E[Z_{\alpha}m|\mathcal{F}_{t}], \end{align*} which, combined with \eqref{eq:A6}, implies that \begin{equation}\label{eq:1alpha} P\Big[m\geq E[Z_{\alpha}m|\mathcal{F}_{t}]\Big]\geq 1-\alpha.
\end{equation} Hence, using \eqref{eq:1alpha} and the fact that $$ E[Z_{\alpha}m|\mathcal{F}_{t}]\geq\mu_{t}(m), \quad \alpha\in(0,1), $$ we get that $$ P[m\geq \mu_{t}(m)]\geq 1-\alpha. $$ Letting $\alpha\to 0$, we conclude that \eqref{eq:A0} holds true for $m\in L^{0}$. Now, assume that $m\in\bar{L}^{0}$, and let $A:=\{E[\1_{\{m=-\infty\}}|\mathcal{F}_{t}]=0\}$. By arguments similar to those above, we get $$ \1_{A}m\geq \mu_{t}(\1_{A}m). $$ Since $\mu_{t}(0)=0$, and due to locality of $\mu_{t}$, we deduce \begin{equation}\label{eq:A10} \1_{A}m\geq \mu_{t}(\1_{A}m)=\1_{A}\mu_{t}(\1_{A}m)=\1_{A}\mu_{t}(m). \end{equation} Moreover, taking $Z=1$ in \eqref{eq:essinff}, we get \begin{equation}\label{eq:A11} \1_{A^{c}}m\geq \1_{A^{c}}(-\infty)= \1_{A^{c}}E[m|\mathcal{F}_{t}]\geq \1_{A^{c}}\mu_{t}(m). \end{equation} Combining \eqref{eq:A10} and \eqref{eq:A11}, we conclude the proof of \eqref{eq:A0} for all $m\in\bar{L}^{0}$. Finally, we will show that $\mu$ defined as in \eqref{eq:essinff} satisfies property 1) from Proposition~\ref{pr:essinf}, which will consequently imply equality \eqref{eq:essinf.eq.pt}. Let $m\in\bar{L}^{0}$ and $A\in\mathcal{F}_{t}$. From the fact that $m\geq \mu_{t}(m)$ we get $$\essinf_{\omega\in A}m\geq\essinf_{\omega\in A}\mu_{t}(m).$$ On the other hand, we know that $\1_{A}\essinf_{\omega\in A}m\leq \1_{A}m$ and $\1_{A}\essinf_{\omega\in A}m\in\bar{L}^{0}_{t}$, so \begin{align*} \essinf_{\omega\in A}m & =\essinf_{\omega\in A}(\1_{A}\essinf_{\omega\in A}m)=\essinf_{\omega\in A}(\1_{A}\mu_{t}(\1_{A}\essinf_{\omega\in A}m))\leq\\ & \leq \essinf_{\omega\in A}(\1_{A}\mu_{t}(\1_{A}m))=\essinf_{\omega\in A}(\1_{A}\mu_{t}(m))=\essinf_{\omega\in A}\mu_{t}(m), \end{align*} which proves the equality. The proof for $\Esssup_{t}$ is similar and we omit it here. This concludes the proof.
\end{proof} \subsubsection*{Proof of Proposition~\ref{pr:UDMprop}.}\label{pr:UDMprop.a} \begin{proof} Using Proposition~\ref{pr:essinf}, for any $t,s\in\mathbb{T}$, $s>t$, and any $X\in\mathcal{X}$, we get $$ \varphi_{t}(X)\geq \mu_{t}(\varphi_{s}(X))\geq \mu_{t}(\Essinf_{s}(\varphi_{s}(X)))\geq\mu_{t}(\Essinf_{t}(\varphi_{s}(X)))=\Essinf_{t}(\varphi_{s}(X)). $$ The proof for rejection time consistency is similar. \end{proof} \subsubsection*{Proof of Proposition~\ref{pr:proc.weak.semi}.}\label{pr:proc.weak.semi.a} \begin{proof} We will only show the proof for acceptance consistency. The proof for rejection consistency is similar. Let $\varphi$ be a dynamic LM-measure. \smallskip \noindent $1)\Leftrightarrow 2)$. This is a direct implication of Proposition~\ref{prop:mart.ref}. \smallskip \noindent $2)\Rightarrow 3)$. Assume that $\varphi$ is semi-weakly acceptance consistent. Let $V\in\mathcal{X}$ and $m_{t}\in \bar{L}^{0}_{t}$ be such that $\varphi_{t+1}(V)\geq m_{t}$ and $V_{t}\geq 0$. Then, by monotonicity of $\mu^{\inf}_{t}$, we have $$ \varphi_{t}(V)\geq \1_{\set{V_{t}\geq 0}}\mu^{\inf}_{t}(\varphi_{t+1}(V))\geq \mu^{\inf}_{t}(m_{t})= \Essinf_{t}(m_{t})=m_{t}, $$ and hence 3) is proved. \smallskip \noindent $3)\Rightarrow 2)$. Let $V\in\mathcal{X}$. We need to show that \begin{equation}\label{eq:semi.A} \varphi_{t}(V)\geq 1_{\{V_{t}\geq 0\}}\mu^{\inf}_{t}(\varphi_{t+1}(V))+1_{\{V_{t}< 0\}}(-\infty). \end{equation} On the set $\{V_{t}<0\}$ the inequality \eqref{eq:semi.A} is trivial. We know that $$ (\1_{\{V_{t}\geq 0\}}\cdot_{t}V)_{t}\geq 0\quad \textrm{and}\quad \varphi_{t+1}(\1_{\{V_{t}\geq 0\}}\cdot_{t}V)\geq \Essinf_{t}\varphi_{t+1}(\1_{\{V_{t}\geq 0\}}\cdot_{t}V).
$$ Thus, for $m_{t}=\Essinf_{t}\varphi_{t+1}(\1_{\{V_{t}\geq 0\}}\cdot_{t}V)$, using locality of $\varphi$ and $\mu^{\inf}$ as well as 3), we get $$ \1_{\{V_{t}\geq 0\}}\varphi_{t}(V)=\1_{\{V_{t}\geq 0\}}\varphi_{t}(\1_{\{V_{t}\geq 0\}}\cdot_{t}V)\geq \1_{\{V_{t}\geq 0\}} m_{t}=\1_{\{V_{t}\geq 0\}}\mu_{t}^{\inf}(\varphi_{t+1}(V)), $$ and hence \eqref{eq:semi.A} is proved on the set $\{V_{t}\geq 0\}$. This concludes the proof of 2). \end{proof} \subsubsection*{Proof of Proposition~\ref{prop:DCRMtoDAI}} \begin{proof} The proof of locality and monotonicity of \eqref{eq:DCRMtoDAI} is straightforward (see \cite{BCZ2010} for details). Let us assume that $\{\varphi^{x}_{t}\}_{t\in\mathbb{T}}$ is weakly acceptance time consistent. Using the counterpart of Proposition~\ref{pr:weak} for stochastic processes (see~\cite{BCP2014}), we get \begin{align*} 1_{\{V_{t}\geq 0\}}\alpha_{t}(V) & = 1_{\{V_{t}\geq 0\}}\Big(\sup\{x\in\mathbb{R}_{+}: 1_{\{V_{t}\geq 0\}}\varphi_{t}^{x}(V)\geq0\}\Big)\\ & \geq 1_{\{V_{t}\geq 0\}}\Big(\sup\{x\in\mathbb{R}_{+}: 1_{\{V_{t}\geq 0\}}[\Essinf_{t}\varphi_{t+1}^{x}(V)+V_{t}]\geq0\}\Big)\\ & \geq 1_{\{V_{t}\geq 0\}}\Big(\sup\{x\in\mathbb{R}_{+}: 1_{\{V_{t}\geq 0\}}\Essinf_{t}\varphi_{t+1}^{x}(V)\geq0\}\Big)\\ & = 1_{\{V_{t}\geq 0\}}\Essinf_{t}\Big(\sup\{x\in\mathbb{R}_{+}: 1_{\{V_{t}\geq 0\}}\varphi_{t+1}^{x}(V)\geq0\}\Big)\\ & = 1_{\{V_{t}\geq 0\}}\Essinf_{t}\alpha_{t+1}(V). \end{align*} This leads to the inequality $$\alpha_{t}(V)\geq 1_{\{V_{t}\geq 0\}}\Essinf_{t}\alpha_{t+1}(V) +1_{\{V_{t}< 0\}}(-\infty),$$ which, by Proposition~\ref{pr:proc.weak.semi}, is equivalent to semi-weak acceptance time consistency. The proof of semi-weak rejection time consistency is similar. \end{proof} \subsubsection*{Proof of Proposition~\ref{prop:DAItoDCRM}} \begin{proof} The proof of locality and monotonicity of \eqref{eq:DAItoDCRM} is straightforward (see \cite{BCZ2010} for details). Let us prove weak acceptance time consistency.
Let us assume that $\{\alpha_{t}\}_{t\in\mathbb{T}}$ is semi-weakly acceptance time consistent. Using Proposition~\ref{prop:mart.ref}, we get \begin{align*} \varphi^{x}_{t}(V) & = \inf\{c\in\mathbb{R}: \alpha_{t}(V-c1_{\{t\}})\leq x\}\\ & = \inf\{c\in\mathbb{R}: \alpha_{t}(V-c1_{\{t+1\}})\leq x\}\\ & = \inf\{c\in\mathbb{R}: \alpha_{t}(V-c1_{\{t+1\}}-V_{t}1_{\{t\}})\leq x\}+V_{t}\\ & \geq \inf\{c\in\mathbb{R}: 1_{\{0\geq 0\}}\Essinf_{t}\alpha_{t+1}(V-c1_{\{t+1\}}-V_{t}1_{\{t\}}) +1_{\{0< 0\}}(-\infty)\leq x\}+V_{t}\\ & = \inf\{c\in\mathbb{R}: \Essinf_{t}\alpha_{t+1}(V-c1_{\{t+1\}})\leq x\}+V_{t}\\ & = \Essinf_{t}\big(\inf\{c\in\mathbb{R}:\alpha_{t+1}(V-c1_{\{t+1\}})\leq x\}\big)+V_{t}\\ & = \Essinf_{t}\varphi_{t+1}^{x}(V)+V_{t}, \end{align*} which is equivalent to weak acceptance time consistency of $\varphi$. The proof of weak rejection time consistency is similar. \end{proof} \subsubsection*{Proof of Proposition~\ref{pr:condexp}.}\label{pr:condexp.a} \begin{proof} First note that for any $X,Y\in \bar{L}^{0}$, $\lambda\in L^{0}_{t}$ such that $X,Y, \lambda \geq 0$, and for any $s,t\in\mathbb{T}$, $s>t$, by the monotone convergence theorem and the convention $0\cdot\pm\infty=0$, we get \begin{eqnarray} E[\lambda X|\mathcal{F}_{t}]& = &\lambda E[X|\mathcal{F}_{t}];\label{eq:1.1}\\ E[X|\mathcal{F}_{t}]& = &E[E[X|\mathcal{F}_{s}]|\mathcal{F}_{t}];\label{eq:1.2}\\ E[X|\mathcal{F}_{t}]+E[Y|\mathcal{F}_{t}]&=&E[X+Y|\mathcal{F}_{t}].\label{eq:1.3} \end{eqnarray} Moreover, for $X\in \bar{L}^{0}$, we also have \begin{equation}\label{eq:1.4} E[-X|\mathcal{F}_{t}]\leq -E[X|\mathcal{F}_{t}]. \end{equation} For the last inequality we used the convention $\infty-\infty=-\infty$. Next, using \eqref{eq:1.1}-\eqref{eq:1.4}, we will prove the announced results. Assume that $X,Y\in\bar{L}^{0}$.
\noindent 1) If $\lambda\in L^{0}_{t}$ and $\lambda\geq 0$, then, by~(\ref{eq:1.1}), we get \begin{align*} E[\lambda X|\mathcal{F}_{t}] & = E[(\lambda X)^{+}|\mathcal{F}_{t}]-E[(\lambda X)^{-}|\mathcal{F}_{t}] =E[\lambda X^{+}|\mathcal{F}_{t}]-E[\lambda X^{-}|\mathcal{F}_{t}]=\\ & =\lambda E[X^{+}|\mathcal{F}_{t}]-\lambda E[X^{-}|\mathcal{F}_{t}]= \lambda E[X|\mathcal{F}_{t}]. \end{align*} From here, using~(\ref{eq:1.4}), for a general $\lambda\in L^{0}_{t}$ we deduce \begin{align*} E[\lambda X|\mathcal{F}_{t}] & = E[1_{\{\lambda\geq 0\}}\lambda X+1_{\{\lambda< 0\}}\lambda X|\mathcal{F}_{t}]= 1_{\{\lambda\geq 0\}}\lambda E[ X|\mathcal{F}_{t}] +1_{\{\lambda< 0\}}(-\lambda) E[-X|\mathcal{F}_{t}]\leq\\ & \leq 1_{\{\lambda\geq 0\}}\lambda E[ X|\mathcal{F}_{t}] +1_{\{\lambda< 0\}}\lambda E[X|\mathcal{F}_{t}]=\lambda E[X|\mathcal{F}_{t}]. \end{align*} \smallskip \noindent 2) This follows from \eqref{eq:1.2} and \eqref{eq:1.4}; for $X\in L^{0}$ see also the proof in~\cite[Lemma 3.4]{Cherny2010}. \smallskip \noindent 3) On the set $\{E[X|\mathcal{F}_{t}]=-\infty\} \cup \{E[Y|\mathcal{F}_{t}]=-\infty\}$ the inequality is trivial due to the convention $\infty-\infty=-\infty$. On the other hand, the set $\{E[X|\mathcal{F}_{t}]>-\infty\} \cap \{E[Y|\mathcal{F}_{t}]>-\infty\}$ can be represented as the union of the sets $\{E[X|\mathcal{F}_{t}]>n\} \cap \{E[Y|\mathcal{F}_{t}]>n\}$ for $n\in\mathbb{Z}$, on which the inequality becomes an equality due to \eqref{eq:1.3}. \end{proof} \end{appendix} \section*{Acknowledgments} Tomasz R. Bielecki and Igor Cialenco acknowledge support from the NSF grant DMS-1211256. Marcin Pitera acknowledges the support by Project operated within the Foundation for Polish Science IPP Programme ``Geometry and Topology in Physical Model'' co-financed by the EU European Regional Development Fund, Operational Program Innovative Economy 2007-2013. {\small
\section{Introduction} The broad range of applications of Green Fluorescent Proteins (GFPs)\cite{ grynkiewicz_new_1985,shaner_improved_2004-1,shimomura_extraction_1962} has kept them in the spotlight of researchers for many years. However, it seems that once nearly perfect fluorescent proteins have been achieved\cite{okabe_green_1997,lelimousin_intrinsic_2009}, new applications arise, opening completely new fields of fluorescent protein research\cite{fernandez-suarez_fluorescent_2008, dedecker_fluorescent_2013,subach_chromophore_2012,kent_deconstructing_2008}. One such breakthrough was the discovery of the Reversibly-Switchable Fluorescent Proteins (RSFPs)\cite{habuchi_photo-induced_2006,habuchi_cover:_2005,hofmann_breaking_2005}. They are an indispensable tool in super-resolution microscopy\cite{dedecker_subdiffraction_2007, vandenberg_diffraction-unlimited_2015}, and improvements to their photophysical properties are still being pursued\cite{bourgeois_reversible_2012,acharya_photoinduced_2016}. Theoretical studies have contributed significantly to the understanding of RSFP fluorescence quantum yields\cite{lelimousin_intrinsic_2009}, modulated isomerization quantum yields\cite{groenhof_photoactivation_2004} and color tuning\cite{shcherbakova_red_2012}. In spite of the large amount of research in this field, we still do not have a complete answer to the question of why some proteins are photo-switchable while others are just fluorescent. How can only a few mutations in the protein $\beta$-barrel shift the absorption maxima by 40~nm\cite{andresen_photoswitchable_2008, faraji_nature_2015}? What is the role of the protein environment in this process and, most importantly, how does each amino acid contribute to the photophysical properties of a given protein?
A number of studies on various fluorescent proteins have identified the role of the protein environment in absorption maxima\cite{amat_spectral_2013}, as well as in emission and various electrostatic effects\cite{hasegawa_excited_2007, park_emission_2016}. A study on a set of green, orange and red FPs by Hasegawa et al.\cite{hasegawa_excited_2007} identified the blue-shifting effect of the protein electrostatic potential. Another detailed study, on DsRed by List et al.\cite{list_molecular-level_2012}, identifies the role of the protein environment in one- and two-photon absorption. Theoretical models have also been built to explain pressure and temperature effects\cite{jacchetti_temperature_2016} and electrostatic-field spectral tuning in GFPs\cite{drobizhev_long-_2015}. Studies on the fluorescence and potential energy landscape of the anionic GFP chromophore by Martin et al.\cite{martin_origin_2004} and Polyakov et al.\cite{polyakov_potential_2010} provide in-depth mechanistic details for possible chromophore isomerization pathways. However, in order to obtain an outlook that would allow for rational protein design, a study on a consistent set of proteins is indispensable. That is why we have chosen a set of proteins differing by a minimal number of mutations while conserving the chromophore structure. Step-by-step exploration of the excited-state surface allowed us to connect absorption/emission, photo-switching speed and conical intersections with protein environment effects. The well-known RSFP Dronpa\cite{habuchi_photo-induced_2006,moeyaert_green--red_2014}, which pioneered the RSFP field and is still widely used for various applications\cite{dedecker_fluorescent_2013,zhou_photoswitchable_2013}, forms the core of our model protein set. Moreover, owing to its pioneering role, many Dronpa-like mutants have been developed and well characterized\cite{andresen_photoswitchable_2008}.
For our study we have selected Dronpa mutants that do not alter the chromophore structure, so all of the mutations occur in the $\beta$-sheets and/or the $\alpha$-helix. The five proteins used in our study are listed in Table~\ref{tab:mutations}. \begin{minipage}{\linewidth} \centering \begin{tabular}{llll} \hline Protein&Mutation&Absorption&Emission\\ &&nm/kcal$\cdot$mol\textsuperscript{-1}&nm/kcal$\cdot$mol\textsuperscript{-1}\\ \hline Dronpa\cite{moeyaert_green--red_2014}&&503/56.7&522/54.7\\ rsFastLime\cite{stiel_1.8_2007}&V157G&496/57.5&518/55.1\\ rsKame\cite{rosenbloom_optimized_2014}&V157L&503/56.7&515/55.4\\ Padron0.9\cite{andresen_structural_2007, brakemann_molecular_2010}&T59M+V60A+N94I+P141L&504/56.6&510/56.0\\ &G155S+V157G+M159Y+F190S&&\\ bsDronpa\cite{andresen_photoswitchable_2008}&A69T+D112N+G155S+V157G&460/62.0&504/56.6\\ &M159C+F173C&&\\ \hline \end{tabular}\par \captionof{table}{Effect of mutations on the absorption and emission wavelengths in Dronpa-like mutants. Only the most essential mutations are indicated for Padron0.9. For bsDronpa, the six essential mutations are shown, which were also used to model it on the basis of the Dronpa crystal structure.} \label{tab:mutations} \bigskip \end{minipage} Two single-point mutants, rsKame\cite{rosenbloom_optimized_2014} and rsFastLime\cite{stiel_1.8_2007, andresen_photoswitchable_2008}, were chosen due to their minimal difference from Dronpa. They differ considerably in photo-switching speed (see Table S1), but there is barely any difference in their absorption and emission maxima. For a broader spectral variation we have chosen Padron0.9\cite{brakemann_molecular_2010} and bsDronpa\cite{andresen_photoswitchable_2008}. Both proteins conserve the chromophore structure, with only a few mutations in the protein matrix. Padron0.9 is a negatively switchable counterpart of Dronpa. More importantly for us, its absorption and emission maxima are slightly red-shifted in comparison to Dronpa.
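The paired wavelength/energy values quoted throughout (e.g., 503~nm/56.7~kcal$\cdot$mol\textsuperscript{-1} for Dronpa) follow from the standard photon-energy conversion $E = hcN_A/\lambda \approx 28591/\lambda$ kcal/mol with $\lambda$ in nm; a minimal sketch of this bookkeeping (the helper name is ours, for illustration only):

```python
# Photon energy in kcal/mol from a wavelength in nm:
# E = h*c*N_A / lambda, with h*c*N_A ≈ 28591 nm*kcal/mol
# (1239.84 eV*nm times 23.0605 kcal/mol per eV).
H_C_NA = 28591.0  # nm * kcal/mol

def nm_to_kcal_per_mol(wavelength_nm: float) -> float:
    """Convert an absorption/emission wavelength to photon energy."""
    return H_C_NA / wavelength_nm

# Entries from Table 1: Dronpa absorbs at 503 nm, bsDronpa at 460 nm.
print(round(nm_to_kcal_per_mol(503), 1))  # 56.8 (Table 1 lists 56.7)
print(round(nm_to_kcal_per_mol(460), 1))  # 62.2 (Table 1 lists 62.0)
```

The small residual differences from the tabulated values reflect rounding of the reported wavelengths and energies.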
bsDronpa is the most blue-shifted close mutant of Dronpa, with a large Stokes shift (43~nm). Previous studies on FPs with large Stokes shifts\cite{faraji_nature_2015, piatkevich_extended_2013, yoon_far-red_2016} have shown that an extended Stokes shift results from a more flexible hydrogen-bond (H-bond) interplay, in which more states can be accessed on the ground- and/or excited-state surface. However, here our focus is on obtaining one bsDronpa population capable of reproducing the spectral trend of our model set; hence we do not undertake an in-depth conformational analysis of bsDronpa H-bond fluctuations. We would like to emphasize that the proteins used in our calculations differ by a maximum of 8 mutations and stem from the same protein, thus minimizing the differences in the environment. They also contain exactly the same chromophore, including its sidechain. Although the chromophore's $\pi$-system includes only the phenol and imidazoline rings, the chromophore sidechain also plays an important role, influencing the $\alpha$-helix flexibility and the adaptability of the chromophore to the protein environment\cite{arpino_crystal_2012, moeyaert_green--red_2014, smyrnova_thermal_2016}. Since we are interested in extrapolating our results to functional mutations, conserving the protein structure is of extreme importance: the more mutations are included in the protein, the less predictable the results become, with an increasing possibility of a failed experiment due to factors that are out of reach for computational studies. Previous studies have shown how the choice of the initial setup of the system can influence the calculated excitation energies\cite{filippi_bathochromic_2012,melaccio_towards_2016,amat_spectral_2013}. Here we have used exactly the same protocol for all five proteins, which can be briefly described as a modified Automatic Rhodopsin Modelling (ARM) protocol developed by Melaccio et al.\cite{melaccio_towards_2016}.
Starting QM/MM structures were taken from extensive MD simulations. After CASSCF QM/MM optimization, the CASPT2 method was used to calculate excitation and emission energies for a set of ten structures. Representative structures for each protein were chosen for further reaction-pathway and charge-transfer analysis. Charge transfer in the chromophore during excitation and emission is analyzed in correlation with the conformational changes in the protein environment. Also, through calculation of individual amino acid contributions to absorption/emission, we provide an explanation for the remarkable blue shift in the bsDronpa absorption spectrum. Finally, a conical intersection and the closest-lying minimum were identified for each of the proteins. As a result, the excited-state reaction pathway is assessed in a rational and unifying manner. \newpage \section{Results} \subsection{Reproducing the experimental absorption and emission values} \begin{figure}[H] \includegraphics[width=13.5cm]{figures/Absorption_emissions_absorption.png} \caption{Comparison of the calculated (violet) and experimental (cyan) absorption and emission values for the model set of five proteins. Results for the respective chromophores in vacuum are shown in orange. Error bars are shown for the calculated values.} \label{fgr:trend} \end{figure} Results for the average absorption and emission values obtained in our study are shown in Figure~\ref{fgr:trend}. First of all, we note that the experimental absorption and emission trend is fully reproduced in our results: not only is the shift in the absorption/emission values rather small, but the order of the proteins is also conserved. The standard deviation of the average emission values with respect to the experimental emission values is around 1.5 kcal/mol for all proteins. As can be seen from Figure~\ref{fgr:trend}, the calculated absorption values are more blue-shifted than the emission values (the origins of this phenomenon are discussed further below).
This is reflected in the standard deviation of the average absorption values from the experimental ones -- from 2.66 kcal/mol (Padron0.9) to 4.63 kcal/mol (bsDronpa). Note that the mutations introduced in Dronpa (see Table \ref{tab:mutations}) can significantly affect the absorption maxima, while the emission maxima are much less spread. There are two main sources affecting the absorption and emission maxima: the chromophore conformation and the protein environment. Since the chromophore structure is conserved in all five proteins, we see the major source of the absorption spread in the way the protein environment interacts with the chromophore in the ground and excited states. To discard the influence of the chromophore conformation, we have calculated the absorption and emission values for the chromophore in vacuum (orange points in Figure \ref{fgr:trend}). Since the trend is drawn from the average value over ten structures for each protein, we have used the most representative (closest to the mean values) conformation. As expected, neither a trend nor a significant spread in the values can be found in these results. Hence it can be concluded that we should focus on the protein environment effect on the absorption and emission. \subsection{Charge redistribution in the chromophore. Origins of the higher absorption sensitivity to mutations} One of the most intriguing features of our model set is the 6 kcal/mol difference in absorption maxima between Padron0.9 and bsDronpa, against only a 2 kcal/mol difference in emission between Dronpa and bsDronpa. Not only is absorption much more sensitive to mutations, but there is also no correlation with emission; in other words, a higher absorption wavelength does not imply a higher emission maximum. We have hypothesized that electrostatic effects may play a pivotal role in this phenomenon. Hence we have looked into the charge redistribution during excitation and emission.
The same calculation was then repeated for the chromophore $\pi$-system in vacuum. The values presented in Figure \ref{fgr:charge_redistribution} correspond to the charge redistribution in the protein environment during absorption and emission; the charge redistribution is essentially the atomic charge difference between the first excited state and the ground state. \begin{figure}[H] \includegraphics[width=15.5cm]{figures/charge_redistribution.png} \caption{Comparison of the charge redistribution in the chromophore during absorption and emission. The images are color-coded with the color bar shown on the right. Values are indicated next to the atoms for which the charge difference exceeds 0.04. Hydrogen charges are summed into the heavy atoms.} \label{fgr:charge_redistribution} \end{figure} Blue and red colors in Figure \ref{fgr:charge_redistribution} correspond to positive and negative charge differences, respectively. It can be noticed that negative charge is transferred from the aromatic phenol ring to the imidazoline ring and the bridging atoms of the single--double bond bridge. This observation is in line with the results of Filippi et al.\cite{filippi_bathochromic_2012} on anionic GFP. In the ground state the $\tau$-bond has a prevalently double character while $\phi$ is a single bond (benzenoid resonance structure). The observed charge redistribution corresponds to a double-bond delocalization over $\tau$ and $\phi$, thus facilitating the isomerization (or a shift to the quinoid resonance structure). Although the charge redistribution pattern is similar during absorption and emission, the net charge transfer from phenol to imidazoline is more pronounced during absorption (the phenol ring is more blue and the imidazoline ring more red). For the charge redistribution in the chromophore in vacuum, no such difference was observed (see Figure~S1). Hence the larger spread in the absorption values, as compared to the emission values, comes purely from the protein environment.
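The per-atom bookkeeping behind Figure~\ref{fgr:charge_redistribution} -- excited-state minus ground-state charges, with hydrogen charges folded into their heavy atoms -- can be sketched as follows; the atom labels and charge values here are invented for illustration and are not the charges computed in this work:

```python
# Sketch of the charge-redistribution analysis: per-atom S1 - S0 charge
# difference, with hydrogen charges summed into their parent heavy atoms.
# All labels/values below are made up for illustration only.

def charge_redistribution(q_s1, q_s0, h_parent=None):
    """Return the S1 - S0 charge per atom, folding hydrogen charges
    into their heavy atoms via the h_parent mapping, e.g. {"H1": "C1"}."""
    diff = {atom: q_s1[atom] - q_s0[atom] for atom in q_s0}
    for h, heavy in (h_parent or {}).items():
        diff[heavy] += diff.pop(h)
    return diff

# Invented example charges (atomic units):
q_s0 = {"C_phenol": -0.30, "C_bridge": 0.05, "N_imid": -0.20, "H1": 0.10}
q_s1 = {"C_phenol": -0.20, "C_bridge": -0.02, "N_imid": -0.28, "H1": 0.12}
dq = charge_redistribution(q_s1, q_s0, h_parent={"H1": "C_phenol"})
# Positive dq: density lost on excitation (phenol side, "blue" in Fig. 2);
# negative dq: density gained (imidazoline side, "red").
```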
First of all, mutations in the protein have a direct effect on the electrostatic potential around the chromophore. In addition, the purely spatial arrangement of the amino acids around the chromophore can have important effects. When one aliphatic amino acid is mutated into another aliphatic one (Val~157 $\rightarrow$ Gly~157), minimal perturbation of the electrostatic field around the chromophore is expected. However, a slight rearrangement of the amino acids around the chromophore can bring polar amino acids closer to the chromophore and either red- or blue-shift its absorption (the individual influence of the amino acids on the spectral maxima is discussed below). The charge transfer also results in a slight conformational change. The geometrical parameters that change the most in the proteins are depicted in Figure~S2. The changes in all of these parameters indicate that the chromophore is preparing for isomerization. The negative charge is transferred from the phenol ring into the imidazoline ring and, in particular, the bridging carbon atom. This leads to a weakening of the H-bonds to the phenol ring, while the H-bonds between Arg~89, Arg~64 and the imidazoline ring get stronger, leading in turn to a slight elongation of the Glu~144 -- His~193 H-bond and a consequent shortening of the Glu~144 -- His~193 bond. Finally, the double-bond bridge is elongated, indicating a shift toward single-bond character; as a consequence there is a noticeable out-of-plane twisting around the $\tau$ dihedral angle. It has been shown earlier by Morozov et al.\cite{morozov_hydrogen_2016} that a lower number of H-bonds with the chromophore induces the isomerization. A link between this observation and a cascade of hydrogen-bond-network (HBN) transformations triggered after the excitation is shown in our study.
\subsection{Protein environment influence on the absorption and emission maxima} In order to assess the influence of individual amino acids on the absorption and emission maxima, we have performed calculations in which their charges were consecutively turned off. The results of these calculations can be seen in Figure S3. The importance of His~193 stems from its $\pi$-stacking interaction with the phenol ring of the chromophore; the H193T mutation reported by Li et al.\cite{li_primary_2010} shows how the absence of this interaction induces internal conversion and leads to fluorescence loss. The interaction of the chromophore with Tyr~116 is often overlooked, and our calculations clearly demonstrate its importance. To our knowledge no experimental data have been published for Dronpa Y116 mutants; however, the Y116Q and Y116N mutations in pcDronpa (a green-to-red photoconvertible mutant of Dronpa)\cite{moeyaert_green--red_2014} lead to an increased photo-conversion quantum yield and a higher pKa. The blue-shifting effect of Arg~89 clearly manifests itself in bsDronpa, where it loses interaction with the chromophore. Although a definitive answer must await an X-ray structure of bsDronpa, we anticipate that the large Stokes shift occurs due to Arg~89 flexibility: when its interaction with the chromophore is switched off, both in theory (for Dronpa -- rsKame -- rsFastLime -- Padron) and in practice (bsDronpa), a blue shift in absorption is observed. The importance of Arg~64 and Arg~89 can be attributed to their strong interaction with the chromophore through the HBN. Both Arg~64 and Arg~89 form H-bonds with the imidazoline ring of the chromophore. When they are absent, the negative charge transfer is impeded, thus increasing the energy of the excited state; consequently, a blue shift occurs.
\subsection{Conical intersections on the isomerization path} Once we had a working protocol to study the photophysical properties of the fluorescent proteins, we went further and connected their photochemical reactivity with the properties of conical intersections. Conical intersections (CIs) very likely act as bottlenecks for the isomerization reactions in photo-activatable proteins. In particular, the properties of the CI region should be crucial for the photoswitching speed of the RSFPs; however, to prove this, an assessment of a group of photoswitching proteins is needed. At the same time, there is an inverse correlation between the speed of photoswitching and the fluorescence quantum yield (as seen in Table S1). This suggests a pivotal role of excited-state reaction kinetics, due to the possible presence of excited-state energy barriers as well as the distinctive topography of the conical intersection. To identify conical intersections on the excited-state isomerization path of the RSFPs we have used the constrained conical intersection optimization module available in Molcas\cite{fdez._galv?n_analytical_2016}. CI optimizations were carried out at the CASSCF(12,11)/6-31G* level, followed by a single-point SA-CASPT2(12,11)/6-31G* calculation on the optimized structures. Our findings are shown in Figure~\ref{fgr:CI} and the relative energies are summarized in Table~S2. For the triad rsFastLime -- Dronpa -- rsKame the conical intersection energies are inversely proportional to the isomerization quantum yields (and the photoswitching speed). Namely, rsFastLime is the fastest switcher of the three and, correspondingly, its CI has the lowest energy (60.2~kcal/mol); then follows Dronpa (61.6~kcal/mol); and rsKame -- the slowest switcher -- has the highest CI energy of the three (66.7~kcal/mol). On the basis of the results for these three proteins, we could have concluded that the energy of the CI guides the photoswitching speed.
Following this conclusion we would have expected the CI energies of Padron0.9 and bsDronpa to be even lower than that of rsFastLime, since both of them exhibit extremely fast switching behavior. However, despite numerous CI optimization attempts, it was not possible to lower the CI energies of Padron0.9 and bsDronpa; they remained at 78.3~kcal/mol and 76.3~kcal/mol respectively. Hence, not only the kinetics but also the topology of the CI region is important for photoswitching. \begin{figure}[H] \begin{subfigure}[b]{0.55\textwidth} \includegraphics[width=9.1cm]{figures/mechanism_Dronpa.png} \end{subfigure} \begin{subfigure}[b]{0.40\textwidth} \includegraphics[width=8.5cm]{figures/mechanism_Padron.png} \end{subfigure} \caption{Potential energy surface of the chromophore isomerization. Black lines correspond to the mechanism proposed for rsKame -- Dronpa -- rsFastLime and blue lines to bsDronpa -- Padron. The energy values are indicated in the corresponding order and color. Dashed and continuous lines correspond to the ground- and excited-state surfaces respectively. The TS region is indicated in grey at its hypothetical position.} \label{fgr:CI} \end{figure} To characterize the CI region we have also searched for the local minimum closest to each CI point: each minimization started from the structure identified as the CI point and proceeded with no constraints. For rsFastLime -- Dronpa -- rsKame an energy minimum was found in the vicinity of the CI point, with energies of 54.4, 55.9 and 65.1~kcal/mol respectively. The energy minima for Padron0.9 and bsDronpa were identified at 60.6 and 56.5~kcal/mol respectively. However, after visual inspection and comparison of the ground-state energies, it was established that these minimized structures belong to the Franck-Condon region.
In order to quantify the extent of the isomerization reaction coordinate we have calculated the RMSD of the chromophore from the initial minimum ground-state structure. The RMSD comparison can be found in Table~S2, and superimposed chromophore structures from the FC and CI regions are shown in Figure S4. Compiling all of these findings, it can be deduced that the CI region of rsFastLime -- Dronpa -- rsKame is separated by an energy barrier from the Franck-Condon region, while in the case of Padron0.9 and bsDronpa it can be accessed directly. Our results are in accordance with the potential energy surface described by Polyakov et al.\cite{polyakov_potential_2010} for the HBDI chromophore. The authors indicate the possibility of a higher-lying CI with a low excited-state transition state; at the same time they identify a low-lying CI with a high-energy transition state. As such, the fast switching of bsDronpa and Padron0.9 can be explained by a flat potential energy surface which directly connects the FC and CI regions. \section{Conclusion} By selecting a consistent set of reversibly-switchable fluorescent proteins (Dronpa -- rsFastLime -- rsKame -- Padron -- bsDronpa) we have built a protocol that completely reproduces their absorption and emission trends. With a blue-shift of only about 2~kcal/mol, it provides a solid tool for rational fluorescent protein design and mechanistic study. A thorough investigation of the excited-state reaction path allowed us to rationalize the photoswitching properties. Via charge transfer analysis and individual amino acid contributions it has been shown that absorption is more sensitive than emission to mutations and changes in the protein environment. We attribute this phenomenon to the higher susceptibility of the excited-state species, sometimes denominated "hot" species \cite{vengris_contrasting_2004}, which are unequilibrated with the protein environment.
During excitation the system is in equilibrium in the ground state, not the excited state, while during emission the system finds itself in equilibrium in the excited state. Hence environmental changes disturb the system more during absorption and less during emission. bsDronpa, with its remarkable Stokes shift, serves as an example of protein environment flexibility. Its most representative population was identified during the MD simulations; the major difference from the rest of the proteins is the absence of a hydrogen bond between Arg~89 and the imidazoline ring of the chromophore. Our individual amino acid calculations clearly show Arg~89 contributing to a blue shift in absorption, which is reproduced in bsDronpa. Further on, the excited-state reaction pathway was assessed for the set of photoswitching proteins. It is shown how the conical intersection controls the speed of photoswitching as well as the brightness. However, it has to be noted that the existence of an excited-state energy barrier also plays an important role in the protein's photophysics. In general, a higher-lying conical intersection is separated by an excited-state energy barrier from the Franck-Condon region; however, in the case of the fastest switching proteins (Padron and bsDronpa) no such barrier was identified. And although the conical intersection in these two proteins has a higher energy than in the slower switching ones, it is the flat potential energy surface which facilitates the isomerization. A more accessible conical intersection relates to a lower brightness and a higher isomerization speed. Further characterization of the conical intersection topology, such as its slope, as well as excited-state classical trajectory calculations, could provide more mechanistic details.
\section{Computational Methods} \textit{Molecular Dynamics (MD) and Replica-Exchange MD (REMD)} As a first step, extensive conformational sampling of all five proteins was performed using the Gromacs 5.0.1\cite{pronk_gromacs_2013} package and the Amber99sb-ILDN force field\cite{case_amber_2014}. For Dronpa and Padron0.9 the X-ray structures of the bright states were used (2Z10\cite{mizuno_light-dependent_2008} and 3ZUJ\cite{regis_faro_low-temperature_2011} respectively). rsKame, rsFastLime and bsDronpa were modelled by point mutations on the basis of the Dronpa crystal structure. The list of mutations differentiating Dronpa from the rest of the proteins is given in Table \ref{tab:mutations}. A detailed description of the MD simulations for Dronpa, rsFastLime and rsKame can be found in our previous work\cite{smyrnova_molecular_2015, smyrnova_thermal_2016}; a similar scheme was adopted for Padron0.9 and bsDronpa. PROPKA \cite{li_very_2005} predictions were used to determine the protonation states of the titratable residues. For Padron0.9 we adopted the protonation scheme proposed by Brakemann et al.\cite{brakemann_molecular_2010}, where Glu~144 is protonated and His~193 is neutral with a proton placed on N$\epsilon$. Crystal waters present in the X-ray structure were conserved. The proteins were then solvated in an 80.5$\times$80.5$\times$80.5~{\AA}$^3$ TIP3P water box and neutralized by Na\textsuperscript{+} and Cl\textsuperscript{-} counterions added at a concentration of 0.15~M. The prepared systems were minimized and equilibrated at 300~K and 1~bar for 100~ps. Long-range interactions were included using the Particle Mesh Ewald method, with the cut-off distance set to 13.5~\AA. Since a good-quality crystal structure was available for Padron0.9, five 100~ns MD runs were enough for sufficient conformational sampling. A time step of 2~fs and the SHAKE algorithm\cite{ryckaert_numerical_1977} were employed.
The temperature was controlled using velocity rescaling with a stochastic term\cite{bussi_canonical_2007}. The bsDronpa model was constructed by introducing 6 point mutations (listed in Table 1) into the Dronpa structure. In order to ensure a thorough conformational search, a Temperature Replica-Exchange MD (T-REMD) simulation was performed. Twenty structures were assigned temperature values from 300~K to 330~K, obtained using the temperature generator server for REMD simulations\cite{patriksson_temperature_2008}. Equilibration was performed at each of the 20 temperatures following the same procedure as described for the MD simulations. Then 50~ns runs were performed for each replica, leading to a 1~$\mu$s cumulative production run. The resulting exchange probability was $\approx$~20$\%$, with exchange attempts occurring every 2~ps. GROMACS utilities were used to perform cluster analysis for each of the five proteins. As a result, the median structure (the most representative for each simulation) was determined. These structures were then used as input structures for the QM/MM simulations. \newline \textit{The QM/MM calculations} The MOLCAS/Tinker interface\cite{aquilante_molcas_2010} was used to perform the QM/MM calculations: MOLCAS 7.8\cite{aquilante_molcas_2010} was used to describe the QM part and Tinker 5.1\cite{ponder_efficient_1987} the MM part. In the preparation of the QM/MM models the Automated Rhodopsin Modeling (ARM)\cite{melaccio_towards_2016} approach was used. This tool has proved to be a robust protocol for simulations of rhodopsin-like photoreceptors. There are, of course, structural differences between Dronpa-like fluorescent proteins and rhodopsin-like photoreceptors; however, the similarity of their photobiological characteristics allows the same approach to be employed. The QM/MM model was constructed on the basis of the median structure obtained from the classical MD simulations.
The proteins were re-centered in the water box, and a sphere of 30~{\AA} radius around the chromophore was defined. Since no periodic boundary conditions are employed during the QM/MM simulations, this approach saves computing time without losing the environment effect. The QM/MM model can be divided into three parts: environment, cavity and QM part. A schematic representation of the QM/MM models constructed for the five proteins is shown in Figure~S5. The input structure used for the QM/MM model was subjected to 10~$\times$~1~ns MD runs, where the environment was kept frozen and the cavity atoms flexible. Then 10 random frames were picked from the resulting trajectories and used for the consecutive QM/MM calculations. Thus, for each protein, 10 independent QM/MM models were prepared. \begin{figure}[H] \includegraphics[width=13.5cm]{figures/Dronpa_sphere_chrom.png} \caption{The full system is shown on the left (protein shown as cartoon, Na+ and Cl- ions as spheres (blue and violet respectively), the dark cyan surface indicates a 30{\AA} water sphere, and the chromophore is shown as sticks). A 4{\AA} radius cavity around the chromophore was defined, comprising X residues (cyan) including the chromophore (green). All these residues were flexible during the optimization. The water molecules present in the crystal structure were maintained and included in the cavity. The chromophore was included in the QM region.} \label{fgr:7} \end{figure} All the atoms belonging to the environment are fixed. The 4{\AA} radius cavity was defined around and including the chromophore, and the residues in the cavity were free to relax. The QM part comprised the chromophore's $\pi$-system, with link atoms\cite{singh_combined_1986} placed between carbon atoms (shown in the Fig. X). The link atom position was restrained according to the Morokuma scheme\cite{svensson_oniom:_1996}.
The charge of the frontier atoms was set to 0 and the residual fractional charge was redistributed over the rest of the chromophore atoms according to atomic mass, ensuring an integer charge of $-1$ for the chromophore. In all calculations the MM atoms were described with the Amber94\cite{cornell_second_1995} force field. As a first step, the structures were optimized at the HF/3-21G/MM level. Then a single-point CASSCF(12,11)/6-31G*/MM calculation was performed in order to select the orbitals corresponding to $\pi$-orbitals, excluding those with populations close to 0 or 2. The orbitals defined in the single-point calculation were used as input for the consecutive CASSCF(12,11)/6-31G*/MM optimization. Then a 3-root single-point CASPT2(12,11)/6-31G*/MM calculation was performed, taking a 3-root State Average (SA)\cite{ferre_approximate_2002} CASSCF(12,11)/6-31G*/MM wavefunction as reference. As a result, 10 vertical excitation energies ($\Delta$E\textsubscript{S1-S0}) were computed. Then a CASSCF(12,11)/6-31G*/MM excited-state optimization was performed. For the located excited-state minimum structure a 3-root single-point CASPT2(12,11)/6-31G*/MM calculation was performed. Emission energies were computed as the difference between the first two roots ($\Delta$E$'$\textsubscript{S1-S0}). The average over the 10 values for each protein was taken as the reference for comparison with the experimental values. To calculate the standard deviation from the experimental values we used the standard formula given in Eq.~\ref{eqn:STD} \begin{equation} \sigma = \sqrt{\frac{1}{N} \sum_{i=1}^N (x_i - x_{exp})^2}\label{eqn:STD} \end{equation} where N=10 and x\textsubscript{exp} is the experimental value. The charge redistribution shown in Figure \ref{fgr:charge_redistribution} was calculated as the difference between the Mulliken charges in the ground and excited states. Hydrogen charges were summed onto the heavy atoms.
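As a minimal illustration, the deviation measure of Eq.~\ref{eqn:STD} amounts to the following computation (the energy values below are made up for the example; the actual analysis used the ten computed excitation energies per protein):

```python
import math

def deviation_from_experiment(values, x_exp):
    """sigma = sqrt( (1/N) * sum_i (x_i - x_exp)^2 ), as in Eq. (STD)."""
    n = len(values)
    return math.sqrt(sum((x - x_exp) ** 2 for x in values) / n)

# Hypothetical vertical excitation energies (kcal/mol) from 10 snapshots
energies = [58.1, 57.4, 59.0, 58.6, 57.9, 58.3, 58.8, 57.6, 58.2, 58.5]
x_exp = 58.0  # hypothetical experimental value
print(round(deviation_from_experiment(energies, x_exp), 3))  # 0.54
```

Note that this is an RMS deviation from a single experimental value, not the sample standard deviation about the mean of the computed values.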
To calculate the effect of each amino acid on the absorption and emission maxima, the most representative structures for each protein were used for a 3-root single-point CASPT2(12,11)/6-31G*/MM calculation, in which the charges of the atoms corresponding to the given amino acid were set to 0 in the Tinker key-file. \begin{acknowledgement} Financial support from the Flemish Government through the KU Leuven concerted action scheme is gratefully acknowledged. The computational resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Hercules Foundation and the Flemish Government. \end{acknowledgement} \begin{suppinfo} The following information can be found in the Supporting Information: computational details. \end{suppinfo}
\section{Introduction} The eigenstates of the square of the angular momentum operator $L^2$ which are also eigenstates of $L_x+i\eta L_y$, where $\eta$ is a real parameter, have been called the {\em intelligent spin states} \cite{aragone} and have been the subject of intensive analytical studies \cite{kolodz,rashid}. However, at least to our knowledge, one does not find in the literature any discussion of their geometrical interpretation or of their utility in the construction of angular momentum coherent states. In completing a recent work \cite{rozmej} on the time evolution of coherent states built from a subclass of those states we came to the conclusion that such a discussion still had to be presented. In this short article we first show in section \ref{ssots} that the parameter $\eta$ makes it possible to define squeezed states on the sphere, i.e. states for which the uncertainties $\Delta L_x^2$ and $\Delta L_y^2$ can be varied at will. It is, however, for real values of $\eta$ that the product of the uncertainties has a minimum value. In section \ref{cotss} we show that the states can be classified into three categories, a result already found in ref.~\cite{rashid}, and that the well known Radcliffe states belong to one of these categories. In section \ref{soafoewp} we study the angular localization and the partial wave expansion of a particular class generated from an exponential coherent state on the sphere. This state was introduced by us elsewhere~\cite{rozmej}. Finally, in section \ref{frftcoarr} we study the time evolution of this particular set of states along the same lines as in \cite{rozmej}, i.e. assuming that the wave packets evolve with the hamiltonian of a rigid body with moment of inertia $J$, i.e. \begin{equation}\label{Ham} H = \frac{\hbar^2}{2J}\,L^2 \: .
\end{equation} In particular we will show that the scenario of fractional revivals found in \cite{averbukh} is well exhibited by the family of wave packets studied in section \ref{soafoewp}. \section{Squeezed states on the sphere}\label{ssots} Let us enumerate a few general properties of the states $|w,\eta\rangle$ which are eigenstates of $L_x+i\eta L_y$ with a complex value of $\eta$. In this first part there is no need to assume that they are eigenstates of $L^2$. These states are the normalized states which obey the equation: \begin{equation}\label{l1} (L_x+i\eta L_y)|w,\eta\rangle = w|w,\eta\rangle \: . \end{equation} It is a simple exercise, already discussed in references \cite{jackiw} and \cite{levy}, to prove equations (\ref{l2}-\ref{l7}). First of all \begin{equation}\label{l2} |\eta|^2 = \frac{\Delta L_x^2}{\Delta L_y^2} \: , \end{equation} i.e. $|\eta|$ can be called the squeezing parameter (with the definitions $\Delta L_i^2=\langle L_i^2\rangle -\langle L_i\rangle^2, i=x,y$). The phase $\alpha$ of $\eta$ determines the ratio of the average value of the anticommutator of $L_x$ with $L_y$ to the average of their commutator since \begin{equation}\label{l3} \tan\alpha = \frac{\langle\{ L_x,L_y \}\rangle - \langle L_x\rangle \langle L_y\rangle}{\langle L_z\rangle} \: . \end{equation} Finally the product of the uncertainties is given by: \begin{equation}\label{l4} \Delta L_x^2 \Delta L_y^2 = \frac{1}{4}[\langle L_z\rangle^2 + |\langle\{ L_x,L_y \}\rangle - \langle L_x\rangle\langle L_y\rangle|^2] = \frac{1}{4} \frac{\langle L_z\rangle^2}{\cos^2\alpha} \: . \end{equation} The average values of $L_x$ and $L_y$ are fixed both by the parameter $\eta$ and the eigenvalue $w$ by: \begin{equation}\label{l5} \langle L_x\rangle = \frac{\eta w^*+w\eta^*}{\eta+\eta^*}, \qquad \langle L_y\rangle = \frac {1}{i}\frac{w-w^*}{\eta+\eta^*} \: . 
\end{equation} The eigenstates corresponding to a real parameter $\eta$ satisfy the important minimum uncertainty relation from which the work \cite{aragone} was initiated: \begin{equation}\label{l7} \Delta L_x^2 \Delta L_y^2 = \frac{1}{4}\langle L_z\rangle^2 \: . \end{equation} \section{Classification of the squeezed states}\label{cotss} The eigenvalue $w$ can be obtained very simply in the basis of eigenstates of $L^2$ with eigenvalue $l(l+1)$ if one uses the observation \cite{rashid} that, within a constant factor $\sqrt{1-\eta^2}$, the operator $L_x+i\eta L_y$ is one of the three generators of an SU(2) algebra. The set of operators satisfying this algebra is defined as \begin{equation}\label{l8} {\cal L}_3 = \frac{L_x+i\eta L_y}{\sqrt{1-\eta^2}}, \qquad {\cal L}_{\pm} = \pm \left(\frac{\eta L_x +iL_y}{\sqrt{1-\eta^2}}\right) - L_z \: . \end{equation} Therefore there are $2l+1$ solutions to equation (\ref{l1}); instead of $w$ one simply uses the eigenvalue $k$ of ${\cal L}_3$, with $k=-l,\ldots ,+l$, and the formula \begin{equation}\label{l10} w = k\sqrt{1-\eta^2} \: . \end{equation} Let us denote by $|l,k,\eta\rangle$ the eigenstate solutions of equation (\ref{l1}) in the basis where $L^2$ is diagonal. (Note that $k$ is the eigenvalue of ${\cal L}_3$ and not of $L_z$.) Using (\ref{l10}) one obtains two expressions for the averages of $L_x$ and $L_y$ in the particular case of a real value of $\eta$: \begin{equation}\label{l11} \langle L_x\rangle = k\sqrt{1-\eta^2} , \qquad \langle L_y\rangle=0 , \qquad \mbox{if} \qquad |\eta|<1 \end{equation} \begin{equation}\label{l12} \langle L_x\rangle = 0 , \qquad \langle L_y\rangle=k\sqrt{\eta^2-1} , \qquad \mbox{if} \qquad |\eta|>1 \: . \end{equation} The case $|\eta|=1$ is obviously singular, but there is a unique well known solution, for which $w=0$.
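As a numerical cross-check of the spectrum (\ref{l10}), one can build the matrices of $L_x$ and $L_y$ in the $|l,m\rangle$ basis and diagonalize the non-Hermitian operator $L_x+i\eta L_y$ directly. The sketch below (with arbitrary example values $l=2$, $\eta=0.5$ and $\hbar=1$) recovers the eigenvalues $k\sqrt{1-\eta^2}$:

```python
import numpy as np

def lx_ly(l):
    """Matrices of Lx and Ly in the |l,m> basis, m = -l, ..., l (hbar = 1)."""
    m = np.arange(-l, l)                     # m values with a state m+1 above
    cp = np.sqrt(l * (l + 1) - m * (m + 1))  # <l,m+1| L+ |l,m>
    lp = np.diag(cp, k=-1)                   # raising operator, index m -> m+1
    lm = lp.T                                # lowering operator
    return (lp + lm) / 2, (lp - lm) / 2j

l, eta = 2, 0.5
lx, ly = lx_ly(l)
w = np.sort(np.linalg.eigvals(lx + 1j * eta * ly).real)
expected = np.arange(-l, l + 1) * np.sqrt(1 - eta**2)  # k*sqrt(1-eta^2)
print(np.allclose(w, expected))  # True
```

For $|\eta|<1$ real the eigenvalues come out real, as expected from the similarity of $L_x+i\eta L_y$ to $\sqrt{1-\eta^2}\,{\cal L}_3$.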
One has then \begin{equation}\label{l13} (L_x\pm iL_y) |l,k=\pm l,\eta=\pm 1\rangle = 0 \end{equation} and $|l,\pm l,\eta=\pm 1 \rangle$ coincide with the eigenstates of $L_z$ with eigenvalues $\pm l$. If $\eta$ takes a complex value, $\langle L_x \rangle$ and $\langle L_y \rangle$ are both nonzero but both proportional to $k$. The intelligent spin states are the solutions of equation (\ref{l1}) with average values given by (\ref{l11}) or (\ref{l12}) which moreover satisfy equation (\ref{l7}). The {\em generalized intelligent states} or the {\em quasi-intelligent spin states}, which were defined in \cite{aragone} and \cite{rashid} respectively, are the solutions of equation (\ref{l1}) with a complex value of $\eta$ for which both averages are nonzero and to which equation (\ref{l4}) must be applied with a nonzero value of $\alpha$. In both cases it is sufficient to consider the interval $|\eta|<1$ and $\alpha \in [0,\pi/2]$. Let us now classify those states according to $k$ as follows: \begin{itemize} \item[~~i)] The states with $k=0$, for which $\langle L_x \rangle$ and $\langle L_y \rangle$ are zero and only $\langle L_z \rangle$ is nonzero.\\ \item[ ii)] The states with $k=\pm l$, which are very particular as we will show below. \\ \item[iii)] The states with intermediate values of $k$. \end{itemize} In order to justify this classification we have to express $\langle L_z \rangle$ as \begin{equation}\label{l14} \langle L_z\rangle = \eta \langle{\cal L}_3\rangle +i\sqrt{1-\eta^2} \langle L_y\rangle - \langle{\cal L}_+\rangle \: . \end{equation} Except for the case $k=\pm l$, the average of the operator ${\cal L}_+$ defined by equation (\ref{l8}) is nonzero. This average can be calculated from the works \cite{aragone} or \cite{rashid}. It is generally the ratio of two polynomials in the parameter $\eta$, the degree of which increases with $l$.
Therefore the angle between the vector $\langle \vec{L} \rangle$ and the $Oz$ axis is not a simple function of $\eta$ for general values of $k$ and $l$, and there is no simple geometrical meaning of this angle. For $k=\pm l$ such an interpretation can indeed be found. Let us define $\eta$ in terms of two angles $\theta_0$ and $\phi_0$ by \begin{equation}\label{l15} \eta = \frac{\tan\phi_0+i\cos\theta_0}{\cos\theta_0\tan\phi_0+i} \: . \end{equation} With this value of $\eta$ the equation \begin{equation}\label{l16} (L_x+i\eta L_y) |l,l,\eta \rangle = l\sqrt{1-\eta^2} |l,l,\eta \rangle \end{equation} can be written simply as \begin{equation}\label{l17} (\vec{L}\cdot\vec{l}) |l,l,\eta \rangle = l |l,l,\eta \rangle \: , \end{equation} where the vector $\vec{l}$ is the unit vector in the direction of $\langle \vec{L} \rangle$, which is such that \begin{equation}\label{l18} \langle L_x\rangle = l\sin\theta_0\cos\phi_0, \quad \langle L_y\rangle = l\sin\theta_0\sin\phi_0, \quad \langle L_z\rangle = l\cos\theta_0 \: . \end{equation} Equation (\ref{l18}) expresses the fact that the $|l,l,\eta\rangle$ are simply spherical harmonics $Y_l^l$ in a system of axes where $Oz$ is along the vector $\vec{l}$. These states are generally called Radcliffe's states \cite{radcliffe}. Let $\vec{u},\vec{v},\vec{l}$ be three unit vectors forming an orthogonal direct system. One has also \begin{equation}\label{l19} (\vec{L}\cdot (\vec{u}+i\vec{v}))\,|l,l,\eta \rangle = 0 \: . \end{equation} In this rotated system of coordinates the parameter $\eta$ is equal to one!\footnote{One also finds other cases where the states coincide with spherical harmonics with an axis of quantization different from $Oz$: it is the $Ox$ axis for $\eta=0$, and for $\eta=i|\eta|$ this axis has $\theta=\pi/2$ and $\sin\phi=-|\eta|/\sqrt{1-\eta^2}$.} For $k \neq \pm l$ the transformation which enables one to build up the intelligent or quasi-intelligent spin states from the usual spherical harmonics is not a rotation.
This remark was already formulated long ago in \cite{rashid}, at the same time as the previous remark on the equivalence of Radcliffe's states with the states with $k=\pm l$. \section{Study of a family of exponential wave packets}\label{soafoewp} Let us now discuss the properties of wave packets built from intelligent or quasi-intelligent spin states with the same value of $w$, i.e. of $k$, and containing many different values of $l$. The consideration of such admixtures indeed enables us to modify the angular spread at will, keeping either (\ref{l4}) or (\ref{l7}), with the allowance, provided by equation (\ref{l2}), that $\eta$ is an adjustable squeezing parameter. These WP can be expressed either in the basis $|l,k,\eta\rangle$ with coefficients $c_l(k,\eta)$ as \begin{equation}\label{l20} |\Psi_{\eta k}\rangle = \sum_l \, c_l(k,\eta) |l,k,\eta\rangle \end{equation} or in the basis of ordinary spherical harmonics with coefficients $b_{lm}(k,\eta)$ as \begin{equation}\label{l21} \Psi_{\eta k}(\theta,\phi) = \sum_{lm}\,b_{lm}(k,\eta) \, Y_l^m(\theta,\phi) \: . \end{equation} We will discuss from now on the properties of families of states generated from a {\em parent} state investigated by us elsewhere \cite{rozmej} and called the {\em exponential coherent state}. For a real value of $\eta$ this state depends on a single parameter $N$ and a function $v$ of $\theta$ and $\phi$; its expression is: \begin{equation}\label{l22} \Psi_{\eta 0}(\theta,\phi) = \sqrt{\frac{N}{2\pi\sinh 2N}} e^{Nv} = \sqrt{\frac{N}{2\pi\sinh 2N}} e^{N\sin\theta(\cos\phi+i\eta\sin\phi)} \: . \end{equation} The associated probability density depends only on $N$; it is maximal within a solid angle symmetric around the $Ox$ axis, and the width of this solid angle is of the order $4\pi/(4N+1)$.
Also the average $\langle L_z \rangle$ obeys the following formula \begin{equation}\label{l23} \langle L_z\rangle = \eta\left(N\coth (2N)-\frac{1}{2}\right) \stackrel{\scriptscriptstyle N\gg 1}{\simeq} \eta\left(N-\frac{1}{2}\right)\: . \end{equation} We have shown in \cite{rozmej} how to obtain this WP from a three-dimensional harmonic oscillator coherent state. As said in \cite{rozmej}, all these WP have $k=0$. Our purpose is now to construct, for real or complex $\eta$, families of WP having increasing values of $k$, using (\ref{l22}) for the parent state. We would like to study the angular spread as a function of $k$ and, so to say, the coherence properties of the following family: \begin{equation}\label{l24} \Psi_{\eta 0},\quad {\cal L}_+ \Psi_{\eta 0},\quad {\cal L}_+^2 \Psi_{\eta 0}, \ldots , {\cal L}_+^k \Psi_{\eta 0} \: . \end{equation} These states are all solutions of equation (\ref{l1}) with eigenvalues $0, \sqrt{1-\eta^2}, 2\sqrt{1-\eta^2}, \ldots ,k \sqrt{1-\eta^2},\ldots$ respectively. If $\eta$ is real they all satisfy equation (\ref{l7}), while equation (\ref{l4}) is satisfied if $\eta$ is complex. Applying ${\cal L}_+$ to the argument $v$ of (\ref{l22}) one obtains a function $v_+$ defined by \begin{equation}\label{l25} \fl v_+ = {\cal L}_+ v = \frac{1}{2}(\cos\theta-\eta)\sqrt{\frac{1+\eta}{1-\eta}} - \frac{1}{2}(\cos\theta+\eta)\sqrt{\frac{1-\eta}{1+\eta}} + (\eta\cos\phi+i\sin\phi)\sin\theta \: . \end{equation} But a second application leads to the following property: \begin{equation}\label{l26} {\cal L}_+ v_+ = {\cal L}_+^2 v = 0 \: . \end{equation} The set defined by (\ref{l24}) can then be expressed simply in terms of $\Psi_{\eta 0}$ and of powers of $v_+$ as \begin{equation}\label{l27} \Psi_{\eta 0},\quad v_+ \Psi_{\eta 0},\quad v_+^2 \Psi_{\eta 0}, \ldots , v_+^k \Psi_{\eta 0} \: . \end{equation} In this manner one sees that the gaussian will be dominant for any $k$, in such a way that the WP will be highly concentrated on the sphere even for high $k$.
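The large-$N$ limit in (\ref{l23}) sets in very quickly, since $\coth(2N)\to 1$ exponentially; a quick numerical check (a sketch, with $\hbar=1$ and an arbitrary choice of $\eta$):

```python
import math

def lz_average(eta, n):
    """<L_z> of the exponential coherent state, eta*(N*coth(2N) - 1/2)."""
    return eta * (n / math.tanh(2 * n) - 0.5)

eta = 0.5
for n in (0.5, 2.0, 20.0):
    # exact expression vs. asymptotic form eta*(N - 1/2)
    print(n, lz_average(eta, n), eta * (n - 0.5))
```

Already at $N=20$ the two expressions agree to machine precision, while at $N=0.5$ the difference is still of order $10^{-1}$.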
The action of the operator ${\cal L}_+^k$ is however very different if one analyses the decomposition of the WP into partial waves according to the expansion (\ref{l21}). Indeed it suppresses all the partial waves with $l<k$, it moves the distribution towards the values with $m>l$ and, through the change in the normalization, it also increases the weight of the higher partial waves. However, if one uses not the basis of the usual spherical harmonics (\ref{l21}) but the basis of intelligent spin states, one can draw benefit from the fact that in this basis we have the equation \begin{equation}\label{l28} {\cal L}_+ |l,k,\eta\rangle = \sqrt{l(l+1)-k(k+1)} |l,k+1,\eta\rangle \end{equation} which shows that the relative phases of the states present in (\ref{l20}) are not affected. Therefore the states keep their coherence property as a function of $k$. To be complete, we give below the expression of $b_{lm}(0,\eta)$, from which a recurrence can easily be built to find the coefficients for $k>0$. It was found in \cite{rozmej} that for real values of $\eta$ \begin{equation}\label{l29} \fl b_{lm} = \sqrt{\frac{2N}{\sinh (2N)}} \, \sum_{l_1l_2} \,(-1)^{l_2} \, \frac{(N(1+\eta))^{l_1} (N(1-\eta))^{l_2}}{\sqrt{(2l_1)!(2l_2)!}} \, \frac{\langle l_1 l_2 00|l0\rangle \langle l_1 l_2 l_1 -l_2|lm\rangle}{\sqrt{(2l+1)}}\; . \end{equation} For complex $\eta$ the square root in front must be changed. The distribution of the coefficients $b_{lm}$ is presented in figure 1 for $\eta=0.5, N=0$ and $k=0,10$ and 20. A very large shift of the distribution of the $b$'s towards higher $l$ and $m>l$ is clearly observed when $k$ increases, as well as an increase in the spread of the values of $m$. Sections of the WP at their maximum are presented in figure 2, which shows the angular shape of the WP.
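Equation (\ref{l28}) also makes the suppression of the low partial waves explicit: after $k$ applications of ${\cal L}_+$, each coefficient $c_l$ is multiplied by $\prod_{j=0}^{k-1}\sqrt{l(l+1)-j(j+1)}$, which vanishes for $l<k$ and grows with $l$. A minimal numeric sketch:

```python
import math

def ladder_factor(l, k):
    """prod_{j=0}^{k-1} sqrt(l(l+1) - j(j+1)); zero whenever l < k."""
    f = 1.0
    for j in range(k):
        f *= math.sqrt(max(l * (l + 1) - j * (j + 1), 0.0))
    return f

# Partial waves with l < k are annihilated, higher l are weighted up:
print([round(ladder_factor(l, 3), 1) for l in range(2, 6)])
# [0.0, 26.8, 71.0, 142.0]
```

Since the factor is real and positive, the relative phases of the $c_l$ in (\ref{l20}) are indeed left untouched.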
Apart from a small movement towards smaller values of $\theta$ when $k$ increases, the WP is almost as strongly localized for $k=20$ as it was for $k=0$, for $\eta=0.5$ and $\alpha=0$ as well as for $\alpha\neq 0$. Despite the strong angular concentration of the WP for $k=20$, the calculation of $\langle L_z \rangle$ provides very large values for the product of the uncertainties. This average value is shown in figure 3 for $\eta=0.5$, together with $\langle L^2 \rangle$. \section{Fractional revivals for the case of a rigid rotation}\label{frftcoarr} The time evolution of the wave packets described above can now be studied assuming the hamiltonian (\ref{Ham}). This assumption was applied in \cite{rozmej} to the state (\ref{l22}), i.e. the state representing a rigid heteronuclear molecule, for which we made an extensive study of the time evolution. The fractional waves that one obtains as time proceeds are obtained from the initial WP by multiplying the $b_{lm}$ by $\exp (-iI\omega_0 t_s)$ (see \cite{rozmej}), where $\omega_0$ is the frequency of periodicity of the rigid rotor and $t_s$ is a fractional time. It is this change of $t_s$ which allows one to obtain a rich variety of fractional waves on the sphere for $k=0$. Depending on the value of $\eta$ (only real values of $\eta$ were considered in \cite{rozmej}) the WP exhibits a rich scenario of fractional revivals along the lines described in \cite{averbukh}, the origin of which is traced to the quantum mechanical spreading. The parameter $\eta$ allows one to control the relative spread of each of the angular variables. For fractional times $(m/n)\,T_{rev}$, where $T_{rev}$ is a common revival time, the WP is subdivided into a certain number of WP ($n$ if $n$ is odd, $n/2$ in the even case) whose shapes depend strongly on the value of $\eta$. For the case $\eta=\pm 1$ the fractional WP are clones of the initial WP. For different values their shapes change (we have called these WP {\em mutants}).
If $N$ is large enough, the fractional WP are located around different directions on the sphere and do not interfere much spatially for low enough values of $n$ ($n<8$ for $N=20$). We interpret these properties as a manifestation of the robust coherence of the WP. The revivals of various WP having $k=0,5,10$ and 20 are presented for $m/n=1/10$; this time has been chosen as a typical example. There we should observe 5 wave packets according to \cite{averbukh}. These WP are always very well separated from each other; again, there is no significant difference between the WP constructed from intelligent spin states and those built from quasi-intelligent ones. \section{Conclusions}\label{con} The conclusion of this article is that there exist numerous possibilities for the construction of angular coherent states using the properties of the intelligent spin states. For a system with a hamiltonian quadratic in $I$ [$I(I+1)$ spectrum] these WP spread on the sphere, but there is a well identified mechanism of fractional revivals that produces a set of well concentrated {\em mutants}. For rotators which are not quadratic the scenario is valid during a limited time. This is the case of nuclei, for which we are making a parallel study \cite{rozmejapp}. \ackn One of us (P.R.) kindly acknowledges the support of the Polish Committee for Scientific Research (KBN) under the grant 2 P03B 143 14. \section*{References}
\section{Introduction} \label{sec:Introduction} CT14 parton distribution functions (PDFs)~\cite{Dulat:2015mca} are obtained in a global analysis of a variety of hadronic scattering experimental data. They are suitable for general-purpose QCD calculations at the Large Hadron Collider (LHC) and in other experiments. The previous generation of general-purpose PDFs from the CTEQ-TEA (CT) group, designated as CT10~\cite{Gao:2013xoa,Lai:2010vv}, was used in a wide range of analyses in hadron collider phenomenology. The CT10 PDFs were based on diverse experimental data from fixed-target experiments, HERA and the Tevatron collider, but without data from the LHC. The CT14 global analysis represents the upgrade of the CT10 fit and includes data from the LHC Run I, as well as updated data from the Tevatron and HERA experiments. The CT14 PDF sets are available in LHAPDF~\cite{LHAPDF}, together with recent PDF parametrizations from other groups~\cite{Harland-Lang:2014zoa,Ball:2014uwa,Alekhin:2013nda,Jimenez-Delgado:2014twa}. The latest version of the PDF4LHC recommendation~\cite{Butterworth:2015oua} provides users with a consistent procedure for combining the CT14, NNPDF, and MMHT PDF sets in phenomenological analyses. The CT14 PDFs are determined from data on inclusive high-momentum-transfer processes, for which perturbative QCD is expected to be reliable. For example, in the case of deep-inelastic lepton scattering (DIS), only data with $Q>2$ GeV and $W^2>12.5$ GeV$^2$ are used, where the mass squared of the final-state hadronic system is $W^2= Q^2({1 \over x} -1)$. Data in this region are expected to be relatively free of nonperturbative effects, such as higher-twist or nuclear corrections. The HERA Run I inclusive DIS measurements imposed important PDF constraints in both the CT10 and CT14 global analyses.
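For illustration, the DIS selection just described is a simple filter on each $(x,Q)$ point; the sketch below (in Python, with made-up kinematic values) applies the two cuts quoted above:

```python
def passes_dis_cuts(x, Q, Q_min=2.0, W2_min=12.5):
    """CT14-style DIS selection: Q > 2 GeV and W^2 = Q^2(1/x - 1) > 12.5 GeV^2."""
    W2 = Q**2 * (1.0 / x - 1.0)
    return Q > Q_min and W2 > W2_min

# a moderate-x point at Q = 3 GeV survives both cuts, while a large-x point
# at the same Q fails the W^2 cut even though it passes the Q cut
assert passes_dis_cuts(x=0.01, Q=3.0) is True
assert passes_dis_cuts(x=0.9, Q=3.0) is False
```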
In 2015, the H1 and ZEUS collaborations released a new combination of measurements of inclusive deep-inelastic scattering cross sections at the $e^{\pm} p$ collider HERA~\cite{Abramowicz:2015mha}. We refer to this data ensemble as HERA2 throughout this paper, to distinguish it from the previous combination of HERA data sets on DIS published in 2009~\cite{Aaron:2009aa}, which we call HERA1. HERA2 is the combination of HERA Run I measurements of about 100 pb$^{-1}$ of $e^{+}p$ and 15 pb$^{-1}$ of $e^{-}p$ data, and Run II measurements of 150 pb$^{-1}$ of $e^{+}p$ and 235 pb$^{-1}$ of $e^{-}p$ data, resulting in a total integrated luminosity of approximately 500 pb$^{-1}$. The individual H1 and ZEUS measurements used in the combination were published previously in Refs.~\cite{Aaron:2009bp,Aaron:2009kv,Adloff:1999ah,Adloff:2000qj,Adloff:2003uh,Aaron:2012qi,Andreev:2013vha,Collaboration:2010ry} and~\cite{Breitweg:1997hz,Breitweg:2000yn,Breitweg:1998dz,Chekanov:2001qu,Breitweg:1999aa,Chekanov:2002ej,Chekanov:2002zs,Chekanov:2003yv,Chekanov:2003vw,Chekanov:2009gm,Chekanov:2008aa,Abramowicz:2012bx,Collaboration:2010xc,Abramowicz:2014jak}. The two collaborations employed different experimental techniques and used different detectors and methods for kinematic reconstruction. Therefore the new HERA2 {\em combined} measurements exhibit a significantly reduced systematic uncertainty. The main goal of this paper is to analyze the impact of the HERA2 measurements on the CT14 global analysis. We replace the combined HERA1 data set used in the published CT14 PDFs~\cite{Dulat:2015mca} with the HERA2 set and examine the resulting changes in PDF central values and uncertainties. Also, we study the dependence of the goodness of fit upon kinematic cuts on $Q$ and $x$, as it was suggested~\cite{Abramowicz:2015mha} that the low-$Q^2$ HERA2 data are not well fitted by the CT10 and CT14 PDFs.
Related studies of the impact of HERA2 data in the context of the MMHT14 and NNPDF3.0 fits can be found in Refs.~\cite{Harland-Lang:2016yfn,Thorne:2015caa,Rojo:2015nxa}. To this end, the CTEQ-TEA PDFs have been refitted at next-to-leading order (NLO) and next-to-next-to-leading order (NNLO) by using the global CT14 data ensemble, but with the HERA2 measurements in place of HERA1. The new PDFs obtained after the refitting procedure are named CT14$_\textrm{HERA2}$, to distinguish them from CT14. The HERA2 data set has 1120 data points in the fitted region with $Q> 2$ GeV and $W^2>12.5$ GeV$^2$. There are 162 correlated systematic errors and seven procedural uncertainties, in addition to the luminosity uncertainty. When HERA2 is included in the global fit, there are in total 3287 data points in the CT14$_{\rm HERA2}$\ data ensemble, compared to 2947 in the original CT14 fits. Besides the replacement of HERA1 by HERA2, two other changes have been made in the data analysis. First, we have dropped the New Muon Collaboration (NMC) muon-proton inclusive DIS data on $F_2^p$~\cite{Arneodo:1996qe}, because those data cannot be fitted well. As concluded in Ref.~\cite{Pumplin:2002vw}, the NMC $F_2$ proton data are influenced by some unknown or underestimated systematic errors. Meanwhile, we continue to include the NMC proton-to-deuteron ratio data on $F_2^p/F_2^d$. Second, we updated the data table for the CMS 7 TeV $5\mbox{ fb}^{-1}$ inclusive jet experiment~\cite{Chatrchyan:2012bja}, which became available after the completion of the CT14 study, without appreciable effects on the PDFs. As in CT14~\cite{Dulat:2015mca}, the theoretical predictions for the majority of processes in the CT14$_{\rm HERA2}$\ fit are calculated at the NNLO level of accuracy.
In particular, an NNLO treatment~\cite{Guzzi:2011ew} of heavy-quark mass effects in neutral-current (NC) DIS is realized in the S-ACOT-$\chi$ scheme~\cite{Aivazis:1993kh,Aivazis:1993pi,Collins:1998rz,Tung:2001mv} and is essential for obtaining correct predictions for LHC electroweak cross sections~\cite{Gao:2013wwa,Lai:2010nw,Nadolsky:2008zw,Tung:2006tb}. However, the calculations for charged-current (CC) DIS and inclusive jet production are included at NLO only; in both cases, the complete NNLO contributions are not yet available. In Sec. II of Ref.~\cite{Dulat:2015mca}, we presented various arguments suggesting that the expected impact of the missing NNLO effects in jet production on the PDFs is small relative to current experimental errors. Similarly, the NNLO contribution to charged-current DIS, including massive charm scattering contributions \cite{Berger:2016inr}, is modest compared to the experimental uncertainties. It is useful to review quickly the advances in the CT14 global analysis, compared to CT10. {\em Regarding data:} The new LHC measurements of $W^{\pm}$ and $Z^{0}$ cross sections~\cite{Aad:2011dm,Chatrchyan:2013mza,Aaij:2012vn} directly probe the flavor separation of $u, \overline{u}$ and $d, \overline{d}$ partons in an $x$-range around $0.01$ that was not directly accessed by earlier experiments. The updated measurements of the electron charge asymmetry from the D\O\ collaboration~\cite{D0:2014kma} probe the $d$ quark PDF at $x>0.1$. These measurements are included in the CT14 and CT14$_{\rm HERA2}$\ analyses. {\em Regarding parametrization:} In the CT14 analysis, the description of variations in relevant PDF combinations, such as $d(x,Q)/u(x,Q)$ and $\bar d(x,Q)/\bar u(x,Q)$, is improved, as compared to CT10, by increasing the number of free PDF parameters from 25 to 28.
The functional form for the initial scale PDFs adopted by the CT14 fit is parametrized by Bernstein polynomials (reviewed in the Appendix of Ref.~\cite{Dulat:2015mca}), which have the property that a single polynomial is dominant in any given $x$ range, hence reducing undesirable correlations among the PDF parameters that sometimes occurred in CT10. Also, in the asymptotic limits of $x \rightarrow 0$ or $x\rightarrow 1$, the CT14 functional forms allow the ratios $d/u$ and $\bar d/\bar u$ to reach any values, so that these ratios are determined by the global fit; this is in contrast to the more constrained behavior of those PDF ratios assumed in the CT10 parametrization forms. The CT14$_{\rm HERA2}$\ fit adopts the same functional form for the initial scale parametrization as CT14, except for the strange quark and antiquark PDFs. More specifically, in the CT14$_\textrm{HERA2}$ analysis, we have used the CT14 PDF functional form~\cite{Dulat:2015mca} at the initial scale $Q_0 = 1.3\, {\rm GeV}$, \begin{equation}\label{eq:par} x \, f_a(x,Q_0) \, = \, x^{a_1} \, (1-x)^{a_2} \,P_a(x), \end{equation} where the $P_a(x)$ functions are linear combinations of Bernstein polynomials. In the CT14 fit~\cite{Dulat:2015mca}, the strange quark PDF is parametrized according to Eq.~(\ref{eq:par}), with $P_{s}(x)$ being a constant. There, we have tied $a_{1}$ to the common $a_{1}$ of $\bar{u}$ and $\bar{d}$, and assumed $s(x) = \bar{s}(x)$ in the analysis. Thus, we have just two parameters for the strange quark and antiquark PDFs in our standard CT14 analysis: $a_{2}$ and the normalization. With this limitation on $s(x,Q_{0})$, we find that it is necessary to extend the strange quark uncertainty by adding two ``extreme strange'' PDFs to the set of Hessian error PDFs.
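To make the parametrization of Eq.~(\ref{eq:par}) concrete, here is a minimal sketch. The coefficient values are purely illustrative (not the fitted CT14 ones), and the choice of $y=\sqrt{x}$ as the Bernstein argument is one plausible convention; the actual basis used by CT14 is detailed in the Appendix of Ref.~\cite{Dulat:2015mca}:

```python
from math import comb, sqrt

def bernstein(k, n, y):
    """Degree-n Bernstein basis polynomial B_{k,n}(y) = C(n,k) y^k (1-y)^(n-k)."""
    return comb(n, k) * y**k * (1 - y) ** (n - k)

def xf(x, a1, a2, coeffs):
    """x f_a(x,Q0) = x^a1 (1-x)^a2 P_a(x), with P_a a linear combination of
    Bernstein polynomials in y = sqrt(x) (illustrative choice)."""
    y = sqrt(x)
    n = len(coeffs) - 1
    P = sum(c * bernstein(k, n, y) for k, c in enumerate(coeffs))
    return x**a1 * (1 - x) ** a2 * P

# Bernstein polynomials of a given degree form a partition of unity, so with
# all coefficients equal to 1 the shape reduces to x^a1 (1-x)^a2 exactly
val = xf(0.3, a1=0.5, a2=3.0, coeffs=[1.0] * 5)
assert abs(val - 0.3**0.5 * 0.7**3.0) < 1e-12
```

Each $B_{k,n}$ peaks near $y=k/n$, which is the localization property quoted above for reducing correlations among the fitted parameters.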
In the CT14$_\textrm{HERA2}$ PDFs, we use a different technique to avoid underestimating the strangeness uncertainty provided by the Hessian error PDF set: while in the published CT14 PDFs we set $a_1(s)=a_1(\bar s)= a_1(\bar d) = a_1(\bar u)$, in the CT14$_{\rm HERA2}$\ fit we allow $a_1(s)=a_1(\bar s)$ to differ from $a_1(\bar d) = a_1(\bar u)$. By freeing the parameter $a_{1}(s)$, we find that it is not necessary to construct additional extreme strange quark PDFs. So, whereas the CT14 error PDFs include two extreme strange and two extreme gluon PDFs, the CT14$_\textrm{HERA2}$ error PDFs include only two extreme gluon PDFs to model the uncertainty of the gluon PDF in the very small $x$ region. Thus the total number of error PDFs is the same for CT14 and CT14$_\textrm{HERA2}$, {\it viz.} 56 error PDFs. To summarize, we use this parametrization, differing from the standard CT14 parametrization \cite{Dulat:2015mca} only by the addition of one free parameter for $s(x,Q_{0})$; and we refit the CT14 data set, with the HERA1 combined data replaced by the HERA2 combination, after dropping the NMC muon-proton inclusive DIS data on $F_{2}^{p}$~\cite{Arneodo:1996qe} and correcting the data table for the CMS 7 TeV $5\mbox{ fb}^{-1}$ inclusive jet experiment~\cite{Chatrchyan:2012bja}. The rest of the paper summarizes the findings of the CT14$_{\rm HERA2}$\ global analysis, presented in several parts: \begin{itemize} \item{Section 2 concerns the goodness-of-fit for this new QCD global analysis, with special emphasis on the quality of the fit to the HERA2 combined data. We find a large value of $\chi^{2}/N_{pts}$ for a subset of the HERA2 measurements, from $e^{-}p$ scattering, and we discuss the origin of this increase.} \item{Section 3 describes a study of the role of HERA2 data points at low $Q$. This is studied by excluding low-$Q$ data points and refitting the PDFs.} \item{Section 4 concerns the changes of the PDFs themselves.
We find some changes from CT14 to CT14$_{\rm HERA2}$, but they are not significant within the standard CTEQ estimates of PDF uncertainties.} \item{Section 5 is a summary of our conclusions.} \end{itemize} In the end, we find that the differences between the CT14$_\textrm{HERA2}$ and CT14 PDFs are smaller than the uncertainties of the PDFs, as estimated by the Hessian method of error propagation. For this reason we recommend that the standard CT14 PDFs continue to be used for making predictions to compare against current and future LHC data. However, we will make the CT14$_\textrm{HERA2}$ PDFs available in the LHAPDF format for specialized studies, such as those that are sensitive to the behavior of the strange (anti)quark PDFs. \section{The global analysis with the final HERA2 combined data \label{sec2}} As we explained in the introduction, when constructing a PDF ensemble for general-purpose applications, the CTEQ-TEA global analysis selects the experimental data points at large enough $Q$ and $W$, where contributions beyond leading-twist QCD are reduced. With the default lower $Q$ cut on the selected data points, $Q \geq Q_{\textrm{cut}} = 2$ GeV, the HERA1 ensemble contains 579 data points, while that of HERA2 contains 1120 data points. In Table \ref{tbl:tbl1} we summarize the results for the total $\chi^2$ values of the HERA1 combined data (column 2) and HERA2 combined data (column 3), for both the NLO and NNLO approximations of QCD. The rows CT14(NLO) and CT14(NNLO) use the published CT14 PDFs, with no refitting; they were fitted with HERA1 data. The rows NLO10, NLO55, NNLO10 and NNLO55 are refits with a slightly more flexible parametrization for the strange quark PDF and with the changes in the non-HERA data sets described in Sec. I; NLO10 and NNLO10 use only HERA1 data; NLO55 and NNLO55 use HERA1 data with weight 0.5 and HERA2 data with weight 0.5 in the global $\chi^{2}$ sum.
The rows ${\rm CT14}_{\rm HERA2(NLO)}$ and ${\rm CT14}_{\rm HERA2(NNLO)}$ use the same parametrization and non-HERA data as NLO10 and NNLO10, but they use only HERA2 data. Note that $\chi^{2}_{\rm HERA1}$ increases, and $\chi^{2}_{\rm HERA2}$ decreases, as we vary the balance of HERA1 and HERA2 data used in the analysis, from weights $\{1,0\}$ to $\{0.5,0.5\}$ to $\{0,1\}$. However, the changes are not large, given the number of data points, 579 and 1120 respectively. We have also compared the $\chi^{2}$ values for {\em non-HERA data} for the new fits, and we find that $\chi^{2}_{\rm non-HERA}$ is essentially unchanged as we vary the balance of HERA1 and HERA2 data, with the three weighting choices. This shows that the HERA1 and HERA2 data sets are equally consistent with the non-HERA data. \begin{table}[h] \begin{tabular}{|l|c|c|} \hline & $\chi^2_{\textrm{HERA1}}$ (wt); $N_{\textrm{pts}}=579$ & $\chi^2_{\textrm{HERA2}}$ (wt); $N_{\textrm{pts}}=1120$ \tabularnewline \hline \hline CT14(NLO) & 590 & 1398 \tabularnewline \hline NLO10 & 576 (1.0) & 1404 (0.0) \tabularnewline NLO55 & 586 (0.5) & 1374 (0.5) \tabularnewline CT14$_\textrm{HERA2(NLO)}$ & 595 (0.0) & 1373 (1.0) \tabularnewline \hline \hline CT14(NNLO) & 591 & 1469 \tabularnewline \hline NNLO10 & 583 (1.0) & 1458 (0.0) \tabularnewline NNLO55 & 596 (0.5) & 1411 (0.5) \tabularnewline CT14$_\textrm{HERA2(NNLO)}$ & 610 (0.0) & 1402 (1.0) \tabularnewline \hline \hline \end{tabular} \caption{$\chi^2$ values for the HERA Run I data set ($\equiv$ HERA1) and the HERA Run I+II combined data set ($\equiv$ HERA2). The CT14 NLO and NNLO results use the published CT14 PDFs, i.e., without refitting. The other results are fits made with weights $\{1,0\}$, $\{0.5,0.5\}$ or $\{0,1\}$ for the HERA1 and HERA2 data sets, respectively. 
[The $\{1,0\}$ fits are not identical to CT14 because they were made (i) with a slightly more flexible parametrization for the strange quark PDF, (ii) without the NMC $F_{2}^{p}$ measurements, and (iii) with an updated data table for CMS jet production.] \label{tbl:tbl1}} \end{table} Furthermore, we find that the NLO fit has a lower value of the global $\chi^2$ than the NNLO fit. This is a robust result: it is independent of whether the HERA1 or HERA2 data set is used. It is also still true if $\alpha_s(m_Z)$, $m_b$, and $m_c$ are varied as free parameters---separately, of course, for NLO and NNLO. The conclusions still hold if the kinematic cut $Q_{\textrm{cut}}$ is raised, cf. Sec.~\ref{kinematic-cuts}. In order to understand the impact of the HERA2 data, we focus on some more detailed quantitative studies in Figs.~1--3. Considering the value of the {\em global} $\chi^2$ per number of points ($N_{pts}$), i.e., the {\em overall} goodness of fit for the QCD global analysis, we find $\chi^2/N_{pts}$ to be 1.07 and 1.09 at NLO and NNLO, respectively, which is about the same as for the standard CT14 global analysis~\cite{Dulat:2015mca}. However, the values of $\chi^2_{\rm HERA2}/N_{pts}$ for the HERA2 data after refitting are found to be 1.22 and 1.25 at NLO and NNLO, respectively. (For comparison, $\chi^2_{\rm HERA1}/N_{pts}$ for the HERA Run I ensemble in the CT14 fits is about 1.02 at either NLO or NNLO.) These large values of $\chi^{2}_{\rm HERA2}/N_{pts}$ raise a question: do they come from a few isolated data points, or from a systematic difference between data and theory? To address this question, in Fig.~\ref{chi2res} we show the distribution of the reduced-$\chi^{2}$ ($\equiv \chi^{2}_{\rm re}$) values for individual data points, as they are distributed over the $(x,Q)$ kinematic plane.
The definition of $\chi^{2}_{\rm re}$ is, for an individual data point $k$, \begin{equation}\label{eq:reX2} \chi^{2}_{{\rm re},k} = (D_{k} - T_{k} - \sum_{\alpha} \lambda_{\alpha}\beta_{k\alpha})^2/s_{k}^{2}, \end{equation} where $D_{k}$ is the central data value, $T_{k}$ is the theory value, $s_{k}$ is the uncorrelated error, and the sum over $\alpha$ represents an effective shift of the central value $D_k$ caused by the optimized systematic nuisance parameters $\lambda_\alpha$. [See, e.g., Eq.~(4) in the original CT10 analysis~\cite{Lai:2010vv}.] Thus, $\chi^{2}_{{\rm re}, k}$ represents our best measure of the difference between data and theory for the $k$th data point. The total $\chi^2_{exp}$ for the experimental data set quoted in Table~\ref{tbl:tbl1} (where ``exp'' stands for HERA1 or HERA2) is obtained by summing $\chi^{2}_{{\rm re}, k}$ over all experimental points and adding the penalty $R^2$ for deviations of the optimized nuisance parameters $\lambda_\alpha$ from their central values at 0, \begin{equation} \chi^2_{exp} = \sum_{k=1}^{N_{pts}} \chi^{2}_{{\rm re}, k} + \sum_{\alpha}\lambda^2_{\alpha} \equiv \chi^{2}_{{\rm re}} + R^2. \label{chi2} \end{equation} To identify the source of the elevated total $\chi^{2}$ for the HERA2 ensemble, we first scrutinize the contributions $\chi^{2}_{{\rm re}, k}$ from the individual points. Figure~\ref{chi2res} illustrates the values of $\chi^{2}_{{\rm re},k}$ when the HERA1 data are compared to the CT14 (NLO and NNLO) theory, and the HERA2 data are compared to the CT14$_\textrm{HERA2}$ (NLO and NNLO) theory. The bottom-right inset also shows different values of the geometric scaling variable $A_{gs}$ that are discussed in Sec.~\ref{kinematic-cuts}.
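To make the role of the nuisance parameters concrete, consider the simplest case of a single correlated systematic error. Minimizing the $\chi^2$ of Eq.~(\ref{chi2}) over the single $\lambda$ is then a quadratic problem with the closed-form solution $\lambda^{*}=\big[\sum_k \beta_k (D_k-T_k)/s_k^2\big]\big/\big[1+\sum_k \beta_k^2/s_k^2\big]$. A minimal sketch with made-up numbers follows; the general case with several $\lambda_\alpha$ requires solving a small linear system instead:

```python
def chi2_one_nuisance(D, T, s, beta):
    """chi2 = sum_k (D_k - T_k - lam*beta_k)^2 / s_k^2 + lam^2,
    profiled over the single nuisance parameter lam (closed form)."""
    num = sum(b * (d - t) / sk**2 for d, t, sk, b in zip(D, T, s, beta))
    den = 1.0 + sum(b**2 / sk**2 for sk, b in zip(s, beta))
    lam = num / den                     # optimal correlated shift
    chi2_re = sum((d - t - lam * b) ** 2 / sk**2
                  for d, t, sk, b in zip(D, T, s, beta))
    return chi2_re + lam**2, lam

# toy data whose residuals are mostly positive: the fit absorbs part of them
# into the correlated shift, so chi2 drops below its lambda = 0 value
D, T, s, beta = [1.2, 1.1, 0.9], [1.0, 1.0, 1.0], [0.1, 0.1, 0.1], [1.0, 1.0, 1.0]
chi2, lam = chi2_one_nuisance(D, T, s, beta)
chi2_unshifted = sum((d - t) ** 2 / sk**2 for d, t, sk in zip(D, T, s))
assert chi2 < chi2_unshifted
```

The $\lambda^2$ term is exactly the $R^2$ penalty of Eq.~(\ref{chi2}) for one nuisance parameter: it keeps the fit from shifting the data arbitrarily far along the correlated direction.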
\begin{figure}[htbp] \includegraphics[width=0.40\textwidth] {./FIGURES/FIG1.NLO159.eps} \includegraphics[width=0.40\textwidth] {./FIGURES/FIG1.NNLO159.eps} \includegraphics[width=0.40\textwidth] {./FIGURES/FIG1.NLO160.eps} \includegraphics[width=0.40\textwidth] {./FIGURES/FIG1.NNLO160.eps} \caption{The distribution of $\chi^2_{{\rm re},k}$ of the HERA1 and HERA2 ensembles in the $(x,Q)$ plane, for the CT14 (upper row) and CT14$_\textrm{HERA2}$ (lower row) fits, respectively.} \label{chi2res} \end{figure} In the subfigures for HERA2 (either at NLO or NNLO), we notice that points with $\chi^{2}_{{\rm re},k} > 4$ are rather uniformly distributed throughout the $(x,Q)$ phase space, without being concentrated in a particular region. In other words, the elevated values of $\chi^2_{\rm HERA2}$ in Table~\ref{tbl:tbl1} do not arise from a single $(x,Q)$ kinematic region. \setcounter{subsubsection}{0} \subsubsection{Varied statistical weights for the HERA2 data} An interesting way to assess the impact of the HERA2 combined data is to {\em vary the weight} given to this data set in the global $\chi^{2}$ function. Namely, we increase the statistical weight $w$ of the HERA2 data; that is, we include $w\cdot \chi^2_{\rm HERA2}$, with $w > 1$, instead of the default $\chi^2_{\rm HERA2}$ (with $w = 1$), in the global function $\chi^2$. The purpose here is to see whether increasing the HERA2 weight will induce large changes in the PDFs. First, we examine how increasing the weight of the HERA2 data reduces $\chi^{2}/N_{pts}$ for the HERA2 data. Figure~\ref{fig:CHIvWGT} shows $\chi^{2}/N_{pts}$ for the HERA2 combined data ($N_{pts} = 1120$) with CT14$_{\rm HERA2}$-like fits generated with the weight factor varying from 1 to 6, at both NLO and NNLO accuracy.
The upper-left plot shows $\chi^{2}/N_{pts}$; the upper-right plot shows $\chi^{2}_{\rm re}/N_{pts}$; and the lower one shows $R^2$, the sum of the quadratic penalties on the optimized systematic shifts in our treatment of correlated systematic errors as nuisance parameters~\cite{Lai:2010vv}. Of course, increasing the weight of the HERA2 data must cause $\chi^{2}/N_{pts}$ to decrease for that data. But the change in $\chi^{2}$ is not large---only about $-5\%$ for a factor of 6 extra weighting. The results are similar for NLO and NNLO. \begin{figure}[htbp] \includegraphics[width=0.40\textwidth] {./FIGURES/FIG2.X2vWt.eps} \includegraphics[width=0.40\textwidth] {./FIGURES/FIG2.reX2vWt.eps} \includegraphics[width=0.40\textwidth] {./FIGURES/FIG2.R2vWt.eps} \caption{Dependence of $\chi^2/N_{pts}$ (upper left), $\chi^2_{re}/N_{pts}$ (upper right), and the $R^{2}$ penalty (lower panel) for the HERA2 data on the statistical weight assigned to the HERA2 data ensemble; the PDFs are refitted for each weight. \label{fig:CHIvWGT}} \end{figure} Secondly, as the weight of the HERA2 data set is increased, the resulting PDFs change, too. Figure~\ref{fig:PDFvWGT} illustrates this by plotting the ratio of the CT14$_\textrm{HERA2}$ PDF to the CT14 PDF, as a function of the weight factor assigned to the HERA2 data. The HERA2 weights range from 1 to 6. The uncertainty band of the CT14 PDF is also shown, evaluated at the 90\% confidence level (C.L.). All PDFs are plotted at $Q = 1.3$ GeV. For the gluon, as the HERA2 weight increases, the CT14$_\textrm{HERA2}$ PDF decreases at $x\lesssim 10^{-3}$ and decreases rapidly at $x > 0.4$; for intermediate $x$ values, $g(x,Q_{0})$ varies by a few percent. For the up quark, the PDF exhibits a modest fractional increase in the central $x$ region (for $0.01 < x < 0.5$) relative to its PDF error band, as the HERA2 weight increases. The down quark PDF has a similar behavior for $0.01 < x < 0.5$, but with a larger magnitude of variation than the up quark.
Similarly, for the up antiquark, the PDF exhibits a modest fractional increase for $x$ around $0.1$ to $0.2$, as the HERA2 weight increases; and the down antiquark PDF has a similar increase for $x$ around $0.3$. In contrast to the up and down flavors, the strange quark PDF is reduced relative to CT14. The reduction of $s(x,Q_{0})$ is mainly caused by freeing the parameter $a_{1}(s)$. But, as we weight the HERA2 data more heavily, $s(x,Q_{0})$ decreases even further. We note that the same conclusion also holds for the CT14 NLO PDFs. \begin{figure}[htbp] \includegraphics[width=0.49\textwidth] {./FIGURES/FIG3.glu.eps} \includegraphics[width=0.49\textwidth] {./FIGURES/FIG3.uqk.eps} \includegraphics[width=0.49\textwidth] {./FIGURES/FIG3.dqk.eps} \includegraphics[width=0.49\textwidth] {./FIGURES/FIG3.sqk.eps} \includegraphics[width=0.49\textwidth] {./FIGURES/FIG3.ubar.eps} \includegraphics[width=0.49\textwidth] {./FIGURES/FIG3.dbar.eps} \caption{ Comparison of the CT14$_{\rm HERA2}$\ PDFs at $Q=1.3$ GeV within the CT14(NNLO) uncertainty band. Each curve represents the ratio of CT14$_\textrm{HERA2}$ / CT14 for a particular value of the weight assigned to the HERA2 data in the global analysis. The weight factors vary from 1 to 6. \label{fig:PDFvWGT}} \end{figure} \section{Impact of data selection cuts on the fit to HERA2 data} \label{kinematic-cuts} The HERA2 publication~\cite{Abramowicz:2015mha} found that both the HERAPDF2.0 PDFs and the $\chi^2$ values depend significantly on the choice of $Q_{\rm cut}$, the minimum value of the four-momentum transfer $Q$ in the HERA2 analysis. In this section we explore the impact of variations of $Q_{\rm cut}$ on the CT14$_{\rm HERA2}$\ global analysis. We perform multiple fits of CT14$_{\rm HERA2}$\ PDFs, in which $Q_{\rm cut}$ is varied from 2 to 6 GeV, and compare the results to the previous findings of the CT14 analysis.
For every choice of $Q_{\rm cut}$, we report the total $\chi^{2}$, the reduced $\chi^2$ ({\it i.e.}, $\chi^2_{\rm re}$), and the systematic shift penalty $R^2$ defined by Eq.~(\ref{chi2}), together with the number of data points $N_{pts}$ in parentheses. Tables~\ref{tbl:subprocHERA1} and \ref{tbl:subprocHERA2} show these quantities for the HERA1 and HERA2 data, compared to the theoretical predictions based on the CT14 NNLO and CT14$_{\rm HERA2}$\ NNLO PDFs, respectively. The lower part of each table shows the breakdown of $\chi^2_{\rm re}$ and the numbers of points over the four contributing DIS subprocesses in NC and CC interactions: NC $e^{+}p$, NC $e^{-}p$, CC $e^{+}p$, and CC $e^{-}p$. In the CT14 analysis the subsets of HERA1 data have small values of $\chi^2_{\rm re}/N_{pts}$, as shown in Table~\ref{tbl:subprocHERA1}. For the $e^{-}p$ processes, $\chi^{2}_{\rm re}/N_{pts}$ is less than $1$; for the $e^{+}p$ processes, $\chi^{2}_{\rm re}/N_{pts}$ is approximately $1$. Also, there is no dependence on $Q_{\rm cut}$, except for a small decrease in $\chi^{2}_{\rm re}/N_{pts}$ for the case of NC $e^{+}p$. The {\em total} $\chi^{2}/N_{pts}$ decreases with $Q_{\rm cut}$ because the NC $e^{+}p$ subset dominates the total. We conclude that, for the CT14/HERA1 analysis, the standard choice $Q_{\rm cut} = 2$ GeV is not qualitatively different from the other $Q_{\rm cut}$ choices in the 2 to 6 GeV range.
\begin{table}[ht] \begin{tabular}{|l|c|c|c|c|c|} \hline $Q_{\textrm{cut}}$ [GeV] & No cut & 2.00 & 3.87 & 4.69 & 5.90 \tabularnewline \hline $\chi^2/N_{pts}(N_{pts})$ & (647) & 1.02 (579) & 0.93 (516) & 0.93 (493) & 0.91 (470) \tabularnewline \hline $R^2/114(R^2)$ & & 0.43(48.80)& 0.24(27.34)& 0.25(28.38)& 0.25(28.48)\tabularnewline \hline $\chi^2_{\rm re}/N_{pts}(N_{pts})$ & (647) & 0.94 (579) & 0.89 (516) & 0.87 (493) & 0.84 (470) \tabularnewline \hline \hline NC $e^+ p$ & (434) & 1.05 (366) & 0.96 (303) & 0.96 (280) & 0.92 (257) \tabularnewline \hline NC $e^- p$ & (145) & 0.74 (145) & 0.75 (145) & 0.75 (145) & 0.75 (145) \tabularnewline \hline CC $e^+ p$ & (34) & 0.97 (34) & 0.98 (34) & 0.99 (34) & 0.99 (34) \tabularnewline \hline CC $e^- p$ & (34) & 0.53 (34) & 0.53 (34) & 0.53 (34) & 0.53 (34) \tabularnewline \hline \end{tabular} \caption{Goodness-of-fit characteristics for the HERA1 combined data with the specified $Q_{\rm cut}$ selection constraints, and theory predictions based on the CT14 NNLO PDFs determined with the nominal cut $Q \geq Q_{\rm cut} = 2$ GeV. The four lowest rows give $\chi^{2}_{\rm re}/N_{pts}$ for each DIS subprocess. \label{tbl:subprocHERA1}} \end{table} In the CT14$_{\rm HERA2}$/HERA2 analysis (Table~\ref{tbl:subprocHERA2}), the values of $\chi^{2}_{\rm re}/N_{pts}$ are larger than 1 for all the subprocesses, and much larger in the cases of $e^{-}p$ scattering. The PDFs for the different columns of Table~\ref{tbl:subprocHERA2} were refitted for each choice of $Q_{\rm cut}$. Even with the refitting, the values of $\chi^{2}_{\rm re}/N_{pts}$ remain large. The dependence of $\chi^{2}_{\rm re}/N_{pts}$ on $Q_{\rm cut}$ is small for NC $e^{+}p$ and negligible for the other three cases.
\begin{table}[ht] \begin{tabular}{|l|c|c|c|c|c|} \hline $Q_{\textrm{cut}}$ [GeV] & No cut & 2.00 & 3.87 & 4.69 & 5.90 \tabularnewline \hline $\chi^2/N_{pts}(N_{pts})$ & (1306) & 1.25 (1120) & 1.19 (967) & 1.21 (882) & 1.23 (842) \tabularnewline \hline $R^2/170(R^2)$ & & 0.51 (87.47)& 0.29(49.11)& 0.29 (48.99)& 0.29 (49.40)\tabularnewline \hline $\chi^2_{\rm re}/N_{pts}(N_{pts})$ & (1306) & 1.17 (1120) & 1.14 (967) & 1.15 (882) & 1.18 (842) \tabularnewline \hline \hline NC $e^+p$ & (1066) & 1.11 (880) & 1.06 (727) & 1.06 (642) & 1.09 (602) \tabularnewline \hline NC $e^-p$ & (159) & 1.45 (159) & 1.44 (159) & 1.45 (159) & 1.45 (159) \tabularnewline \hline CC $e^+p$ & (39) & 1.10 (39) & 1.10 (39) & 1.10 (39) & 1.10 (39) \tabularnewline \hline CC $e^-p$ & (42) & 1.52 (42) & 1.50 (42) & 1.50 (42) & 1.50 (42) \tabularnewline \hline \end{tabular} \caption{Goodness-of-fit characteristics for the HERA2 combined data with the specified $Q_{\rm cut}$ selection constraints, and theory predictions based on the CT14$_{\rm HERA2}$\ NNLO PDFs refitted with the same $Q_{\rm cut}$ value. \label{tbl:subprocHERA2}} \end{table} In contrast to CT14, in the CT14$_{\rm HERA2}$\ analysis we see only small variations in $\chi^{2}_{\rm re}/N_{pts}$ among the four values of $Q_{\rm cut}$. We note that the apparent large change in $\chi^{2}/N_{pts}$ from $Q_{\rm cut}$ of 2 to 3.87\,GeV, shown in the second row of Table \ref{tbl:subprocHERA2}, is due to the change in the $R^2$ values in the third row. Recall that $\chi^2$ is given by the sum of $\chi^2_{\rm re}$, which changes little, and $R^2$, which decreases as $Q_{\rm cut}$ is raised from 2 GeV to 3.87 GeV. With the larger $Q_{\rm cut}$ value of 3.87\,GeV, there are fewer data points to be fitted with the same number of correlated systematic errors (170 in the CT14$_{\rm HERA2}$\ analysis); hence the $R^2/170$ value decreases, from 0.51 to 0.29.
Figure~\ref{fig:cut1} shows the results for $\chi^{2}$ versus $Q_{\rm cut}$ of Table \ref{tbl:subprocHERA2} in graphical form. The behavior of $\chi^2/N_{pts}$ for the HERA2 data (the sum of all four subprocesses) is illustrated in the left panels of Fig.~\ref{fig:cut1}. The graphs show the dependence on $Q_{\textrm{cut}}$ in the CT14$_{\rm HERA2}$\ analysis at both NLO and NNLO. The upper panel shows $\chi^{2}$ and the middle panel the reduced $\chi^{2}$, as functions of $Q_{\rm cut}$. The values of $\chi^{2}/N_{pts}$ for the HERA2 data exhibit a {\em shallow minimum} for $Q_{\rm cut}$ in the range $3.5\,{\rm GeV} \lesssim Q_{\rm cut} \lesssim 4$ GeV. The reduction of $\chi^{2}_{\rm re}$ at $Q_{\rm cut} \sim 4$ GeV, compared to our standard choice of $Q_{\rm cut}=2$\,GeV, from 1.17 to 1.15, does not seem significant. An interesting feature of the graphs is that near the minimum the NNLO and NLO results are equal, whereas NNLO has a slightly larger $\chi^{2}$ on either side of the minimum. \begin{figure}[htbp] \includegraphics[width=0.49\textwidth] {./FIGURES/FIG6.X2vQcut.eps} \includegraphics[width=0.49\textwidth] {./FIGURES/FIG6.X2vAcut.eps} \includegraphics[width=0.49\textwidth] {./FIGURES/FIG6.reX2vQcut.eps} \includegraphics[width=0.49\textwidth] {./FIGURES/FIG6.reX2vAcut.eps} \includegraphics[width=0.49\textwidth] {./FIGURES/FIG6.R2vQcut.eps} \includegraphics[width=0.49\textwidth] {./FIGURES/FIG6.R2vAcut.eps} \caption{ Left panels: $\chi^{2}/N_{pts}$ (top), reduced $\chi^{2}/N_{pts}$ (middle), and $R^{2}$ (bottom) for the HERA2 data and CT14$_{\rm HERA2}$\ PDFs, as a function of $Q_{\rm cut}$. Right panels: The same as a function of the cutoff value of the geometric scaling variable $A_{\rm gs}$. \label{fig:cut1}} \end{figure} The lower panel in Fig.\ \ref{fig:cut1} shows $R^{2}$, the total quadratic penalty for the systematic errors, as a function of $Q_{\rm cut}$. The value of $R^2$ decreases significantly, from 87 to 49, as $Q_{\rm cut}$ is raised from 2 GeV to 3.87 GeV.
For ideal Gaussian systematic errors we would expect $R^{2} \sim 170$ for 170 systematic errors. When the low-$Q$ data points are discarded by the cut, the systematic errors become less important. However, this reduction of $R^{2}$ is shared among 1120 total data points, so the overall net change in $\chi^{2}/N_{pts}$ is mild.
\setcounter{subsubsection}{0}
\subsubsection{Dependence on the geometric rescaling variable}
While Fig.~\ref{fig:cut1} examines the dependence of the fits on $Q$ cuts that are imposed independently of the Bjorken $x$ value, it is equally instructive to consider the dependence of $\chi^2$ on correlated cuts in $Q$ and $x$. For this purpose we define the geometric scaling variable $A_{\rm gs} = x^{\lambda} Q^2$, where $\lambda$ is a parameter set equal to $0.3$ in this study~\cite{Stasto:2000er,Caola:2009iy,Lai:2010vv}. The $A_{\rm gs}$ variable can be utilized to explore the impact of data in kinematic regions of both small $Q$ and small $x$. We can test whether the goodness of fit improves if we exclude data at small $\{x,Q\}$. The variable $A_{\rm gs}$ has been used in previous analyses to search for possible deviations from DGLAP evolution due to saturation or other small-$x$ phenomena~\cite{Stasto:2000er,Caola:2009iy}. The basic method is to (i) generate PDFs using data in the kinematic region above the $A_{\rm gs}$ cut in the $(x,Q)$ plane, where NLO/NNLO DGLAP factorization is supposed to be valid; (ii) use the DGLAP evolution equations to evolve these PDFs down to the low-$x$ and low-$Q$ region below the $A_{\rm gs}$ cut, where one might expect possible deviations; (iii) finally, compare predictions to the data in the low-$A_{\rm gs}$ region, which were not used in the PDF determination. The portion of HERA2 data that is excluded by varying $(A_{\rm gs})_{\rm cut}$ from 1.0 to 6.0 is shown in Fig.~\ref{chi2res} (the lower right inset).
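For concreteness, the cut on $A_{\rm gs}=x^{\lambda}Q^2$ with $\lambda=0.3$ amounts to a simple selection in the $(x,Q)$ plane; the kinematic points in the sketch below are made up for illustration.

```python
import numpy as np

LAMBDA = 0.3  # value of lambda used in this study

def a_gs(x, Q):
    """Geometric scaling variable A_gs = x**lambda * Q**2 (Q in GeV)."""
    return x ** LAMBDA * Q ** 2

# Hypothetical DIS points (x, Q); the analysis keeps Q > 2 GeV and
# fits only the points above the chosen A_gs cut.
x = np.array([1e-4, 1e-3, 1e-2, 1e-1])
Q = np.array([2.5, 3.0, 5.0, 10.0])

cut = 1.5
kept = (Q > 2.0) & (a_gs(x, Q) > cut)
# The small-{x, Q} corner of the plane is the part that gets excluded.
```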
The results of the fits for various choices of $(A_{\rm gs})_{\rm cut}$, at both NLO and NNLO accuracy, are illustrated in the right panels of Fig.~\ref{fig:cut1}. (The upper panel shows $\chi^{2}$, the middle panel the reduced $\chi^{2}$, and the lower panel $R^{2}$.) The values of $\chi^{2}/N_{pts}$ for four choices of $(A_{\rm gs})_{\rm cut}$ are shown. Here, we consider only data points with $Q$ values greater than 2 GeV, in order to validate the application of the perturbative DGLAP evolution equations. We find that the behavior of $\chi^2$ exhibits only small, non-monotonic variations. Hence, we conclude that our analysis of the HERA2 data does not indicate clear deviations from DGLAP evolution. Alternatively, one could also include the data points below the $A_{\rm gs}$ cut (though still with $Q > 2$ GeV) in the calculation of $\chi^2$ in the final comparison, while fitting only the data above the $A_{\rm gs}$ cut. We reached a conclusion similar to that of the analogous study for the CT10 NLO PDFs, presented in the appendix of Ref.~\cite{Lai:2010vv}. For example, the value of $\chi^{2}_{\rm re}/N_{pts}$ of the combined HERA2 data set, with $A_{\rm gs} > 1.5$, increases by about $0.2$--$0.3$ units as compared to that without any $A_{\rm gs}$ cut. This result does not change much even when we use a more flexible gluon PDF, obtained by introducing one more nonperturbative shape parameter in the fit. Furthermore, the value of $\chi^{2}_{\rm re}/N_{pts}$ for the NLO fit is larger than that of the NNLO fit by about 0.1 unit, which is about the same size as the variation from including the $A_{\rm gs} > 1.5$ cut in the fit. This is comparable with the usual uncertainties and consistent with the above conclusion that the HERA2 data do not show clear deviations from DGLAP evolution.
\section{Comparison of CT14$_\textrm{HERA2}$ and CT14 PDFs}
In this section we describe the changes in the central values and uncertainties of the CT14$_\textrm{HERA2}$ PDFs, obtained from our global analysis with the weight of the HERA2 data set equal to 1, compared to the CT14 PDFs. Here, $Q$ is equal to the initial scale $Q_0=1.3$ GeV; also, only the NNLO PDFs are shown. At this low scale, the PDF uncertainties are magnified; they are reduced at electroweak scales as a consequence of DGLAP evolution. Additional plots can be found on the CTEQ public website \cite{cteqweb}. Figures~\ref{fig:ct14h2PDFs} and \ref{fig:ct14h2RATs} show plots where CT14$_\textrm{HERA2}$ (dashed red) is compared to CT14 (solid blue), including error bands. Some comments about this comparison are listed below.
\begin{itemize}
\item{The central value of the CT14$_\textrm{HERA2}$ gluon in the range $10^{-2} \lesssim x \lesssim 0.2$ is almost unchanged compared to CT14; it is larger by about 30\% at $x \approx 10^{-4}$, larger by a greater factor for $x > 0.5$, and smaller by about 10\% at $x \approx 0.3$. }
\item{The up and down quarks are generally slightly larger than (but close to) CT14 in the range $10^{-2}\lesssim x \lesssim 0.5$, where the CT14$_\textrm{HERA2}$ uncertainty band is comparable to that of CT14, whereas they are both systematically larger by about 5\% in the intermediate region $10^{-4}\lesssim x \lesssim 10^{-2}$. The CT14$_\textrm{HERA2}$/CT14 ratio decreases at $x \lesssim 10^{-4}$ in both cases. The down quark increases at $x>0.5$, while the up quark decreases slightly at $x \approx 0.5$. The slow oscillations in $d(x,Q_{0})$ reflect the behavior of the Bernstein polynomials in Eq. (1). }
\item{The strange quark central PDF is reduced over the entire $x$ range, mainly due to freeing one additional shape parameter describing the strange (anti)quark PDF; but this reduction is statistically insignificant and completely within the uncertainty of the previous PDF ensemble.
In particular, a reduction of approximately 50\% is observed at both $x\lesssim 10^{-3}$ and $x\gtrsim 0.5$. }
\item{The changes in the $\bar{u}$ and $\bar{d}$ quarks share similar features. These PDFs are almost unchanged for $10^{-2} \lesssim x \lesssim 0.2$. The $\bar{u}$ quark PDF increases by about 10\% at $x$ around 0.2, and the $\bar{d}$ quark PDF similarly increases at $x$ around 0.3. Both the $\bar{u}$ and $\bar{d}$ quarks, similar to the $s$ quark, decrease by large factors for $x \gtrsim 0.4$, where the gluon and down quark PDFs increase, as a consequence of the momentum sum rule. It is important to keep in mind that at $x > 0.5$ the antiquark PDFs take very small values; their behavior is highly uncertain and strongly depends on the parametrization form. }
\item {The individual PDF uncertainties do not change appreciably, except in the unconstrained $x$ regions. }
\item{We have verified that the change seen in the gluon, up and down quark PDFs mainly arises from replacing the HERA1 data (in the CT14 analysis) by the HERA2 data (in the CT14$_\textrm{HERA2}$ analysis). This was explicitly checked by comparing the CT14 PDFs to the result of yet another new fit, in which we used the exact same setup as in the CT14 global analysis, but with the HERA1 data replaced by the HERA2 data. }
\end{itemize}
\begin{figure}
\includegraphics[width=0.47\textwidth] {./FIGURES/FIG4.glu.eps}
\includegraphics[width=0.47\textwidth] {./FIGURES/FIG4.uqk.eps}
\includegraphics[width=0.47\textwidth] {./FIGURES/FIG4.dqk.eps}
\includegraphics[width=0.47\textwidth] {./FIGURES/FIG4.sqk.eps}
\includegraphics[width=0.47\textwidth] {./FIGURES/FIG4.ubr.eps}
\includegraphics[width=0.47\textwidth] {./FIGURES/FIG4.dbr.eps}
\caption{ Comparison of CT14$_\textrm{HERA2}$ (dashed red) and CT14 (solid blue) PDFs at $Q=1.3$ GeV. Flavors $g, u, d, s, \bar{u},\bar{d}$ are shown. The curves compare the central fits, plotted as ratios to CT14. The uncertainty bands are 90\% C.L.
uncertainties evaluated from the CT14 (shaded blue) and CT14$_\textrm{HERA2}$ (hatched red) error ensembles; both error bands are normalized to the corresponding central CT14 PDFs. All PDFs are from the NNLO QCD analysis. \label{fig:ct14h2PDFs}}
\end{figure}
Now we turn to certain {\em ratios} of PDFs. Figure~\ref{fig:ct14h2RATs} shows the most relevant effects of the HERA2 data on the PDF ratios at $Q_0=1.3$ GeV. Comparing CT14$_\textrm{HERA2}$ to CT14, we observe the following.
\begin{itemize}
\item{The ratio $d/u$ remains approximately the same for CT14$_\textrm{HERA2}$ and CT14, in both the central value and the uncertainty, for all values of $x$.}
\item{The ratio $\bar{d}/\bar{u}$ at $x\lesssim 0.1$ is about the same for CT14$_\textrm{HERA2}$ and CT14, with compatible uncertainties. However, it is larger for CT14$_\textrm{HERA2}$ as $x$ increases beyond 0.2, albeit with a large uncertainty. We note that this change mainly arises from using the more flexible parametrization of the strange quark PDF.} An interesting feature is that $\overline{d}/\overline{u}$ is greater than 1 for CT14$_{\rm HERA2}$\ in the large-$x$ region.
\item{The strange quark fraction $R_{s} = (s+\bar{s})/(\bar{u}+\bar{d})$ is an important PDF ratio that has been discussed recently in several QCD analyses~\cite{Aad:2012sb,Samoylov:2013xoa,Chatrchyan:2013uja,Alekhin:2014sya, Dulat:2015mca}. As in the CT14 global analysis, we assume that the $s$ and $\bar{s}$ PDFs are the same at the initial scale $Q_0$. We find that the value of $R_{s}$ for CT14$_\textrm{HERA2}$ is smaller than for CT14 in the $x$ range from $10^{-4}$ to $0.5$. This is mainly because the strange quark PDF decreases when going from CT14 to CT14$_{\rm HERA2}$, as discussed above.
} \end{itemize}
\begin{figure}[htp]
\includegraphics[width=0.47\textwidth] {./FIGURES/FIG5.dou.eps}
\includegraphics[width=0.47\textwidth] {./FIGURES/FIG5.dboub.eps}
\begin{center}
\includegraphics[width=0.47\textwidth] {./FIGURES/FIG5.Rs.eps}
\end{center}
\caption{Comparison of 90\% C.L. uncertainties on the ratios $d/u$, $\bar{d}/\bar{u}$ and $(s+\bar{s})/(\bar{u}+\bar{d})$ at $Q = 1.3$ GeV. The error bands are for the CT14 (solid blue) and CT14$_\textrm{HERA2}$ (dashed red) error ensembles. All PDFs are from the NNLO QCD analysis. \label{fig:ct14h2RATs}}
\end{figure}
\section{Discussion and Conclusions}
\label{sec:conclusions}
In this paper, we have presented the CT14$_{\rm HERA2}$\ parton distribution functions, constructed from a global analysis of QCD that uses the HERA Run I and II combined data set on $e^\pm p$ deeply inelastic scattering~\cite{Abramowicz:2015mha}. This compendium of 20 years of HERA data, reconciled as well as possible, including a comparative analysis of the systematic errors from the two collaborations, H1 and ZEUS, provides the most comprehensive information about DIS available today. A comparison of the current QCD analysis of these data (HERA2) to the CT14 global analysis of the previous generation of HERA data (HERA1) yields important insights about the structure of the nucleon, at the highest precision achieved. The main purpose of the paper is to examine the quality of agreement of perturbative QCD predictions with the HERA2 data and to discuss the impact of these data on the PDFs and their uncertainties used for a variety of LHC applications. We conclude that the CT14$_{\rm HERA2}$\ and CT14 PDFs do have some differences. However, the differences are smaller than the PDF uncertainties of the standard CT14 analysis. Some specific features of the CT14$_{\rm HERA2}$\ PDFs are elucidated in the paper.
\begin{itemize}
\item{Figure 2 shows values of $\chi^{2}/N_{pts}$ for the HERA2 data.
$\chi^2/N_{pts}$ is marginally smaller in the NLO analysis than at NNLO, but the difference is clearly negligible. In either case, $\chi^{2}$ decreases as the HERA2 data are included with increasing weight, at about the same rate for NLO and NNLO.}
\item{Figures 4 and 5 show that the HERA2 data slightly modify the $g$, $d$, and $u$ PDFs. The $s$ PDF decreases, mainly due to the use of a slightly more flexible parametrization for the strange quark PDF. The $\bar u$ and $\bar d$ PDFs decrease at large $x$, where the $g$ and $d$ PDFs increase, so as to satisfy the momentum sum rule. The most significant effect of the HERA2 data in the CT14$_{\rm HERA2}$\ analysis is seen in the ratio $\overline{d}/\overline{u}$, which is greater than 1 at very large $x$, although this change is much smaller than the size of the error band. Also, the strangeness fraction $R_{s}$ is roughly 20\% smaller than the standard CT14 $R_{s}$ in the intermediate range of $x$. This is mainly caused by the reduction in the strange quark PDF. }
\end{itemize}
Because the CT14 and CT14$_{\rm HERA2}$\ PDFs agree well within the PDF errors, we do not expect noticeable differences in their predictions for experimental observables at the LHC. We have explicitly checked that using the CT14$_{\rm HERA2}$\ and CT14 PDFs at NNLO gives almost the same predictions for the cross sections for $W^\pm$ and $Z$ production~\cite{Aad:2016naf,CMS:2015ois,Aad:2011dm, CMS:2011aa, Chatrchyan:2014mua}, as well as for associated $W^\pm$ and charm production~\cite{Chatrchyan:2013uja}, at LHC energies. In future CT analyses we may employ the HERA2 combined data as an important part of the global data set, together with new LHC data as they are published, such as low- and high-mass Drell-Yan processes and top quark differential distributions. For the present, we continue to recommend the CT14 PDFs for the analysis of LHC Run 2 experiments.
However, we make the CT14$_\textrm{HERA2}$ PDFs available in the LHAPDF format for specialized studies, such as those that are sensitive to behavior of strange (anti)quark PDFs. \begin{acknowledgments} This research was supported in part by the National Science Foundation under Grants No. PHY-1410972 and No. PHY-1417326; by the U.S. Department of Energy under Award No. DE-AC02-06CH11357 and Grants No. DE-SC0013681 and No. DE-SC0010129; by the National Natural Science Foundation of China under Grant No. 11465018; and by the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics under STFC Grant No.~ST/L000520/1. \end{acknowledgments} \bibliographystyle{h-elsevier3}
\section{Introduction}
\label{sec:intro}
There are many different formulas for evaluating the determinant of a matrix. Apart from the familiar Leibniz formula, there are the Laplace expansion, Dodgson's condensation, and Gaussian elimination. However, to the best of our knowledge, there is no formula in which Cayley's celebrated result \cite{Cayley} relating Pfaffians to determinants is transparent. In this work, we give a new formula which does precisely this. The formula uses the notion of Brauer diagrams. These parametrize the basis elements of the so-called Brauer algebra \cite{brauer}, which is important in the representation theory of the orthogonal group. Brauer diagrams are perfect matchings on a certain kind of planar graph. We shall prove in Theorem~\ref{thm:detm} (to be stated formally in Section~\ref{sec:doub}) that the determinant of an $n \times n$ matrix can be expanded as a sum over all Brauer diagrams of a certain weight function. Since perfect matchings are related to Pfaffians, we obtain a natural combinatorial interpretation of Cayley's beautiful result relating Pfaffians and determinants. Some connections between Brauer diagrams and combinatorial objects such as Young tableaux \cite{sundaram,terada,hallew} and Dyck paths \cite{marmat} have been noted in the literature. The connection between determinants and perfect matchings came up while studying the number of terms (including repetitions) in the determinants of Hermitian matrices, which turns out to be $(2n-1)!!$. The number of distinct terms in the determinant of symmetric and skew-symmetric matrices, on the other hand, is classical. This has been studied, among others, by Cayley and Sylvester \cite{muir}. In particular, Sylvester showed that the number of distinct terms in the determinant of a skew-symmetric matrix of size $2n$ is given by $(2n-1)!!\, v_{n}$, where $v_{n}$ satisfies
\begin{equation}
v_{n} = (2n-1)v_{n-1}-(n-1)v_{n-2}, \quad v_{0}=v_{1}=1.
\end{equation}
Aitken \cite{Aitken} has also studied recurrences for the number of terms in symmetric and skew-symmetric determinants. The number of terms in the symmetric determinant also appears in a problem in the American Mathematical Monthly proposed by Richard Stanley \cite{stanrior}. The spirit of this work is similar to that of works on combinatorial interpretations of identities and formulas in linear algebra \cite{jackson,foata1,straub,zeil1}, combinatorial formulas for determinants \cite{zeil2}, and for Pfaffians \cite{halton,knuth1,mahsubvin,egecioglu}. The plan of the paper is as follows. Two non-standard representations of a matrix are given in Section~\ref{sec:faux}. We recall the definition of Brauer diagrams in Section~\ref{sec:doub}. We will also define the weight and the crossing number of a Brauer diagram, and state the main theorem there. We will then digress to give a different combinatorial explanation for the number of terms in the determinant of these non-standard matrices in Section~\ref{sec:numterm}. The main idea of the proof is a bijection between terms in both determinant expansions and Brauer diagrams, which will be given in Section~\ref{sec:bij}. We study the crossing number of a Brauer diagram further and prove some properties about it in Section~\ref{sec:cross}. The main result is then proved in Section~\ref{sec:main}.
\section{Two Different Matrix Representations}
\label{sec:faux}
A word about notation: throughout, we will use $\text{\em \i}$ for the complex number $\sqrt{-1}$ and $i$ as an indexing variable. Let $A$ be a symmetric matrix and $B$ be a skew-symmetric matrix. Any matrix can be decomposed in two ways as a linear combination of $A$ and $B$, namely $A+B$ and $A+\text{\em \i} B$. We denote the former by $M_{F}$ and the latter by $M_{B}$. The terminology will be explained later.
That is, \begin{equation} \label{defm} (M_{F})_{i,j} = \begin{cases} a_{i,j} + b_{i,j} & i < j, \\ a_{j,i} - b_{j,i} & i > j, \\ a_{i,i} & i=j, \end{cases}; \quad (M_{B})_{i,j} = \begin{cases} a_{i,j} + \text{\em \i} b_{i,j} & i < j, \\ a_{j,i} - \text{\em \i} b_{j,i} & i > j, \\ a_{i,i} & i=j, \end{cases} \end{equation} where $a_{i,j}$ and $b_{i,j}$ are {\bf complex} indeterminates. For example, a generic $3\times 3$ matrix can be written in these two ways, \begin{equation} \label{mateg3} \begin{split} M^{(3)}_{F} &= \begin{pmatrix} a_{1,1} & a_{1,2}+ b_{1,2} & a_{1,3}+ b_{1,3} \\ a_{1,2}- b_{1,2} & a_{2,2} & a_{2,3}+ b_{2,3}\\ a_{1,3}- b_{1,3} & a_{2,3}- b_{2,3} & a_{3,3} \end{pmatrix}, \\ M^{(3)}_{B} &= \begin{pmatrix} a_{1,1} & a_{1,2}+ \text{\em \i} b_{1,2} & a_{1,3}+\text{\em \i} b_{1,3} \\ a_{1,2}- \text{\em \i} b_{1,2} & a_{2,2} & a_{2,3}+ \text{\em \i} b_{2,3}\\ a_{1,3}- \text{\em \i} b_{1,3} & a_{2,3}- \text{\em \i} b_{2,3} & a_{3,3} \end{pmatrix}. \end{split} \end{equation} Notice that $a_{i,j}$ is defined when $i \leq j$ and $b_{i,j}$ is defined when $i<j$. The determinant of the matrices is clearly a polynomial in these indeterminates. 
For example, the determinant of the matrices in \eqref{mateg3} is given by
\begin{equation} \label{deteg3}
\begin{split}
\det(M^{(3)}_{F})=& \; a_{{1,1}}a_{{2,2}}a_{{3,3}} -a_{{1,1}}{a_{{2,3}}}^{2} -a_{{2,2}}{a_{{1,3}}}^{2} -a_{{3,3}}{a_{{1,2}}}^{2}\\ &+a_{{1,1}}{b_{{2,3}}}^{2}+a_{{2,2}}{b_{{1,3}}}^{2} +a_{{3,3}}{b_{{1,2}}}^{2} +2\,a_{{1,2}}a_{{2,3}}a_{{1,3}}\\ &-2\,a_{{1,2}}b_{{2,3}}b_{{1,3}} +2\,a_{{1,3}}b_{{1,2}}b_{{2,3}}-2\,a_{{2,3}}b_{{1,2}}b_{{1,3}}, \\
\det(M^{(3)}_{B})=& \; a_{{1,1}}a_{{2,2}}a_{{3,3}} -a_{{1,1}}{a_{{2,3}}}^{2} -a_{{2,2}}{a_{{1,3}}}^{2} -a_{{3,3}}{a_{{1,2}}}^{2}\\ &-a_{{1,1}}{b_{{2,3}}}^{2} -a_{{2,2}}{b_{{1,3}}}^{2}-a_{{3,3}}{b_{{1,2}}}^{2} +2\,a_{{1,2}}a_{{2,3}}a_{{1,3}}\\ &+2\,a_{{1,2}}b_{{2,3}}b_{{1,3}} -2\,a_{{1,3}}b_{{1,2}}b_{{2,3}}+2\,a_{{2,3}}b_{{1,2}}b_{{1,3}},
\end{split}
\end{equation}
in these two decompositions. The number of terms in each of the formulas in \eqref{deteg3}, counted with multiplicity (so that a term with coefficient $\pm 2$ counts twice), is seen to be 15, which is equal to $5!!$.
\section{Brauer Diagrams}
\label{sec:doub}
One of the most common representations of permutations is the {\bf two-line representation} or {\bf two-line diagram} of a permutation. This is also an example of a perfect matching on a complete bipartite graph.
\setlength{\unitlength}{1mm}
\begin{figure}[h!]
\centering
\begin{picture}(60, 20)
\put(14,0){1} \put(19,0){2} \put(24,0){3} \put(29,0){4} \put(34,0){5} \put(39,0){6} \put(44,0){7}
\put(15,4){\circle*{1}} \put(20,4){\circle*{1}} \put(25,4){\circle*{1}} \put(30,4){\circle*{1}} \put(35,4){\circle*{1}} \put(40,4){\circle*{1}} \put(45,4){\circle*{1}}
\put(14,16){1} \put(19,16){2} \put(24,16){3} \put(29,16){4} \put(34,16){5} \put(39,16){6} \put(44,16){7}
\put(15,14){\circle*{1}} \put(20,14){\circle*{1}} \put(25,14){\circle*{1}} \put(30,14){\circle*{1}} \put(35,14){\circle*{1}} \put(40,14){\circle*{1}} \put(45,14){\circle*{1}}
\put(15,4){\line(3,2){15}} \put(20,4){\line(2,1){20}} \put(25,4){\line(-1,1){10}} \put(30,4){\line(-1,2){5}} \put(35,4){\line(1,1){10}} \put(40,4){\line(-2,1){20}} \put(45,4){\line(-1,1){10}}
\end{picture}
\caption{A two-line diagram for the permutation $3641725$.}
\label{fig:permeg}
\end{figure}
One of the advantages of a two-line diagram is that the inversion number of a permutation is simply the number of pairwise intersections of the $n$ lines. In Figure~\ref{fig:permeg} above, there are 10 intersections, which is the inversion number of the permutation $3641725$. We will consider the complete graph on $2n$ vertices arranged in a two-line representation. Recall that a {\bf perfect matching} of a graph is a set of pairwise non-adjacent edges that matches all the vertices of the graph. The visual representations of such perfect matchings are called Brauer diagrams and are defined formally below.
\begin{defn}
Let $T$ and $B$ be the sets of vertices in the top and bottom rows, respectively, with $n$ points each, forming a two-line diagram. An {\bf unlabeled Brauer diagram of size $n$}, $\mu$, is a perfect matching where an edge joining two points in $T$ is called a {\bf cup}, an edge joining two points in $B$ is called a {\bf cap}, and an edge joining a point in $T$ with a point in $B$ is called an {\bf arc}. For convenience, we call cups and caps horizontal edges, and arcs vertical edges.
The edges satisfy the following conditions.
\begin{enumerate}
\item Two caps may intersect in at most one point.
\item Two cups may intersect in at most one point.
\item A cap and a cup may not intersect.
\item An arc meets an arc or a cap or a cup in at most one point.
\end{enumerate}
\end{defn}
\noindent Let $\mathcal B_n$ be the set of unlabeled Brauer diagrams of size $n$. Figure~\ref{fig:pmeg} depicts an unlabeled Brauer diagram of size seven.
\setlength{\unitlength}{1mm}
\begin{figure}[h!]
\centering
\begin{picture}(60, 20)
\put(15,4){\circle*{1}} \put(20,4){\circle*{1}} \put(25,4){\circle*{1}} \put(30,4){\circle*{1}} \put(35,4){\circle*{1}} \put(40,4){\circle*{1}} \put(45,4){\circle*{1}}
\put(15,14){\circle*{1}} \put(20,14){\circle*{1}} \put(25,14){\circle*{1}} \put(30,14){\circle*{1}} \put(35,14){\circle*{1}} \put(40,14){\circle*{1}} \put(45,14){\circle*{1}}
\put(20,4){\line(2,1){20}} \put(25,4){\line(-1,1){10}} \put(45,4){\line(-1,1){10}}
\qbezier(15, 4)(27.5, 12)(40, 4)
\qbezier(20, 14)(25, 10)(30, 14)
\qbezier(30, 4)(32.5, 8)(35, 4)
\qbezier(25, 14)(35, 9)(45, 14)
\end{picture}
\caption{An unlabeled Brauer diagram of size 7 with seven crossings.}
\label{fig:pmeg}
\end{figure}
We now define two types of labeled Brauer diagrams.
\begin{defn}
Let $\mu \in \mathcal B_{n}$ and let $T$ be labeled with the integers 1 through $n$ from left to right. An {\bf $F$-Brauer diagram} (for forward) is a Brauer diagram in which the bottom row $B$ is also labeled with the integers 1 through $n$ from left to right, and a {\bf $B$-Brauer diagram} (for backward) is one in which $B$ is labeled with the integers 1 through $n$ from right to left.
\end{defn}
\noindent The $F$-Brauer diagram has the same labeling as the usual two-line diagram for a permutation. Let $(\mathcal B_{F})_{n}$ (resp. $(\mathcal B_{B})_{n}$) be the set of $F$-Brauer diagrams (resp. $B$-Brauer diagrams) of size $n$. Figure~\ref{fig:pmeg2} shows an example of each type.
\setlength{\unitlength}{1mm}
\begin{figure}[h!]
\centering \begin{picture}(60, 20) \put(14,0){1} \put(19,0){2} \put(24,0){3} \put(29,0){4} \put(34,0){5} \put(39,0){6} \put(44,0){7} \put(15,4){\circle*{1}} \put(20,4){\circle*{1}} \put(25,4){\circle*{1}} \put(30,4){\circle*{1}} \put(35,4){\circle*{1}} \put(40,4){\circle*{1}} \put(45,4){\circle*{1}} \put(14,16){1} \put(19,16){2} \put(24,16){3} \put(29,16){4} \put(34,16){5} \put(39,16){6} \put(44,16){7} \put(15,14){\circle*{1}} \put(20,14){\circle*{1}} \put(25,14){\circle*{1}} \put(30,14){\circle*{1}} \put(35,14){\circle*{1}} \put(40,14){\circle*{1}} \put(45,14){\circle*{1}} \put(20,4){\line(2,1){20}} \put(25,4){\line(-1,1){10}} \put(45,4){\line(-1,1){10}} \qbezier(15, 4)(27.5, 12)(40, 4) \qbezier(20, 14)(25, 10)(30, 14) \qbezier(30, 4)(32.5, 8)(35, 4) \qbezier(25, 14)(35, 9)(45, 14) \end{picture} \hfil \begin{picture}(60, 20) \put(14,0){7} \put(19,0){6} \put(24,0){5} \put(29,0){4} \put(34,0){3} \put(39,0){2} \put(44,0){1} \put(15,4){\circle*{1}} \put(20,4){\circle*{1}} \put(25,4){\circle*{1}} \put(30,4){\circle*{1}} \put(35,4){\circle*{1}} \put(40,4){\circle*{1}} \put(45,4){\circle*{1}} \put(14,16){1} \put(19,16){2} \put(24,16){3} \put(29,16){4} \put(34,16){5} \put(39,16){6} \put(44,16){7} \put(15,14){\circle*{1}} \put(20,14){\circle*{1}} \put(25,14){\circle*{1}} \put(30,14){\circle*{1}} \put(35,14){\circle*{1}} \put(40,14){\circle*{1}} \put(45,14){\circle*{1}} \put(20,4){\line(2,1){20}} \put(25,4){\line(-1,1){10}} \put(45,4){\line(-1,1){10}} \qbezier(15, 4)(27.5, 12)(40, 4) \qbezier(20, 14)(25, 10)(30, 14) \qbezier(30, 4)(32.5, 8)(35, 4) \qbezier(25, 14)(35, 9)(45, 14) \end{picture} \caption{The same Brauer diagram in Figure~\ref{fig:pmeg} considered as an element of $(\mathcal B_{F})_{7}$ on the left and $(\mathcal B_{B})_{7}$ on the right.} \label{fig:pmeg2} \end{figure} We draw all members of $\mathcal B_3$ and label the matchings in Table~\ref{tab:chord3}. \setlength{\unitlength}{1mm} \begin{table}[h!] 
\begin{tabular}{c c c c c} \begin{picture}(20, 20) \put(5,5){\circle*{1}} \put(10,5){\circle*{1}} \put(15,5){\circle*{1}} \put(5,10){\circle*{1}} \put(10,10){\circle*{1}} \put(15,10){\circle*{1}} \put(5,5){\line(0,1){5}} \put(10,5){\line(0,1){5}} \put(15,5){\line(0,1){5}} \end{picture} & \begin{picture}(20, 20) \put(5,5){\circle*{1}} \put(10,5){\circle*{1}} \put(15,5){\circle*{1}} \put(5,10){\circle*{1}} \put(10,10){\circle*{1}} \put(15,10){\circle*{1}} \put(5,10){\line(1,-1){5}} \put(5,5){\line(1,1){5}} \put(15,5){\line(0,1){5}} \end{picture} & \begin{picture}(20, 20) \put(5,5){\circle*{1}} \put(10,5){\circle*{1}} \put(15,5){\circle*{1}} \put(5,10){\circle*{1}} \put(10,10){\circle*{1}} \put(15,10){\circle*{1}} \put(5,10){\line(2,-1){10}} \put(5,5){\line(1,1){5}} \put(10,5){\line(1,1){5}} \end{picture} & \begin{picture}(20, 20) \put(5,5){\circle*{1}} \put(10,5){\circle*{1}} \put(15,5){\circle*{1}} \put(5,10){\circle*{1}} \put(10,10){\circle*{1}} \put(15,10){\circle*{1}} \qbezier(5, 10)(7.5, 8)(10, 10) \put(5,5){\line(2,1){10}} \qbezier(10,5)(12.5, 7)(15, 5) \end{picture} & \begin{picture}(20, 20) \put(5,5){\line(1,1){5}} \put(5,5){\circle*{1}} \put(10,5){\circle*{1}} \put(15,5){\circle*{1}} \put(5,10){\circle*{1}} \put(10,10){\circle*{1}} \put(15,10){\circle*{1}} \qbezier(5, 10)(10, 7)(15, 10) \put(5,5){\line(1,1){5}} \qbezier(10,5)(12.5, 7)(15, 5) \end{picture} \\ \begin{picture}(20, 20) \put(5,5){\circle*{1}} \put(10,5){\circle*{1}} \put(15,5){\circle*{1}} \put(5,10){\circle*{1}} \put(10,10){\circle*{1}} \put(15,10){\circle*{1}} \put(5,5){\line(0,1){5}} \put(10,5){\line(1,1){5}} \put(15,5){\line(-1,1){5}} \end{picture} & \begin{picture}(20, 20) \put(5,5){\circle*{1}} \put(10,5){\circle*{1}} \put(15,5){\circle*{1}} \put(5,10){\circle*{1}} \put(10,10){\circle*{1}} \put(15,10){\circle*{1}} \put(5,10){\line(1,-1){5}} \put(15,5){\line(-1,1){5}} \put(5,5){\line(2,1){10}} \end{picture} & \begin{picture}(20, 20) \put(5,5){\circle*{1}} \put(10,5){\circle*{1}} 
\put(15,5){\circle*{1}} \put(5,10){\circle*{1}} \put(10,10){\circle*{1}} \put(15,10){\circle*{1}} \put(5,10){\line(2,-1){10}} \put(10,5){\line(0,1){5}} \put(5,5){\line(2,1){10}} \end{picture} & \begin{picture}(20, 20) \put(5,5){\circle*{1}} \put(10,5){\circle*{1}} \put(15,5){\circle*{1}} \put(5,10){\circle*{1}} \put(10,10){\circle*{1}} \put(15,10){\circle*{1}} \qbezier(5, 10)(7.5, 8)(10, 10) \put(10,5){\line(1,1){5}} \qbezier(5, 5)(10, 8)(15, 5) \end{picture} & \begin{picture}(20, 20) \put(5,5){\circle*{1}} \put(10,5){\circle*{1}} \put(15,5){\circle*{1}} \put(5,10){\circle*{1}} \put(10,10){\circle*{1}} \put(15,10){\circle*{1}} \qbezier(5, 10)(10, 7)(15, 10) \put(10,5){\line(0,1){5}} \qbezier(5, 5)(10, 8)(15, 5) \end{picture} \\ \begin{picture}(20, 20) \put(5,5){\circle*{1}} \put(10,5){\circle*{1}} \put(15,5){\circle*{1}} \put(5,10){\circle*{1}} \put(10,10){\circle*{1}} \put(15,10){\circle*{1}} \put(5,5){\line(0,1){5}} \qbezier(10, 10)(12.5, 8)(15, 10) \qbezier(10, 5)(12.5, 7)(15, 5) \end{picture} & \begin{picture}(20, 20) \put(5,5){\circle*{1}} \put(10,5){\circle*{1}} \put(15,5){\circle*{1}} \put(5,10){\circle*{1}} \put(10,10){\circle*{1}} \put(15,10){\circle*{1}} \put(5,10){\line(1,-1){5}} \qbezier(10, 10)(12.5, 8)(15, 10) \qbezier(5, 5)(10, 8)(15, 5) \end{picture} & \begin{picture}(20, 20) \put(5,5){\circle*{1}} \put(10,5){\circle*{1}} \put(15,5){\circle*{1}} \put(5,10){\circle*{1}} \put(10,10){\circle*{1}} \put(15,10){\circle*{1}} \put(5,10){\line(2,-1){10}} \qbezier(10, 10)(12.5, 8)(15, 10) \qbezier(5, 5)(7.5, 7)(10, 5) \end{picture} & \begin{picture}(20, 20) \put(5,5){\circle*{1}} \put(10,5){\circle*{1}} \put(15,5){\circle*{1}} \put(5,10){\circle*{1}} \put(10,10){\circle*{1}} \put(15,10){\circle*{1}} \qbezier(5, 10)(7.5, 8)(10, 10) \put(15,10){\line(0,-1){5}} \qbezier(5, 5)(7.5, 7)(10, 5) \end{picture} & \begin{picture}(20, 20) \put(5,5){\circle*{1}} \put(10,5){\circle*{1}} \put(15,5){\circle*{1}} \put(5,10){\circle*{1}} \put(10,10){\circle*{1}} 
\put(15,10){\circle*{1}}
\qbezier(5, 10)(10, 7)(15, 10)
\put(10,10){\line(1,-1){5}}
\qbezier(5, 5)(7.5, 7)(10, 5)
\end{picture}
\\
\end{tabular}
\vspace{0.2cm}
\caption{All Brauer diagrams belonging to $\mathcal B_3$.}
\label{tab:chord3}
\end{table}
Let $\mu \in (\mathcal B_{F})_{n}$ or $(\mathcal B_{B})_{n}$. Further, let $\mu_T$ (resp. $\mu_B$) contain the cups (resp. caps) and $\mu_{TB}$ contain the arcs. By convention, edges will be designated as ordered pairs. When the edges belong to $\mu_T$ or $\mu_B$, they will be written in increasing order, and when they belong to $\mu_{TB}$, the vertex in the top row will be written first. The {\bf crossing number} $\chi(\mu)$ of $\mu$ is the number of pairwise intersections among edges in $\mu$.
\begin{table}[h!]
\begin{tabular}{|c|c|c|c|c|}
\hline
0 & 1 & 2 & 0 & 1 \\ \hline
1 & 2 & 3 & 1 & 2 \\ \hline
0 & 1 & 0 & 0 & 1 \\ \hline
\end{tabular}
\vspace{0.2cm}
\caption{Crossing numbers for all the Brauer diagrams in $\mathcal B_3$ according to Table~\ref{tab:chord3}.}
\label{tab:cross3}
\end{table}
We now associate a weight to $\mu$, which consists of the edge sets $\mu_T, \mu_B$ and $\mu_{TB}$. Let $a_{i,j}$ (resp. $b_{i,j}$) be unknowns defined for $1 \leq i \leq j \leq n$ (resp. $1 \leq i < j \leq n$) and let $(\widehat{i,j}) = (\min(i,j),\max(i,j))$. The {\bf weight of $\mu$}, $w(\mu)$, is given by
\begin{equation}
w(\mu) = \prod_{(i,j) \in \mu_T} b_{i,j} \prod_{(i,j) \in \mu_B} b_{i,j} \prod_{(i,j) \in \mu_{TB}} a_{\widehat{i,j}}.
\end{equation}
Note that this weight depends on whether we consider $\mu$ as an element of $(\mathcal B_{F})_{n}$ or $(\mathcal B_{B})_{n}$. However, the formal expression is the same in both cases. For completeness, we list the weights of all Brauer diagrams in $\mathcal B_3$ according to whether they are regarded as elements of $(\mathcal B_{F})_{n}$ or $(\mathcal B_{B})_{n}$.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$a_{1,1} a_{2,2} a_{3,3} $ & $ a_{3,3} a_{1,2}^2 $ & $ a_{1,2}a_{1,3}a_{2,3} $ & $ a_{1,3} b_{1,2}b_{2,3} $ & $a_{1,2}b_{1,3}b_{2,3} $ \\[0.2cm] \hline
$a_{1,1}a_{2,3}^2$ & $ a_{1,2}a_{1,3}a_{2,3} $ & $a_{2,2} a_{1,3}^2 $ & $ a_{2,3}b_{1,2}b_{1,3} $ & $ a_{2,2} b_{1,3}^2 $\\[0.2cm] \hline
$a_{1,1}b_{2,3}^2 $ & $ a_{1,2}b_{1,3}b_{2,3} $ & $a_{1,3}b_{1,2}b_{2,3} $ & $ a_{3,3} b_{1,2}^2 $ & $ a_{2,3}b_{1,2}b_{1,3}$ \\[0.2cm] \hline
\end{tabular}
\vskip 0.5cm
\begin{tabular}{|c|c|c|c|c|}
\hline
$a_{2,2} a_{1,3}^2 $ & $ a_{1,2}a_{1,3}a_{2,3} $ & $ a_{1,1}a_{2,3}^2 $ & $ a_{3,3} b_{1,2}^2 $ & $ a_{2,3}b_{1,2}b_{1,3}$ \\[0.2cm] \hline
$a_{1,2}a_{1,3}a_{2,3} $ & $ a_{3,3} a_{1,2}^2 $ & $ a_{1,1} a_{2,2} a_{3,3} $ & $ a_{2,3}b_{1,2}b_{1,3} $ & $ a_{2,2} b_{1,3}^2 $\\[0.2cm] \hline
$a_{1,3}b_{1,2}b_{2,3} $ & $ a_{1,2}b_{1,3}b_{2,3} $ & $ a_{1,1}b_{2,3}^2 $ & $ a_{1,3} b_{1,2}b_{2,3}$ & $ a_{1,2}b_{1,3}b_{2,3}$ \\[0.2cm] \hline
\end{tabular}
\vskip 0.2cm
\caption{Weights of all the Brauer diagrams of size $n=3$ according to Table~\ref{tab:chord3}. The first table describes the weights for $(\mathcal B_{F})_{n}$ and the second, for $(\mathcal B_{B})_{n}$.}
\label{tab:wt3}
\end{center}
\end{table}
\noindent We are now in a position to state the main theorem.
\begin{thm} \label{thm:detm}
The determinant of an $n\times n$ matrix can be written as a sum over Brauer diagrams:
\begin{equation}
\begin{split}
\det(M_F) &= \sum_{\mu \in (\mathcal B_{F})_n} (-1)^{\chi(\mu)} w(\mu), \\
\det(M_B) &= (-1)^{\binom{n}{2}}\sum_{\mu \in (\mathcal B_{B})_n} (-1)^{\chi(\mu)}w(\mu).
\end{split}
\end{equation}
\end{thm}
One can verify that Theorem~\ref{thm:detm} is valid for $n=3$ in both cases by summing, over all the Brauer diagrams in Table~\ref{tab:chord3}, the weights in Table~\ref{tab:wt3} multiplied by $(-1)$ raised to the corresponding crossing numbers in Table~\ref{tab:cross3}, and comparing with \eqref{deteg3}.
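This verification can also be carried out mechanically. In the sketch below (our illustration, not part of the paper's development), the crossing number is computed by elementary interleaving rules that are our reading of the two-line drawings in this section, stated here as an assumption: two arcs cross iff their endpoints interleave, two cups (or two caps) cross iff they interleave rather than nest, and an arc crosses a cup or cap iff that horizontal edge straddles the arc's endpoint on its row. With these rules, Figure~\ref{fig:pmeg} indeed receives seven crossings, and the $F$-case of Theorem~\ref{thm:detm} checks out symbolically for $n=3$:

```python
from itertools import combinations
import sympy as sp

def matchings(pts):
    """Yield all perfect matchings of an even-size list of points."""
    if not pts:
        yield []
        return
    p0, rest = pts[0], pts[1:]
    for k in range(len(rest)):
        for m in matchings(rest[:k] + rest[k + 1:]):
            yield [(p0, rest[k])] + m

def crossings(cups, caps, arcs):
    """Pairwise intersections, by interleaving rules for two-line drawings."""
    c = 0
    for (t1, b1), (t2, b2) in combinations(arcs, 2):
        c += (t1 - t2) * (b1 - b2) < 0              # arcs cross iff interleaved
    for horiz in (cups, caps):
        for (p1, q1), (p2, q2) in combinations(horiz, 2):
            c += p1 < p2 < q1 < q2 or p2 < p1 < q2 < q1   # nested pairs do not cross
    for t, bot in arcs:
        c += sum(p < t < q for p, q in cups)        # cup straddles the arc's top end
        c += sum(p < bot < q for p, q in caps)      # cap straddles its bottom end
    return c

n = 3
a = {(i, j): sp.Symbol(f'a{i}{j}') for i in range(1, n + 1) for j in range(i, n + 1)}
b = {(i, j): sp.Symbol(f'b{i}{j}') for i in range(1, n + 1) for j in range(i + 1, n + 1)}
MF = sp.Matrix(n, n, lambda i, j:
               a[i + 1, j + 1] + b[i + 1, j + 1] if i < j else
               a[j + 1, i + 1] - b[j + 1, i + 1] if i > j else a[i + 1, i + 1])

pts = [('T', k) for k in range(1, n + 1)] + [('B', k) for k in range(1, n + 1)]
total, ndiag = 0, 0
for mu in matchings(pts):
    cups = [tuple(sorted((p[1], q[1]))) for p, q in mu if p[0] == q[0] == 'T']
    caps = [tuple(sorted((p[1], q[1]))) for p, q in mu if p[0] == q[0] == 'B']
    arcs = [(p[1], q[1]) if p[0] == 'T' else (q[1], p[1])
            for p, q in mu if p[0] != q[0]]
    w = sp.prod([b[e] for e in cups + caps] + [a[tuple(sorted(e))] for e in arcs])
    total += (-1) ** crossings(cups, caps, arcs) * w
    ndiag += 1

assert ndiag == 15                        # (2n-1)!! diagrams for n = 3
assert sp.expand(total - MF.det()) == 0   # F-case of Theorem 1 for n = 3
```

The $B$-case can be checked the same way by relabeling the bottom row right to left and supplying the prefactor $(-1)^{\binom{n}{2}}$.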
\section{The number of terms in the determinant expansion} \label{sec:numterm} We show by a quick argument that the number of monomials in the determinant of an $n \times n$ matrix $M_{F}$ (and for the same reason, for $M_{B}$) is given by $(2n-1)!!$. This calculation is somewhat redundant because of Theorem~\ref{thm:detm}. The reason for this short demonstration is that it shows why determinants should be related to perfect matchings. To start, let $M$ be either $M_{F}$ or $M_{B}$. Recall the Leibniz formula for the determinant of $M$, \begin{equation} \det(M) = \sum_{\pi \in S_n} (-1)^{\inv(\pi)}(M)_{1,\pi(1)} \dots (M)_{n,\pi(n)}, \end{equation} where $S_n$ is the set of permutations of $n$ letters and $\inv(\pi)$ is the number of inversions of the permutation. Usually, this would give us $n!$ terms, of course. In the new notation, \eqref{defm}, we obtain many more terms because each factor $(M)_{i,\pi(i)}$ gives two terms whenever $\pi(i) \neq i$. To see how many terms we now have, it is best to think of permutations according to the number and length of cycles they contain, $\pi = C_1\dots C_k$. If a cycle $C$ is of length 1, $C=(i)$, then it corresponds to a diagonal element $a_{i,i}$, which contributes one term. If, on the other hand, $C$ contains $j$ entries, then there are $j$ off-diagonal elements, which give $2^j$ terms, counting multiplicities, exactly half of which contain an odd number of $b_{i,j}$'s. These terms will be cancelled by the permutation $\pi'$ which has all other cycles the same, and $C$ replaced by $C'$, the reverse of $C$. Therefore, if $C$ contains $j$ entries, we effectively get a contribution of $2^{j-1}$ terms. The number of terms can be written as a sum over permutations with $k$ disjoint cycles. When there are $k$ cycles, we get $2^{n-k}$ terms.
Since the number of permutations with $k$ disjoint cycles is the unsigned Stirling number of the first kind, $s(n,k)$, the total number of terms is given by \begin{equation} \sum_{k=1}^n s(n,k) 2^{n-k}. \end{equation} Since the generating function of the unsigned Stirling numbers of the first kind is given by the Pochhammer symbol or rising factorial, \begin{equation} \sum_{k=1}^n s(n,k) x^k = (x)^{(n)} \equiv x(x+1)\cdots(x+n-1), \end{equation} we can calculate the more general sum, \begin{equation} \sum_{k=1}^n s(n,k) x^{n-k} = (1+x)(1+2x)\cdots(1+(n-1)x). \end{equation} Substituting $x=2$ in the above equation gives $(2n-1)!!$, the desired answer. \section{Bijection between terms and Labeled Brauer diagrams} \label{sec:bij} We now describe the bijection between labeled Brauer diagrams on the one hand and permutations leading to a product of $a_{i,j}$'s and $b_{i,j}$'s on the other. The algorithm is independent of whether we consider $\mathcal B_{F}$ or $\mathcal B_{B}$. Let $\mu$ be a labeled Brauer diagram. We first state the algorithm constructing the latter from the former. \begin{alg} \label{alg:pmtoperm} We start with the three sets of matchings $\mu_T, \mu_B$ and $\mu_{TB}$. \begin{enumerate} \item For each term $(i,j)$ in $\mu_T$ and $\mu_B$, write the term $b_{i,j}$ and for $(i,j)$ in $\mu_{TB}$, write the term $a_{\widehat{i,j}}$. \item Start with $\pi=\emptyset$. \item Find the smallest integer $i_1 \in T$ not yet in $\pi$ and find its partner $i_2$. That is, either $(i_1,i_2) \in \mu_{TB}$ or $(\widehat{i_1,i_2}) \in \mu_T$. If $i_2=i_1$, then append the cycle $(i_1)$ to $\pi$ and repeat Step~3. Otherwise move on to Step~4. \item If $i_{k}$ is in $T$ (resp. $B$), look for the partner of the other $i_{k}$ in $B$ (resp. $T$) and call it $i_{k+1}$. Note that $i_{k+1}$ can be in $T$ or $B$ in both cases. \item Repeat Step~4 for $k$ from 2 until $m$ such that $i_{m+1}=i_1$. Append the cycle $(i_1,i_2,\dots,i_m)$ to $\pi$.
\item Repeat Steps~3-5 until $\pi$ is a permutation on $n$ letters in cycle notation. \end{enumerate} \end{alg} Thus we obtain the desired product in Step~1 and the permutation at the end of Step~6. Here is a simple consequence of the algorithm. \begin{lem} By the construction of Algorithm~\ref{alg:pmtoperm}, if the triplet $(\mu_T, \mu_B, \mu_{TB})$ leads to $\pi$, then $(\mu_B, \mu_T,\mu_{TB})$ leads to $\pi^{-1}$. \end{lem} \begin{proof} Each cycle $(i_1,i_2,\dots,i_m)$ constructed according to Algorithm~\ref{alg:pmtoperm} by the triplet $(\mu_T, \mu_B,\mu_{TB})$ will be constructed as $(i_1,i_m,\dots, i_2)$ by the triplet $(\mu_B, \mu_T,\mu_{TB})$. Since each cycle will be reversed, this is the inverse of the original permutation. \end{proof} We now describe the reverse algorithm. \begin{alg} \label{alg:permtopm} We start with a product of $a_{i,j}$'s and $b_{i,j}$'s, and a permutation $\pi=C_1\dots C_m$ written in cycle notation such that $1 \in C_1$, the smallest integer in $\pi \setminus C_1$ belongs to $C_2$, and so on. \begin{enumerate} \item For each $b_{i,j}$, we obtain a term $(\widehat{i,j})$ which belongs either to $\mu_T$ or $\mu_B$ and for each $a_{i,j}$, we obtain one of $(i,j)$ or $(j,i)$ which belongs to $\mu_{TB}$. \item Start with $\mu_T=\mu_B=\mu_{TB}=\emptyset$. Set $k=1$. \item Find the first entry $i_1$ in $C_k$ and look for either $a_{i_1,i_2}$ or $b_{i_1,i_2}$. If the former, assign $i_2$ to $B$ and append $(i_1,i_2)$ to $\mu_{TB}$ and otherwise, assign $i_2$ to $T$ and append $(i_1,i_2)$ to $\mu_T$. Set $l=2$. \item Find either $a_{i_l,i_{l+1}}$ or $b_{i_l,i_{l+1}}$. Assign $i_{l+1}$ to one of $T$ or $B$ and $(i_l,i_{l+1})$ to one of $\mu_T, \mu_B$ or $\mu_{TB}$ according to the following table.
\begin{equation*} \begin{array}{|c|c|c|c|c|} \hline i_l & \text{Term} & i_{l+1} & (i_l,i_{l+1}) & \text{Next }i_{l+1} \\ \hline T & a & B & \mu_{TB} & T \\ T & b & T & \mu_T & B \\ B & a & T & \mu_{TB} & B \\ B & b & B & \mu_B & T \\ \hline \end{array} \end{equation*} Increment $l$ by one. \item Repeat Step~4 until you return to $i_1$, which will necessarily belong to $B$, since there are an even number of $b_{i,j}$'s in the term. \item Increment $k$ by 1. \item Repeat Steps~3-6 until $k=m$, i.e., until all cycles are exhausted. \end{enumerate} \end{alg} The following result is now an easy consequence. \begin{lem} \label{lem:bij} Algorithms~\ref{alg:pmtoperm} and \ref{alg:permtopm} are inverses of each other. \end{lem} \section{The Crossing Number} \label{sec:cross} Now that we have established a bijection between terms in the determinant expansion and labeled Brauer diagrams, we need to show that the sign associated to both of these are the same. We start with a labeled Brauer diagram $\mu$, which leads to a permutation $\pi=C_1\dots C_m$ and a product of $a$'s and $b$'s according to Algorithm~\ref{alg:pmtoperm}. Let $\tau$ be the same product obtained from the determinant expansion of the matrix using permutation $\pi$ {\em including the sign}. From the definition of the matrix \eqref{defm}, we will first write a formula for the sign associated to $\tau$. Let $C_j = (n^{(j)}_1,\dots,n^{(j)}_{l(j)})$. Then, define the sequences $\beta^{(j)}$ (resp. $\gamma^{(j)}$) of length $l(j)$ consisting of terms $\pm 1$ (resp. $\pm i$) according to the following definition. \begin{equation} \beta^{(j)}_i = \begin{cases} +1 & n^{(j)}_i < n^{(j)}_{i+1},\\ -1 & n^{(j)}_i > n^{(j)}_{i+1}, \end{cases}; \quad \gamma^{(j)}_i = \begin{cases} +i & n^{(j)}_i < n^{(j)}_{i+1},\\ -i & n^{(j)}_i > n^{(j)}_{i+1}, \end{cases} \end{equation} where $n^{(j)}_{l(j)+1} \equiv n^{(j)}_1$. 
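The sign sequences just defined are straightforward to compute from a cycle written as a list; a minimal Python sketch (the function names are ours), using the wrap-around convention $n^{(j)}_{l(j)+1} \equiv n^{(j)}_1$:

```python
def beta(cycle):
    """Sign sequence of a cycle: +1 where consecutive entries increase
    (cyclically, wrapping from the last entry back to the first), -1
    where they decrease."""
    m = len(cycle)
    return [1 if cycle[i] < cycle[(i + 1) % m] else -1 for i in range(m)]

def gamma(cycle):
    # gamma is the same sequence multiplied by the imaginary unit
    return [1j * b for b in beta(cycle)]

# For the cycle (1,3,2): 1<3, 3>2, and 2>1 after wrapping around, so:
assert beta([1, 3, 2]) == [1, -1, -1]
```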
Then the sign associated to the term $\tau$ depends on whether $\mu$ belongs to $(\mathcal B_{F})_{n}$ or $(\mathcal B_{B})_{n}$. In the former case, we have the formula \begin{equation} \label{signtau1} \sgn(\tau) = (-1)^{\inv(\pi)} \prod_{j=1}^m \;\; \prod_{\substack{i=1 \\ \displaystyle b_{\widehat{n^{(j)}_i,n^{(j)}_{i+1}}} \in \tau }}^{l(j)} \beta^{(j)}_i, \end{equation} and in the latter, \begin{equation} \label{signtau2} \sgn(\tau) = (-1)^{\inv(\pi)} \prod_{j=1}^m \;\; \prod_{\substack{i=1 \\ \displaystyle b_{\widehat{n^{(j)}_i,n^{(j)}_{i+1}}} \in \tau }}^{l(j)} \gamma^{(j)}_i. \end{equation} Since the number of $b$'s in the second product is even for all $j$, the product in \eqref{signtau2} will necessarily be real and equal to $\pm 1$. First we look at Brauer diagrams with no cups or caps. There are no $b_{i,j}$'s in the associated term in the determinant expansion. \begin{lem} \label{lem:onlyas} Suppose $\mu$ is a labeled Brauer diagram such that $\mu_T = \mu_B = \emptyset$ and let $\pi$ be the associated permutation. If $\mu \in (\mathcal B_{F})_{n}$, then \begin{equation} \inv(\pi) = \chi(\mu), \end{equation} and if $\mu \in (\mathcal B_{B})_{n}$, then \begin{equation} \inv(\pi)+\chi(\mu) = \binom n2. \end{equation} \end{lem} \begin{proof} The former is obvious since $\mu$ is identical to the two-line diagram for $\pi$. The latter requires just a little more work. For a matching with only arcs, the edges are exactly given by $(i,\pi_i)$ for $i \in [n]$. Now consider two edges $(i,\pi_i)$ and $(j,\pi_j)$ where $i<j$, without loss of generality. Recall that $i,j \in T$ and $\pi_i,\pi_j \in B$ by convention. Then $(i,\pi_i)$ intersects $(j,\pi_j)$ if and only if $\pi_i< \pi_j$ because of the right-to-left numbering convention in $B$. Thus, \begin{equation} \chi(\mu) = |\{(i,j)| i<j, \, \pi_i <\pi_j \}|. \end{equation} On the other hand, the definition of an inversion number is \begin{equation} \inv(\pi) = |\{(i,j)| i<j, \, \pi_i >\pi_j \}|.
\end{equation} Since these two count disjoint cases, which span all possible pairs $(i,j)$, they must sum up to the total number of possibilities $(i,j)$ where $i<j$, which is exactly $\binom n2$. \end{proof} Now we will see what happens to the crossing number of a matching when a cup and a cap are converted to two arcs. \begin{lemma} \label{lem:chordba} All other edges remaining the same, for any $i,j,k,l$, the following results hold. \vspace{0.2cm} \setlength{\unitlength}{1mm} \begin{enumerate} \item[(a)] \begin{equation} \label{chordba} (-1)^{\displaystyle \chi \Bigg( \begin{picture}(40, 5) \put(0,-5){\line(1,0){40}} \put(0,5){\line(1,0){40}} \qbezier(5, -5)(20,-1)(35, -5) \qbezier(10,5)(20, 2)(30, 5) \put(5,-9){k} \put(35,-9){l} \put(10,7){i} \put(30,7){j} \end{picture} \Bigg)} = (-1)^{\displaystyle \chi \Bigg( \begin{picture}(40, 5) \put(0,-5){\line(1,0){40}} \put(0,5){\line(1,0){40}} \put(5,-5){\line(1,2){5}} \put(35,-5){\line(-1,2){5}} \put(5,-9){k} \put(35,-9){l} \put(10,7){i} \put(30,7){j} \end{picture} \Bigg)}.\nonumber \end{equation} \item[(b)] \vspace{.5cm} \begin{equation} \label{chordba2} (-1)^{\displaystyle \chi \Bigg( \begin{picture}(40, 5) \put(0,-5){\line(1,0){40}} \put(0,5){\line(1,0){40}} \qbezier(5, -5)(17.5,0)(30, 5) \qbezier(10,5)(22.5, 0)(35, -5) \put(5,-9){k} \put(35,-9){l} \put(10,7){i} \put(30,7){j} \end{picture} \Bigg)} = -(-1)^{\displaystyle \chi \Bigg( \begin{picture}(40, 5) \put(0,-5){\line(1,0){40}} \put(0,5){\line(1,0){40}} \put(5,-5){\line(1,2){5}} \put(35,-5){\line(-1,2){5}} \put(5,-9){k} \put(35,-9){l} \put(10,7){i} \put(30,7){j} \end{picture} \Bigg)}.
\nonumber \end{equation} \item[(c)] \vspace{.5cm} \begin{equation} \label{chordba3} (-1)^{\displaystyle \chi \Bigg( \begin{picture}(40, 5) \put(0,-5){\line(1,0){40}} \put(0,5){\line(1,0){40}} \qbezier(5, 5)(20,1)(35, 5) \qbezier(10,5)(20, 0)(30, -5) \put(5,7){i} \put(35,7){k} \put(10,7){j} \put(30,-9){l} \end{picture} \Bigg)} = -(-1)^{\displaystyle \chi \Bigg( \begin{picture}(40, 5) \put(0,-5){\line(1,0){40}} \put(0,5){\line(1,0){40}} \qbezier(5, 5)(17.5,0)(30, -5) \qbezier(10,5)(22.5, 1)(35, 5) \put(5,7){i} \put(35,7){k} \put(10,7){j} \put(30,-9){l} \end{picture} \Bigg)}. \nonumber \end{equation} \item[(d)] \vspace{.5cm} \begin{equation} \label{chordbb} (-1)^{\displaystyle \chi \Bigg( \begin{picture}(40, 5) \put(0,-5){\line(1,0){40}} \put(0,5){\line(1,0){40}} \qbezier(5, 5)(20,-2)(35, 5) \qbezier(10,5)(20, 2)(30, 5) \put(5,7){i} \put(35,7){l} \put(10,7){j} \put(30,7){k} \end{picture} \Bigg)} = -(-1)^{\displaystyle \chi \Bigg( \begin{picture}(40, 5) \put(0,-5){\line(1,0){40}} \put(0,5){\line(1,0){40}} \qbezier(5, 5)(17.5, -1)(30, 5) \qbezier(10,5)(22.5, -1)(35, 5) \put(5,7){i} \put(35,7){l} \put(10,7){j} \put(30,7){k} \end{picture} \Bigg)}. \nonumber \end{equation} \end{enumerate} \end{lem} \vspace{0.2cm} \begin{proof} We will prove the result only for (a). The idea of the proof is identical for all other cases. We consider all possible edges that could intersect with any of the 4 edges $(i,j), (k,l), (i,l)$ and $(j,k)$ illustrated above. We group them according to their position. \begin{enumerate} \item Let $n_{ij}$ (resp. $n_{kl}$) be the number of edges such that exactly one of its endpoints lies between $i$ and $j$ (resp. $k$ and $l$), and the other endpoint does not lie between $k$ and $l$ (resp. $i$ and $j$). These edges intersect $(i,j)$ (resp. $(k,l)$) and do not intersect $(k,l)$ (resp. $(i,j)$). They also intersect exactly one among $(i,l)$ and $(j,k)$. 
\item Let $n_{ijkl}$ be the number of edges one of whose endpoints lies between $i$ and $j$, and the other, between $k$ and $l$. These intersect both $(i,j)$ and $(k,l)$. \item Let $n_{LR}$ be the number of edges, one of whose endpoints is less than $k$ if it belongs to the top row and more than $j$ in the bottom row, and the other is more than $l$ in the top row or less than $i$ in the bottom row. These are edges which do not intersect either $(i,j)$ or $(k,l)$, but intersect both $(i,l)$ and $(j,k)$. \end{enumerate} Now, the contribution of the edges $(i,j)$ and $(k,l)$ to $\chi$ in the left-hand side of \eqref{chordba} is $n_{ij}+n_{kl}+2n_{ijkl}$, whereas that to the right-hand side of \eqref{chordba} is $n_{ij}+n_{kl}+2n_{LR}$. Since all other edges are the same, the difference between the crossing number of the configuration on the left and that on the right is $2n_{ijkl}-2n_{LR}$ and hence, the parity of both crossing numbers is the same. \end{proof} \section{The Main Result} \label{sec:main} We now prove the theorem in a purely combinatorial way. The proof will depend on whether the Brauer diagram belongs to $(\mathcal B_{F})_{n}$ or $(\mathcal B_{B})_{n}$, but the idea is very similar in both cases. We will prove the former and point out the essential difference in the proof of the latter at the very end. \begin{proofof}{Theorem~\ref{thm:detm}} From Lemma~\ref{lem:bij}, we have shown that every term in the expansion of the determinant corresponds, in an invertible way, to a Brauer diagram. We will now show the signs are also equal by performing an induction on the number of cups, or equivalently caps, since both counts are equal. Consider an $F$-Brauer diagram $\mu \in (\mathcal B_{F})_{n}$ with at least one cup and cap each. Using the bijection of Lemma~\ref{lem:bij}, construct the associated permutation $\pi$. By the construction in Algorithm~\ref{alg:pmtoperm}, there have to be at least two $b$'s in the same cycle, $C$ say.
We pick two of them such that $(i,j) \in \mu_T$ is a cup and $(k,l) \in \mu_B$ is a cap. We have to show that $(-1)^{\chi(\mu)}= \sgn(\tau)$ using \eqref{signtau1}. We now get a new Brauer diagram $\mu' \in (\mathcal B_{F})_{n}$ by replacing the cup $(i,j)$ and the cap $(k,l)$ by the arcs $(i,k)$ and $(j,l)$ using Lemma~\ref{lem:chordba}(a). This replaces the associated weights $b_{i,j}b_{k,l}$ with $a_{\widehat{i,k}} a_{\widehat{j,l}}$, and the sign remains the same, $(-1)^{\chi(\mu)} = (-1)^{\chi(\mu')}$. Now we use the same algorithm to construct the permutation $\pi'$ associated to the new term, and look at how the cycle $C$ changes to $C'$. Let $\tau$ and $\tau'$ be the terms obtained in the determinant expansion of $M_F$ including the sign. There are four ways in which these 4 numbers are arranged in $C$. We list these and the way they transform in Table~\ref{tab:cycdec1}. In each case, the links $\{i,j\}$ and $\{k,l\}$ are broken and the links $\{i,k\}$ and $\{j,l\}$ are formed. Recall that $i<j$ and $k<l$ according to Lemma~\ref{lem:chordba}(a). \begin{table}[h!] \begin{tabular}{|c|c|c|c|} \hline $C \in \pi$ & $C' \in \pi'$ & Factors in $\tau$ & Relative sign \\ \hline $(i,j,\dots,k,l,\dots)$ &$(i,k,\dots,j,l,\dots)$ & $b_{i,j} b_{k,l}$ & $+1$ \\ $(i,j,\dots,l,k,\dots)$ &$(i,k,\dots)(j,\dots,l)$ & $b_{i,j} (-b_{k,l})$ & $-1$ \\ $(j,i,\dots,k,l,\dots)$ &$(j,l,\dots)(i,\dots,k)$ & $(-b_{i,j}) b_{k,l}$ & $-1$ \\ $(j,i,\dots,l,k,\dots)$ &$(j,l,\dots,i,k,\dots)$ & $(- b_{i,j})(-b_{k,l})$ & $+1$ \\ \hline \end{tabular} \vspace{0.2cm} \caption{Comparison between the difference of the number of cycles in $C$ and $C'$, and the relative sign between the factor in $\tau$ and $a_{\widehat{i,k}}a_{\widehat{j,l}} \in \tau'$.} \label{tab:cycdec1} \end{table} We now need a result from undergraduate combinatorics. When $n$ is odd (resp. even), a permutation $\pi$ of size $n$ is odd if and only if the number of cycles is even (resp. odd) in its cycle decomposition.
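Equivalently, $\sgn(\pi) = (-1)^{\,n-c(\pi)}$, where $c(\pi)$ denotes the number of cycles of $\pi$. This can be checked by brute force (a Python sketch; the function names are ours):

```python
from itertools import permutations

def inversions(p):
    """Number of inversions of a permutation given in one-line notation."""
    n = len(p)
    return sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])

def cycle_count(p):
    """Number of cycles in the cycle decomposition of p (p[i] is the image of i)."""
    seen, cycles = set(), 0
    for start in range(len(p)):
        if start not in seen:
            cycles += 1
            k = start
            while k not in seen:
                seen.add(k)
                k = p[k]
    return cycles

# sgn(pi) = (-1)^inversions(pi) = (-1)^(n - number of cycles), over all of S_5
n = 5
for p in permutations(range(n)):
    assert (-1) ** inversions(p) == (-1) ** (n - cycle_count(p))
```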
Therefore, the parity of the permutation $\pi'$ is the same as that of $\pi$ in cases (1) and (4), where the number of cycles is unchanged, and different from that of $\pi$ in cases (2) and (3), where one cycle splits into two. Notice that the relative signs also follow the same pattern. To summarize, we have shown that $(-1)^{\chi(\mu)}= \sgn(\tau)$ holds if and only if $(-1)^{\chi(\mu')}= \sgn(\tau')$ holds when $\mu,\mu' \in (\mathcal B_{F})_{n}$. But this is precisely the induction step since $\mu'$ has one less cup and one less cap than $\mu$. From Lemma~\ref{lem:onlyas}, we have already shown that the terms which correspond to Brauer diagrams with only arcs have the correct sign. This completes the proof. We follow the same strategy when $\mu$ belongs to $(\mathcal B_{B})_{n}$. The difference is that $l<k$ and that $b_{i,j}$ and $b_{k,l}$ come with additional factors of $\text{\em \i}$. The interested reader can check that these two contribute opposing signs leading to the same result. \end{proofof} For even antisymmetric matrices, this gives a natural combinatorial interpretation of Cayley's theorem different from the ones given by Halton \cite{halton} and E{\u{g}}ecio{\u{g}}lu \cite{egecioglu}. \begin{cor}[Cayley 1847, \cite{Cayley}] For an antisymmetric matrix $M$ of size $n$, \begin{equation} \det M = \begin{cases} (\pf M)^2 & \text{$n$ even}, \\ 0 & \text{$n$ odd}. \end{cases} \end{equation} \end{cor} \begin{proof} From \eqref{defm}, we see that all $a_{i,j}$'s are zero for an antisymmetric matrix for both $M_{F}$ and $M_{B}$. We consider only the former representation since the argument is identical for the latter. The only $F$-Brauer diagrams in $(\mathcal B_{F})_{n}$ that contribute are those with no arcs. If $n$ is odd, this is clearly not possible. Thus the determinant is zero. If $n$ is even, we have the sum in Theorem~\ref{thm:detm} over all Brauer diagrams with only cups and caps. This sum now factors into two distinct sums, one for cups and one for caps.
But these two sums are equal, since they are independent and formally identical. Moreover, each of them is the Pfaffian \cite{stem}. \end{proof} It would be interesting to find an analogous expression for the permanent of a matrix. This might entail finding a different planar graph instead of a Brauer diagram or a different analog of the crossing number or both. For example, the permanent of the matrix in \eqref{mateg3} is given by \begin{equation} \label{permeg3} \begin{split} &\text{Perm}(M^{(3)}_{F})=a_{1,3}^2 a_{2,2}+a_{2,3}^2 a_{1,1} +a_{1,2}^2 a_{3,3}- b_{1,2}^2 a_{3,3}-b_{1,3}^2 a_{2,2} -b_{2,3}^2 a_{1,1} \\ &+2 a_{1,2} a_{1,3} a_{2,3}+a_{1,1} a_{2,2} a_{3,3}-2 a_{2,3} b_{1,2}b_{1,3} -2 a_{1,2} b_{1,3}b_{2,3}+2 a_{1,3} b_{1,2} b_{2,3}. \end{split} \end{equation} Note that not all signs in the permanent expansion are positive. \section*{Acknowledgements} This work was motivated by discussions with Craig Tracy, whom we thank for encouragement and support. We also thank Ira Gessel, David M. Jackson, Christian Krattenthaler, Greg Kuperberg, Dan Romik, Alexander Soshnikov and Doron Zeilberger for constructive feedback. We also thank a referee for a very careful reading of the manuscript which led to many improvements. \bibliographystyle{alpha}
\section{Introduction} It is natural to expect that an averaging operator should have certain smoothing properties; for instance, the spherical means on $\mathbb{R}^d$ map $L^2$ to $W^{\frac{d-1}{2},2}$ (see for example \S5.21 of Chapter 8 in \cite{SteinHA}). So one could expect that a maximal operator, being a supremum over averages, should not behave too differently. In fact, if maximal operators are not smoothing operators, at least they do not destroy the regularity of functions, up to one weak derivative. This is the principle behind the program started in 1998 by Kinnunen \cite{Ki} that studies the regularity of maximal operators acting on Sobolev functions. Since then, many authors have contributed to extend the theory, for instance \cite{CM}, \cite{HM}, \cite{KL}, \cite{KiSa}, \cite{Lu1}, always having in the background the general principle that, for maximal operators, an $L^p$-bound implies a $W^{1,p}$-bound. Things become more difficult when one works with $L^1$-functions, since the Hardy-Littlewood maximal operator does not map $L^1$ to $L^1$. For $f \in L^1_{loc}(\mathbb{R}^n)$ we define the {\it centered} maximal operator as follows: \begin{equation*} \mathcal{M}f(x) = \sup_{r>0}\frac{1}{m(B(x,r))} \int_{B(x,r)} |f(y)|\, \text{\rm d}y \,, \end{equation*} where $B(x,r)$ is the ball in $\mathbb{R}^n$ centered at $x$ with radius $r$ and $m(B(x,r))$ is the $n$-dimensional Lebesgue measure of this ball.
In 2004, Haj\l asz and Onninen \cite[Question 1]{HO} asked the following question: \\ \\ {\bf Question A}: Is the operator $f \mapsto |\nabla \mathcal{M} f|$ bounded from $W^{1,1}(\mathbb{R}^n)$ to $L^1(\mathbb{R}^n)$?\\ \\ Observe that by dilation invariance, a bound of the type \begin{equation*} \|\nabla \mathcal{M}f\|_{L^1(\mathbb{R}^n)} \leq C \left( \|f\|_{L^1(\mathbb{R}^n)} + \|\nabla f\|_{L^1(\mathbb{R}^n)} \right) \end{equation*} implies that \begin{equation}\label{Tan1} \|\nabla \mathcal{M}f\|_{L^1(\mathbb{R}^n)} \leq C \|\nabla f\|_{L^1(\mathbb{R}^n)}\,, \end{equation} and thus the fundamental question here is to compare the variation of the maximal function with the variation of the original function (perhaps having the additional information that $f \in L^1(\mathbb{R}^n))$. In Tanaka's elegant paper \cite{Ta}, he gave a positive answer to this question for the {\it non-centered} maximal operator in dimension $n=1$, establishing (\ref{Tan1}) when $f \in W^{1,1}(\mathbb{R})$, with constant $C=2$. For $f \in L^1_{loc}(\mathbb{R}^n)$ the {\it non-centered} maximal operator is defined as follows: \begin{equation*} \widetilde{\mathcal{M}} f(x) = \sup_{\stackrel{r>0}{x \in B_r}}\frac{1}{m(B_r)} \int_{B_r} |f(y)|\, \text{\rm d}y \,, \end{equation*} where the supremum is now taken over all balls $B_r$ simply containing $x$. Philosophically, the {\it non-centered} version is a smoother operator than the centered version since it contains more averages, making it easier to handle. Observe that the constant $C=2$ obtained by Tanaka is not proved to be the best possible. In fact, we believe that (\ref{Tan1}) should hold with constant $C=1$, when $n=1$, for both the {\it centered} and {\it non-centered} operators, with $f (x)= \chi_{[a,b]}(x)$ being an extremal example. Question A remains untouched for the {\it centered} version, even in the case $n=1$. 
\subsection{The discrete one-dimensional setting} Finding discrete analogues for $L^p$-bounds in harmonic analysis is a topic of ongoing research. In the simplest cases, $\ell^p$ bounds for discrete analogues of classical operators such as Calder\'{o}n-Zygmund singular integral operators, fractional integral operators, and the Hardy-Littlewood maximal function follow from known $L^p$ bounds for the original operators in the Euclidean setting, via elementary comparison arguments (see \cite{SW1}, \cite{SW2}). But $\ell^p$ bounds for discrete analogues of more complicated operators, such as singular, fractional, and maximal Radon transforms (involving integration over a submanifold, or family of submanifolds), are not implied by results in the continuous setting, and moreover the discrete analogues are resistant to conventional methods. Indeed, discrete operators may even behave differently from their continuous counterparts, as is exhibited by the discrete spherical maximal operator \cite{MSW}. It is only recently that substantial progress has been made on discrete operators with Radon characteristics via techniques motivated by the circle method of Hardy and Littlewood, a technique from number theory pioneered in the context of discrete analogues by Bourgain \cite{Bour88A}, \cite{Bour88B}, and further developed in a number of interesting cases (see for example \cite{SW1}, \cite{SW2}, \cite{MSW}, \cite{IW}, \cite{IMSW}, \cite{Pie10}). In this paper we introduce the study of the regularity theory of discrete maximal operators in one dimension. Let $f: \mathbb{Z} \to \mathbb{R}$ be a discrete function and let $\mathbb{Z}^+ = \{0, 1, 2, 3, \ldots,\}$. 
The discrete {\it centered} Hardy-Littlewood maximal operator is defined by \begin{equation*} Mf(n) = \sup_{r \in \mathbb{Z}^{+}} \frac{1}{(2r + 1)} \sum_{k=-r}^{k=r} |f(n+k)|\,, \end{equation*} while the {\it non-centered} version is defined by \begin{equation*} \widetilde{M} f(n) = \sup_{r,s \in \mathbb{Z}^{+}} \frac{1}{(r +s+ 1)} \sum_{k=-r}^{k=s} |f(n+k)|\,. \end{equation*} Our aim is to answer discrete analogues of Question A for these operators. They clearly do not belong to the Radon transform paradigm, and we will not call upon the circle method; instead the challenge arises, at least in the case of the centered maximal operator $M$, from the fact that the analogous result in the continuous setting is not yet known! In order to study regularity properties of discrete operators, we establish the following conventions. For $1 \leq p <\infty$, the $\ell^p$ norm of a function $f: \mathbb{Z} \to \mathbb{R}$ is \begin{equation*} \|f\|_{\ell^p(\mathbb{Z})} = \left(\sum_{n=-\infty}^{\infty} |f(n)|^p \right)^{1/p}\,, \end{equation*} and the $\ell^{\infty}$ norm is \begin{equation*} \|f\|_{\ell^{\infty}(\mathbb{Z})} = \sup_{n \in \mathbb{Z}} |f(n)|. \end{equation*} We define the derivatives of a discrete function by \begin{align*} f'(n) &= f(n+1) - f(n)\,,\\ f''(n) & = f(n+2) - 2f(n+1) + f(n)\,,\\ f'''(n) & = f(n+3) - 3f(n+2) + 3 f(n+1) - f(n)\,, \end{align*} and so on.
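For a finitely supported $f$ the suprema defining $M$ and $\widetilde{M}$ are attained by windows whose endpoints stay within the span of the support (enlarging a window past the support only adds zeros to the sum of $|f|$ while increasing the denominator), so both operators can be evaluated exactly. A minimal Python sketch with exact rational arithmetic (the function names are ours):

```python
from fractions import Fraction

def centered_max(f, n, radius_bound):
    """Discrete centered Hardy-Littlewood maximal function of f at n.
    f is a dict on the integers (zero off its keys); radius_bound must
    reach past both ends of the support of f from n."""
    best = Fraction(0)
    for r in range(radius_bound + 1):
        total = sum(abs(f.get(n + k, 0)) for k in range(-r, r + 1))
        best = max(best, Fraction(total, 2 * r + 1))
    return best

def noncentered_max(f, n, bound):
    """Discrete non-centered maximal function at n (windows [n-r, n+s])."""
    best = Fraction(0)
    for r in range(bound + 1):
        for s in range(bound + 1):
            total = sum(abs(f.get(n + k, 0)) for k in range(-r, s + 1))
            best = max(best, Fraction(total, r + s + 1))
    return best

# For f = indicator of {0}: Mf(n) = 1/(2|n|+1) and the non-centered
# version gives 1/(|n|+1), via the shortest window containing n and 0.
delta = {0: 1}
assert centered_max(delta, 3, 10) == Fraction(1, 7)
assert noncentered_max(delta, -2, 10) == Fraction(1, 3)
```

The indicator of $\{0\}$ used here will reappear below as the extremal example for both operators.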
The space corresponding to $W^{k,p}(\mathbb{R})$ is then defined to be the set of discrete functions with finite $w^{k,p}(\mathbb{Z})$ norm, where \[ ||f||_{w^{k,p}(\mathbb{Z})} = \sum_{j=0}^k ||f^{(j)}||_{\ell^p(\mathbb{Z})}.\] But note that by the triangle inequality, for any $k \geq 1$, \begin{equation}\label{TI} \|f^{(k)}\|_{\ell^p(\mathbb{Z})} \leq 2^k \|f\|_{\ell^p(\mathbb{Z})} \,; \end{equation} thus in the discrete setting, any $\ell^p$-bound automatically provides a $w^{k,p}$-bound, for any $k \geq 1$ (in fact, the discrete $w^{k,p}$-spaces are just the classical $\ell^p$-spaces with an equivalent norm). This might make our efforts to transfer the regularity theory for maximal operators to the discrete setting seem almost vacuous. However, the situation is highly nontrivial when we deal with $\ell^1$-functions or functions of bounded variation. We define the total variation of $f:\mathbb{Z} \to \mathbb{R}$ by \begin{equation*} \text{\rm Var}(f) = \|f'\|_{\ell^1(\mathbb{Z})} = \sum_{n=-\infty}^{\infty} |f(n+1) - f(n)|\,. \end{equation*} Our first result, a discrete version of Tanaka's theorem for the discrete {\it non-centered} maximal operator, with sharp constant, is the following. \begin{theorem}\label{thm1} Let $f: \mathbb{Z} \to \mathbb{R}$ be a function of bounded variation. Then \begin{equation*} \text{\rm Var} (\widetilde{M} f) \leq \text{\rm Var}(f)\,, \end{equation*} and the constant $C=1$ is the best possible. \end{theorem} The continuous version of this result was proved by Aldaz and P\'{e}rez L\'{a}zaro in \cite[Theorem 2.5]{AP}. We shall prove this result in Section 2, adapting some of the ideas of the original proof of Tanaka for the continuous case. It is not hard to see that the constant $C=1$ is best possible in Theorem \ref{thm1}, for it suffices to consider the function \begin{equation}\label{ex1} f(n) = \left\{ \begin{array}{cc} 1& \textrm{if} \ n=0,\\ 0& \textrm{otherwise}. \end{array} \right.
\end{equation} Dealing with the {\it centered} maximal operator is a much more subtle and intricate problem. By an extensive analysis of examples we are led to believe that the same bound should hold for the {\it centered} maximal operator:\\ \\ {\bf Question B}: Let $f: \mathbb{Z} \to \mathbb{R}$ be a function of bounded variation. Is it true that \begin{equation}\label{QB} \text{\rm Var}(Mf) \leq \text{\rm Var}(f)\ ?\\ \end{equation} Motivated by Question B, we prove the following result. \begin{theorem}\label{thm2} Let $f: \mathbb{Z} \to \mathbb{R}$ be a function in $\ell^1(\mathbb{Z})$. Then \begin{equation}\label{Var1} \text{\rm Var} (Mf) \leq \left(2 + \frac{146}{315} \right) \|f\|_{\ell^1(\mathbb{Z})}. \end{equation} \end{theorem} Theorem \ref{thm2} represents partial progress toward Question B. In fact, from (\ref{TI}), inequality (\ref{QB}) would imply (\ref{Var1}) with constant $C=2$, which would be sharp, with an extremal example given by (\ref{ex1}). We expect higher dimensional analogues of these results, both in the continuous and discrete cases, to hold as well (see the original question by Haj\l asz and Onninen \cite{HO}). However, Tanaka's method for the one dimensional continuous (uncentered) case, and ours for the discrete (centered and uncentered) cases, do not easily adapt to higher dimensions. The simplicity and innocence of the objects and statements described above might appear misleading at first glance. Before moving to the proofs, we encourage the interested reader to familiarize her/himself with the discrete maximal problem, especially Question B above, in order to better appreciate the beauty and the difficulties of the interplay between analysis and combinatorics, still not completely understood, in this problem. \section{Proof of Theorem \ref{thm1}} Since $\text{\rm Var}(|f|) \leq \text{\rm Var}(f)$ we may assume without loss of generality that $f$ takes only non-negative values. 
A function of bounded variation will certainly be bounded and thus, at each point $n$, the averages will also be bounded. However, since we do not assume $f \in \ell^1(\mathbb{Z})$, we must be aware of the fact that the supremum over these averages might not be realized. We define the {\it left} maximal operator as \begin{equation*} M_Lf(n) = \sup_{r \in \mathbb{Z}^{+}} \frac{1}{(r + \tfrac{1}{2})} \left\{\tfrac{1}{2}f(n) + \sum_{k=-r}^{k=-1} f(n+k) \right\}\,, \end{equation*} and the {\it right} maximal operator as \begin{equation*} M_Rf(n) = \sup_{s \in \mathbb{Z}^{+}} \frac{1}{(s+ \tfrac{1}{2})} \left\{\tfrac{1}{2}f(n) + \sum_{k=1}^{k=s} f(n+k) \right\}\,. \end{equation*} Observe that for each choice of $r, s \in \mathbb{Z}^{+}$ we have \begin{eqnarray*} \frac{1}{(r +s+ 1)} \sum_{k=-r}^{k=s} f(n+k) & = & \frac{(r+\tfrac{1}{2})}{(r+s+1)} \left( \frac{1}{(r + \tfrac{1}{2})} \left\{\tfrac{1}{2}f(n) + \sum_{k=-r}^{k=-1} f(n+k)\right\} \right) \\ && +\, \frac{(s+\tfrac{1}{2})}{(r+s+1)}\left( \frac{1}{(s+ \tfrac{1}{2})} \left\{\tfrac{1}{2}f(n) + \sum_{k=1}^{k=s} f(n+k) \right\} \right)\\ & \leq &\frac{(r+\tfrac{1}{2})}{(r+s+1)} M_Lf(n) +\frac{(s+\tfrac{1}{2})}{(r+s+1)} M_Rf(n) \\ & \leq& \max\{M_Lf(n), M_Rf(n)\}. \end{eqnarray*} Therefore we have \begin{equation*} \widetilde{M} f(n) \leq \max\{M_Lf(n), M_Rf(n)\}, \end{equation*} and since the reverse inequality is obvious we conclude that \begin{equation*} \widetilde{M} f(n) = \max\{M_Lf(n), M_Rf(n)\}, \end{equation*} for all $n \in \mathbb{Z}$. We will say that a point $n$ is a local maximum of $f$ if \begin{equation*} f(n-1) \leq f(n) \ \ \ \textrm{and} \ \ \ f(n) > f(n+1). \end{equation*} Similarly, a point $n$ is a local minimum for $f$ if \begin{equation*} f(n-1) \geq f(n) \ \ \ \textrm{and} \ \ \ f(n) < f(n+1). \end{equation*} The following lemma identifies a key property of the local maxima of $M_Lf$ and $M_Rf$. 
\begin{lemma}\label{lemma_extrema} $\;$\\ \begin{itemize} \item[(i)] If $n$ is a local maximum of $M_Lf$, then $M_Lf(n) = f(n)$.\\ \item[(ii)] If $n$ is a local maximum of $M_Rf$, then $M_Rf(n) = f(n)$.\\ \item[(iii)] If $n$ is a local maximum of $\widetilde{M} f$, then $\widetilde{M} f(n) = f(n)$.\\ \end{itemize} \end{lemma} \begin{proof} (i) and (ii). It suffices to prove the result for $M_Rf$; the argument for $M_Lf$ is analogous. For this we suppose that $M_Rf(n) > f(n)$ and consider two cases.\\ \\ {\it Case 1.} $M_Rf(n)$ is attained for some $s \in \mathbb{Z}^{+}$.\\ From the assumption that $M_Rf(n) > f(n)$ we know that $s \geq 1$. Therefore we have \begin{equation}\label{S1} M_Rf(n) = \frac{1}{(s + \tfrac{1}{2})} \left\{\tfrac{1}{2}f(n) + f(n+1) + \sum_{k=2}^{k=s} f(n+k) \right\}, \end{equation} in which the last sum may be vacuous. We can consider an average of length $s-1$ for the point $n+1$ to get \begin{equation}\label{S2} M_Rf(n+1) \geq \frac{1}{(s - \tfrac{1}{2})} \left\{\tfrac{1}{2}f(n+1) + \sum_{k=2}^{k=s} f(n+k) \right\}. \end{equation} Suppose that $M_Rf(n) \geq M_Rf(n+1) \geq f(n+1)$. Subtracting (\ref{S2}) from (\ref{S1}) we have \begin{equation*} M_Rf(n) \leq (s + \tfrac{1}{2})M_Rf(n) - (s - \tfrac{1}{2})M_Rf(n+1) \leq \frac{1}{2} (f(n) + f(n+1))\,, \end{equation*} which is a contradiction. Therefore $M_Rf(n) < M_Rf(n+1) $ and $n$ is not a local maximum.\\ \\ {\it Case 2.} $M_Rf(n)$ is not attained for any $s \in \mathbb{Z}^{+}$.\\ In this case we can actually prove that for any $m \in \mathbb{Z}$ we have $M_Rf(m) \geq M_Rf(n)$. For instance, take $m >n$ and let $C$ be a global upper bound for $f$. Given $\epsilon >0$ there is a large $s>0$ such that \begin{equation*} \frac{1}{(s + \tfrac{1}{2})} \left\{\tfrac{1}{2}f(n) + \sum_{k=1}^{k=s} f(n+k) \right\} \geq M_Rf(n) - \epsilon. 
\end{equation*} We then consider an average of length $s$ for the point $m$ to get \begin{align*} M_Rf(m) &\geq \frac{1}{(s + \tfrac{1}{2})} \left\{\tfrac{1}{2}f(m) + \sum_{k=1}^{k=s} f(m+k) \right\} \\ & \geq (M_Rf(n) - \epsilon) - \frac{2C(m-n)}{(s + \tfrac{1}{2})}. \end{align*} Letting $\epsilon \to 0$ and correspondingly letting $s \to \infty$ along an appropriate sequence, it follows that $M_Rf(m) \geq M_Rf(n)$. The case $m<n$ is treated analogously. Thus $n$ is not a local maximum.\\ \\ (iii) We just have to use the fact that \begin{equation*} \widetilde{M} f(n) = \max\{M_Lf(n), M_Rf(n)\}, \end{equation*} together with parts (i) and (ii) to conclude that if $n$ is a local maximum of $\widetilde{M} f$ then $\widetilde{M} f(n) = f(n)$. \end{proof} We now finish the proof of Theorem \ref{thm1}. From now on let us consider the alternating sequence of local maxima $\{a_i\}_{i\in \mathbb{Z}}$ and local minima $\{b_i\}_{i\in \mathbb{Z}}$ of $\widetilde{M} f$, satisfying \begin{equation}\label{sequence0} ...< b_{-2} < a_{-2} < b_{-1} < a_{-1} < b_0 < a_0 < b_1 < a_1 < b_2 < a_ 2 < .... \end{equation} The sequence (\ref{sequence0}) can be finite or infinite depending on the behavior of the tails of $\widetilde{M} f$. Let us consider the different cases:\\ \\ {\it Case 1}. The sequence (\ref{sequence0}) is infinite.\\ In this case we have \begin{align}\label{VarNC} \begin{split} \text{\rm Var}(\widetilde{M} f) &= 2\sum_{i=-\infty}^{\infty} \left(\widetilde{M} f(a_i) - \widetilde{M} f(b_i)\right) = 2\sum_{i=-\infty}^{\infty} \left(f(a_i) - \widetilde{M} f(b_i)\right)\\ & \leq 2\sum_{i=-\infty}^{\infty} \left(f(a_i) - f(b_i)\right) \leq \text{\rm Var}(f).\\ \end{split} \end{align} {\it Case 2}. The sequence (\ref{sequence0}) is finite on one (or both) side(s).\\ In this case several different behaviors might occur, but they are essentially treated in the same way, using (\ref{VarNC}) and a minor modification in the tail(s). 
Suppose for instance that $a_k$ is the last local maximum. The function $\widetilde{M} f(n)$ must be monotonically non-increasing for $n \geq a_k$ and since it is bounded, the limit \begin{equation*} \widetilde{M} f(\infty) = \lim_{n \to \infty} \widetilde{M} f(n) = c \end{equation*} will exist. In this case we will have \begin{equation*} \liminf_{n\to \infty}f(n) \leq c. \end{equation*} Therefore we can write \begin{align*} \begin{split} \text{\rm Var}(\widetilde{M} f) &= \text{\rm Var}(\widetilde{M} f)_{[-\infty,a_k]} + \text{\rm Var}(\widetilde{M} f)_{[a_k, \infty]} \\ &= 2\sum_{i=-\infty}^{k} \left(\widetilde{M} f(a_i) - \widetilde{M} f(b_i)\right) + (\widetilde{M} f(a_k) - c) \\ &= 2\sum_{i=-\infty}^{k} \left(f(a_i) - \widetilde{M} f(b_i)\right) + (f(a_k) - c)\\ & \leq 2\sum_{i=-\infty}^{k} \left(f(a_i) - f(b_i)\right) + (f(a_k) - c)\\ & \leq \text{\rm Var}(f)_{[-\infty,a_k]} + \text{\rm Var}(f)_{[a_k,\infty]} = \text{\rm Var}(f).\\ \end{split} \end{align*} The argument for all the other cases is a minor modification of this one. This concludes the proof of Theorem \ref{thm1}. \section{Proof of Theorem \ref{thm2}} One can begin consideration of the discrete {\it centered} maximal operator by investigating whether Lemma \ref{lemma_extrema}, or any natural modification of it, continues to hold. The following example shows that this need not be the case: \begin{equation*}\label{ex2} f(n) = \left\{ \begin{array}{cc} 10& \textrm{if} \ n=\pm4,\\ 0& \textrm{otherwise}. \end{array} \right. \end{equation*} One should not expect the local maxima of $Mf$ to touch $f$, or even expect that $Mf$ should be convex in each interval in which it disconnects from $f$. Thus new ideas are required to approach this problem.
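This behaviour can be confirmed by direct computation. The following sketch (a numerical sanity check in exact rational arithmetic, external to the argument; the truncation radius $60$ is an ad hoc choice, clearly sufficient for this finitely supported $f$) computes $Mf$ for the example above:

```python
from fractions import Fraction

def Mf(f, n, rmax=60):
    # discrete centered maximal function M f(n) = sup_r A_r f(n);
    # for this finitely supported f the supremum is attained well
    # within the truncation radius rmax (an assumption of the sketch)
    return max(Fraction(sum(f(n + k) for k in range(-r, r + 1)), 2 * r + 1)
               for r in range(rmax + 1))

f = lambda n: 10 if abs(n) == 4 else 0   # the example above

values = {n: Mf(f, n) for n in range(-6, 7)}
# local maximum of Mf at n = 0, detached from f:
is_local_max = values[-1] <= values[0] and values[0] > values[1]
```

It finds $Mf(0) = 20/9$ while $f(0) = 0$ and $Mf(\pm 1) = 20/11$, so $Mf$ has a local maximum at $n=0$ that does not touch $f$, confirming the failure of Lemma \ref{lemma_extrema}(iii) for the centered operator.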
We start again by assuming that $f$ takes only non-negative values, and consider the sequence of local maxima $\{a_i\}_{i\in \mathbb{Z}}$ and local minima $\{b_i\}_{i\in \mathbb{Z}}$ of $Mf$, satisfying \begin{equation}\label{sequence} ...< b_{-2} < a_{-2} < b_{-1} < a_{-1} < b_0 < a_0 < b_1 < a_1 < b_2 < a_2 < .... \end{equation} We have \begin{equation}\label{Var} \text{\rm Var}(Mf) = 2\sum_{i=-\infty}^{\infty} \left(Mf(a_i) - Mf(b_i)\right). \end{equation} {\bf Remark}: If the sequence (\ref{sequence}) terminates on one or both ends, we modify the sum (\ref{Var}) accordingly as follows. Since $f \in \ell^1(\mathbb{Z})$ we must have $\lim_{n \to \pm \infty} Mf(n) = 0$, and this implies that if the sequence terminates, it would terminate with a last maximum $a_k$ and/or a first maximum $a_l$ (i.e. it would not terminate with a minimum). If there is a first maximum $a_l$ we consider \begin{equation*} \text{\rm Var}(Mf) = 2 Mf(a_l) + 2 \sum_{i=l+1}^{\infty} \left(Mf(a_i) - Mf(b_i)\right), \end{equation*} and make minor modifications in the argument below; similar modifications apply if there is a last maximum $a_k$. For each local maximum $a_i$ we let $r_i$ be the smallest radius such that \begin{equation}\label{avmax} Mf(a_i) = A_{r_i}f(a_i) = \frac{1}{2r_i + 1} \sum_{k=-r_i}^{k=r_i} f(a_i+k)\,, \end{equation} where we denote by $A_{r}$ the averaging operator of radius $r$ (since $f \in \ell^1(\mathbb{Z})$ this radius exists). For each point $b_i$ we consider the average of radius $s_i = r_i + (a_i - b_i)$ and since we have \begin{equation}\label{avmin} Mf(b_i) \geq A_{s_i}f(b_i)\,, \end{equation} it follows from (\ref{Var}), (\ref{avmax}) and (\ref{avmin}) that \begin{equation}\label{VarSum} \text{\rm Var}(Mf) \leq 2\sum_{i=-\infty}^{\infty} \left(A_{r_i}f(a_i) - A_{s_i}f(b_i)\right). \end{equation} Observe that the interval $[b_i - s_i, b_i + s_i]$ contains the interval $[a_i - r_i, a_i + r_i]$ and they both have the same right endpoint: indeed, $b_i + s_i = b_i + r_i + (a_i - b_i) = a_i + r_i$, while $b_i - s_i = (a_i - r_i) - 2(a_i - b_i) \leq a_i - r_i$.
Now we fix an integer $n$ and we will evaluate the maximum contribution that $f(n)$ can give to the sum on the right hand side of (\ref{VarSum}). For each $i \in \mathbb{Z}$ , if $n \in [a_i - r_i, a_i + r_i]$, then $n \in [b_i - s_i, b_i + s_i]$ and $f(n)$ contributes to $\left(A_{r_i}f(a_i) - A_{s_i}f(b_i)\right)$ the amount \begin{equation}\label{contribution of n} \frac{f(n)}{2r_i + 1} - \frac{f(n)}{2( r_i + (a_i - b_i)) + 1}. \end{equation} If $n \notin [a_i - r_i, a_i + r_i]$ the contribution of $f(n)$ to $\left(A_{r_i}f(a_i) - A_{s_i}f(b_i)\right)$ is zero or even negative and we disregard it. Now observe that if the contribution (\ref{contribution of n}) occurs we must have $r_i \geq |n - a_i|$, and therefore one can show \begin{align}\label{upperbound} \begin{split} f(n) &\left(\frac{1}{2r_i + 1} - \frac{1}{2( r_i + (a_i - b_i)) + 1}\right)\\ & \ \ \ \ \ \ \ \leq f(n) \left(\frac{1}{2|n-a_i| + 1} - \frac{1}{2( |n-a_i| + (a_i - b_i)) + 1}\right)\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \leq f(n) \left(\frac{1}{2|n-a_i| + 1} - \frac{1}{2( |n-a_i| + (a_i - a_{i-1})) + 1}\right), \end{split} \end{align} where in the last step we just used the ordering (\ref{sequence}). If we sum (\ref{upperbound}) over the index $i$ we obtain an upper bound for the total contribution of $f(n)$ to the right hand side of (\ref{VarSum}), namely \begin{equation}\label{FinalSum} 2 f(n) \sum_{i=-\infty}^{\infty}\left(\frac{1}{2|n-a_i| + 1} - \frac{1}{2( |n-a_i| + (a_i - a_{i-1})) + 1}\right). \end{equation} Theorem \ref{thm2} will follow if we prove that for {\it any} strictly increasing sequence $\{a_i\}_{i \in \mathbb{Z}}$ of integers, the sum in (\ref{FinalSum}) is bounded by a universal constant $C$. This is proved in Lemma \ref{lemma2} below. 
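The claimed bound can be probed numerically before turning to the proof. In the sketch below (a sanity check, not part of the argument; the sequence lengths, gap sizes and random seed are arbitrary choices) each finite window of the sum is a lower bound for the full bi-infinite sum, since every term is positive; the sketch also records the arithmetic $2\big(1+\tfrac15+\tfrac17-\tfrac19\big)=2+\tfrac{146}{315}$ linking the lemma to the constant of Theorem \ref{thm2}:

```python
from fractions import Fraction
import random

def window_sum(n, a):
    # a finite window of the sum in (\ref{FinalSum}); every term is
    # positive, so this is a lower bound for the full bi-infinite sum
    return sum(Fraction(1, 2 * abs(n - cur) + 1)
               - Fraction(1, 2 * (abs(n - cur) + (cur - prev)) + 1)
               for prev, cur in zip(a, a[1:]))

C = 1 + Fraction(1, 5) + Fraction(1, 7) - Fraction(1, 9)   # = 388/315

def worst_case(min_gap, trials=300, seed=0):
    # largest window sum found over pseudo-random strictly increasing
    # integer sequences with consecutive gaps in [min_gap, 6]
    rng = random.Random(seed)
    worst = Fraction(0)
    for _ in range(trials):
        a = [-80]
        for _ in range(60):
            a.append(a[-1] + rng.randint(min_gap, 6))
        worst = max(worst, window_sum(rng.randint(-15, 15), a))
    return worst
```

No window exceeds $4/3$ in general, nor $388/315$ when all gaps are at least $2$.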
To conclude the proof of Theorem \ref{thm2}, we will ultimately sum the maximum contributions of all $f(n)$'s to the total variation (\ref{VarSum}) of $Mf$ to prove, as desired, that \begin{equation*} \text{\rm Var}(Mf) \leq 2 C \sum_{n=-\infty}^{\infty} f(n) = 2 C\,\|f\|_{\ell^1(\mathbb{Z})}. \end{equation*} \begin{lemma}\label{lemma2} Given $n\in \mathbb{Z}$, for {\it any} strictly increasing sequence $\{a_i\}_{i \in \mathbb{Z}}$ of integers, \begin{equation*} \sum_{i=-\infty}^{\infty}\left(\frac{1}{2|n-a_i| + 1} - \frac{1}{2( |n-a_i| + (a_i - a_{i-1})) + 1}\right)\leq \frac{4}{3}. \end{equation*} If furthermore $a_i - a_{i-1} \geq 2$ for all $i\in \mathbb{Z}$, $\frac{4}{3}$ may be replaced by $1 + \frac{1}{5} + \frac{1}{7} - \frac{1}{9}.$ \end{lemma} \begin{proof} It is sufficient to prove the result for $n=0$ since we can shift any sequence $a_i \mapsto a_i +n$. For $n=0$ we aim to prove that \begin{equation}\label{nicesum} S = \sum_{i=-\infty}^{\infty}\left(\frac{1}{2|a_i| + 1} - \frac{1}{2( |a_i| + (a_i - a_{i-1})) + 1}\right)\leq C\,. \end{equation} By shifting the indices we can also assume that $a_{-1} \leq 0 < a_0$. We divide our sum (\ref{nicesum}) into two parts \begin{align*} \begin{split} S &= \sum_{i=-\infty}^{-1}\left(\frac{1}{-2a_i + 1} - \frac{1}{-2a_{i-1} + 1}\right) + \sum_{i=0}^{\infty}\left(\frac{1}{2a_i + 1} - \frac{1}{2( a_i + (a_i - a_{i-1})) + 1}\right)\\ \\ & = S_1 + S_2. \end{split} \end{align*} The first sum $S_1$ is a telescoping sum and we find that \begin{equation*} S_1 \leq \frac{1}{-2a_{-1} + 1}. \end{equation*} (This continues to hold if the sequence terminates to the left as $i \to -\infty$.) The second sum is more involved and we use the following inequality, for integers $m > n\geq 0$: \begin{equation}\label{ineqtricky} \frac{1}{2m + 1} - \frac{1}{2( m + (m - n)) + 1} \leq \frac{1}{2(n+1) +1} - \frac{1}{2(m+1) + 1}. 
\end{equation} Inequality (\ref{ineqtricky}) can be proved simply by clearing denominators and observing that $m \geq n+1$. We then use (\ref{ineqtricky}) to bound $S_2$ as follows: \begin{align*} S_2 &= \left(\frac{1}{2a_0 + 1} - \frac{1}{2( a_0 + (a_0 - a_{-1})) + 1}\right) + \sum_{i=1}^{\infty}\left(\frac{1}{2a_i + 1} - \frac{1}{2( a_i + (a_i - a_{i-1})) + 1}\right)\\ \\ & \leq \left(\frac{1}{2a_0 + 1} - \frac{1}{2( a_0 + (a_0 - a_{-1})) + 1}\right) + \sum_{i=1}^{\infty} \left(\frac{1}{2(a_{i-1}+1) +1} - \frac{1}{2(a_i+1) + 1}\right)\\ \\ & \leq \left(\frac{1}{2a_0 + 1} - \frac{1}{2( a_0 + (a_0 - a_{-1})) + 1}\right)+ \frac{1}{2(a_{0}+1) +1}.\\ \end{align*} (This also continues to hold if the sequence terminates to the right as $i \to \infty$.) We have thus arrived at \begin{align}\label{Final1} \begin{split} S & = S_1 + S_2 \\ & \leq \frac{1}{-2a_{-1} + 1} + \left(\frac{1}{2a_0 + 1} - \frac{1}{2( a_0 + (a_0 - a_{-1})) + 1}\right)+ \frac{1}{2(a_{0}+1) +1}. \end{split} \end{align} Since $a_{-1} \leq 0 < a_0$ are integers, this is bounded by $4/3$. But in our application $a_{-1}$ and $a_0$ are not consecutive integers (there must be a local minimum between them), and the right hand side of (\ref{Final1}) is then maximized when $a_{-1} = 0$ and $a_0 = 2$. This finishes the proof of the lemma and hence of Theorem \ref{thm2}. \end{proof} \section*{Acknowledgments} The authors would like to thank Jeffrey Vaaler, Jean Bourgain, Diego Moreira and Dimitris Koukoulopoulos for helpful comments during the preparation of this work. J. Bober, E. Carneiro and L. B. Pierce acknowledge support from the Institute for Advanced Study and the National Science Foundation under agreement No. DMS-0635607. E. Carneiro also acknowledges support from CAPES/FULBRIGHT grant BEX 1710-04-4. L. B. Pierce is also funded by the Simonyi Fund and National Science Foundation grant DMS-0902658. \bibliographystyle{amsplain}
\section{Introduction}\label{Intro} The weighted Hurwitz product arises naturally in the study of weighted derivations and weighted Rota--Baxter operators; see \cite{GKZ2008, GKZ2014} and their references. We review these concepts from a~ca\-tegorical perspective in Section~\ref{Review}. In Section~\ref{gimac} we present a monoidal structure on the category $\operatorname{Gph}{\mathscr V}$ of graphs in a monoidal additive category ${\mathscr V}$. We show how these derivations and operators on a monoid $A$ in ${\mathscr V}$ can be viewed as monoid and semigroup structures on particular graphs constructed from~$A$. The main theme of the paper is to lift concepts def\/ined on algebras (or monoids) to concepts def\/ined on monoidal categories\footnote{Some authors call this process ``categorif\/ication''.}. The $\lambda$-weighted Hurwitz product is traditionally def\/ined on the abelian group $A^{\mathbb{N}}$ of sequences in an algebra~$A$. This lifts in an obvious way, for a set $\Lambda$, to a def\/inition of $\Lambda$-weighted product on the category ${\mathscr V}^{\mathbb{N}}$ of sequences of objects in a reasonable monoidal ${\mathscr V}$. Rather than ${\mathscr V}^{\mathbb{N}}$, our interest here is in the category $[\mathfrak{S},{\mathscr V}]$ of ${\mathscr V}$-valued Joyal species \cite{Species} and a weighted version of the convolution product, called Cauchy product in~\cite{AgMa2010}. This is fully discussed in Sections~\ref{Ltenspecies}--\ref{weightedbimonoidalefam} and \ref{titpa}, and motivates a def\/inition of weighted categorical derivation in Section~\ref{wcd}. The weighted tensors def\/ine an interesting family of tensor products for linear representations of the symmetric groups which, by choosing specif\/ic weights, include the Cauchy product and the Heisenberg product~\cite{MoreiraPhD}. 
Finally, Sections~\ref{charades} and \ref{tds} suggest and explore a weighted tensor product for species on f\/inite vector spaces (or charades~\cite{Kapr1995, UCL}). This generalizes, in particular, the essentially classical tensor product of representations of the general linear groups over a f\/inite f\/ield, proved braided in~\cite{GLFq}. Further insights into the subject of this paper appear in~\cite{GaSt}. \section{Review of weighted derivations and Rota--Baxter operators}\label{Review} The inspiration for the family of tensor products on species came from the $\lambda$-weighted product of Hurwitz series as discussed in \cite{GKZ2008, GKZ2014} and their references. They begin by def\/ining a {\em derivation of weight~$\lambda$} on an algebra $A$ over a commutative ring $k$, with given $\lambda \in k$, to be a $k$-module morphism $d\colon A \to A$ satisfying $d(1)=0$ and \begin{gather*} d(ab) = d(a)b + a d(b) + \lambda d(a)d(b) . \end{gather*} They note the generalized Leibnitz rule \begin{gather*} d^n(ab)=\sum_{k=0}^n{\sum_{j=0}^{n-k}{\binom{n}{k}\binom{n-k}{j}\lambda^kd^{n-j}(a)d^{k+j}(b)}} . \end{gather*} However, I prefer to write this in the form \begin{gather}\label{genLeib} d^n(ab)=\sum_{n=r+s+t}{\binom{n}{r,s,t}\lambda^td^{r+t}(a)d^{s+t}(b)} \end{gather} to emphasise the relationship to the trinomial expansion rule for $(x+y+\lambda xy)^n$. Here \begin{gather*} \binom{n}{r,s,t} = \frac{n!}{r!s!t!} . \end{gather*} The {\em $\lambda$-Hurwitz product} on $A^{\mathbb{N}}$ can be def\/ined by the clearly related equation \begin{gather}\label{LambHurw} \big(f\cdot^{\lambda}g\big)(n)=\sum_{n=r+s+t}{\binom{n}{r,s,t}\lambda^t f(r+t) g(s+t)} . \end{gather} \begin{Example} For $\lambda = 0$, $k=\mathbb{R}$ and $A$ the algebra of smooth functions $f\colon \mathbb{R}\to \mathbb{R}$ under pointwise addition and multiplication, the dif\/ferentiation function $d\colon A\to A$ is a 0-weighted derivation by the classical Leibnitz rule. 
\end{Example} \begin{Example} For $\lambda$ invertible, $k=\mathbb{R}$ and $A$ the algebra of functions $f\colon \mathbb{R}\to \mathbb{R}$ under pointwise addition and multiplication, the function $d\colon A\to A$ def\/ined by \begin{gather*} d(f)(x) = \frac{f(x+\lambda)-f(x)}{\lambda} \end{gather*} is a $\lambda$-weighted derivation. \end{Example} \begin{Example}\label{consecdifference} Def\/ine $d\colon A^{\mathbb{N}} \to A^{\mathbb{N}}$ by $d(s)(n) = s(n+1) -s(n)$. This $d$ is a 1-weighted derivation when $A^{\mathbb{N}}$ is equipped with the pointwise addition and multiplication. \end{Example} \begin{Example}\label{diffseq} Def\/ine $d\colon A^{\mathbb{N}} \to A^{\mathbb{N}}$ by $d(f)(n) = f(n+1)$. This $d$ is a $\lambda$-weighted derivation when $A^{\mathbb{N}}$ is equipped with the $\lambda$-Hurwitz product for any $\lambda$. Notice that we have an algebra morphism $d^*\colon A^{\mathbb{N}} \to (A^{\mathbb{N}})^{\mathbb{N}}$ def\/ined by $d^*(f)(m)(n)= f(m+n)$. This may motivate the next def\/inition. \end{Example} Def\/ine $d^*\colon A\to A^{\mathbb{N}}$ by $d^*(a)(n)=d^n(a)$. We see that the Leibnitz rule \eqref{genLeib} amounts to \begin{Proposition} $d^*\colon A\to A^{\mathbb{N}}$ is an algebra morphism for all $\lambda$-weighted derivations $d$ on~$A$, where $A^{\mathbb{N}}$ has the $\lambda$-Hurwitz product. \end{Proposition} In fact, $A\mapsto A^{\mathbb{N}}$ is a comonad \begin{gather}\label{cmd} G=\big((-)^{\mathbb{N}},\varepsilon, \delta \big) \end{gather} on the category $\mathrm{Alg}_k$ of $k$-algebras whose Eilenberg--Moore-coalgebras are $k$-algebras $A$ equipped with a $\lambda$-derivation, so-called {\em $\lambda$-derivation algebras}; write $\mathrm{DA}_{\lambda}$ for the category of these. The morphism $d^*\colon A\to A^{\mathbb{N}}$ is the coaction of the comonad. Where there is dif\/ferentiation, there should also be integration. 
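The generalized Leibnitz rule \eqref{genLeib} and Example \ref{diffseq} can be checked mechanically on truncated sequences. The sketch below (exact arithmetic; the weight, sequence length and sample entries are arbitrary choices of the illustration) implements the $\lambda$-Hurwitz product \eqref{LambHurw} and verifies that the shift $d(f)(n) = f(n+1)$ is a $\lambda$-weighted derivation for it:

```python
from fractions import Fraction
from math import factorial

def hurwitz(f, g, lam, N):
    # lambda-Hurwitz product (\ref{LambHurw}) of two sequences given
    # as lists, truncated to the first N entries
    out = []
    for n in range(N):
        total = Fraction(0)
        for r in range(n + 1):
            for s in range(n + 1 - r):
                t = n - r - s
                trinomial = Fraction(factorial(n),
                                     factorial(r) * factorial(s) * factorial(t))
                total += trinomial * lam ** t * f[r + t] * g[s + t]
        out.append(total)
    return out

def d(f):
    # the shift of Example \ref{diffseq}: d(f)(n) = f(n + 1)
    return f[1:]

lam = Fraction(3, 2)
f = [Fraction(k * k + 1) for k in range(12)]
g = [Fraction(2 * k + 5) for k in range(12)]

N = 8
lhs = d(hurwitz(f, g, lam, N + 1))               # d(f * g)
rhs = [x + y + lam * z for x, y, z in zip(hurwitz(d(f), g, lam, N),
                                          hurwitz(f, d(g), lam, N),
                                          hurwitz(d(f), d(g), lam, N))]
```

The two sides agree entrywise, in accordance with $d(ab) = d(a)b + a d(b) + \lambda d(a) d(b)$.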
A {\em Rota--Baxter operator of weight $\lambda$} on a $k$-algebra $A$ is a $k$-linear morphism $P\colon A \to A$ satisfying \begin{gather*} P(a)P(b) = P (P(a)b + aP(b)+\lambda ab ) . \end{gather*} The pair $(A,P)$ is called a {\em $\lambda$-weighted Rota--Baxter algebra}. Write $\mathrm{RBA}_{\lambda}$ for the category of these. \begin{Example} For $\lambda = 0$, $k=\mathbb{R}$ and $A$ the algebra of continuous functions $f\colon \mathbb{R}\to \mathbb{R}$ under pointwise addition and multiplication, the integration function $P\colon A\to A$, def\/ined by $P(f)(x) = \int_0^xf(t)\mathrm{d}t$, is a 0-weighted Rota--Baxter operator by the classical integration-by-parts rule. \end{Example} \begin{Example}\label{parsum} For $\lambda = 1$ and any $k$-algebra $A$, def\/ine $P\colon A^{\mathbb{N}}\to A^{\mathbb{N}}$ to take a sequence $u$ in $A$ to its sequence $P(u)$ of partial sums \begin{gather*} P(u)(n) = \sum_{i=0}^{n-1}{u(i)} . \end{gather*} Then $P$ is a 1-weighted Rota--Baxter operator on $A^{\mathbb{N}}$ with pointwise addition and multiplication. See \cite{Baxt1960, Rota1969}. For $d$ the consecutive dif\/ference operator as def\/ined in Example~\ref{consecdifference}, notice that $d\circ P=1_{A^{\mathbb{N}}}$. \end{Example} \begin{Example} If $Q$ is a 1-weighted Rota--Baxter operator on $A$ then $P(a) = \lambda Q(a)$ def\/ines a~$\lambda$-weighted Rota--Baxter operator $P$ on $A$. \end{Example} A {\em $\lambda$-weighted derivation RB-algebra} is a $k$-algebra $A$ equipped with a $\lambda$-weighted deriva\-tion~$d$ and a $\lambda$-weighted Rota--Baxter operator~$P$ such that $d\circ P=1_A$. Write $\mathrm{DRB}_{\lambda}$ for the category of these. \begin{Proposition}[see \cite{GKZ2008}] \label{lift} Let $P$ be a RB-operator of weight $\lambda$ on~$A$.
Then $A^{\mathbb{N}}$ equipped with the $\lambda$-Hurwitz product, the derivation~$d$ of Example~{\rm \ref{diffseq}}, and~$P$ defined by \begin{gather*} P(f)(n) = \begin{cases} P(f(0)) & \mbox{for } n=0, \\ f(n-1) & \mbox{for } n>0 \end{cases} \end{gather*} is a $\lambda$-weighted derivation RB-algebra. Moreover, the following square commutes \begin{equation*} \xymatrix{ A^{\mathbb{N}} \ar[rr]^-{P} \ar[d]_-{\mathrm{ev}_0} && A^{\mathbb{N}} \ar[d]^-{\mathrm{ev}_0} \\ A \ar[rr]_-{P} && A} \end{equation*} \end{Proposition} With a little more work following Proposition~\ref{lift}, we see that the comonad~$G$~\eqref{cmd} lifts to $\mathrm{RBA}_{\lambda}$. In particular, with $\mathrm{V}$ denoting the forgetful functor, we have a comonad $\bar{G}$ and a~commutative square \begin{gather*} \xymatrix{ \mathrm{RBA}_{\lambda} \ar[rr]^-{\bar{G}} \ar[d]_-{\mathrm{V}} && \mathrm{RBA}_{\lambda} \ar[d]^-{\mathrm{V}} & \\ \mathrm{Alg}_{k} \ar[rr]_-{G} && \mathrm{Alg}_{k} & } \end{gather*} Write $\mathrm{RBA}_{\lambda -}$ for the category of $\lambda$-weighted Rota--Baxter algebras where we do not insist on the algebras having a~unit. \begin{Proposition}\label{diamond} For each $(A,\cdot,P)\in \mathrm{RBA}_{\lambda -}$, there is an associative binary operation $a\diamond b$ defined on $A$ by \begin{gather*} a\diamond b = P(a)\cdot b + a\cdot P(b) + \lambda a \cdot b . \end{gather*} Moreover, $T(A,\cdot, P)=(A,\diamond, P)$ defines an endofunctor \begin{gather*} T\colon \ \mathrm{RBA}_{\lambda -} \longrightarrow \mathrm{RBA}_{\lambda -} , \end{gather*} which is $($well-$)$copointed by a natural transformation $\gamma \colon T\Rightarrow 1_{\mathrm{RBA}_{\lambda -}}$ whose component at $(A,\cdot,P)$ is $P\colon (A,\diamond, P) \to (A,\cdot,P)$. 
\end{Proposition} \begin{Remark}\label{Lack} The day after my seminar talk of 4 February 2015, on the material of this section and Section~\ref{Ltenspecies}, Stephen Lack made the following comments: \begin{enumerate}\itemsep=0pt \item Consider the category $[\Sigma \mathbb{N},{\mathscr V}]$ whose objects are pairs $(M,d)$ consisting of an object $M$ of a nice monoidal additive category ${\mathscr V}$ and an endomorphism $d\colon M\to M$. The forgetful functor \begin{gather}\label{undder} \mathrm{U}\colon \ [\Sigma \mathbb{N},{\mathscr V}] \longrightarrow {\mathscr V} , \end{gather} taking $(M,d)$ to $M$, has a right adjoint taking $M$ to $(M^{\mathbb{N}},d)$ where $d(f)(n) = f(n+1)$. There is a monoidal structure on $[\Sigma \mathbb{N},{\mathscr V}]$ def\/ined by \begin{gather*} (M,d)\otimes^{\lambda}(N,d) = (M\otimes N, d\otimes 1+1\otimes d + \lambda d\otimes d) . \end{gather*} The monoids in this monoidal category are precisely $\lambda$-derivation algebras. Moreover, $\mathrm{U}$~\eqref{undder}~and its right adjoint form a~monoidal adjunction which therefore def\/ines an adjunction between the categories of monoids. This adjunction generates the comonad~$G$~\eqref{cmd} on the category $\operatorname{Mon}{\mathscr V}$ of monoids in~${\mathscr V}$. \item There is a bialgebra structure on the polynomial algebra $k[x]$ with comultiplication the algebra morphism $\delta \colon k[x] \to k[x,y]\cong k[x]\otimes k[x]$ def\/ined by \begin{gather*} \delta(x) = x + y +\lambda xy . \end{gather*} Then the convolution product on the left-hand side of the canonical isomorphism \begin{gather*} \mathrm{Mod}_k(k[x],A)\cong A^{\mathbb{N}} \end{gather*} transports to the $\lambda$-Hurwitz product on $A^{\mathbb{N}}$. \item It feels like there should be a multicategory/promonoidal/substitude structure on $[\Sigma \mathbb{N},{\mathscr V}]$ for dealing with RB-algebras. 
\end{enumerate} \end{Remark} \section{Graphs in monoidal additive categories}\label{gimac} Let ${\mathscr V}$ be a monoidal additive category. We act as if the monoidal structure were strict. Let $\operatorname{Gph}{\mathscr V}$ be the category of directed graphs in ${\mathscr V}$. So an object has the form of a pair of parallel morphisms $s, t \colon E \longrightarrow A$ in~${\mathscr V}$; we use $s$ and $t$ for source and target morphisms in all graphs. A~morphism $(f,\phi) \colon (A,E) \longrightarrow (B,F)$ in $\operatorname{Gph}{\mathscr V}$ consists of morphisms $f$ and $\phi$ making the following diagram commute \begin{gather*} \xymatrix{ A \ar[d]_-{f} & E \ar[l]_-{s} \ar[r]^-{t} \ar[d]^-{\phi} & A \ar[d]^-{f} \\ B & F \ar[l]^-{s} \ar[r]_-{t} & B} \end{gather*} Write $\mathrm{ver} \colon \operatorname{Gph}{\mathscr V} \!\longrightarrow\! {\mathscr V}$ for the forgetful functor taking $(A,E)$ to $A$ and write \mbox{$\mathrm{edg} \colon \operatorname{Gph}{\mathscr V} \!\longrightarrow\! {\mathscr V}$} for the forgetful functor taking~$(A,E)$ to~$E$. We will use the notation $\langle n \rangle = \{ 1, 2, \dots, n\}$. For $R\subseteq \langle n \rangle$, write \begin{gather*} \chi_R \colon \ \langle n \rangle \longrightarrow \{s,t\} \end{gather*} for the characteristic function of $R$ def\/ined by \begin{gather*} \chi_R(i) = \begin{cases} s & \mbox{for } i\in R, \\ t & \mbox{for } i \notin R . \end{cases} \end{gather*} Choose an endomorphism $\lambda \colon I \to I$ of the tensor unit $I$ in ${\mathscr V}$. For any $f \colon A \to B$ in ${\mathscr V}$, we def\/ine $(\lambda f \colon A \to B) = (\lambda \otimes f \colon I\otimes A \to I\otimes B)$. 
Given a list $(A_1,E_1),\dots, (A_n,E_n)$ of objects of $\operatorname{Gph}{\mathscr V}$, we def\/ine an $n$-fold tensor product \begin{gather}\label{gphtensor} {\otimes}_{1\le i \le n}^{\lambda}{(A_i,E_i)} = \big({\otimes}_{1\le i \le n}{A_i}, {\otimes}_{1\le i \le n}{E_i} \big) , \end{gather} where \begin{gather*} s = \sum_{\varnothing \neq R\subseteq \langle n \rangle}{\lambda^{(\#R-1)}\chi_R(1)\otimes \dots \otimes \chi_R(n)} \qquad \text{and} \qquad t = t \otimes \dots \otimes t . \end{gather*} For $n=2$ this gives a binary tensor product \begin{gather*} (A,E){\otimes}^{\lambda} (B,F) = (A\otimes B, E\otimes F) \end{gather*} with \begin{gather*} s = \lambda s\otimes s + s\otimes t + t\otimes s \qquad \text{and} \qquad t = t \otimes t . \end{gather*} The unit for this tensor is the graph $(I,I)$ with $s = 0 \colon I\to I$ and $t = 1_I\colon I \to I$. \begin{Proposition}\label{monstrgph} A monoidal structure on $\operatorname{Gph}{\mathscr V}$ is defined by \eqref{gphtensor} for any given $\lambda \in {\mathscr V}(I,I)$. Both $\mathrm{ver}$ and $\mathrm{edg} \colon \operatorname{Gph}{\mathscr V} \longrightarrow {\mathscr V}$ are strict monoidal. \end{Proposition} \begin{proof} Easy calculations of the source morphisms for \begin{gather*} \big((A,E){\otimes}^{\lambda} (B,F)\big){\otimes}^{\lambda} (C,G) \qquad \text{and} \qquad (A,E){\otimes}^{\lambda} \big((B,F){\otimes}^{\lambda} (C,G)\big) \end{gather*} show they agree with that of the triple tensor product. The target morphisms obviously agree. What this means is that the associativity constraints for~${\mathscr V}$ lift through $\mathrm{ver}$ and $\mathrm{edg}$ to $\operatorname{Gph}{\mathscr V}$ and are therefore coherent. \end{proof} Let $[\Sigma \mathbb{N},{\mathscr V}]$ denote the category whose objects $(A, e\colon A\to A)$ consist of an object~$A$ of~${\mathscr V}$ equipped with an endomorphism~$e$. 
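The ``easy calculations'' in the proof of Proposition~\ref{monstrgph} can also be carried out mechanically. In the sketch below (a formal check; the encoding is ad hoc to the illustration) a morphism expression is a dictionary sending a word of source/target letters, one letter per tensor factor, to a polynomial in $\lambda$ recorded as a degree-to-coefficient dictionary; both bracketings of the source morphism are compared with the $n=3$ case of \eqref{gphtensor}:

```python
from itertools import combinations

def tensor(x, y):
    # formal tensor product: concatenate words, multiply lambda-polynomials
    out = {}
    for wx, px in x.items():
        for wy, py in y.items():
            poly = out.setdefault(wx + wy, {})
            for dx, cx in px.items():
                for dy, cy in py.items():
                    poly[dx + dy] = poly.get(dx + dy, 0) + cx * cy
    return out

def add(*terms):
    out = {}
    for x in terms:
        for w, p in x.items():
            poly = out.setdefault(w, {})
            for deg, c in p.items():
                poly[deg] = poly.get(deg, 0) + c
    return out

def lam_shift(x):
    # multiply an expression by one extra factor of lambda
    return {w: {deg + 1: c for deg, c in p.items()} for w, p in x.items()}

def atom(letter):
    return {(letter,): {0: 1}}

def s_tensor(sX, tX, sY, tY):
    # source of the binary tensor: lambda s⊗s + s⊗t + t⊗s
    return add(lam_shift(tensor(sX, sY)), tensor(sX, tY), tensor(tX, sY))

sA, tA = atom('sA'), atom('tA')
sB, tB = atom('sB'), atom('tB')
sC, tC = atom('sC'), atom('tC')

left  = s_tensor(s_tensor(sA, tA, sB, tB), tensor(tA, tB), sC, tC)
right = s_tensor(sA, tA, s_tensor(sB, tB, sC, tC), tensor(tB, tC))

# the n = 3 instance of the closed formula: nonempty R ⊆ {1,2,3}
letters = [('sA', 'tA'), ('sB', 'tB'), ('sC', 'tC')]
closed = {}
for k in range(1, 4):
    for R in combinations(range(3), k):
        word = tuple(s if i in R else t for i, (s, t) in enumerate(letters))
        closed[word] = {k - 1: 1}
```

All three expressions coincide, as the proof asserts.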
Let \begin{gather}\label{jay} J\colon \ [\Sigma \mathbb{N},{\mathscr V}] \longrightarrow \operatorname{Gph}{\mathscr V} \end{gather} be the functor def\/ined by $J(A, e) = (A,A)$ with $s = e$ and $t = 1_A$; and $Jf=(f,f)$. Notice also that a morphism $(f,\phi) \colon (B,F) \to J(A,e)$ in $\operatorname{Gph}{\mathscr V}$ with codomain in the subcategory amounts to a commutative diagram \begin{gather*} \xymatrix{ F \ar[r]^-{t} \ar[d]_-{s} & B \ar[r]^-{f} & A \ar[d]^-{e} \\ B \ar[rr]_-{f} & & A } \end{gather*} where $\phi$ is forced to be $f\circ t \colon F \to A$. Clearly $J$ is fully faithful and the monoidal structure of Proposition~\ref{monstrgph} restricts to a monoidal structure on $[\Sigma \mathbb{N},{\mathscr V}]$ yielding~\eqref{jay} as a strict monoidal functor. Indeed, this is none other than the monoidal structure of Remark~\ref{Lack}, item~1. \begin{Definition}\label{wdermon} A {\em $\lambda$-weighted derivational monoid} in ${\mathscr V}$ is a monoid $(A,d)$ in $[\Sigma \mathbb{N},{\mathscr V}]$ equipped with the monoidal structure obtained as the restriction through \eqref{jay} of that of Proposition~\ref{monstrgph} on $\operatorname{Gph}{\mathscr V}$. An object $(A,d) \in [\Sigma \mathbb{N},{\mathscr V}]$ with an associative binary operation will be called a~{\em $\lambda$-weighted derivational semigroup}. \end{Definition} More explicitly, a $\lambda$-weighted derivational monoid is a monoid $A$ in ${\mathscr V}$ equipped with an endomorphism $d\colon A \to A$ satisfying the $\lambda$-weighted equation \begin{gather*} d \circ \mu = \mu \circ (\lambda d\otimes d + d\otimes 1 + 1\otimes d) , \end{gather*} and the equation $d\circ \eta = 0$ (where $\eta \colon I\to A$ is the unit of~$A$).
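As a concrete instance (in ${\mathscr V} = \mathrm{Mod}_{\mathbb{Q}}$ with the pointwise product, as in the second example of Section~\ref{Review}), the equation just displayed can be checked for the $\lambda$-difference operator. A sketch, with arbitrarily chosen sample polynomials and evaluation points:

```python
from fractions import Fraction

lam = Fraction(5, 2)

def d(f):
    # the lambda-difference operator d(f)(x) = (f(x + lam) - f(x)) / lam
    return lambda x: (f(x + lam) - f(x)) / lam

f = lambda x: x ** 3 - 2 * x + 1
g = lambda x: 4 * x ** 2 + x - 3
fg = lambda x: f(x) * g(x)

# d∘mu = mu∘(lam d⊗d + d⊗1 + 1⊗d), checked pointwise in exact arithmetic
checks = all(
    d(fg)(x) == lam * d(f)(x) * d(g)(x) + d(f)(x) * g(x) + f(x) * d(g)(x)
    for x in (Fraction(k, 3) for k in range(-9, 10))
)
```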
There is an isomorphism of categories \begin{gather*} \mathrm{op} \colon \ \operatorname{Gph}{\mathscr V} \longrightarrow \operatorname{Gph}{\mathscr V} \end{gather*} taking $(A,E)$ to $(A,E)^{\mathrm{op}}$ for which~$A$ and~$E$ are unchanged but $s$ and $t$ have been interchanged. Put \begin{gather*} J^{\mathrm{op}} = \big([\Sigma \mathbb{N},{\mathscr V}] \stackrel{J}\longrightarrow \operatorname{Gph}{\mathscr V} \stackrel{\mathrm{op}}\longrightarrow \operatorname{Gph}{\mathscr V} \big) . \end{gather*} Like $J$, this composite $J^{\mathrm{op}}$ is fully faithful. However, the image of $J^{\mathrm{op}}$ is \textit{not} closed under the monoidal structure of Proposition~\ref{monstrgph}. All we obtain on $[\Sigma \mathbb{N},{\mathscr V}]$ is a structure of multicategory (sometimes called a ``coloured operad''). The sets of multimorphisms are def\/ined by \begin{gather}\label{multimorph} \mathrm{P}_{\lambda} ((A_1,p_1),\dots,(A_n,p_n); (B,p) ) = \operatorname{Gph}{\mathscr V}\big(\otimes^{\lambda}_{1\le i \le n}{J^{\mathrm{op}}(A_i,p_i)}, J^{\mathrm{op}}(B,p)\big). \end{gather} To be more explicit, for $R\subseteq \langle n \rangle$ and $i\in \langle n \rangle$, put \begin{gather*} R(i) = \begin{cases} 1_{A_i} & \mbox{for } i\in R, \\ p_i & \mbox{for } i \notin R . \end{cases} \end{gather*} Then, an element of the set \eqref{multimorph}, a {\em multimorphism}, is a morphism \begin{gather*} f \colon \ A_1\otimes \dots \otimes A_n \longrightarrow B \end{gather*} satisfying the equation \begin{gather*} f \circ (p_1\otimes \dots \otimes p_n) = p\circ f \circ \sum_{\varnothing \neq R\subseteq \langle n \rangle}{\lambda^{(\#R-1)}R(1)\otimes \dots \otimes R(n)} . \end{gather*} This is a case of a general process of obtaining a multicategory structure on a category by restriction along a functor into a monoidal category. The notion of monoid makes sense in any multicategory. We have the following special case.
\begin{Definition}\label{wRBmon} A {\em $\lambda$-weighted Rota--Baxter monoid} in ${\mathscr V}$ is an object $(A,p)$ of $[\Sigma \mathbb{N},{\mathscr V}]$ (that is, $p\colon A \to A$ in ${\mathscr V}$) equipped with the structure of semigroup on $J^{\mathrm{op}}(A,p)$ in the monoidal category $\operatorname{Gph}{\mathscr V}$ of Proposition~\ref{monstrgph}, and a unit $\eta \colon I \to A$ for the underlying semigroup $A$ in ${\mathscr V}$. \end{Definition} This def\/inition should make the calculation of free weighted Rota--Baxter monoids possible; compare \cite{AM2006, Cart1972,E-FG2008, Rota1969}. To make Def\/inition~\ref{wRBmon} a little more explicit, as expected, a $\lambda$-weighted Rota--Baxter monoid $(A,p)$ is a monoid~$A$ in~${\mathscr V}$ equipped with an endomorphism $p\colon A \to A$ satisfying \begin{gather*} \mu \circ (p\otimes p) = p \circ \mu \circ (\lambda 1\otimes 1 + 1\otimes p + p\otimes 1) . \end{gather*} Derivations and Rota--Baxter operators are not the only sources of semigroups and monoids for the monoidal structure of Proposition~\ref{monstrgph}. The forgetful functor \begin{gather*} \mathrm{ve} \colon \ \operatorname{Gph}{\mathscr V} \longrightarrow {\mathscr V} \times {\mathscr V} , \end{gather*} taking the graph $(A,E)$ to the pair $(A,E)$, is strict monoidal and has a right adjoint $\mathrm{R}$ def\/ined by \begin{gather*} \mathrm{R}(X,Y) = (X, X\oplus X\oplus Y) \end{gather*} with $s = \mathrm{pr}_1$ (the f\/irst projection) and $t = \mathrm{pr}_2$ (the second projection). It follows that~$\mathrm{R}$ is monoidal and hence takes monoids to monoids. \begin{Example} Take ${\mathscr V} = \mathrm{Mod}_k$, the category of modules over a commutative ring $k$. For a graph $(A,E)$ in this ${\mathscr V}$, we can write $e\colon a\to b$ to mean $a,b\in A$, $e\in E$ with $s(e)=a, t(e)=b$.
For $k$-algebras $A$ and $B$, we obtain a monoid $\mathrm{R}(A,B)$ in $\operatorname{Gph}{\mathscr V}$: the graph is $\mathrm{pr}_1, \mathrm{pr}_2\colon A\oplus A \oplus B \to A$ and the multiplication is def\/ined by: \begin{gather*} \left((a_1,a_2,b) \colon a_1 \to a_2\right) \cdot \left((c_1,c_2,d)\colon c_1\to c_2\right) = (\lambda a_1c_1 + a_1c_2 + a_2c_1, a_2c_2,bd)\colon a_1c_1 \to a_2c_2. \end{gather*} \end{Example} Of course the ${\mathscr V}$-functor $J$ \eqref{jay} has both adjoints if ${\mathscr V}$ is complete and cocomplete enough. In particular, the right adjoint \begin{gather*} K \colon \ \operatorname{Gph}{\mathscr V} \longrightarrow [\Sigma \mathbb{N},{\mathscr V}] \end{gather*} is def\/ined by taking $K(A,E)$ to be the equalizer of the two morphisms \begin{gather*} s^{\mathbb{N}}, t^{\mathrm{succ}} \colon \ E^{\mathbb{N}}\to A^{\mathbb{N}} \end{gather*} equipped with the endomorphism $e\colon K(A,E) \to K(A,E)$ induced by $E^{\mathrm{succ}}$. Here $\mathrm{succ} \colon \mathbb{N} \to \mathbb{N}$ is the successor function $n\mapsto n+1$. Since $J$ \eqref{jay} is strong monoidal for the monoidal structures under discussion, the adjunction $J\dashv K$ is monoidal. So $K$ takes semigroups to semigroups and monoids to monoids. In particular, if $(A,p)$ is a $\lambda$-weighted Rota--Baxter monoid in ${\mathscr V}$, then $K$ takes the graph $(A,A)$ with $s=1_A$ and $t=p$ to a $\lambda$-weighted derivational semigroup in~${\mathscr V}$. The underlying object is the limit of the diagram \begin{gather*} A \stackrel{p} \longleftarrow A \stackrel{p} \longleftarrow A \stackrel{p} \longleftarrow \cdots \end{gather*} in ${\mathscr V}$. \begin{Example} Taking ${\mathscr V} = \mathrm{Vect}_k$ and a $\lambda$-weighted Rota--Baxter $k$-algebra $p\colon A\to A$, we have the non-unital $\lambda$-weighted derivational $k$-algebra \begin{gather*} K(J(A,p)^{\mathrm{op}}) = \big\{a\in A^{\mathbb{N}} \, | \, p(a_{n+1}) = a_n \big\} \end{gather*} with $d(a)_n = a_{n+1}$.
The multiplication on $K(J(A,p)^{\mathrm{op}})$ is the restriction of the $\lambda$-weighted Hurwitz multiplication on $A^{\mathbb{N}}$ arising from the non-unital algebra~$(A,\diamond)$ of Proposition~\ref{diamond}. Moreover, $K(J(A,p)^{\mathrm{op}})$ supports a $\lambda$-weighted Rota--Baxter operator $p$ def\/ined by $p(a)_n = p(a_n)$. Notice too that $d\circ p = 1$. \end{Example} We conclude this section by describing the promonoidal structure in the sense of Day \cite{DayConv} with respect to which the monoidal structure of Proposition~\ref{monstrgph} is convolution. Let $\mathbb{G}$ denote the category whose only objects are $0$ and $1$, with the only non-identity morphisms $\sigma , \tau \colon 1 \to 0$. Write $I_*\mathbb{G}$ for the free ${\mathscr V}$-category on~$\mathbb{G}$. Then $\operatorname{Gph}{\mathscr V} = [\mathbb{G}, {\mathscr V}] = [I_*\mathbb{G}, {\mathscr V}]$ where the f\/irst set of square brackets means the ordinary functor category while the second means the ${\mathscr V}$-enriched functor category. The promonoidal structure in question is technically on~$I_*\mathbb{G}$ in the ${\mathscr V}$-enriched sense. However, we can look at it as consisting of an ordinary functor \begin{gather*} \mathrm{P} \colon \ \mathbb{G}^{\mathrm{op}} \times \mathbb{G}^{\mathrm{op}} \longrightarrow \operatorname{Gph}{\mathscr V} \end{gather*} and an object $\mathrm{J} \in \operatorname{Gph}{\mathscr V}$. Of course $\mathrm{J}$ is just the graph $0, 1 \colon I \to I$ which is the tensor unit.
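As a concrete numerical illustration of the identity displayed after Def\/inition~\ref{wRBmon} (the choice of operator here is the editor's, not taken from the text): on integer sequences with pointwise multiplication playing the role of $\mu$, the inclusive cumulative-sum operator is a Rota--Baxter operator of weight $\lambda = -1$, and the equation $\mu \circ (p\otimes p) = p \circ \mu \circ (\lambda 1\otimes 1 + 1\otimes p + p\otimes 1)$ can be checked by brute force:

```python
import random

def p(a):
    """Inclusive cumulative sum: p(a)_i = a_0 + ... + a_i."""
    out, s = [], 0
    for x in a:
        s += x
        out.append(s)
    return out

def mul(a, b):
    """Pointwise product, playing the role of mu."""
    return [x * y for x, y in zip(a, b)]

def add(*seqs):
    return [sum(xs) for xs in zip(*seqs)]

def scale(c, a):
    return [c * x for x in a]

lam = -1  # cumulative sum has weight -1 for the pointwise product
random.seed(0)
for _ in range(100):
    a = [random.randint(-9, 9) for _ in range(10)]
    b = [random.randint(-9, 9) for _ in range(10)]
    lhs = mul(p(a), p(b))                                             # mu o (p (x) p)
    rhs = p(add(scale(lam, mul(a, b)), mul(a, p(b)), mul(p(a), b)))   # p o mu o (lam 1(x)1 + 1(x)p + p(x)1)
    assert lhs == rhs
```

For weight $\lambda = 0$ the same identity is the integration-by-parts rule satisfied by the operator $p(f)(x) = \int_0^x f(t)\,dt$.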
We can regard $\mathrm{P}$ as a ``cograph of cographs of graphs'' (although a cograph looks just like a graph): \begin{gather*} \xymatrix{ I \ar@<-1ex>[rr]_-{(0,1)} \ar@<1ex>[rr]^-{(1,0)} \ar@<-1ex>[dd]_{(1,0)} \ar@<1ex>[dd]^{(0,1)} && I\oplus I=2\cdot I \ar@<-1ex>[dd]_{\bigl(\begin{smallmatrix} 1&0&0&0\\ 0&0&1&0 \end{smallmatrix} \bigr)} \ar@<1ex>[dd]^{\bigl(\begin{smallmatrix} 0&1&0&0\\ 0&0&0&1 \end{smallmatrix} \bigr)} \\ \\ 2\cdot I=I\oplus I \ar@<-1ex>[rr]_-{\bigl(\begin{smallmatrix} 0&0&1&0\\ 0&0&0&1 \end{smallmatrix} \bigr)} \ar@<1ex>[rr]^-{\bigl(\begin{smallmatrix} 1&0&0&0\\ 0&1&0&0 \end{smallmatrix} \bigr)} && \left((\lambda,1,1,0), (0,0,0,1) \colon I \to 4\cdot I\right) } \end{gather*} \section[The $L$-Hurwitz product of species]{The $\boldsymbol{L}$-Hurwitz product of species}\label{Ltenspecies} Let $\mathfrak{S}$ denote the groupoid whose objects are f\/inite sets and whose morphisms are bijective functions. We write $U+V$ for the disjoint union of sets~$U$ and~$V$; this is the binary coproduct as objects of the category~$\mathrm{Set}$ of sets and all functions. It is not the coproduct in~$\mathfrak{S}$; yet it does provide the symmetric monoidal structure on $\mathfrak{S}$ of interest here. When we write $X=A+B$ for~$A$ and~$B$ subsets of a set~$X$, we mean $X=A\cup B$ and $\varnothing = A\cap B$. We have the particular f\/inite sets $\langle n \rangle = \{ 1, 2, \dots, n\}$. Let ${\mathscr V}$ denote a monoidal category with f\/inite coproducts which are preserved by tensoring on either side by an object. The tensor product of $V,W\in {\mathscr V}$ is denoted by $V\otimes W$ and the unit object by $I$. Justif\/ied by coherence theorems (see~\cite{BTC} for example), we write as if the monoidal structure on ${\mathscr V}$ were strictly associative and strictly unital. For any set~$S$, write $S\cdot V$ for the coproduct of~$S$ copies of $V\in {\mathscr V}$, when it exists (as it does for~$S$ f\/inite). 
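As a small combinatorial aside (the editor's illustration, not from the text): the decompositions $X = U\cup V$ with $U$, $V$ not necessarily disjoint, which index the weighted product def\/ined next, correspond to functions from $X$ to a three-element set recording membership in $U\backslash V$, $V\backslash U$ or $U\cap V$; so an $n$-element set has exactly $3^n$ such covers. A brute-force enumeration confirms this:

```python
from itertools import product

def covers(X):
    """All ordered pairs (U, V) of subsets of X with U union V = X."""
    X = list(X)
    out = []
    for assignment in product("abc", repeat=len(X)):
        # a: only in U, b: only in V, c: in the intersection of U and V
        U = frozenset(x for x, t in zip(X, assignment) if t in "ac")
        V = frozenset(x for x, t in zip(X, assignment) if t in "bc")
        out.append((U, V))
    return out

X = [1, 2, 3, 4]
cs = covers(X)
assert len(cs) == len(set(cs)) == 3 ** len(X)   # covers <-> functions X -> {a, b, c}
assert all(U | V == set(X) for U, V in cs)
```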
The {\em category of ${\mathscr V}$-valued Joyal species}, after \cite{Species, AnalFunct}, is the functor category $[\mathfrak{S},{\mathscr V}]$. The objects will simply be called {\em species} when ${\mathscr V}$ is understood. Suppose $L\colon \mathfrak{S} \to \mathcal{Z}{\mathscr V}$ is a braided strong monoidal functor into the monoidal centre (in the sense of \cite{TYBO}) of ${\mathscr V}$. We have natural isomorphisms \begin{gather}\label{u} u_{X,V} \colon \ LX\otimes V\cong V\otimes LX, \end{gather} such that the diagram \begin{gather*} \xymatrix{ LX\otimes V\otimes W \ar[rd]_{u_{X,V\otimes W}} \ar[rr]^{u_{X,V}\otimes 1_W} && V\otimes LX\otimes W \ar[ld]^{ \ 1_V\otimes u_{X,W}} \\ & V\otimes W\otimes LX & } \end{gather*} commutes. If ${\mathscr V}$ itself is braided ({\em a fortiori} symmetric), we can take a braided strong monoidal functor $\mathfrak{S} \to {\mathscr V}$ and compose it with the canonical braided strong monoidal functor ${\mathscr V} \to \mathcal{Z}{\mathscr V}$ to obtain such an $L$. By way of example of an $L\colon \mathfrak{S} \to \mathcal{Z}{\mathscr V}$, we could take any f\/inite set $\Lambda$ and $LX = \Lambda^X \cdot I$ with $L\sigma = \Lambda^{\sigma^{-1}}\cdot I$ for any bijective function $\sigma$. \begin{Definition} The {\em $L$-Hurwitz product} $F\otimes^LG$ of species $F$ and $G$ is def\/ined on objects $X\in \mathfrak{S}$ by \begin{gather}\label{Lten} \big(F\otimes^LG\big)X = \sum_{X=U\cup V}{L(U\cap V) \otimes FU\otimes GV} . \end{gather} The def\/inition of $F\otimes^LG$ on morphisms is clear since any bijective function $\sigma \colon X\to Y$ restricts to bijections \begin{gather*} U\to \sigma U, \qquad V\to \sigma V, \qquad U\cup V \to \sigma U\cup \sigma V, \qquad U\cap V \to \sigma U\cap \sigma V . \end{gather*} \end{Definition} Let $J\colon \mathfrak{S} \to {\mathscr V}$ be the species whose value at $X$ is the unit $I$ for tensor in~${\mathscr V}$ when $X$ is empty and is initial in~${\mathscr V}$ otherwise.
Clearly $J$ is a unit for the $L$-Hurwitz product in the sense that we have canonical isomorphisms \begin{gather*} \lambda_G\colon \ J\otimes^LG \to G \qquad \text{and} \qquad \rho_F \colon \ F \to F\otimes^LJ . \end{gather*} Associativity isomorphisms \begin{gather}\label{Lassoc} \alpha_{F,G,H} \colon \ \big(F\otimes^LG\big)\otimes^LH \cong F\otimes^L\big(G\otimes^LH\big) \end{gather} are obtained using the following result easily proved by Venn diagrams. \begin{Lemma}\label{Venn} $(U\cup V)\cap W + U\cap V \cong U\cap (V\cup W) + V\cap W$. \end{Lemma} Then, to def\/ine \eqref{Lassoc}, we use the isomorphisms \begin{gather*} L((U\cup V)\cap W)\otimes L(U\cap V)\otimes FU\otimes GV\otimes HW \\ \qquad{} \cong L((U\cup V)\cap W+U\cap V)\otimes FU\otimes GV\otimes HW \\ \qquad{} \cong L(U\cap (V\cup W) + V\cap W)\otimes FU\otimes GV\otimes HW \\ \qquad{} \cong L(U\cap (V\cup W)) \otimes L(V\cap W)\otimes FU\otimes GV\otimes HW \\ \qquad{} \cong L(U\cap (V\cup W)) \otimes FU\otimes L(V\cap W)\otimes GV\otimes HW , \end{gather*} the f\/irst and third coming from the strong monoidal structure on $L$, the second from Lemma~\ref{Venn}, and the fourth from the monoidal centre structure~\eqref{u} on $L(V\cap W)$. In the case $L=J$, we recover from~\eqref{Lten} the usual convolution (Cauchy) product of species appearing in~\cite{Species}. In the case where $L$ is the exponential species $LX=I$ for all~$X$, we recover the Heisenberg product appearing in \cite{AFM2015, MoreiraPhD}. For a general $L$, the term $L(U\cap V)$ can be considered a measure of the failure of $U$ and $V$ to be disjoint. \section{A combinatorial interpretation}\label{aci} We consider the case where ${\mathscr V} = \mathrm{Set}$ so that $[\mathfrak{S},\mathrm{Set}]$ is the category of species as studied in~\cite{Species}. Fix any set $\Lambda$.
Def\/ine the species~$L$ by \begin{gather*} LX = \biggl\{ S=(S_{\lambda})_{\lambda \in \Lambda} \, | \, S_{\lambda}\subseteq X , \, \sum_{\lambda \in \Lambda}{S_{\lambda}} = X \biggr\} \end{gather*} and $(L\sigma )S=(\sigma S_{\lambda})_{\lambda \in \Lambda}$. In other words, a structure of the species $L$ on the set $X$ is a partition of $X$ into a $\Lambda$-indexed family of disjoint (possibly empty) subsets. A structure of the species $F\otimes^LG$ on the set $X$ consists of a quintuplet $(U,V,S,\phi,\gamma)$ where~$U$,~$V$ are subsets of $X$ such that $X=U\cup V$, and~$S$, $\phi$, $\gamma$ are $L$-, $F$-, $G$-structures on $U\cap V$, $U$, $V$, respectively. We write $\#S$ for the cardinality of the set $S$. We assume $\Lambda$ is f\/inite and put $\lambda = \# \Lambda$. The {\em cardinality sequence} of a species $F$ is the sequence $\#F \colon \mathbb{N} \to \mathbb{Z}$ def\/ined by \begin{gather*} (\#F)(n) = \#F\langle n \rangle . \end{gather*} We consider the $\lambda$-Hurwitz product \eqref{LambHurw} on $\mathbb{Z}^{\mathbb{N}}$. \begin{Proposition}\label{card} $\# (F\otimes^LG)= \# F\cdot^{\lambda} \#G$. \end{Proposition} This result specializes to Theorem~2.7 of \cite{AFM2015} when~$L$ is the exponential species and $\lambda = 1$. \section{The iterated tensor and coherence}\label{titac} \begin{Proposition}\label{altLtensor} An alternative definition of $F\otimes^LG$ is \begin{gather*} \big(F\otimes^LG\big)X = \sum_{X=A+B+C}{L(C) \otimes F(A+C)\otimes G(B+C)} . \end{gather*} \end{Proposition} \begin{proof} Given $X=A+B+C$, put $U=A+C$ and $V=B+C$. Given $X=U\cup V$, put $A= U\backslash V$, $B= V\backslash U$, and $C=U\cap V$.
\end{proof} The $n$-fold version of this tensor product is \begin{gather} \otimes^L_n(F_1,\dots,F_n)X \nonumber\\ \qquad{} = \sum_{X=\sum\limits_{\varnothing \ne S\subseteq \langle n \rangle} {A_S}} {L\left( \sum_{S}(\#S-1)\cdot A_S\right) \otimes {F_1\left( \sum_{1\in S}A_S\right) \otimes \dots \otimes F_n\left( \sum_{n\in S}A_S\right) }} .\label{nfoldLtensor1} \end{gather} This yields the formula in Proposition~\ref{altLtensor} for $n=2$ by taking $A=A_{1}$, $B=A_{2}$, $C=A_{\{1,2\}}$. Note that~\eqref{nfoldLtensor1} is unchanged if we replace $\langle n \rangle$ by any set of cardinality~$n$. \begin{Remark} As Joachim Kock reminded me, if we replace $\langle n \rangle$ by the `$(n-1)$-simplex' $[n-1] = \{0,1,\dots , n-1\}$, then the non-empty subsets~$S$ correspond to the non-degenerate faces of~$[n-1]$ and~$\#S-1$ is the dimension of the face. \end{Remark} Let us consider the ef\/fect of inserting one pair of parentheses in a multiple tensor~\eqref{nfoldLtensor1}. We look at \begin{gather*} \otimes^L_{p+1+r}\big(F_1,\dots,F_p, \otimes^L_q(F_{p+1},\dots,F_{p+q}),F_{p+q+1},\dots,F_{p+q+r}\big)X .
\end{gather*} Using \eqref{nfoldLtensor1} twice, once with $n = p+1+r$ and once with $n=q$, we obtain the expression \begin{gather*} L\left(\sum_{T}(\#T-1)\cdot B_T\right)\otimes F_1\left(\sum_{1\in T}B_T\right) \otimes \dots \otimes F_p\left(\sum_{p\in T}B_T\right) \\ \qquad{} \otimes L\left(\sum_{R}(\#R-1)\cdot C_R\right)\otimes F_{p+1}\left(\sum_{p+1\in R}C_R\right) \otimes \dots \otimes F_{p+q}\left(\sum_{p+q\in R}C_R\right) \nonumber \\ \qquad{} \otimes F_{p+q+1}\left(\sum_{p+q+1\in T}B_T\right) \otimes \dots \otimes F_{p+q+r}\left(\sum_{p+q+r\in T}B_T\right) \nonumber \end{gather*} summed over all families \begin{gather*} B = \big( B_T \, | \, \varnothing \ne T\subseteq \{1,\dots , p,\star ,p+q+1,\dots , p+q+r\} \big) \end{gather*} providing a partition $X= \sum_TB_T$ of $X$, together with all families \begin{gather*} C = \big( C_R \, | \, \varnothing \ne R\subseteq \{p+1,\dots , p+q\} \big) \end{gather*} providing a partition $\sum_{\star \in T}B_T= \sum_RC_R$ of $\sum_{\star \in T}B_T$. Using the fact that $L$ lands in $\mathcal{Z}{\mathscr V}$ and that~$L$ is strong monoidal, we obtain the isomorphic expression \begin{gather} L \left(\sum_{T}(\#T-1)\cdot B_T + \sum_{R}(\#R-1)\cdot C_R \right) \otimes F_1\left(\sum_{1\in T}B_T\right) \otimes \dots \otimes F_p\left(\sum_{p\in T}B_T\right) \nonumber \\ \qquad{} \otimes F_{p+1}\left(\sum_{p+1\in R}C_R\right) \otimes \dots \otimes F_{p+q}\left(\sum_{p+q\in R}C_R\right) \nonumber \\ \qquad{} \otimes F_{p+q+1}\left(\sum_{p+q+1\in T}B_T\right) \otimes \dots \otimes F_{p+q+r}\left(\sum_{p+q+r\in T}B_T\right)\label{nfoldLtensor3} \end{gather} summed over the same families $(B,C)$. For $\star \in T$, we have $B_T = \sum_R{C_R\cap B_T}$. On the other hand, $C_R = \sum_{\star \in T}{C_R\cap B_T}$.
Put \begin{gather*} Q = \{p+1,\dots , p+q \} \qquad \text{and} \qquad N = \{1,\dots ,p\}\cup \{ p+q+1, \dots , p+q+r \} , \end{gather*} and obtain a family \begin{gather*} A = (A_S \, | \, \varnothing \ne S\subseteq \langle p+q+r \rangle ) \end{gather*} partitioning $X$ by def\/ining \begin{gather*} A_S = \begin{cases} B_S & \mbox{for} \ S\cap Q = \varnothing, \\ C_{S\cap Q} \cap B_{(S\cap N) \cup \{\star \}} & \mbox{for} \ S\cap Q \ne \varnothing . \end{cases} \end{gather*} Then we can recover the $B$ and $C$ families via \begin{gather*} B_T = \begin{cases} A_T & \mbox{for} \ \star \notin T , \\ \sum_R{A_{R\cup (T\backslash \star)}} & \mbox{for} \ \star \in T, \end{cases} \qquad \text{and} \qquad C_R = \sum_{\star \in T}{A_{R\cup (T\backslash \star)}} . \end{gather*} We have the following equations \begin{alignat*}{3} & (i)\quad && \sum_{S}{(\#S-1)\cdot A_S} = \sum_{T}{(\#T-1)\cdot B_T} + \sum_{R}{(\#R-1)\cdot C_R},&\\ & (ii) \quad && \sum_{k\in S}{A_S} = \begin{cases} \displaystyle \sum\limits_{k\in T}{B_T} & \mbox{for} \ 1\le k \le p \ \text{ or } \ p+q +1\le k \le p+q + r, \vspace{1mm}\\ \displaystyle \sum\limits_{k\in R}{C_R} & \mbox{for} \ p+1\le k \le p+q . \end{cases} \end{alignat*} This shows that the sum of the expressions \eqref{nfoldLtensor3} over the pairs~$(B,C)$ is equal to \eqref{nfoldLtensor1} with $n = p+q+r$. Remember however that the tensor product $+$ on $\mathfrak{S}$ is not strictly symmetric; the symmetry on $\mathfrak{S}$ provides canonical bijections between the left- and right-hand sides of~(i) and~(ii). Since~$L$ is braided, we have constructed a natural isomorphism \begin{gather} a_{p,q,r} \colon \ \otimes^L_n(F_1,\dots,F_{p+q+r}) \nonumber \\ \hphantom{a_{p,q,r} \colon \ }{} \cong \otimes^L_{p+1+r}\big(F_1,\dots,F_p, \otimes^L_q(F_{p+1},\dots,F_{p+q}),F_{p+q+1},\dots,F_{p+q+r}\big) .
\label{nfoldassoc} \end{gather} Now consider the Mac Lane--Stashef\/f pentagon for 2-fold bracketings of $F_1\otimes^L F_2\otimes^L F_3 \otimes^L F_4$ as the vertices. Let $a\colon H \to K$ denote one of the edges of the pentagon obtained using the associativity isomorphisms~\eqref{Lassoc}. There is a composite~$b$ of two isomorphisms, each using one instance of an isomorphism \eqref{nfoldassoc}, which goes from $\otimes^L_4(F_1,F_2,F_3,F_4)$ to~$H$, and another one $c \colon \otimes^L_4(F_1,F_2,F_3,F_4) \to K$. By coherence of the braided strong monoidal functor~$L$, it follows that $a\circ b = c$. Commutativity of the pentagon is a consequence of commutativity of all these triangular sides of the so-formed pentagonal cone. \section[Promonoidal structures on $\mathfrak{S}$]{Promonoidal structures on $\boldsymbol{\mathfrak{S}}$}\label{psoS} For f\/inite sets $A$, $B$ and $X$, let $\mathrm{Cov}(A,B;X)$ denote the set of jointly surjective pairs $(\mu , \nu)$ of injective functions \begin{gather*} A\stackrel{\mu} \longrightarrow X \stackrel{\nu} \longleftarrow B . \end{gather*} We write $A\times_X B$ for the pullback of $\mu$ and $\nu$. Def\/ine a functor \begin{gather*} \mathrm{P} \colon \ \mathfrak{S}^{\mathrm{op}}\times \mathfrak{S}^{\mathrm{op}}\times \mathfrak{S} \longrightarrow {\mathscr V} \end{gather*} by \begin{gather*} \mathrm{P}(A,B;X) = \sum_{(\mu , \nu)\in \mathrm{Cov}(A,B;X)}{L(A\times_X B)} . \end{gather*} \begin{Proposition} $(F\otimes^L G)X \cong \int^{A,B}{\mathrm{P}(A,B;X)\otimes FA\otimes GB}$.
\end{Proposition} \begin{proof} A universal dinatural transformation \begin{gather*} \theta_{A,B} \colon \ \mathrm{P}(A,B;X) \otimes FA\otimes GB \longrightarrow \sum_{X=U\cup V}{L(U\cap V) \otimes FU\otimes GV} \end{gather*} is def\/ined by taking its composite with the injection at $(\mu , \nu)\in \mathrm{Cov}(A,B;X)$ to be obtained from the $(\mu(A),\nu(B))$ injection and the bijections $A\cong \mu(A)$, $B\cong \nu(B)$, $A\times_X B \cong \mu(A)\cap \nu(B)$, noting $X = \mu(A)\cup \nu(B)$. \end{proof} By Day's general theory of promonoidal categories \cite{DayPhD, DayConv}, we have \begin{Corollary} If moreover ${\mathscr V}$ is $($left and right$)$ closed and sufficiently complete then $\otimes^L$ defines a~$($left and right$)$ closed monoidal structure on $[\mathfrak{S},{\mathscr V}]$. The monoidal structure coincides with that of Section~{\rm \ref{Ltenspecies}}. \end{Corollary} \section[The weighted bimonoidale structure on $\operatorname{fam}\mathfrak{S}$]{The weighted bimonoidale structure on $\boldsymbol{\operatorname{fam}\mathfrak{S}}$}\label{weightedbimonoidalefam} Lately (for example, in~\cite{103}), we have used the term {\em monoidale} for ``pseudomonoid'', also called ``monoidal object'', in a monoidal bicategory~${\mathscr M}$~\cite{mbaHa}. For example, the monoidales in the cartesian monoidal bicategory~$\mathrm{Cat}$ are monoidal categories. When the monoidal bicategory ${\mathscr M}$ is symmetric, the monoidales themselves form a symmetric monoidal bicategory where the morphisms are strong monoidal. With the same tensor product, the opposite bicategory~${\mathscr M}^{\mathrm{op}}$ is symmetric monoidal. A~{\em bimonoidale in ${\mathscr M}$} is a monoidale in~${\mathscr M}^{\mathrm{op}}$. Incidentally, every monoidale in $\mathrm{Cat}$ is uniquely a bimonoidale. Consider the 2-category $\mathrm{Cat}_{+}$ of (small) categories admitting f\/inite coproducts, and f\/inite-coproduct-preserving functors. 
This becomes a symmetric closed monoidal bicategory (see~\cite{mbaHa}) with tensor product ${\mathscr A} \boxtimes {\mathscr B}$ representing functors $H \colon {\mathscr A} \times {\mathscr B} \to {\mathscr X}$ for which each $H(A,-)$ and each $H(-,B)$ is f\/inite coproduct preserving. Clearly the monoidal category~${\mathscr V}$ of Section~\ref{Ltenspecies} is a~monoidale (= pseudomonoid) in $\mathrm{Cat}_{+}$. For any category ${\mathscr C}$, we write $\operatorname{fam}{\mathscr C}$ for the free f\/inite coproduct completion of ${\mathscr C}$. That is, $\mathrm{fam}$ provides the left biadjoint to the forgetful 2-functor $\mathrm{Cat}_{+} \to \mathrm{Cat}$. Indeed, $\mathrm{fam}$ is a strong monoidal pseudofunctor; in particular, there is a canonical equivalence \begin{gather*} \operatorname{fam}{\mathscr C} \boxtimes \operatorname{fam}{\mathscr D} \simeq \operatorname{fam} ({\mathscr C} \times {\mathscr D} ) . \end{gather*} Every monoidal category ${\mathscr C}$ determines a monoidale $\operatorname{fam}{\mathscr C}$ in $\mathrm{Cat}_{+}$. Explicitly, the objects of $\operatorname{fam}{\mathscr C}$ can be written formally as $\sum\limits_{s\in S}{C_s}$ where $S$ is a f\/inite set and $C_s\in {\mathscr C}$. Then, if ${\mathscr C}$ is monoidal, the monoidale structure on $\operatorname{fam}{\mathscr C}$ is def\/ined by \begin{gather*} \sum_{s\in S}{C_s}\otimes \sum_{t\in T}{D_t} = \sum_{(s,t)\in S\times T}{C_s\otimes D_t} . \end{gather*} We are interested in $\operatorname{fam}\mathfrak{S}$. By what we have just said, this is a monoidale in $\mathrm{Cat}_{+}$: \begin{gather*} \sum_{s\in S}{U_s}\otimes \sum_{t\in T}{V_t} = \sum_{(s,t)\in S\times T}{(U_s+V_t)} . \end{gather*} Fix a f\/inite set $\Lambda$ and def\/ine $L \colon \mathfrak{S} \to \mathrm{Set}$ by $LX = \Lambda^X$ and $L\sigma = \Lambda^{\sigma^{-1}}$. 
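For this $L$, an $L$-structure on $X$ is just a function $X\to \Lambda$, so $\#LX = \lambda^{\#X}$ where $\lambda = \#\Lambda$. As a small sanity check (the editor's, not from the text): summing $\lambda^{\#C}$ over the three-fold decompositions $X = A+B+C$ that index the functor $\Delta$ def\/ined next gives $(\lambda+2)^{\#X}$, which is multiplicative in $\#X$, numerically consistent with $\Delta$ being strong monoidal:

```python
from itertools import product

def weighted_partition_sum(n, lam):
    """Sum of lam**#C over ordered partitions of an n-set into A + B + C."""
    total = 0
    for assignment in product("ABC", repeat=n):
        total += lam ** assignment.count("C")
    return total

# each element independently chooses A (1 way), B (1 way) or C (lam ways)
for n in range(6):
    for lam in range(5):
        assert weighted_partition_sum(n, lam) == (lam + 2) ** n
```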
Def\/ine a coproduct-preserving functor \begin{gather}\label{Delta} \Delta \colon \ \operatorname{fam}\mathfrak{S} \longrightarrow \operatorname{fam}(\mathfrak{S}\times \mathfrak{S}) \simeq \operatorname{fam}\mathfrak{S}\boxtimes \operatorname{fam}\mathfrak{S} \end{gather} by \begin{gather*} \Delta(X) = \sum_{X=A+B+C}{L(C)\cdot (A+C,B+C)} \end{gather*} for $X\in \mathfrak{S}$. \begin{Proposition} The functor $\Delta$ of \eqref{Delta} is strong monoidal. \end{Proposition} \begin{proof} In $\Delta(X+Y)= \sum_{X+Y=A+B+C}{L(C)\cdot (A+C,B+C)}$ we can put \begin{gather*} P=X\cap A,\qquad Q=X\cap B, \qquad R=X\cap C,\\ U=Y\cap A, \qquad V=Y\cap B, \qquad W=Y\cap C \end{gather*} to obtain \begin{gather*} \Delta(X+Y) = \sum_{X =P+Q+R, Y=U+V+W}{L(R+W)\cdot (P+U+R+W,Q+V+R+W)} \\ \hphantom{\Delta(X+Y) }{} \cong \sum_{X =P+Q+R}{L(R)\cdot (P+R,Q+R)\times \sum_{Y =U+V+W}{L(W)\cdot (U+W,V+W)}} \\ \hphantom{\Delta(X+Y) }{} \cong \Delta X \times \Delta Y , \end{gather*} as required. \end{proof} The relationship between this structure and the promonoidal structure of Section~\ref{psoS} will be examined elsewhere; indeed, see~\cite{GaSt}. \section{Weighted categorical derivations}\label{wcd} Suppose ${\mathscr V}$ is a symmetric monoidal closed category which is complete and cocomplete, and suppose $L\colon \mathfrak{S} \to {\mathscr V}$ is a strong monoidal functor. Harking back to Remark~\ref{Lack}, we are prompted to consider the 2-category \begin{gather}\label{Hom} \mathfrak{E} = \mathrm{Hom}(\Sigma \mathfrak{S},{\mathscr V}\text{-}\mathrm{Cat}_{L,+}) . \end{gather} Here $\Sigma \mathfrak{S}$ denotes the bicategory with one object (denoted $\star$) whose homcategory is the symmetric groupoid $\mathfrak{S}$; composition is provided by the monoidal structure~$+$ on~$\mathfrak{S}$. 
Also~${\mathscr V}\text{-}\mathrm{Cat}_{L,+}$ denotes the 2-category of ${\mathscr V}$-categories admitting f\/inite coproducts and tensoring with each object~$L(X)$ of~${\mathscr V}$; the morphisms are ${\mathscr V}$-functors preserving these colimits; the 2-cells are ${\mathscr V}$-natural transformations. The objects of~\eqref{Hom} are pseudofunctors $T \colon \Sigma \mathfrak{S}\to {\mathscr V}\text{-}\mathrm{Cat}_{L,+}$, the morphisms are pseudonatural transformations, and the 2-cells are modif\/ications (in the terminology of~\cite{KelSt1974}). Such an object $T$ determines a ${\mathscr V}$-category $T\star = {\mathscr M} \in {\mathscr V}\text{-}\mathrm{Cat}_{L,+}$ and a strong monoidal functor $T_{\star \star}\colon \mathfrak{S} \to {\mathscr V}\text{-}\mathrm{Cat}_{L,+}({\mathscr M},{\mathscr M})$. This $T_{\star \star}$ is determined up to equivalence by an endomorphism $D\colon {\mathscr M} \to {\mathscr M}$ in ${\mathscr V}\text{-}\mathrm{Cat}_{L,+}$ and an involutive Yang--Baxter\footnote{This is Rodney Baxter \url{http://en.wikipedia.org/wiki/Rodney_Baxter}, not the author of~\cite{Baxt1960}.} operator $\rho \colon D\circ D \Rightarrow D\circ D$ on $D$ (for example, see~\cite{TYBO} for terminology). Then $T_{\star \star}\langle n \rangle \cong D^{\circ n}$ and, for the non-identity bijection $\tau \colon \langle 2 \rangle \to \langle 2 \rangle$, $T\tau$ transports to $\rho$. Therefore we shall write the object~$T$ of~$\mathfrak{E}$~\eqref{Hom} as a pair $({\mathscr M}, D^*)$ where $T\star = {\mathscr M}$ and $T_{\star \star} = D^*$.
The morphisms of $\mathfrak{E}$ are then squares \begin{gather*} \xymatrix{ {\mathscr M} \ar[d]_{D^*X}^(0.5){\phantom{AA}}="1" \ar[rr]^{K} && {\mathscr N} \ar[d]^{E^*X}_(0.5){\phantom{AA}}="2" \ar@{=>}"1";"2"^-{\kappa_X \cong} \\ {\mathscr M} \ar[rr]_-{K} && {\mathscr N} } \end{gather*} in ${\mathscr V}\text{-}\mathrm{Cat}_{L,+}$ which are ${\mathscr V}$-natural in $X$ and, stacking vertically, respect the tensor in $\mathfrak{S}$. Ge\-ne\-ralizing the tensor $\boxtimes$ on $\mathrm{Cat}_{+}$ as in Section~\ref{weightedbimonoidalefam}, we have a tensor, also denoted by $\boxtimes$, on ${\mathscr V}\text{-}\mathrm{Cat}_{L,+}$, where the tensor product ${\mathscr A} \boxtimes {\mathscr B}$ represents ${\mathscr V}$-functors $H \colon {\mathscr A} \otimes {\mathscr B} \to {\mathscr X}$ for which each of~$H(A,-)$ and~$H(-,B)$ preserves f\/inite coproducts and tensoring with each~$L(X)$. This makes ${\mathscr V}\text{-}\mathrm{Cat}_{L,+}$ into a monoidal bicategory. This tensor product $\boxtimes$ on ${\mathscr V}\text{-}\mathrm{Cat}_{L,+}$ lifts to one, denoted by $\widehat{\boxtimes}$, on $\mathfrak{E}$ \eqref{Hom}: \begin{gather*} ({\mathscr M}, D^*)\widehat{\boxtimes} ({\mathscr N}, E^*) = \big({\mathscr M} \boxtimes {\mathscr N}, D^*\widehat{\boxtimes}E^*\big), \end{gather*} where \begin{gather*} \big(D^*\widehat{\boxtimes}E^*\big)X = \sum_{X=U\cup V}{L(U\cap V) \otimes D^*U \boxtimes E^*V} .
\end{gather*} To see that $D^*\widehat{\boxtimes}E^* \colon \mathfrak{S} \to {\mathscr V}\text{-}\mathrm{Cat}_{L,+}({\mathscr M} \boxtimes {\mathscr N},{\mathscr M} \boxtimes {\mathscr N})$ is strong monoidal, we calculate \begin{gather*} (D^*\widehat{\boxtimes}E^*)(X+Y) \cong \sum_{X+Y=U\cup V}{L(U\cap V) \otimes D^*U \boxtimes E^*V} \\ \qquad{} \cong \sum_{X=U_1\cup V_1, Y = U_2\cup V_2}{L(U_1\cap V_1 + U_2\cap V_2) \otimes (D^*U_1\circ D^*U_2) \boxtimes (E^*V_1\circ E^*V_2)} \\ \qquad{} \cong \sum_{X=U_1\cup V_1, Y = U_2\cup V_2}{L(U_1\cap V_1) \otimes L(U_2\cap V_2) \otimes (D^*U_1\boxtimes E^*V_1) \circ (D^*U_2\boxtimes E^*V_2)}\\ \qquad{} \cong \sum_{X=U_1\cup V_1, Y = U_2\cup V_2}{L(U_1\cap V_1) \otimes (D^*U_1\boxtimes E^*V_1) \circ L(U_2\cap V_2)\otimes (D^*U_2\boxtimes E^*V_2)} \\ \qquad{} \cong (D^*\widehat{\boxtimes}E^*)X \circ (D^*\widehat{\boxtimes}E^*)Y . \end{gather*} In this way, $\mathfrak{E}$ \eqref{Hom} becomes a monoidal bicategory. \begin{Definition} An {\em $L$-weighted derivation} $D^*$ on a monoidale ${\mathscr M}$ in ${\mathscr V}\text{-}\mathrm{Cat}_{L,+}$ is a lifting of the monoidale structure on ${\mathscr M}$ to a monoidale structure on $({\mathscr M},D^*)$ in $\mathfrak{E}$~\eqref{Hom}. \end{Definition} \begin{Example}\label{specder} An $L$-weighted derivation $D^* \colon \mathfrak{S} \to {\mathscr V}\text{-}\mathrm{Cat}_{L,+}\left( [\mathfrak{S}, {\mathscr V}],[\mathfrak{S}, {\mathscr V}] \right)$ on the monoidale $([\mathfrak{S}, {\mathscr V}], \otimes^L)$ is def\/ined by $(D^*X)F = F(X+-)$. 
The main point is the canonical isomorphism below \begin{gather*} \xymatrix{ [\mathfrak{S}, {\mathscr V}]\boxtimes [\mathfrak{S}, {\mathscr V}] \ar[d]_{(D^*\widehat{\boxtimes}D^*)X}^(0.5){\phantom{AAAAAA}}="1" \ar[rr]^{\otimes^L} && [\mathfrak{S}, {\mathscr V}] \ar[d]^{D^*X}_(0.5){\phantom{AAAAAA}}="2" \ar@{=>}"1";"2"^-{\cong} \\ [\mathfrak{S}, {\mathscr V}]\boxtimes [\mathfrak{S}, {\mathscr V}] \ar[rr]_-{\otimes^L} && [\mathfrak{S}, {\mathscr V}] } \end{gather*} \end{Example} \begin{Remark} The f\/irst item of Remark~\ref{Lack} has a categorical version. The forgetful 2-functor $\mathrm{U} \colon \mathfrak{E} \to {\mathscr V}\text{-}\mathrm{Cat}_{L,+}$ has a right biadjoint $\mathrm{JS}$ taking the ${\mathscr V}$-category ${\mathscr A}$ to the object of $\mathfrak{E}$ determined by the ${\mathscr V}$-category $\mathrm{JS}{\mathscr A} = [\mathfrak{S}, {\mathscr A}]$ of species in ${\mathscr A}$, equipped with the $L$-weighted derivation $D^*$ just as in Example~\ref{specder} with the codomain ${\mathscr V}$ replaced by~${\mathscr A}$. Since $\mathrm{U}$ is strong monoidal, the biadjunction $\mathrm{U} \dashv_{\mathrm{bi}} \mathrm{JS}$ is monoidal. Consequently the biadjunction lifts to one between the 2-categories of monoidales in $\mathfrak{E}$ and ${\mathscr V}\text{-}\mathrm{Cat}_{L,+}$. Indeed $\mathrm{U}$ is pseudocomonadic. \end{Remark} \section{The iterated tensor product again}\label{titpa} Observe the following simple reindexing of \eqref{Lten}. \begin{Proposition}\label{alt2Ltensor} An alternative definition of $F\otimes^LG$ is \begin{gather*} (F\otimes^LG)X = \sum_{V\subseteq U\subseteq X}{L(U\backslash V) \otimes F(U)\otimes G(X\backslash V)} . \end{gather*} \end{Proposition} This leads us to another formula for the $n$-fold $L$-weighted tensor product.
Def\/ine the {\em modified $n$-filtration set}\footnote{We are ``modifying'' the f\/iltration $U$ of $X$ by equipping it with the extra subsets $V$.} on any f\/inite set $X$ by \begin{gather*} \mathrm{mFil}_nX= \bigl\{(U,V) \, | \, U = (0=U_0\subseteq U_1\subseteq \dots \subseteq U_{n-1}\subseteq U_n=X ), \nonumber \\ \hphantom{\mathrm{mFil}_nX= \bigl\{}{} V = (V_0, V_1, \dots, V_{n-1} ) \text{ with } V_i\subseteq U_i \text{ for } 0\le i < n \bigr\} . \end{gather*} \begin{Proposition} An alternative definition of the $n$-fold tensor product \eqref{nfoldLtensor1} is \begin{gather} \otimes^L_n(F_1,\dots,F_n)X = \sum_{(U,V)\in \mathrm{mFil}_nX} {L(U_1\backslash V_1) \otimes \dots \otimes L(U_{n-1}\backslash V_{n-1}}) \nonumber \\ \phantom{\otimes^L_n(F_1,\dots,F_n)X = }{} \otimes F_1 ( U_1\backslash V_0 ) \otimes \dots \otimes F_n ( U_n\backslash V_{n-1} ). \label{nfoldLtensor4} \end{gather} \end{Proposition} \begin{proof} The formula follows by repeated application of the formula of Proposition~\ref{alt2Ltensor} in eva\-luating the left bracketing \begin{gather*} \big(\cdots \big(F_1\otimes^LF_2\big)\otimes^L \dots \big)\otimes^LF_n \end{gather*} at $X$. \end{proof} Let us relate the formulas \eqref{nfoldLtensor1} and \eqref{nfoldLtensor4} in the case $n=3$. A modif\/ied 3-f\/iltration $(U,V)\in \mathrm{mFil}_3X$ of $X$ amounts to subsets $U_1\subseteq U_2\subseteq X$ and $V_1\subseteq U_1, V_2\subseteq U_2$. With this we can def\/ine \begin{gather*} A_1 = V_1\cap V_2 , \qquad A_2 = V_2\backslash (U_1\cap V_2) , \qquad A_3 = X\backslash U_2 , \\ A_{12} = (U_1\cap V_2) \backslash A_1 , \qquad A_{13} = V_1\backslash A_1 , \qquad A_{23} = (U_2\backslash U_1) \backslash A_2 , \qquad A_{123} = (U_1\backslash V_1) \backslash A_{12} \end{gather*} and verify that $X= A_1+A_2+A_3+A_{12}+A_{13}+A_{23}+A_{123}$.
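The displayed identif\/ications can be verif\/ied exhaustively. The following brute-force check (the editor's, not from the text) conf\/irms, for a three-element $X$, that the seven sets $A_S$ do partition $X$ and that distinct modif\/ied $3$-f\/iltrations yield distinct partitions; since each element of an $n$-set has exactly $7$ possible states relative to $(U_1,U_2,V_1,V_2)$, there are $7^n$ modif\/ied $3$-f\/iltrations and the correspondence is bijective:

```python
from itertools import combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def modified_3_filtrations(X):
    """All (U1, U2, V1, V2) with U1 <= U2 <= X, V1 <= U1, V2 <= U2."""
    X = frozenset(X)
    for U2 in subsets(X):
        for U1 in subsets(U2):
            for V1 in subsets(U1):
                for V2 in subsets(U2):
                    yield U1, U2, V1, V2

def blocks(X, U1, U2, V1, V2):
    """The seven sets of the text, as a tuple (A1, A2, A3, A12, A13, A23, A123)."""
    X = frozenset(X)
    A1 = V1 & V2
    A2 = V2 - U1            # equals V2 \ (U1 & V2)
    A3 = X - U2
    A12 = (U1 & V2) - A1
    A13 = V1 - A1
    A23 = (U2 - U1) - A2
    A123 = (U1 - V1) - A12
    return (A1, A2, A3, A12, A13, A23, A123)

X = frozenset({1, 2, 3})
seen = set()
for filt in modified_3_filtrations(X):
    bs = blocks(X, *filt)
    assert sum(len(b) for b in bs) == len(X)   # blocks are disjoint ...
    assert frozenset().union(*bs) == X         # ... and cover X: a partition
    seen.add(bs)
assert len(seen) == 7 ** len(X)                # the correspondence is bijective
```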
Conversely, given the partition~$A$ of~$X$, we can def\/ine \begin{gather*} U_1 = X\backslash (A_2+A_3 +A_{23}) , \qquad U_2 = X\backslash A_3 , \qquad V_1=A_1+A_{13} , \qquad V_2 = A_1+A_2+A_{12} . \end{gather*} \section{Tensor products for charades}\label{charades} The term ``charade'' is intended in the sense of Kapranov \cite[Def\/inition~3.2]{Kapr1995} and is related to Hall algebras (for example, see~\cite[Section~2]{UCL}). One has a small abelian (or triangulated) category ${\mathscr A}$ and looks at functors def\/ined on the groupoid ${\mathscr A}_{\mathrm{g}}$ of invertible morphisms in~${\mathscr A}$. A~promonoidal structure is def\/ined on ${\mathscr A}_{\mathrm{g}}$ using the short exact sequences (or triangles) of~${\mathscr A}$. The functors are tensored using convolution. Here we will only discuss the case where~${\mathscr A}$ is the category of f\/inite vector spaces over a f\/ixed f\/inite f\/ield~$\mathbb{F}_q$. We make a conjecture that a family of weighted monoidal structures exists and give some evidence for it. Motivated by Proposition~\ref{alt2Ltensor}, we consider generalizing the tensor product of~\cite{GLFq}. Let $\mathfrak{G}_q$ be the groupoid of f\/inite vector spaces over the f\/ield $\mathbb{F}_q$ of cardinality $q$; the morphisms are linear bijections. We write $V\le U$ to mean $V$ is an $\mathbb{F}_q$-linear subspace of~$U$, and we write $U/V$ for the quotient space. To be specif\/ic, take ${\mathscr V} = \mathrm{Vect}_{\mathbb{C}}$ to be the category of complex vector spaces with all linear functions. Let $L\colon \mathfrak{G}_q \to {\mathscr V}$ be a suitable functor: we will consider conditions on it later. For functors $F, G \colon \mathfrak{G}_q \to {\mathscr V}$, def\/ine $F\otimes^LG \colon \mathfrak{G}_q \to {\mathscr V}$ by \begin{gather}\label{qLtensor} \big(F\otimes^LG\big)X = \sum_{V\le U\le X}{L(U/V) \otimes F(U)\otimes G(X/ V)} . 
\end{gather} This leads us to an $n$-fold tensor product in a manner analogous to~\eqref{nfoldLtensor4}. Def\/ine the {\em modified $n$-flag set} on any f\/inite $\mathbb{F}_q$-vector space $X$ by \begin{gather*} \mathrm{mFlg}_nX= \bigl\{(U,V) \, | \, U = (0=U_0\le U_1\le \dots \le U_{n-1}\le U_n=X ), \nonumber \\ \hphantom{\mathrm{mFlg}_nX= \bigl\{}{} V = (V_0, V_1, \dots, V_{n-1} ) \text{ with } V_i\le U_i \text{ for } 0\le i < n \bigr\} \end{gather*} Now we put \begin{gather*} \otimes^L_{n}(F_1,\dots,F_n)X = \sum_{(U,V)\in \mathrm{mFlg}_nX} {L(U_1/V_1) \otimes \dots \otimes L(U_{n-1}/V_{n-1})} \\ \phantom{\otimes^L_{n}(F_1,\dots,F_n)X =}{} \otimes F_1 ( U_1/V_0 ) \otimes \dots \otimes F_n ( U_n/V_{n-1} ) , \end{gather*} where, in keeping with \eqref{qLtensor}, quotient spaces now play the role that set differences played before. As in the set case, this formula follows by repeated application of \eqref{qLtensor} in evaluating the left bracketing \begin{gather*} \big(\cdots \big(F_1\otimes^LF_2\big)\otimes^L \dots \big)\otimes^LF_n \end{gather*} at $X$. Let us look at the ternary tensor product \begin{gather*} \otimes^L_{3}(F,G,H)X = \big(\big(F\otimes^LG\big)\otimes^LH\big)X \end{gather*} It is a direct sum over modif\/ied 3-f\/lags $(U,V)$ on $X$; that is, subspaces $U_1\le U_2\le X$, $V_1\le U_1$ and $V_2\le U_2$.
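For intuition on the size of this indexing set, the modified $3$-flags on a small space can be enumerated by brute force. The Python sketch below (ours; illustrative only) lists all $\mathbb{F}_2$-subspaces of $\mathbb{F}_2^2$ and counts the tuples $(U_1\le U_2\le X,\ V_1\le U_1,\ V_2\le U_2)$; the count, $79$, is the $q$-analogue, evaluated at $q=2$, of the $7^2=49$ modified $3$-filtrations of a two-element set:

```python
from itertools import product

def subspaces(n):
    # all F_2-linear subspaces of F_2^n, as frozensets of vectors (n small)
    vecs = list(product([0, 1], repeat=n))
    zero = vecs[0]
    add = lambda u, v: tuple((a + b) % 2 for a, b in zip(u, v))
    out = []
    for mask in range(2 ** len(vecs)):
        S = frozenset(v for i, v in enumerate(vecs) if (mask >> i) & 1)
        if zero in S and all(add(u, v) in S for u in S for v in S):
            out.append(S)
    return out

subs = subspaces(2)
assert len(subs) == 5  # 0, three lines, and F_2^2 itself

# modified 3-flags (U1 <= U2 <= X, V1 <= U1, V2 <= U2); V0 = 0 is forced
flags = [(U1, U2, V1, V2)
         for U2 in subs
         for U1 in subs if U1 <= U2
         for V1 in subs if V1 <= U1
         for V2 in subs if V2 <= U2]
assert len(flags) == 79  # the set-level analogue would give 7^2 = 49
```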
From these we can uniquely def\/ine vector spaces $A_S$ for each $\varnothing \ne S\subseteq \langle 3 \rangle$ via the following diagrams of short exact sequences: \begin{gather*} \xymatrix{ A_1 \ar@{{ >}->}[d]_-{} \ar@{{ >}->}[r]^-{} & U_1\cap V_2 \ar@{{ >}->}[d]_-{} \ar@{->>}[r] & A_{12} \ar@{{ >}->}[d] \\ V_1 \ar@{->>}[d]_-{} \ar@{{ >}->}[r]^-{} & U_1 \ar@{->>}[d]_-{} \ar@{->>}[r]^-{} & U_1/V_1 \ar@{->>}[d]^-{} \\ A_{13} \ar@{{ >}->}[r]_-{} & U_1/U_1\cap V_2 \ar@{->>}[r]_-{} & A_{123} \\ } \\ \xymatrix{ U_1\cap V_2 \ar@{{ >}->}[d]_-{} \ar@{{ >}->}[r]^-{} & V_2 \ar@{{ >}->}[d]_-{} \ar@{->>}[r] & A_{2} \ar@{{ >}->}[d] \\ U_1 \ar@{->>}[d]_-{} \ar@{{ >}->}[r]^-{} & U_2 \ar@{->>}[d]_-{} \ar@{->>}[r]^-{} & U_2/U_1 \ar@{->>}[d]^-{} \\ U_1/U_1\cap V_2 \ar@{{ >}->}[r]_-{} & U_2/V_2 \ar@{->>}[r]_-{} & A_{23} \\ } \\ \xymatrix{ U_2 \ar@{{ >}->}[r]_-{} & X \ar@{->>}[r]_-{} & A_{3} , } \end{gather*} from which we see \begin{gather}\label{directsumdecomp} X \cong A_1\oplus A_2 \oplus A_3\oplus A_{12}\oplus A_{23}\oplus A_{13}\oplus A_{123} . \end{gather} Note also the isomorphisms \begin{gather*} U_1/V_1 \cong A_{12}\oplus A_{123} , \qquad U_2/V_2 \cong A_{13}\oplus A_{23} \oplus A_{123}, \\ U_1 \cong A_1\oplus A_{12}\oplus A_{13}\oplus A_{123} , \qquad U_2/V_1 \cong A_{2}\oplus A_{12}\oplus A_{23}\oplus A_{123}, \\ X/V_2 \cong A_{3}\oplus A_{13}\oplus A_{23} \oplus A_{123} . \end{gather*} On the other hand, we can see that the formula for the right bracketing is \begin{gather*} \big(F\otimes^L\big(G\otimes^LH\big)\big)X = \sum_{M\le N\le X, M\le I\le J\le X}{L(N/M)\otimes L(J/I)\otimes FN\otimes G(J/M)\otimes H(X/I)}.
\end{gather*} We can see that this indexing set also leads to a direct sum decomposition~\eqref{directsumdecomp} from the following diagrams of short exact sequences: \begin{gather*} \xymatrix{ A_1 \ar@{{ >}->}[d]_-{} \ar@{{ >}->}[r]^-{} & U_1\cap V_2 \ar@{{ >}->}[d]_-{} \ar@{->>}[r] & A_{12} \ar@{{ >}->}[d] \\ V_1 \ar@{->>}[d]_-{} \ar@{{ >}->}[r]^-{} & U_1 \ar@{->>}[d]_-{} \ar@{->>}[r]^-{} & U_1/V_1 \ar@{->>}[d]^-{} \\ A_{13} \ar@{{ >}->}[r]_-{} & U_1/U_1\cap V_2 \ar@{->>}[r]_-{} & A_{123} \\ } \\ \xymatrix{ I\cap N \ar@{{ >}->}[d]_-{} \ar@{{ >}->}[r]^-{} & I \ar@{{ >}->}[d]_-{} \ar@{->>}[r] & A_{2} \ar@{{ >}->}[d] \\ J\cap N \ar@{->>}[d]_-{} \ar@{{ >}->}[r]^-{} & J \ar@{->>}[d]_-{} \ar@{->>}[r]^-{} & J/J\cap N \ar@{->>}[d]^-{} \\ A_{123} \ar@{^{(}->}[r]_-{} & J/I \ar@{->>}[r]_-{} & A_{23} \\ } \\ \xymatrix{ J+N \ar@{{ >}->}[r]_-{} & X \ar@{->>}[r]_-{} & A_{3} , \qquad J\cap N \ar@{{ >}->}[r]_-{} & N \ar@{->>}[r]_-{} & A_{13} . } \end{gather*} Note also the isomorphisms \begin{gather*} N/M \cong A_{12}\oplus A_{13}\oplus A_{123} , \qquad J/I \cong A_{23} \oplus A_{123}, \qquad N \cong A_1\oplus A_{12}\oplus A_{13}\oplus A_{123} , \\ J/M \cong A_{2}\oplus A_{12}\oplus A_{23}\oplus A_{123},\qquad X/I \cong A_{3}\oplus A_{13} \oplus A_{23} \oplus A_{123} . \end{gather*} In order to have an associativity isomorphism we at least need a canonical isomorphism \begin{gather*} L(A_{12}\oplus A_{123})\otimes L(A_{13}\oplus A_{23} \oplus A_{123}) \cong L(A_{12}\oplus A_{13}\oplus A_{123}) \otimes L(A_{23} \oplus A_{123}) . \end{gather*} We do have such an isomorphism if $L\colon \mathfrak{G}_q\to {\mathscr V}$ takes direct sums to tensor products; of course, direct sum of $\mathbb{F}_q$-vector spaces is neither product nor coproduct in~$\mathfrak{G}_q$. This is still merely evidence that the desired associativity isomorphism should exist: it is not a complete def\/inition. Recall from \cite{GLFq} that $\mathfrak{G}_q$ has a braided promonoidal structure.
The convolution structure on $[\mathfrak{G}_q,{\mathscr V}]$ arising from this (as per Day~\cite{DayPhD, DayConv}) is precisely the tensor product $F\otimes^J G$ where $JX = I$ for $X=0$ and $JX=0$ for $X\ne 0$. \begin{Conjecture} If $L$ is braided strong promonoidal then~\eqref{qLtensor} defines a monoidal structure $\otimes^L$ on $[\mathfrak{G}_q,{\mathscr V}]$. \end{Conjecture} Should this be the case, the tensor $\otimes^L$ on $[\mathfrak{G}_q,{\mathscr V}]$ would be obtained from quite an interesting promonoidal structure on~$\mathfrak{G}_q$. A short sequence \begin{gather}\label{spes} \begin{split} & \xymatrix{ A \ar@{{ >}->}[r]^-{f} & X \ar@{->>}[r]^-{g} & B } \end{split} \end{gather} in $\mathrm{Vect}_{\mathbb{F}_q}$ might be called {\em short pre-exact} when $f$ is a monomorphism, $g$ is an epimorphism and $\mathrm{ker}\, g \le \mathrm{im}\, f$. Write $\mathrm{Spes}(A,B;X)$ for the set of such~$(f,g)$. Put \begin{gather*} \mathrm{P}(A,B;X) = \sum_{(f,g)\in \mathrm{Spes}(A,B;X)}{L (\mathrm{im}(g\circ f) )} . \end{gather*} This $\mathrm{P} \colon \mathfrak{G}_q^{\mathrm{op}}\times \mathfrak{G}_q^{\mathrm{op}}\times \mathfrak{G}_q \longrightarrow {\mathscr V}$, def\/ined on morphisms in the obvious way, would give the promonoidal structure in question. The term $L (\mathrm{im}(g\circ f) )$ measures the failure of the sequence~\eqref{spes} to be exact. \section{The dimension sequence}\label{tds} Following on from Section~\ref{charades}, we take $F\in [\mathfrak{G}_q,\mathrm{Vect}_{\mathbb{C}}]$ and def\/ine its {\em dimension sequence} $\dim F\in \mathbb{Z}^{\mathbb{N}}$ by \begin{gather*} (\dim F)n = \dim \big(F\big(\mathbb{F}_q^n\big)\big) . \end{gather*} This inspires an algebra structure on $A^{\mathbb{N}}$ for any $k$-algebra $A$. We assume we have $\lambda \in k$ as before, but also some integer~$q$ (not necessarily a prime power). As in~\cite{GLFq}, we use \begin{gather*} \phi_n(q) = \big(q^n-1\big)\big(q^{n-1}-1\big)\cdots (q-1) .
\end{gather*} We def\/ine \begin{gather*} {n \brack r,s}_{q} = \frac{\phi_n(q)}{\phi_r(q)\phi_s(q)}, \qquad {n \brack r,s,t}_{q} = \frac{\phi_n(q)}{\phi_r(q)\phi_s(q) \phi_t(q)}, \qquad \dots . \end{gather*} For $f,g \in A^{\mathbb{N}}$, put \begin{gather*} \big(f \cdot^{\lambda}_q g\big)n = \sum_{r+s+t=n}{{n \brack r,s,t}_{q} \lambda^t f(r+t) g(s+t)} . \end{gather*} The calculations of Section~\ref{charades} show that this is associative at least when $A=\mathbb{Z}$, $q$~is a prime power and $\lambda = \dim L(\mathbb{F})$. More generally, I claim $A^{\mathbb{N}}$ is an associative $k$-algebra. \begin{Proposition}\label{dim} $\dim (F\otimes^L G) = \dim F \cdot^{\lambda}_q \dim G$. \end{Proposition} \subsection*{Acknowledgements} I am grateful to the referees for their careful work and, in particular, for pointing out the references \cite{AFM2015, AM2006, MoreiraPhD}. The author gratefully acknowledges the support of Australian Research Council Discovery Grant DP130101969. \pdfbookmark[1]{References}{ref}
\section{Introduction} \label{sec:intro} \IEEEPARstart{C}{onvolutional} Neural Networks (CNNs) have been one of the cornerstones of Sound Event Classification or Tagging (SET) in recent years \cite{fonseca2020addressing,kong2019panns,gong2021psla,Fonseca2019learning}. One of their commonly assumed properties is \textit{shift} or \textit{translation invariance}, by which output predictions are not affected by small shifts (or even small deformations) in the input signal. In theory, this is ensured by the convolution and pooling operations forming the CNNs. However, recent works in computer vision uncover that this is not always the case. Azulay and Weiss find that small shifts and transformations in the input can change the network’s predictions substantially \cite{azulay2018deep}. In particular, they quantify that by shifting or resizing a random input image by a single pixel, the top class predicted can change with a probability of up to 15\% and 30\%, respectively. This and other related works \cite{engstrom2018rotation,zhang2019making} empirically show the brittleness of CNNs against minor input perturbations, and their only-partial invariance to shifts. These works argue that one of the causes of the lack of shift invariance is a wrongly executed subsampling operation that ignores the classic sampling theorem. This theorem establishes that, for the subsampling to be done correctly, the sampling rate must be at least twice the highest frequency in the incoming signal \cite{oppenheim2001discrete}. Otherwise, \textit{aliasing} problems can occur, generating a lack of shift invariance in the system and potentially causing a certain distortion in the output---some of the highest frequency components can fold over onto lower frequency ones. To address this issue, the classical signal processing measure is to introduce an \textit{anti-aliasing} low-pass filter before downsampling in order to limit the signal’s band \cite{oppenheim2001discrete}.
In CNNs, subsampling operations are prevalent through strided layers, e.g., convolution or pooling layers with a stride larger than one. As anti-aliasing actions are not usually taken, feature maps containing high frequency components may lead to a loss of shift invariance and/or distortion problems. The findings above have led to a growing area of research aimed at increasing shift invariance in CNNs, either through architectural improvements \cite{zhang2019making,chaman2021truly,vasconcelos2020effective} or via data augmentation \cite{engstrom2018rotation}. In this work, we are interested in the former, which usually revolves around the idea of improving the subsampling operations. The predominant trend consists of adding anti-aliasing measures to the CNN architectures. Similarly to the signal processing fix, some works adopt different low-pass filter based solutions, mainly for image recognition \cite{zhang2019making,vasconcelos2020effective} and more recently also for speech recognition \cite{bruguier2020anti}. Zhang demonstrates that adding blurring to deep convolutional networks before the strided operations (convolution and pooling) provides increased accuracy on ImageNet \cite{deng2009imagenet} and improved robustness to image perturbations \cite{zhang2019making}. Vasconcelos et al. conduct a study to isolate the impact of aliasing within the different modules of a ResNet-50 architecture \cite{vasconcelos2020effective}. Bruguier et al. insert 1D low-pass filters along the temporal dimension of feature maps in an RNN-based speech recognizer \cite{bruguier2020anti}. In contrast to the anti-aliasing line of work, another alternative is to design architectural changes to explicitly enforce invariance in the network.
For example, several previous works focus on increasing the invariance of CNNs to \textit{rotations} in input images, by applying constraints to the convolutional filters \cite{worrall2017harmonic} or proposing \textit{ad hoc} operations to enforce this property \cite{dieleman2016exploiting}. Recently, to address the lack of shift invariance caused by subsampling operations, Chaman and Dokmanic propose a downsampling mechanism called \textit{adaptive polyphase sampling} \cite{chaman2021truly}. The key idea is to avoid using the same fixed sampling grid for subsampling a feature map (as typically done in CNNs), but instead select it adaptively based on some criterion (e.g., choosing the grid that produces a downsampled output with highest energy). To our knowledge, such techniques aimed at fostering shift invariance in CNNs have not been evaluated for sound event classification. In this paper, we ask whether the lack of shift invariance is a problem in sound event recognition, and whether there are benefits in addressing it. To this end, we apply several mechanisms aimed at increasing shift invariance in the subsampling operations of CNNs, and evaluate them on a large-vocabulary sound event classification task. Specifically, we adopt mechanisms from the two trends mentioned above, namely, low-pass filters (non-trainable as proposed in \cite{zhang2019making}, as well as a trainable version proposed by us), and adaptive polyphase sampling \cite{chaman2021truly}. We insert these architectural changes into the max-pooling layers of VGG variants \cite{simonyan2014very}, and we evaluate their effect on the FSD50K dataset \cite{fonseca2020fsd50k} using models of small and large capacity, and in the presence of a strong regularizer (\textit{mixup} augmentation \cite{zhang2017mixup}). We show that these simple changes consistently improve sound event classification in all cases considered. We also demonstrate that they increase the network's robustness to spectrogram shifts.
This is achieved without adding any (or adding very few) trainable parameters, which makes the proposed pooling mechanisms an appealing alternative to conventional pooling layers. The outcome is a new state-of-the-art mAP of 0.541 on the FSD50K classification benchmark when not using external training data. Code will be made available in the final version of the paper.\footnote{\url{https://github.com/edufonseca/shift_sec}} \section{Method} \label{sec:method} Our focus is on evaluating mechanisms to improve shift invariance applied to the subsampling operations within max-pooling layers in CNNs. A max-pooling layer with square size $k$ and stride $s$ can be understood as the cascade of two operations, as illustrated in the top diagram of Fig.~\ref{fig:diagram}: a densely-evaluated (i.e., with unit stride) max-pooling operation with size $k$, followed by a subsampling operation with stride $s$ greater than unity. \begin{figure}[t] \centering \centerline{\includegraphics[width=0.82\columnwidth]{figs/block_diagram_v7.pdf}} \vspace{-3mm} \caption{Max-pooling layer and proposed methods to improve shift invariance. \textit{Top}: A max-pooling layer can be decomposed into a densely-evaluated max-pooling operation with size $k$, followed by a subsampling operation with stride $s$. \textit{Middle}: Inclusion of a low-pass filter before subsampling. \textit{Bottom}: Adaptive Polyphase Sampling (APS) can be used instead of naive subsampling.} \label{fig:diagram} \end{figure} \subsection{Low-Pass Filtering Before Subsampling} \label{sec:lpf} We focus on the effect of low-pass filtering feature maps before subsampling in the context of a max-pooling layer, inspired by Zhang~\cite{zhang2019making}. The subsampling operation may incur aliasing problems as the incoming signal (the feature map) is not band-limited. The classic signal-processing fix is to add a low-pass filter before subsampling \cite{oppenheim2001discrete}.
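As a toy illustration of the problem (our own example, not from the paper): naively subsampling the fastest-alternating 1D signal is maximally shift dependent, while a two-tap moving average before subsampling removes the discrepancy away from the borders:

```python
import numpy as np

# highest-frequency signal representable at this sampling rate
x = np.array([0., 1.] * 8)
x_shift = np.roll(x, 1)  # circular shift by one sample

# naive stride-2 subsampling: the two versions disagree everywhere
assert (x[::2] == 0).all() and (x_shift[::2] == 1).all()

# two-tap moving average (anti-aliasing low-pass) before subsampling
lp = lambda s: np.convolve(s, [0.5, 0.5], mode='same')
# away from the borders, both subsampled outputs now agree (all 0.5)
assert np.allclose(lp(x)[2::2], lp(x_shift)[2::2])
```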
One way to realize this filter is through a 2D kernel, $LPF_{m,n}$, of size $m$ x $n$, such that the max-pooling layer for an incoming feature map $x$ becomes \begin{equation} \label{eqn:blurpool} y_{lpf} = Subsample_s(LPF_{m,n}(MaxPool_{k,1}(x))), \end{equation} where $MaxPool_{k,1}$ is a max-pooling operation across areas of size $k$ x $k$ and unit stride, $LPF_{m,n}$ applies a low-pass filter of size $m$ x $n$, and $Subsample_s$ denotes naive subsampling with a stride $s$, as illustrated in the middle diagram of Fig.~\ref{fig:diagram}. This simple measure can have different benefits when applied within CNNs. First, in case the feature maps present energy variations of too high frequency for the subsampling operation to be carried out without error, $LPF_{m,n}$ will help mitigate aliasing.\footnote{By high-frequency energy variations in the feature map we refer to rapid spectro-temporal modulations or sharp patterns in the 2D signal formed by a feature map. This should not be confused with the frequency components of the input audio signal. The high-frequency energy variations are not necessarily constrained to a specific region of the feature map. For example, a sequence of human clapping sounds forms a time-frequency representation with a series of transients. In its corresponding feature map, the energy variations given by such a sequence of transients can generate high frequencies, even at the lowest end of the spectrum.} This can reduce the amount of corrupted information flowing through the network. Second, the signal processing literature demonstrates how preventing aliasing can favour shift invariance in a given process \cite{oppenheim2001discrete}. One way to see it is that $LPF_{m,n}$ spreads possible sharp patterns across neighbouring feature map bins. Intuitively, when subsampling differently shifted versions of a spectrogram, the subsampled feature maps are likely to be more structurally similar if they have been previously low-pass filtered.
This could provide the network with improved generalization to this kind of small shift, potentially increasing classification performance. Third, $LPF_{m,n}$ is essentially blurring or smoothing out the incoming feature map, which could be understood as a form of regularization. For example, L2 regularization is a common way to penalize outlier weights with large absolute values, driving them towards zero \cite{cortes2012l2}. It could be argued that the proposed $LPF_{m,n}$ has a similar effect on the feature map bins, smoothing out the most drastic energy variations---in other words, attenuating the high frequency components in the 2D signal formed by the feature map. In Sec. \ref{sec:experiments} we discuss through experiments which of these hypotheses seem more plausible. To implement $LPF_{m,n}$, note that common 2D image-oriented low-pass filter kernels share some basic characteristics: non-negative weights that add up to unity \cite{distante2020handbook}. This can be realized in several ways. \smallskip \noindent \textbf{Non-trainable low-pass filters.} These are commonly defined as binomial filters, which are in turn discrete approximations of Gaussian filters. To generate 1D binomial filters, a simple manner is to repeatedly convolve the base averaging mask [1,1] with itself, in order to get filter masks such as [1,2,1], [1,3,3,1], or [1,4,6,4,1], for one, two and three convolutions, respectively. Then, a 2D square binomial mask can be obtained simply by convolving a 1D binomial filter with its transpose \cite{distante2020handbook}. In Sec. \ref{sec:experiments} we denote this type of filter as \textit{BlurPool} for consistency with \cite{zhang2019making}, as the non-trainable low-pass filters that we use in our experiments are largely inspired by this work.
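A minimal numpy sketch of this pipeline (ours, for illustration only; the paper implements the filtering as a depthwise separable convolution, whereas here we use plain loops and 'valid' borders, and all function names are our own). It builds binomial kernels by repeated convolution of [1,1], forms the 2D mask via an outer product, and chains the three operations of Eq.~(\ref{eqn:blurpool}); a softmax-normalized kernel, as used by the trainable variant discussed next, is also shown:

```python
import numpy as np

def binomial_kernel_1d(size):
    # repeatedly convolve the base averaging mask [1, 1] with itself
    k = np.array([1.0, 1.0])
    for _ in range(size - 2):
        k = np.convolve(k, [1.0, 1.0])
    return k

def binomial_kernel_2d(size):
    # 2D square binomial mask: outer product of a 1D mask with itself,
    # normalized so the weights add up to unity
    k = np.outer(binomial_kernel_1d(size), binomial_kernel_1d(size))
    return k / k.sum()

def softmax_kernel(raw):
    # TLPF-style constraint: softmax makes any raw kernel non-negative, unit-sum
    w = np.exp(raw - raw.max())
    return w / w.sum()

def dense_maxpool(x, k):
    # max pooling with size k and unit stride ('valid' borders)
    H, W = x.shape
    return np.array([[x[i:i + k, j:j + k].max() for j in range(W - k + 1)]
                     for i in range(H - k + 1)])

def blurpool(x, k=2, lpf=None, stride=2):
    # MaxPool_{k,1} -> LPF_{m,n} -> Subsample_s, as in Eq. (1)
    y = dense_maxpool(x, k)
    f = binomial_kernel_2d(3) if lpf is None else lpf
    m, n = f.shape
    z = np.array([[(y[i:i + m, j:j + n] * f).sum()
                   for j in range(y.shape[1] - n + 1)]
                  for i in range(y.shape[0] - m + 1)])
    return z[::stride, ::stride]

assert np.array_equal(binomial_kernel_1d(3), [1, 2, 1])
assert np.array_equal(binomial_kernel_1d(5), [1, 4, 6, 4, 1])
assert np.allclose(softmax_kernel(np.zeros((3, 3))), 1 / 9)
```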
\smallskip \noindent \textbf{Trainable low-pass filters.} These can be defined by randomly initializing a kernel with dimensions $m$ x $n$, and learning its weights through backpropagation. In order to imprint the low-pass nature on the filter, its weights can be passed through a softmax function to ensure non-negativity and normalization. In Sec. \ref{sec:experiments} we denote this type of filter as Trainable Low-Pass Filter (\textit{TLPF}). An alternative is to create auxiliary loss functions to encourage the filter weights to adopt a low-pass behaviour through loss penalization. Trainable low-pass filters have also been used recently within a learnable audio frontend \cite{zeghidour2021leaf}. In this work, we compare non-trainable low-pass filters (BlurPool) and trainable low-pass filters (TLPF) constrained via softmax, for simplicity. A given low-pass filter can be applied over the incoming feature map via a convolution operation that also incorporates the required subsequent subsampling stride $s$. More specifically, $Subsample_s(LPF_{m,n}(\cdot))$ in (\ref{eqn:blurpool}) is implemented via a depthwise separable convolution using trainable or non-trainable $LPF_{m,n}$ and stride $s$. \subsection{Adaptive Polyphase Sampling} \label{sec:aps} Adaptive polyphase sampling (APS) is a downsampling mechanism that directly addresses the lack of shift invariance caused by subsampling operations \cite{chaman2021truly}. The underlying principle of APS is based on a simple observation: the result of subsampling a time-frequency (T-F) patch and subsampling its shifted-by-one-bin version can be different when bins are sampled at the same fixed positions. This happens because the energies captured by the same grid over two shifted patches are likely to be different. However, when subsampling a feature map, multiple candidate grids could actually be used instead of always using the same grid (as typically done).
Intuitively, a time/frequency shift applied over an input patch could be seen conceptually as translating its energy bins from one grid to another. One way to be robust to these shifts is to select the subsampling grid adaptively based on some criterion, such that the grid follows the shift at the input. More formally, given an input feature map $x$, and considering a subsampling operation\footnote{This subsampling operation would follow a densely-evaluated max pooling operation in order to form a typical max-pooling layer, see Fig. \ref{fig:diagram}} with stride 2, there are four possible grids that can be used for subsampling, depending on which bin from the four options in each 2x2 area is passed to the output. Subsampling with each grid will yield one of the four possible candidate subsampled feature maps, termed \textit{polyphase components} \cite{chaman2021truly}, which can be denoted as $\left\{y_{ij}\right\}_{i,j=0}^1$. Analogously, if we consider a shifted-by-one-bin version of the input feature map, $\tilde{x}$, its polyphase components are given by $\left\{\tilde{y}_{ij}\right\}_{i,j=0}^1$. The conventional course of action consists of always choosing the same subsampling grid and consequently returning the same polyphase component (e.g., $y_{00}$ by picking the top left bin in each 2x2 area). However, as mentioned, depending on the input patch, this will likely cause different downsampled outputs when the patch is simply shifted by one bin ($y_{00} \neq \tilde{y}_{00}$). It can be demonstrated that the set $\left\{\tilde{y}_{ij}\right\}$ is a re-ordered version of $\left\{y_{ij}\right\}$ \cite{chaman2021truly} (which may be potentially shifted, but carrying identical energy values). Therefore, by adaptively choosing a polyphase component in a permutation invariant way, a very similar subsampled output, $y_{i_{aps}j_{aps}}$, would be obtained regardless of sampling from $x$ or $\tilde{x}$. 
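This selection can be sketched in a few lines of numpy (our illustrative code, using an $l_1$ energy criterion, which is formalized just below; we use a circular shift for the demonstration):

```python
import numpy as np

def aps_downsample(x):
    # the four stride-2 polyphase components y_ij of a 2D feature map
    comps = [x[i::2, j::2] for i in range(2) for j in range(2)]
    # adaptive selection: keep the component with the largest l1 norm
    norms = [np.abs(c).sum() for c in comps]
    return comps[int(np.argmax(norms))]

x = np.arange(64.).reshape(8, 8)
x_shift = np.roll(x, 1, axis=0)  # circular shift by one bin along one axis

# the polyphase components of the shifted map are a permutation of the
# original ones (up to an internal roll), so the selected outputs carry
# the same energy regardless of the shift
a, b = aps_downsample(x), aps_downsample(x_shift)
assert np.isclose(np.abs(a).sum(), np.abs(b).sum())
# naive fixed-grid subsampling does not have this property
assert not np.isclose(np.abs(x[::2, ::2]).sum(), np.abs(x_shift[::2, ::2]).sum())
```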
The adaptive selection can be done by maximizing a given criterion, for example, maximizing some norm $l_p$, as given by \begin{equation} \label{eqn:aps} i_{aps},j_{aps} = \argmax_{i,j} \left\{\norm{y_{ij}}_p\right\}_{i,j=0}^1, \end{equation} where $p\in\left\{1,2\right\}$. In this way, by substituting the naive subsampling in a max-pooling layer with APS (as illustrated in the bottom diagram of Fig.~\ref{fig:diagram}), robustness to incoming time/frequency shifts is increased. The benefit from APS comes from the generalization to shifts embedded in the network’s architecture, in a fashion conceptually similar to what is done with $LPF_{m,n}$ (Sec. \ref{sec:lpf}). APS, in contrast, provides no explicit measures against potential aliasing problems. \section{Experimental Setup} \label{sec:setup} \subsection{Evaluation and Training Details} \label{ssec:dataset} We evaluate the proposed methods on a large-vocabulary sound event tagging task using the recently released FSD50K dataset \cite{fonseca2020fsd50k}. FSD50K is an open dataset of sound events containing over 51k Freesound\footnote{\url{https://freesound.org}} audio clips, totalling over 100h of audio manually labeled using 200 classes drawn from the AudioSet Ontology \cite{gemmeke2017audio}. We follow the evaluation procedure proposed in the FSD50K paper \cite{fonseca2020fsd50k} (with minor deviations), which is outlined next. Incoming clips are transformed to log-mel spectrograms using a 30ms Hann window with 10ms hop, and 96 bands. To deal with the variable-length clips, we use T-F patches of 1s, equivalent to 101 frames, yielding patches of $T \times F=101 \times 96$ that feed the networks. Clips shorter than 1s are replicated while longer clips are trimmed into several patches with 50\% overlap, inheriting the clip-level label. We train, validate and evaluate using the proposed \textit{train} set, \textit{val} set and \textit{eval} set \cite{fonseca2020fsd50k}.
Models are trained using the Adam optimizer \cite{kingma2014adam} to minimize binary cross-entropy loss. The learning rate is 3e-5, halved whenever the validation metric plateaus for 10 epochs. Models are trained up to 150 epochs, early-stopping the training whenever the validation metric does not improve for 20 epochs. We use a batch size of 128 and shuffle training examples between epochs. Once the training is over, the model checkpoint with the best validation metric is selected to predict scores and evaluate performance on the eval set. For inference, we compute output scores for every (eval or val) T-F patch, then average per-class scores across all patches in a clip to obtain clip-level predictions. Our evaluation metric is balanced mean Average Precision (mAP), that is, AP computed on a per-class basis, then averaged with equal weight across all classes to yield the overall performance, following \cite{fonseca2020fsd50k,gemmeke2017audio,fonseca2020addressing}. \subsection{Baseline Model} \label{ssec:baseline} As a base network, we use a VGG-like architecture \cite{simonyan2014very}. This type of architecture has been widely used for SET \cite{kong2019panns,dorfer2018training,hershey2017cnn,ebrahimpour2020end} and is the most competitive baseline for FSD50K when compared to others of higher complexity \cite{fonseca2020fsd50k}---which also accords with recent music tagging evaluations \cite{won2020evaluation}. Due to its limited size compared to other baselines \cite{fonseca2020fsd50k}, it allows faster experimentation. In addition, this architecture conveniently features several max-pooling layers that allow the study of the proposed pooling mechanisms. Specifically, the network that we use for the majority of our experiments is similar to a VGG type A \cite{simonyan2014very}.
The network, denoted as \textit{VGG41}, consists of 4 convolutional blocks, with each block comprising two convolutions with a receptive field of (3,3), and each convolution followed by Batch Normalization \cite{ioffe2015batch} and ReLU activation. Between the blocks, max-pooling layers of size (2,2) (and same stride) are placed by default---they will be substituted by the proposed pooling mechanisms. A densely-evaluated max pooling operation (of size 3x3 and unit stride) will sometimes be inserted between the convolutions within each block---we will refer to it as \textit{intra-block pooling} (IBP). This provides partial translation invariance but not dimensionality reduction, allowing the same (max) element to be transferred to the output in adjacent spatial locations. This tweak has been applied in various non-audio applications \cite{goodfellow2016deep}, and to a lesser extent also in SET tasks \cite{ebrahimpour2020end}. Finally, in order to summarize the final feature map information before the output classifier, we use a global pooling in which we first aggregate information along the spectral dimension via averaging for every time step, then max-pool the outcome in the time dimension. We found that aggregating first spectral and then temporal information in this manner is the most beneficial for our task among the combinations tested. VGG41 has 1.2M weights, which allows for relatively fast experimentation. The baseline and topline settings using VGG41 are also evaluated using \textit{VGG42} (of 4.9M weights), where we double the width of the network with respect to VGG41 (i.e., using twice the number of filters in every convolutional layer). \subsection{\textit{mixup}} \label{ssec:mixup} We evaluate the baseline and top performing methods proposed in Sec. \ref{sec:method} with or without \textit{mixup} augmentation \cite{zhang2017mixup} in order to analyze their behavior in the presence of a strong regularizer.
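For reference, a minimal sketch of the mixup operation (ours, purely illustrative; batch-level random pairing is one common implementation choice, and the tensor shapes mirror the patches and label space used in this work):

```python
import numpy as np

def mixup_batch(x, y, alpha=1.25, rng=None):
    # mixup: blend each example (and its label) with a randomly paired one,
    # using a single mixing coefficient lambda ~ Beta(alpha, alpha)
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]

rng = np.random.default_rng(0)
x = rng.random((4, 101, 96))                     # batch of 1 s log-mel patches
y = (rng.random((4, 200)) > 0.9).astype(float)   # multi-label targets
xm, ym = mixup_batch(x, y, rng=rng)
assert xm.shape == x.shape and ym.shape == y.shape
assert (ym >= 0).all() and (ym <= 1).all()       # soft labels stay in [0, 1]
```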
Mixup acts as a regularizer by encouraging networks to predict less confidently on linear interpolations of training examples. In particular, it augments the training distribution by creating virtual examples under the assumption that linear interpolations in the feature space correspond to linear interpolations in the label space. Following \cite{zhang2017mixup}, we sample $\lambda$ from a beta distribution $\lambda \sim$ Beta$(\alpha, \alpha)$, for $\alpha \in (0, \infty)$. The hyper-parameter $\alpha$, which controls the interpolation strength, is set to 1.25 after tuning on the val set. We choose mixup because the concept of mixing sounds is an audio-informed operation, and it has been proven useful for SET \cite{kong2019panns,Fonseca2019model,gong2021psla} and other sound event research tasks \cite{fonseca2021unsupervised}. In our view, mixup can be interpreted from two different perspectives. First, it is a regularizer to mitigate overfitting, which can be important at our scale of data, especially for some classes that present fewer than 100 training clips. Second, mixup is a mechanism that allows covering during training a diversity of examples that may be encountered in evaluation, hence improving generalization. In particular, upon the creation of FSD50K, audio clips with multiple sound sources were prioritized to some extent for the eval set, whereas the dev set presents a higher proportion of single-source clips. It can therefore be argued that a kind of domain shift exists between both sets, which is being partially compensated through mixup. Hence, this type of augmentation is especially well aligned with the recognition task of FSD50K. \section{Experiments} \label{sec:experiments} We evaluate the methods proposed in Sec. \ref{sec:method} on the SET task posed by FSD50K, using VGG41 (Sec. \ref{ssec:eval_41}) and also using mixup and VGG42 (Sec. \ref{ssec:eval_42}). In Sec.
\ref{ssec:quanti} we demonstrate that the methods are increasing the network's robustness to input shifts. Sections \ref{ssec:discuss} and \ref{ssec:exp_previous_work} provide discussion and comparison with previous work on FSD50K. For all the performance results (Tables \ref{tab:results} and \ref{tab:progress}), we report average performance and standard deviation across three runs. \subsection{Evaluation using a Small Model} \label{ssec:eval_41} \begin{table*}[ht] \caption{mAP obtained by inserting different pooling mechanisms into the VGG41 baseline. TLPF = Trainable Low-Pass Filter, APS = Adaptive Polyphase Sampling, IBP = Intra-block Pooling.} \vspace{-1mm} \centering \begin{tabular}{lc|lc|lc} \toprule \textbf{Method} & \textbf{mAP} & \textbf{Method} & \textbf{mAP} & \textbf{Method} & \textbf{mAP} \\ \midrule \midrule VGG41 (baseline) & 0.457 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.003 & + BlurPool 5x5 + IBP & 0.479 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.003 & + TLPF 3x3 + IBP & 0.478 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.002 \\ + BlurPool 3x3 & 0.475 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.002 & + TLPF 5x5 + IBP & 0.481 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.002 & + TLPF 4x4 + IBP & 0.480 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.004 \\ + BlurPool 5x5 & 0.476 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.003 & + TLPF 5x5 + APS $l_1$ & \textbf{0.484} $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.002 & + TLPF 5x5 + IBP & \textbf{0.481} $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.002 \\ + TLPF 3x3 & 0.476 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.003 & + APS $l_1$ + IBP & 0.478 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.001 & + TLPF 6x6 + IBP & 0.480 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.001\\ + TLPF 5x5 & 0.479 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.003 & & & + TLPF 1x4 + IBP & 0.475 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.005 \\ + TLPF 6x6 & 0.477 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.001 & & & + TLPF 1x5 + IBP & 0.480 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.001 \\ + APS $l_1$ & 
\textbf{0.480} $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.001 & & & + TLPF 1x6 + IBP & 0.480 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.002 \\ + APS $l_2$ & 0.460 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.002 & & & + TLPF 4x1 + IBP & 0.469 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.004 \\ + IBP & 0.472 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.002 & & & + TLPF 5x1 + IBP & 0.470 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.004 \\ & & & & + TLPF 6x1 + IBP & 0.472 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.002 \\ \bottomrule \end{tabular} \label{tab:results} \end{table*} Table \ref{tab:results} shows the results of inserting the pooling mechanisms individually into VGG41 (left section) as well as in some pairwise combinations (center section). The right section lists the results of exploring TLPF in depth. By looking at the left section, it can be seen that all the evaluated methods outperform the baseline system. That is, inserting each of the methods alone into a standard VGG-like architecture improves recognition performance. The boosts range from 0.003 in the worst case (APS $l_2$) to 0.023 in the best case (APS $l_1$). If we focus on the low-pass filter based solutions, we observe that this classical signal processing technique is beneficial for CNN-based sound event classification. While it may seem that blurring the feature maps could smooth out relevant detailed information (thus leading to performance degradation), results indicate that it is indeed helpful. The choice of trainable vs.\ non-trainable low-pass filters does not seem critical, yet the trainable version TLPF seems to produce slightly higher mAP values. The different sizes of these filters make it possible to trade off high-frequency smoothing against loss of information in the incoming feature maps (the larger the size, the stronger the smoothing effect). Results seem to indicate that larger smoothing areas (5x5 vs.\ 3x3) are beneficial. By looking at results with APS, we observe that $l_1$ outperforms $l_2$ as the norm criterion. 
We also did preliminary experiments with other metrics such as $l_\infty$, $l_0$ and variance, but we found $l_1$ to be the best choice overall. Interestingly, even a naive tweak like IBP shows some impact, although more modest than that of the other methods. The two top methods when applied individually are APS $l_1$ and TLPF 5x5, showing on-par performance. We set out to combine some of the methods in pairs in order to see if they are complementary (center section). Combining low-pass filtering (which operates before subsampling between convolutional blocks) and IBP (which operates between convolutions within every block) seems to provide a small but consistent boost, for both BlurPool and TLPF. When joining the top performing methods, specifically, low-pass filtering the incoming feature maps with TLPF 5x5, followed by subsampling them with APS, we observe a small performance boost. A possible explanation for their complementarity could lie in TLPF addressing aliasing issues while APS is agnostic to them. Finally, joining APS and IBP does not yield further boosts. The right section of Table \ref{tab:results} shows the results of exploring different low-pass filter shapes in TLPF. In previous work, low-pass filters are usually adopted for computer vision tasks, hence they are of square size (e.g., \cite{zhang2019making,vasconcelos2020effective}). Here, we seek to find out if there is one axis of the audio spectrogram feature maps (time or frequency) along which low-pass filtering is more beneficial. To this end, we run experiments using 1D trainable low-pass filters applied only along the frequency axis (filters of size $1$ x $n$) or the temporal axis (filters of size $m$ x $1$). At the top of the right section, we first report the results obtained by progressively increasing the area of square filters. A sweet spot at size 5x5 can be observed. Then we report results obtained by low-pass filtering only along the frequency axis. 
Interestingly, we find that much of the performance obtained with square filters is already achieved by smoothing out the spectral variations alone. In contrast, when we apply the low-pass filters only along the time axis, the performance is noticeably worse. \subsection{Evaluation using Regularization and a Larger Model} \label{ssec:eval_42} Next, we select the best setups of the two pooling mechanisms considered on VGG41 (one based on low-pass filtering and another based on APS), as well as their combination. Table \ref{tab:progress} shows the results using these setups, now adding mixup augmentation and also doubling the width of the network, which multiplies its number of weights approximately by four. \begin{table}[!t] \vspace{-2mm} \caption{mAP obtained by using top performing pooling mechanisms in the presence of mixup and with the larger-capacity VGG42. Values in parentheses are absolute improvements over the corresponding baseline. TLPF = Trainable Low-Pass Filter, APS = Adaptive Polyphase Sampling, IBP = Intra-block Pooling.} \vspace{-1mm} \centering \begin{tabular}{@{}lccc@{}} \toprule & \textbf{VGG41} & \textbf{VGG41} & \textbf{VGG42} \\ \textbf{Method} & & \textbf{+ mixup} & \textbf{+ mixup} \\ \midrule \midrule Baseline & 0.457 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.003 & 0.497 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.003 & 0.523 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.002\\ \midrule + APS $l_1$ & 0.480 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.001 & 0.513 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.003 & 0.538 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.004 \\ & \footnotesize{(+0.023)} & \footnotesize{(+0.016)} & \footnotesize{(+0.015)} \\ + TLPF 5x5 + IBP & 0.481 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.003 & 0.511 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.003 & 0.539 $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.002\\ & \footnotesize{(+0.024)} & \footnotesize{(+0.014)} & \footnotesize{(+0.016)} \\ + TLPF 5x5 + APS $l_1$ & \textbf{0.484} 
$\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.002 & \textbf{0.514} $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.003 & \textbf{0.541} $\raisebox{.2ex}{$\scriptstyle\pm$}$ 0.002 \\ & \footnotesize{(+0.027)} & \footnotesize{(+0.017)} & \footnotesize{(+0.018)} \\ \bottomrule \end{tabular} \label{tab:progress} \vspace{-4mm} \end{table} The left column lists the best values from Table \ref{tab:results}, showing boosts from 0.023 to 0.027 with respect to the baseline. When we add mixup to VGG41 (center column), substantial performance improvements are observed, demonstrating the good alignment of this operation with SET in general (which accords with \cite{kong2019panns,gong2021psla}) and with the FSD50K classification task in particular (as discussed in Sec. \ref{ssec:mixup}). All methods perform in the same ballpark, showing boosts with respect to the baseline of up to 0.017, with the combination of TLPF and APS yielding the top mAP. Our motivation to combine the proposed methods with mixup is to analyze their behaviour in the presence of a strong regularizer. If they acted solely as a general form of regularization, we would expect them to provide limited boosts when combined with mixup. We do observe somewhat smaller improvements in the combined setting, but the boosts are still solid, both when using VGG41 and VGG42 (center and right columns). These results suggest that the proposed methods are addressing problems beyond lack of regularization, presumably reinforcing robustness to time/frequency shifts at the input. In Sec. \ref{ssec:quanti} we demonstrate that this is the case by systematically applying time/frequency shifts to a set of input spectrogram patches, and analyzing the network's robustness against these shifts with and without the proposed pooling mechanisms. 
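For concreteness, the mixup operation used throughout these experiments (Sec. \ref{ssec:mixup}) follows the standard formulation of \cite{zhang2017mixup}; the numpy sketch below is illustrative (function and variable names are ours, not from the training code):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=1.25):
    """Mix two (log-mel patch, multi-hot label) pairs.

    lam ~ Beta(alpha, alpha); the same lam interpolates both the
    feature space (spectrograms) and the label space.
    """
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y
```

With $\alpha=1.25$ the Beta distribution is unimodal around 0.5, so mixed clips tend to contain both sources at comparable levels, mimicking the multi-source eval clips.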
Finally, when inserting these pooling methods into VGG42 in the presence of mixup (right column), we see that they are also beneficial within a larger-capacity model where performance is more competitive (in our case, increasing the capacity from 1.2 to 4.9M weights). In particular, combining TLPF and APS yields the top mAP again, showing a boost of 0.018 over the baseline. \subsection{Characterizing the Increase of Shift Invariance} \label{ssec:quanti} In previous experiments we have seen that classification performance improves when we adopt the proposed pooling mechanisms, presumably due to the increase of shift invariance. Here, we demonstrate empirically that these pooling mechanisms are indeed addressing this problem. To this end, we apply shifts to a set of input spectrogram patches and analyze the network's robustness against these shifts with and without the proposed pooling mechanisms. The set of data for this evaluation consists of 1000 audio clips\footnote{This list of 1000 files is released for future evaluations.} from the eval set. They are selected at random after applying the following constraints for a more controlled experimental scenario: \textit{i)} we choose clips with one single label (i.e., presumably containing one single sound event); \textit{ii)} we choose clips with a minimum length of 2s, so that we can always select a patch in the time span [0.5, 1.5] s, allowing us to discard the potential preceding silence that sometimes occurs. The shifts applied to the input 1s T-F patches of $T \times F=101 \times 96$ obey one of the two following protocols. The first protocol (denoted as \textit{time-$n_f$}) is simply a time shift of the patch by $n_f$ frames, with $n_f \in \left\lbrace 1,3,5\right\rbrace$. Each unit of $n_f$ corresponds to 10ms (the hop size when framing the input audio signal). 
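The time protocol amounts to extracting two overlapping patches from the clip's log-mel spectrogram, one displaced by $n_f$ frames; a minimal numpy sketch (names and the reference start frame are illustrative):

```python
import numpy as np

T, F = 101, 96   # 1 s patch: 101 frames (10 ms hop) x 96 mel bands

def time_shift_pair(logmel, n_f, t0=50):
    """Return the reference patch starting at frame t0 and the same-size
    patch shifted by n_f frames (protocol time-n_f)."""
    ref = logmel[t0:t0 + T]
    shifted = logmel[t0 + n_f:t0 + n_f + T]
    return ref, shifted
```

With a 10 ms hop, time-3 displaces the analysis window by 30 ms while leaving the audio content otherwise untouched.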
The second protocol (denoted as \textit{freq-$n_b$}) consists of shifting the input patch by $n_b$ mel bands upwards in frequency, with $n_b \in \left\lbrace 1,3,5\right\rbrace$. The $n_b$ original highest bands are discarded, and the $n_b$ lowest bands in the new patch are filled with white noise centered at the mean value of the original lowest band. By doing this, we analyze the effect not only of frequency shifts but also of small artificial perturbations in the input. For every input patch, we \textit{i)} apply one shift according to one of the protocols above; \textit{ii)} compute network predictions for both original and shifted patches, and \textit{iii)} measure the predictions' sensitivity to the shift using two metrics, namely, \textit{classification consistency} and \textit{mean absolute change}. Classification consistency refers to the percentage of cases in which the network predicts the same top class for both original and shifted patches \cite{zhang2019making,chaman2021truly}. Mean absolute change (MAC) measures the absolute change of the probability predicted for the top class after the shift, averaged across the 1000 examples \cite{azulay2018deep}. The motivation to use MAC is to rule out the possibility that variations in classification consistency originate from tiny differences between the top class and the second most likely class predicted. Table \ref{tab:quantify} shows the result of this evaluation for the time and frequency protocols. In Table \ref{tab:quantify}, \textit{baseline} corresponds to the VGG41 baseline of Table \ref{tab:results}, whereas \textit{proposed} corresponds to the same model after incorporating TLPF 5x5 and APS $l_1$. \begin{table}[!t] \caption{Classification consistency (in \%, higher is better) and mean absolute change (MAC) (lower is better) when applying time and frequency shift protocols over input patches. 
Models evaluated are the \textit{baseline} of Table \ref{tab:results} and the same model after inserting TLPF 5x5 and APS $l_1$ (\textit{proposed}). The proposed model exhibits higher robustness to shifts.} \vspace{-1mm} \centering \begin{tabular}{lcc|cc} \toprule \textbf{Protocol} & \multicolumn{2}{c}{\textbf{Consistency (\%)}} & \multicolumn{2}{c}{\textbf{MAC}} \\ & baseline & proposed & baseline & proposed \\ \midrule \midrule time-1 & 82.0 & 92.5 & 0.078 & 0.030 \\ time-3 & 75.2 & 87.2 & 0.106 & 0.048 \\ time-5 & 74.0 & 83.2 & 0.110 & 0.062 \\ \midrule freq-1 & 63.5 & 79.8 & 0.175 & 0.078 \\ freq-3 & 53.3 & 67.6 & 0.234 & 0.145 \\ freq-5 & 49.0 & 60.0 & 0.263 & 0.199 \\ \bottomrule \end{tabular} \label{tab:quantify} \vspace{-4mm} \end{table} In the results of the time protocol (top section), we see that by applying a time shift of only 1 frame (10 ms), the baseline network changes its top prediction 18\% of the time. Note that this is a minor modification in the time framing of the input audio signal, which can be regarded as imperceptible to a human in most cases. The proposed network (including the mechanisms to increase shift invariance) shows higher classification consistency than the baseline for all the cases considered. As the time shifts increase, the network becomes less consistent (according to our definition). These findings are confirmed by the proposed network consistently showing smaller MAC values, i.e., the output probability for the top class is more robust to the time shifts. By looking at the results of the frequency protocol (bottom section) we observe a similar trend, with the proposed network showing increased robustness to frequency shifts and small perturbations (larger classification consistency percentages, and smaller MAC values than the baseline model). Interestingly, we see that the classification consistency values in this protocol are overall lower than those observed for the time protocol (and vice versa for the MAC values). 
For example, the minimal shift applied in time yields consistency values of 82.0\% and 92.5\% for the baseline and proposed models, respectively. In contrast, the minimal shift applied in frequency leads to analogous values of 63.5\% and 79.8\%. This can be due to several reasons. First, the network considered may be more sensitive to frequency shifts than to time shifts. This could be linked to results in Sec. \ref{ssec:eval_41}, where low-pass filtering along the frequency axis is shown to be more effective than along time. Second, it could happen that the frequency shifts affect the semantics of the input examples to some degree (which is less likely with the time protocol). Third, in this protocol we are introducing small artificial perturbations never seen at training time, which may confuse the network. Regardless, the proposed approach exhibits higher robustness to the applied shifts in all cases analyzed. Similar trends are observed when using the proposed methods alone (either TLPF or APS). In summary, the results of Table \ref{tab:quantify} demonstrate that the proposed pooling mechanisms increase shift invariance in the network. Fig. \ref{fig:shift_ex} shows the classification stability of the same models used for Table \ref{tab:quantify} with two examples: applying the time shift protocol over a water dripping sound (top) and the frequency shift protocol over a computer keyboard typing sound (bottom). In the top plots, predictions for the \textit{Drip} class are stable when we insert the proposed pooling mechanisms, as one would expect upon shifting the signal framing by a few milliseconds. However, the baseline predictions show certain fluctuations. Similarly, in the bottom plots the proposed network exhibits higher robustness against the frequency shifts and the induced perturbations. 
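The two sensitivity metrics above can be computed directly from the stacked predictions; a numpy sketch (array names are illustrative):

```python
import numpy as np

def consistency_and_mac(p_orig, p_shift):
    """Shift-sensitivity metrics over n clips.

    p_orig, p_shift: (n, n_classes) class probabilities predicted for the
    original and shifted patches.  Returns classification consistency (%)
    and mean absolute change (MAC) of the top-class probability.
    """
    top = np.argmax(p_orig, axis=1)
    consistency = 100.0 * np.mean(top == np.argmax(p_shift, axis=1))
    idx = np.arange(len(top))
    mac = np.mean(np.abs(p_orig[idx, top] - p_shift[idx, top]))
    return consistency, mac
```

Note that MAC tracks the probability of the class that was top for the *original* patch, so it penalizes confidence drops even when the top class itself does not flip.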
\begin{figure}[t] \centering \centerline{\includegraphics[width=1.05\columnwidth]{figs/plot_final_paper_800.png}} \caption{Predicted score for the correct class of water dripping (top) and computer keyboard typing (bottom) examples, as a function of shifted time frames (top) and mel bands (bottom). Inserting the pooling mechanisms (TLPF 5x5 + APS $l_1$) makes the predictions more stable against spectrogram shifts.} \label{fig:shift_ex} \end{figure} \subsection{Discussion} \label{ssec:discuss} We have seen that two methods with different underlying principles targeting the increase of shift invariance yield improvements within the same ballpark in our task. Further, we have empirically shown that they increase the model's robustness to spectrogram shifts. These facts demonstrate that there is indeed some lack of this property in the CNN under test, and suggest that reinforcing shift invariance is beneficial for sound event classification. One interesting observation is that while anti-aliasing measures are helpful to increase performance and shift invariance, they do not seem strictly necessary in light of the overall similar performance attained by APS. In terms of model size, the impact is negligible for all evaluated methods. Specifically, TLPF 5x5 adds 6k (0.50\%) and 12k (0.24\%) trainable parameters over VGG41 and VGG42 respectively. Its non-trainable counterpart (BlurPool 5x5) adds the same number of non-trainable parameters. APS does not require any additional parameters (trainable or non-trainable). The additional compute required by the methods is also limited. For the low-pass filtering methods, one additional convolution is needed to apply the low-pass filter over the incoming feature maps for every subsampling operation. Analogously, the only additional compute required by APS is the computation of the polyphase components and their norms in every subsampling operation. 
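The core of the two mechanisms can be sketched in a few lines of numpy. A fixed binomial filter stands in for the (in our case trainable) low-pass filter, and APS is shown with the $l_1$ criterion; this is a simplified single-channel sketch, not the exact layer implementation:

```python
import numpy as np

def binomial_kernel(n=5):
    """Separable 2D low-pass kernel built from the n-tap binomial row."""
    row = np.array([1.0])
    for _ in range(n - 1):
        row = np.convolve(row, [1.0, 1.0])
    row /= row.sum()
    return np.outer(row, row)

def blurpool(x, n=5, stride=2):
    """Low-pass filter a 2D feature map, then subsample (anti-aliased pooling)."""
    k, pad = binomial_kernel(n), n // 2
    xp = np.pad(x, pad, mode="reflect")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + n, j:j + n] * k)
    return out[::stride, ::stride]

def aps(x, stride=2):
    """Adaptive polyphase sampling: keep the polyphase grid with largest l1 norm."""
    grids = [x[i::stride, j::stride] for i in range(stride) for j in range(stride)]
    return max(grids, key=lambda g: np.abs(g).sum())
```

The sketch makes the cost argument above concrete: blurpool adds one extra (small-kernel) convolution per subsampling, while aps only evaluates the norms of the $\mathrm{stride}^2$ polyphase grids.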
Thus, the proposed architectural modifications (which apply only to the pooling layers) yield consistent recognition boosts when inserted into a well-known CNN, with minimal additional computation. This makes them an appealing alternative to conventional pooling layers. \subsection{Comparison with Previous Work} \label{ssec:exp_previous_work} Table \ref{tab:sota} lists previously published results on FSD50K, including mAP and d-prime ($d'$) when available. Our best system obtains a state-of-the-art mAP of 0.541, slightly outperforming recent Transformer-based approaches (0.537) \cite{verma2021audio}, as well as the PSLA approach when trained only on FSD50K (0.452) \cite{gong2021psla}. PSLA makes use of a collection of training techniques (ImageNet pretraining, data balancing and augmentation, label enhancement, weight averaging, and ensembling of several models) \cite{gong2021psla}. Among all of them, the key ingredient seems to be ImageNet pretraining, without which the performance decreases dramatically. While transfer learning from ImageNet seems to provide substantial boosts, we consider transfer learning from external datasets a different track. Our proposed state-of-the-art approach consists of simple architectural changes inserted into a widely-used CNN at minimal computational cost, along with simple augmentation. 
\begin{table}[!t] \vspace{-2mm} \caption{State-of-the-art on FSD50K.} \vspace{-1mm} \centering \begin{tabular}{lcc} \toprule \textbf{Method} & \textbf{mAP} & \pmb{$d'$} \\ \midrule \midrule Baseline \cite{fonseca2020fsd50k} & 0.434 & 2.167 \\ \midrule PSLA (not using ImageNet) \cite{gong2021psla} & 0.452 & - \\ Audio Transformers \cite{verma2021audio} & 0.537 & - \\ VGG42 + APS $l_1$ (ours) & 0.538 & 2.415 \\ VGG42 + TLPF 5x5 + IBP (ours) & 0.539 & 2.417 \\ VGG42 + TLPF 5x5 + APS $l_1$ (ours) & \textbf{0.541} &\textbf{2.431} \\ \midrule \textcolor{mygray}{PSLA (using ImageNet)} \cite{gong2021psla} & \textcolor{mygray}{0.567} & - \\ \bottomrule \end{tabular} \label{tab:sota} \vspace{-4mm} \end{table} \section{Conclusion} \label{sec:conclu} We have evaluated two pooling methods to improve shift invariance in CNNs in the context of a sound event classification task. These methods are based on low-pass filtering and adaptive sampling of incoming feature maps, and are implemented via small modifications in the pooling layers of CNNs. We have evaluated the effect of these architectural changes on the FSD50K dataset, using models of different capacity and in the presence of strong regularization. Results show that the models evaluated indeed present a problem of only-partial shift invariance, and that adopting the proposed methods to improve it yields recognition boosts. The improvements observed are within the same ballpark for both methods, despite their different underlying principles, which allows for small further boosts via their combination. Inserting these pooling methods into VGG variants makes the networks exhibit higher robustness to time/frequency shifts in the input spectrograms. These facts suggest that reinforcing shift invariance in CNNs is beneficial for sound event classification. 
The proposed architectural changes applied to a widely-used CNN yield consistent recognition improvements with minimal additional computation, which makes them an appealing alternative to conventional pooling layers. Our best system achieves a new state-of-the-art mAP of 0.541 on FSD50K. \newpage \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Silicon solid-state nanophotonic structures have a large third-order Kerr nonlinearity and strong light confinement, enabling nonlinear optical dynamics with broad impact and applications. Four-wave mixing (FWM), as an elemental nonlinear process, has been deeply investigated in nanoscale silicon platforms [1-5] and implemented in a multitude of functionalities, including optical signal regeneration [6], mid-infrared frequency conversion [7-8], phase conjugation [9], continuum generation [10], regenerative oscillations [11], correlated photon generation [12], and spectroscopy [13]. Essentially, FWM arises when two intense laser fields cause oscillation of the refractive index via the Kerr effect, which in turn imposes nonlinear phase modulation back onto the input driving fields themselves, producing modulation sidebands at new frequencies that satisfy photon energy and momentum conservation conditions [14, 15]. In silicon nonlinear waveguides, in addition to the Kerr effect, two-photon absorption (TPA) generates considerable free-carrier densities, with a corresponding nonlinearity and change of refractive index via free-carrier dispersion (FCD) and free-carrier absorption (FCA) [14]. Importantly, the generation of free-carrier density via TPA is already quadratically proportional to the incident laser intensity, making the cascaded refractive index modulation via FCA/FCD a fifth-order nonlinear process on top of the third-order Kerr nonlinearity [16-24]. Various FCD/FCA induced nonlinear phenomena have been demonstrated in silicon, such as soliton fission [18, 19], soliton compression [20], frequency shift [21], and spectrum broadening [23, 24]. However, a fundamentally important question---akin to the third-order Kerr effect generating FWM, whether the fifth-order FCD/FCA can give rise to six-wave mixing dynamics---has not been probed until now. 
Here we present the first demonstration of free-carrier induced six-wave mixing (FC-SWM) in a silicon waveguide. We show a non-dispersion-induced inverse dependence of the FC-SWM strength on the input laser detuning, which confirms the existence of FC-SWM and results in its predominance over FWM at small laser detunings. Furthermore, we map out the stronger dependence of FC-SWM on input pump power compared to conventional Kerr FWM. Finally, we observe asymmetric sideband generation efficiencies and identify the phase-sensitive interaction between FC-SWM and FWM as the mechanism for such symmetry breaking. \section{Concept and theoretical analysis} \label{sec:markupcmd} Kerr and free-carrier nonlinear dynamics in silicon can be described by the nonlinear Schr\"{o}dinger equation (NLSE) coupled with the free-carrier generation and recombination dynamics, governed by [14]: \begin{equation} \label{eq:useless} \begin{array}{l} \frac{{\partial E}}{{\partial z}}{\rm{ = }} - i\frac{{{\beta _2}}}{2}\frac{{{\partial ^2}E}}{{\partial {t^2}}} + \frac{{{\beta _3}}}{6}\frac{{{\partial ^3}E}}{{\partial {t^3}}} - \frac{{{\alpha _l}}}{2}E\\\\ \;\;\;\;\;\;\;\;\; + (i{\gamma} - \frac{{{\beta _{TPA}}}}{{2{A_0}}}){\left| E \right|^2}E - (i\delta + \sigma ){N_c}E \end{array} \end{equation} \begin{equation} \label{eq:useless} \begin{array}{l} {N_c} = \int_0^t {\left( {\frac{{{\beta _{TPA}}}}{{2A_0^2h{v_0}}}{{\left| {E(z,\tau )} \right|}^4} - \frac{{{N_c}}}{{{\tau _c}}}} \right)d\tau } \end{array} \end{equation} In Eq. 
(1-2), $E$ is the slowly-varying envelope of the overall input fields into the nanowire waveguide, $\beta_n$ is the $n$th order dispersion, $\gamma$ is the effective Kerr nonlinear coefficient, $\beta_{TPA}$ is the degenerate TPA coefficient, $N_c$ is the free carrier density, $\delta$ and $\sigma$ are the Drude FCD and FCA coefficients respectively, $A_0$ denotes the effective mode area, $\tau_c$ is the free-carrier lifetime, $h$ is Planck's constant, and $v_0$ is the pump frequency. To analytically derive the wave mixing process, the input light field $E$ is described by ${{A_1}\cos ({\omega _1}t) + {A_2}\cos ({\omega _2}t)}$. As shown in Fig. 1(a), here we utilize two input laser frequencies to study the degenerate FWM via ${\chi ^{(3)}}\left( {2{\omega _1} - {\omega _2};{\omega _1},{\omega _1},{\omega _2}} \right)$ and the degenerate FC-SWM via ${\chi ^{(5)}_{FC}}\left( {2{\omega _1} - {\omega _2};{\omega _1},{\omega _1},{-\omega _1}, {\omega _1}, {\omega _2}} \right)$. Such a two-frequency configuration greatly reduces the complexity of the theoretical derivation without loss of generality (more discussion detailed in Supporting Information [25]). Meanwhile, it should be noted that ${\chi ^{(5)}_{FC}}$ is induced by FCD/FCA, and has no contribution from the fifth-order electronic nonlinearity $n_4$ [14]. 
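To build intuition for the free-carrier response to a two-tone field, Eq. (2) can be integrated numerically. The sketch below (numpy, explicit Euler) uses the TPA coefficient, mode area, and carrier lifetime quoted later in the text, but the drive power and detunings are illustrative values chosen only to expose the trend: the carrier-density oscillation at the beat frequency grows as the detuning shrinks.

```python
import numpy as np

BETA_TPA, A0 = 9e-12, 1.3e-13           # m/W, m^2
H_PLANCK, NU0 = 6.626e-34, 1.93e14      # J s, ~1550 nm pump frequency
TAU_C = 500e-12                         # free-carrier lifetime, s

def carrier_density(intensity, t):
    """Explicit-Euler integration of Eq. (2); intensity = |E(t)|^2 in W."""
    dt = t[1] - t[0]
    gen = BETA_TPA / (2 * A0**2 * H_PLANCK * NU0) * intensity**2
    nc = np.zeros_like(t)
    for k in range(len(t) - 1):
        nc[k + 1] = nc[k] + dt * (gen[k] - nc[k] / TAU_C)
    return nc

def beat_component(b, p0=10.0, t_end=1e-9, npts=20001):
    """Magnitude of the carrier-density oscillation at beat frequency b
    for a two-tone drive |E|^2 = p0 (1 + cos(b t))."""
    t = np.linspace(0.0, t_end, npts)
    nc = carrier_density(p0 * (1.0 + np.cos(b * t)), t)
    win = slice(-1001, -1)   # last 50 ps: integer beat periods for the test values
    return abs(np.mean(nc[win] * np.exp(-1j * b * t[win])))
```

Because the oscillating part of the $|E|^4$ drive is integrated over time, the carrier ripple at the beat frequency scales roughly as $1/b$, which is the intuition behind the detuning dependence derived next.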
Considering the evolution of the input field and neglecting the dispersion of the waveguide, the nonlinear wave mixing strength ($NM$) experienced by the input field $E$ can be written as (detailed derivation in Supporting Information [25]): \begin{equation} \label{eq:useless} \begin{array}{l} NM = \exp \left( {iG\cos (bt) + \frac{{iD}}{b}\sin (bt)} \right)\\\\ \;\;\;\;\;\;\;\; \;\;\; \; \times \left( {1 + P\cos (bt)} \right) \times \left( {1 + \frac{A}{b}\sin (bt)} \right) \end{array} \end{equation} Here $b={\omega_1} - {\omega_2}$ is the frequency detuning between the input lasers, $P=-L{A_1}{A_2}{\beta _{TPA}}{(2{A_0})^{-1}}$, $G=L{A_1}{A_2}{\gamma}$, $A = D\sigma {\delta ^{-1}}$, $D=-3LA_1^3{A_2}\delta {\beta_{TPA}}{(4{A_0}^2h{v_0}) ^{-1}}$, and $L$ is the waveguide length. Herein $G$, $P$, $D$, $A$ represent the effects of Kerr, TPA, FCD, and FCA, respectively. In our study we consider that TPA, FCA, and FCD respond instantaneously to the beating oscillation corresponding to the input laser detunings tested in the experiments ($b/2\pi<$1 THz) [26]. Importantly, Eq. (3) shows that the Kerr and FCD effects cause nonlinear phase modulations that have a $\pi/2$ phase offset with respect to each other (the first and second terms on the right-hand-side of Eq. (3) respectively), while TPA and FCA cause nonlinear intensity modulations that also have a $\pi/2$ phase offset (the third and fourth terms on the right-hand-side of Eq. (3) respectively). Furthermore, Eq. (3) can be re-written as: \begin{equation} \label{eq:useless} NM = {\rm{exp}}({iH\sin(bt+\theta)})\times({1+M{\rm{cos}}(bt+\psi )}) \end{equation} Therein, $H=\sqrt{(D/b)^2+G^2}$, $M=\sqrt{(A/b)^2+P^2}$, $\theta=\arctan(Gb/D)$, $\pi/2\leqslant\theta<\pi$, and $\psi=\arctan(-A/(Pb))$, $\pi/2\leqslant\psi<\pi$. The values of $\theta$ and $\psi$ are determined by the signs of $G$, $P$, $D$ and $A$. 
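The qualitative content of Eq. (3) can be checked numerically by projecting the modulation factor $NM(t)$ onto its $\pm b$ Fourier components over one beat period. The parameter values in the example are illustrative dimensionless magnitudes (not fitted to the device), chosen only to expose the trends:

```python
import numpy as np

def sideband_powers(G, P, D, A, b, npts=4096):
    """|c_{+b}|^2 and |c_{-b}|^2 of the modulation factor NM(t) in Eq. (3),
    computed by discrete Fourier projection over one beat period 2*pi/b."""
    t = np.linspace(0.0, 2 * np.pi / b, npts, endpoint=False)
    nm = (np.exp(1j * (G * np.cos(b * t) + (D / b) * np.sin(b * t)))
          * (1.0 + P * np.cos(b * t))
          * (1.0 + (A / b) * np.sin(b * t)))
    c_plus = np.mean(nm * np.exp(-1j * b * t))
    c_minus = np.mean(nm * np.exp(+1j * b * t))
    return abs(c_plus)**2, abs(c_minus)**2
```

With the Kerr/TPA terms ($G$, $P$) fixed and the free-carrier terms entering as $D/b$ and $A/b$, the sidebands grow as $b$ decreases, and the mixed phase offsets make $|c_{+b}|^2 \neq |c_{-b}|^2$, consistent with the asymmetry of Eq. (6)-(7).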
Using the Bessel expansion, we thus arrive at: \begin{equation} \label{eq:useless} \begin{array}{l} NM = \sum\limits_{n = - 1}^1 {{J_n}\left( H \right){e^{inbt + in\theta }}} \\\\ \;\;\;\;\;\;\;\;\;\; \;\; \; \times \left( {1 + \frac{1}{2}M\left( {{e^{ibt + i\psi }} + {e^{ - ibt - i\psi }}} \right)} \right) \end{array} \end{equation} In Eq. (5), $J_n$ is the $n$th order Bessel function. Evaluating Eq. (5) at the frequency detunings $\pm b$, we finally obtain [25]: \begin{equation} \label{eq:useless} N{M_b} = \;\frac{{\rm{1}}}{{\rm{2}}}M{J_0}{e^{i\psi }} + {J_1}{e^{i\theta }} \end{equation} \begin{equation} \label{eq:useless} N{M_{ - b}} = \;\frac{{\rm{1}}}{{\rm{2}}}M{J_0}{e^{ - i\psi }} - {J_1}{e^{ - i\theta }} \end{equation} First, as seen in Eq. (5) and the expressions of $H$ and $M$, the FC-SWM components induced by FCD and FCA have an inverse dependence on the input laser detuning ($1/b$), while FWM from Kerr and TPA is detuning-independent. Second, the effective fifth-order FC-SWM components are proportional to $A_1^3{A_2}$ (seen in the expressions of $D$ and $A$), while the third-order FWM components are only proportional to $A_1{A_2}$ (seen in the expressions of $G$ and $P$), implying that FC-SWM has a higher pump-power sensitivity than FWM. Third, comparing Eq. (6) and (7), under the combined effects of Kerr, TPA, FCA and FCD in silicon, the generated overall wave mixing sideband power (${\left| {N{M_{ \pm b}}} \right|^2}$) can be asymmetric for positive and negative input laser detunings, even when the waveguide dispersion is neglected. The exact extent of the asymmetry depends on the parameters of the measured waveguide and input light fields. We next present detailed FC-SWM and FWM measurements to demonstrate and validate these theoretical predictions. \begin{figure*}[t] \sidecaption \includegraphics*[width=0.88\textwidth]{fig1} \caption{Six-wave mixing spectra and nanowire characteristics. 
(a) The origin of Kerr FWM and FC-SWM in silicon nanowire waveguide with two input laser frequencies. (b) Scanning electron micrographs of the measured nanowire (upper panel) and simulated mode profile of the fundamental TE$_{11}$ mode using a finite-difference time-domain based mode-solver (lower panel). (c) Nanowire dispersion of the TE$_{11}$ mode (left $y$-axis) and the corresponding linear phase mismatch (right $y$-axis). (d) Measured wave mixing spectra with ± 0.28 THz detuning, and the generated sideband idler powers at smaller (s1) and bigger (s2) frequencies. (e) Numerically simulated wave mixing spectra using the NLSE model with experimental parameters, for the case of solely FWM (lower) and Drude FC-SWM plus FWM (upper). The spectral power discrepancies between numerical and experimental results arise because the experimental pulse powers were averaged over the 50 MHz pulse repetition rate.} \label{fig-2} \end{figure*} \section{Experimental results and discussion} \label{ssec:preamble} We study a 0.3-mm long rib silicon nanowire fabricated on a silicon-on-insulator wafer, with the scanning electron micrograph shown in Fig. 1(b). Its numerically estimated mode profile and the corresponding dispersion of the fundamental TE$_{11}$ mode are shown in Figs. 1(b) and 1(c). The measured nanowire waveguide has a linear loss $\alpha_l=2$ dB/cm, with an effective mode area $A_0=1.3\times10^{-13}$ m$^2$, a degenerate TPA coefficient $\beta_{TPA}=9\times10^{-12}$ m/W, a FCA coefficient $\sigma=1.45\times10^{-21}$ m$^2$, a FCD coefficient $\delta=1.0\times10^{20}$ m$^2$, and a free-carrier recombination lifetime $\tau_c=500$ ps [14, 21-22]. The two incident drive lasers consist of a 50 MHz repetition rate, 100 ps pulsewidth pump field (amplitude $A_1$ and central angular frequency $\omega_1$) and a continuous-wave (c.w.) signal field (amplitude $A_2$ and angular frequency $\omega_2$). 
The pump pulse train, with maximum intra-waveguide peak powers of 11.7 W, generates high free-carrier densities on the order of $10^{19}$ cm$^{-3}$. The 50 MHz pulse repetition rate allows sufficient time (more than $40\tau_c$) for complete free-carrier recombination relaxation so that no inter-pulse interference occurs. Moreover, with the small waveguide dispersion and short waveguide length, the dispersion induced linear phase mismatch $\triangle\phi$ is negligibly small within the examined 1550 nm to 1562 nm wavelength range: the maximum value of $\triangle\phi\times L$ is only $0.0037\pi$, as shown in Fig. 1(c), which does not have an appreciable impact on the wave mixing process [25, 27-29]. Fig. 1(d) shows two examples of the wave mixing spectra generated in the silicon nanowire waveguide, with input laser detunings equal to $\pm0.28$ THz; the generated sidebands are denoted as s1 and s2, respectively. Here the input pulse peak power is 11.7 W, and the c.w. power is 0.4 mW. Significantly, we observe that the output pulse spectra have an apparent FCD-induced spectral blue-shift ($\sim$0.03 THz) [20-24], and the output c.w. spectra exhibit the feature of FCD-induced cross-phase modulation from the pulse [28]. Consequently, the generated wave mixing idler spectra show complex and broadened structures. All this evidence indicates that the measured wave mixing processes take place in a regime with strong nonlinear free-carrier dynamics. To confirm this, the upper panel of Fig. 1(e) shows the numerically simulated wave mixing spectrum using the NLSE model given by Eq. (1-2), which illustrates remarkable agreement with the measurements without fitting. Meanwhile, we note that when the free-carrier dynamics are eliminated from the model, as shown in the lower panel of Fig. 1(e), the modeled output spectra lose all the salient features (lineshape broadening, asymmetry, and spectral dips) observed in our measurements. 
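Our NLSE simulations use the standard split-step approach; a compressed first-order (linear step, then nonlinear step with carrier integration) single-field sketch is shown below. It reuses the device parameters quoted above where available, but the dispersion values, the FCD coefficient, and the grid settings here are illustrative placeholders, not the values used for Fig. 1(e):

```python
import numpy as np

BETA2, BETA3 = -1.0e-25, 0.0            # s^2/m, s^3/m (illustrative)
ALPHA_L = 46.0                          # 2 dB/cm expressed in 1/m
GAMMA = 300.0                           # 1/(W m), effective Kerr coefficient (illustrative)
BETA_TPA, A0 = 9e-12, 1.3e-13           # m/W, m^2
SIGMA_FCA, DELTA_FCD = 1.45e-21, 1e-27  # m^2; FCD value is an illustrative placeholder
TAU_C, H_PLANCK, NU0 = 500e-12, 6.626e-34, 1.93e14

def free_carriers(E, dt):
    """Euler integration of Eq. (2) along the local time axis."""
    gen = BETA_TPA / (2 * A0**2 * H_PLANCK * NU0) * np.abs(E)**4
    nc = np.zeros(E.size)
    for k in range(E.size - 1):
        nc[k + 1] = nc[k] + dt * (gen[k] - nc[k] / TAU_C)
    return nc

def propagate(E, t, L=3e-4, nz=50):
    """Split-step integration of Eq. (1)-(2) over waveguide length L."""
    dt, dz = t[1] - t[0], L / nz
    w = 2 * np.pi * np.fft.fftfreq(t.size, dt)
    lin = np.exp(dz * (0.5j * BETA2 * w**2 - (1j / 6) * BETA3 * w**3
                       - 0.5 * ALPHA_L))
    for _ in range(nz):
        E = np.fft.ifft(lin * np.fft.fft(E))       # dispersion + linear loss
        nc = free_carriers(E, dt)                  # Eq. (2)
        E = E * np.exp(dz * ((1j * GAMMA - BETA_TPA / (2 * A0)) * np.abs(E)**2
                             - (1j * DELTA_FCD + SIGMA_FCA) * nc))
    return E
```

Since TPA, FCA, and linear loss are purely absorptive while Kerr and FCD only rotate the phase, the pulse energy must decrease monotonically along the waveguide, which gives a quick sanity check of the scheme.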
\subsection{Inverse dependence of FC-SWM on input pump-signal laser detuning.} \label{ssec:preamble} From the expressions of parameters $H$ and $M$ in Eq. (4-5), it is noted that the FC-SWM sideband power depends inversely on the input laser detuning ($1/b$). Intrinsically, the $1/b$ factor originates from the temporal integration of $|E|^4$ in the rate equation of the carrier density $N_c$; that is, the slower beating oscillation between the input lasers gives rise to larger carrier density fluctuations, and hence potentially stronger six-wave mixing [14]. To demonstrate such dynamics, we scan the input laser detuning from $-$0.7 THz to 0.7 THz by changing the input c.w. frequency and record the wave mixing sideband powers at each scan point, as shown in Fig. 2(a). Intriguingly, we observe that for the 11.7 W input pulse, as the input laser detuning changes from $\pm0.3$ THz to zero, the generated wave mixing sideband power exhibits an increase of about 3.0 dB, clearly verifying the $1/b$-dependence prediction. To support the measurements, Fig. 2(b) shows the corresponding numerical simulation via the NLSE model Eq. (1-2), as well as the analytical calculation via Eq. (5), both in remarkable agreement with our measurements. For comparison, in Fig. 2(b) we plot the theoretical FWM sideband powers induced solely by the Kerr effect and TPA, which show no detuning dependence, confirming that the waveguide dispersion is negligible compared to the intrinsic $1/b$-dependence of FC-SWM [25]. Particularly, it is seen from Fig. 2(b) that, as the laser detuning approaches zero, FC-SWM becomes dominant over FWM (e.g., at 0.1 THz detuning, the FC-SWM sideband is $-$24.0 dBm, about 4.0 dB larger than FWM), strongly supporting the existence of FC-SWM in silicon. \begin{figure}[t] \includegraphics*[width=\linewidth]{fig2} \caption{Direct detuning-dependence of the Drude FC-SWM. (a) Experimentally measured wave mixing sideband powers as a function of input laser detuning.
The gap at small detunings arises because, in this region, the generated sideband components are covered by the pulse spectrum. The laser scanning step is 25 GHz. The inset shows data recorded more densely, with a 2.5 GHz step, within the narrower detuning window from $-$0.3 THz to 0.3 THz. (b) Numerical (open-circle) and analytical (solid-line) wave mixing sideband powers calculated as a function of laser detuning. The green circles and line are for FC-SWM and the red circles and line are for pure FWM (i.e., without the free-carrier contributions). To compare theory with experiment, the calculated sideband powers are down-shifted by 23 dB to compensate for the 50 MHz pulse repetition rate and power attenuation in the measurement.} \label{fig-1} \end{figure} \subsection{Strong pump power dependence of FC-SWM.} \label{ssec:preamble} From Eq. (5) and the expressions of $P$, $G$, $D$, $A$, the FWM strength caused by the third-order Kerr effect and TPA is proportional to $A_1A_2$, while the FC-SWM strength induced by FCD/FCA is proportional to $A_1^3A_2$, rooted in the fifth-order nonlinear nature of FC-SWM. To observe this higher pump power sensitivity of FC-SWM relative to FWM, Fig. 3(a-b) plots the experimental and theoretical power transfer functions between the input pulse peak power and the generated wave mixing sideband power, under three different frequency detuning values: 0.025 THz, 0.25 THz, and 1.25 THz. Particularly, with the $1/b$ scaling of FC-SWM demonstrated above, as the detuning decreases from 1.25 THz to 0.025 THz, the contribution of FC-SWM to the overall wave mixing process is substantially enhanced. Simultaneously, the slope (linearly fitted) of the power transfer function increases from 0.21 to 1.05, clearly illustrating the stronger pump power dependence of FC-SWM over FWM. Ideally the FC-SWM (FWM) produces a power transfer function with slope equal to 4 (2) [14, 15], but with the concurrent existence of TPA and FCA, the pulse and c.w.
laser fields are heavily attenuated in the nanowire waveguide [21-22], and as a result the measured and calculated power transfer functions are much less steep, as shown in Fig. 3(a-b). Even so, the power transfer function dominated by FC-SWM shows a $5\times$ slope improvement over solely FWM, further confirming the fifth-order nature of FC-SWM. \begin{figure}[b] \includegraphics*[width=\linewidth]{fig3} \caption{Strong dependence of FC-SWM on pump power. (a) Experimentally measured power transfer function between the input pulse and generated wave mixing sideband, under three different detunings: 0.025 THz (black); 0.25 THz (red); and 1.25 THz (blue). The solid lines are linear fits of the experimental data, with the fitted slope coefficients listed below each corresponding line. (b) Numerically simulated power transfer functions corresponding to the results in panel (a).} \label{fig-1} \end{figure} \subsection{Phase-sensitive interaction between FC-SWM and FWM.} \label{ssec:preamble} It is observed from Fig. 2(a-b) that, with the utilized experimental parameters, FC-SWM and FWM coexist and have comparable magnitudes, which allows us to explore the interplay between six- and four-wave mixing in silicon. As illustrated in Eq. (6) and (7), the combined effects of FC-SWM and FWM produce an overall wave mixing strength that is asymmetric between positive and negative input laser detunings. Since the waveguide dispersion is neglected in the derivation, such unconventional asymmetry could arise from the interplay between FC-SWM and FWM [30-31]. The same dynamics is observed experimentally in Fig. 1(d): for the two opposite detuning values $\pm$0.28 THz, the generated powers of sidebands s1 and s2 differ appreciably, by 1.6 dB. More generally, as observed in Fig. 2(a), the recorded sideband powers exhibit an apparently asymmetric lineshape as the wavelength detunes from $-$0.70 THz to 0.70 THz.
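The symmetry pattern summarized in Table 1 can be checked numerically by plugging placeholder values of $M$, $J_0$, $J_1$, $\theta$, $\psi$ (the values below are arbitrary; only the structure of each table entry matters) into the coefficient pairs and comparing the magnitudes at $\pm b$:

```python
import numpy as np

# placeholder magnitudes; only the structure of each Table 1 entry matters
M, J0, J1, theta, psi = 0.8, 0.6, 0.5, 0.7, 0.9

combos = {
    "Kerr+TPA": (-M * J0 / 2 + 1j * J1,      -M * J0 / 2 + 1j * J1),
    "TPA+FCA":  (M * np.exp(1j * psi / 2),    M * np.exp(-1j * psi / 2)),
    "Kerr+FCD": (-J1 * np.exp(-1j * theta),   J1 * np.exp(1j * theta)),
    "FCA+FCD":  (-1j * M * J0 / 2 + J1,       1j * M * J0 / 2 - J1),
    "Kerr+FCA": (-1j * M * J0 / 2 + 1j * J1,  1j * M * J0 / 2 + 1j * J1),
    "TPA+FCD":  (-M * J0 / 2 + J1,            -M * J0 / 2 - J1),
}
# True where |NM_{-b}| != |NM_b|, i.e., the +/- b symmetry is broken
asym = {k: abs(abs(a) - abs(b)) > 1e-12 for k, (a, b) in combos.items()}
```

Only the Kerr+FCA and TPA+FCD pairings yield unequal magnitudes, reproducing the lineshape asymmetries discussed in this section.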
\begin{table}[t] \centering \caption{Wave mixing coefficients induced by different constituents of nonlinear processes.} \label{tlab} \begin{tabular}{@{}lcc@{}} \toprule \textbf{Effects} & $NM_{-b}$ & $NM_{b}$\\ \midrule Kerr+TPA & $-MJ_0/2+iJ_1$ & $-MJ_0/2+iJ_1$\\ TPA+FCA & $Me^{i\psi /2}$ & $Me^{-i\psi /2}$\\ Kerr+FCD & $-J_1e^{-i\theta}$ & $J_1e^{i\theta}$\\ FCA+FCD & $-iMJ_0/2+J_1$ & $iMJ_0/2-J_1$\\ Kerr+FCA & $-iMJ_0/2+iJ_1$ & $iMJ_0/2+iJ_1$\\ TPA+FCD & $-MJ_0/2+J_1$ & $-MJ_0/2-J_1$\\ \bottomrule \end{tabular} \end{table} \begin{figure}[t] \includegraphics*[width=\linewidth]{fig4} \caption{Comparative detuning lineshape symmetries and asymmetries for the different nonlinear constituents noted in Table 1. The open circles are from numerical NLSE simulation and the solid lines are from analytical calculations, based on the parameters used in Fig. 2 at 11.7 W. (a) Solely Kerr and TPA without free-carrier dynamics, with negligible detuning dependence. (b) Kerr and FCD constituents, with symmetric lineshape. (c) TPA and FCA constituents, with symmetric lineshape. (d) FCD and FCA constituents, with symmetric lineshape. (e) Kerr and FCA constituents, with lineshape asymmetry. (f) TPA and FCD constituents, with lineshape asymmetry. Note that the numerical results for the sideband powers close to zero detuning are excluded due to the overlap with the pump spectral linewidth.} \label{fig-1} \end{figure} To elucidate such unconventional sideband evolutions and probe the interaction between different nonlinear wave mixing processes, we tailor Eq. (6-7) to different combinations of nonlinear processes. As summarized in Table 1, for the TPA+FCD and Kerr+FCA combinations we indeed obtain unequal wave mixing intensities at $\pm b$ detuning, while all other combinations generate symmetric sideband powers. Importantly, comparing Table 1 and Eq.
(3), we conclude that the breaking of wave mixing symmetry can only originate from the interplay between the nonlinear amplitude modulations and the nonlinear phase modulations that have a phase offset of $\pi/2$ (i.e., between TPA and FCD, and between Kerr and FCA). Fig. 4 shows the analytically calculated wave mixing sideband powers as a function of detuning, and the features predicted in Table 1 are clearly illustrated. To further confirm our analysis, Fig. 4 also presents the numerically simulated sideband powers under the different nonlinear effects in Eq. (1-2), which agree very well with the analytical results. \begin{figure}[t] \includegraphics*[width=\linewidth]{fig5} \caption{Wave mixing evolution via the phase-sensitive superposition of FC-SWM and FWM. (a) Experimentally measured wave mixing sideband powers as a function of input laser detuning, under two different input pulse peak powers: 11.7 W (green) and 2.1 W (cyan). The laser scanning resolution is 2.5 GHz. (b) Numerically (open circles) and analytically (solid lines) calculated wave mixing sideband power evolution, corresponding to an 11.7 W and a 2.1 W pulse. (c) Calculated false-color image of overall wave mixing sideband powers while sweeping the input pulse power from 1 W to 12 W. The sideband powers are normalized to the maximum value for each input pulse power.} \label{fig-1} \end{figure} Moreover, we find that the phase-sensitive interaction between FC-SWM and FWM significantly modifies the overall sideband generation and opens up new possibilities to manipulate the multi-wave energy exchange in silicon. As indicated in Eq. (6-7), the contributions of FC-SWM and FWM are intertwined within the functions $H$, $M$, $J_0$, $J_1$, which have different monotonicities. Hence a change in each effect can modify the overall sideband power evolution in a nontrivial fashion. To demonstrate this, Fig. 5(a) and 5(b) show the measured and calculated sideband power evolution for two input pulse peak powers.
Particularly, for the 11.7 W pulses, FC-SWM dominates FWM such that the sideband evolution approximately follows the $1/b$-dependence featured by FC-SWM, as discussed above for Fig. 2(a-b). On the other hand, when the input pulse power is decreased to 2.1 W, FC-SWM degrades more strongly owing to its higher dependence on the pump power. Consequently, at this power level, FWM competes with FC-SWM, and the sideband power becomes approximately independent of the detuning $b$, as shown in Fig. 5(a) and 5(b). More generally, Fig. 5(c) shows the calculated sideband power evolution while sweeping the input pulse power. We observe that as the pulse power increases from 1 W to 12 W, the sideband evolution changes significantly, providing abundant and readily accessible power transfer functions. Such coupled and controllable sideband evolutions can be applied to all-optical signal processing tasks such as all-optical signal regeneration and frequency conversion. \section{Conclusion} \label{ssec:preamble} Here we report the first demonstration and analysis of Drude free-carrier plasma induced six-wave mixing in silicon nanowire waveguides. Unique features of FC-SWM have been experimentally observed and discussed in depth. First, the non-dispersion-induced inverse dependence of FC-SWM frequency conversion on the input laser detuning is observed, with the FC-SWM sideband power rapidly increasing by 3.0 dB within a 0.3 THz detuning window. Second, the strong dependence of FC-SWM on the input pump power is illustrated. Third, the phase-sensitive interaction between FC-SWM and FWM is demonstrated for the first time, giving rise to the asymmetric lineshape of the sideband power evolution as a function of laser detuning.
These observations not only advance our understanding of free-carrier nonlinear dynamics in the multiple-wave regime, but also open up new possibilities for applications based on wave mixing, such as on-chip spectral broadening and all-optical signal processing. Finally, the processes and phenomena demonstrated here can potentially be observed in other physical systems involving plasma nonlinearity, such as gas photoionization in hollow-core photonic crystal fibers [32-34] and light-plasma interactions in semiconductor photonic crystals [35]. \begin{acknowledgement} This work was funded by the China 863 Grant 2013AA014402, UESTC young faculty award ZYGX2015-KYQD051, NSF CBET-1438147 and DMR-1611598, Air Force Office of Scientific Research Young Investigator Award (S. W. Huang) FA9550-15-1-0081, and Office of Naval Research N00014-14-1-0041. The authors acknowledge discussions with Xingyu Zhou, Feng Wen, Baojian Wu, and Jinghui Yang. \end{acknowledgement}
\section{Introduction} \label{sec::introduction} Given a video sequence, the task of visual tracking is to locate an object instance whose state is specified at the first frame. In general, tracking models can be grouped into three categories, i.e., generative, discriminative and hybrid. Generative models aim to locate the image region that is most similar to the target appearance, and possess good generalization when only a limited number of training samples are available~\cite{Ng01nips}. Discriminative models, in contrast, train binary classifiers to distinguish the target from the background, and achieve excellent performance when the size of the training set is sufficiently large~\cite{Lasserrre06cvpr}. Hybrid models usually inherit the advantages of both generative and discriminative models~\cite{SCM12cvpr,Sui15iccv}. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{Comparison_Results} \\ \caption{A comparison of our approach FOF with the baseline HCF~\cite{Ma15iccv} and the state-of-the-art MDNet~\cite{MDNet15cvpr} on three example sequences. Our FOF tracker successfully tackles the challenges of motion blur, low resolution, partial occlusion, appearance variation and background clutter. }\label{fig::comparison_results} \end{figure} Recent studies on tracking are dominated by deep convolutional neural networks (CNNs)~\cite{Ma15iccv,C-COT16eccv,MDNet15cvpr,ECO17cvpr,StrutSiam18eccv}, and some of them resort to pre-trained CNN models for correlation filtering. Correlation filter based tracking models exploit circular shifts to generate thousands of translated training samples, and perform training and detection in the Fourier domain by using the circulant properties of the translated sample features, which reduces both storage and computation by several orders of magnitude~\cite{CSK15pami}.
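The computational saving from the circulant structure can be seen in a few lines of NumPy: the responses of a filter to all cyclic shifts of a sample, an $O(n^2)$ computation done naively, reduce to element-wise products in the Fourier domain.

```python
import numpy as np

n = 64
x = np.random.default_rng(0).standard_normal(n)   # base sample
w = np.random.default_rng(1).standard_normal(n)   # filter

# responses to all n cyclic shifts, brute force: O(n^2)
brute = np.array([np.dot(w, np.roll(x, s)) for s in range(n)])

# identical responses via the circulant/FFT property: O(n log n)
fast = np.real(np.fft.ifft(np.fft.fft(w) * np.conj(np.fft.fft(x))))
```

The two arrays agree to machine precision; in 2D (with `fft2`/`ifft2`) the same identity underlies the detection step of CF trackers.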
Although achieving appealing results, most of these correlation filter based methods directly use feature maps of training samples (e.g., CNN features), and their robustness might be degraded by the redundancy, the noise, and the limited discriminative ability of deep features for certain instances. For example, the output of the \emph{conv5-4} convolutional layer in VGGNet-19~\cite{vgg15iclr} trained on ImageNet is widely used for visual tracking and has 512 dimensions, most of whose elements are zero and scattered~\cite{ECO17cvpr}. In addition, the above feature may be sufficient to represent generic targets, but its effectiveness in terms of tracking is limited due to the fundamental inconsistency between classification and tracking problems, i.e., predicting object class labels versus locating targets of arbitrary classes~\cite{MDNet15cvpr}. The bag-of-features (BoF) framework is widely used in various applications, such as image classification and retrieval~\cite{LSC13pami,Huang14pami}. It encodes each input feature on a constructed codebook into a coding vector, which not only benefits from the representation ability of the original feature, but is also more compact and discriminative~\cite{SCM12cvpr,Wang15tip}. BoF is used in visual tracking to reconstruct the input features on positive and negative dictionaries, and then employ the reconstruction coefficients (i.e., coding features)~\cite{SCM12cvpr,Liu11cvpr,DSSM14tip,Zhang17tnnls} to define the likelihood scores of candidates in the Bayesian or particle filtering framework. For these methods, however, there are two major issues that have not been addressed yet. First, the features used are raw pixel intensities, which are too weak and significantly limit the tracking performance. In addition, these methods usually require solving the $\ell_1$-minimization problem as many times as the number of candidates.
Considering the time-sensitive nature of visual tracking, it is therefore impractical to use very high dimensional deep features in this framework. Second, it is difficult for the sparse sampling strategy to balance the trade-off between tracking accuracy and computational burden. This paper takes the advantages of both discriminative and generative models into account for visual tracking, and handles all of the above-mentioned problems of correlation filter and BoF based tracking algorithms. In particular, we elaborately design a novel hybrid model that enhances the discriminative capability of the correlation filter while exploiting the good generalization of feature coding to mitigate the redundancy and noise effects of deep features, and thus achieves clearly improved tracking performance with considerable efficiency. Specifically, we encode input features as new representations in a robust way using the Laplacian coding algorithm that exploits the dependence among the local features, and learn them together with the correlation filter in a single unified optimization framework. In addition to the \emph{compact} and \emph{discriminative} properties inherited from original feature coding methods, our learnt feature representations are \emph{target-oriented} due to the proposed joint optimization scheme, and thus significantly augment the discriminative capability of the correlation filter. Some tracking examples are presented in Fig.~\ref{fig::comparison_results} to show the effectiveness of the proposed approach against the baseline and state-of-the-art trackers. To the best of our knowledge, this is probably the first work to improve the correlation filter framework from the perspective of BoF, and we believe it is a promising direction for the visual tracking task. We summarize the major contributions of this work as follows. \begin{itemize} \item We propose an effective approach to alleviate the effects of feature redundancy and noise in visual tracking.
Extensive experiments show that the proposed method clearly outperforms baseline trackers with a modest impact on the frame rate, and performs comparably with state-of-the-art trackers on three benchmark datasets. Source code and experimental results will be made available online for reproducible research. \item We present a novel correlation filter model that augments the discriminative ability by learning a compact, discriminative and target-oriented feature representation. The proposed model jointly optimizes the correlation filter and the feature representations in a unified framework. \item We develop an efficient algorithm to solve the associated optimization problem, where each sub-problem is convex and convergence is guaranteed. Moreover, we analyze the computational complexity of the proposed algorithm in detail. Empirically, our algorithm converges within very few iterations on real image data, and thus imposes only a slight computational burden on the tracking speed. \end{itemize} \section{Related Work} \label{sec::related_work} There are many kinds of trackers~\cite{Li16tip1,Li18eccv,Li18pami,Li19pr}, and here we only introduce the methods most relevant to ours. \subsection{Correlation Filter for Tracking} The early work on correlation filters (CFs) for tracking is MOSSE~\cite{MOSSE10cvpr}, which uses a set of training samples to train CFs in the frequency domain. Henriques~\emph{et al.}~\cite{CSK15pami} extend CFs with the kernel trick and multi-channel features. Based on these works, several notable improvements have been proposed. For example, scale-adaptive schemes are incorporated for handling {\bf scale variation}~\cite{DSST17pami,arv17iccvw}, and part-based models are proposed for addressing {\bf partial occlusion}~\cite{liu15cvpr,liu16cvpr}.
To utilize the complementary benefits of different features, {\bf feature integration}~\cite{Ma15iccv,bertinetto2016staple,C-COT16eccv,ECO17cvpr} is fully investigated for improving tracking performance. To mitigate background effects, {\bf background context} is used to enhance the discriminative ability of correlation filters, i.e., suppressing background information in filter learning~\cite{ba17iccv,ca17cvpr}. In addition, various spatially regularized models are proposed to handle {\bf boundary effects}~\cite{danelljan2015learning,dcf-csr16cvpr} caused by periodic repetitions of circularly shifted samples. \subsection{Feature Coding for Tracking} Feature coding (FC) is a core component of the BoF framework~\cite{LSC13pami,Huang14pami}, and has been widely applied in different fields of computer vision, including the tracking task. Inspired by the properties of the receptive fields of simple cells in the visual cortex, FC uses the representation coefficients as features to describe the appearance of target objects~\cite{Zhang13pr}. Liu et al.~\cite{Liu11cvpr} propose a local sparse appearance model to learn a target-based sparse coding histogram, and then employ the mean-shift algorithm to perform tracking. Zhong et al.~\cite{SCM12cvpr} propose a sparsity-based collaborative model in the Bayesian filtering framework, in which a sparsity-based generative model is developed to construct histogram features for target objects, and spatial information and occlusion handling are incorporated. A biologically inspired method is proposed by Zhang et al.~\cite{Zhang17tnnls} to model the target appearance via a coding layer, and tracking is carried out in a particle filter framework. Within the same tracking framework, other coding algorithms are employed to design effective trackers, such as locality-constrained linear coding~\cite{Ding18iet} and sparse and local linear coding~\cite{Wang15tip}.
Different from them, we jointly learn the feature code and the correlation filter in a unified optimization framework so as to yield a more compact, discriminative and target-oriented feature representation. \section{Dual Correlation Filter} \label{sec::preliminaries} In this section, we give a brief description of the correlation filter (CF) and its dual form (DCF), which are preliminaries for our algorithm. The key idea of CF is that thousands of negative samples have a circulant structure, whose computations can be carried out in the Fourier domain at high speed~\cite{CSK15pami}. Given an image patch ${\bf x}$ with the size of $M\times N$, CF trackers use all circular shifts, denoted as ${\bf x}_{m,n}$, to train a correlation filter ${\bf w}$, where $(m,n)\in\{0,1,...,M-1\}\times\{0,1,...,N-1\}$. The labels $y_{m,n}$ of these shifted samples are generated by a Gaussian function, and the goal is to find the optimal weights ${\bf w}$ in the following program: \begin{equation}\label{eq::cf} \begin{aligned} &\arg\min_{\bf w}~\sum_{m,n}\big({\bf w}^\top({\bf c}\odot{\bf x}_{m,n})-y_{m,n}\big)^2+\lambda\lVert{\bf w}\rVert^2_2, \end{aligned} \end{equation} where $\lVert\cdot\rVert_2^2$ denotes the squared $\ell_2$-norm of a vector, and $\lambda$ is a regularization parameter. $\odot$ indicates the element-wise product, and ${\bf c}$ is a cosine window used to suppress boundary effects of periodically shifted samples~\cite{CSK15pami}.
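The ridge regression in~\eqref{eq::cf} admits the well-known single-channel Fourier-domain solution $\hat{\bf w}=\hat{\bf x}\odot\hat{\bf y}/(\hat{\bf x}^*\odot\hat{\bf x}+\lambda)$ (the conjugation convention depends on how the circulant matrix is defined). A small NumPy check, using a 1D toy with the cosine window omitted and the data matrix built row-by-row from the cyclic shifts, confirms it against the direct linear-algebra solve:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 32, 1e-2
x = rng.standard_normal(n)                       # base sample (window omitted)
d = np.minimum(np.arange(n), n - np.arange(n))
y = np.exp(-0.5 * (d / 2.0) ** 2)                # Gaussian labels over cyclic shifts

# data matrix whose i-th row is the i-th cyclic shift of x
C = np.stack([np.roll(x, i) for i in range(n)])

# ridge regression solved directly: O(n^3)
w_direct = np.linalg.solve(C.T @ C + lam * np.eye(n), C.T @ y)

# same filter from the diagonalized (Fourier-domain) normal equations: O(n log n)
x_hat, y_hat = np.fft.fft(x), np.fft.fft(y)
w_fft = np.real(np.fft.ifft(x_hat * y_hat / (np.abs(x_hat) ** 2 + lam)))
```

Both routes yield the same filter, which is why CF trackers never form the $MN\times MN$ data matrix explicitly.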
To simplify the computation for multi-channel features, the dual form of~\eqref{eq::cf}, called the dual correlation filter (DCF), can equivalently be expressed as follows: \begin{equation}\label{eq::dcf} \begin{aligned} &\arg\min_{\bf u}~\frac{1}{4\lambda}{\bf u}^\top S({\bf x}_{\bf c})S({\bf x}_{\bf c})^\top{\bf u}+\frac{1}{4}{\bf u}^\top{\bf u}-{\bf u}^\top{\bf y}, \end{aligned} \end{equation} where ${\bf u}$ is the dual variable of ${\bf w}$, $S({\bf x})$ denotes a circulant matrix whose base vector is ${\bf x}$, and ${\bf c}\odot{\bf x}$ is denoted as ${\bf x}_{\bf c}$ for simplicity. By using the fast Fourier transform (FFT) to diagonalize the circulant matrix, the solution of~\eqref{eq::dcf} is: \begin{equation}\label{eq::solution-dcf} \begin{aligned} &\hat{\bf u}=\frac{\hat{\bf y}}{\frac{1}{2\lambda}\hat{\bf x}^*_{\bf c}\odot\hat{\bf x}_{\bf c}+\frac{1}{2}}, \end{aligned} \end{equation} where $\hat{\bf u}$ denotes the discrete Fourier transform (DFT) of ${\bf u}$, i.e., $\hat{\bf u}=\mathcal{F}({\bf u})$, and $\mathcal{F}(\cdot)$ represents the Fourier transform. ${\bf x}^*$ is the complex conjugate of ${\bf x}$. ${\bf u}$ can be obtained via ${\bf u}=\mathcal{F}^{-1}(\hat{\bf u})$, where $\mathcal{F}^{-1}(\cdot)$ indicates the inverse Fourier transform. If ${\bf x}$ has $D$ channels, by simply summing over them in the Fourier domain~\cite{CSK15pami}, the solution of~\eqref{eq::dcf} can be written as: \begin{equation}\label{eq::solution-dcf1} \begin{aligned} &\hat{\bf u}=\frac{\hat{\bf y}}{\frac{1}{2\lambda}\sum_d\hat{\bf x}^{d*}_{\bf c}\odot\hat{\bf x}^d_{\bf c}+\frac{1}{2}}, \end{aligned} \end{equation} where $d\in\{1,2,...,D\}$ denotes the channel index. \section{Filter Optimization Driven Feature Coding} \label{sec::joint-learning} In this section, we describe the proposed model in detail, and present the associated optimization algorithm.
\subsection{Model Formulation} As discussed above, we aim at using a feature coding-based object representation to enhance the discriminative capacity of the dual correlation filter (DCF) while utilizing the filter optimization to guide feature learning. \begin{figure}[t] \centering \includegraphics[width=0.7\columnwidth]{illustration_feature_maps} \\ \caption{Illustration of feature maps from the output (denoted by ${\bf X}$) of the \emph{conv5-4} layer in VGGNet-19~\cite{vgg15iclr} trained on ImageNet and the learnt feature representation (denoted by ${\bf Z}$). (a) From left to right: search window of a target object (indicated by red bounding box), average feature map of ${\bf X}$, and average feature map of ${\bf Z}$, where the input image patch is from the \emph{tiger2} sequence in the OTB100 dataset~\cite{RGBbenchmark15pami}. (b) Three feature maps randomly selected from ${\bf X}$. (c) Three feature maps randomly selected from ${\bf Z}$. One can see that the learnt features are more compact, discriminative and target-oriented than those directly extracted from VGGNet-19.}\label{fig::illustration_feature_maps} \end{figure} A set of codewords needs to be generated first to compose a codebook ${\bf B}\in\mathcal{R}^{D\times k}$, where $D$ is the feature dimension, and $k$ is the number of codebook elements. We will discuss the details of the codebook later. We encode the input target patch feature ${\bf x}$ on ${\bf B}$, and integrate the encoded feature representation into the dual correlation filter (DCF). To this end, we rearrange the multi-channel feature vector ${\bf x}\in\mathcal{R}^{MN\times D}$ as a matrix ${\bf X}\in\mathcal{R}^{D\times MN}$, and represent it using the coefficients ${\bf Z}\in\mathcal{R}^{k\times MN}$ on ${\bf B}$: ${\bf X}={\bf B}{\bf Z}$. Notice that $k$ can be viewed as the channel number of ${\bf Z}$, and we use $k = 1$ to consider the single channel case for the convenience of our formulation.
When $k > 1$, the solution of ${\bf u}$ can be obtained by simply summing over all channels of ${\bf Z}$ in the Fourier domain, as discussed in Section~\ref{sec::preliminaries}. Instead of using the original feature ${\bf x}$, we employ the more compact and discriminative feature coding representation ${\bf Z}$~\cite{Huang14pami,Jegou12pami} to learn the DCF in~\eqref{eq::dcf}, and formulate it as follows: \begin{equation}\label{eq::combine} \begin{aligned} & \min_{{\bf Z},{\bf u}} \frac{1}{4\lambda}{\bf u}^\top S({\bf Z}_{\bf c})S({\bf Z}_{\bf c})^\top{\bf u}+\frac{1}{4}{\bf u}^\top{\bf u} -{\bf u}^\top{\bf y}\\ &+\Phi({\bf Z}),~s.t.~{\bf X}={\bf B}{\bf Z}, \end{aligned} \end{equation} where $\Phi({\bf Z})$ represents prior constraints on ${\bf Z}$. As seen from~\eqref{eq::combine}, in addition to the compact and discriminative properties inherited from the feature coding algorithm, the learnt feature ${\bf Z}$ is also target-oriented due to the joint learning scheme of ${\bf Z}$ and ${\bf u}$. In addition, if we use pre-trained CNN features, the model in~\eqref{eq::combine} can significantly reduce the feature dimension to remove redundancy and noise in learning the filter. For example, the output of the \emph{conv5-4} convolutional layer widely used in visual tracking has 512 dimensions, while the dimension $k$ of ${\bf Z}$ is set to 10 in this work. Although the dimension of the learnt feature (i.e., ${\bf Z}$) is much smaller than that of the \emph{conv5-4} convolutional feature (i.e., ${\bf X}$), ${\bf Z}$ is more discriminative and target-oriented than ${\bf X}$, as shown in Fig.~\ref{fig::illustration_feature_maps}. Moreover, we can observe that different feature maps from ${\bf Z}$ mainly focus on different parts of the target object while suppressing background parts, which can be explained by the fact that the proposed joint learning algorithm strengthens the discriminative power of the feature representations.
Therefore, the learnt features make the filter more robust to various challenges, such as appearance change and background clutter. Various prior constraints could be explored to regularize ${\bf Z}$ for better stability and quality of the feature coding, such as the Frobenius norm, low-rank and sparsity constraints, and we adopt a simple yet effective one for computational efficiency. Note that similar features may be encoded as totally different sparse codes, and such instability easily harms the robustness of the feature coding~\cite{LSC13pami} and thus might affect the tracking performance. Therefore, we put a Laplacian constraint on ${\bf Z}$ that preserves the locality and similarity information among local features to alleviate the instability of feature coding, and the final joint learning model is as follows: \begin{equation}\label{eq::objective} \begin{aligned} & \min_{{\bf Z},{\bf u}} \frac{1}{4\lambda}{\bf u}^\top S({\bf Z}_{\bf c})S({\bf Z}_{\bf c})^\top{\bf u}+\frac{1}{4}{\bf u}^\top{\bf u} -{\bf u}^\top{\bf y}\\ &+\gamma~tr({\bf ZLZ}^\top),~s.t.~{\bf X}={\bf B}{\bf Z}, \end{aligned} \end{equation} where $tr(\cdot)$ indicates the matrix trace, and $\gamma$ is the balance parameter. ${\bf L}={\bf F}-{\bf G}$ is the Laplacian matrix, where ${\bf F}$ is the degree matrix whose diagonal element $F_i=\sum_jG_{ij}$, and ${\bf G}$ is a binary matrix indicating the relationship between any two coding features, with $G_{ij}=1$ if ${\bf z}_i$ is among the $r$ nearest neighbors of ${\bf z}_j$ and $G_{ij}=0$ otherwise. \subsection{Optimization Algorithm} The model in~\eqref{eq::objective} is not jointly convex in ${\bf Z}$ and ${\bf u}$, but it is convex with respect to each of them when the other is fixed. The ADMM (alternating direction method of multipliers) algorithm~\cite{ADMM11} has been shown to be an efficient and effective solver of such problems. To apply ADMM to the above problem, we need to make the objective function separable.
Therefore, we introduce an auxiliary variable ${\bf p}$ to replace ${\bf Z}_{\bf c}$ in~\eqref{eq::objective}: \begin{equation}\label{eq::separable} \begin{aligned} & \min_{{\bf Z},{\bf u},{\bf p}} \frac{1}{4\lambda}{\bf u}^\top S({\bf p})S({\bf p})^\top{\bf u}+\frac{1}{4}{\bf u}^\top{\bf u} -{\bf u}^\top{\bf y}\\ &+\gamma~tr({\bf ZLZ}^\top),~s.t.~{\bf X}={\bf B}{\bf Z},{\bf p}={\bf Z}_{\bf c}. \end{aligned} \end{equation} The augmented Lagrangian function is: \begin{equation}\label{eq::lagrangian} \begin{aligned} & \mathcal{L}_{\{{\bf Z},{\bf u},{\bf p}\}} =\frac{1}{4\lambda}{\bf u}^\top S({\bf p})S({\bf p})^\top{\bf u}+\frac{1}{4}{\bf u}^\top{\bf u} -{\bf u}^\top{\bf y}\\ &+\gamma~tr({\bf ZLZ}^\top)+\langle{\bf Y}_1,{\bf X}-{\bf B}{\bf Z}\rangle+\frac{\mu}{2}\lVert{\bf X}-{\bf B}{\bf Z}\rVert_F^2\\ &+\langle{\bf y}_2,{\bf p}-{\bf Z}_{\bf c}\rangle+\frac{\mu}{2}\lVert{\bf p}-{\bf Z}_{\bf c}\rVert_2^2, \end{aligned} \end{equation} where ${\bf Y}_1$ and ${\bf y}_2$ are the Lagrange multipliers, and $\mu$ is the Lagrange parameter. The augmented Lagrangian function~\eqref{eq::lagrangian} can be iteratively minimized by ADMM, which sequentially solves the following sub-problems at each iteration: \begin{equation}\label{eq::Z} \begin{aligned} &\min_{{\bf Z}}~\frac{\mu}{2}\lVert{\bf X}-{\bf B}{\bf Z}+\frac{{\bf Y}_1}{\mu}\rVert_F^2+\frac{\mu}{2}\lVert{\bf p}-{\bf Z}_{\bf c}+\frac{{\bf y}_2}{\mu}\rVert_2^2\\ &+\gamma~tr({\bf ZLZ}^\top), \end{aligned} \end{equation} \begin{equation}\label{eq::P} \begin{aligned} &\min_{{\bf p}}\frac{1}{4\lambda}{\bf u}^\top S({\bf p})S({\bf p})^\top{\bf u}+\frac{\mu}{2}\lVert{\bf p}-{\bf Z}_{\bf c}+\frac{{\bf y}_2}{\mu}\rVert_2^2, \end{aligned} \end{equation} \begin{equation}\label{eq::u} \begin{aligned} &\min_{{\bf u}}\frac{1}{4\lambda}{\bf u}^\top S({\bf p})S({\bf p})^\top{\bf u}+\frac{1}{4}{\bf u}^\top{\bf u} -{\bf u}^\top{\bf y}.
\end{aligned} \end{equation} {\flushleft \bf Efficient solutions}. The problem in~\eqref{eq::Z} is convex, but does not have a closed-form solution. In this work, we solve it efficiently using Nesterov's Accelerated Gradient (NAG) algorithm~\cite{NAG16arXiv}. ${\bf p}$ in~\eqref{eq::P} and ${\bf u}$ in~\eqref{eq::u} can be computed efficiently in the Fourier domain~\cite{CSK15pami}. With simple algebra, the solutions of the above sub-problems are as follows: \begin{equation}\label{eq::Z-solution} \begin{aligned} &{\bf Z}=\mathcal{N}(f({\bf Z})), \end{aligned} \end{equation} \begin{equation}\label{eq::P-solution} \begin{aligned} &\hat{\bf p}=\frac{\mu\hat{\bf Z}_{\bf c}-\hat{\bf y}_2}{\frac{1}{2\lambda}\hat{\bf u}^*\odot\hat{\bf u}+\mu}, \end{aligned} \end{equation} \begin{equation}\label{eq::u-solution} \begin{aligned} &\hat{\bf u}=\frac{\hat{\bf y}}{\frac{1}{2\lambda}\hat{\bf p}^*\odot\hat{\bf p}+\frac{1}{2}}, \end{aligned} \end{equation} where $\mathcal{N}(\cdot)$ indicates the operator of the NAG, and $f({\bf Z})=\frac{\mu}{2}\lVert{\bf X}-{\bf B}{\bf Z}+\frac{{\bf Y}_1}{\mu}\rVert_F^2+\frac{\mu}{2}\lVert{\bf p}-{\bf Z}_{\bf c}+\frac{{\bf y}_2}{\mu}\rVert_2^2+\gamma~tr({\bf ZLZ}^\top)$. The Lagrange multipliers and parameters are updated by a standard scheme~\cite{ADMM11}: \begin{equation}\label{eq::multipliers} \begin{aligned} &{\bf Y}_1={\bf Y}_1+\mu({\bf X}-{\bf B}{\bf Z});\\ &{\bf y}_2={\bf y}_2+\mu({\bf p}-{\bf Z}_{\bf c});\\ &\mu=\min(\mu_m,\rho\mu), \end{aligned} \end{equation} where $\mu_m$ denotes the maximum value of $\mu$ and $\rho$ is the scale parameter. \subsection{Discussion} {\flushleft \bf Codebook construction}. There are many methods for codebook construction, e.g., $k$-means clustering and dictionary learning. $k$-means clustering randomly initializes a set of cluster centers and then performs clustering with the $k$-means algorithm, taking the final cluster centers as dictionary elements.
However, the quality of the dictionary is greatly affected by the initial centers, and the results are nondeterministic because the initial centers are randomly generated. Therefore, we use the dictionary learning algorithm proposed in~\cite{Jenatton10icml} to construct the codebook. Considering the time-sensitive nature of visual tracking, we construct the codebook in the first frame, and do not update it in subsequent frames. On one hand, the target image region in the first frame consists of the most representative patterns of the target object. On the other hand, even when new patterns of the target object appear, similar patterns across different frames are encoded into similar features on the fixed dictionary, and thus the tracking performance is not affected much. {\flushleft \bf Complexity analysis}. Since ${\bf L}$ and ${\bf Z}$ are sparse matrices, the computational cost of solving ${\bf Z}$ is $\mathcal{O}(k^3N_JDMN)$, where $N_J$ is the maximum number of iterations of the NAG. The complexity of solving $\hat{\bf p}$ and $\hat{\bf u}$ is $\mathcal{O}(MN)$. Taking the FFT and inverse FFT into account, the complexity of solving $\hat{\bf p}$ and $\hat{\bf u}$ is $\mathcal{O}(kMN\log(MN))$. Hence, the overall cost of our algorithm is $\mathcal{O}(MN(k^3N_JD + kN_I\log(MN)))$, where $N_I$ is the maximum number of iterations of the ADMM. In comparison, the complexity of DCF is $\mathcal{O}(MN\log(MN))$. Since $k$, $N_I$ and $N_J$ are very small and $D$ is much smaller than $MN$, the complexity of our algorithm is comparable with that of DCF. Note that ${\bf B}^\top{\bf B}$ and ${\bf B}^\top{\bf X}$ can be precomputed, and the computational time is thus further reduced. {\flushleft \bf Convergence}. Note that each subproblem in~\eqref{eq::lagrangian} is convex, and thus we can guarantee that the limit point generated by our algorithm satisfies the Nash equilibrium conditions~\cite{Xu13siam}.
In addition, we empirically find that the proposed optimization algorithm can converge within 2 iterations in ADMM and 3 iterations in NAG on most sequences, and thus we set $N_I$ to 2 in ADMM and $N_J$ to 3 in NAG for efficiency. \begin{algorithm}[t] \caption{Our Proposed Object Tracking Algorithm}\label{alg::tracking} \begin{algorithmic}[1] \REQUIRE Input video sequence, target bounding box $bb_0$. \ENSURE Estimated target bounding box $bb^*_t$. \STATE {\tt // Initialization} \STATE Construct codebook ${\bf B}^l$ for the $l$-th layer and Gaussian shape label vector ${\bf y}$; \REPEAT \STATE {\tt // Feature extraction} \STATE Extract hierarchical convolutional features ${\bf X}_t^l$ and HOG feature ${\bf H}_t$ according to $bb_{t-1}$, and compute Laplacian matrix ${\bf L}^l_t$ using ${\bf X}^l_t$; \STATE {\tt // Target localization} \STATE Solve~\eqref{eq::objective} with ${\bf B}^l$, ${\bf y}$, ${\bf X}_t^l$ and ${\bf L}^l_t$ as inputs to obtain motion filter ${\bf u}^l_t$; \STATE Compute response map for each layer, and combine all response maps to obtain confidence map; \STATE Estimate target location $\bar{bb}_t$ by finding maximum confidence score $s_t$; \STATE Update motion models using~\eqref{eq::update}; \STATE {\tt // Target re-detection} \IF{$s_t$ is below $T_1$} \STATE Generate proposals, and compute their response maps using appearance filter ${\bf w}_{at}$ with ${\bf H}_t$; \IF{Maximum response score is larger than $T_2$} \STATE Update $\bar{bb}_t$ as $bb_t$; \STATE Update appearance models similar to~\eqref{eq::update}; \ENDIF \ENDIF \STATE {\tt // Scale estimation} \STATE Generate a target pyramid, and compute their response maps using scale filter ${\bf w}_{st}$ with ${\bf H}^b_t$; \IF{Maximum response score is larger than $s_t$} \STATE Update $\bar{bb}_t$ or $bb_t$ as $bb^*_t$; \STATE Update scale models similar to~\eqref{eq::update}; \ENDIF \UNTIL{\emph{End of video sequence}.} \end{algorithmic} \end{algorithm} \section{Tracker Details}
\label{sec::tracking} Based on the proposed joint learning model, we briefly present our tracker with four modules, including model updating, target localization, target re-detection and scale handling. Algorithm~\ref{alg::tracking} shows the whole tracking procedure. \subsection{Tracking Modules} {\flushleft \bf Model updating}. To account for appearance changes of target objects, we update the appearance model $\bar{\bf x}$ and the filter model $\bar{\bf u}$ over time. At time $t$, the model parameters are updated by: \begin{equation}\label{eq::update} \begin{aligned} &\mathcal{F}(\bar{\bf x})^t=(1-\eta)\mathcal{F}(\bar{\bf x})^{t-1}+\eta\mathcal{F}({\bf x});\\ &\mathcal{F}(\bar{\bf u})^t=(1-\eta)\mathcal{F}(\bar{\bf u})^{t-1}+\eta\mathcal{F}({\bf u}), \end{aligned} \end{equation} where $\eta$ is a learning rate. We update the above models every 3 frames to avoid overfitting. {\flushleft \bf Target localization}. Given the learned appearance model $\bar{\bf x}$ and filter model $\bar{\bf u}$, we estimate the target translation by searching for the location of the maximal value of $\bar{\bf y}$ in~\eqref{eq::translation}: \begin{equation}\label{eq::translation} \begin{aligned} &\bar{\bf y}=\mathcal{F}^{-1}(\mathcal{F}(\bar{\bf u})\odot\sum_{d=1}^D\mathcal{F}({\bf x}^d\odot\bar{\bf x}^d)), \end{aligned} \end{equation} where ${\bf x}$ denotes an image patch in the new frame. {\flushleft \bf Target re-detection}. If a tracking failure occurs, it is difficult for the proposed method to recover the target, which would degrade the tracking performance. To handle this problem, we integrate a target re-detection scheme into our tracking framework like~\cite{Ma15cvpr,HCF18pami}. Specifically, we set a threshold $T_1$ to judge whether a tracking failure occurs. If the confidence score is below $T_1$, we regard the tracker as having lost the target and generate a set of region proposals using the EdgeBox algorithm~\cite{EdgeBox14eccv} across the whole frame to recover the target object.
Then, another correlation filter learnt over the HOG feature is used to re-detect the target object, and we update this filter with the learning rate $\eta_2$ when its confidence score is larger than a threshold $T_2$. {\flushleft \bf Scale handling}. During object tracking, we construct a target pyramid around the estimated translation location for scale estimation~\cite{Ma15cvpr}. Let $M\times N$ be the target size in a test frame and let $R$ denote the number of scales $\mathbb{B}=\{a^{\bar{r}}|\bar{r}=\lfloor-\frac{R-1}{2}\rceil,\lfloor-\frac{R-3}{2}\rceil,...,\lfloor\frac{R-1}{2}\rceil\}$. For each $b\in \mathbb{B}$, we extract an image region of size $bM\times bN$ centered around the estimated location. Then, we uniformly resize all image regions to the size $M\times N$, and the optimal scale of the target is obtained by evaluating all resized image regions using the correlation filter learnt over the HOG feature, for efficiency. The parameter setting of scale estimation is the same as in~\cite{Ma15cvpr}, and we update the scale filter with the learning rate $\eta_1$. \subsection{Difference from Previous Work} It should be noted that our method is significantly different from~\cite{ColorNamesTracker14cvpr,ECO17cvpr} in the following aspects. 1) Danelljan et al.~\cite{ColorNamesTracker14cvpr} compute a matrix via PCA to project high-dimensional features into a lower space, and Danelljan et al.~\cite{ECO17cvpr} formulate a projection matrix into the correlation filter model to project high-dimensional features into low-dimensional ones. In contrast, we encode input features over a predefined dictionary to generate new feature representations, and then employ them to optimize the filter in a unified framework.
2) For low-dimensional features (especially one-dimensional ones, e.g., gray values), \cite{ColorNamesTracker14cvpr,ECO17cvpr} are not suited to enhancing the discrimination, but our method can handle them and still improve the tracking performance, as demonstrated in the experiments. Our method is also very different from other feature coding based trackers~\cite{Zhang17tnnls,Liu11cvpr,SCM12cvpr,Zhang13pr,Wang15tip}. These methods usually employ feature coding algorithms to learn a target-based appearance histogram, and then define the likelihood scores of candidates using similarities with the target template in the Bayesian or particle filtering framework. Different from these methods, we pursue a robust feature coding in the correlation filter model to yield a compact, discriminative and target-oriented feature representation. \begin{table*}[t]\footnotesize \caption{PR/SR scores of FOF versus the trackers that only use deep features on the OTB50 and OTB100 datasets, where the best results are in bold fonts. } \centering \begin{tabular}{c|c|c c c c c c c c c|c} \hline Dataset & Metric & CFNet & SINT++ & SRDCFdecon & HCF & FCNT & StrutSiam & HDT & DeepSRDCF & MDNet & FOF\\ & & CVPR17 & CVPR18 & CVPR16 & ICCV15 & ICCV15 & ECCV18 & CVPR16 & ICCVW15 & CVPR16 & \\ \hline OTB50 & PR & 0.807 & 0.839 & 0.870 & 0.891 & 0.856 & 0.880 & 0.889 & 0.849 & 0.911 & {\bf 0.915}\\ & SR & 0.611 & 0.624 & 0.653 & 0.605 & 0.599 & 0.638 & 0.603 & 0.641 & 0.671 & {\bf 0.672} \\ \hline OTB100 & PR & 0.751 & 0.768 & 0.825 & 0.837 & 0.779 & 0.848 & 0.851 & 0.851 & 0.878 & {\bf 0.881} \\ & SR & 0.580 & 0.574 & 0.627 & 0.562 & 0.551 & 0.564 & 0.621 & 0.635 & {\bf 0.646} & 0.639 \\ \hline \end{tabular} \label{tb::otb100} \end{table*} \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{Visual_Comparisons} \\ \caption{Visual examples of our method in comparison with five trackers on four video sequences.
}\label{fig::visual_results} \end{figure*} \section{Performance Evaluation} \label{sec::evaluation} To validate the effectiveness of our framework, i.e., Filter Optimization driven Feature coding (FOF), we evaluate it on three benchmarks, i.e., the OTB50 dataset~\cite{RGBbenchmark13cvpr}, the OTB100 dataset~\cite{RGBbenchmark15pami} and the VOT2016 dataset~\cite{vot16challenge}. Finally, we analyze the proposed model in detail. \subsection{Evaluation Setting} {\flushleft \bf Implementation details}. We adopt VGGNet-19 trained on the ImageNet dataset for feature extraction, and use the outputs of the \emph{conv3-4}, \emph{conv4-4} and \emph{conv5-4} convolutional layers as our features. The proposed algorithm is applied to these features and their response maps are combined with the weights 0.25, 0.5 and 1, respectively. We keep the learning rates $\eta$, $\eta_1$ and $\eta_2$ the same at 0.01 for simplicity, and set the thresholds $T_1$ and $T_2$ to 0.25 and 0.38, respectively. For generating proposals in EdgeBox, we set the step size to 0.85 and the NMS (non-maximum suppression) threshold to 0.8. In the proposed model~\eqref{eq::objective}, we empirically set $\lambda=0.5$ and $\gamma=0.8$. {\flushleft \bf Evaluation metrics}. On both the OTB50 and OTB100 datasets~\cite{RGBbenchmark13cvpr,RGBbenchmark15pami}, we use the precision rate (PR) and success rate (SR) for quantitative performance evaluation. PR is the percentage of frames whose output location is within a threshold distance of the ground truth, and SR is the percentage of frames whose overlap ratio between the output bounding box and the ground-truth bounding box is larger than a threshold. We set the threshold to 20 pixels to obtain the representative PR, and employ the area under the curve of the success plot as the representative SR.
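The PR and SR metrics described above can be sketched as follows (an illustrative sketch; the function names and the $(x, y, w, h)$ box convention are our own):

```python
import numpy as np

def precision_rate(pred_centers, gt_centers, thresh=20.0):
    """PR: fraction of frames whose predicted center lies within
    `thresh` pixels of the ground-truth center (20 px is the
    representative threshold used on OTB)."""
    d = np.linalg.norm(np.asarray(pred_centers, float)
                       - np.asarray(gt_centers, float), axis=1)
    return float(np.mean(d <= thresh))

def iou(a, b):
    """Overlap ratio (IoU) of two boxes given as (x, y, w, h)."""
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    return inter / (a[2] * a[3] + b[2] * b[3] - inter)

def success_auc(pred_boxes, gt_boxes, n_thresh=21):
    """Representative SR: area under the success curve, i.e. the mean
    over overlap thresholds in [0, 1] of the fraction of frames whose
    IoU exceeds the threshold."""
    ious = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return float(np.mean([(ious > t).mean()
                          for t in np.linspace(0.0, 1.0, n_thresh)]))
```

Sweeping the thresholds instead of fixing them yields the full precision and success plots from which these representative scores are taken.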
On the VOT2016 dataset~\cite{vot16challenge}, we adopt 3 primary measures, i.e., Accuracy (A), Robustness (R) and expected average overlap (EAO), to assess a tracker. A is the average overlap between the predicted and ground-truth bounding boxes during successful tracking periods, and R measures how many times the tracker loses the target (fails) during tracking. EAO is an estimator of the average overlap a tracker is expected to attain on a large collection of short-term sequences with the same visual properties as the given dataset~\cite{vot16challenge}. \subsection{Evaluation on the OTB50 Dataset} On the OTB50 dataset, we evaluate our approach in comparison with nine state-of-the-art trackers that only use deep learning features, including MDNet~\cite{MDNet15cvpr}, DeepSRDCF~\cite{DeepSRDCF15iccvw}, HDT~\cite{HDT16cvpr}, StrutSiam~\cite{StrutSiam18eccv}, FCNT~\cite{Wang15iccv}, HCF~\cite{Ma15iccv}, SRDCFdecon~\cite{SRDCFdecon16cvpr}, SINT++~\cite{SINT++18cvpr}, and CFNet~\cite{CFNet17cvpr}. Table~\ref{tb::otb100} shows the results, which suggest that our FOF generally performs well against the state-of-the-art trackers on the OTB50 dataset. In particular, our method improves the baseline HCF by a large margin (2.4\%/6.7\% performance gains in PR/SR), and outperforms the state-of-the-art MDNet, which, for a fair comparison, is not trained offline on auxiliary sequences. The overall promising performance of our method can be explained by the fact that the proposed joint learning algorithm strengthens the discriminative power of the filter by suppressing feature redundancy and noise.
We also present four visual examples of our method in comparison with five trackers in Fig.~\ref{fig::visual_results}, including HCF~\cite{Ma15iccv}, SCM~\cite{SCM12cvpr}, Struck~\cite{Stuck11iccv}, LCT~\cite{Ma15cvpr} and SINT++~\cite{SINT++18cvpr}, which qualitatively justify the effectiveness of our FOF tracker in handling the challenges of motion blur, low resolution, partial occlusion, appearance variation, target rotation and background clutter. \subsection{Evaluation on the OTB100 Dataset} On the OTB100 dataset, we also evaluate our approach against the above nine state-of-the-art trackers, as shown in Table~\ref{tb::otb100}. The results on the OTB100 dataset yield observations similar to those on the OTB50 dataset. Specifically, our FOF outperforms the baseline HCF by 4.4\%/7.7\% in PR/SR, larger performance gains than on the OTB50 dataset. Compared with MDNet, we achieve superior performance in PR, but slightly worse in SR. This is because our tracker uses a simple strategy that samples a sparse set of scaled regions from an image pyramid and then evaluates them using a HOG-based correlation filter for scale estimation, whereas MDNet adopts a bounding-box regression model trained with deep features to improve the target localization accuracy. Overall, the favorable results against the state-of-the-art methods on the OTB100 dataset further demonstrate the effectiveness of the proposed approach. \begin{table}[t]\footnotesize \caption{Accuracy, Robustness and EAO on the VOT2016 dataset, where the best results are in bold fonts.
} \centering \begin{tabular}{c|c c c c |c} \hline & HCF & SiamFC & SRDCF & MDNet & FOF\\ \hline A & 0.445 & 0.527 & 0.532 & {\bf 0.541} & 0.531 \\ R & 0.664 & 0.630 & 0.657 & 0.714 & {\bf 0.760} \\ EAO & 0.220 & 0.235 & 0.247 & 0.257 & {\bf 0.307} \\ \hline \end{tabular} \label{tb::vot} \end{table} \subsection{Evaluation on the VOT2016 Dataset} Finally, we report the evaluation results of FOF against MDNet~\cite{MDNet15cvpr}, SRDCF~\cite{danelljan2015learning}, SiamFC~\cite{SiamFC16eccv} and HCF~\cite{Ma15iccv} on the VOT2016 dataset~\cite{vot16challenge}, as shown in Table~\ref{tb::vot}. From the results we can see that the performance of our FOF is clearly better than that of MDNet, SRDCF, SiamFC and HCF in terms of most metrics, further demonstrating the effectiveness of the proposed tracker. The overlap ratio of our method is lower than that of MDNet, and we have explained the reason in the analysis on the OTB100 dataset. The VOT2016 report suggests that trackers whose EAO value exceeds 0.251 belong to the state-of-the-art, so our FOF qualifies as state-of-the-art. \begin{table}[t]\footnotesize \caption{Analysis of parameter sensitivity on the OTB100 dataset. } \centering \begin{tabular}{c| c c c| c c c } \hline & \multicolumn{3}{c|}{$\lambda$} & \multicolumn{3}{c}{$\gamma$} \\ \hline & 0.3 & 0.5 & 0.7 & 5 & 10 & 15 \\ \hline PR & 0.871 & 0.881 & 0.879 & 0.878 & 0.881 & 0.879 \\ SR & 0.635 & 0.639 & 0.638 & 0.637 & 0.639 & 0.637 \\ \hline \end{tabular} \label{tb::parameter_sensitivity} \end{table} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Component_OTB100} \\ \caption{Ablation study on the OTB100 dataset.}\label{fig::component} \end{figure} \subsection{In-depth Analysis of Our Approach} Our implementation in MATLAB runs on a PC with an i7 4.2 GHz CPU and 32GB memory. To clarify the proposed approach, we analyze it in detail from the following four aspects. {\flushleft \bf Sensitivity analysis of model parameters}.
The scalar parameters $\lambda$ and $\gamma$ in~\eqref{eq::objective} avoid overfitting of ${\bf u}$ and balance the Laplacian term against the other terms, respectively. We set $\lambda$ to 0.3, 0.5 and 0.7, and $\gamma$ to 5, 10 and 15 to evaluate the tracking performance on the OTB100 dataset. Table~\ref{tb::parameter_sensitivity} shows the results: the tracking performance is not disturbed much when slightly adjusting $\lambda$ and $\gamma$. {\flushleft \bf Ablation study}. We conduct experiments to justify the effectiveness of the components used in our approach. They are: 1) noJL, which removes the joint learning scheme in~\eqref{eq::objective}, i.e., first performing feature coding and then employing the coding features to train the correlation filter; 2) noL, which removes the Laplacian constraint on feature coding in~\eqref{eq::objective}; 3) noTR, which removes the target re-detection scheme in the proposed tracking method; and 4) noSH, which removes the scale handling strategy in our tracking approach. The results are presented in Fig.~\ref{fig::component}. We can see that the proposed joint scheme plays a very critical role in improving the tracking performance, as shown by the large improvement (2.9\%/2.2\% in PR/SR) of FOF over FOF-noJL. The Laplacian constraint is also helpful in enhancing the coding quality and stability. In addition, the target re-detection scheme and the scale handling strategy improve the tracking performance considerably. Overall, the results justify the effectiveness of the different components introduced in our tracking framework. \begin{table}[t]\footnotesize \caption{Analysis of the codebook size and construction algorithm on the OTB100 dataset.
} \centering \begin{tabular}{c| c c c| c c } \hline & \multicolumn{3}{c|}{{\bf Size}} & \multicolumn{2}{c}{{\bf Algorithm}} \\ \hline & 3 & 10 & 20 & $k$-means (run 1) & $k$-means (run 2)\\ \hline PR & 0.878 & 0.881 & 0.872 & 0.853 & 0.863 \\ SR & 0.636 & 0.639 & 0.633 & 0.624 & 0.628 \\ \hline \end{tabular} \label{tb::codebook} \end{table} {\flushleft \bf Impacts of the codebook}. We also study the impact of different codebook sizes, and the results are shown in Table~\ref{tb::codebook}. The results show that the performance drops a little when the codebook size is either increased or decreased, and thus we set it to 10. This can be explained by the fact that a smaller codebook decreases the diversity of the learnt features, while a larger one increases the instability of the feature codes; the latter could be mitigated by introducing more prior constraints on the learnt features, and we will study this in the future. In addition, we replace the dictionary learning algorithm with the $k$-means algorithm to construct the codebook, and find that the tracking performance is affected considerably and that the results of two runs differ due to the random initialization of $k$-means. This again demonstrates the effectiveness of the dictionary learning scheme adopted in our method. {\flushleft \bf Efficiency analysis}. Finally, we present the runtime of our FOF using a single CPU against the baseline HCF~\cite{Ma15iccv} and the state-of-the-art MDNet~\cite{MDNet15cvpr}, together with their tracking performance on the OTB100 dataset, in Table~\ref{tb::efficiency}. Overall, the results demonstrate that our framework clearly outperforms the baseline HCF (4.4\%/7.7\% performance gains in PR/SR) with a modest impact on the frame rate (1.86 FPS versus 2.36 FPS), and performs comparably against the state-of-the-art MDNet while being much faster (1.86 FPS versus 0.27 FPS).
We also report the time costs of the major parts of our approach: feature extraction, target re-detection, scale handling and the optimization occupy 67\%, 15\%, 10\% and 8\% of the runtime, respectively. One can see that the feature extraction consumes the most time, while the proposed optimization algorithm only consumes a small fraction (8\%, about 40 ms per frame on a single CPU). \begin{table}[t]\footnotesize \caption{Performance and runtime of our FOF against the baseline and state-of-the-art methods on a single CPU. } \centering \begin{tabular}{c| c c c } \hline & HCF & MDNet & FOF \\ \hline PR & 0.837 & 0.878 & 0.881 \\ SR & 0.562 & 0.646 & 0.639 \\ \hline FPS & 2.36 & 0.27 & 1.86 \\ \hline \end{tabular} \label{tb::efficiency} \end{table} \section{Conclusion} \label{sec::conclusion} In this paper, we have proposed a joint learning algorithm to enhance the discriminative capacity of the correlation filter, which outperforms the baseline trackers by a clear margin and also achieves favorable performance against the state-of-the-art trackers on three tracking datasets. The proposed algorithm bridges feature coding and correlation filter learning, and provides a potential research direction for the visual tracking task. In future work, we will integrate other feature coding algorithms and spatio-temporal cues into our framework to improve the robustness of representation learning and thus further enhance the discriminative capacity of the correlation filter. {\small \bibliographystyle{ieee}
\section{Introduction} \label{Sec:Intro} The dispersive interaction between a qubit and a cavity forms the basis for qubit state measurement widely employed in superconducting quantum circuits. As predicted by the Jaynes-Cummings model of this interaction \cite{blais_et_al_2004}, each qubit state induces a different shift on the effective resonance frequency of the readout cavity \cite{wallraff_et_al_2004}. By monitoring this shift with a microwave probe-pulse, the qubit state can be accurately measured. The rapid and high-fidelity application of qubit state readout is widely recognized to be a critical component in the implementation of current quantum computing algorithms. The fidelity of this protocol is predicated on the dominance of certain number-conserving terms in the effective qubit evolution under the action of the probe-pulse that is quasi-resonant with the readout cavity. This dynamical regime, sometimes referred to as the linear dispersive regime, is generally expected to prevail for cavity photon occupations well below the ``critical photon number" $n_\text{crit} = \Delta^2/4g^2$, where $\Delta = \om_\ssc - \om_\ssq$ is the detuning between the cavity ($\om_\ssc$) and qubit ($\om_\ssq$) resonance frequencies and $g$ is the vacuum Rabi frequency characterizing the coupling strength \cite{blais_et_al_2004, boissonneault_et_al_2009}. For present systems based on transmon qubits \cite{koch_et_al_2007}, this number is typically $n_\text{crit} \ge 25$. \begin{figure}[t!] \includegraphics[width=0.65\linewidth]{fig_dispersive_readout.pdf} \caption{Schematics of the readout setup. A Josephson junction qubit (mode $\hat{a}$) is capacitively coupled to a waveguide through a readout cavity (mode $\hat{c}$). A microwave drive (amplitude $\ed$, frequency $\wdr$) is applied through the waveguide. 
\label{Fig:Circuits} } \end{figure} Recent experimental analysis \cite{mundhada_et_al_2016, minev_et_al_2019} indicates that $T_1$ relaxation time may decrease by as much as a factor of two for relatively small cavity photon occupations $\bar{n}_\ssc \sim 5$. Understanding the plausible fundamental mechanism behind this observation is one of the goals of this paper. It should come as no surprise that in a coherently driven nonlinear system the validity of perturbation theory in Hamiltonian parameters (such as $g/\Delta$) requires some care regarding the nature of the qubit nonlinearity. Early work carefully analyzed the so-called ``nonlinear dispersive regime" of operation and the systematic corrections to the frequencies and dissipation rates \cite{boissonneault_et_al_2009} within the Jaynes-Cummings framework, suitable for qubits with a strong anharmonicity, such as the Cooper pair box or quantronium qubit \cite{makhlin_et_al_2001,bouchiat_et_al_1998,vion_et_al_2002,lehnert_et_al_2003}. This approach predicts \cite{boissonneault_et_al_2009, sete_et_al_2014} that in the absence of any dephasing noise, the relaxation rate ($1/T_1$) of the qubit {\it decreases} with drive strength. The presence of a dephasing noise, on the other hand, is found to lead to an {\it increase} of the relaxation rate with the drive strength. This ``dressed dephasing hypothesis'' does seem to agree with some experimental data that found an increase in the relaxation rate with the drive-strength \cite{slichter_et_al_2012}, but does not seem to correctly capture the effective temperature of the qubit in the steady-state in experiments conducted on 3D transmon qubits \cite{mundhada_et_al_2016}. The question therefore arises whether accurate modeling of the Josephson nonlinearity of the qubit changes any of these predictions in a qualitative way. 
We address this question here building on the technique of unitary transformations established in Ref.~[\onlinecite{malekakhlagh_et_al_2018}], hereafter called Part I. Here we derive an effective master equation (EME) for a weakly anharmonic qubit driven by a coherent microwave tone. We consider the situation typical of dispersive readout, where the weakly anharmonic qubit is coupled to a single-mode resonator, which in turn is connected capacitively to a semi-infinite transmission line [see Fig.~\ref{Fig:Circuits}]. Extending the formalism developed in Part I to the coherently driven case, we provide analytical expressions for effective system frequencies, as well as relaxation and excitation rates that depend on drive parameters. Through a two-parameter expansion in the weak Josephson anharmonicity and the drive strength, we show that at lowest order the system unitary dynamics is governed by a multi-mode Kerr Hamiltonian \cite{Nigg_BlackBox_2012}, as found in Part I, but with drive-adapted parameters. The renormalization of relaxation rates can only be captured by retaining the number non-conserving terms in the Josephson potential. One important finding is that drive-activated correlated qubit-cavity relaxation processes are dominantly responsible for large renormalizations of the qubit relaxation rates. The formalism presented here is the time-dependent generalization of that in Part I, and the results reduce to those obtained in Part I in the limit of zero drive strength. There are several conclusions that can be drawn from our results regarding driven Josephson junctions. Here we consider solely the electromagnetic fluctuations of the infinite transmission line at zero temperature as a source of relaxation (and excitation, when mixed with the coherent drive tone, as we show). We find that the lowest order impact of the drive is to {\it increase} the relaxation rate of a dispersively coupled qubit. 
This is in contrast to earlier findings \cite{boissonneault_et_al_2009, sete_et_al_2014} that the relaxation rate decreases with the drive strength in the absence of dephasing sources. The reason can be traced back to the two-level approximation to the Josephson nonlinearity that underlies the Jaynes-Cummings (and the Rabi) model. From the point of view of the anharmonicity, the Josephson nonlinearity is a softening potential, while the two-level truncation is the extreme case of a hardening potential. In terms of the parameter $\epsilon = \sqrt{2 E_\text{C} / E_\text{J}}$ of a Josephson potential, the two-level truncation corresponds to $\epsilon<0$, a principally unphysical limit. This has the additional consequence that the impact of the drive is effective already at lower excitation powers than previously foreseen, with important implications for optimization of readout protocols. Finally, the impact of a radiative bath through which the drive is incident is found to also lead to excitation of the qubit in proportion to the drive strength, even at zero temperature. Any initialization, computation and readout operation on superconducting circuits involves microwave drives. Our results indicate that the accurate modeling of the Josephson potential of qubits in such circuits is critical as the demand for high-fidelity operations is pushed to its limits. Methods to deal with this challenge may be based on purely numerical schemes. 
Indeed in recent years, it has become necessary to better model strongly driven Josephson circuits, in a variety of applications: parametric schemes for engineering effective nonlinearities \cite{zhang_et_al_2019,frattini_et_al_2018,sivak_et_al_2019}, high-power readout schemes \cite{ginossar_et_al_2010,reed_et_al_2010}, as well as the driven-dissipative stabilization of states confined to a given quantum manifold, such as cat states \cite{leghtas_et_al_2015,vlastakis_et_al_2013,puri_et_al_2017}, as well as implementations of parametric two-qubit gates \cite{rigetti_devoret_2010,chow_et_al_2013,mckay_et_al_2016,magesan_et_al_2018,sheldon_et_al_2016,caldwell_et_al_2018,didier_et_al_2018,reagor_et_al_2018}. The initial evaluation of the effectiveness of the two-level system approximation for modeling high-power dynamics \cite{bishop_et_al_2010} has been addressed in Ref.~\onlinecite{boissonneault_et_al_2010}. More recently the Floquet master equation \cite{grifoni_haenggi_1998} has been successful in describing the escape of certain strongly driven Josephson circuits into states unconfined by the cosine potential \cite{lescanne_et_al_2019,verney_et_al_2019}. Earlier theoretical and experimental work also points to the role of counter-rotating terms in explaining the unexpectedly high susceptibility of certain Josephson circuits to excitation in certain power bands \cite{sank_et_al_2016}. The pursuit of deriving effective generators for the evolution of open systems has a long history which can be traced back to the projection-operator formalism of Feshbach \cite{feshbach_1958}. Most of these schemes rely on numerical methods to extract the low-frequency dynamics generated by linear operators of the Lindblad class \cite{kessler_2012,reiter_sorensen_2012,mirrahimi_rouchon_2009}. 
A similar method has been applied to obtain effective dynamics on reduced manifolds using quantum stochastic differential equations \cite{bouten_et_al_2008,bouten_silberfarb_2008,tezak_et_al_2017}. An important aspect of the approach presented here is that one obtains explicit drive-dependent renormalizations of both frequencies and relaxation rates because of the inclusion of number-nonconserving terms. Underlying our method is a series of unitary Schrieffer-Wolff transformations \cite{schrieffer_wolff_1966} that remove number-nonconserving terms order-by-order from the system Hamiltonian, but dress the interactions of the system with its environment. \begin{figure*}[t!] \includegraphics[width=\textwidth]{fig_eme_all_results.pdf} \caption{\label{Fig:EMEQ1M1} Master equation simulations for our model of dispersive readout [see Fig.~\ref{Fig:Circuits}]. a) EME solution: The natural logarithm of the qubit occupation number $\langle \hat{a}^\dagger \hat{a} \rangle$ as a function of time, for different values of the drive power. As the drive strength is increased, the relaxation rate of the qubit increases linearly as a function of the cavity steady-state population. Inset: The Kerr-only master equation predicts no drive- or nonlinearity-induced renormalization of the qubit relaxation rate. b) The drive strength is adjusted such that the cavity has a mean steady-state population $\bar{n}_\ssc$. c) Qubit relaxation rate [Eq.~(\ref{Eq:Fitted})] extracted from a number of numerical simulations: Kerr theory, EME with all terms included, as well as EME with only a subset of the terms included (see text for a complete discussion).} \end{figure*} The remainder of the paper is organized as follows.
Section~\ref{Sec:Model} introduces the model for the quantum circuit consisting of a qubit coupled to a cavity, which is the standard setup for the dispersive readout scheme, and outlines the main steps of the perturbation theory in weak anharmonicity and weak drive used to obtain corrections to frequency and decay rate. We apply the EME to understand dispersive readout in Sec.~\ref{Sec:PT}. The EME is analytically derived and then numerically simulated. The main outcome of this section is our prediction for the renormalizations of the qubit transition frequency and zero-temperature relaxation rate in the presence of a driven cavity at a steady-state population $\bar{n}_\ssc$. Finally, we summarize our results in Sec.~\ref{Sec:Summary}. We have opted to relegate many details to appendices in an effort to improve clarity. Each of the appendices is pointed to in the main sections of the paper when necessary. \section{Model and main result} \label{Sec:Model} In this section, we present the model and the steps towards obtaining the EME. The system under consideration is a superconducting transmon qubit \cite{koch_et_al_2007} capacitively coupled to a cavity, which is an idealization of the circuitry typically used for dispersive readout \cite{blais_et_al_2004,Wallraff_Strong_2004}. The dynamics of the system (subscript ``s'') coupled to the waveguide (``bath'', subscript ``b'') follows from the full Hamiltonian: \begin{equation} \hat{\mathcal{H}} = \hat{\mathcal{H}}_{\text{s}} + \hat{\mathcal{H}}_\text{d}(t) + \hat{\mathcal{H}}_{\text{sb}} + \hat{\mathcal{H}}_{\text{b}}. \end{equation} We have approximated the circuit as an oscillator characterized by inductance $L_\ssc$ and capacitance $C_\ssc$ [see Fig.~\ref{Fig:Circuits}], resulting in the oscillator frequency $\bar{\om}_c = 1/\sqrt{L_\ssc C_\ssc}$. The simplification of the superconducting cavity to a single mode is done for transparency of results.
The techniques presented in this work can be easily generalized to a multi-mode setup, starting from the exact electromagnetic modeling of the system \cite{Malekakhlagh_NonMarkovian_2016}. The coupling capacitance between the qubit and the cavity is denoted by $C_g$. The transmon qubit is defined by the Josephson and Coulomb charging energies, respectively denoted by $E_\text{J}$ and $E_\text{C} = e^2/(2C_\text{J})$, where $C_\text{J}$ is the capacitance across the Josephson junction. This leads to the qubit transition frequency\cite{koch_et_al_2007} $\bar{\om}_a \approx \sqrt{8 E_\text{J} E_\text{C}}$ in the transmon limit of weak anharmonicity, $E_\text{J}/E_\text{C} \gg 1$. Upon quantizing\cite{Devoret_Quantum_1995} the circuit of Fig.~\ref{Fig:Circuits}, we arrive at the following system Hamiltonian, which was the starting point of Part I: \begin{equation} \label{Eq:Hs} \hat{\mathcal{H}}_\text{s} = \frac{\bar{\om}_a}{4}\left[\hat{\bar{Y}}_\ssq^2-\frac{2}{\epsilon}\cos\left(\sqrt{\epsilon}\hat{\bar{X}}_\ssq\right)\right]+\frac{\bar{\om}_c}{4}\left(\hat{\bar{X}}_\ssc^2+\hat{\bar{Y}}_\ssc^2\right)+g\hat{\bar{Y}}_\ssq\hat{\bar{Y}}_\ssc, \end{equation} consisting of a part describing the transmon qubit, one describing the linear superconducting cavity, and finally a coupling term between the two. With our conventions, the commutator of the phase and charge quadratures contains an additional factor of two: $[\hat{\bar{X}}_{\ssq,\ssc}, \hat{\bar{Y}}_{\ssq,\ssc}] = 2i$ [see App.~\ref{App:Conventions} for an explanation of our conventions]. The energy scale $g$ denotes the capacitive qubit-cavity coupling strength, and it can be related to the coupling capacitance $C_g$.
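Two elementary features of this Hamiltonian can be checked numerically: the factor-of-two commutator $[\hat{\bar{X}}, \hat{\bar{Y}}] = 2i$, realized for instance by $\hat{\bar{X}} = \hat{a} + \hat{a}^\dagger$ and $\hat{\bar{Y}} = -i(\hat{a} - \hat{a}^\dagger)$ (one representation consistent with the stated convention; the precise choice is fixed in App.~\ref{App:Conventions}), and the weak-anharmonicity expansion of the Josephson term, $-\frac{2}{\epsilon}\cos(\sqrt{\epsilon}\,x) = -\frac{2}{\epsilon} + x^2 - \frac{\epsilon}{12}x^4 + O(\epsilon^2)$, a harmonic term plus a softening quartic correction. A minimal sketch, with illustrative numerical values:

```python
import math
import numpy as np

# --- quadrature convention [X, Y] = 2i in a truncated Fock space ---
N = 12                                        # truncation size (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator
ad = a.conj().T
X, Y = a + ad, -1j * (a - ad)
comm = X @ Y - Y @ X
# away from the truncation boundary, the commutator equals 2i * identity
assert np.allclose(comm[:N-1, :N-1], 2j * np.eye(N-1))

# --- Josephson term: harmonic part + O(eps) quartic (softening) correction ---
def josephson(x, eps):
    return -(2.0 / eps) * math.cos(math.sqrt(eps) * x)

def expansion(x, eps):
    return -2.0 / eps + x**2 - eps * x**4 / 12.0

eps, x = 0.1, 0.7        # eps = sqrt(2 E_C / E_J) << 1 in the transmon regime
err = abs(josephson(x, eps) - expansion(x, eps))
# the residual is set by the next term in the series, +eps^2 x^6 / 360
assert err < 1.5 * eps**2 * x**6 / 360
```

Halving $\epsilon$ reduces the residual by a factor of four, confirming the $O(\epsilon^2)$ scaling of the truncation error.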
Finally, an important dimensionless quantity appearing in the qubit Hamiltonian $\hat{\mathcal{H}}_\ssq$ is the anharmonicity parameter \begin{equation} \epsilon = \sqrt{2 E_\text{C}/E_\text{J}}, \end{equation} which will form the basis of the perturbative expansion for weak anharmonicity, $\epsilon \ll 1$; this coincides with the regime of operation of transmon qubits \cite{koch_et_al_2007}. Note that the quadratures in Eq.~(\ref{Eq:Hs}) are standard phase and number operators scaled by factors of $\sqrt{\epsilon}$ (see App.~\ref{App:Conventions}). Switching to the dimensionless quadratures $\hat{\bar{X}}_{\ssq,\ssc}$ etc. in the expression of the system Hamiltonian~(\ref{Eq:Hs}) will permit a perturbative expansion in powers of $\epsilon$. We assume that the drive term $\hat{\mathcal{H}}_\text{d}(t)$ acts on the bare charge quadrature, as depicted in Fig.~\ref{Fig:Circuits}. The driven Hamiltonian, time-periodic with period $2\pi / \wdr$, introduces additional complexity in the derivation of EMEs as compared to the undriven case treated in Part I. In the EMEs derived below, the strength of the drive, $\ed$, and the anharmonicity parameter, $\epsilon$, will be treated on an equal footing as small parameters in a perturbative expansion that leads to effective driven-dissipative dynamics. To organize this double perturbative expansion, we first need to switch from the bare mode basis to the normal mode basis, whose advantage is that the linear theory becomes diagonal in Fock space. This involves re-expressing the bare quadratures in Eq.~(\ref{Eq:Hs}) in terms of normal mode quadratures \begin{equation} \{\hat{\bar{X}}_\ssq,\hat{\bar{Y}}_\ssq,\hat{\bar{X}}_\ssc,\hat{\bar{Y}}_\ssc\} \to \{\hat{X}_\ssq,\hat{Y}_\ssq,\hat{X}_\ssc,\hat{Y}_\ssc\}, \end{equation} at the expense of introducing hybridization coefficients [see App.~\ref{App:Conventions}]. We now turn to the description of the drive and relaxation mechanisms in this normal mode basis.
In this work, we think of both drive and relaxation as being facilitated by capacitively coupling the bare cavity mode to the waveguide, see Fig.~\ref{Fig:Circuits}. Thus the drive is distributed to the normal modes as follows \begin{equation} \hat{\mathcal{H}}_\text{d}=\ed (v_\textit{ca} \hat{Y}_\ssq + v_\textit{cc} \hat{Y}_\ssc) \sin(\wdr t). \label{eqn:Model-Hd in normpic} \end{equation} We assume that there is no intrinsic decay rate for the bare qubit oscillator, \textit{i.e.} that relaxation is only induced on the qubit by coupling it to the open cavity. This is the situation of pure radiative decay known as the Purcell effect. The effect of higher harmonics of the cavity can be addressed using the theoretical framework introduced in Ref.~[\onlinecite{Malekakhlagh_Cutoff_2017}]. In the normal mode basis, the system-bath coupling arises from the capacitive coupling of the system to the waveguide: \begin{eqnarray} \hat{\mathcal{H}}_{\text{sb}} &=& \left(v_\textit{ca} \hat{Y}_\ssq + v_\textit{cc} \hat{Y}_\ssc \right) \hat{Y}_\text{b}, \label{Eq:HSB} \end{eqnarray} where \begin{equation} \hat{Y}_\text{b}=\sum_k g_{k} \left( -i \hat{B}_{k} + i\hat{B}_{k}^\dag \right) \end{equation} is the noise operator to which the bare cavity quadrature couples, and the continuum of bath modes is described by bosonic creation and annihilation operators obeying the commutation relation $[ \hat{B}_k, \hat{B}_k^\dag ]=1$ for each index $k$, governed by the linear Hamiltonian $\hat{\mathcal{H}}_\text{b} = \sum_k \om_{k} \hat{B}_{k}^\dag \hat{B}_{k}$. In order to prepare a perturbative expansion in the two parameters $\ed$ and $\epsilon$, we may now bring $\hat{\mathcal{H}}_\text{s} + \hat{\mathcal{H}}_\text{d}(t)$ to a new form in which the anharmonicity and the drive appear on equal footing. This is achieved by a displacement transformation that removes the terms which are linear in the quadratures (App.~\ref{Ap:Shift}). 
Upon performing this transformation, we denote the resulting Hamiltonian $\hat{\mathcal{H}}_\text{s} + \hat{\mathcal{H}}_\text{d}(t) \to \hat{\mathcal{H}}_{\text{s}}(t)$, in which the drive terms appear as follows: \begin{eqnarray} &&\hat{\mathcal{H}}_{\text{s}}(t) =\om_\ssq\left(\hat{a}^{\dag}\hat{a}+\frac{1}{2}\right)+\om_\ssc\left(\hat{c}^{\dag}\hat{c}+\frac{1}{2}\right)\;\;\;\;\;\; \label{Eq:DispH} \\ && +\frac{\bar{\om}_a}{2} \sum_{n=2}^{\infty} \frac{(-\epsilon)^{n-1}}{(2n)!} \left[u_\textit{aa} \hat{a} + u_\textit{ac} \hat{c} + \eta_x e^{-i\wdr t} + \text{H.c.} \right]^{2n}. \nonumber \end{eqnarray} The Hamiltonian of Eq.~(\ref{Eq:DispH}) is the starting point for the analysis of an arbitrary linearly-driven weakly anharmonic two-mode circuit. The form above is general: the displacement parameter $\eta_x$ and the hybridization coefficients $u_{ij}$ would take different forms for different types of linear coupling between the drives, the qubit, and the cavity. We now proceed to illustrate the distinct role of number non-conserving terms in the renormalization of relaxation rates. We show that we can perform a unitary transformation on the system Hamiltonian that removes number non-conserving terms up to a desired order $\epsilon^n$ in the Hamiltonian. Because $\hat{\mathcal{H}}_{\text{s}}(t)$ is time dependent, the condition that a unitary transformation preserve the dynamics of the Schr\"odinger equation needs to be formulated in terms of the Floquet Hamiltonian, which differs from the Hamiltonian through the addition of the energy operator $-i\partial_t$ \cite{sambe_1973}: \begin{eqnarray} \hat{\mathcal{H}}_{\text{s},\text{eff}}(t) - i\partial_t = e^{-\hat{G}(t)} \left[ \hat{\mathcal{H}}_\text{s}(t) - i\partial_t \right] e^{\hat{G}(t)}. 
\label{Eq:FloquetUGHUG} \end{eqnarray} The antihermitian generator $\hat{G}(t)$ is time-dependent and it is defined by the condition that the \textit{effective} Hamiltonian, $\hat{\mathcal{H}}_{\text{s},\text{eff}}(t)$, contains no number-nonconserving terms up to some order in $\epsilon$. The generator can be found order by order upon an expansion in powers of the anharmonicity, $\hat{G}(t) = \epsilon \hat{G}_4(t) + \epsilon^2 \hat{G}_6(t) + \ldots$, through a hierarchical set of operator-valued ordinary differential equations, which are derived in App.~\ref{Sec:Hierarchy}. In this article, we present the solution for the generator $\hat{G}_4(t)$ that cancels the number-nonconserving terms of the Josephson nonlinearity up to linear order $\epsilon$. To this end we expand the system Hamiltonian in powers of the anharmonicity, to wit \begin{equation} \hat{\mathcal{H}}_{\text{s}}(t) = \hat{\mathcal{H}}_{\text{2}} - \epsilon \hat{\mathcal{H}}_4(t) + \epsilon^2 \hat{\mathcal{H}}_6(t) + \ldots, \label{Eq:HstExpansionMain} \end{equation} and decompose each operator $\hat{\mathcal{H}}_{2n}(t) = \hat{\mathcal{S}}_{2n}(t) + \hat{\mathcal{N}}_{2n}(t)$ into a sum of two normal-ordered operators. These are the number-conserving and number-nonconserving terms, respectively. The condition for the generator can be written to lowest order in the anharmonicity $\epsilon$ in the compact form of a differential equation [see App.~\ref{Sec:Hierarchy}]: \begin{eqnarray} - i \dot{\hat{G}}_4(t) + \left[ \hat{\mathcal{H}}_2, \hat{G}_4(t) \right] = \hat{\mathcal{N}}_4(t),\label{Eq:G4tODEMain} \end{eqnarray} with initial condition $\left[ \hat{\mathcal{H}}_2, \hat{G}_4(0) \right] = \hat{\mathcal{N}}_4(0)$, where $\hat{\mathcal{N}}_4(t)$ contains the number-nonconserving terms arising from the normal-ordered expression of the fourth power in the expansion of the Josephson nonlinearity in Eq.~(\ref{Eq:DispH}). 
The key point here is that there is a major simplification of the operator-valued ordinary differential Eq.~(\ref{Eq:G4tODEMain}) if one expands $\hat{G}_4(t)$ as the sum of all possible normal-ordered ``monomials'' $\hat{a}^{\dagger m} \hat{a}^n \hat{c}^{\dagger p} \hat{c}^q$, which are many-body operators consisting of powers of creation and annihilation operators of the two normal modes. By virtue of the following property of the bosonic algebra, \begin{equation} [\hat{a}^\dagger \hat{a}, \hat{a}^{\dagger m} \hat{a}^n] = (m-n) \hat{a}^{\dagger m} \hat{a}^n, \end{equation} with an analogous form for $\hat{c}$, one can turn Eq.~(\ref{Eq:G4tODEMain}) into a collection of \textit{uncoupled} ordinary differential equations for the complex-valued coefficients of these monomials in the expansion of the generator. Therefore, the generator $\hat{G}_4(t)$ is analytically tractable, and closed-form expressions can be written down for the simplest examples (see App.~\ref{Ap:SWCorr-DrQu} for a one-mode theory), while computer algebra \cite{zitko_2011} can be used for the general situation encountered in the problem of dispersive readout. Once the generator is determined, the first effect of this transformation is that number-nonconserving terms have been removed to order $\epsilon$ from the effective Hamiltonian. The latter takes a Kerr form, containing interactions up to quadratic order in the number operators counting photons in the two normal modes corresponding to qubit and cavity, and terms at most linear in the anharmonicity $\epsilon$: $\hat{\mathcal{H}}_{\text{s},\text{eff}}(t) = \hat{\mathcal{H}}_2 - \epsilon \hat{\mathcal{S}}_4(t)$. Secondly, the action of the generator $\hat{G}(t)$ on the system-bath Hamiltonian yields corrected system operators coupling to the bath noise operator $\hat{Y}_\text{b}$.
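To illustrate the decoupling, suppose $\hat{\mathcal{N}}_4(t)$ contains the term $h\, e^{-i k \wdr t}\, \hat{a}^{\dagger m} \hat{a}^n \hat{c}^{\dagger p} \hat{c}^q$. The corresponding coefficient $g(t)$ in $\hat{G}_4(t)$ then obeys the scalar equation $-i\dot{g}(t) + \Delta g(t) = h\, e^{-i k \wdr t}$, with detuning $\Delta = (m-n)\om_\ssq + (p-q)\om_\ssc$, whose periodic particular solution is $g(t) = h\, e^{-i k \wdr t}/(\Delta - k \wdr)$. Both ingredients can be verified numerically; all parameter values below are hypothetical:

```python
import numpy as np

# --- [n, a^dag^m a^n] = (m - n) a^dag^m a^n in a truncated Fock space ---
N = 10                                        # truncation size (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T
num = ad @ a                                  # number operator
m, n = 3, 1                                   # example monomial a^dag^3 a
mono = np.linalg.matrix_power(ad, m) @ np.linalg.matrix_power(a, n)
assert np.allclose(num @ mono - mono @ num, (m - n) * mono)

# --- resulting scalar ODE: -i g'(t) + Delta g(t) = h exp(-i k wd t), ---
# --- solved by g(t) = h exp(-i k wd t) / (Delta - k wd)              ---
Delta, k, wd, h = 2.7, 2, 1.3, 0.5 - 0.2j     # hypothetical parameters
g = lambda t: h * np.exp(-1j * k * wd * t) / (Delta - k * wd)
t, dt = 0.4, 1e-6
gdot = (g(t + dt) - g(t - dt)) / (2 * dt)     # central-difference derivative
assert np.allclose(-1j * gdot + Delta * g(t), h * np.exp(-1j * k * wd * t))
```

The denominator $\Delta - k\wdr$ is the source of the frequency-mismatch factors appearing in the dressed collapse operators below.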
In the Born-Markov approximation \cite{breuer_petruccione_2002}, this leads to the EME in the Lindblad form: \begin{eqnarray} \dot{\hat{\rho}}(t) = -i \left[ \hat{\mathcal{H}}_{\text{s},\text{eff}}(t), \hat{\rho}(t) \right] + \sum_{j}2 \kappa(\om_j) \mathcal{D}\left[ \hat{C}_{\text{eff}}(\om_j) \right] \hat{\rho}(t),\label{Eq:EMESumIntro} \nonumber \\ \; \end{eqnarray} where $\hat{C}_{\text{eff}}(\om_j)$ are renormalized system collapse operators defined at a set of frequencies $\{\om_j\}$, which are linear combinations involving integer multiples of the normal mode and the drive frequencies, and the dissipator superoperators are defined as usual, $\mathcal{D}[\hat{C}](\bullet)=\hat{C}(\bullet)\hat{C}^{\dag}-1/2\{\hat{C}^{\dag}\hat{C},(\bullet)\}$. Note that we have performed the Born-Markov and secular approximations \textit{after} the application of two unitary transformations on the full Hamiltonian describing the system and its environment: the first, a displacement transformation into the frame rotating at the drive frequency, and the second, a Schrieffer-Wolff transformation that eliminates the number-nonconserving terms. This was the essential step that allowed us to derive drive- and anharmonicity-corrected dissipators. This point underlies the derivation of the EME in App.~\ref{App:EME}. We are now in a position to summarize our main result.
For the readout problem, where the drive is nearly-resonant with the cavity, there are two dominant contributions entering the EME, arising from the following two dissipators: \begin{eqnarray} \hat{C}(\om_\ssq)\approx&&-i\Bigg[v_\textit{ca}-\frac{\epsilon}{8}\left(\frac{\bar{\om}_a}{\om_\ssq}\vcqu_\textit{aa}^2 - 4\frac{\bar{\om}_a\om_\ssq}{\om_\ssc^2-\om_\ssq^2}v_\textit{cc}\uqcu_\textit{aa}\right) \nonumber \\ &&\;\;\;\;\;\;\;\times\left(u_\textit{aa}^2+u_\textit{ac}^2+u_\textit{aa}^2 \hat{n}_\ssq +2u_\textit{ac}^2 \hat{n}_\ssc + 2|\eta_x|^2\right)\Bigg]\hat{a} \nonumber \\ && - \frac{i \epsilon}{2} \frac{\bar{\om}_a}{\om_\ssc} v_\textit{ca} u_\textit{ac} u_\textit{aa}^2 \frac{\wdr}{\om_\ssc-\wdr} \hat{a} \left(\eta_x^*\hat{c} - \eta_x \hat{c}^\dag\right), \label{Eq:CqEffApprox} \end{eqnarray} where $\eta_x$ is a complex number arising from the displacement to the frame rotating with the drive, $\hat{n}_\ssq = \hat{a}^\dagger \hat{a}$ and $\hat{n}_\ssc=\hat{c}^\dagger \hat{c}$, and \begin{eqnarray} \hat{C}(\om_\ssc)\approx&&-i\Bigg[v_\textit{cc}-\frac{\epsilon}{8}\left(\frac{\bar{\om}_a}{\om_\ssc}\vccu_\textit{ac}^2 - 4\frac{\bar{\om}_a\om_\ssc}{\om_\ssq^2-\om_\ssc^2}\vcqu_\textit{aa}u_\textit{ac}\right) \nonumber \\ &&\;\;\;\;\;\;\;\times\left(u_\textit{ac}^2+u_\textit{aa}^2+u_\textit{ac}^2 \hat{n}_\ssc +2u_\textit{aa}^2 \hat{n}_\ssq + 2|\eta_x|^2\right)\Bigg]\hat{c} \nonumber \\ && - i\frac{\epsilon}{8} \frac{\bar{\om}_a \wdr}{\om_\ssc} v_\textit{cc} u_\textit{ac} u_\textit{aa}^2 \frac{\eta_x}{\wdr-\om_\ssc} \hat{n}_\ssq. \nonumber \\ \label{Eq:CcEffApprox} \end{eqnarray} These collapse operators derive from the coupling of the bare cavity to the environment, $\hat{\mathcal{H}}_\text{sb}$, dressed (to lowest order in $\epsilon$) by the number-nonconserving terms of the Josephson anharmonicity [see Sec.~\ref{Sec:PT}]. 
Note that, in addition to scalars rescaling the annihilation operators $\hat{a}$ and $\hat{c}$, there are other contributions which become important in the presence of drive, such as a qubit dephasing term $\hat{a}^\dagger \hat{a}$ appearing in the cavity dissipator, as well as a correlated cavity-qubit relaxation $\hat{a} \hat{c}$ and qubit-cavity conversion $\hat{a} \hat{c}^\dagger$. The correlated decay processes are responsible for stark renormalizations of the qubit relaxation rates, as illustrated in Fig.~\ref{Fig:EMEQ1M1}, which summarizes the numerical results hinging on the EME fully developed in Sec.~\ref{Sec:PT}. The rates associated with the collapse operators~(\ref{Eq:CqEffApprox}) and~(\ref{Eq:CcEffApprox}) correspond to transitions at or nearly at the qubit and cavity normal mode frequencies, respectively: \begin{eqnarray} \kappa_\ssq = \kappa(\om_\ssq) =\frac{1}{2} \SFN (\om_\ssq),\; \kappa_\ssc = \kappa(\om_\ssc) = \frac{1}{2} \SFN (\om_\ssc). \label{Eq:RatesQ1M1} \end{eqnarray} In defining the rates above, we needed the bilateral power spectral density corresponding to the bosonic bath described by $\hat{\mathcal{H}}_\text{b}$, defined as the zero-temperature limit of the Fourier transform of the finite-temperature two-point correlation function: \begin{equation} \SFN (\om) = \lim_{T \to 0}\int_{-\infty}^\infty d\tau \, e^{-i\om \tau} \text{Tr} \left[ \frac{e^{- \hat{\mathcal{H}}_\text{b}/ k_\text{B} T}}{Z_\text{b}(T)} \hat{Y}_{\text{b}}(\tau) \, \hat{Y}_{\text{b}}(0) \right], \label{Eq:SFN} \end{equation} where the bath partition function is \begin{equation} Z_\text{b}(T) = \text{Tr} \left[ e^{- \hat{\mathcal{H}}_\text{b}/ k_\text{B} T} \right] \end{equation} and the bath modes are assumed to be in thermal equilibrium, obeying Bose-Einstein statistics: $\text{Tr}_\text{b} \left\{ \hat{B}_k \hat{B}_l^\dag \right\} \equiv \delta_{kl} (1+n_k)$ and $\text{Tr}_\text{b} \left\{ \hat{B}_k^\dag \hat{B}_l \right\} \equiv \delta_{kl} n_k$, where $n_k =\left[e^{\om_k/(k_B T)} - 1\right]^{-1}$ is the value of the Bose-Einstein distribution at energy $\om_k$ and temperature $T$; the limit $T \to 0$ is taken at the end, as in Eq.~(\ref{Eq:SFN}). We conclude our presentation of the model and the main steps towards obtaining the EME by reiterating the main property underlying the derivation of the EME to lowest order in $\epsilon$: Corrections to the eigenfrequencies are captured by the number-conserving terms in $\hat{\mathcal{H}}_{\text{s},\text{eff}}$, whereas the renormalized dissipators in~(\ref{Eq:EMESumIntro}) arise from the number-nonconserving terms of the Josephson nonlinearity. Correlated processes between the qubit and the cavity in the presence of drive can result in a significant drive-dependent renormalization of the qubit relaxation rates. \section{Effective Master Equation for the readout problem} \label{Sec:PT} In this section we carry out the program outlined in Sec.~\ref{Sec:Model} for the EME describing dispersive readout. We develop the perturbation theory for a weakly anharmonic qubit coupled to an open driven resonator, shown schematically in Fig.~\ref{Fig:Circuits}. We confine ourselves to the analysis of the enhancement of the Purcell effect in the presence of drive and anharmonicity. For a pedagogical application of the method, we point the reader to App.~\ref{Ap:SWCorr-DrQu}, where we consider a one-mode theory of a weakly driven, weakly anharmonic qubit coupled to an infinite waveguide, which yields the effective dressing of the qubit decay rate and frequency. That toy problem contains all the essential ingredients of the methodology to derive the EME and sets the stage for the readout problem treated in this section. The remainder of this section is organized as follows. Subsection~\ref{SubSec:DerivEME} contains the derivation of the EME for dispersive readout.
Equations~(\ref{Eq:EMESum}) and~(\ref{Eq:CqEff}) contain the main results, with approximate forms applicable to the typical scenario for dispersive readout, in which the drive is close to resonant with the cavity normal mode frequency, given in Eqs.~(\ref{Eq:CqEffApproxSec}) and~(\ref{Eq:CcEffApproxSec}). The reader interested primarily in the numerical results may skip directly to Subsec.~\ref{SubSec:NumerEME}, where the EME numerical simulations are discussed, with numerical results summarized in Fig.~\ref{Fig:EMEQ1M1}. \subsection{Derivation of EME} \label{SubSec:DerivEME} With number-nonconserving terms removed from the driven system Hamiltonian $\hat{\mathcal{H}}_{\text{s}}$, their effect carries over to two different quantities appearing in the dynamical equations. First, applying the unitary transformation derived from the condition above to the system-bath coupling yields a renormalized system quadrature coupling to the bath [cf. Eq.~(\ref{Eq:HSB})]: \begin{eqnarray} \hat{\mathcal{H}}_{\text{sb}} \to e^{-\hat{G}(t)}\hat{\mathcal{H}}_{\text{sb}} e^{\hat{G}(t)} = \hat{\mathcal{H}}_{\text{sb}} + \epsilon \left[ \hat{\mathcal{H}}_{\text{sb}}, \hat{G}_4(t) \right] + O(\epsilon^2). \nonumber\\ \; \end{eqnarray} Secondly, the unitary must be applied to the system reduced density matrix, which becomes \begin{eqnarray} \hat{\rho}_{\text{s}}(t) \to e^{-\hat{G}(t)} \hat{\rho}_{\text{s}}(t) e^{\hat{G}(t)} = \hat{\rho}_{\text{s}}(t) + \epsilon \left[ \hat{\rho}_{\text{s}}(t), \hat{G}_4(t) \right] + O(\epsilon^2). \nonumber\\ \; \end{eqnarray} We show in this section that, among the many terms that correct the quadratures, there is a simple rescaling of the qubit and cavity collapse operators leading to the enhancement of relaxation rates.
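The $O(\epsilon)$ truncation of these transformed quantities can be sanity-checked on generic matrices: for an antihermitian generator $\hat{G} = \epsilon \hat{G}_4$, the residual of $e^{-\hat{G}} \hat{A} e^{\hat{G}} \approx \hat{A} + \epsilon [\hat{A}, \hat{G}_4]$ must scale as $\epsilon^2$. A sketch with random matrices standing in for the operators (sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

def expm(M, terms=20):
    """Matrix exponential by Taylor series (adequate for small-norm M)."""
    out, term = np.eye(len(M), dtype=complex), np.eye(len(M), dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out += term
    return out

A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
G1 = (B - B.conj().T) / 2            # antihermitian, plays the role of G_4

def residual(eps):
    G = eps * G1
    exact = expm(-G) @ A @ expm(G)
    first = A + eps * (A @ G1 - G1 @ A)   # A + eps [A, G1]
    return np.linalg.norm(exact - first)

# halving eps reduces the residual by roughly a factor of four: O(eps^2)
assert residual(1e-3) / residual(2e-3) < 0.3
```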
The Hamiltonian describing the setup of Fig.~\ref{Fig:Circuits}, which is an idealization of the circuit used in dispersive readout schemes, is: \begin{eqnarray} \hat{\mathcal{H}} = \hat{\mathcal{H}}_\text{s}(t) + \hat{\mathcal{H}}_\text{b} + \hat{\mathcal{H}}_{\text{sb}}, \end{eqnarray} where $\hat{\mathcal{H}}_\text{s}(t)$ is the displaced system Hamiltonian introduced in Eq.~(\ref{Eq:DispH}) truncated after the linear order in the anharmonicity $\epsilon$, \begin{eqnarray} \hat{\mathcal{H}}_{\text{s}}(t) &=&\om_\ssq\left(\hat{a}^{\dag}\hat{a}+\frac{1}{2}\right)+\om_\ssc\left(\hat{c}^{\dag}\hat{c}+\frac{1}{2}\right) \label{Eq:DispHRep} \;\;\;\;\;\; \\ && -\frac{\epsilon\bar{\om}_a}{48}\left(u_\textit{aa} \hat{a} +u_\textit{ac} \hat{c} + \eta_x e^{-i\wdr t} + \text{H.c.} \right)^4. \nonumber \end{eqnarray} The system-bath coupling $\hat{\mathcal{H}}_\text{sb}$ was expressed already in Eq.~(\ref{Eq:HSB}) and $\hat{\mathcal{H}}_\text{b}$ is the Hamiltonian describing the bath modes. Note that although only the bare cavity was driven, now both the qubit-like and the cavity-like normal modes are subjected to the drive due to hybridization: \begin{eqnarray} \eta_x = u_\textit{aa} \eta_{\ssq,x} + u_\textit{ac} \eta_{\ssc,x}. \end{eqnarray} The coherent parts corresponding to each normal mode are given by \begin{eqnarray} \eta_{\ssq,x} &=& \frac{v_\textit{ca} \ed (\wdr+i\kappa_\ssq)}{\om_\ssq^2-(\wdr+i\kappa_\ssq)^2}, \nonumber \\ \eta_{\ssc,x}&=& \frac{v_\textit{cc}\ed (\wdr + i\kappa_\ssc)}{\om_\ssc^2 - (\wdr + i\kappa_\ssc)^2}. \end{eqnarray} These are the amplitudes of the displacement of the phase quadrature for the two normal modes $\hat{a},\hat{c}$. Note that these expressions depend explicitly on the relaxation rates and they are obtained from the linear theory [for a derivation, see App.~\ref{Ap:ShiftME}]. 
That is, if the anharmonicity were turned off, $\epsilon=0$, then the steady state population of the cavity would be \begin{equation} \bar{n}_\ssc = |(\eta_{\ssc,x} + i \eta_{\ssc,y})/2|^2, \label{Eq:nbarc} \end{equation} where $\eta_{\ssc,y} = -i \om_\ssc/(\wdr + i\kappa_\ssc) \eta_{\ssc,x}$ is the corresponding amplitude of the displacement of the charge quadrature. Note that since the hybridization between the cavity and the qubit is typically taken to be weak, the dressed cavity is only weakly nonlinear, and therefore we can use Eq.~(\ref{Eq:nbarc}) as a very good estimate of the actual numerical steady state population. We now follow the same program as in the previous section to find the generator $\hat{G}(t)$ to lowest order in $\epsilon$ that removes the number-nonconserving terms of the nonlinear potential of Eq.~(\ref{Eq:DispHRep}), according to the general condition~(\ref{Eq:FloquetUGHUG}). The generator $\hat{G}_4(t)$ has been obtained by analogy to the one-mode theory [App.~\ref{Ap:SWCorr-DrQu}] using computer algebra \cite{zitko_2011}. 
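Equation~(\ref{Eq:nbarc}) and the displacement amplitudes above translate into a short routine; the parameter values below are hypothetical and serve only to illustrate that the linear-theory cavity population is sharply peaked when the drive is resonant with the cavity normal mode:

```python
def cavity_population(eps_d, wd, wc, kappa_c, v_cc=1.0):
    """Linear-theory steady-state cavity population of Eq. (nbarc).

    All arguments are illustrative: eps_d is the drive strength, wd the
    drive frequency, wc the cavity normal-mode frequency, kappa_c the
    cavity decay rate, v_cc the hybridization coefficient.
    """
    pole = wd + 1j * kappa_c
    eta_cx = v_cc * eps_d * pole / (wc**2 - pole**2)   # phase displacement
    eta_cy = -1j * wc / pole * eta_cx                  # charge displacement
    return abs((eta_cx + 1j * eta_cy) / 2.0) ** 2

wc, kappa_c, eps_d = 5.0, 0.005, 0.02     # hypothetical angular frequencies
n_res = cavity_population(eps_d, wc, wc, kappa_c)         # resonant drive
n_det = cavity_population(eps_d, wc - 0.5, wc, kappa_c)   # detuned drive
assert n_res > 100 * n_det   # population sharply peaked on resonance
```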
The number-conserving terms of the quartic nonlinearity amount to the following contributions \begin{eqnarray}\; \epsilon \hat{\mathcal{S}}_4(t) = \lambda_\ssq(t) \hat{n}_\ssq + \lambda_\ssc(t) \hat{n}_\ssc + \chi_\textit{ac} \hat{n}_\ssq \hat{n}_\ssc + \alpha_\ssq \hat{n}_\ssq^2 + \alpha_\ssc \hat{n}_\ssc^2 \label{Eq:S4Text}, \nonumber \\ \; \end{eqnarray} with \begin{eqnarray} \lambda_\ssq(t)&=& \epsilon \frac{ \bar{\om}_a}{8} u_\textit{aa}^2 \left[4\Real{\eta_x^2 e^{2 i \wdr t}} + 4|\eta_x|^2 +u_\textit{aa}^2 + 2u_\textit{ac}^2\right], \nonumber \\ \lambda_\ssc(t)&=& \epsilon \frac{ \bar{\om}_a}{8} u_\textit{ac}^2 \left[4\Real{\eta_x^2 e^{2 i \wdr t}} + 4|\eta_x|^2 +u_\textit{ac}^2+2u_\textit{aa}^2\right], \nonumber \\ \chi_\textit{ac} &=& \epsilon \frac{ \bar{\om}_a}{4} u_\textit{ac}^2 u_\textit{aa}^2, \;\; \alpha_\ssq = \epsilon \frac{ \bar{\om}_a}{8} u_\textit{aa}^4, \; \; \alpha_\ssc = \epsilon \frac{ \bar{\om}_a}{8} u_\textit{ac}^4. \label{Eq:DefinitionsEffectiveHam} \end{eqnarray} These terms enter the effective Hamiltonian: \begin{eqnarray} \hat{\mathcal{H}}_{\text{s},\text{eff}} (t) = [\om_\ssq - \lambda_\ssq(t)] \hat{n}_\ssq + [\om_\ssc - \lambda_\ssc(t)] \hat{n}_\ssc \nonumber \\ - \chi_\textit{ac} \hat{n}_\ssq \hat{n}_\ssc - \alpha_\ssq \hat{n}_\ssq^2 - \alpha_\ssc \hat{n}_\ssc^2. \end{eqnarray} This form includes AC Stark shift contributions on the first row, and cross-Kerr, and self-Kerr contributions, respectively, on the second row. On the one hand, $\hat{\mathcal{H}}_{\text{s},\text{eff}}(t)$ is the quantum non-demolition Hamiltonian required for dispersive measurement in circuit QED. On the other hand, the explicit form above shows that, at linear order in $\epsilon$, the qubit transition frequencies acquire a dependence on the qubit and cavity states as well as on the drive power. Next, we address the system-bath coupling in order to categorize all the possible relaxation processes induced by the number non-conserving terms. 
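The coefficients of Eq.~(\ref{Eq:DefinitionsEffectiveHam}) can be transcribed into a small routine; note in particular the relation $\chi_\textit{ac} = 2\sqrt{\alpha_\ssq \alpha_\ssc}$ between cross-Kerr and self-Kerr terms, which follows immediately from the expressions above. All numerical inputs in the sketch below are hypothetical:

```python
import math

def kerr_coefficients(eps, wa_bar, u_aa, u_ac, eta_abs):
    """Time-averaged coefficients of Eq. (DefinitionsEffectiveHam).

    The oscillating 4 Re[eta_x^2 exp(2 i wd t)] pieces of lambda_q and
    lambda_c are dropped (time average); all inputs are illustrative.
    """
    lam_q = eps * wa_bar / 8 * u_aa**2 * (4 * eta_abs**2 + u_aa**2 + 2 * u_ac**2)
    lam_c = eps * wa_bar / 8 * u_ac**2 * (4 * eta_abs**2 + u_ac**2 + 2 * u_aa**2)
    chi_ac = eps * wa_bar / 4 * u_aa**2 * u_ac**2    # cross-Kerr
    alpha_q = eps * wa_bar / 8 * u_aa**4             # qubit self-Kerr
    alpha_c = eps * wa_bar / 8 * u_ac**4             # cavity self-Kerr
    return lam_q, lam_c, chi_ac, alpha_q, alpha_c

# hypothetical values: strongly qubit-like mode a, weak hybridization
lam_q, lam_c, chi, a_q, a_c = kerr_coefficients(
    eps=0.1, wa_bar=30.0, u_aa=0.99, u_ac=0.12, eta_abs=1.5)
assert math.isclose(chi, 2 * math.sqrt(a_q * a_c))   # chi = 2 sqrt(aq * ac)
# the AC Stark shift lambda_q grows linearly with drive power |eta_x|^2
assert kerr_coefficients(0.1, 30.0, 0.99, 0.12, 3.0)[0] > lam_q
```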
For this, as before, we calculate the corrections to the dressed system quadratures $\hat{Y}_\ssq$ and $\hat{Y}_\ssc$ which enter the system-bath couplings, Eq.~(\ref{Eq:HSB}). These quadratures transform according to \begin{eqnarray} \hat{Y}_\ssq &\to& \hat{Y}_\ssq + \epsilon \left[ \hat{Y}_\ssq, \hat{G}_4(t) \right] + O(\epsilon^2), \nonumber \\ \hat{Y}_\ssc &\to& \hat{Y}_\ssc + \epsilon \left[ \hat{Y}_\ssc, \hat{G}_4(t) \right] + O(\epsilon^2). \end{eqnarray} We focus first on the corrections to the qubit quadrature, \textit{i.e.} $\left[ \hat{Y}_\ssq, \hat{G}_4(t) \right]$, which will induce corrections to qubit relaxation. The resulting expressions are lengthy; they can be found in App.~\ref{Ap:Tables} (Tables~\ref{Tab:CoeffsQuadQ}, \ref{Tab:CoeffsQuadC}, and \ref{Tab:CoeffsQuadQC} for qubit-only, cavity-only and mixed processes, respectively). The results for the corrected cavity quadrature, $\left[ \hat{Y}_\ssc, \hat{G}_4(t) \right]$, can be found by applying the following transformation to the three tables: $\om_\ssq \leftrightarrow \om_\ssc$, $u_\textit{aa} \leftrightarrow u_\textit{ac}$, $v_\textit{aa} \leftrightarrow v_\textit{ac}$, and $\hat{a} \leftrightarrow \hat{c}$, while leaving $\bar{\om}_a$ intact. To derive the EME, we next express the renormalized qubit quadrature in the interaction picture with respect to $\hat{\mathcal{H}}_{\text{s},\text{eff}}(t) + \hat{\mathcal{H}}_\text{b}$. 
This amounts to a sum of operators effecting transitions between the states of the effective Hamiltonian, multiplied by phase factors oscillating at the transition frequency [for a detailed derivation, see App.~\ref{App:EME}]: \begin{eqnarray} && e^{i \int_0^t dt' \hat{\mathcal{H}}_{\text{s},\text{eff}}(t')} \left\{ \hat{Y}_\ssq + \epsilon \left[ \hat{Y}_\ssq, \hat{G}_4\right] \right\} e^{-i \int_0^t dt' \hat{\mathcal{H}}_{\text{s},\text{eff}}(t')} \nonumber \\ &&\;\;\;\;\;\;\;\;\;\equiv \sum_j \hat{C}(\om_j) e^{i \om_j t}, \end{eqnarray} where $j$ indexes a discrete set of frequencies $\{\om_1,\om_2,...\}$ which are linear combinations of $\om_{\text{d}},\om_\ssq,$ and $\om_\ssc$. $\hat{C}(\om_j)$ are operators at most linear in $\epsilon$, which will enter the dissipators of the EME, according to the prescription: \begin{eqnarray} \hat{C}(\om_j) e^{i\om_j t} \to 2 \kappa(\om_j) \mathcal{D}\left[ \hat{C}(\om_j) \right], \end{eqnarray} where $2\kappa(\om_j) = \SFN(\om_j)$. To order $\epsilon$, the effective collapse operator for the qubit is: \begin{widetext} \begin{eqnarray} \hat{C}(\om_\ssq)=&&-i\Bigg[v_\textit{ca}-\frac{\epsilon}{8}\left(\frac{\bar{\om}_a}{\om_\ssq}\vcqu_\textit{aa}^2 - 4\frac{\bar{\om}_a\om_\ssq}{\om_\ssc^2-\om_\ssq^2}v_\textit{cc}\uqcu_\textit{aa}\right) \left(u_\textit{aa}^2+u_\textit{ac}^2+u_\textit{aa}^2 \hat{n}_\ssq +2u_\textit{ac}^2 \hat{n}_\ssc + 2|\eta_x|^2\right)\Bigg]\hat{a} \nonumber \\ && - i\frac{\epsilon}{8} \frac{\bar{\om}_a \wdr}{\om_\ssq} v_\textit{ca} u_\textit{aa}^2 \left[ \frac{\eta_x^2 }{\wdr+\om_\ssq} + \frac{\eta_x^{*2}}{\wdr-\om_\ssq} \right] \hat{a}^\dag \nonumber \\ && +\frac{i \epsilon}{2} \frac{\bar{\om}_a \wdr}{\om_\ssc-\om_\ssq} v_\textit{ca} u_\textit{aa} u_\textit{ac} \left[\frac{\eta_x^2}{2\wdr+(\om_\ssc-\om_\ssq)} + \frac{\eta_x^{*2}}{2\wdr-(\om_\ssc-\om_\ssq)} \right] \hat{c} \nonumber \\ && -\frac{i \epsilon}{2} \frac{\bar{\om}_a \wdr}{\om_\ssc+\om_\ssq} v_\textit{ca} u_\textit{aa} u_\textit{ac} 
\left[\frac{\eta_x^{*2}}{2\wdr+(\om_\ssc+\om_\ssq)} + \frac{\eta_x^2}{2\wdr-(\om_\ssc+\om_\ssq)} \right] \hat{c}^\dagger \nonumber \\ && - i\frac{\epsilon}{2} \frac{\bar{\om}_a \wdr}{\om_\ssq} v_\textit{ca} u_\textit{aa}^3 \left[ \frac{\eta_x^*}{\wdr+\om_\ssq} + \frac{\eta_x}{\wdr-\om_\ssq} \right] \hat{n}_\ssq - i\frac{\epsilon}{8} \frac{\bar{\om}_a \wdr}{\om_\ssq} v_\textit{ca} u_\textit{aa} u_\textit{ac}^2 \left[ \frac{\eta_x^*}{\wdr+\om_\ssq} + \frac{\eta_x}{\wdr-\om_\ssq} \right] \hat{n}_\ssc \nonumber \\ && +i\frac{\epsilon}{4} \frac{\bar{\om}_a \wdr}{\om_\ssq} v_\textit{ca} u_\textit{aa}^3 \left[ \frac{\eta_x^*}{\wdr-\om_\ssq} + \frac{\eta_x}{\wdr+\om_\ssq} \right] \hat{a}^2 - i\frac{\epsilon}{4} \frac{\bar{\om}_a \wdr}{3\om_\ssq} v_\textit{ca} u_\textit{aa}^3 \left[ \frac{\eta_x}{\wdr-3\om_\ssq} + \frac{\eta_x^* }{\wdr+3\om_\ssq} \right] \hat{a}^{\dagger 2} \nonumber \\ && +i\frac{\epsilon}{4} \frac{\bar{\om}_a \wdr}{2\om_\ssc-\om_\ssq} v_\textit{ca} u_\textit{aa} u_\textit{ac}^2 \left[ \frac{\eta_x}{\wdr+ (2\om_\ssc - \om_\ssq)} + \frac{\eta_x^*}{\wdr-(2\om_\ssc - \om_\ssq)} \right] \hat{c}^2 \nonumber \\ && -i\frac{\epsilon}{4} \frac{\bar{\om}_a \wdr}{2\om_\ssc+\om_\ssq} v_\textit{ca} u_\textit{aa} u_\textit{ac}^2 \left[ \frac{\eta_x^*}{\wdr+(2\om_\ssc + \om_\ssq)} + \frac{\eta_x}{\wdr-(2\om_\ssc + \om_\ssq)} \right] \hat{c}^{\dagger 2} \nonumber \\ && + \frac{i \epsilon}{2} \frac{\bar{\om}_a \wdr}{\om_\ssc} v_\textit{ca} u_\textit{ac} u_\textit{aa}^2 \left[ \frac{\eta_x^*}{\wdr-\om_\ssc} + \frac{\eta_x}{\wdr+\om_\ssc} \right] \hat{a} \hat{c} - \frac{i \epsilon}{2} \frac{\bar{\om}_a \wdr}{\om_\ssc} v_\textit{ca} u_\textit{ac} u_\textit{aa}^2 \left[ \frac{\eta_x}{\wdr-\om_\ssc} + \frac{\eta_x^*}{\wdr+\om_\ssc} \right] \hat{a} \hat{c}^\dag \nonumber \\ && + \frac{i \epsilon}{2} \frac{\bar{\om}_a \wdr}{\om_\ssc-2\om_\ssq} v_\textit{ca} u_\textit{ac} u_\textit{aa}^2 \left[ \frac{\eta_x}{\wdr+(\om_\ssc-2\om_\ssq)} + \frac{\eta_x^*}{\wdr-(\om_\ssc-2\om_\ssq)} 
\right] \hat{a}^\dag \hat{c} \nonumber \\ && - \frac{i \epsilon}{2} \frac{\bar{\om}_a \wdr}{\om_\ssc+2\om_\ssq} v_\textit{ca} u_\textit{ac} u_\textit{aa}^2 \left[ \frac{\eta_x^*}{\wdr+(\om_\ssc+2\om_\ssq)} + \frac{\eta_x}{\wdr-(\om_\ssc+2\om_\ssq)} \right] \hat{a}^\dag \hat{c}^\dag. \label{Eq:CqEff} \end{eqnarray} \end{widetext} One can determine the effective collapse operator for the cavity normal mode, $\hat{C}(\om_\ssc)$, by replacing $\hat{a} \leftrightarrow \hat{c}$, $\om_\ssq \leftrightarrow \om_\ssc$, $u_\textit{aa} \leftrightarrow u_\textit{ac}$, $u_\textit{cc} \leftrightarrow u_\textit{ca}$, and $v_\textit{ca} \leftrightarrow v_\textit{cc}$, while $\bar{\om}_a$ remains fixed. We note that there are other single-photon contributions resulting in dissipators at frequencies different from $\om_\ssc$ and $\om_\ssq$. Nonetheless, these contributions are order $\epsilon^2$ in the EME, and we therefore neglect them. The collapse operators derived above enter the EME for the qubit coupled to the resonator: \begin{eqnarray} \dot{\hat{\rho}}(t) = -i \left[ \hat{\mathcal{H}}_{\text{s},\text{eff}}(t), \hat{\rho}(t) \right] + \sum_{j=\ssq,\ssc}2 \kappa(\om_j) \mathcal{D}\left[ \hat{C}(\om_j) \right] \hat{\rho}(t).\label{Eq:EMESum} \nonumber \\ \; \end{eqnarray} State-dependent relaxation rates can be obtained as before from the Fock-state representation of the EME, which we omit here for brevity. \begin{figure}[t!] \includegraphics[width=\linewidth]{fig_magnitudes_run17b_cplx.pdf} \caption{\label{Fig:EMEQ1M1Mag}Magnitudes of a) $\eta_{x}$ as a function of $\bar{n}_\ssc$ for the parameters chosen for the EME simulation (see text); b) for the same parameters, the magnitudes of the most significant terms in the EME.} \end{figure} We now turn to the analysis of the various contributions entering the EME. We can simplify the expressions and distill an analytical interpretation of the numerical results for the parameter regime chosen. 
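The structure of the dissipators entering Eq.~(\ref{Eq:EMESum}) can be illustrated with a minimal numerical sketch of a single Lindblad term, $\mathcal{D}[\hat{C}]\hat{\rho} = \hat{C}\hat{\rho}\hat{C}^\dagger - \tfrac{1}{2}\{\hat{C}^\dagger\hat{C},\hat{\rho}\}$. The collapse operator and state below are toy two-level placeholders, not the $\hat{C}(\om_j)$ derived above; the point is only the trace-free, Hermiticity-preserving structure that makes the EME a valid master equation:

```python
import numpy as np

def dissipator(C, rho):
    """Lindblad dissipator D[C] rho = C rho C^dag - (1/2){C^dag C, rho}."""
    CdC = C.conj().T @ C
    return C @ rho @ C.conj().T - 0.5 * (CdC @ rho + rho @ CdC)

# Toy two-level collapse operator (sigma_minus) and an arbitrary valid state.
C = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
rho = np.array([[0.3, 0.1 + 0.05j], [0.1 - 0.05j, 0.7]], dtype=complex)

drho = dissipator(C, rho)
# The dissipator is trace-free, so the master equation preserves tr(rho),
# and it maps Hermitian states to Hermitian derivatives.
print(abs(np.trace(drho)))  # 0.0
```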
By direct calculation, we have obtained that the leading contributions to the dissipators of Eq.~(\ref{Eq:EMESum}) are as follows. For the qubit dissipator, there is the dressed single-photon relaxation in the operator $\hat{a}$, along with a correlated relaxation process $\hat{a}(\hat{c}-\hat{c}^\dagger)$ which is large when the drive is nearly resonant with the cavity \begin{eqnarray} \hat{C}(\om_\ssq)\approx&&-i\Bigg[v_\textit{ca}-\frac{\epsilon}{8}\left(\frac{\bar{\om}_a}{\om_\ssq}\vcqu_\textit{aa}^2 - 4\frac{\bar{\om}_a\om_\ssq}{\om_\ssc^2-\om_\ssq^2}v_\textit{cc}\uqcu_\textit{aa}\right) \nonumber \\ &&\;\;\;\;\;\;\;\times\left(u_\textit{aa}^2+u_\textit{ac}^2+u_\textit{aa}^2 \hat{n}_\ssq +2u_\textit{ac}^2 \hat{n}_\ssc + 2|\eta_x|^2\right)\Bigg]\hat{a} \nonumber \\ && - \frac{i \epsilon}{2} \frac{\bar{\om}_a}{\om_\ssc} v_\textit{ca} u_\textit{ac} u_\textit{aa}^2 \frac{\wdr}{\om_\ssc-\wdr} \hat{a} \left(\eta_x^*\hat{c} - \eta_x \hat{c}^\dag\right). \label{Eq:CqEffApproxSec} \end{eqnarray} Turning to the cavity dissipator, there are two leading contributions, one corresponding to single photon decay via $\hat{c}$, and one corresponding to qubit dephasing via $\hat{a}^\dagger \hat{a}$: \begin{eqnarray} \hat{C}(\om_\ssc)\approx&&-i\Bigg[v_\textit{cc}-\frac{\epsilon}{8}\left(\frac{\bar{\om}_a}{\om_\ssc}\vccu_\textit{ac}^2 - 4\frac{\bar{\om}_a\om_\ssc}{\om_\ssq^2-\om_\ssc^2}\vcqu_\textit{aa}u_\textit{ac}\right) \nonumber \\ &&\;\;\;\;\;\;\;\times\left(u_\textit{ac}^2+u_\textit{aa}^2+u_\textit{ac}^2 \hat{n}_\ssc +2u_\textit{aa}^2 \hat{n}_\ssq + 2|\eta_x|^2\right)\Bigg]\hat{c} \nonumber \\ && - i\frac{\epsilon}{8} \frac{\bar{\om}_a \wdr}{\om_\ssc} v_\textit{cc} u_\textit{ac} u_\textit{aa}^2 \frac{\eta_x}{\wdr-\om_\ssc} \hat{n}_\ssq. \nonumber \\ \label{Eq:CcEffApproxSec} \end{eqnarray} Subleading corrections from the remaining terms in Eq.~(\ref{Eq:CqEff}) are at least two orders of magnitude smaller for the parameters chosen. 
In the next subsection we provide numerical estimates for the relative sizes of these contributions in the EME. \subsection{Numerical results} \label{SubSec:NumerEME} Let us now turn to our numerical results based on Eq.~(\ref{Eq:EMESum}), shown in Fig.~\ref{Fig:EMEQ1M1}. Our aim is to illustrate qubit relaxation in the presence of a steady state population in the cavity. This imposes certain constraints on the numerical parameters for the simulation of the EME. We have chosen (in rescaled units where ``1'' corresponds to $10$ GHz for typical experiments) \begin{eqnarray} \bar{\om}_a = 0.77 \pi, \; \bar{\om}_c = \pi,\; g = 0.025 \pi, \nonumber \\ \end{eqnarray} for the bare qubit and cavity frequencies, and qubit-cavity coupling $g$, respectively, amounting to $n_\text{crit} = \left[\Delta/(2g)\right]^2 \approx 21$ and hence the following ratio of quality factors of the dressed qubit and cavity: \begin{equation} \frac{Q_\ssq}{Q_\ssc} = \frac{\om_\ssq}{\om_\ssc} \frac{\kappa_\ssc}{\kappa_\ssq} \approx 51.5. \label{Eq:QFactorRatio} \end{equation} This choice for the bare Q-factors guarantees that the population $\langle \hat{c}^\dagger \hat{c} \rangle(t)$ relaxes to the steady state value, with a mean population $\bar{n}_c$, markedly faster than the qubit population. Additionally, we have chosen the anharmonicity parameter $\epsilon=0.1$, which corresponds to $E_\text{C}/E_\text{J} = 1/200$. The drive frequency is detuned from the cavity frequency by half of the cross-Kerr interaction between cavity and qubit, which is the typical situation for dispersive readout \cite{blais_et_al_2004,Wallraff_Strong_2004}: \begin{eqnarray} \wdr = \om_\ssc - \chi_\textit{ac} / 2, \end{eqnarray} with $\chi_{\textit{ac}} = \epsilon\om_\ssq u_\textit{aa}^2 u_\textit{ac}^2/2 \approx 1.7 \times 10^{-3} \bar{\om}_c$.
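As a consistency check on these parameter choices, the quoted critical photon number follows directly from $n_\text{crit} = \left[\Delta/(2g)\right]^2$ with $\Delta = \bar{\om}_c - \bar{\om}_a$; a minimal computation using the values quoted above:

```python
import math

# Bare frequencies and coupling in the rescaled units of the text
# ("1" corresponds to 10 GHz), as chosen for the EME simulation.
omega_a = 0.77 * math.pi   # bare qubit frequency
omega_c = 1.00 * math.pi   # bare cavity frequency
g = 0.025 * math.pi        # qubit-cavity coupling

delta = omega_c - omega_a          # bare detuning
n_crit = (delta / (2 * g)) ** 2    # critical photon number

print(round(n_crit, 2))  # 21.16, i.e. n_crit ~ 21 as quoted in the text
```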
Moreover, the initial state corresponds to one photon in the hybridized qubit mode, and the vacuum state for the cavity, that is \begin{equation} \hat{\rho}(0) = |1_\ssq 0_\ssc \rangle \langle 1_\ssq 0_\ssc |. \end{equation} By virtue of our choices of Q-factors in Eq.~(\ref{Eq:QFactorRatio}), the population of the qubit, which is in the excited state at the beginning of the simulation according to the initial density matrix $\hat{\rho}(0)$, will relax slowly in the presence of a relatively rapidly stabilizing steady-state population of the cavity, $\bar{n}_\ssc$. Note that a cavity relaxation rate $\kappa_\ssc \approx 10^{-2} \pi$ that is overwhelmingly large compared to the dispersive shift $\chi_{\textit{ac}}$ is not typical for dispersive readout. Working at low quality factors is imposed by the need to complete the simulations in a reasonable amount of time: since no rotating-wave approximation is performed, the dynamics involves widely different timescales. However, as our expressions show, we expect the EME to correct the relaxation rates multiplicatively: that is, an order of magnitude decrease of the cavity relaxation rate $\kappa_\ssc$ is expected to result in an order of magnitude decrease in the corrections predicted by the EME. This is why we present our relaxation rates rescaled by the bare relaxation rates instead of absolute units. We plot the expectation value of the photon number operator corresponding to the hybridized qubit, $\hat{a}^\dag \hat{a}$, and extract the leading exponential decay in its time-dependence. Figure~\ref{Fig:EMEQ1M1}\textit{a}) shows this time dependence for variable drive strength, parametrized by the mean steady-state population of the cavity $\bar{n}_\ssc$ [plotted in Fig.~\ref{Fig:EMEQ1M1}\textit{b)}]. The leading dependence of $\langle\hat{a}^\dag\hat{a}\rangle$ is exponential, and the rate of decay increases visibly with drive power.
To extract numerically the relaxation rate of the qubit, $\kappa_\ssq^{\text{EME}}$, as a function of $\bar{n}_\ssc$, we assume the following form for the transient qubit population: \begin{equation} \langle \hat{a}^\dagger \hat{a} \rangle (t) = e^{-2 \kappa_\ssq^{\text{EME}} t} + ..., \end{equation} where the ellipsis contains subleading oscillatory terms (negligible for our parameter choices). The result of this fit is summarized in Fig.~\ref{Fig:EMEQ1M1}\textit{c)}, where the relaxation rate obtained from fitting the EME curves of Fig.~\ref{Fig:EMEQ1M1}a) is plotted versus $\bar{n}_\ssc$: \begin{equation} \frac{\delta\kappa_\ssq^{\text{EME}}(\bar{n}_\ssc)}{\kappa_\ssq^\text{EME}(0)} \equiv \frac{\kappa_\ssq^{\text{EME}}(\bar{n}_\ssc) - \kappa_\ssq^\text{EME}(0)}{\kappa_\ssq^\text{EME}(0)}. \label{Eq:Fitted} \end{equation} For the left-hand side of Eq.~(\ref{Eq:Fitted}), we obtain a monotonically increasing correction to the qubit relaxation rate, with almost-linear behavior at low cavity photon number [solid red curve in Fig.~\ref{Fig:EMEQ1M1}\textit{c})]. This increase is primarily due to the nearly-resonant behavior of the correlated decay term in Eq.~(\ref{Eq:CqEffApproxSec}). Note that, since the hybridization between the qubit mode and the cavity is weak, the EME dynamics closely reproduces the steady state population of the cavity predicted by the linear theory. This is illustrated, for example, by the cavity population, plotted as a function of time and drive strength in Fig.~\ref{Fig:EMEQ1M1}\textit{b}). A comparison of the relaxation dynamics of the cavity and qubit populations in the first two panels of Fig.~\ref{Fig:EMEQ1M1} reveals that the cavity population relaxes on a time scale which is markedly shorter than the interval of transient exponential decay of the qubit mode.
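The extraction of $\kappa_\ssq^{\text{EME}}$ from the transient can be sketched as a log-linear least-squares fit; the synthetic data and the rate below are placeholders, not the actual simulation output:

```python
import math

def fit_decay_rate(times, populations):
    """Least-squares slope of log<n>(t) versus t; since <n>(t) ~ exp(-2*kappa*t),
    the slope equals -2*kappa."""
    ys = [math.log(p) for p in populations]
    n = len(times)
    xbar = sum(times) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(times, ys)) / \
            sum((x - xbar) ** 2 for x in times)
    return -slope / 2.0

# Synthetic transient with a known rate (kappa_true = 0.03 is arbitrary).
kappa_true = 0.03
ts = [float(i) for i in range(200)]
ns = [math.exp(-2.0 * kappa_true * t) for t in ts]

kappa_fit = fit_decay_rate(ts, ns)
print(kappa_fit)  # recovers ~0.03
```

On real data the subleading oscillatory terms would bias a naive fit, so in practice one restricts the fit window to the transient exponential regime.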
To illustrate the essential role of number-nonconserving terms, we consider for comparison a Kerr-theory master equation simulation, which exhibits no visible renormalization of the relaxation rates [see inset of Fig.~\ref{Fig:EMEQ1M1}\textit{a})]. This theory retains the number-conserving terms of the Josephson nonlinearity up to quartic order in the undriven Hamiltonian, plus the drive: \begin{equation} \hat{\mathcal{H}}_\text{s,Kerr}(t) = \hat{\mathcal{H}}_\text{s,Kerr} + \hat{\mathcal{H}}_\text{d}(t), \end{equation} where \begin{eqnarray} \hat{\mathcal{H}}_{\text{s},\text{Kerr}} = [\om_\ssq - \lambda_\ssq^{(0)}] \hat{n}_\ssq + [\om_\ssc - \lambda_\ssc^{(0)}] \hat{n}_\ssc \nonumber \\ - \chi_\textit{ac} \hat{n}_\ssq \hat{n}_\ssc - \alpha_\ssq \hat{n}_\ssq^2 - \alpha_\ssc \hat{n}_\ssc^2. \end{eqnarray} The frequency shifts amount to \begin{eqnarray} \lambda_\ssq^{(0)}&=& \frac{ \bar{\om}_a}{8} u_\textit{aa}^2 \left[u_\textit{aa}^2 + 2u_\textit{ac}^2\right], \nonumber \\ \lambda_\ssc^{(0)}&=& \frac{ \bar{\om}_a}{8} u_\textit{ac}^2 \left[ u_\textit{ac}^2+2u_\textit{aa}^2\right], \end{eqnarray} and $\chi_{\textit{ac}}$, $\alpha_\ssq$, and $\alpha_\ssc$ have been defined in Eq.~(\ref{Eq:DefinitionsEffectiveHam}). This driven Kerr Hamiltonian would form the basis of an oversimplified theory in which the rotating-wave approximation has been performed at the level of the Hamiltonian without considering renormalization effects onto dissipators. 
The associated master equation amounts to adding dissipators $\mathcal{D}[\hat{a}]$ and $\mathcal{D}[\hat{c}]$, thus neglecting the essential contributions to the dissipators from the Josephson nonlinearity and from the drive term: \begin{eqnarray} \dot{\hat{\rho}}(t) = -i \left[ \hat{\mathcal{H}}_{\text{s},\text{Kerr}}(t), \hat{\rho}(t) \right] + 2 \kappa(\om_\ssc) \mathcal{D}\left[ \hat{c} \right] \hat{\rho}(t) \nonumber \\ + 2 \kappa(\om_\ssq) \mathcal{D}\left[ \hat{a} \right] \hat{\rho}(t).\label{Eq:KerrQ1M0} \end{eqnarray} As shown in the inset of Fig.~\ref{Fig:EMEQ1M1}\textit{a}), there is no renormalization of the decay rate in a Kerr-only master equation simulation. In Figure~\ref{Fig:EMEQ1M1Mag} we investigate the sizes of the various terms entering Eqs.~(\ref{Eq:CqEffApproxSec}) and~(\ref{Eq:CcEffApproxSec}). We first note that the drive term $|\eta_x|$, which is proportional to $\sqrt{\bar{n}_\ssc}$, reaches $\approx 10^{-1}$ at $\bar{n}_\ssc = 1.0$, which verifies our condition that the drive should cause only a small deviation in the phase quadrature [Fig.~\ref{Fig:EMEQ1M1Mag}\textit{a})]. Figure~\ref{Fig:EMEQ1M1Mag}\textit{b}) shows the leading contributions in the dissipators, as a function of drive power. The absolute value of the coefficient of the single photon dissipator, normalized by $v_\textit{ca}$, shows almost no renormalization as a function of drive (dashed red curve). However, this value differs from $v_\textit{ca}$, which would be the amplitude of this term in a purely linear theory. Two contributions control the dressing of the dissipators as a function of drive: the correlated decay $\hat{a} \hat{c}$ in $\hat{C}(\om_\ssq)$ (black dotted line), and the photon dephasing term $\hat{a}^\dagger \hat{a}$ in $\hat{C}(\om_\ssc)$ (dot-dashed magenta line). To further illustrate the effects of these contributions, we have devised EME numerical simulations containing subsets of the terms [Fig.~\ref{Fig:EMEQ1M1}\textit{c})].
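For reference, the bare dissipator $2\kappa(\om_\ssq)\mathcal{D}[\hat{a}]$ alone produces the simple law $\langle \hat{n}_\ssq \rangle(t) = \langle \hat{n}_\ssq \rangle(0)\, e^{-2\kappa t}$, i.e., no drive-dependent renormalization. A minimal sketch (truncated Fock space, classical RK4 time stepping, a hypothetical $\kappa$, and the Hamiltonian term omitted) reproduces this:

```python
import numpy as np

dim = 4                                         # truncated Fock space
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)    # annihilation operator
n_op = a.conj().T @ a
kappa = 0.05                                    # hypothetical rate

def rhs(rho):
    """Master equation with the bare dissipator 2*kappa*D[a] only."""
    adag_a = a.conj().T @ a
    return 2 * kappa * (a @ rho @ a.conj().T
                        - 0.5 * (adag_a @ rho + rho @ adag_a))

# Start in the one-photon Fock state and integrate with classical RK4.
rho = np.zeros((dim, dim), dtype=complex)
rho[1, 1] = 1.0
dt, steps = 0.01, 1000          # total time t = 10
for _ in range(steps):
    k1 = rhs(rho)
    k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2)
    k4 = rhs(rho + dt * k3)
    rho = rho + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

n_final = np.trace(n_op @ rho).real
print(n_final)  # ~exp(-2*kappa*10) = exp(-1) ~ 0.3679
```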
The correlated decay $\hat{a} \hat{c}$ in $\hat{C}(\om_\ssq)$ seems to be responsible for most of the renormalization of relaxation rates in the presence of drives, as shown by EME simulations where this term is omitted (black dotted line). Moreover, the omission of the dephasing term $\hat{a}^\dagger \hat{a}$ in $\hat{C}(\om_\ssc)$ leaves the EME result largely unaffected (see dot-dashed magenta curve). Finally, we note that the Kerr simulation (solid black line) and an EME simulation retaining only the single-photon terms (red-dashed line) both predict negligible renormalization of the qubit relaxation rate as a function of drive. Before summarizing, we would like to address one further aspect. We have seen that the correction from the drive-induced contributions in the EME is dominated by almost-resonant contributions $\propto 1/(\om_\ssc - \wdr)$. In a second set of numerical simulations performed with the same parameters ($\bar{\om}_a / \bar{\om}_c = 0.77$, $g/\bar{\om}_c = 0.025$), we have varied the drive frequency in the interval $[\om_\ssc-10\chi_{\textit{ac}}, \om_\ssc - \chi_{\textit{ac}}/2]$ while keeping the cavity steady-state population $\bar{n}_\ssc$ fixed at a reference value of $0.5$ photons. Our results are summarized in Fig.~\ref{Fig:EMEQ1M1VsVd}. The relaxation rate obtained from the EME shows a markedly large renormalization only close to the cavity frequency $\om_\ssc$ and decays rapidly as the drive frequency is shifted. When the drive is detuned to around $10 \chi_{\textit{ac}}$ below the cavity frequency, there is very little renormalization discernible from the drive-induced terms, and the rate obtained from the EME matches to good approximation that corresponding to the EME of the undriven theory [Fig.~\ref{Fig:EMEQ1M1VsVd}a)]. The value of the relaxation rate $\kappa_\ssq^\text{EME}$ predicted by the EME for the undriven case is smaller than $\kappa_\ssq$, as already shown in Part I.
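The rapid falloff with detuning can be read off directly from the resonant prefactor $\wdr/(\om_\ssc - \wdr)$ of the correlated-decay term in Eq.~(\ref{Eq:CqEffApproxSec}). A back-of-the-envelope evaluation, using the frequency scales quoted above, shows the factor of roughly $20$ between the dispersive-readout detuning $\chi_{\textit{ac}}/2$ and a detuning of $10\chi_{\textit{ac}}$:

```python
import math

# Scales from the text: cavity frequency ~ pi in rescaled units,
# cross-Kerr chi_ac ~ 1.7e-3 * pi.
omega_c = math.pi
chi = 1.7e-3 * math.pi

def correlated_decay_prefactor(detuning):
    """Resonant prefactor w_d / (omega_c - w_d) of the a(c - c^dag) term
    in C(omega_q), with the drive at w_d = omega_c - detuning."""
    w_d = omega_c - detuning
    return w_d / (omega_c - w_d)

near = correlated_decay_prefactor(chi / 2)   # dispersive-readout choice
far = correlated_decay_prefactor(10 * chi)   # strongly detuned drive

print(near / far)  # ~20.3: the correction falls off roughly as 1/detuning
```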
Overall, these results are consistent with the fact that the coefficients of the drive-induced corrections to the EME decay algebraically with the detuning of the readout drive [Fig.~\ref{Fig:EMEQ1M1VsVd}]. This points to a marked sensitivity of the renormalization of the qubit decay rate to the detuning between the readout drive and the cavity. \begin{figure} \includegraphics[width=\linewidth]{fig_eme_kappa_versus_drive_ncrit_21.pdf} \caption{EME results versus drive frequency, at $\bar{n}_\ssc = 0.5$ steady-state cavity photons. a) The relaxation rate obtained from the EME (solid red) exhibits a large renormalization only in the vicinity of the cavity resonance. At large drive-cavity detuning $\om_\ssc - \wdr$ (ten times the cross-Kerr energy scale $\chi_{\text{ac}}$) there is almost no correction from the drive-induced terms, as exemplified by a comparison with the EME result for an undriven system (black dashed line). b) This is consistent with the coefficients of the most resonant contributions in the EME decaying algebraically with the detuning. \label{Fig:EMEQ1M1VsVd}} \end{figure} To summarize, it appears that in the driven qubit-cavity system, for a choice of parameters inspired by the setup for dispersive readout, the renormalization of the qubit relaxation rate is primarily driven by nearly-resonant, correlated decay processes corresponding to one photon leaking out of each normal mode, $\hat{a} \hat{c}$. These processes appear as drive-activated corrections to the qubit dissipator. As the drive is detuned from the cavity normal mode frequency, the strength of the terms in the dissipators corresponding to these processes decays in inverse proportion to the detuning between the readout drive and the cavity frequency.
\section{Summary} \label{Sec:Summary} To conclude, we have argued that the relaxation rate and the transition frequency of a driven, weakly anharmonic, superconducting qubit depend strongly on drive power. We have arrived at these conclusions by devising a perturbation theory in the weak nonlinearity and in the strength of the drive. We have shown that, to lowest order, the effect arises from the interplay of number-nonconserving terms in the nonlinear Hamiltonian with the drive, and that the lowest-order contributions of the Josephson potential, the quartic terms, predict significant corrections to qubit dynamics. Moreover, through full numerical simulation of the EME, we have quantitatively confirmed our qualitative analytical predictions. The theory presented here can be adapted to a wide range of experimental parameters. A quantitative comparison to current experiments would necessitate the inclusion of the effects of finite temperature and pure dephasing \cite{boissonneault_et_al_2009} which is the subject of future work. We expect that these refinements will only bring quantitative corrections to the results presented here, with the qualitative picture conveyed in this work, in particular the net increase of the qubit relaxation rate with drive in the dispersive readout setup, remaining intact. More generally, our results shed light on the importance of number-nonconserving terms in the theoretical description of driven nonlinear systems. In the limit of zero drive, number-nonconserving terms correspond to the counter-rotating terms of the Hamiltonian, which are frequently neglected in current theories of transmon qubit systems \cite{Nigg_BlackBox_2012,solgun_et_al_2019}. 
We have shown that, while number-conserving terms dress frequencies to lowest order in the strength of anharmonicity, $\epsilon$, it is the number-nonconserving terms that actually correct the collapse operators, ultimately leading to corrections of order $\epsilon$ to the qubit relaxation rate. These are linear in the mean cavity photon occupation in the steady state, for small photon numbers. This is the central finding of our work. \section{Acknowledgements} We acknowledge useful discussions with Alexandre Blais, Michel Devoret, S. M. Girvin, Zlatko Minev, Shantanu Mundhada, Ioan M. Pop, Shyam Shankar, and Yaxing Zhang. A.P. acknowledges funding by the Institut Quantique Postdoctoral Fellowship at the Universit\'e de Sherbrooke. This research was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award No. DE-SC0016011.
\section{Introduction}\label{sec:intro} In many scientific applications, from biomedical models to combustion or air pollution modeling, stiff differential equations must be solved to carry out numerical simulations. A straightforward notion of stiffness was given by Hairer \& Wanner by stating that {\it stiff equations are problems for which explicit methods don't work} \cite{Hairer96}. The latter is mainly due to the broad spectrum of physical or numerical time scales that a numerical solver must deal with whenever a stiff time--dependent differential equation is being solved. Robust and stable methods are thus required for stiff problems in order to properly handle and damp out fast transients. In the past decades high order implicit Runge--Kutta schemes with excellent stability properties were developed and widely investigated to solve stiff problems modeled by ODEs (see \cite{Hairer96} \S~IV and references therein). The same methods can be naturally considered to solve stiff problems originating from time--dependent PDEs discretized in space. However, the high performance of implicit Runge--Kutta methods for stiff ODEs is often adversely affected by the size of the systems of nonlinear equations arising in the case of semi--discrete PDEs. In particular phenomena involving localized fronts, as considered in this work, commonly require fine spatial representations, hence potentially large systems of equations. Significant effort is required to achieve numerical implementations that solve the corresponding algebraic problems at reasonable computational expenses in terms of both CPU time and memory. Low order implicit schemes were already successfully used to simulate very complex problems modeled by stiff PDEs. This is the case, for instance, for the numerical simulation of combustion flames accounting for detailed chemical kinetics and multi--species transport (see{{\it ,\,e.g.,\,}} \cite{Bennett2009,Smooke2013} and references therein). 
The computational performance of high order Runge--Kutta methods, implemented in well--established production aerodynamics codes, was also assessed in the context of laminar and turbulent compressible flows \cite{Bijl2002,Carpenter05}. In particular Jacobian--free Newton--Krylov methods were investigated in conjunction with high order implicit schemes \cite{Zingg04,Bijl2005} to further reduce the computational requirements (see{{\it ,\,e.g.,\,}} \cite{Knoll2004} for a review on this subject). Similarly, high order space discretization schemes have also been implemented in this context, reducing in practice the computational stencils, hence the size of the nonlinear systems \cite{Noskov2005,Noskov2007}. Easing the computational load is also achievable by designing efficient parallelization techniques as developed, for instance, in \cite{Tosatto2011} for reactive flow solvers. Taking into account that grid adaptation techniques for unsteady problems exhibiting localized fronts are specifically designed to yield high data compression, we exploit this capability here to efficiently implement implicit integration schemes for stiff PDEs. This strategy was already adopted, for instance, in \cite{Bennett1998,Bennett1999}, together with low order implicit solvers. Among the many adaptive meshing approaches developed in the literature, we consider in this work adaptive multiresolution schemes based on \cite{Harten94,Harten95}, namely the multiresolution finite volume scheme introduced in \cite{Cohen03} for conservation laws. Besides the inherent advantages of grid adaptation, multiresolution techniques rely on biorthogonal wavelet decomposition and thus offer a rigorous mathematical framework for adaptive meshing schemes \cite{cohen2000a,muller2003}.
Consequently, not only can approximation errors arising from grid adaptation be tracked, but general and robust solvers can also be implemented, since the wavelet decomposition is independent of any physical particularity of the problem and accounts only for the spatial regularity of the discrete variables at a given simulation time. Adaptive multiresolution schemes have been successfully implemented for the simulation of compressible fluids (see{{\it ,\,e.g.,\,}} \cite{Brix2011,Domingues2011} and references therein), as well as for the numerical solution of time--dependent, parabolic \cite{Roussel03,Burger08} and stiff parabolic \cite{Duarte11_SISC,Dumont2013} PDEs. Nevertheless, to the best of our knowledge this is the first attempt to implement high order implicit time integration schemes in the context of the adaptive multiresolution finite volume method to solve stiff PDEs. The paper is organized as follows. We give in Section~\ref{sec:scheme} a short introduction on multiresolution finite volume schemes and implicit Runge--Kutta schemes, in particular of SDIRK-- (Singly Diagonally Implicit Runge--Kutta) and RadauIIA-- type. Some key aspects of the numerical implementation of time implicit schemes on multiresolution grids are detailed in Section~\ref{sec:imple}. Finally, the numerical solution of several stiff time--dependent PDEs is investigated in Section~\ref{sec:num_res}. \section{Numerical methodology}\label{sec:scheme} Let us consider a parabolic, time--dependent PDE, \begin{equation}\label{eq:gen_prob} \left.
\begin{array}{ll} \partial_t \vec u = \vec F(\partial^2_{{\boldsymbol{x}}} \vec u, \partial_{{\boldsymbol{x}}} \vec u, \vec u), & t > t_0,\, {\boldsymbol{x}} \in {\mathbb R}^d,\\%[1ex] \vec u(t_0,{\boldsymbol{x}}) = \vec u_0({\boldsymbol{x}}),& t = t_0,\, {\boldsymbol{x}} \in {\mathbb R}^d, \end{array} \right\} \end{equation} where $\vec u:{\mathbb R}\times {\mathbb R}^d \to {\mathbb R}^m$ and $\vec F:{\mathbb R}^m \to {\mathbb R}^m$, for a model with $m$ variables. For many physically inspired systems, the right hand side $\vec F$ can in general be written as \begin{equation}\label{eq:def_F} \vec F(\partial^2_{{\boldsymbol{x}}} \vec u, \partial_{{\boldsymbol{x}}} \vec u, \vec u) = \vec F(\vec u) = \vec F_1(\vec u) + \vec F_2(\vec u) + \ldots, \end{equation} where the $\vec F_i(\vec u) $, $i=1,\ldots$, stand for different physical processes. For instance, a scalar nonlinear reaction--diffusion equation with $u:{\mathbb R}\times {\mathbb R}^d \to {\mathbb R}$ would be given by $F_1(u)=- \partial_{{\boldsymbol{x}}} \cdot (D(u) \partial_{{\boldsymbol{x}}} u)$ and $F_2(u)= f(u)$ for some diffusion coefficient, $D:{\mathbb R} \to {\mathbb R}$, and a nonlinear function, $f:{\mathbb R} \to {\mathbb R}$. \subsection{Multiresolution analysis}\label{sec:MR} Without loss of generality we perform a finite volume discretization of problem (\ref{eq:gen_prob}) with $m=1$ for the sake of simplicity. According to the multiresolution finite volume scheme \cite{Cohen03}, we consider a set of nested dyadic grids over a computational domain $\Omega \subset \mathbb{R}^{d}$ as follows. Each cell $\Omega _{\lambda}$, $\lambda \in S_j$, is the union of $2^d$ finer cells of equal size $\Omega_{\mu}$, $\mu \in S_{j+1}$. The sets $S_j$ and $S_{j+1}$ are thus consecutive embedded grids over $\Omega$, where $j=0,1,\ldots,J$, corresponds to the grid--level, from the coarsest to the finest grid{{\it ,\,i.e.,\,}} $j$ equal to 0 and $J$, respectively. 
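To make the splitting in Eq.~(\ref{eq:def_F}) concrete, here is a minimal 1D finite volume sketch with a diffusion term $F_1$ and a logistic reaction term $F_2$; the specific $D$, $f$, and boundary treatment are illustrative choices, not those used later in the paper:

```python
def rhs_split(u, dx, D=1.0):
    """Semi-discrete right-hand side F(u) = F1(u) + F2(u) on a uniform 1D grid:
    F1 = diffusion (second-order finite volume, zero-flux ends),
    F2 = nonlinear reaction f(u) = u*(1 - u) (a hypothetical choice)."""
    n = len(u)
    f1 = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i]        # zero-flux boundaries
        right = u[i + 1] if i < n - 1 else u[i]
        f1[i] = D * (left - 2 * u[i] + right) / dx ** 2
    f2 = [ui * (1.0 - ui) for ui in u]
    return [a + b for a, b in zip(f1, f2)]

# The uniform state u = 1 is a steady state: diffusion and reaction vanish.
u = [1.0] * 8
print(max(abs(v) for v in rhs_split(u, dx=0.1)))  # 0.0
```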
We denote $\disc U_j:=(u_{\lambda})_{\lambda \in S_j}$ as the spatial representation of $u$ on the grid $S_j$, where $u_{\lambda}$ represents the cell--average of $u :\, {\mathbb R} \times {\mathbb R}^d \to {\mathbb R}$ in $\Omega _{\lambda}$: \begin{equation}\label{eq3:average_finite_vol} u_{\lambda} := |\Omega_{\lambda}|^{-1} \int_{\Omega_{\lambda}} u(t,{\boldsymbol{x}})\, {\mathrm{d}} {\boldsymbol{x}}, \quad {\boldsymbol{x}}\in {\mathbb R}^d. \end{equation} Data at different levels of discretization are related by two inter--level transformations: the {\it projection} and {\it prediction} operators, briefly defined in Appendix~\ref{app:multiresolution}. Based on these operations, a multiresolution analysis allows one to define a one--to--one correspondence between two consecutive grid--levels: \begin{equation}\label{eq3:one_one_cor} \disc U_j\longleftrightarrow (\disc U_{j-1},\disc D_j), \end{equation} where the $\disc D_j$ array gathers the so--called {\it details}. The latter can be seen as estimators of the local spatial regularity of a given discretized function, in this case $\disc U_j$, and represent the information lost when coarsening the spatial grid, in this case from $\disc U_j$ to $\disc U_{j-1}$. By iteration of this decomposition, we get a multi--scale representation of $\disc U_J$ in terms of $\disc M_J := (\disc U_0,\disc D_1,\disc D_2,\cdots,\disc D_J)$: \begin{equation}\label{eq3:M_U_M} {\mathcal M}:\disc U_J\longmapsto \disc M_J, \end{equation} and similarly, its inverse ${\mathcal M}^{-1}$. This multi--scale transform amounts to a representation of $\disc U_J$ in a wavelet space spanned by a biorthogonal wavelet basis. Further details can be found in \cite{cohen2000a,muller2003}. While the transformation (\ref{eq3:M_U_M}) is exact and the multi--scale representation can be performed back and forth, real computational benefit is achieved by introducing a {\it thresholding} operator, as shown in Appendix~\ref{app:multiresolution}. 
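A minimal 1D illustration of the one--to--one correspondence (\ref{eq3:one_one_cor}) uses the Haar setting, where the projection is the average of the two children and the prediction is simply the parent value; the actual scheme employs higher--order prediction stencils, so this is only a structural sketch:

```python
def decompose(u):
    """One multiresolution step (Haar): cell averages on the coarser grid
    plus details, i.e. the prediction errors on the finer grid."""
    coarse = [(u[2 * i] + u[2 * i + 1]) / 2 for i in range(len(u) // 2)]
    details = [u[2 * i] - coarse[i] for i in range(len(u) // 2)]
    return coarse, details

def reconstruct(coarse, details):
    """Inverse transform: recover the fine grid exactly."""
    u = []
    for c, d in zip(coarse, details):
        u += [c + d, c - d]
    return u

u_fine = [1.0, 1.0, 1.0, 1.0, 0.25, 0.75, 1.0, 1.0]   # a localized front
coarse, details = decompose(u_fine)
assert reconstruct(coarse, details) == u_fine          # one-to-one transform

# Details vanish where the data are smooth, so discarding the small ones
# compresses the representation away from the front.
eps = 1e-8
kept = [d for d in details if abs(d) > eps]
print(len(kept))  # 1: only the detail straddling the front survives
```

In the full scheme these details are exactly what the thresholding operator mentioned above acts upon.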
This operator discards cells in smooth regions whose values can be recomputed within an accuracy tolerance whenever needed. As a result a multiresolution approximation $\disc U_J^{\epsilon}$ is obtained. Defining the following normalized $\ell^2$--norm: \begin{equation*}\label{eq3:normalized_l2} \|\disc U_J\|_2^2:= 2^{-dJ} \ds \sum_{\lambda \in S_J} (u_{\lambda})^2, \end{equation*} which corresponds to the $L^2$--norm of a piecewise constant function, it can be shown that \cite{Duarte14_Poisson} \begin{equation}\label{eq3:adap_error_eps} \|\disc U_{J} - \disc U_J^{\epsilon} \|_2 \leq C {\eta_{\mathrm{MR}}}, \end{equation} where ${\eta_{\mathrm{MR}}}$ corresponds to an accuracy tolerance\footnote{ Bound (\ref{eq3:adap_error_eps}) was similarly shown in \cite{Cohen03} for both the uniform and the $\ell^1$--norm.}. The above holds for steady problems. When solving time--dependent problems, the same behavior is expected in terms of numerical errors introduced by the multiresolution approximation. (The spatially adapted grid is fixed during a given time integration step.) The latter was mathematically proved for hyperbolic problems in an $L^1$--norm for both classical and inhomogeneous conservation laws in \cite{Cohen03} and \cite{Hovhannisyan2010}, respectively. Moreover, numerical evidence demonstrates similar behavior for time--dependent, parabolic \cite{Roussel03,Burger08,Bendahmane09} and stiff parabolic \cite{Duarte11_SISC,Duarte11_JCP,DuarteCFlame} PDEs. \subsection{Implicit Runge--Kutta schemes}\label{sec:IRK} Let us now consider problem (\ref{eq:gen_prob}) discretized on an adapted grid obtained by multiresolution analysis: \begin{equation}\label{eq:gen_disc_prob} \left. \begin{array}{ll} {\mathrm{d}}_t \disc U = \disc F (\disc U), & t > t_0,\\%[1ex] \disc U(t_0) = \disc U_0,& t = t_0.
\end{array} \right\} \end{equation} For the ease of reading we denote $\disc U_J^{\epsilon}$ simply as $\disc U$ of size $m\times N$, where $N$ corresponds to the number of cells in the adapted grid and thus $\disc F:{\mathbb R}^{m\times N} \to {\mathbb R}^{m\times N}$. Given a time step $\Delta t$ we consider an implicit Runge--Kutta (IRK) scheme of order $p$ for the numerical integration of the semi--discretized problem (\ref{eq:gen_disc_prob}). An $s$--stage Runge--Kutta scheme is in general defined through a set of arrays $\vec b$, $\vec c \in {\mathbb R}^s$, such that $\vec b=(b_1, \ldots, b_s)^T$ and $\vec c=(c_1,\ldots, c_s)^T$, and a matrix $\vec A \in \mathcal{M}_s({\mathbb R})$ such that $\vec A=(a_{ij})_{1 \leq i,j \leq s}$. These coefficients define the stability properties and the order conditions of the method, and are usually arranged in a Butcher tableau according to \begin{equation*} \begin{tabular}{c|c } $\vec c$ & $\vec A$ \\ \hline \\[-2.5ex] & $\vec b^T$ \end{tabular}. \end{equation*} In practice, given a set of arrays $\disc z_1,\ldots,\disc z_s \in {\mathbb R}^{m\times N}$, we have to solve the nonlinear system \begin{equation}\label{eq2:nonlinear_sys} \left( \begin{array}{c} \disc z_1\\ \vdots\\ \disc z_s \end{array}\right) =\disc A\left( \begin{array}{c} \Delta t \disc F (t_0+c_1\Delta t,\disc U_0+ \disc z_1)\\ \vdots\\ \Delta t \disc F (t_0+c_s\Delta t, \disc U_0+ \disc z_s) \end{array}\right), \end{equation} where $\disc A$ is a square block--matrix of size $s \times m \times N$ built with the coefficients $(a_{ij})_{1 \leq i,j \leq s}$ (see more details in Appendix~\ref{app:IRK}). The solution $\disc U(t_0+\Delta t)$ is then approximated by $\disc U_1$, computed as \begin{equation}\label{eq2:u1_radau} \disc U_1= \disc U_0+\ds \sum_{i=1}^sd_i \disc z_i, \qquad \vec d^T := (d_1,\ldots,d_s)=(b_1,\ldots,b_s)\vec A^{-1}. 
\end{equation} If all the elements of the matrix of coefficients $\vec A$ are non--zero, then we say we are considering a {\it fully IRK scheme} \cite{Hairer96}. Moreover, if \begin{equation}\label{eq:stiffly_acc} a_{sj} = b_j, \quad j = 1,\ldots,s \end{equation} then the last stage corresponds to the solution $\disc U_1$ and thus $\vec d^T = (0,0,\ldots,0,1)$ in (\ref{eq2:u1_radau}). Methods satisfying (\ref{eq:stiffly_acc}) are called {\it stiffly accurate} \cite{Prothero74} and are particularly appropriate for the solution of (stiff) singular perturbation problems and for differential--algebraic equations \cite{Hairer88,Hairer96}. An IRK approximation then amounts to solving a nonlinear system of equations of size $s \times m \times N$. The latter can be achieved by considering a {\it simplified} Newton method for system (\ref{eq2:nonlinear_sys}), recast as \begin{equation} \calbf G(\disc Z) := \disc Z - \Delta t \disc A \calbf F(\disc Z)= \disc 0, \end{equation} where $\disc Z:=(\disc z_1,\ldots,\disc z_s)^T$ and $\calbf F(\disc Z):=(\disc F(t_0+c_1\Delta t,\disc U_0+\disc z_1),\ldots,\disc F(t_0+c_s\Delta t,\disc U_0+\disc z_s))^T$. The $(k+1)$--th approximation of the solution $\disc Z$ is thus computed in two steps. First, we solve the following linear system for $\delta \disc Z^k \in {\mathbb R}^{s\times m \times N}$: \begin{equation}\label{eq2:Newton_simp} ({\mathbf{Id}}_{s\times m \times N} - \Delta t \disc A \disc J) \delta \disc Z^k = -\disc Z^k+ \Delta t \disc A \calbf F(\disc Z^k), \end{equation} where $\disc J$ is a square block--matrix of size $s \times m \times N$ consisting of $s$ rows and $s$ columns of size $m \times N$ given by the Jacobians $\disc J_0 := \partial_{\disc U} \disc F (t_0,\disc U_0)$. Then, the previous solution $\disc Z^k$ is corrected according to \begin{equation}\label{eq:fin_Newton} \disc Z^{k+1}=\disc Z^k+\delta \disc Z^k.
\end{equation} A standard way of initializing the iterative algorithm considers simply \begin{equation}\label{eq:ini_Newton} \disc z_i^0 =\disc U_0, \quad i=1,\ldots,s. \end{equation} \subsubsection{SDIRK schemes} An alternative to solving a large nonlinear system of size $s \times m \times N$ is to consider a singly diagonally implicit Runge--Kutta (SDIRK) scheme where $a_{ij}=0$ for $j>i$, and with equal diagonal coefficients{{\it ,\,i.e.,\,}} $a_{ii}=\gamma$, $i=1,\ldots,s$. In general, $A$-- or $L$--stable SDIRK schemes can be built of order $p \leq s+1$ or $p \leq s$, respectively (see \cite{Hairer96} \S~IV.6 for more details). However, the stage order $q$ of these schemes, that is, the order achieved by a single stage, is limited to $1$. The main idea is thus to successively solve the $s$ stages by considering an $m\times N$--dimensional system at each stage, that is, for $i=1,\ldots,s$, \begin{equation} \disc z_i = \Delta t \gamma \disc F(t_0+c_i\Delta t,\disc U_0+ \disc z_i) + \Delta t \ds \sum_{j=1}^{i-1} a_{ij}\disc F(t_0+c_j\Delta t,\disc U_0+ \disc z_j), \end{equation} where the second term on the right--hand side is already known at the current stage. Adopting the same simplified Newton technique, this time stage--wise, we have to solve the following linear system for $\delta \disc z_i^k \in {\mathbb R}^{m \times N}$: \begin{align}\label{eq2:Newton_simp_SDIRK} ({\mathbf{Id}}_{m \times N} - \Delta t \gamma \disc J_0) \delta \disc z_i^k = & -\disc z_i^k+ \Delta t \gamma \disc F(t_0+c_i\Delta t,\disc U_0+ \disc z^k_i ) \nonumber \\ & + \Delta t \sum_{j=1}^{i-1} a_{ij}\disc F(t_0+c_j\Delta t,\disc U_0+ \disc z_j). \end{align} A simple way of initializing the iterative algorithm at each stage considers \begin{equation}\label{eq:ini_SDIRK4} \disc z_1^0 =\disc U_0, \quad \disc z_i^0 =\disc U_0+ \disc z_{i-1}, \quad i=2,\ldots,s.
\end{equation} We will denote as SDIRK2, SDIRK3, and SDIRK4, respectively, the second, third, and fourth order SDIRK schemes here considered (Butcher tableaux in Appendix~\ref{app:Butcher_tab}). \subsubsection{RadauIIA schemes} Fully IRK schemes with a number of stages below their approximation order can be built based on collocation methods \cite{Guillon69,Wright71}, together with the simplified order conditions introduced by Butcher \cite{MR0159424}. In this case, the coefficients $(b_j,c_j)_{j=1}^s$ correspond to the quadrature formula of order $p$ such that $\int_{0}^{1} \pi(\tau)\, {\mathrm{d}} \tau = \sum_{j=1}^s b_j \pi(c_j)$ for polynomials $\pi(\tau)$ of degree $\leq p-1$. Moreover, the coefficients in $\vec c$ and $\vec A$, together with conditions for the stage order $q$, imply that at every stage $i$ the quadrature formula $\int_{0}^{c_i} \pi(\tau)\, {\mathrm{d}} \tau = \sum_{j=1}^s a_{ij} \pi (c_j)$ holds for polynomials $\pi(\tau)$ of degree $\leq q-1$. Depending on the quadrature formula considered, such as Gauss, Radau or Lobatto, different families of implicit Runge--Kutta methods can be constructed (for more details, see \cite{Hairer96} \S~IV.5). In this work we consider the family of RadauIIA methods introduced by Ehle \cite{Ehle69}, based on \cite{Butcher64}, which rely on Radau quadrature formulas \cite{RadauC} and satisfy $p=2s-1$ and $q=s$. These are $A$-- and $L$--stable schemes that are stiffly accurate methods according to (\ref{eq:stiffly_acc}). In particular we consider the third and fifth order RadauIIA schemes referred to as Radau3 and Radau5 (Butcher tableaux in Appendix~\ref{app:Butcher_tab}). Note that, even though Gauss methods attain a maximum order of $p=2s$ \cite{MR0159424,Ehle68}, they are neither stiffly accurate nor $L$--stable schemes, which are both important properties for stiff problems. Approximations of lower order are obtained with Lobatto methods satisfying $p=2s-2$ \cite{MR0159424,Ehle68,Chipman71,Axelsson72}.
In particular the collocation methods with $p=2s-2$ and $q=s$, known as the LobattoIIIA methods, yield stiffly accurate schemes, but these are only $A$--stable. \section{Numerical implementation}\label{sec:imple} We now discuss some particular aspects concerning the numerical implementation. We consider the multiresolution procedure presented in \cite{Duarte11_SISC}. For the sake of completeness some key details of this particular implementation will be first recalled, while more details and references can be found in \cite{Duarte_Phd}. \subsection{Construction of multiresolution grids} The adapted grid is composed of a set of nested dyadic grids: $S_j$, $j=0,1,\ldots,J$, from the coarsest to the finest. They are generated by recursively refining a given cell depending on the local regularity of the time--dependent variables, measured by the details at a given time. These grids are implemented in a multi--dimensional Cartesian finite volume framework. Following \cite{Cohen03} a centered polynomial interpolation of accuracy order $\beta = 2r+1$ is implemented for the prediction operator, computed with the $r$ nearest neighboring cells in each direction; the procedure is exact for polynomials of degree $2r$. Here we will only consider the case $\beta=3$ with one neighboring cell per direction ($r=1$) including the diagonals in multidimensional configurations. For instance, in the one--dimensional case (\ref{eq3:dyadic_1D}) the latter is given by \begin{equation*}\label{eq3:polynomial_dyadic1D_order3} \widehat{u}_{j+1,2k} = u_{j,k} + \frac{1}{8}(u_{j,k-1} - u_{j,k+1}), \qquad \widehat{u}_{j+1,2k+1} = u_{j,k} + \frac{1}{8}(u_{j,k+1} - u_{j,k-1}). \end{equation*} Higher order formulae can be found in \cite{muller2003}, while the extension to multi--dimensional Cartesian grids is easily obtained by a tensor product of the one--dimensional operator \cite{bihari1997,Roussel03}. In general the interpolation stencil is given by $(2r+1)^d$ cells.
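The one--dimensional prediction formulas above translate directly into code. The following sketch (function name hypothetical) implements the $r=1$, third order operator for the interior cells of a level--$j$ grid; as a consistency check, the prediction reproduces exactly the children cell--averages of a quadratic polynomial.

```python
import numpy as np

def predict_children(u):
    """Third-order (r = 1) centered prediction: from cell averages u[k] on
    level j, predict the two children averages on level j+1 for interior
    cells (beta = 3 formulas of the text)."""
    uk, um, up = u[1:-1], u[:-2], u[2:]     # u_{j,k}, u_{j,k-1}, u_{j,k+1}
    left = uk + (um - up) / 8.0             # \hat u_{j+1,2k}
    right = uk + (up - um) / 8.0            # \hat u_{j+1,2k+1}
    return left, right
```

Since the operator is exact for polynomials of degree $2r=2$, feeding it the exact cell--averages of $x^2$ (which include the $h^2/12$ correction) returns the exact children averages.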
Data compression is achieved by thresholding: the cells whose details are below a given tolerance (see (\ref{eq3:MR_Lambda})) are discarded, thus defining a compressed set $\Lambda$ with the remaining cells. However, not all cells can be eliminated as this would prevent one from performing the multiresolution inter--grid operations. In particular all cells in the prediction interpolation stencils must always be available. Consequently, cells are gathered in a {\it graded tree} $\Lambda_\epsilon$, instead of $\Lambda$, that is, a data structure that satisfies the aforementioned conditions (see \cite{Cohen03} for more details). Notice that $\Lambda \subset \Lambda_\epsilon$ and error estimates like (\ref{eq3:adap_error_eps}) follow straightforwardly. Nevertheless, for the ease of reading we will keep the notation $\Lambda$ in the following to refer to a graded tree. A graded tree--structure is hence used to represent data in the computer memory (see also \cite{Roussel03}). Recalling the standard tree--structure terminology: if $\Omega_{\mu} \subset \Omega_{\lambda}$, with the two cells belonging to consecutive grids, we say that $\Omega_\mu$ is a \textit{child} of $\Omega_\lambda$ and that $\Omega_\lambda$ is the \textit{parent} of $\Omega_\mu$. We thus define the \textit{leaves} ${\mathrm{L}}(\Lambda)$ of a \textit{tree} $\Lambda$ as the set of cells $\Omega_{\lambda}$, $\lambda \in {\mathrm{L}}(\Lambda)$, such that $\Omega_{\lambda}$ has no children in $\Lambda$. Depending on the size of the computational domain, more graded trees may be needed. Therefore, cells are distributed in $N_{\rm R}$ graded trees $\Lambda_r$, $r=1,\ldots,N_{\rm R}$, where $N_{\rm R}:=N_{{\rm R}x}N_{{\rm R}y}N_{{\rm R}z}$, and $N_{{\rm R}x}$, $N_{{\rm R}y}$, and $N_{{\rm R}z}$ stand for the number of graded trees or {\it roots} per direction.
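The parent/child/leaf terminology can be sketched with a minimal, hypothetical Python structure (the actual implementation uses graded trees in a multi--dimensional Cartesian setting); dyadic refinement creates $2^d$ children per cell, and the leaves form the adapted grid.

```python
class Cell:
    """Node of a tree representation: a cell either has 2^d children
    (it has been refined) or is a leaf of the adapted grid."""
    def __init__(self, level, index):
        self.level, self.index = level, index
        self.children = []

    def refine(self, d=1):
        # dyadic refinement: child indices 2k + i per direction (d = 1 here)
        self.children = [Cell(self.level + 1, 2 * self.index + i)
                         for i in range(2 ** d)]

def leaves(cell):
    """L(Lambda): the cells without children, i.e. the adapted grid."""
    if not cell.children:
        return [cell]
    return [leaf for child in cell.children for leaf in leaves(child)]
```

Refining a root and then one of its children yields an adapted grid of three leaves living on two different levels.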
The adapted grid is thus given by sets ${\mathrm{L}}(\Lambda_r)$, $r=1,\ldots,N_{\rm R}$, with a total number of cells: $N_{\mathrm{L}}= \sum_{r=1}^{N_{\rm R}}\#({\mathrm{L}}(\Lambda_r))$. If no adaptation is required, then the maximum number of cells will be $N_{\rm L}=\#(S_J)=N_{{\rm R}x}N_{{\rm R}y}N_{{\rm R}z}2^{dJ}${{\it ,\,i.e.,\,}} the size of the finest grid. Input parameters for the multiresolution implementation are: the maximum grid--level $J$ corresponding to the finest spatial discretization; the number of roots per direction $N_{{\rm R}x}$, $N_{{\rm R}y}$, and $N_{{\rm R}z}$; and the threshold parameter ${\eta_{\mathrm{MR}}}$, which defines the numerical accuracy of the compressed representations following (\ref{eq3:adap_error_eps}). \subsection{Numerical function evaluations} Introducing the set ${\rm I}^n_{\mathrm{L}} := \{1, 2, \ldots, N^n_{\mathrm{L}}\}$, where $N^n_{\mathrm{L}}$ stands for the number of leaves during time $t \in [t_n,t_n+\Delta t_n]$, we define a bijective function $h_n:D(h_n)\to{\rm I}^n_{\mathrm{L}}$, with \begin{equation*} D(h_n):= \bigcup_{r=1}^{N_{\rm R}} {\mathrm{L}}(\Lambda^n_r). \end{equation*} The cells $(\Omega_{\lambda})_{h_n(\lambda)\in {\rm I}^n_{\mathrm{L}} }$ correspond then to the adapted grid during the current timestep $\Delta t_n$, defined by the leaves of the tree representation. Considering again $m=1$, the solution of the semi--discrete problem (\ref{eq:gen_disc_prob}) for $t\in [t_n,t_n+\Delta t_n]$ is similarly defined as $\disc U (t) = (u_{\lambda})_{h_n(\lambda)\in {\rm I}^n_{\mathrm{L}} }$, where $u_{\lambda}$ stands for the cell--average of variable $u(t,{\boldsymbol{x}})$ in $\Omega _{\lambda}$ according to (\ref{eq3:average_finite_vol}). 
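A minimal sketch of such an indexation (names hypothetical): enumerating the leaves of all roots yields the bijection $h_n$ used to address the solution array independently of the geometric layout.

```python
def build_index(roots_leaves):
    """Hypothetical sketch of the bijection h_n: enumerate the leaves of all
    roots Lambda_r and map each leaf key, e.g. (root, level, index), onto a
    position in 1..N_L."""
    h = {}
    for r, leaf_keys in enumerate(roots_leaves):
        for key in leaf_keys:
            h[(r,) + key] = len(h) + 1      # next free index
    return h
```

The resulting dictionary maps each leaf to a unique index in $\{1,\ldots,N_{\mathrm L}\}$, which is all the algebraic solvers need.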
The discrete function $\disc F (\disc U)$ in (\ref{eq:gen_disc_prob}) can be thus defined as $\disc F (\disc U)= (F_{\lambda}(\disc U))_{h_n(\lambda)\in {\rm I}^n_{\mathrm{L}} }$, where $F_{\lambda}(\disc U)$ can be further decomposed into $\varPhi_{\lambda}(\disc U)$ and $\omega_{\lambda}(\disc U)$, coming from the discretization of differential operators and source terms, respectively. In particular for a second order spatial discretization, considered in this work, the local source term $\omega_{\lambda}(\disc U)$ becomes $\omega(u_{\lambda})$, that is, it is computed using the local cell--average values. During timestep $\Delta t_n$, the time--dependent problem (\ref{eq:gen_disc_prob}) can be thus written at each cell $\Omega _{\lambda}$ of the adapted grid as \begin{equation}\label{eq:disc_prob_local} {\mathrm{d}}_t u_{\lambda} = F_{\lambda}(\disc U)= \varPhi_{\lambda}(\disc U) + \omega(u_{\lambda}), \qquad t \in [t_n,t_n+\Delta t_n],\, h_n(\lambda)\in {\rm I}^n_{\mathrm{L}}, \end{equation} \begin{equation}\label{eq:def_flux} \varPhi_{\lambda}(\disc U) := |\Omega_{\lambda}|^{-1} \sum_{\mu} |\Gamma_{\lambda,\mu}| \varPhi_{\lambda,\mu}, \end{equation} where the latter sum is made over all $\mu \neq \lambda$ such that the interface $\Gamma_{\lambda,\mu} := \overline{\Omega_{\lambda}} \cap \overline{\Omega_{\mu}}$ is not trivial{{\it ,\,i.e.,\,}} over all the neighboring cells of $\Omega_{\lambda}$; and $\varPhi_{\lambda,\mu}$ accounts for the flux across each interface. In the simplest (low order in space) schemes, the flux $\varPhi_{\lambda,\mu}$ is typically a function of $u_{\lambda}$ and $u_{\mu}$ only, while higher order schemes require considering additional cells. Without loss of generality, let us denote by $R_{\varPhi}[\lambda]$ the stencil required to compute fluxes associated with cell $\Omega_{\lambda}$. 
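To illustrate the flux assembly (\ref{eq:def_flux}), the sketch below (hypothetical names; one--dimensional, locally uniform grid, second order diffusive fluxes) evaluates each interface flux once and distributes it with opposite signs to the two neighboring cells.

```python
import numpy as np

def diffusion_rhs(u, h, D=1.0):
    """Second-order diffusive term on a locally uniform 1D grid: each
    interface flux is evaluated once and contributes with opposite signs
    to its two neighboring cells (homogeneous Neumann boundaries)."""
    n = u.size
    dudt = np.zeros(n)
    for k in range(n - 1):                  # interface between cells k, k+1
        flux = D * (u[k + 1] - u[k]) / h    # evaluated once per interface
        dudt[k] += flux / h                 # right flux for cell k
        dudt[k + 1] -= flux / h             # left flux for cell k+1, sign flipped
    return dudt
```

Because the interface fluxes telescope, the scheme is discretely conservative: the cell contributions sum to zero in the absence of boundary fluxes.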
Here, we consider flux computation schemes such that all cells in $R_{\varPhi}[\lambda]$ belong to the same grid, that is, fluxes are computed on a locally uniform mesh. Problem (\ref{eq:disc_prob_local}) can be thus rewritten as \begin{equation}\label{eq:disc_prob_local_flux} {\mathrm{d}}_t u_{\lambda} = F_{\lambda}\left(\left(u_{\lambda}\right)_{\lambda \in R_{\varPhi}[\lambda]}\right), \qquad t \in [t_n,t_n+\Delta t_n],\, h_n(\lambda)\in {\rm I}^n_{\mathrm{L}}. \end{equation} The numerical integration of problem (\ref{eq:gen_disc_prob}) then involves evaluating function $F_{\lambda}$ in (\ref{eq:disc_prob_local_flux}) for the $N^n_{\mathrm{L}}$ current cells. Moreover, for a given interface $\Gamma_{\lambda,\mu}$ the following conservation property holds in a finite volume flux representation: $\varPhi_{\lambda,\mu} + \varPhi_{\mu,\lambda} = 0$. Computing $\varPhi_{\lambda,\mu}$ for $\Omega_\lambda$ amounts to evaluating also $\varPhi_{\mu,\lambda}$ for the neighboring cell $\Omega_\mu$. Let us denote $\varPhi_{\lambda,\mu}^+$ as the right flux for $\Omega_\lambda$ and $\varPhi_{\mu,\lambda}^-$ as the left flux for $\Omega_\mu$, along the direction normal to $\Gamma_{\lambda,\mu}$. Similarly, $R_{\varPhi}^+[\lambda]$ stands for the stencil required to compute $\varPhi_{\lambda,\mu}^+$ and, naturally, $R^-_{\varPhi}[\mu] \equiv R^+_{\varPhi}[\lambda]$; we thus have that $\varPhi_{\mu,\lambda}^-= -\varPhi_{\lambda,\mu}^+$. This property is thus exploited to save computations as fluxes are computed only once at each interface. The locally uniform grids are then defined by the stencil $R_{\varPhi}^+[\lambda]$ enclosing the current leaf $\Omega_{\lambda}$. Ghost cells, computed according to the inter--grid prediction operation, are used whenever one cell in the current stencil is missing. 
These ghost cells are also added to the adapted grid at interfaces between cells of different sizes in order to compute numerical fluxes at the highest grid--level between two neighboring cells \cite{Roussel03}. Notice that function $h_n$ is in practice used for indexation of leaves, identifying them regardless of their geometric layout. This is particularly useful to organize the computation of the entries of the Jacobian, as shown in Appendix~\ref{app:Jac}, and the linear system. All matrices are stored using a standard CSR (Compressed Sparse Row) format for sparse matrices. \subsection{Newton method and linear solver} The simplified Newton method to solve the nonlinear system (\ref{eq2:nonlinear_sys}) considers the linear system (\ref{eq2:Newton_simp}) for fully IRK schemes like the RadauIIA methods. System (\ref{eq2:Newton_simp}) is recast as \begin{equation}\label{eq2:Newton_simp_dt} (\Delta t^{-1}{\mathbf{Id}}_{s\times m \times N} - \disc A \disc J) \delta \disc Z^k = -\Delta t^{-1}\disc Z^k+ \disc A \calbf F(\disc Z^k), \end{equation} mainly to avoid updating $\Delta t \disc A \disc J$ when time step changes are required. Defining an accuracy tolerance ${\eta_{\mathrm{Newt}}}$, we consider the following stopping criterion for the iterative process: \begin{equation}\label{eq:stop_newton} \|\delta \disc Z^k \|_2 \leq {\eta_{\mathrm{Newt}}}. \end{equation} Additionally, we define a convergence rate for the Newton solver as \begin{equation}\label{eq:conv_rate} \Theta_k = \frac{\|\delta \disc Z^k \|_2}{\|\delta \disc Z^{k-1} \|_2}, \quad k\geq 1. \end{equation} For the first iteration we set $\Theta_0 = \|\delta \disc Z^0 \|_2/(2\max \disc U_0)$. 
We also define a maximum number of Newton iterations ${k_{\mathrm{Newt},\max}}$, and inspired by \cite{Hairer96}, computations are interrupted and restarted with a halved timestep in (\ref{eq2:Newton_simp_dt}), if any of the following happens: \begin{itemize} \item there is a $k$ such that $\Theta_k \geq 1$; \item for some $k$, we have that \begin{equation}\label{eq:err_new_max} (\Theta_k)^{{k_{\mathrm{Newt},\max}}-k-1}\|\delta \disc Z^k \|_2 \geq {\eta_{\mathrm{Newt}}}, \end{equation} where the left--hand side in (\ref{eq:err_new_max}) is a rough estimate of $\|\delta \disc Z^{{k_{\mathrm{Newt},\max}} -1} \|_2$; \item ${k_{\mathrm{Newt},\max}}$ iterations have been performed and $\|\delta \disc Z^{{k_{\mathrm{Newt},\max}}-1} \|_2 > {\eta_{\mathrm{Newt}}}$. \end{itemize} Notice that if the timestep is halved, only diagonal entries in $[\Delta t^{-1}{\mathbf{Id}}_{s\times m \times N} - \disc A \disc J]$ need to be modified; however, the resulting new matrix must be factorized again. In this work we have implemented the iterative GMRES method \cite{GMRES} to solve the linear system (\ref{eq2:Newton_simp_dt}), with right--preconditioning based on an ILUT factorization \cite{ILUT}. Considering a fixed Jacobian has the advantage that the factorization and preconditioning of matrix $[\Delta t^{-1}{\mathbf{Id}}_{s\times m \times N} - \disc A \disc J]$ need to be performed only once, unless computations are restarted with a halved timestep. Notice that this is a purely algebraic problem that retains no trace of its original geometric layout, meaning that it is independent of the adapted grid generation or any other grid--related data structure or geometric consideration. Consequently, any linear solver could be used as a {\it black box} solver provided that it only needs the matrix entries and the right--hand side array as inputs.
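The stopping and divergence tests above can be sketched as follows (names hypothetical; `solve_step(Z)` stands for one simplified Newton solve (\ref{eq2:Newton_simp_dt}) returning $\delta \disc Z^k$). Returning `None` signals that the time step should be halved and the step restarted.

```python
import numpy as np

def newton_control(solve_step, Z0, eta=1e-8, kmax=30, U0max=1.0):
    """Simplified Newton loop with the divergence tests of the text: stop
    when ||dZ|| <= eta_Newt; signal a restart (halved time step) if
    Theta_k >= 1, if the extrapolated final increment
    Theta_k**(kmax-k-1) * ||dZ|| >= eta_Newt, or if kmax iterations are
    exhausted without convergence."""
    Z = Z0.copy()
    prev = 2.0 * U0max                  # so that Theta_0 = ||dZ^0|| / (2 max U0)
    for k in range(kmax):
        dZ = solve_step(Z)
        nrm = np.linalg.norm(dZ)
        Z = Z + dZ
        if nrm <= eta:
            return Z                    # converged
        theta = nrm / prev              # convergence rate Theta_k
        prev = nrm
        if theta >= 1.0:
            return None                 # diverging
        if theta ** (kmax - k - 1) * nrm >= eta:
            return None                 # predicted not to converge in time
    return None                         # kmax iterations exhausted
```

A contracting correction sequence converges well below the tolerance, while a growing one triggers the restart signal after the rate $\Theta_k$ exceeds one.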
For an iterative linear solver like GMRES we define another accuracy tolerance, ${\eta_{\mathrm{LS}}}$, as a stopping criterion. This tolerance is chosen such that ${\eta_{\mathrm{LS}}}=\kappa {\eta_{\mathrm{Newt}}}$, with $\kappa \leq 1$. In this work we consider, for instance, $\kappa = 10^{-2}$, unless noted otherwise. If the linear solver takes too many iterations to converge, henceforth more than ${k_{\mathrm{LS},\disc J}}$ iterations, we update the Jacobians in the Newton method. Re--factorization and preconditioning would be necessary in this case. The same ideas apply for the numerical implementation of the SDIRK schemes, considering at each stage system \begin{align}\label{eq2:Newton_simp_SDIRK_dt} ((\Delta t\gamma)^{-1}{\mathbf{Id}}_{m \times N} - \disc J_0) \delta \disc z_i^k = & -(\Delta t\gamma)^{-1}\disc z_i^k+ \disc F(t_0+c_i\Delta t,\disc U_0+ \disc z^k_i ) \nonumber \\ & + \sum_{j=1}^{i-1} \frac{a_{ij}}{\gamma}\disc F(t_0+c_j\Delta t,\disc U_0+ \disc z_j), \end{align} instead of (\ref{eq2:Newton_simp_SDIRK}). In principle the same block--matrix needs to be factorized for all Newton iterations at all stages. The stopping criterion (\ref{eq:stop_newton}), the convergence rate (\ref{eq:conv_rate}), as well as the conditions for halving the time step are all applied stage--wise, that is, with $\|\delta \disc z^{k}_i \|_2$ instead of $\|\delta \disc Z^{k} \|_2$. Similarly, the Jacobian is updated in (\ref{eq2:Newton_simp_SDIRK_dt}) after ${k_{\mathrm{LS},\disc J}}$ iterations. \subsection{Time--stepping strategy} Since we consider only $A$--stable IRK schemes in this work, the choice of the time step is not constrained by stability issues and can be based solely on accuracy requirements. For some kinds of problems, a constant time step might be sufficient to capture the problem dynamics. However, more generally, adaptive time--stepping can be considered in order to enhance the computational efficiency.
In either case, the main goal is to define a time step $\Delta t$ such that the local error satisfies \begin{equation}\label{eq2:tolerance_err} \|\disc U(t_0+\Delta t) - \disc U_1\|_2 = C \Delta t^{p+1} \leq {\eta_{\mathrm{RK}}}, \end{equation} where ${\eta_{\mathrm{RK}}}$ is the desired accuracy tolerance for the $p$--th order IRK scheme. The advantage of higher order methods is that they can satisfy (\ref{eq2:tolerance_err}) with larger time steps than those achievable with conventional low order methods. A standard approach to time step control is based on numerically approximating the exact local error in (\ref{eq2:tolerance_err}), by considering a solution $\hat{\disc U}_1$ computed by a lower order method of order $\hat{p}<p$ (see, for instance, \cite{Hairer87}). In this way we use the computations at the $n$--th step to predict the local error at the next step, \begin{equation}\label{eq:err_estimate} err = \| \disc U_n - \hat{\disc U}_n \|_2 \approx \tilde{C}_n \Delta t_n^{\hat{p}+1}, \end{equation} which defines a new time step, \begin{equation}\label{eq2:time_stepping1_n} \Delta t_{{\mathrm{new}}} = \Delta t_n \ds \left(\frac{{\eta_{\mathrm{RK}}}}{err}\right)^{1/(\hat{p}+1)}, \end{equation} by assuming that $ {\eta_{\mathrm{RK}}} \approx \tilde{C}_{n+1} \Delta t_{{\mathrm{new}}}^{\hat{p}+1}$ with $\tilde{C}_{n+1} \approx \tilde{C}_n $. The next time step $\Delta t_{n+1}$ will be based on $\Delta t_{{\mathrm{new}}}$ if the current approximation error satisfies $err \leq {\eta_{\mathrm{RK}}}$. Otherwise, the current $n$--th solution will be disregarded, and the same $n$--th step will be integrated again with $\Delta t_{{\mathrm{new}}}$ instead of $\Delta t_n$. The lower order approximations are defined in Appendix~\ref{app:embeddedRK}.
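As a minimal illustration of this controller (function name hypothetical): assuming $err \approx \tilde{C} \Delta t^{\hat{p}+1}$ with $\tilde{C}$ roughly constant between steps, the time step matching the tolerance follows from a power of the error ratio.

```python
def new_time_step(dt, err, eta_rk, p_hat):
    """Elementary step-size controller: with err ~ C dt**(p_hat + 1) and C
    roughly constant, the step matching the tolerance is
    dt_new = dt * (eta_rk / err)**(1 / (p_hat + 1))."""
    return dt * (eta_rk / err) ** (1.0 / (p_hat + 1.0))
```

For instance, with a third order embedded estimate ($\hat{p}=3$), an error sixteen times above the tolerance halves the time step.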
Inspired by \cite{Hairer96}, we define a safety factor $\nu_k$ that depends on the current Newton iteration $k$, the current linear solver iteration ${k_{\mathrm{LS}}}$, and the maximum number of Newton iterations ${k_{\mathrm{Newt},\max}}$, as follows \begin{equation}\label{eq:def_nu} \nu_k = \nu \times \frac{2{k_{\mathrm{Newt},\max}} +1}{2{k_{\mathrm{Newt},\max}} + \max(k,0.5{k_{\mathrm{LS}}})}, \end{equation} where $\nu>0$ is a standard safety factor close to 1. Here we typically consider $\nu = 0.9$. For the SDIRK schemes, where more than one Newton solve is required per time step, $k$ and ${k_{\mathrm{LS}}}$ in (\ref{eq:def_nu}) stand, respectively, for the maximum number of Newton and linear iterations performed within a given time step. The time step $\Delta t_{n+1}$ is thus defined as \begin{equation}\label{eq:timestepping_dyn} \Delta t_{n+1} = \min(\nu_k \Delta t_{{\mathrm{new}}}, \alpha \Delta t_{n}), \end{equation} where $\alpha>1$ limits the variation of successive time steps. Here we consider in general $\alpha = 1.5$. However, if the computations were to be performed with a constant time step $\Delta t$, we would consider the following time--stepping procedure \begin{equation}\label{eq:timestepping_const} \Delta t_{n+1} = \min(\alpha \nu_k \Delta t_{n}, \Delta t), \end{equation} which allows modifications on the chosen time step based on the performance of the Newton and linear solvers. In general the initial time step $\Delta t_0$ should be set sufficiently small to account for potentially fast transients. The numerical accuracy of the time integration is defined by the user--provided tolerance parameter, ${\eta_{\mathrm{RK}}}$. The tolerance parameter for the Newton solver is set to a lower value: ${\eta_{\mathrm{Newt}}} = \kappa {\eta_{\mathrm{RK}}}$, with $\kappa < 1$. In this way errors coming from both the Newton and linear solvers should remain smaller than those caused by the IRK scheme. 
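The safety factor (\ref{eq:def_nu}) and the limiter (\ref{eq:timestepping_dyn}) can be sketched together as follows (function name hypothetical, default parameters as in the text).

```python
def next_time_step(dt_new, dt_n, k, k_ls, k_newt_max=30, nu=0.9, alpha=1.5):
    """Safety factor nu_k of the text: it shrinks when the Newton (k) or
    linear (k_LS) solver worked hard; then
    dt_{n+1} = min(nu_k * dt_new, alpha * dt_n)."""
    nu_k = nu * (2 * k_newt_max + 1) / (2 * k_newt_max + max(k, 0.5 * k_ls))
    return min(nu_k * dt_new, alpha * dt_n)
```

With few solver iterations the factor stays close to $\nu$, while the bound $\alpha \Delta t_n$ prevents abrupt step increases.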
\section{Numerical illustrations}\label{sec:num_res} We investigate the computational performance of the numerical strategy for three problems modeled by time--dependent stiff PDEs. In this work all the simulations were run on a standard laptop with an Intel Core i3 @ $2.27$ GHz processor and a memory capacity of $1.8$ GB. \subsection{The Belousov--Zhabotinski reaction}\label{subsec:BZ} Let us consider the numerical approximation of a model for the Belousov--Zha\-bo\-tins\-ki (BZ) reaction, a catalyzed oxidation of an organic species by acidic bromate ions (see \cite{Epstein98} for more details and illustrations). The present mathematical formulation \cite{Field72,Scott94} takes into account three species: hypobromous acid $\mathrm{HBrO_2}$, bromide ions $\mathrm{Br^-}$, and cerium (IV). Denoting by $a=[\mathrm{Ce(IV)}]$, $b=[\mathrm{HBrO_2}]$, and $c=[\mathrm{Br^-}]$, we obtain a very stiff system of three PDEs given by \begin{equation} \label{eq4:bz_eq_3var_diff} \left. \begin{array}{l} \partial_t a - D_a\, \partial^2 _{\boldsymbol{x}} a = \ds \frac{1}{\mu}(-qa-ab+fc),\\[1.75ex] \partial_t b - D_b\, \partial^2 _{\boldsymbol{x}} b = \ds \frac{1}{\varepsilon}\left(qa-ab+b(1-b)\right),\\[1.75ex] \partial_t c - D_c\, \partial^2 _{\boldsymbol{x}} c = b-c, \end{array} \right\} \end{equation} where ${\boldsymbol{x}} \in {\mathbb R}^d$, with real, positive parameters: $f$, small $q$, and small $\varepsilon$ and $\mu$, such that $\mu \ll \varepsilon \ll 1$. In this study: $\varepsilon = 10^{-2}$, $\mu = 10^{-5}$, $f=1.6$, $q=2\times 10^{-3}$; with diffusion coefficients: $D_a=2.5\times 10^{-3}$, $D_b=2.5\times 10^{-3}$, and $D_c=1.5\times 10^{-3}$. The dynamical system associated with this problem models reactive, excitable media with a large time scale spectrum (see \cite{Scott94} for more details).
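The reaction source terms of (\ref{eq4:bz_eq_3var_diff}) can be sketched as follows (diffusion omitted, function name hypothetical) with the parameter values above; note the $1/\mu$ factor that makes variable $a$ the stiffest component.

```python
# BZ reaction terms (diffusion omitted) with the parameter values of the text
eps, mu, f, q = 1e-2, 1e-5, 1.6, 2e-3

def bz_reaction(a, b, c):
    """Right-hand sides of the three-variable BZ model, source terms only."""
    da = (-q * a - a * b + f * c) / mu      # stiffest: 1/mu = 1e5
    db = (q * a - a * b + b * (1.0 - b)) / eps
    dc = b - c
    return da, db, dc
```

Evaluating at a generic state shows the disparity of time scales between the components, which motivates the use of $L$--stable, stiffly accurate integrators.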
The spatial configuration with the addition of diffusion involves propagating wavefronts with steep spatial gradients; in particular, two--dimensional spiral waves and three--dimensional scroll waves \cite{Duarte11_SISC}. \subsubsection{Numerical time integration errors}\label{subsubsec:BZ_1D} We consider problem (\ref{eq4:bz_eq_3var_diff}) in a one--dimensional configuration with Neumann homogeneous boundary conditions, discretized on a uniform grid of 1024 cells over a space region of $[0,1]$. A standard, second order, centered finite volumes scheme is employed for the diffusion term. No grid adaptation is considered here in order to assess only the numerical errors related to the time integration schemes. To obtain an initial condition, we initialize the problem with a discontinuous profile close to the left boundary; we then integrate in time until the BZ wavefronts are fully developed. Figure~\ref{fig:sol_BZ_nx1001} shows the time evolution of the propagating waves for a time window of $[0,1]$. In order to compute the local errors associated with the implicit solvers here considered, we define a reference solution for the resulting semi--discrete problem. The latter is chosen here as the solution obtained using the Radau5 scheme (\ref{eq:Radau5}), computed with a fine tolerance: ${\eta_{\mathrm{RK}}} = 10^{-14}$. 
\begin{figure}[!htb] \begin{center} \includegraphics[width=0.49\textwidth]{sol_BZ_nx1001_vara.pdf} \includegraphics[width=0.49\textwidth]{sol_BZ_nx1001_varb.pdf} \includegraphics[width=0.49\textwidth]{sol_BZ_nx1001_varc.pdf} \end{center} \caption{One--dimensional BZ propagating waves for variables $a$ (top left), $b$ (top right), and $c$ (bottom), at time intervals of $0.2$ within $[0,1]$ from left to right.} \label{fig:sol_BZ_nx1001} \end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[width=0.49\textwidth]{local_order_sdirk.pdf} \includegraphics[width=0.49\textwidth]{local_order_radau.pdf} \includegraphics[width=0.49\textwidth]{local_order_sdirk_varc.pdf} \includegraphics[width=0.49\textwidth]{local_order_radau_varc.pdf} \end{center} \caption{Local $L^2$--errors for stiff and non--stiff components, respectively, $a$ (top) and $c$ (bottom) using the SDIRK2, SDIRK3 and SDIRK4 schemes (left), and the Euler, Radau3 and Radau5 ones (right). Dashed lines of slopes 2 to 5 (top), 3 to 5 (bottom left), and 2, 4 and 5 (bottom right) are also depicted. Error estimates $err$ given by (\ref{eq:err_estimate}) are indicated with red bullets ($\color{red}{\bullet }$) (top) for the SDIRK4 (left) and Radau5 (right) schemes.} \label{fig:local_order_BZ1D} \end{figure} Starting from the solution at $t=0.5$, Figure~\ref{fig:local_order_BZ1D} shows the local errors associated with each IRK scheme for different time steps. Both tolerances for the Newton and the linear solver are set to ${\eta_{\mathrm{Newt}}} = {\eta_{\mathrm{LS}}} = 10^{-14}$ in these computations. Notice that the stiffest variable, $a$, is directly subject to a time scale given by the small parameter $\mu = 10^{-5}$. We are thus in practice interested in time steps larger than $10^{-5}$. Considering the stiff and non--stiff components of (\ref{eq4:bz_eq_3var_diff}), $a$ and $c$, respectively, we see the following numerical behavior. 
For the stiff variable (see Figure~\ref{fig:local_order_BZ1D} (top)), local errors of ${\mathcal O}(\Delta t^{p+1})$ tend to ${\mathcal O}(\Delta t^{q+1})$ for relatively large time steps. For the non--stiff variable (see Figure~\ref{fig:local_order_BZ1D} (bottom)), the order reduction goes from ${\mathcal O}(\Delta t^{p+1})$ to ${\mathcal O}(\Delta t^{q+2})$. Local errors for variable $b$ (not shown), stiffer than $c$, also behave as the ones for the stiffest component, variable $a$. These results are consistent with the classical, theoretical bounds derived in \cite{Hairer88} for stiff ODEs in {\it singular perturbation} form, that is, containing a small stiffness parameter given by $\mu$ in our case. These results highlight the importance of the stage order for IRK schemes and stiff problems. In this respect RadauIIA schemes perform better than SDIRK methods. The same can be said with respect to stiffly accurate schemes when comparing, for instance, SDIRK3 with Radau3; more accurate results are obtained with the latter. As a matter of fact, a well--known conclusion is that stiffly accurate schemes guarantee better accuracies for stiff problems \cite{Prothero74,Alexander77,Hairer88}. Figure~\ref{fig:local_order_BZ1D} also shows the error estimates $err$ given by (\ref{eq:err_estimate}) for both SDIRK4 and Radau5 schemes. Notice that the actual local errors are bounded by $err$, which in particular overestimates them since $err$ is computed using a third order, embedded scheme in both cases. Finally, it has to be remarked that higher order schemes perform better in terms of numerical accuracy than low order ones like the first order Euler method, even when order reduction appears and all methods show the same low order convergence. 
\subsubsection{Performance comparison} We now consider problem (\ref{eq4:bz_eq_3var_diff}) in a two--dimensional configuration with Neumann homogeneous boundary conditions, using multiresolution analysis to dynamically adapt the spatial discretization grid. For the multiresolution analysis the following input parameters are considered: number of roots per direction, $N_{{\rm R}x}=N_{{\rm R}y}=1$; maximum grid--level, $J=10$; and accuracy tolerance, ${\eta_{\mathrm{MR}}} = 10^{-3}$. The finest grid has a spatial resolution of $1024 \times 1024$ over a computational domain of $[0,1]\times [0,1]$. We consider both SDIRK4 and Radau5 schemes with the following parameters: ${k_{\mathrm{Newt},\max}} = 30$, ${k_{\mathrm{LS},\disc J}} = {k_{\mathrm{Newt},\max}}$, and $\kappa = 10^{-1}$, recalling that ${\eta_{\mathrm{LS}}} = \kappa {\eta_{\mathrm{Newt}}} = \kappa^2 {\eta_{\mathrm{RK}}}$. The initial solution is taken at $t=2$, when the spiral waves are fully developed (see Figure~\ref{fig:sol_BZ_2D}), and the PDEs are then integrated until $t=2.01$. (See \cite{Duarte11_SISC} for details on the initialization of this two--dimensional configuration.) The data compression, defined as the ratio in percentage between the active and the finest grids, is about 15\%.
\begin{figure}[!htb] \begin{center} \includegraphics[width=0.495\textwidth]{sol_ini.pdf} \includegraphics[width=0.495\textwidth]{grid_ini_zoom.pdf} \end{center} \caption{Two--dimensional BZ propagating waves for variable $a$ at $t=2$ (left) and the corresponding adapted grid for the zoomed region $[0.5,1]\times[0,0.5]$ (right).} \label{fig:sol_BZ_2D} \end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[width=0.49\textwidth]{dt_sdirk4.pdf} \includegraphics[width=0.49\textwidth]{dt_rad5.pdf} \end{center} \caption{Time--stepping with different accuracy tolerances, ${\eta_{\mathrm{RK}}}$, for the SDIRK4 (left) and Radau5 (right) schemes.} \label{fig:BZ_2D_dt} \end{figure} \begin{table}[!htb] \caption{Time integration with SDIRK4 for $t\in[2,2.01]$: number of time steps, $n$; maximum time step used, $\max \Delta t_n$; maximum number of Newton iterations, $\max k$; maximum number of GMRES iterations, $\max {k_{\mathrm{LS}}}$; CPU time in seconds.} \label{TableSDIRK4} \begin{center} \begin{tabular}{l|c|c|c|c|c|c|} \cline{2-7} \tlvs & \multirow{2}{*}{${\eta_{\mathrm{RK}}}$} & \multicolumn{5}{|c|}{SDIRK4} \\ \cline{3-7} \tlvs & &$n$ & $\max \Delta t_n$ & $\max k$ (per stage) & $\max {k_{\mathrm{LS}}}$ & CPU time (s) \\ \cline{2-7} \tlvs & $10^{-3}$ & $11$ & $1.95\times10^{-3}$ & $13$ ($3$) & $15$ & $171.93$ \\ \tlvs & $10^{-4}$ & $15$ & $1.01\times10^{-3}$ & $17$ ($4$) & $13$ & $269.28$ \\ \tlvs & $10^{-5}$ & $26$ & $4.65\times10^{-4}$ & $17$ ($4$) & $11$ & $472.14$ \\ \tlvs & $10^{-6}$ & $45$ & $2.51\times10^{-4}$ & $19$ ($4$) & $10$ & $837.62$ \\ \cline{2-7} \end{tabular} \end{center} \end{table} \begin{table}[!htb] \caption{Time integration with Radau5 for $t\in[2,2.01]$: number of time steps, $n$; maximum time step used, $\max \Delta t_n$; maximum number of Newton iterations, $\max k$; maximum number of GMRES iterations, $\max {k_{\mathrm{LS}}}$; CPU time in seconds.} \label{TableRadau5} \begin{center} \begin{tabular}{l|c|c|c|c|c|c|} \cline{2-7} \tlvs &
\multirow{2}{*}{${\eta_{\mathrm{RK}}}$} & \multicolumn{5}{|c|}{Radau5} \\ \cline{3-7} \tlvs & &$n$ & $\max \Delta t_n$ & $\max k$ & $\max {k_{\mathrm{LS}}}$ & CPU time (s) \\ \cline{2-7} \tlvs & $10^{-3}$ & $10$ & $2.56\times10^{-3}$ & $5$ & $60$ & $892.98$ \\ \tlvs & $10^{-4}$ & $12$ & $1.64\times10^{-3}$ & $6$ & $56$ & $1268.11$ \\ \tlvs & $10^{-5}$ & $19$ & $6.97\times10^{-4}$ & $6$ & $35$ & $2311.30$ \\ \tlvs & $10^{-6}$ & $37$ & $2.98\times10^{-4}$ & $5$ & $21$ & $3416.22$ \\ \cline{2-7} \end{tabular} \end{center} \end{table} Figure~\ref{fig:BZ_2D_dt} shows the evolution of the time steps according to (\ref{eq:timestepping_dyn}), considering $\alpha=1.5$ and $\Delta t_0 = 10^{-4}$ at $t=2$, for various accuracy tolerances: ${\eta_{\mathrm{RK}}}$ between $10^{-3}$ and $10^{-6}$. For this particular problem a roughly constant time step is attained, consistent with the quasi--constant propagation speed of the wavefronts. Tables~\ref{TableSDIRK4} and \ref{TableRadau5} gather information on the performance of both solvers over the time window $[2,2.01]$. As also seen in Figure~\ref{fig:BZ_2D_dt}, larger time steps are used with the Radau5 scheme for a given accuracy tolerance, even though both schemes rely on a third order, embedded method to compute dynamically the integration time steps (\ref{eq2:time_stepping1_n}). For SDIRK4, increasing the accuracy of the Newton solver involves more iterations even when smaller time steps are considered, showing a rather weak dependence on the time step size and a Newton solver driven mainly by its accuracy tolerance. A different behavior is observed for Radau5, where smaller time steps involve roughly the same number of Newton iterations regardless of the Newton accuracy tolerance, meaning that smaller time steps effectively improve the Newton convergence.
All this is a direct consequence of the initialization of the Newton solver: for SDIRK4, the Newton solver at each stage is initialized using the previous stage solution, hence with a value computed at some time within the current time step; this is not the case for the present Radau5 solver, for which the larger the time step, the worse the initial approximation. In terms of the iterative linear solver, the number of iterations decreases considerably with smaller time steps, even when tighter convergence tolerances are considered. This is a direct consequence of the better preconditioning of the more diagonally dominant matrices in (\ref{eq2:Newton_simp}) and (\ref{eq2:Newton_simp_SDIRK}) for relatively small time steps. In terms of CPU time, Tables~\ref{TableSDIRK4} and \ref{TableRadau5} show that SDIRK4 is approximately $4$ to $5$ times faster than Radau5. Updating the grid together with the multiresolution operations takes approximately $10$ to $13\,$\% of the CPU time for SDIRK4, whereas the time integration takes $84$--$90\,$\%. These numbers are within the range of values found in the literature for adaptive grid techniques (see, {\it e.g.}, \cite{Duarte_Phd}). For Radau5 the multiresolution load goes down to $2\,$\%, with roughly $98\,$\% of the CPU time allocated to the time integration, revealing a clear performance issue. There are two main reasons why a straightforward implementation of Radau5 is not fully satisfactory. First of all, a better initialization of the Newton solver is required to improve its convergence rate, regardless of the linear solver considered. For example, in \cite{Hairer96} (\S~IV.8) all stages are initialized by extrapolating from the previous time step, using an interpolation polynomial based on the quadrature order conditions.
Even if an adaptive grid technique can considerably reduce the increase in data storage that the latter procedure involves for multi--dimensional PDEs, we still have to introduce additional operations to initialize grid points that were not present during the previous time step. The second problem is related to the size of the algebraic systems, which is basically tripled in the case of Radau5 and heavily impacts the performance of the linear solver. In this particular implementation, most of the overhead is related to the preconditioning ILUT solver, which was implemented as a {\it black box}, contrary to the GMRES solver. A tailored ILUT solver, implemented specifically for this data structure, may already improve its performance before considering parallel computing implementations. \subsection{Ignition model of diffusion flames}\label{subsec:Ignition} We now consider the mathematical model derived in \cite{Thevenin95} to investigate the ignition dynamics of a diffusion flame formed while a reactive layer is being rolled up in a vortex. The hydrodynamics is decoupled from the species and energy transport equations by adopting a standard thermo--diffusive approximation, leading to a reaction--diffusion--convection model. A two--dimensional computational domain is considered, where pure and fresh hydrogen at temperature $T_{{\mathrm{F}},0}$ initially occupies the upper half of the domain, while the remaining lower half is occupied by hot air at $T_{{\mathrm{O}},0}$. By defining a Shvab--Zel'dovich variable $Z$ and a reduced temperature $\theta$ given by \begin{equation}\label{eq12:theta} \theta = \frac{T-T_{{\mathrm{O}},0}}{T_{{\mathrm{F}},0}-T_{{\mathrm{O}},0}}, \end{equation} the mathematical model is given by a system of equations of the form \cite{Thevenin95}: \begin{equation}\label{eq12:eq_z_theta} \left.
\begin{array}{l} \ds \partial _{t} Z + v_{x} \partial _{x} Z + v_{y} \partial _{y} Z - \left(\partial ^2_{x} Z + \partial ^2_{y} Z \right)= 0, \\[2.5ex] \ds \partial _{t} \theta + v_{x} \partial _{x} \theta + v_{y} \partial _{y} \theta - \left(\partial ^2_{x} \theta + \partial ^2_{y} \theta \right) = F(Z,\theta), \end{array} \right\} \end{equation} \begin{equation*}\label{eq12:F_Z_theta} F(Z,\theta)= {\rm Da}\, \phi \chi Y_{{\mathrm{O}},0} \left[ \frac{1-Z}{\phi \tau} + \frac{1}{\chi}(Z-\theta) \right] \left[ Z + \frac{\tau}{\chi}(Z-\theta) \right] {\mathrm e}^{\left(- \tau_a/(1+\tau \theta)\right)}, \end{equation*} with constant physical parameters: ${\rm Da}=1.65\times 10^{7}$, $\phi=34.782608696$, $\chi=50$, $Y_{{\mathrm{O}},0}=0.23$, $\tau=-0.7$, and $\tau_a=8$, corresponding to $T_{{\mathrm{F}},0}=300\,$K and $T_{{\mathrm{O}},0}=1000\,$K. The velocity field $(v_{x},v_{y})$ is given by a single vortex, varying strongly in time and space, centered on the planar interface between the two media. Its tangential velocity is given by \begin{equation}\label{eq12:velocity_adim} v_{\theta}(r,t)= \ds \frac{{\rm Re}\, {\rm Sc}}{r} \left(1-{\mathrm e}^{- r^2/(4\, {\rm Sc}\, t)} \right), \end{equation} where $r(x,y)$ stands for the distance to the vortex center $(x_0,y_0)=(0,0)$, and with Reynolds and Schmidt numbers of ${\rm Re}=1000$ and ${\rm Sc}=1$, respectively. In Cartesian coordinates, the velocity of a counter-clockwise rotating vortex is thus given by \begin{equation*}\label{eq12:velocity_cart} v_{x} = \ds \left( \frac{y - y_{0} }{r}\right) v_{\theta}, \quad v_{y} = - \ds \left( \frac{x - x_{0} }{r}\right) v_{\theta}, \quad r = \left[ (x - x_{0})^2 + (y - y_{0})^2 \right]^{1/2}. \end{equation*} The physics of the phenomenon can be briefly described as follows. A rotating vortex is introduced at $t = 0$. The resulting forced convection is superposed on the diffusive mechanisms and accelerates the mixing of the gases.
A diffusion flame then ignites along the contact surface of the two media, owing to the large temperature difference in those regions. Once the flame is completely ignited, it propagates outwards from the center of the computational domain. The complete phenomenon thus encompasses very different physical regimes, such as mixing, ignition, and propagation, which can be characterized depending on the initial reactant configuration and on the imposed velocity field, as studied in detail in \cite{Thevenin95}. \subsubsection{High order temporal approximations} We consider problem (\ref{eq12:eq_z_theta}) in a two--dimensional configuration with Neumann homogeneous boundary conditions, using multiresolution analysis to dynamically adapt the spatial discretization grid. The convective term is discretized in space using a standard first--order upwind scheme. As before, the multiresolution analysis is parametrized as follows: number of roots per direction, $N_{{\rm R}x}=N_{{\rm R}y}=1$; maximum grid--level, $J=10$; and accuracy tolerance, ${\eta_{\mathrm{MR}}} = 10^{-3}$. The finest grid thus has a spatial resolution of $1024 \times 1024$ over a computational domain of $[-1,1]\times [-1,1]$. The model is simulated over the time window $[0,1.5\times 10^{-4}]$, using all the time integration solvers previously described. In all cases the following parameters were chosen: ${k_{\mathrm{Newt},\max}} = 30$, ${k_{\mathrm{LS},\disc J}} = {k_{\mathrm{Newt},\max}}$, and $\kappa = 10^{-2}$. For this highly unsteady problem the number of active grid cells increases from approximately 3\% of $1024^2$ for the initial inert configuration, up to 13\% at the final time when the diffusion flame is fully ignited along the contact surface (see Figure~\ref{fig:sol_Ignition_2D}). (See \cite{DuarteCFlame} for further details on the initialization of this problem.)
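The vortex velocity field (\ref{eq12:velocity_adim}) can be evaluated pointwise with a few lines of code; the following is an illustrative re-implementation (the function name is ours, the singular point $r=0$ is left undefined as in the analytic formula, and we simply follow the sign convention of the Cartesian expressions above):

```python
import math

def vortex_velocity(x, y, t, Re=1000.0, Sc=1.0, x0=0.0, y0=0.0):
    """Cartesian velocity components of the decaying vortex: tangential
    speed v_theta = Re*Sc/r * (1 - exp(-r^2/(4*Sc*t))), projected on x, y.
    Undefined at the vortex center r = 0 (as in the analytic formula)."""
    r = math.hypot(x - x0, y - y0)
    v_theta = Re * Sc / r * (1.0 - math.exp(-r**2 / (4.0 * Sc * t)))
    vx = (y - y0) / r * v_theta
    vy = -(x - x0) / r * v_theta
    return vx, vy
```

Evaluating this field at every active grid point at each stage is what makes the convective term time--dependent in the discretized system.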
\begin{figure}[!htb] \begin{center} \includegraphics[width=0.495\textwidth]{sol_final.pdf} \includegraphics[width=0.495\textwidth]{grid_final_zoom.pdf} \end{center} \caption{Two--dimensional ignition model. Temperature $T$ deduced from (\ref{eq12:theta}) at $t=1.5\times 10^{-4}$ (left) and the corresponding adapted grid for the zoomed region $[-1,0]\times[-1,0]$ (right).} \label{fig:sol_Ignition_2D} \end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[width=0.49\textwidth]{Tmax_SDIRK.pdf} \includegraphics[width=0.49\textwidth]{Tmax_Radau.pdf} \includegraphics[width=0.49\textwidth]{Tmax_eta1e3.pdf} \includegraphics[width=0.49\textwidth]{dt_SDIRK.pdf} \end{center} \caption{Evolution of the maximum temperature $T_{max}$ using a time step of $\Delta t=10^{-5}$ for the SDIRK (top left) and Radau (top right) solvers. The solution computed with Radau5 and $\Delta t=10^{-6}$ is depicted with a solid black line. Similarly, time-adaptive solutions based on a tolerance of ${\eta_{\mathrm{RK}}}=10^{-3}$ are shown (bottom left). The various time steps considered for the SDIRK solvers are also illustrated (bottom right); in all cases, $\Delta t_0=10^{-8}$.} \label{fig:res_Ignition} \end{figure} Figure~\ref{fig:res_Ignition} shows the time evolution of the maximum temperature $T_{max}$ throughout the computational domain. Notice that the initial $T_{max}$ corresponds to $T_{{\mathrm{O}},0}=1000\,$K, the hot air; however, the fuel is initially at a much lower temperature of $T_{{\mathrm{F}},0}=300\,$K and thus the local temperature changes are in fact even more dramatic. First, we consider a constant time step of $\Delta t=10^{-5}$, but with the time--stepping procedure given by (\ref{eq:timestepping_const}). In all cases an initial time step of $\Delta t_0=10^{-8}$ was considered, taking into account that the velocity field (\ref{eq12:velocity_adim}) radically changes during the first time step. 
A tolerance of ${\eta_{\mathrm{LS}}}=10^{-5}$ is considered for the Newton solver. Figure~\ref{fig:res_Ignition} (top) clearly shows the difference between the approximations obtained with the different discretization orders. As a reference solution we consider the one obtained with Radau5 and $\Delta t=10^{-6}$. The first order Euler solver introduces an ignition delay of the order of one time step. This delay is corrected by increasing the order of the time discretization while keeping the same time step. Considering the third order approximations, SDIRK3 and Radau3, the latter performs much better for this particular problem because of its $L$--stability properties, given the strong transients modeled by the stiff system (\ref{eq12:eq_z_theta}). As a matter of fact, the $L$--stable SDIRK2 also performs better than SDIRK3. The difference in quality of the approximations can be further assessed by considering the final temperature at $t=1.5\times 10^{-4}$, once the ignition process is achieved and the strongest transients resolved. The final $T_{max}$ goes from $2254.09\,$K for the Euler solver to $2227.96\,$K and $2224.74\,$K for SDIRK4 and Radau5, respectively. For comparison, the reference Radau5 solution yields $2224.95\,$K. With a smaller time step of $\Delta t=10^{-6}$, the Euler solution still shows a considerable difference (see Figure~\ref{fig:res_Ignition} (bottom left)), with a final $T_{max}$ of $2219.54\,$K. All the other solvers with $\Delta t=10^{-6}$ yield solutions within less than $0.3\,$K of the Radau5 solution, except for SDIRK3, which differs by about $1\,$K. As an illustration, Figure~\ref{fig:res_Ignition} (bottom right) also depicts the time steps used by the various SDIRK solvers.
Recalling that the time--stepping strategy is actually influenced by the performance of the Newton and linear solvers, we can see the impact of the strong physical changes during the ignition process, as all solvers need at some point to use smaller time steps. Similar behaviors are observed for the Euler and Radau solvers (not shown). In particular, it can be seen again that the non--$L$--stable SDIRK3 is the most affected solver. Notice that for these numerical experiments we have chosen a relatively large ${k_{\mathrm{Newt},\max}}$, as this allows for larger time steps according to (\ref{eq:def_nu}); consequently, ${k_{\mathrm{LS},\disc J}}$ is also large and the Jacobians are never recomputed. Time steps are thus reduced due to a poor convergence rate of the Newton solver; a more conservative, lower ${k_{\mathrm{Newt},\max}}$ prevents such poor convergence rates, since time steps would not even attain $10^{-5}$. In general, a careful tuning of the parameters should be conducted with this ``constant'' time step strategy in order to get the best possible performance, and this tuning can be highly problem--dependent. That is why a time--stepping strategy based on an accuracy tolerance is very convenient: the time steps can be effectively adapted to the various physical scenarios within a prescribed accuracy, while reducing the influence of the many parameters related to the Newton and linear solvers. Time--adaptive solutions are shown in Figure~\ref{fig:res_Ignition} (bottom left) for SDIRK4 and Radau5 with ${\eta_{\mathrm{RK}}}=10^{-3}$. In terms of CPU time, with $\Delta t=10^{-5}$ the Euler solver takes approximately $3.6$ minutes, compared to $5$ and $14.9$ minutes for SDIRK4 and Radau5, respectively; but the physics simulated with the first order method diverges considerably from the correct one.
The time--adaptive SDIRK4 with ${\eta_{\mathrm{RK}}}=10^{-3}$ takes approximately $4$ minutes, making it a very promising alternative to the cheaper but less accurate Euler scheme, especially considering that a more accurate Euler solver with $\Delta t=10^{-6}$ takes about $6$ minutes. \section{Concluding remarks}\label{sec:conclusion} We have considered high order, implicit integration schemes to solve stiff multi--dimensional PDEs on adaptive multiresolution grids. Such an adaptive technique yields highly compressed representations within a user--prescribed accuracy tolerance, considerably reducing the computational requirements of implicit Runge--Kutta schemes. In particular, a competitive time--space adaptive strategy was introduced to simulate models involving different physical scenarios with a broad spectrum of time and space scales, within a user--specified level of accuracy. By designing an appropriate procedure to evaluate functions and represent linear systems within the multiresolution data structure, we have implemented several implicit Runge--Kutta schemes of SDIRK-- and RadauIIA--type. The resulting linear systems are completely independent of the grid generation or of any other grid--related data structure or geometric consideration. Solving the algebraic problems then constitutes an aspect separate from the multiresolution analysis itself, and the same procedure remains perfectly valid for other space adaptive techniques. Three stiff models have been investigated to assess the computational performance of the numerical strategy in terms of accuracy and CPU time. The computational analyses have thus shown that stiff PDEs can be effectively approximated with high order time discretization schemes with very limited computational resources. In particular, SDIRK schemes require roughly the same amount of memory as a standard, low order Euler method.
More memory--demanding RadauIIA schemes can also be employed in conjunction with adapted grids; however, as previously discussed, further enhancements are required to achieve better computational performance. It was also shown that even in the presence of order reduction, high order schemes yield more accurate solutions than low order ones. The advantages of high order discretizations have been especially highlighted when dealing with highly unsteady problems. However, for problems of even larger size, parallel computing capabilities must be developed within the current context to achieve overall satisfactory results. Additionally, high order space discretization schemes, well--suited for implicit time integration \cite{Noskov2005,Noskov2007,Dobbins2010}, could also be considered in conjunction with grid adaptation to further enhance the computational performance. These issues constitute particular topics of our current research.
\section{Introduction} The development of transverse momentum dependent (TMD) factorization theorems (\cite{Angeles-Martinez:2015sea} and references therein) has led to an increase of precision in predictions of observables such as the Drell-Yan (DY) transverse momentum spectrum.\\ The Parton Branching (PB) method~\cite{HAUTMANN2017446,Hautmann2018,PhysRevD.99.074008} provides an angular ordered evolution for TMD parton distribution functions (TMD PDFs or TMDs), expressed in terms of real-emission splitting functions and Sudakov form factors. The PB TMDs were fitted~\cite{PhysRevD.99.074008} to the full HERAI+II inclusive DIS data using the {\sc xFitter}~\cite{Alekhin2015} framework and are available in TMDlib~\cite{Hautmann_2014}, a library for TMDs and unintegrated PDFs. These TMDs were applied to Drell-Yan production \cite{PhysRevD.100.074027,HAUTMANN2019114795,Martinez2020}.\\ With new software developments, such as the release of TMDlib2~\cite{Abdulov_2021}, which includes new functionalities (e.g.\ the treatment of TMD uncertainties), and the newest version of the Monte Carlo event generator {\sc CASCADE}~\cite{Jung2010} ({\sc CASCADE3}~\cite{Baranov2021}), which includes an initial state parton shower fully consistent with the PB TMDs, the applications of the PB TMDs will increase.\\ This article gives an overview of recent developments within the Parton Branching method, both in terms of new applications and of new developments in the evolution.
\section{Parton Branching Evolution equations}\label{sec:PBeq} The Parton Branching evolution equations are given by: \begin{align} \tilde{\mathcal{A}}_a(x,\bm k,\mu^2)=&\,\Delta_a(\mu^2)\tilde{\mathcal{A}}_a(x,\bm k,\mu_0^2)+\sum_b\int\frac{d^2\bm \mu^\prime}{\pi\mu^{\prime2}}\frac{\Delta_a(\mu^2)}{\Delta_a(\mu^{\prime2})}\Theta(\mu^2-\mu^{\prime2})\Theta(\mu^{\prime2}-\mu_0^2)\times\nonumber\\ &\times\int_x^{z_M}dz\,P_{ab}(z)\tilde{\mathcal{A}}_b\Bigl(\frac{x}{z},\bm k+(1-z)\bm \mu^\prime,\mu^{\prime2}\Bigr), \label{eq:evolution} \end{align} with $\tilde{\mathcal{A}}_a(x,\bm k,\mu^2)=x{\mathcal{A}}_a(x,\bm k,\mu^2)$ the momentum weighted TMD of flavor $a$, with longitudinal momentum fraction $x$ of the proton and transverse momentum $\bm k$, evaluated at scale $\mu$; $P_{ab}(z)$ the real-emission part of the DGLAP splitting functions for a splitting of parton $b$ to $a$, with $z$ the longitudinal momentum fraction; and the Sudakov form factor for a parton of flavor $a$ given by $\Delta_a(\mu^2)=\exp[-\sum_b\int_{\mu_0^2}^{\mu^2}\frac{d\mu^{\prime 2}}{\mu^{\prime 2}}\int_0^{z_M}dz\ z\ P^{col}_{ba}(z,\mu^{\prime 2})]$. Angular ordering can enter the evolution through three aspects: i) the relation between the evolution scale $\mu'$ and the transverse momentum of the emitted parton $\bm q$: $(1-z)\mu'=|\bm q|$, which is embodied in all PB TMDs; ii) the scale of the strong coupling, $\alpha_s(\bm q^2)$, which is present in the fitted PB-NLO-HERAI+II-2018-set2 but not in PB-NLO-HERAI+II-2018-set1, which uses $\alpha_s(\mu'^2)$; iii) the dynamical (i.e.\ dependent on the evolution scale) soft-gluon resolution scale $z_M=1-q_0/\mu'$, with $q_0$ the minimal transverse momentum of the emitted parton. The resolution scale separates resolvable from non-resolvable branchings.
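The interplay between the Sudakov form factor and the dynamical resolution scale $z_M=1-q_0/\mu'$ can be illustrated numerically; the following toy sketch (single gluon channel, fixed $\alpha_s$, midpoint quadrature — all our simplifications, not the fitted PB setup) evaluates the double integral in the exponent:

```python
import math

CA = 3.0

def zPgg(z):
    """z times the real-emission g->gg splitting function (toy version)."""
    return 2.0 * CA * (z * z / (1.0 - z) + (1.0 - z) + z * z * (1.0 - z))

def sudakov(mu2, mu0_2=1.0, q0=1.0, alpha_s=0.2, n=400, m=400):
    """Toy Sudakov form factor Delta(mu^2): one channel, fixed alpha_s,
    dynamical resolution scale z_M = 1 - q0/mu'.  Sketches the structure
    of the exponent quoted in the text, not a fitted PB evolution."""
    lo, hi = math.log(mu0_2), math.log(mu2)
    total = 0.0
    for i in range(n):                      # midpoint rule in log(mu'^2)
        t = lo + (i + 0.5) * (hi - lo) / n
        mup = math.exp(0.5 * t)             # mu'
        zM = 1.0 - q0 / mup
        if zM <= 0.0:
            continue                        # no resolvable phase space
        inner = sum(zPgg((j + 0.5) * zM / m) for j in range(m)) * zM / m
        total += alpha_s / (2.0 * math.pi) * inner * (hi - lo) / n
    return math.exp(-total)
```

By construction $\Delta(\mu_0^2)=1$ and the factor decreases monotonically with the evolution scale, reflecting the growing no-emission suppression.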
The effects of the dynamical resolution scale have been studied in \cite{HAUTMANN2019114795}. \section{Multijet-merging} Studies of TMD effects have so far mostly focused on the low $p_\bot$ spectra of inclusive observables. However, the authors of \cite{Martinez:2021chk} realized that the large transverse momentum tails of TMDs, which arise naturally from the renormalization-group evolution, can be used to describe multi-jet final states. A new ``TMD merging'' algorithm has been developed in \cite{Martinez:2021chk}, which extends the ``MLM merging'' procedure to include TMD initial state evolution. Compared to standard MLM, this method reduces systematic uncertainties and improves the description of higher-order emissions beyond the maximum parton multiplicity of the matrix element calculations.\\ In figure \ref{fig:multijet}, the prediction of the Z-boson $p_\bot$ spectrum and of the jet multiplicity is shown. For these results the PB-NLO-HERAI+II-2018-set2 TMDs were used. The TMD merging algorithm describes the whole Z-boson $p_\bot$ range very well. The description of the jet multiplicity is remarkable, especially for multiplicities higher than the jet multiplicity of the matrix element, which is three. One can expect the effects studied in that work to become even more important at future collider experiments, since TMD broadening grows with the evolution scale.\\ \begin{figure} \begin{minipage}{0.5\linewidth} \centerline{\includegraphics[width=0.99\linewidth]{Z-jets.png}} \end{minipage} \begin{minipage}{0.5\linewidth} \centerline{\includegraphics[width=0.99\linewidth]{JetMultiplicity.png}} \end{minipage} \caption[]{Predictions obtained with TMD merging for the production of a Z-boson in association with jets. Left: Z-boson $p_\bot$-spectrum; Right: Jet multiplicity.
Figures from \cite{Martinez:2021chk}.} \label{fig:multijet} \end{figure} \section{Photon TMD} To reach the accuracy of current experimental programs, electroweak corrections should be applied to the previously purely-QCD evolution of the PB method. The most notable change in the QED-corrected evolution of parton distributions is the presence of the photon density. We determined both collinear and TMD photon densities with the PB method~\cite{Jung:2021mox}. In high mass Drell-Yan (DY) production, contributions from photon-photon scattering into lepton pairs play a role. The collinear NLO QED PDFs describe well the measured dilepton mass spectrum at LHC center-of-mass energies \cite{CMS:2018mdl}. As shown in Fig.~\ref{fig:photon}a, the small contribution from photon-initiated (PI) lepton production is also determined. The photon TMD has been used to predict the transverse momentum spectrum of DY lepton-pair production at very high masses (Fig.~\ref{fig:photon}b). \begin{figure} \begin{minipage}{0.5\linewidth} \centerline{\includegraphics[width=0.99\linewidth]{DY-mass.pdf}} \caption*{a} \end{minipage} \hfill \begin{minipage}{0.5\linewidth} \centerline{\includegraphics[width=0.99\linewidth]{Zpt_massMuMu500_800.pdf}} \caption*{b} \end{minipage} \caption[]{Standard DY and photon induced mass distribution (a) and transverse momentum spectra (b) based on collinear and TMD QED PDFs.} \label{fig:photon} \end{figure} \section{Four- and Five-flavor schemes} The first set of NLO collinear and TMD parton densities in the four-flavor-variable-number (4FLVN) scheme within the PB approach has been determined \cite{Jung:2021vym}. The 4FLVN and five-flavor-variable-number (5FLVN) PB-TMD distributions \cite{PhysRevD.99.074008} were applied to predict $Z + b\bar{b}$ tagged jet production at LHC energies. In Fig.~\ref{fig:4FL-5FL}, we show the predictions obtained within both schemes, which are in very good agreement with the measurements.
\begin{figure} \begin{minipage}{0.5\linewidth} \centerline{\includegraphics[width=0.99\linewidth]{4FL-2bjet-pt.pdf}} \caption*{a} \end{minipage} \hfill \begin{minipage}{0.5\linewidth} \centerline{\includegraphics[width=0.99\linewidth]{4FL-2bjet-phibb.pdf}} \caption*{b} \end{minipage} \caption[]{Differential cross section for $Z+\ge2 b$ jets production as a function of the transverse momentum $p_t$ of the $Z$ boson (a) and of the azimuthal angular separation $\Delta\phi_{bb}$ between the directions of the two $b$ jets in the transverse plane (b). Shown are the predictions obtained in the 4FLVN and 5FLVN schemes.} \label{fig:4FL-5FL} \end{figure} The completely different configurations of the heavy flavor collinear and TMD PDFs and of the corresponding initial state TMD parton shower in the 4FLVN and 5FLVN schemes allow for a precise investigation of the evolution of the PB-TMD PDFs as well as of the PB-TMD parton shower. \section{Conclusion} The Parton Branching method has already had several successes in recent years, especially in the description of the low-$p_\bot$ spectrum of the Drell-Yan process at both high and low energies \cite{PhysRevD.100.074027,HAUTMANN2019114795,Martinez2020}. With the new developments, the range of applications increases, due, e.g., to the new TMD merging method, which allows a description of the whole $p_\bot$ spectrum of the Drell-Yan process and an accurate description of jets, even at high multiplicity.\\ The first PB TMD PDF set within the four-flavor scheme, along with the already existing sets in the five-flavor scheme, opens the way to a further investigation of the PB evolution and of the PB TMD showers.\\ Other developments of the PB TMDs will lead to an increase in precision or an extension of the kinematical range. The first inclusion of QED effects in the Parton Branching method, including the first photon TMD within the method, has been obtained.
The inclusion of TMD splitting functions~\cite{CATANI1994475,Hautmann:2012sh,Gituliar2016,Hentschinski:2016wya,Hentschinski2018,Hentschinski:2021lsh} is underway~\cite{keersmaekers2021implementing}, and it is a first step towards a Monte Carlo that incorporates small-$x$ dynamics. \section*{Acknowledgements} We thank F. Hautmann, M. Hentschinski, H. Jung, A. Bermudez Martinez, A. Kusina, K. Kutak and A. Lelek for collaboration and discussion. STM thanks the Humboldt Foundation for the Georg Forster research fellowship.
\section{Introduction} A theory of prehomogeneous vector spaces, constructed by Sato~\cite{MSato} (see also Sato--Shintani~\cite{MSatoShintani}, Kimura~\cite[Introduction]{Kimura}), provides a systematic method to construct zeta functions satisfying functional equations. The Riemann zeta function $\zeta(s)$ can be viewed as a typical example of such zeta functions, and it has many interesting and important properties. Among them, we focus on the functional equation and its completion, that is, $\zeta(s)$ can be completed to the Riemann xi function $\xi(s):= 2^{-1}s(s-1) \pi^{-s/2}\Gamma(s/2)\zeta(s)$, whose functional equation takes the highly symmetric form $\xi(1-s)=\xi(s)$. Here, $\Gamma(s)$ is the ordinary gamma function. Therefore, in analogy with $\zeta(s)$, it is natural to ask whether or not zeta functions associated with prehomogeneous vector spaces can be completed. Let us state this problem more precisely. Let $\bs{\zeta}(\ul{s})$ and $\bs{\zeta}^*(\ul{s})$ $(\ul{s}\in\mathbb C^r)$ be vector-valued zeta functions associated with a regular prehomogeneous vector space and with its dual prehomogeneous vector space, respectively. Then, a functional equation between these zeta functions is written as \[\bs{\zeta}^*(\tau(\ul{s}))=A(\ul{s})\bs{\zeta}(\ul{s}),\] where $\tau$ is a suitable affine transformation on $\mathbb C^r$ and $A(\ul{s})$ is a suitable coefficient matrix. For these zeta functions $\bs{\zeta}(\ul{s})$ and $\bs{\zeta}^*(\ul{s})$, let us find square matrices $B(\ul{s})$ and $B^*(\ul{s})$ such that \[\bs{\xi}^*(\tau(\ul{s}))=\mathcal{E}\bs{\xi}(\ul{s})\quad\text{where}\quad \bs{\xi}(\ul{s}):=B(\ul{s})\bs{\zeta}(\ul{s})\text{ and }\bs{\xi}^*(\ul{s}):=B^*(\ul{s})\bs{\zeta}^*(\ul{s}),\] where $\mathcal{E}$ is the so-called $\varepsilon$-factor, that is, a diagonal matrix with entries $\exp(\theta\sqrt{-1})$ $(\theta\in\mathbb R)$.
Since $B(\ul{s})$ and $B^*(\ul{s})$ may be different matrices, there is a trivial solution $B(\ul{s})=A(\ul{s})$, $B^*(\ul{s})=I$ and $\mathcal{E}=I$ ($I$ is the identity matrix) and hence we would like to make $B(\ul{s})$ and $B^*(\ul{s})$ as similar as possible. For such cases, we say that a pair $(\bs{\zeta}(\ul{s}),\bs{\zeta}^*(\ul{s}))$ can be completed to $(\bs{\xi}(\ul{s}),\bs{\xi}^*(\ul{s}))$, or the pair $(\bs{\xi}(\ul{s}),\bs{\xi}^*(\ul{s}))$ is a completion of $(\bs{\zeta}(\ul{s}),\bs{\zeta}^*(\ul{s}))$ in this paper. We shall explain this notion by a concrete example. Let $(G,\rho,V)$ be a prehomogeneous vector space in Sato~\cite[\S\S7.1 Example (A)]{FSatoI}. We use all notation in that paper without comments, and assume $v(L^{(1)*})=1$ for simplicity. For dual zeta functions, we choose $E$ as a $\mathbb Q$-regular subspace. Put $\tau(\ul{s}):=(s_1+s_2+s_3-1,\ 1-s_3,\ 1-s_2)$ for $\ul{s}=(s_1,s_2,s_3)\in\mathbb C^3$. Then, we have \[ \pmat{\xi_+(L^*_E;\,\tau(\ul{s}))\\ \xi_-(L^*_E;\,\tau(\ul{s}))} = \frac{2\Gamma(s_2)\Gamma(s_3)}{(2\pi)^{s_2+s_3}} \pmat{ \cos\bigl(\frac{\pi}{2}(s_2+s_3)\bigr) & \sin\bigl(\frac{\pi}{2}(s_2-s_3)\bigr)\\ \sin\bigl(\frac{\pi}{2}(s_2-s_3)\bigr) & \cos\bigl(\frac{\pi}{2}(s_2+s_3)\bigr)} \pmat{\xi_+(L;\ul{s})\\ \xi_-(L;\,\ul{s})} \] by Theorem~3 (iii) of that paper~\cite{FSatoI}. 
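To make the diagonalization explicit (a direct check of ours, not taken from \cite{FSatoI}): writing $c=\cos\bigl(\frac{\pi}{2}(s_2+s_3)\bigr)$ and $s=\sin\bigl(\frac{\pi}{2}(s_2-s_3)\bigr)$, one verifies at once that \[ \pmat{c&s\\ s&c} = \frac{1}{2}\pmat{1&1\\1&-1}\pmat{c+s&0\\0&c-s}\pmat{1&1\\1&-1}, \] so that the combinations $\xi_+\pm\xi_-$ transform with the scalar factors $c\pm s$; the gamma factors are then rearranged by means of the Legendre duplication formula $\Gamma(z)=2^{z-1}\pi^{-1/2}\Gamma\bigl(\frac{z}{2}\bigr)\Gamma\bigl(\frac{z+1}{2}\bigr)$.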
This coefficient matrix can obviously be diagonalized; moreover, using the gamma function formula \eqref{eq:MF}, we obtain a completed functional equation: setting \[ B(\ul{s})=B^*(\ul{s})=\pi^{-\tfrac{s_2+s_3}{2}} \pmat{\Gamma\bigl(\frac{s_2}{2}\bigr)\Gamma\bigl(\frac{s_3}{2}\bigr)&0\\ 0&\Gamma\bigl(\frac{s_2+1}{2}\bigr)\Gamma\bigl(\frac{s_3+1}{2}\bigr)} \cdot\pmat{1&1\\1&-1}, \] we have \[ \bs{\eta}^*_E\bigl(\tau(\ul{s})\bigr) = \pmat{1&0\\0&-1} \bs{\eta}_E(\ul{s}),\] where \[\bs{\eta}^*_E(\ul{s})=B^*(\ul{s})\pmat{\xi_+(L^*_E;\,\ul{s})\\ \xi_-(L^*_E;\,\ul{s})},\quad \bs{\eta}_E(\ul{s}) = B(\ul{s})\pmat{\xi_+(L;\ul{s})\\ \xi_-(L;\,\ul{s})}. \] This observation seems to be new. The other dual zeta functions in \S\S7.1 of \cite{FSatoI} can also be completed, but for another example \cite[\S\S7.2 Example (B)]{FSatoI}, not all functional equations seem to admit a completion. In general, it is not known whether prehomogeneous zeta functions can be completed or not, except for some particular reductive cases (cf.\ \cite{SatakeFaraut}, \cite{DatsovskyWright} and \cite{Thor}). In this paper, we consider this problem for the prehomogeneous vector spaces associated with homogeneous open convex cones containing no entire line (homogeneous cones for short in what follows) studied in the previous paper~\cite{N2018}, and we show that, for a certain class of homogeneous cones, the associated zeta functions have completions. Since the functional equations of prehomogeneous zeta functions essentially come from those of local zeta functions, we actually work with local zeta functions. It is worth mentioning that our prehomogeneous vector spaces are not reductive but solvable.
We therefore need additional observations that are not required in the reductive cases; for example, a delicate discussion of the orders of zeta distributions (see~\eqref{def:zd} for the definition) is needed when we consider functional equations with respect to them (see Section~\ref{sect:zeta distribution}). We now introduce the terminology and notation needed to state our results precisely. Let $\Omega$ be a homogeneous cone of rank $r$ in an $n$-dimensional real vector space $V$. By Vinberg~\cite{Vinberg}, there exists a split solvable Lie group $H$ acting on $\Omega$ linearly and simply transitively. According to Ishi~\cite{Ishi2006}, we realize $\Omega$ as a subset of the open convex cone $\mathcal{S}^+_N$ of positive-definite symmetric matrices of size $N\in\mathbb{N}$, that is, taking $r$ suitable integers $n_1,\dots,n_r$ and a suitable system $\mathcal{V}_{kj}\subset \mathrm{Mat}(n_k,n_j;\ \mathbb R)$ $(1\le j<k\le r)$ of vector spaces, we regard $\Omega$ as a homogeneous cone included in a subspace of symmetric matrices as follows: \begin{equation} \label{eqV} \Omega\cong \set{ x=\pmat{x_1I_{n_1}&\transpose{X}_{21}&\cdots&\transpose{X}_{r1}\\ X_{21}&x_2I_{n_2}&\ddots&\transpose{X}_{r2}\\ \vdots&\ddots&\ddots&\vdots\\ X_{r1}&X_{r2}&\cdots&x_rI_{n_r}}}{ \begin{array}{l} x_j\in\mathbb R\\\quad(j=1,\dots,r)\\ X_{kj}\in\mathcal{V}_{kj}\\ \quad(1\le j<k\le r) \end{array}}\cap\mathcal{S}^+_N, \end{equation} where $N:=n_1+\cdots+n_r$. The following integers and vectors are used frequently in this paper without further comment: \[ \begin{array}{c} \displaystyle n_{kj}:=\dim \mathcal{V}_{kj},\quad p_k:=\sum_{j<k}n_{kj},\quad q_j:=\sum_{k>j}n_{kj},\\[1em] \displaystyle \ul{1}=(1,\dots,1),\quad \ul{p}=(p_1,\dots,p_r),\quad \ul{q}=(q_1,\dots,q_r),\quad \ul{d}=\ul{1}+\frac{1}{2}(\ul{p}+\ul{q}). \end{array} \] Put $\Ir:=\{\pm1\}^r$.
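For example, the symmetric cone $\mathcal{S}^+_r$ of positive-definite real symmetric matrices of size $r$ is obtained from \eqref{eqV} by taking $n_1=\dots=n_r=1$ and $\mathcal{V}_{kj}=\mathbb R$ for all $1\le j<k\le r$, in which case \[ n_{kj}=1,\quad p_k=k-1,\quad q_j=r-j,\quad \ul{d}=\Bigl(\frac{r+1}{2},\dots,\frac{r+1}{2}\Bigr). \]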
For each $\boldsymbol{\varepsilon}=(\varepsilon_1,\dots,\varepsilon_r)\in\Ir$, we denote by $\Oe$ the orbit of $H$ through $\mathrm{diag}(\varepsilon_1I_{n_1},\dots,\varepsilon_rI_{n_r})$. Note that $\Omega=\Oe[(1,\dots,1)]$. Then, $\bigsqcup_{\boldsymbol{\varepsilon}\in\Ir}\Oe$ is a Zariski open set in $V$ (cf.\ Gindikin~\cite[p.\ 77]{Gindikin64}), so that $V$ is a real prehomogeneous vector space. Let $\Delta_1(x),\dots,\Delta_r(x)$ be the basic relative invariants of $\Omega$. We denote by $\rapid$ the Schwartz space of rapidly decreasing functions on $V$. For $f\in\rapid$, we put \begin{equation} \label{eq:lzf} \lzf{f}{\ul{s}}:=\int_{\Oe}|\Delta_1(x)|^{s_1}\cdots|\Delta_r(x)|^{s_r}f(x)\,d\mu(x)\quad(\boldsymbol{\varepsilon}\in\Ir) \end{equation} which are called the local zeta functions associated with $\Oe$. Here, $d\mu(x)$ is a suitable invariant measure on $\Oe$. It is known that the integrals $\lzf{f}{\ul{s}}$ converge absolutely for $\mathrm{Re}\,\ul{s}>\ul{d}\sigma^{-1}$, and are analytically continued to meromorphic functions of $\ul{s}$ on the whole space $\mathbb C^r$ (cf.\ \cite{BG69}). Here, $\sigma=(\sigma_{jk})_{1\le j,k\le r}$ is a unimodular matrix containing information about the basic relative invariants as \begin{equation} \label{eq:multiplier matrix} \Delta_j\bigl(\mathrm{diag}(x_1I_{n_1},\dots,x_rI_{n_r})\bigr) = x_{1}^{\sigma_{j1}}\cdots x_r^{\sigma_{jr}}\quad(x_1,\dots,x_r\in\mathbb R^\times), \end{equation} which is called the multiplier matrix of $\Omega$ (cf.\ \cite{N2014}). Note that we write $\ul{\alpha}>\ul{\beta}$ for $\ul{\alpha},\,\ul{\beta}\in\mathbb R^r$ if $\alpha_j>\beta_j$ for all $j=1,\dots,r$. Associated with the dual prehomogeneous vector space of $V$, we also have local zeta functions $\dlzf{f}{\ul{s}}$ for $\boldsymbol{\delta}\in\Ir$ and $f\in\rapid$, which are also analytically continued to meromorphic functions of $\ul{s}\in\mathbb C^r$.
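As a basic illustration of the multiplier matrix, when $\Omega=\mathcal{S}^+_r$ is the cone of positive-definite real symmetric matrices, the basic relative invariants can be taken to be the leading principal minors, so that \[ \Delta_j\bigl(\mathrm{diag}(x_1,\dots,x_r)\bigr)=x_1\cdots x_j\quad(j=1,\dots,r), \] and the multiplier matrix $\sigma$ is the lower triangular matrix with $\sigma_{jk}=1$ for $k\le j$ and $\sigma_{jk}=0$ otherwise.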
We shall write these local zeta functions in a vector form as \[\vlzf{f}{\ul{s}}=\bigl(\lzf{f}{\ul{s}}\bigr)_{\boldsymbol{\varepsilon}\in\Ir}\quad \text{and}\quad \vdlzf{f}{\ul{s}}=\bigl(\dlzf{f}{\ul{s}}\bigr)_{\boldsymbol{\delta}\in\Ir},\] where we equip $\Ir$ with a total order, and fix it. Let us take and fix a suitable inner product $\innV{\cdot}{\cdot}$ on $V$. The dual vector space $V^*$ of $V$ is identified with $V$ through this inner product. The Fourier transform $\Fourier$ of $f\in\rapid$ is defined as \[ \Fourier(x):=\int_Vf(y)\exp(2\pi\sqrt{-1}\innV{x}{y})\,dy, \] where $dy$ is the Euclidean measure on $V$. Let $\tau$ be the affine transformation on $\mathbb C^r$ defined by $\tau(\ul{s}):=(\ul{d}-\ul{s}\sigma)\sigma_*^{-1}$, where $\sigma_*$ is the multiplier matrix of the dual cone $\Omega^*$ of $\Omega$. For $\ul{\alpha}\in\mathbb C^r$, we write $\Gamma(\ul{\alpha}):=\Gamma(\alpha_1)\cdots\Gamma(\alpha_r)$. Then, the Gindikin gamma function $\Gamma_\Omega(\ul{\alpha})$ of $\Omega$ is defined as \begin{equation} \label{eq:defofgamma} \Gamma_{\Omega}(\ul{\alpha}) = (2\pi)^{(n-r)/2}\Gamma\Bigl(\ul{\alpha}-\frac{1}{2}\ul{p}\Bigr) \end{equation} (cf.\ Gindikin~\cite{Gindikin64}). Moreover, we set \[ A(\ul{\alpha})=\biggl( \exp\Bigl\{\frac{\pi\sqrt{-1}}{2}\Bigl( \sum_{j=1}^r\varepsilon_j\delta_j\alpha_j +\frac{1}{2}\sum_{1\le j<k\le r}\varepsilon_j\delta_kn_{kj} \Bigr)\Bigr\} \biggr)_{\boldsymbol{\delta},\boldsymbol{\varepsilon}\in\Ir}, \] where the index $\boldsymbol{\varepsilon}$ runs horizontally and $\boldsymbol{\delta}$ vertically. Then, Proposition 4.3 of the previous paper \cite{N2018} gives the following functional equation of local zeta functions: \begin{equation} \label{eq:FE} \vlzf{\Fourier}{\ul{s}} = \frac{\Gamma_\Omega(\ul{s}\sigma)}{(2\pi)^{|\ul{s}\sigma|}} A\Bigl(\ul{s}\sigma-\frac{1}{2}\ul{p}\Bigr)\vdlzf{f}{\tau(\ul{s})}. \end{equation} Here, we set $|\ul{\alpha}|:=\alpha_1+\cdots+\alpha_r$ for $\ul{\alpha}\in\mathbb C^r$.
For a technical reason, we assume the following condition: \begin{equation} \label{assumption} \text{There exists a fixed $m\in\{0,1\}$ such that $\frac{\pi}{4}\sum_{j<k}\varepsilon_j\delta_kn_{kj}\equiv m\pi$ $({\rm mod}\ 2\pi)$ for all $\boldsymbol{\varepsilon},\boldsymbol{\delta}\in\Ir$.} \end{equation} Set $\mathcal{A}:=\{0,1\}^r$. We equip $\mathcal{A}$ with a total order, and fix it. According to this order, we define a diagonal matrix $\Lambda(\ul{\alpha})$ for $\ul{\alpha}\in\mathbb C^r$ by \begin{equation} \label{eq:defofD} \Lambda(\ul{\alpha}):=\mathrm{diag}\left(\frac{\pi^{|\ul{\alpha}|/2}}{\Gamma\bigl(2^{-1}(\ul{\alpha}+\ul{a})\bigr)}\right)_{\ul{a}\in\mathcal{A}}. \end{equation} Then, the main theorem is stated as follows. \begin{theorem} \label{theoremB} Assume the condition~\eqref{assumption}. Then, there exists an orthogonal matrix $J$ such that, by setting \[ \clzf{f}{\ul{s}}:=\Lambda\bigl(\ul{s}\sigma-2^{-1}\ul{p}\bigr){}^{\,t\!}J\vlzf{f}{\ul{s}},\quad \cdlzf{f}{\ul{s}}:= \Lambda\bigl(\ul{s}\sigma_*-2^{-1}\ul{q}\bigr){}^{\,t\!}J\vdlzf{f}{\ul{s}}, \] one has \[ \clzf{\Fourier}{\ul{s}}=\mathcal{E}\,\cdlzf{f}{\tau(\ul{s})}\quad(\ul{s}\in\mathbb C^r,\ f\in\rapid), \] where \begin{equation} \label{def:efactor} \mathcal{E}=(-1)^m\mathrm{diag}\Bigl(\sign\Bigr)_{\ul{a}\in\mathcal{A}}. \end{equation} \end{theorem} The transformation matrices $B(\ul{s})$ and $B^*(\ul{s})$ in this theorem are not identical, but they are similar in the sense that they differ only in the arguments of $\Lambda$, and these arguments reflect the domains in which the integrals defining the corresponding gamma functions $\Gamma_{\Omega}$ and $\Gamma_{\Omega^*}$ of $\Omega$ and $\Omega^*$, respectively, converge absolutely.
Namely, we see by \cite[p.\ 22]{Gindikin64} that the integral \[ \int_{\Omega}\Delta_1(x)^{s_1}\cdots\Delta_r(x)^{s_r}e^{-\innV{x}{I_N}}d\mu(x) = \Gamma_{\Omega}(\ul{s}\sigma)\quad(I_N:=\mathrm{diag}(I_{n_1},\dots,I_{n_r})) \] converges absolutely when $\mathrm{Re}\,\ul{s}\sigma-\frac{1}{2}\ul{p}>0$, whereas the integral \[ \int_{\Omega^*}\Delta^*_1(y)^{s_1}\cdots\Delta^*_r(y)^{s_r}e^{-\innV{y}{I_N}}d\mu^*(y)=\Gamma_{\Omega^*}(\ul{s}\sigma_*) \] converges absolutely when $\mathrm{Re}\,\ul{s}\sigma_*-\frac{1}{2}\ul{q}>0$. Here, $\Delta^*_1(y),\dots,\Delta^*_r(y)$ are the basic relative invariants of $\Omega^*$ and $d\mu^*(y)$ is a suitable invariant measure on $\Omega^*$. We shall prove Theorem~\ref{theoremB} in Section~\ref{sect:proof}. Section~\ref{sect:zeta distribution} is devoted to investigating a relationship between ${}^{t\!}J\vlzf{f}{\ul{s}}$ and zeta distributions (see \eqref{def:zd} for the definition). \section{Proof of Theorem~\ref{theoremB}} \label{sect:proof} Let us start proving Theorem~\ref{theoremB}. We use all the notation introduced in the Introduction. Set $\ul{w}=\ul{s}\sigma-\frac{1}{2}\ul{p}$. Then, the gamma matrix, that is, the coefficient matrix in \eqref{eq:FE}, can be written by~\eqref{eq:defofgamma} as \[ \frac{\Gamma_\Omega(\ul{s}\sigma)}{(2\pi)^{|\ul{s}\sigma|}} A\Bigl(\ul{s}\sigma-\frac{1}{2}\ul{p}\Bigr) = \frac{\Gamma(\ul{s}\sigma-\frac{1}{2}\ul{p})}{(2\pi)^{|\ul{s}\sigma-\frac{1}{2}\ul{p}|}}A\Bigl(\ul{s}\sigma-\frac{1}{2}\ul{p}\Bigr) = \frac{\Gamma(\ul{w})}{(2\pi)^{|\ul{w}|}}A(\ul{w}). \] In the first equality, we use $|\ul{p}|=n-r$. Throughout this paper, we always assume the condition~\eqref{assumption}, so that $A(\ul{\alpha})$ reduces to \begin{equation} \label{def:gamma matrix} A(\ul{\alpha})= (-1)^m\biggl(\exp\Bigl(\frac{\pi\sqrt{-1}}{2}\sum_{j=1}^r\varepsilon_j\delta_j\alpha_j\Bigr)\biggr)_{\boldsymbol{\delta},\boldsymbol{\varepsilon}\in\Ir}.
\end{equation} For $\ul{a}\in\mathcal{A}$, let $\kv\in\Ir[2^r]$ be a column vector defined by \[ \kv:=\bigl(\kvsub{\boldsymbol{\varepsilon}}\bigr)_{\boldsymbol{\varepsilon}\in\Ir}\in\Ir[2^r],\quad\text{where}\quad \kvsub{\boldsymbol{\varepsilon}}:=\prod_{j=1}^r\varepsilon_j^{a_j}\quad(\boldsymbol{\varepsilon}\in\Ir). \] Notice that $\set{\kv}{\ul{a}\in\mathcal{A}}=\mathcal{I}_2^{\otimes r}$, so that it forms an orthogonal basis of $\mathbb R^{2^r}$ with respect to the standard inner product. Let $J$ be the orthogonal matrix of size $2^r$ obtained by arranging the column vectors $2^{-r/2}\kv$ in a row, that is, \[ J:=2^{-r/2}\bigl(\kv\bigr)_{\ul{a}\in\mathcal{A}}. \] For $\ul{a}\in\mathcal{A}$, we set \[\pcos{\ul{\alpha}}:=\prod_{j=1}^r\cos\Bigl(\frac{\pi}{2}(\alpha_j-a_j)\Bigr)\quad(\ul{\alpha}\in\mathbb C^r).\] \begin{lemma} \label{lemma:diagonalization} The matrix $A(\ul{\alpha})$ in~\eqref{def:gamma matrix} can be diagonalized by $J$, independently of $\ul{\alpha}$, as follows: \[ {}^{t\!}JA(\ul{\alpha})J=(-1)^m\cdot 2^r\mathrm{diag}\Bigl(\sign\Pcos(\ul{\alpha})\Bigr)_{\ul{a}\in\mathcal{A}}. \] \end{lemma} \begin{proof} We shall prove this lemma by showing that, for any $\ul{a}\in\mathcal{A}$, the vector $\kv$ is an eigenvector of $A(\ul{\alpha})$ and its corresponding eigenvalue is $(-1)^m\cdot 2^r\sign \pcos{\ul{\alpha}}$. For any vectors $\ul{a}\in\mathcal{A}$ and $\ul{\beta}=(\beta_1,\dots,\beta_r)\in\mathbb C^r$, one verifies that \[ \sum_{\boldsymbol{\varepsilon}\in\Ir}\kvsub{\boldsymbol{\varepsilon}}\exp\Bigl(\frac{\pi\sqrt{-1}}{2}\sum_{j=1}^r\varepsilon_j\beta_j\Bigr) = 2^r(\sqrt{-1})^{|\ul{a}|}\,\Pcos(\ul{\beta}); \] indeed, the left-hand side factorizes as $\prod_{j=1}^r\bigl(e^{\pi\sqrt{-1}\beta_j/2}+(-1)^{a_j}e^{-\pi\sqrt{-1}\beta_j/2}\bigr)$, and each factor equals $2(\sqrt{-1})^{a_j}\cos\bigl(\frac{\pi}{2}(\beta_j-a_j)\bigr)$. In our case, the vector $\ul{\beta}$ is one of the vectors $(\delta_1\alpha_1,\dots,\delta_r\alpha_r)$ for some $\boldsymbol{\delta}\in\Ir$.
An elementary calculation yields that \[ \cos\Bigl(\frac{\pi}{2}(\delta z-a)\Bigr) = \delta^a \cos\Bigl(\frac{\pi}{2}(z-a)\Bigr)\quad\text{for $\delta\in\{1,-1\}$ and $a\in\{0,1\}$}, \] so that we obtain $\Pcos(\ul{\beta})=\kvsub{\boldsymbol{\delta}}\Pcos(\ul{\alpha})$. This implies that, for any $\ul{a}\in\mathcal{A}$, the vector $\kv$ is an eigenvector of $A(\ul{\alpha})$, and $(-1)^m\cdot 2^r\sign \pcos{\ul{\alpha}}$ is its corresponding eigenvalue, and hence the lemma is proved. \end{proof} Recall that $\ul{w}=\ul{s}\sigma-\frac{1}{2}\ul{p}$. \begin{lemma} \label{lemma:halfGamma} For each $\ul{a}\in\mathcal{A}$, one has \[ \frac{\Gamma(\ul{w})}{(2\pi)^{|\ul{w}|}}\pcos{\ul{w}}= \frac{\pi^{r/2-|\ul{w}|}}{2^r}\cdot\frac{\Gamma\bigl(2^{-1}(\ul{s}\sigma-2^{-1}\ul{p}+\ul{a})\bigr)}{\Gamma\bigl(2^{-1}(\tau(\ul{s})\sigma_*-2^{-1}\ul{q}+\ul{a})\bigr)}. \] \end{lemma} \begin{proof} Let us recall two famous formulas of the gamma function, that is, Euler's reflection formula and Legendre's duplication formula: \[ \Gamma(z)\Gamma(1-z)=\frac{\pi}{\sin\pi z},\quad \Gamma(z)=\frac{2^z}{2\sqrt{\pi}\ }\Gamma\Bigl(\frac{z}{2}\Bigr) \Gamma\Bigl(\frac{z+1}{2}\Bigr)\quad(z\in \mathbb C). \] Combining these two formulas, we obtain the following formula \begin{equation} \label{eq:MF} \Gamma(z)\,\cos\Bigl(\frac{\pi}{2}(z-a)\Bigr)= \frac{2^z\sqrt{\pi}}{2}\cdot \frac{\Gamma\bigl(2^{-1}(z+a)\bigr)}{\Gamma\bigl(2^{-1}(1-z+a)\bigr)} \quad(z\in\mathbb C;\ a=0\text{ or }1). \end{equation} This equation implies that \[ \prod_{i=1}^r \Gamma(w_i)\cos\left(\frac{\pi}{2}(w_i-a_i)\right) = \prod_{i=1}^r \frac{2^{w_i}\sqrt{\pi}}{2}\cdot \frac{\Gamma\bigl(2^{-1}(w_i+a_i)\bigr)}{\Gamma\bigl(2^{-1}(1-w_i+a_i)\bigr)}, \] that is, \[ \Gamma(\ul{w})\pcos{\ul{w}} = \frac{2^{|\ul{w}|}\pi^{r/2}}{2^r}\cdot \frac{\Gamma\bigl(2^{-1}(\ul{w}+\ul{a})\bigr)}{\Gamma\bigl(2^{-1}(\ul{1}-\ul{w}+\ul{a})\bigr)} \quad(\ul{a}\in\mathcal{A}). 
\] The facts $\ul{d}=\ul{1}+(\ul{p}+\ul{q})/2$ and $\ul{d}-\ul{s}\sigma=\tau(\ul{s})\sigma_*$ show \[ \ul{1}-\ul{w} = \ul{1}-\ul{s}\sigma+\frac{1}{2}\ul{p} = \ul{d}-\ul{s}\sigma-\frac{1}{2}\ul{q} = \tau(\ul{s})\sigma_*-\frac{1}{2}\ul{q}, \] whence we arrive at the formula in the lemma. \end{proof} We are now in the final step of the proof of the main theorem. Let us recall the definitions of the diagonal matrices $\Lambda(\ul{\alpha})$ and $\mathcal{E}$ in \eqref{eq:defofD} and \eqref{def:efactor}, respectively. We set $\ul{v}=\ul{1}-\ul{w}=\tau(\ul{s})\sigma_*-\frac{1}{2}\ul{q}$ for brevity. Notice that $\frac{r}{2}-|\ul{w}|=-\frac{1}{2}\bigl|\ul{w}\bigr| +\frac{1}{2}\bigl|\ul{v}\bigr|$. Then, Lemmas~\ref{lemma:diagonalization} and~\ref{lemma:halfGamma} yield that \[ \begin{array}{r@{\ }c@{\ }l} \displaystyle \frac{\Gamma(\ul{w})}{(2\pi)^{|\ul{w}|}}A(\ul{w}) &=& \displaystyle (-1)^m\cdot 2^r J \mathrm{diag}\left( \sign \frac{\Gamma(\ul{w})}{(2\pi)^{|\ul{w}|}} \Pcos(\ul{w}) \right)_{\ul{a}\in\mathcal{A}} {}^{\,t\!}J\\[1em] &=& \displaystyle (-1)^m\cdot 2^r J \mathrm{diag}\left( \sign \frac{\pi^{r/2-|\ul{w}|}}{2^r}\cdot \frac{\Gamma\bigl(2^{-1}(\ul{w}+\ul{a})\bigr)}{\Gamma\bigl(2^{-1}(\ul{v}+\ul{a})\bigr)} \right)_{\ul{a}\in\mathcal{A}} {}^{\,t\!}J\\[1.1em] &=& \displaystyle J \mathrm{diag}\left( (-1)^m\,\sign \, \frac{\Gamma\bigl(2^{-1}(\ul{w}+\ul{a})\bigr)}{\pi^{|\ul{w}|/2}} \, \frac{\pi^{|\ul{v}|/2}}{\Gamma\bigl(2^{-1}(\ul{v}+\ul{a})\bigr)} \right)_{\ul{a}\in\mathcal{A}} {}^{\,t\!}J\\[1em] &=& \displaystyle J \Lambda(\ul{w})^{-1} \mathcal{E} \Lambda(\ul{v}) {}^{\,t\!}J, \end{array} \] whence we conclude that the functional equation~\eqref{eq:FE} can be completed. \hfill{}$\square$ \begin{remark} We note that there exist homogeneous cones satisfying~\eqref{assumption}. In fact, the exceptional symmetric cone $\mathrm{Herm}(3,\mathbb{O})^+$ obviously satisfies~\eqref{assumption}.
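Indeed, for $\mathrm{Herm}(3,\mathbb{O})^+$ one has $n_{kj}=8$ for all $1\le j<k\le 3$, so that \[ \frac{\pi}{4}\sum_{j<k}\varepsilon_j\delta_kn_{kj} = 2\pi\sum_{j<k}\varepsilon_j\delta_k \equiv 0\quad({\rm mod}\ 2\pi) \] for all $\boldsymbol{\varepsilon},\boldsymbol{\delta}\in\Ir$, and hence \eqref{assumption} holds with $m=0$.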
Other examples are given by homogeneous cones $\Omega$ such that $n_{kj}=0$ or $4$ for all $j<k$. Such homogeneous cones can be constructed from chordal and $A_4$-free graphs (see Letac--Massam~\cite{LM2007} for the definition) as follows. Let $G$ be a chordal and $A_4$-free graph with $n$ vertices. Set \[V_G:=\set{x\in\mathrm{Herm}(n,\mathbb{H})}{x_{ij}=0\text{ if }i\not\sim j\text{ in }G}. \] Then, $\Omega_G:=V_G\cap\mathrm{Herm}(n,\mathbb{H})^+$ is such a homogeneous cone. \end{remark} \begin{remark} Under the assumption~\eqref{assumption}, we are able to give an explicit formula for the determinant of the gamma matrix $A(\ul{\alpha})$. In fact, let us assume that $r\ge2$ and set $\mathcal{A}'=\set{\ul{a}\in\mathcal{A}}{a_1=1}$. Notice that $Q_{\ul{a}}(\ul{\alpha})Q_{\ul{1}-\ul{a}}(\ul{\alpha})=Q_{\ul{0}}(\ul{\alpha})Q_{\ul{1}}(\ul{\alpha})$ for any $\ul{a}\in\mathcal{A}'$. By Lemma~\ref{lemma:diagonalization}, we have \[ \begin{array}{r@{\ }c@{\ }l} \det A(\ul{\alpha})&=& \displaystyle \prod_{\ul{a}\in\mathcal{A}}(-1)^m\cdot 2^r\ \sign\pcos{\ul{\alpha}} = \prod_{\ul{a}\in\mathcal{A}'} 2^{2r}\,(\sqrt{-1})^{r}Q_{\ul{a}}(\ul{\alpha})Q_{\ul{1}-\ul{a}}(\ul{\alpha})\\ &=& \displaystyle \Bigl(2^{2r}(\sqrt{-1})^r\prod_{j=1}^r\cos\frac{\pi \alpha_j}{2}\,\sin\frac{\pi \alpha_j}{2}\Bigr)^{2^{r-1}} = \Bigl(\prod_{j=1}^r2\sqrt{-1}\sin\pi\alpha_j\Bigr)^{2^{r-1}}. \end{array} \] On the other hand, an explicit formula for $\det A(\ul{\alpha})$ is not known for general $\Omega$. Let us consider the case of the Vinberg cone $\Omega$. Namely, $\Omega$ is a homogeneous cone of rank $3$ with $n_{21}=n_{31}=1$ and $n_{32}=0$, so that the condition~\eqref{assumption} fails.
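For the Vinberg cone, the failure of \eqref{assumption} can be seen directly: since $n_{21}=n_{31}=1$ and $n_{32}=0$, \[ \frac{\pi}{4}\sum_{j<k}\varepsilon_j\delta_kn_{kj} = \frac{\pi}{4}\,\varepsilon_1(\delta_2+\delta_3) \in\Bigl\{0,\,\pm\frac{\pi}{2}\Bigr\}, \] so that no single $m\in\{0,1\}$ satisfies the required congruence for all $\boldsymbol{\varepsilon},\boldsymbol{\delta}\in\Ir$.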
In this case, the determinant of $A(\ul{\alpha})$ is calculated as \[\det A(\ul{\alpha})= 2^{12}\bigl(\sin\pi\alpha_1\bigr)^4\bigl(\sin\pi\alpha_2\bigr)^4\bigl(\sin\pi\alpha_3\bigr)^4= \Bigl(\prod_{j=1}^32\sqrt{-1}\sin\pi\alpha_j\Bigr)^4,\] and therefore, for any homogeneous cone of rank $r\ge 2$, we conjecture that \[ \det A(\ul{\alpha})= \Bigl(\prod_{j=1}^r2\sqrt{-1}\sin\pi \alpha_j\Bigr)^{2^{r-1}}. \] \end{remark} \section{Zeta distributions} \label{sect:zeta distribution} In this section, we investigate a relationship between ${}^{t\!}J\vlzf{f}{\ul{s}}$ in the main theorem and the zeta distributions defined in~\eqref{def:zd} below. Let $\mathrm{sgn}$ be the sign function on $\mathbb R^\times$, that is, $\mathrm{sgn}(x):=x/|x|$, and we set $\mathrm{sgn}(0):=0$. Let $\omega^{s,a}$ $(s\in\mathbb C,\ a=0,1)$ be the quasi-character of $\mathbb R^{\times}$ defined by $\omega^{s,a}(x):=\mathrm{sgn}(x)^a\,|x|^s$. Using this, we introduce the zeta distributions $\localZ{f}$ by \begin{equation} \label{def:zd} \localZ{f}:=\int_V\prod_{j=1}^r\omega^{s_j,b_j}\bigl(\Delta_j(x)\bigr)\,f(x)\,d\mu(x) \quad (\ul{b}\in\mathcal{A},\ f\in\rapid,\ \ul{s}\in\mathbb C^r). \end{equation} We note that, for symmetric cones viewed as homogeneous spaces of reductive groups, zeta distributions are studied in \cite{BCK2018}. Let us describe $\localZ{f}$ by using $\lzf{f}{\ul{s}}$. Set $c_{\boldsymbol{\varepsilon}}:=\mathrm{diag}(\varepsilon_1I_{n_1},\dots,\varepsilon_rI_{n_r})\in V$ (cf.\ \eqref{eqV}) and let $\ul{e}_j\in\mathbb R^r$ be the row unit vector having one in the $j$-th position and zeros elsewhere. Recalling the property~\eqref{eq:multiplier matrix} of the multiplier matrix $\sigma=(\sigma_{jk})_{1\le j,k\le r}$, we have \[ \Delta_j(c_{\boldsymbol{\varepsilon}})=\varepsilon_1^{\sigma_{j1}}\cdots\varepsilon_r^{\sigma_{jr}}=\kvsup{\ul{e}_j\sigma}.
\] Since $\Oe$ is the orbit of $H$ through $c_{\boldsymbol{\varepsilon}}\in V$ and since $\Delta_j$ is a relatively invariant function, we have for $x\in\Oe$ \[ \omega^{s_j,b_j}\bigl(\Delta_j(x)\bigr) = \bigl(\varepsilon_1^{\sigma_{j1}}\cdots\varepsilon_r^{\sigma_{jr}}\bigr)^{b_j} |\Delta_j(x)|^{s_j} =\kvsup{b_j\,\ul{e}_j\sigma}|\Delta_j(x)|^{s_j}, \] and thus \[ \prod_{j=1}^r\omega^{s_j,b_j}\bigl(\Delta_j(x)\bigr) = \kvsup{\ul{b}\sigma}\,|\Delta_1(x)|^{s_1}\cdots|\Delta_r(x)|^{s_r}. \] By~\eqref{eq:lzf}, we obtain \begin{equation} \label{eq:linearcomb} \localZ{f}=\sum_{\boldsymbol{\varepsilon}\in\Ir} \kvsup{\ul{b}\sigma} \lzf{f}{\ul{s}}, \end{equation} whence each $\localZ{f}$ coincides, up to the constant factor $2^{r/2}$ coming from the normalization of $J$, with one of the entries of ${}^{t\!}J\vlzf{f}{\ul{s}}$. This formula also tells us that the analytic properties of $\localZ{f}$ are the same as those of $\lzf{f}{\ul{s}}$. Since $\sigma$ is a unimodular matrix, the map $\mathcal{A}\ni\ul{a}\mapsto\ul{a}\sigma\ (\mathrm{mod}\ 2)\in\mathcal{A}$ is a bijection, and hence the correspondence between $\localZ{f}$ and the entries of ${}^{t\!}J\vlzf{f}{\ul{s}}$ is one-to-one. In order to state the main theorem by using zeta distributions, we need to introduce another order on $\mathcal{A}$. Let us denote by $<$ the fixed order on $\mathcal{A}$. According to \eqref{eq:linearcomb}, we introduce a new order $\prec_\sigma$ on $\mathcal{A}$ depending on $\sigma$ by \[ \ul{b}\prec_\sigma\ul{b}'\quad\Leftrightarrow\quad \ul{b}\sigma<\ul{b}'\sigma\quad(\ul{b},\ul{b}'\in\mathcal{A}). \] Let $\mathcal{A}_\sigma$ denote the set $\mathcal{A}$ with the order $\prec_\sigma$. Then, we have \[ \bs{Z}(f;\,\ul{s}):= \bigl(\localZ{f}\bigr)_{\ul{b}\in\mathcal{A}_\sigma} = 2^{r/2}\,{}^{t\!}J\vlzf{f}{\ul{s}}. \] On the other hand, to the dual cone $\Omega^*$, we can associate zeta distributions $\dlocalZ{f}$ $(\ul{c}\in\mathcal{A})$, and similarly to $\localZ{f}$, they satisfy \[ \dlocalZ{f}=\sum_{\boldsymbol{\delta}\in\Ir}\kappa_{\boldsymbol{\delta}}(\ul{c}\sigma_*)\dlzf{f}{\ul{s}} \quad(\ul{c}\in\mathcal{A},\ f\in\rapid,\ \ul{s}\in\mathbb C^r). \] Therefore, we need to consider $\mathcal{A}_{\sigma_*}$, so that \[ \bs{Z}^*(f;\,\ul{s}):=\bigl(\dlocalZ{f}\bigr)_{\ul{c}\in\mathcal{A}_{\sigma_*}} = 2^{r/2}\,{}^{t\!}J\vdlzf{f}{\ul{s}}. \] Since the constant factor $2^{r/2}$ appears on both sides, Theorem~\ref{theoremB} yields the following. \begin{corollary} Assume the condition~\eqref{assumption}. Putting \[ \widetilde{\bs{Z}}(f;\,\ul{s}):= \Lambda\bigl(\ul{s}\sigma-2^{-1}\ul{p}\bigr)\bs{Z}(f;\,\ul{s}),\quad \widetilde{\bs{Z}}{}^*(f;\,\ul{s}):= \Lambda\bigl(\ul{s}\sigma_*-2^{-1}\ul{q}\bigr)\bs{Z}^*(f;\,\ul{s}) \] for $\ul{s}\in\mathbb C^r$ and $f\in\rapid$, one has \[ \widetilde{\bs{Z}}(\Fourier;\,\ul{s}) = \mathcal{E}\,\widetilde{\bs{Z}}{}^*(f;\,\tau(\ul{s})),\quad \mathcal{E}=(-1)^m\mathrm{diag}\Bigl(\sign\Bigr)_{\ul{a}\in\mathcal{A}}. \] \end{corollary} We end this paper by giving a remark on the orders of zeta distributions. In this paper, we have fixed the orders of the basic relative invariants $\Delta_i$, $\Delta^*_j$ of $\Omega$ and $\Omega^*$, respectively, as in the previous paper~\cite{N2018}. In the case that $\Omega$ is symmetric, there is a canonical order among the $\Delta^*_j$ as in the book~\cite[Chapter VII]{FK94}, which is opposite to ours, and if we use the canonical one, then we see that there is no need to specify an order on $\mathcal{A}$. For general homogeneous cones $\Omega$, let us consider a natural generalization of the canonical orders used in the symmetric cases, that is, we take the opposite order among the $\Delta^*_j$. In this case, the rearranged order of $\mathcal{A}_{\sigma_*}$ is equal to $\mathcal{A}_{\sigma}$ if and only if we have $A\sigma_*A^{-1}=\sigma$, where $A$ is the anti-diagonal matrix $A=\bigl(\delta_{i,r-j+1}\bigr)_{i,j}$ of size $r$.
Here, the multiplier matrix $\sigma_*$ is determined with respect to the original order used in~\cite{N2018}. Note that $\delta_{ij}$ is the Kronecker delta. However, as in Example~\ref{ex} below, there exist homogeneous cones such that $A\sigma_*A^{-1}\ne \sigma$, and therefore, for general homogeneous cones, we do not have canonical orders such as those of symmetric cones. \begin{example} \label{ex} Let $V$ be the vector space defined by \[ V:=\set{x=\pmat{ x_{11}I_4&0&\bs{x}_{21}&0\\ 0&x_{11}I_4&0&\bs{x}_{31}\\ {}^{t\!}\bs{x}_{21}&0&x_{22}&0\\ 0&{}^{t\!}\bs{x}_{31}&0&x_{33} }}{\begin{array}{l}x_{11},x_{22},x_{33}\in\mathbb R,\\ \bs{x}_{21},\bs{x}_{31}\in\mathbb R^4\end{array}}\subset \mathrm{Sym}(10,\mathbb R). \] Then, the set \[ \Omega:=\set{x\in V}{x\text{ is positive definite}} \] is a homogeneous cone of rank $3$ whose structure constants are $n_{21}=n_{31}=4$ and $n_{32}=0$. The basic relative invariants $\Delta_1,\Delta_2,\Delta_3$ of $\Omega$ are \[ \Delta_1(x)=x_{11},\quad \Delta_2(x)=x_{11}x_{22}-|\bs{x}_{21}|^2,\quad \Delta_3(x)=x_{11}x_{33}-|\bs{x}_{31}|^2, \] and those $\Delta^*_1,\Delta^*_2,\Delta^*_3$ of $\Omega^*$ are \[ \Delta^*_1(y)=y_{11}y_{22}y_{33}-y_{22}|\bs{y}_{31}|^2-y_{33}|\bs{y}_{21}|^2,\quad \Delta^*_2(y)=y_{22},\quad \Delta^*_3(y)=y_{33}. \] The multiplier matrices $\sigma$ and $\sigma_*$ are therefore given as \[\sigma=\pmat{1&0&0\\1&1&0\\1&0&1},\quad \sigma_*=\pmat{1&1&1\\0&1&0\\0&0&1}, \] whence we see that \[ A\sigma_*A^{-1}=\pmat{1&0&0\\0&1&0\\1&1&1}\ne \sigma. \] \end{example} \section*{Acknowledgments} This work was supported by a Grant-in-Aid for JSPS Fellows (2018J00379). The author is grateful to Professor Hiroyuki Ochiai for insightful comments on this work. He would also like to express his sincere gratitude to Professor Kohji Matsumoto for his encouragement and advice in writing this paper.
\section{Introduction} Precise knowledge of a planet's mass and radius is essential to infer its internal structure and the presence of an atmosphere. This is especially relevant for small exoplanets (below $\sim$3\,R$_{\oplus}$), which could encompass a wide range of compositions, from mini-Neptunes with volatile H/He envelopes, to ocean planets with water mantles and steam atmospheres, to ultra-hot rocky planets with molten lava-rich surfaces and heavyweight envelopes (e.g., \citealt{Winn2018,Otegi2020}). In that respect, the HD\,3167 system is of particular interest, as it hosts three known planets (\citealt{Vanderburg2016}; \citealt{Christiansen2017}; \citealt{Gandolfi2017}): HD\,3167b (P = 0.96\,d, R$_\mathrm{p}$ = 1.70$\stackrel{+0.18}{_{-0.15}}$\,R$_{\oplus}$, M$_\mathrm{p}$ = 5.02$\pm$0.38 M$_{\oplus}$), HD\,3167d (P = 8.51\,d, M$_\mathrm{p}$\,sin\,$i$ = 6.90 $\pm$0.71 M$_{\oplus}$), and HD\,3167c (P = 29.84\,d, R$_\mathrm{p}$ = 3.01$\stackrel{+0.42}{_{-0.28}}$ \,R$_{\oplus}$, M$_\mathrm{p}$ = 9.80$\stackrel{+1.30}{_{-1.24}}$\,M$_{\oplus}$). Planets b and c are transiting their nearby (47\,pc) and bright (V = 9) K0V star, allowing for detailed measurements of their radii and atmospheric properties. The intermediate planet d is not transiting, but is nonetheless expected to have a low mutual inclination with planet c based on dynamical calculations (\citealt{Dalal2019}). The orbital architecture of the HD\,3167 system is particularly intriguing, because the orbital plane of its innermost planet b is close to the stellar equatorial plane and perpendicular to the orbital planes of the outer planets d and c, which are on polar orbits around the star (\citealt{Dalal2019}, \citealt{Bourrier2021_3167}). HD\,3167c is a mini-Neptune, that is, an exoplanet smaller than Neptune that still harbors a substantial volatile envelope of hydrogen and helium, or possibly a large fraction of water (e.g., \citealt{Mousis2020}).
Transit observations in the near-IR with the Wide Field Camera 3 (WFC3) onboard the Hubble Space Telescope (HST), combined with broadband transit measurements with Kepler/K2 and Spitzer/IRAC, revealed molecular absorption in the atmosphere of HD\,3167c (\citealt{Guilluy2021,mikal-evans_hd3167c}). HD\,3167b belongs to the population of ultra-short period planets (USPs, R$<$2\,R$_{\oplus}$; P$<$1\,d). These exotic worlds receive so much energy from their parent star, via irradiation, tides, or even electromagnetic induction (\citealt{Kislyakova2017,Kislyakova2018}), that they cannot retain a volatile atmosphere (e.g., \citealt{Lopez2017,Winn2018}). The planetary lithosphere is further expected to weaken and melt, leading to the formation of magma oceans and potential volcanic activity, particularly on the irradiated day side (e.g., \citealt{Schaefer2009,Barnes2010,Gelman2011,Leger2011,Elkins2012}). When planetary equilibrium temperatures exceed $\sim$1000 K, outgassing can release massive amounts of dust and metal effluents and sustain a secondary atmosphere around the planet (\citealt{Rappaport2012,Rappaport2014,Ito2015}). \citet{Guenther2020} used high-resolution spectroscopy with UVES to search for the absorption lines of a metal-rich envelope during the transit of HD\,3167b. While they were only able to set upper limits on tracers of this envelope (such as sodium and oxygen, \citealt{Miguel2011}), this does not preclude the possibility that their signatures vary over time (as for 55\,Cnc\,e, \citealt{RiddenHarper2016}) or that the planet is surrounded by an envelope whose refractory content has a broadband optical signature. Measuring the radii of HD\,3167b, and of USPs in general, with high precision and in combination with their masses is thus essential to investigate the presence of such exotic atmospheres and gain further insights into their mysterious nature.
\\ The CHEOPS satellite (\citealt{Benz2021}) was used to characterize, with high precision, the transits of the ultra-short period planet HD\,3167b. We combined these observations with published transit photometry of HD\,3167b and c, and with both published and new radial velocity (RV) data of the system (Sect.~\ref{sec:data_red}), to perform a global analysis of all data available on the system (Sect.~\ref{sec:data_ana_glob}) and obtain a complete and refined view of its planets' bulk properties (Sect.~\ref{sec:results_sys}). These properties are used to constrain the internal structures of the two transiting planets of the system in Sect.~\ref{sec:int_struct}, and to simulate their past dynamical (Sect.~\ref{sec:dyn_evol}) and atmospheric (Sect.~\ref{sec:atm_evol}) evolution. \section{Observations and data reduction} \label{sec:data_red} \subsection{CHEOPS photometry} Transits of HD\,3167b were observed with CHEOPS within the frame of the Guaranteed Time Observation (GTO) program, as part of a sub-program dedicated to measuring the radii of USPs with high precision to better constrain their internal structure. Twelve visits were obtained between 2 August and 14 November 2020. We scheduled three CHEOPS orbits per visit, so that one orbit would cover the transit and two orbits would cover the pre-transit and post-transit phases, allowing us to measure the baseline stellar flux and detrend the light curve. We set an exposure time of 36.5\,s, with which we expected to reach a precision of about 7\% on the transit depth in one visit. We eventually obtained an average precision of 15\% per visit due to a higher noise level than anticipated, which we partly attributed to the presence of new hot pixels in the photometric aperture. We note that no transit of planet c was caught by the CHEOPS observations.
Observations were processed with the CHEOPS DRP (Data Reduction Pipeline, version 13.1.0; \citealt{hoyer_cheops-drp}), which performs aperture photometry and provides four light curves extracted with four different aperture radii. The so-called default aperture, with a radius of 25~pixels, consistently yielded the lowest rms noise level throughout all visits, and we selected this data set for our analysis. \subsection{K2 photometry} During the 80-day-long Campaign 8 of its K2 mission (3 January to 23 March 2016), the Kepler space telescope acquired 30-min cadence photometry of the star HD\,3167, from which \cite{vanderburg_hd3167} discovered the transiting planets b and c. In addition to the simple aperture photometry (SAP) and the Pre-search Data Conditioning SAP (PDCSAP), the MAST archive\footnote{\url{https://archive.stsci.edu/k2}} provides several high-level science data products (HLSP) based on different photometric extraction techniques. We compared them all and identified the ones leading to light curves with the lowest noise levels: \texttt{K2SC} \citep{aigrain_k2sc} and \texttt{K2SFF} \citep{vanderburg_k2sff}. We selected the light curve provided by \texttt{K2SFF} as it preserves the photometric variability in the data. We fitted the data with a joint model of this variability and of the transit light curves, to properly propagate uncertainties throughout all parameters. \texttt{K2SFF} proposes a best solution among photometric extractions obtained with different aperture shapes and sizes. We compared them one by one, confirmed that the proposed best aperture has the highest signal-to-noise ratio, and subsequently used it for our analysis. \texttt{K2SFF} does not provide uncertainties on the photometric points. We thus investigated the error values generated by the other extraction methods and selected the most conservative (largest) ones for our data set, which were those computed by \texttt{K2SC\_SAP}.
We used the Gaia DR2 catalogue \citep{gaia_dr2_2016, gaia_dr2_2018} to check for contamination by nearby stars in the aperture. We only considered objects with G-band magnitude differences with HD\,3167 smaller than nine, and found none within the aperture. \subsection{Spitzer/IRAC photometry} \label{ssec:spitzer_data} We used Spitzer data from the General Observing (GO) program 13052 (PI: M.~Werner), which contains four observations of the HD\,3167 system at 4.5 $\mu$m (channel 2). These observations covered three transits of planet b (AORs 61072896, 61072640, 68163072) on 22 and 25 October 2016 and 16 October 2018, and one transit of planet c (AOR 61070592) between 31 October and 1 November 2016. The transit of planet c was analyzed in \cite{mikal-evans_hd3167c}. The data were downloaded from the Spitzer Heritage Archive\footnote{\url{https://sha.ipac.caltech.edu}}. We extracted and pre-processed the photometry following the method described in \cite{demory_55Cnce}, which relies on the modeling of the intra-pixel sensitivity of the IRAC instrument \citep{ingalls_irac} using the bilinearly-interpolated subpixel sensitivity (BLISS) mapping technique \citep{stevenson_bliss}. Our method also includes a correction as a linear function of the full-width at half-maximum (FWHM) of the pixel response function (PRF). An additional $\log^2$ ramp as a function of time was added for the pre-processing of the transit of planet c. The uncertainties associated with these corrections were propagated to the error bars of the resulting data points. The four resulting de-trended time series were sampled at a cadence of 27\,s, and we measured negligible red noise. \subsection{HST photometry} \label{ssec:hst_data} We used five transit observations of HD\,3167c collected with HST/WFC3 using the G141 grism configuration (wavelength range 1.1-1.7\,$\mu$m). These observations are part of the GO program 15333 (PI: I.
Crossfield) and they were acquired on 22 May 2018, 20 July 2018, 14 June 2019, 12 August 2019, and 5 July 2020. Each of the five visits covers seven HST orbits. We used the broadband photometric light curves presented in \cite{mikal-evans_hd3167c}, which we obtained as a three-column file with the time of observation in JD\_UTC, the normalized flux, and the normalized uncertainty. The light curves were extracted from the sum of all spectra across the full wavelength range. The resulting data set is made of 69.6-second exposures at a cadence of 111 seconds. We also downloaded the raw data from the MAST archive\footnote{\url{https://archive.stsci.edu/hst/wfc3}} to have access to the housekeeping parameters. We converted the time from JD\_UTC to BJD\_TDB (barycentric Julian date in barycentric dynamical time) using the Python package \texttt{astropy} \citep{astropy1, astropy2} and assuming that the HST spacecraft is located at the center of the Earth. This approximation leads to a timing error of $\pm23.08\,\text{ms}$, which we considered negligible. The correction from JD\_UTC to BJD\_TDB is significant, as it amounts to up to nearly 10\,minutes. The light curves feature strong systematics that are typical of HST/WFC3 observations, with one ramp repeatable as a function of HST orbital phase, and one global ramp as a function of time. The flux level also jumps every two points due to the switching from forward to backward scanning of the HST detector between two consecutive exposures. This results in two mean flux levels that have to be fitted independently with two offsets. In addition, we carefully checked for the possibility of HD\,3167b transiting during these observations and found that such a double event occurs in visits 3 and 4, which was not reported in previous analyses of these data sets. We therefore included planet b in the analysis of our HST time series.
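The quoted $\pm23.08$\,ms error budget of the geocenter approximation is simply the light-travel time across the spacecraft's geocentric distance; a minimal sketch (the HST altitude of $\sim$547\,km is our assumption):

```python
# Light-travel time across HST's geocentric distance: the maximum
# timing error made by placing the spacecraft at the Earth's center.
C_KM_S = 299792.458   # speed of light [km/s]
R_EARTH = 6371.0      # mean Earth radius [km]
H_HST = 547.0         # assumed HST altitude [km] (illustrative)

dt_ms = (R_EARTH + H_HST) / C_KM_S * 1e3   # ~23.1 ms
```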
\subsection{Radial velocity data} The RV data analyzed in this work come from several instruments. We used data from HARPS programs 097.C-0948 and 098.C-0860, as published in \cite{gandolfi_hd3167}, data from APF/Levy and Keck/HIRES, as published in \cite{Christiansen2017}, and data from the HARPS-N GTO programme (\citealt{Cosentino2012}), as well as from programs A33TAC\_15 (PI: D. Gandolfi), CAT16B\_61 (PI: H. J. Deeg), A34DDT2 and A36DDT2 (PI: G. H\'ebrard). The HARPS-N data were partly published in \cite{Christiansen2017}, \cite{gandolfi_hd3167} and \citet{Dalal2019}, but we publish here 42 new RV points from the HARPS-N GTO programme. The HARPS-N RV data were extracted from the instrument raw frames using the latest version of the ESPRESSO data reduction software (DRS, version 2.3.5). Following the work described in \cite{dumusque_harpsn_2021}, the ESPRESSO pipeline has been optimized to work with HARPS-N data. Compared to the HARPS-N DRS version 3.8, the new version of the reduction pipeline, along with the performed optimizations, provides smaller night-to-night variations, estimated to be 0.5\,m/s rms compared to 0.8\,m/s, and better long-term stability of the RVs, thanks to a careful selection of the thorium lines used for calibrating the instrument. The new pipeline further extracts RVs from cross-correlation functions computed with improved binary masks (here the G9 mask, closer in spectral type to HD\,3167), built with weights more representative of the RV information content of each spectral line (\citealt{Bourrier2021_3167}). We rejected the observation `2018-01-08T20-30-55.308' because it was not possible to correct for the color effect induced by Earth atmospheric diffusion. We removed observation `2017-11-14T21-32-32.784' as well, because the corresponding RV was clearly an outlier of the RV time series, and all stars observed during the same night showed outliers as well.
Finally, we removed all observations that were taken at an airmass larger than 1.7, to prevent color dependencies in the RV time series (the ADC corrects for atmospheric extinction up to an airmass of 2.0). This selection gives us a total of 213 HARPS-N RV measurements to analyze. Merging the data from all four instruments, we obtain a time series of 434 RV data points. To prevent biases induced by the Rossiter-McLaughlin signals from planets b and c \citep{Dalal2019, Bourrier2021_3167}, we discarded the 111 data points observed during their transits. The filtered RV data set that we included in our analysis represents a total of 323 data points (39 HARPS points, 102 APF/Levy points, 55 Keck/HIRES points, and 127 HARPS-N points) covering more than 5.3 years ($\sim$ 1940 days). In addition to the RV signals, the HARPS and HARPS-N data sets included several stellar activity indicators. The HARPS data included the bisector inverse slope span (`BIS SPAN') of the cross-correlation function (CCF), the full-width at half maximum (FWHM) of the CCF, and the $\log R'_{\mathrm{HK}}$. The activity indicators provided with the HARPS-N data were the `BIS SPAN' of the CCF, the FWHM of the CCF, the contrast, the S$_\text{MW}$-index \citep{2011arXiv1107.5325L}, the H$\alpha$-index \citep{2011A&A...534A..30G}, the $\mathrm{Na\,I}$ lines \citep{2007MNRAS.378.1007D}, and the $\mathrm{Ca\,II}$ lines. \section{Global analysis of the system} \label{sec:data_ana_glob} \subsection{Stellar properties} \label{sec:star} We derived the stellar atmospheric parameters ($T_{\mathrm{eff}}$, $\log g$, microturbulence, [Fe/H]) using ARES+MOOG, following the same methodology described in \citet[][]{Santos-13,Sousa-14,Sousa-21}. We used the latest version of ARES\footnote{The latest version, ARES v2, can be downloaded at \url{https://github.com/sousasag/ARES}} \citep{Sousa-07, Sousa-15} to measure the equivalent widths (EW) of iron lines on the combined HARPS-N spectrum of HD\,3167.
We used a minimization process to find ionization and excitation equilibrium and converge to the best set of spectroscopic parameters. This process makes use of a grid of Kurucz model atmospheres \citep{Kurucz1993} and the radiative transfer code MOOG \citep{Sneden-73}. The same method was also applied to a combined spectrum from HARPS observations, providing a completely compatible set of parameters. The stellar abundances [Mg/H] = 0.07 $\pm$ 0.03 dex and [Si/H] = 0.00 $\pm$ 0.04 dex were derived using the classical curve-of-growth analysis method assuming local thermodynamic equilibrium \citep[e.g.,][]{Adibekyan-12, Adibekyan-15}. The same codes and models were used for the abundance determinations. We determined the radius of HD\,3167 using the infrared flux method (IRFM; \citealt{Blackwell1977}) in a Markov-Chain Monte Carlo (MCMC) approach \citep{Schanche2020}. We constructed spectral energy distributions (SEDs) from stellar atmospheric models using the stellar parameters that were derived via the spectral analysis detailed above as priors. These fluxes are compared to observed broadband photometry to derive the apparent bolometric flux, and hence the stellar angular diameter and effective temperature of HD\,3167. To achieve this, we retrieved data from the most recent data releases for the following bandpasses: {\it Gaia} G, G$_{\rm BP}$, and G$_{\rm RP}$, 2MASS J, H, and K, and {\it WISE} W1 and W2 \citep{Skrutskie2006,Wright2010,GaiaCollaboration2021}, and used stellar atmospheric models from the \textsc{atlas} Catalogues \citep{Castelli2003}. The stellar angular diameter is converted to the stellar radius using the offset-corrected {\it Gaia} EDR3 parallax \citep{Lindegren2021}, from which we obtain $R_{\star}=0.871\pm0.006\, R_{\odot}$. Stellar mass $M_{\star}$ and age $t_{\star}$ were derived from isochrones starting from $T_{\mathrm{eff}}$, [Fe/H], and $R_{\star}$.
To make our final estimates more robust we adopted two different stellar evolutionary models, namely PARSEC\footnote{\textit{PA}dova and T\textit{R}ieste \textit{S}tellar \textit{E}volutionary \textit{C}ode: \url{http://stev.oapd.inaf.it/cgi-bin/cmd}} v1.2S \citep{marigo17} and CLES \citep[Code Liègeois d'Évolution Stellaire,][]{scuflaire08}. In detail, we inferred a first pair of mass and age values by interpolating the input values within pre-computed grids of PARSEC isochrones and tracks through the isochrone placement technique presented in \citet{bonfanti15,Bonfanti2016}. To further improve the convergence we also provided $v\sin{i}=2.41\pm0.37$ km/s \citep{Bourrier2021_3167} as input to the code, to benefit from the synergy between isochrones and gyrochronology as described in \citet{Bonfanti2016}. The second pair of mass and age, instead, was inferred by injecting the stellar input values into the CLES code, which retrieves the best-fit output values following the Levenberg-Marquardt minimization scheme \citep[see][for the details]{salmon21}. As thoroughly described in \citet{bonfanti21}, we finally merged the two respective pairs of outcomes after carefully checking their mutual consistency through a $\chi^2$-based criterion, and obtained $M_{\star}=0.852_{-0.015}^{+0.026}\,M_{\odot}$ and $t_{\star}=10.2_{-2.4}^{+1.8}$ Gyr. Relevant stellar parameters are summarized in Tab.~\ref{tab:stellarParam}.
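The conversion from the IRFM angular diameter to the stellar radius quoted above reduces to $R_\star = (\theta/2)\,d$ for small angles; a minimal sketch with approximate physical constants:

```python
import math

MAS_TO_RAD = math.radians(1.0 / 3600.0) * 1e-3   # milliarcsec -> rad
PC_KM = 3.0857e13                                # 1 parsec [km]
R_SUN_KM = 6.957e5                               # solar radius [km]

def radius_from_angular_diameter(theta_mas, dist_pc):
    """Stellar radius in R_sun from the angular diameter theta
    and the distance d, using R = (theta/2) * d."""
    theta_rad = theta_mas * MAS_TO_RAD
    return 0.5 * theta_rad * dist_pc * PC_KM / R_SUN_KM

# HD 3167: theta ~ 0.172 mas at d ~ 47.39 pc  ->  R* ~ 0.87 R_sun
```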
\begin{table} \caption{HD\,3167: Stellar parameters.} \label{tab:stellarParam} \centering \begin{tabular}{llll} \hline\hline \multicolumn{2}{l}{Parameter} & Value & Method \\ \hline $T_{\mathrm{eff}}$ & [K] & $5300\pm73$ & spectroscopy \\ $\log{g}$ & [cgs] & $4.47\pm0.12$ & spectroscopy \\\relax [Fe/H] & [dex] & $0.037\pm0.048$ & spectroscopy \\\relax [Mg/H] & [dex] & $0.07\pm0.03$ & spectroscopy \\\relax [Si/H] & [dex] & $0.00\pm0.04$ & spectroscopy \\ $d$ & [pc] & $47.39\pm0.04$ & Gaia parallax\tablefootmark{(a)}\\ $\theta$ & [mas] & $0.172\pm0.001$ & IRFM \\ $R_{\star}$ & [$R_{\odot}$] & $0.871\pm0.006$ & IRFM \\ $M_{\star}$ & [$M_{\odot}$] & $0.852_{-0.015}^{+0.026}$ & isochrones \\ $t_{\star}$ & [Gyr] & $10.2_{-2.4}^{+1.8}$ & isochrones \\ $L_{\star}$ & [$L_{\odot}$] & $0.537\pm0.031$ & from $R_{\star}$ and $T_{\mathrm{eff}}$\\ $\rho_{\star}$ & [$\rho_{\odot}$] & $1.289\pm0.041$ & from $R_{\star}$ and $M_{\star}$ \\ \hline \end{tabular} \tablefoot{\tablefoottext{a}{Correction from \citet{Lindegren2021} applied}} \end{table} \subsection{Joint photometry - velocimetry analysis} We performed a joint fit combining all the photometry and velocimetry data sets described in Sect.~\ref{sec:data_red}. In the following subsections, we detail how we modeled the planetary signals (transits and RV) consistently in every time series (Sect.~\ref{sssec:planet_models}), and how we corrected the systematics related to each instrument in the CHEOPS, K2, Spitzer, HST and RV data sets (Sect.~\ref{sssec:cheops_analysis} to \ref{sssec:rv_analysis}). Our approach consisted of first analysing each data set separately to identify their specificities, and then jointly fitting all the data together (Sect.~\ref{sssec:joint_fit}). \subsubsection{Planetary signals} \label{sssec:planet_models} The transits of planets b and c were modeled using the python package \texttt{batman} \citep{kreidberg_batman}.
We selected the quadratic law to describe the effect of the stellar limb darkening, and defined a set of two free coefficients for each of the four instrument passbands. We reduced the number of free parameters by one by using Kepler's third law and by fitting for the stellar density $\rho_\star$ instead of the normalized semi-major axes $a/R_\star$. With this approach, the planetary properties are fitted while ensuring consistency with a single set of stellar properties. We took advantage of having photometry from four instruments to perform broadband transmission spectroscopy, letting the planet-to-star radius ratios of both transiting planets vary between each passband. We modeled the RV planetary signals with Keplerian functions, and performed the joint fit of all planets while fitting for the following parameters: the time of inferior conjunction $T_0$, the orbital period $P$, the combinations of the eccentricity and the argument of periastron $e\cos\omega$ and $e\sin\omega$, and the RV semi-amplitude $K$. The systemic velocity $v_\gamma$ was fitted independently for each instrument (see Sect.~\ref{sssec:rv_analysis}). Additional free parameters were used for the two transiting planets: the planet-to-star radius ratio $k$, the orbital inclination $i$, and a common stellar density $\rho_\star$. This represents a total of five free parameters per planet, with five more for the transit light curves. We made use of a normal prior on the stellar density $\rho_\star\sim1.289\pm0.048\,\rho_\odot$ that we derived from the stellar properties (see Sect.~\ref{sec:star}). \subsubsection{CHEOPS photometry} \label{sssec:cheops_analysis} We discarded 69 out of 3496 (1.97\%) CHEOPS data points flagged by the DRP (DRP `EVENT' flag $>0$).
Among the remaining 3427 data points, we identified 48 (1.40\%) outliers by performing a $3\sigma$-clipping visit by visit, and also flagged 121 (3.53\%) points with background levels higher than $4.1\cdot10^5$ electrons, beyond which correlations with the flux start to appear \citep[e.g., Fig.~2 of][]{Deline2022_WASP-189b}. After validating that planet c never transits during the CHEOPS observations, we included a transit model for planet b only, using the \texttt{batman} python package \citep{kreidberg_batman}. Sect.~\ref{sssec:planet_models} describes in detail the joint modeling of planetary signals. We de-trended each visit with a Gaussian process (GP) as a function of the spacecraft roll angle using a Mat\'ern-3/2 kernel from the \texttt{celerite2} package \citep{foreman-mackey_celerite, foreman-mackey_celerite2}. The hyper-parameter values were the same for all visits, but each visit was fitted independently. We also included a slope in our model for the seven visits (no.~2,\,3,\,6,\,7,\,8,\,10,\,12) showing a significant linear trend. We quadratically added a jitter term to the error bars of each visit in order to account for the underestimation of uncertainties. This leads to a total number of 33 free parameters for the correction of CHEOPS-related systematics, with one flux mean level per visit, one jitter term per visit, a slope for seven visits, and two hyper-parameters for the GP model. The de-trended CHEOPS transit of planet b obtained after the joint fit is shown in Fig.~\ref{fig:cheops_transit} and the individual light curves are in Fig.~\ref{fig:cheops_raw_data}. \begin{figure} \centering \includegraphics[width=\hsize]{cheops_transit_b.pdf} \caption{CHEOPS phase-folded transit of HD\,3167\,b. The top panel shows the de-trended transit data points (blue) and the binned data (black) obtained from the joint fit. A sample of 100 transit light curves drawn from the posterior distribution is represented in orange.
The lower panel shows the best-fit residuals.} \label{fig:cheops_transit} \end{figure} \subsubsection{K2 photometry} \label{sssec:k2_analysis} We performed a preliminary minimization fit to remove the transits of planets b and c (modeled with the \texttt{batman} package) and the long-term photometric variability (modeled with a \texttt{celerite2} GP and a Mat\'ern-3/2 kernel) in order to identify outlying data points from the residuals. We used a $6.5\sigma$-clipping criterion on the residuals to discard 23 outliers out of the 3448 data points (0.67\%). The choice of $6.5\sigma$ offers a good trade-off between efficient clipping and avoiding the rejection of valid points in the noisiest parts of the time series. From the residuals, we identified by eye significant changes in the spread of data points, indicating differences in the noise level over the 80-day long observations. We identified three time ranges, with the middle one having the lowest apparent noise level (see Fig.~\ref{fig:k2_noise}). We investigated the cause of this phenomenon and found that it correlates well with the frequency at which K2's thrusters fire to correct the pointing drift of the spacecraft. We identified precisely the times at which the noise level changes by selecting the times providing the best likelihood among several minimization fits of the light curve. Each fit was performed using the model described before (GP and transit models for planets b and c) and allocating an individual jitter term per time range. We computed the best-fit likelihood for several pairs of times and selected the pair with the maximum best-fit likelihood. The final jump timings were fixed at BJD$_\mathrm{TDB}$ times of 2\,457\,406.95 and 2\,457\,436.07, respectively.
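The $\sigma$-clipping used above to reject outliers can be sketched as an iterative rejection loop (a simplified stand-in for the actual residual-based clipping):

```python
import statistics

def sigma_clip(values, nsigma=6.5, max_iter=10):
    """Iteratively reject points deviating by more than nsigma
    standard deviations from the mean (illustrative sketch)."""
    data = list(values)
    for _ in range(max_iter):
        mu = statistics.fmean(data)
        sd = statistics.pstdev(data)
        kept = [v for v in data if abs(v - mu) <= nsigma * sd]
        if len(kept) == len(data):
            break  # converged: no more points rejected
        data = kept
    return data
```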
By considering these three time ranges separately and using individual jitter terms, we aimed at limiting the bias induced by noisy regions due to the underestimation of error bars, and at maximising the precision obtained from the low-noise middle region. \begin{figure} \centering \includegraphics[width=\hsize]{k2_noise.pdf} \caption{Noise level time ranges in the K2 time series. Top panel: Normalized K2 flux (blue points) with the best-fit model (transits + GP) obtained from minimization. Middle panel: Residuals after removing the best-fit model. Bottom panel: Flag indicating when the K2 spacecraft is firing its thrusters to correct pointing drifts. In all three panels, the vertical dotted black lines show the noise jump timings providing the best likelihood.} \label{fig:k2_noise} \end{figure} For the final fit of the K2 time series, we used the same model as for the preliminary fit (\texttt{batman} transit models and Mat\'ern-3/2 GP kernel). The transit models are oversampled with respect to the data cadence by a factor of 30 (oversampled cadence of about 1\,min), and binned down to the sampling rate of the light curve. This technique allows one to account for distortion effects due to long integration times \citep[e.g., ][]{kipping_binning}, which are especially strong during ingress and egress. The GP model fits for the correlated noise in the data, which corresponds to both instrumental systematics and stellar activity. \cite{gandolfi_hd3167} pointed out the presence of the latter in the K2 data with a significant peak in the periodogram matching the stellar rotation period at $\sim$ 24 days. We placed normal priors on both GP hyper-parameters to help convergence of the fit, based on the values obtained when analysing the K2 time series alone.
The prior values are $\mathcal{N}\!\left(370, 70\right)\,\mathrm{ppm}$ for the GP amplitude $\sigma_\mathrm{GP}$, and $\mathcal{N}\!\left(10, 0.5\right)\,\mathrm{days}$ for the GP correlation time scale $\rho_\mathrm{GP}$, where $\mathcal{N}\!\left(\mu, \sigma\right)$ represents a normal prior of mean $\mu$ and variance $\sigma^2$. Fig.~\ref{fig:k2_transits} shows the best joint-fit de-trended K2 transits of planets b and c. The V-shaped transit of HD\,3167\,b is due to the long cadence of the observations, which averages the sharp ingress and egress with the flat regions outside and inside the transit \citep[e.g., ][]{kipping_binning}. \begin{figure} \centering \includegraphics[width=\hsize]{k2_transit_b.pdf} \includegraphics[width=\hsize]{k2_transit_c.pdf} \caption{K2 phase-folded transits of planets b (top) and c (bottom). Blue points represent the data after detrending for any other signal. The orange curves are transit models with parameter sets randomly drawn from the posterior distribution. Binned data are represented in the top panel (transit of planet b).} \label{fig:k2_transits} \end{figure} \subsubsection{Spitzer photometry} \label{sssec:spitzer_analysis} The Spitzer photometry was pre-processed prior to the joint analysis to correct for instrumental systematics (see Sect.~\ref{ssec:spitzer_data}). During the joint fit, the Spitzer observations were fitted with a transit model of planets b and c, and two additional free parameters per observation to account for the flux offset and the underestimation of uncertainties (white noise jitter term). The resulting transit light curves obtained with Spitzer are shown in Fig.~\ref{fig:spitzer_transits}. \begin{figure} \centering \includegraphics[width=\hsize]{spitzer_transit_b.pdf} \includegraphics[width=\hsize]{spitzer_transit_c.pdf} \caption{Spitzer phase-folded transits of planets b (top) and c (bottom). Blue and black points are the de-trended and binned data, respectively.
The orange shaded area is made of samples drawn from the posterior distribution.} \label{fig:spitzer_transits} \end{figure} \subsubsection{HST photometry} \label{sssec:hst_analysis} We started by manually flagging one obvious outlier in the fourth orbit of the fifth transit observation. We also discarded 126 out of 722 (17.45\%) points that correspond to the first orbit of every visit, and to the first point of every orbit, following the standard approach \citep[e.g., ][]{mikal-evans_hd3167c}. Given the periodicity of the systematic ramp every HST orbit, we decided to adopt an approach similar to the one typically used with CHEOPS, that is, a GP detrending as a function of the spacecraft orbital phase. To properly determine the phase of each data point, we computed a precise orbital period for HST from the housekeeping parameters. We used the spacecraft latitude stored in the jitter files (\textit{*jit.fits} files) and, for each visit, we fitted the latitude variations with a sine wave. We combined the outcome of the fits and computed a precise orbital period of $P_\text{HST} = 95.230^{+0.017}_{-0.009}\,\text{min}$. The GP correction as a function of the orbital phase was performed using a Mat\'ern-3/2 kernel with a single set of hyper-parameters for all visits, but each visit was fitted individually. We left the HST period free, while using a strong Gaussian prior $\mathcal{N}\!\left(95.23, 0.02\right)\,\text{min}$ based on the previously derived value. The orbital phase of each data point was therefore computed at each iteration as a function of the HST period value and the JD\_UTC time. For each visit, we also included two flux mean values (forward and backward scans) and a jitter term to account for the underestimation of error bars. We found that the addition of a linear trend as a function of time was necessary for all visits, whereas the addition of a quadratic trend was required for visits 2 and 3 only.
This leads to a total of 25 parameters for the correction of HST systematics in the data. We mentioned in Sect.~\ref{ssec:hst_data} that our careful analysis of the HST data led to the discovery of serendipitous transits of planet b during visits 3 and 4. Therefore, we added a \texttt{batman} transit model accounting for both planets b and c, and the best-fit models derived from the joint analysis are represented in Fig.~\ref{fig:hst_transits}. The data of each individual visit are shown in Fig.~\ref{fig:hst_raw_data}. \begin{figure} \centering \includegraphics[width=\hsize]{hst_transit_b.pdf} \includegraphics[width=\hsize]{hst_transit_c.pdf} \caption{HST phase-folded transits of planets b (top) and c (bottom). De-trended data corrected for the strong periodic instrumental systematics are shown in blue. Binned data points are represented in black. The orange curves are samples from the posterior distribution of the joint analysis.} \label{fig:hst_transits} \end{figure} \subsubsection{Radial velocity} \label{sssec:rv_analysis} We computed the Generalized Lomb-Scargle \citep[GLS --][]{ferraz-mello_1981_gls, zechmeister_2009_gls} periodogram of the selected RV data set after removing an offset (median value of the time series) for each instrument, and we clearly detected the signals from the three planets (see Fig.~\ref{fig:rv_periodogram}). Three other significant peaks are also present. Two of them seem to be induced by the stellar rotation, with one peak at the rotation period ($\sim$24\,days, also present in the K2 data; \citealt{gandolfi_hd3167}) and another at half of this value (see Sect.~\ref{ssec:results_star} for a detailed discussion). The last very significant peak spans a range of possible periods from 70 to 120 days, and does not match any of the known objects in the system. \begin{figure} \centering \includegraphics[width=\hsize,trim={0.2cm 0.3cm 0.3cm 0.3cm},clip]{RV_LS_periodogram.pdf} \caption{Generalized Lomb-Scargle periodogram of the RV data.
The colored triangles at the top represent several periods of interest: the orbital periods of planets b, c and d, the expected rotation period of the star ($\sim$ 24 days), and one year. The full triangles show the main periods, while the empty ones show the first three harmonics of each period. The horizontal grey lines highlight the False Alarm Probabilities (FAP) of $10^{-3}$, $10^{-4}$ and $10^{-5}$.} \label{fig:rv_periodogram} \end{figure} We analyzed the GLS periodograms of the different stellar indices available for the HARPS and HARPS-N data. We found significant peaks at $\sim$ 24 days for the $\mathrm{H}\alpha$ and $\mathrm{Na\,I}$ lines in the HARPS-N data, and nothing around 100 days in either data set. We also looked at possible correlations between the indices and the RV signals using the Pearson, Kendall and Spearman criteria. We detected significant correlations of the HARPS data with the FWHM ($p$-values $< 6\times10^{-3}$) and of the HARPS-N data with the $\mathrm{H}\alpha$ line ($p$-values $< 2\times10^{-6}$). We first designed our RV model using three Keplerian functions for planets b, c and d, and a systemic velocity value for each of the four instruments (APF/Levy, Keck/HIRES, HARPS and HARPS-N). We also included a white-noise term (jitter) for each instrument to account for uncertainty underestimation. Based on the correlation analysis, we jointly fitted for linear functions of the FWHM and the $\mathrm{H}\alpha$ line to correct the HARPS and HARPS-N data, respectively. This modeling choice was motivated by the aim of minimizing the number of free parameters, even though these correlations may not be as strictly linear as we assume \citep[e.g., ][]{2011A&A...528A...4B, 2014A&A...566A..35S, 2019MNRAS.487.1082C}. We ran a minimization fit and sampled the parameter space with an MCMC approach. We obtained planetary parameters fully consistent with the values from both \cite{Christiansen2017} and \cite{gandolfi_hd3167}.
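The Keplerian RV model underlying these fits can be sketched as follows (here parametrized directly with $e$ and $\omega$ rather than the fitted $e\cos\omega$ and $e\sin\omega$, and with the time of periastron instead of $T_0$; the function names are ours):

```python
import math

def solve_kepler(mean_anom, ecc, tol=1e-10):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = mean_anom if ecc < 0.8 else math.pi
    for _ in range(100):
        dE = (E - ecc * math.sin(E) - mean_anom) / (1.0 - ecc * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def rv_keplerian(t, P, K, ecc, omega, t_peri, gamma=0.0):
    """Stellar RV induced by one planet on a Keplerian orbit:
    v(t) = gamma + K * (cos(nu + omega) + e * cos(omega))."""
    M = (2.0 * math.pi * (t - t_peri) / P) % (2.0 * math.pi)
    E = solve_kepler(M, ecc)
    # true anomaly from the eccentric anomaly
    nu = 2.0 * math.atan2(math.sqrt(1.0 + ecc) * math.sin(E / 2.0),
                          math.sqrt(1.0 - ecc) * math.cos(E / 2.0))
    return gamma + K * (math.cos(nu + omega) + ecc * math.cos(omega))
```

The full model sums one such term per planet and adds the per-instrument systemic velocity and activity decorrelation terms.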
We computed the periodogram of the residuals and retrieved the very significant peak at $\sim$ 100 days, with a false alarm probability (FAP) smaller than $10^{-10}$. We investigated the possible source of this signal by first looking at indicators of stellar activity at those periods, but found no significant signatures. We also compared the periodograms of each spectrograph to search for potential discrepancies that could indicate an instrumental origin of the long-period signal (see Fig.~\ref{fig:rv_periodogram_inst}). We found that both the HARPS-N and Keck/HIRES data have power in this regime ($\text{FAP}<10^{-4}$). We note that there is also a hint of signal in the APF/Levy time series, even though it is less significant. The HARPS data cover only 128 days in total, with poor sampling over this baseline, and thus their individual periodogram does not feature any peak around 100~days. The presence of power in several data sets strongly suggested that the signal was not induced by instrumental systematics and might actually have a planetary origin. To further consolidate this hypothesis, we compared the phases of the long-period signal measured by each spectrograph by looking at the RV time series phase-folded on the detected period. Figure~\ref{fig:rv_hd3167e} shows the best result of the joint fit performed in this work, where one can visually validate the consistency of the signal phase across all instruments. In view of these different outcomes, we rejected a stellar or instrumental origin of the residual signal and interpreted it as the RV signature of a fourth planet, whose presence was suggested by \citet{Dalal2019}. We ran another fit to the data including an additional Keplerian model with a uniform prior on the period spanning a large range from 60 to 200 days.
The resulting semi-amplitude of the Doppler signature produced by the new planet was significant at more than 9$\sigma$, and the residual periodogram no longer featured any significant peaks. The comparison of the 3-planet and 4-planet models using the Bayesian and Akaike Information Criteria (BIC and AIC) clearly favored the inclusion of the new planet ($\Delta\mathrm{BIC}<-57$ and $\Delta\mathrm{AIC}<-68$). Therefore, the final RV model used in the joint fit with the photometry included a Keplerian model for each of the three known planets b, c and d, and another for the new long-period planet e. We fitted the systemic velocity and a jitter term for each instrument, and added linear corrections of the HARPS and HARPS-N data as functions of the FWHM and the $\mathrm{H}\alpha$ line, respectively. We used ten parameters in total to fit the instrument-related effects (offset, noise and decorrelation). \begin{figure} \centering \includegraphics[width=\hsize,trim={0.2cm 0.3cm 0.3cm 0.3cm},clip]{RV_LS_periodogram_inst.pdf} \caption{Generalized Lomb-Scargle periodogram of the RV data for each instrument. For comparison purposes, each periodogram is normalized by the power corresponding to a False Alarm Probability (FAP) of $10^{-4}$, which is highlighted by the horizontal dotted line.} \label{fig:rv_periodogram_inst} \end{figure} \begin{figure} \centering \includegraphics[width=\hsize,trim={0.2cm 0.3cm 0.3cm 0.3cm},clip]{rv_curves_hd3167e.pdf} \caption{Radial-velocity signals of planet e measured by each instrument and phase-folded on the best-fit orbital period ($P=102.09$ days). Binned data are shown in black.} \label{fig:rv_hd3167e} \end{figure} The extracted planetary signals fitted from the joint analysis are represented in Fig.~\ref{fig:rv_results}. The gap visible in the RV data phase-folded on the orbital period of HD\,3167\,b is due to the removal of in-transit points to avoid being affected by the Rossiter-McLaughlin effect.
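The BIC/AIC comparison above follows the standard definitions; a sketch with hypothetical log-likelihood values (only $n$, the parameter counts and the sign convention matter here):

```python
import math

def bic(loglike, k, n):
    """Bayesian Information Criterion (lower is better):
    BIC = k*ln(n) - 2*ln(L), for k parameters and n data points."""
    return k * math.log(n) - 2.0 * loglike

def aic(loglike, k):
    """Akaike Information Criterion (lower is better):
    AIC = 2*k - 2*ln(L)."""
    return 2.0 * k - 2.0 * loglike

# Hypothetical maximum log-likelihoods for the 3-planet and 4-planet
# RV models fitted to n = 323 points; the extra Keplerian adds 5
# parameters. A negative delta favors the 4-planet model.
n = 323
delta_bic = bic(-950.0, 30, n) - bic(-995.0, 25, n)
delta_aic = aic(-950.0, 30) - aic(-995.0, 25)
```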
\begin{figure} \centering \includegraphics[width=\hsize,trim={0cm 2.3cm 1.8cm 2.3cm},clip]{rv_curves.pdf} \caption{Phase-folded radial-velocity data of the four planets of the HD\,3167 system. The color code of the data points highlights the instrument with which they were observed (APF, HARPS, HARPS-N, HIRES). The black points show the binned data. The multiple black curves are samples randomly drawn from the posterior distribution.} \label{fig:rv_results} \end{figure} \subsubsection{Joint MCMC fit} \label{sssec:joint_fit} We used the MCMC algorithm \texttt{emcee} \citep{foreman_emcee} to explore the parameter space simultaneously for all the parameters defining the systematic correction of each instrument and the planetary properties from the transit light curves and the radial-velocity signals. We had a total of 120 parameters and, at each MCMC iteration, we computed the global log-probability by summing the log-probability obtained with each data set (CHEOPS, K2, Spitzer, HST, RV). The MCMC run was initiated by estimating the best-fit parameters and their uncertainties on each data set independently. We used the posterior distribution to generate the first-guess parameter sets. The 1280 chains of our MCMC joint fit started with a burn-in phase of 145\,000 iterations, and then sampled the parameter space with 260\,000 steps. We kept one iteration every 2000 to reduce the effect of the chain autocorrelation. We checked the convergence of the chains by visually inspecting the trace plots and validated it based on the Gelman-Rubin criterion \citep{gelman-rubin}. \section{Revision of the system properties} \label{sec:results_sys} \subsection{Star} \label{ssec:results_star} The Rossiter-McLaughlin analysis performed by \citet{Bourrier2021_3167} revealed variations in the contrast of the stellar lines occulted by HD\,3167b and c along their respective transit chords.
The authors could explain these variations through a latitudinal dependence of the stellar line contrast, which allowed them to constrain the inclination of the star with respect to the line of sight (i$_{\star}$ = 111.6$\stackrel{+3.1}{_{-3.3}}^{\circ}$ or 68.4$\stackrel{+3.3}{_{-3.1}}^{\circ}$, the two configurations being degenerate) and thus the true equatorial velocity (v$_\mathrm{eq}$ = 2.65$\stackrel{+0.47}{_{-0.42}}$\,km\,s$^{-1}$). Assuming that this scenario is correct, we combined our stellar radius (Sect.~\ref{sec:star}) with the stellar inclination and projected rotational velocity from \citet{Bourrier2021_3167} to update the true equatorial period (P$_\mathrm{eq}$ = 16.63$\stackrel{+3.0}{_{-2.6}}$ d). In addition to the planetary signals, we observe consistent peaks in the periodograms of the K2 photometry and RV data at $\sim$24\,days, which likely trace spots on the rotating stellar surface. The difference between this period and the equatorial period P$_\mathrm{eq}$ derived independently from the Rossiter-McLaughlin analysis could indicate that the star is rotating differentially. During their lifetime, spots would spend on average more time in a region located at higher latitudes, rotating more slowly than the equator. Under this hypothesis, we can estimate the spot location by assuming a solar-like law for the stellar differential rotation: \begin{equation} P_\text{eq} / P\!\left(\theta\right) = 1-\alpha \sin^2\left(\theta\right), \end{equation} with $\alpha$ the relative differential rotation rate between equator and pole, and $\theta$ the stellar latitude. We computed the value of $P\!\left(\theta\right)$ from a Gaussian fit to the periodogram peak in the K2 and RV data sets, yielding $P_\text{K2}=23.4\pm2.2$ days and $P_\text{RV}=24.1\pm1.2$ days. The close agreement between these periodic signals from different data sets gives us confidence that they arise from stellar modulation rather than instrumental variability.
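The differential-rotation law above inverts to $\sin^2\theta=\left(1-P_\text{eq}/P(\theta)\right)/\alpha$; note that for the median values of $P_\text{eq}$ and $\alpha$ the right-hand side exceeds unity (no real latitude), which is why the comparison below is made statistically over the parameter uncertainties. A minimal Python sketch of the inversion, with illustrative inputs chosen inside the quoted uncertainty ranges:

```python
import numpy as np

def spot_latitude(p_eq, p_spot, alpha):
    """Invert P_eq/P(theta) = 1 - alpha*sin^2(theta) for the spot
    latitude theta in degrees; return None if no real solution."""
    sin2 = (1.0 - p_eq / p_spot) / alpha
    if not (0.0 <= sin2 <= 1.0):
        return None
    return float(np.degrees(np.arcsin(np.sqrt(sin2))))

# Illustrative values within the quoted ranges:
# P_eq = 16.63 (+3.0) d, alpha = 0.179 +/- 0.028 (here ~+2 sigma),
# P_K2 = 23.4 d.
theta = spot_latitude(p_eq=19.6, p_spot=23.4, alpha=0.235)

# The median values admit no real solution (right-hand side > 1):
theta_med = spot_latitude(p_eq=16.63, p_spot=23.4, alpha=0.179)
```

With these inputs the recovered latitude falls in the $\gtrsim50^{\circ}$ regime discussed below.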
We then built a $\chi^2$ map of the stellar latitude as a function of $\alpha$, comparing the theoretical $P_\text{eq} / P\!\left(\theta\right)$ ratio with the measured values (Fig.~\ref{fig:diff_rot}). The relative differential rotation rate for HD\,3167 can be estimated independently from Eq.~2 of \cite{balona_diff_rot}, which is based on photometric modulations in a wide sample of \textit{Kepler} stars. Using $T_\text{eff}=5300\pm73\,\text{K}$ and $\Omega_\text{eq}=2\pi/P_\text{eq}$, we derive $\alpha=0.179\pm0.028$. Within 2$\sigma$, this value is consistent with the measurements of $P_\text{eq}$ and $P\!\left(\theta\right)$ for stellar latitudes $\gtrsim50^{\circ}$. Spots on HD\,3167 would thus be located closer to its poles than on the Sun, where spots appear at latitudes of $\sim$35$^{\circ}$ at the beginning of a new cycle and converge toward the equator over 11 years. \begin{figure} \centering \includegraphics[width=\hsize]{HD3167_diff_rot_combined_with_alpha.pdf} \caption{Probability map of the spot latitude attributed to the K2 photometry and RV modulations as a function of the relative differential rotation rate. The colorscale represents the $\chi^2$ probability and the dotted lines highlight the 1-, 2- and 3-$\sigma$ confidence intervals. The vertical dashed lines show the 1$\sigma$ confidence interval of the expected $\alpha$ value derived from Eq. 2 of \cite{balona_diff_rot}.} \label{fig:diff_rot} \end{figure} \subsection{Planets} \label{sec:pl_results} We derived the parameter values and uncertainties from the posterior distribution of the MCMC run. We computed the median and the 68.3\% confidence interval for each of the fitted parameters (see Table~\ref{tab:result_fitted}). We combined the MCMC chains to calculate the values and uncertainties on a series of useful parameters that were not fitted directly (Table~\ref{tab:result_derived}).
Some of these derived parameters were obtained using the stellar mass or radius, and we propagated the uncertainties on these parameters by drawing random values from normal distributions based on their estimated values: $M_\star=0.852\pm0.026\,M_\odot$ and $R_\star=0.871\pm0.006\,R_\odot$ (Sect.~\ref{sec:star}). \begin{table*} \caption{Fitted stellar and planetary parameters of the HD\,3167 system.} \label{tab:result_fitted} \centering \begin{tabular}{lcccr} \toprule \toprule Fitted parameters & Symbols & Values & Priors & Units \\ \midrule \midrule Planet b &&&& \\ \quad Time of inferior conjunction & $T_{0,\,b}$ & ${2\,457\,394.37421}_{-{0.00035}}^{+{0.00036}}$ & - & $\text{BJD}_\text{TDB}$ \\ \quad Orbital period & $P_b$ & ${0.95965428}_{-{0.00000029}}^{+{0.00000030}}$ & - & days \\ \quad Planet-to-star radii ratios &&&& \\ \qquad CHEOPS passband & $k_b^\text{CHEOPS}$ & ${0.01765}\pm{0.00036}$ & - & - \\ \qquad K2 passband & $k_b^\text{K2}$ & ${0.01657}_{-{0.00021}}^{+{0.00022}}$ & - & - \\ \qquad HST/WFC3/G141 passband & $k_b^\text{HST}$ & ${0.01687}_{-{0.00036}}^{+{0.00035}}$ & - & - \\ \qquad Spitzer/IRAC/Ch2 passband & $k_b^\text{Spitzer}$ & ${0.01789}_{-{0.00081}}^{+{0.00076}}$ & - & - \\ \qquad All passbands & $k_b$ & ${0.01712}_{-{0.00059}}^{+{0.00086}}$ & - & - \\ \quad Orbital inclination & $i_b$ & ${87.59}_{-{1.75}}^{+{1.63}}$ & $\mathcal{U}\!\left(0, 90\right)$ & deg \\ \multirow{2}{*}{\quad Eccentricity / argument of periastron} & $e_b\,\cos\omega_b$ & ${-0.015}_{-{0.023}}^{+{0.022}}$ & - & - \\ & $e_b\,\sin\omega_b$ & ${0.030}_{-{0.034}}^{+{0.022}}$ & - & - \\ \quad RV semi-amplitude & $K_b$ & ${3.425}_{-{0.185}}^{+{0.176}}$ & - & m/s \\ \midrule Planet c &&&& \\ \quad Time of inferior conjunction & $T_{0,\,c}$ & ${2\,457\,394.97778}_{-{0.00057}}^{+{0.00056}}$ & - & $\text{BJD}_\text{TDB}$ \\ \quad Orbital period & $P_c$ & ${29.8464948}_{-{0.0000154}}^{+{0.0000157}}$ & - & days \\ \quad Planet-to-star radii ratios &&&& \\ \qquad K2 passband &
$k_c^\text{K2}$ & ${0.03073}_{-{0.00046}}^{+{0.00047}}$ & - & - \\ \qquad HST/WFC3/G141 passband & $k_c^\text{HST}$ & ${0.03176}\pm{0.00028}$ & - & - \\ \qquad Spitzer/IRAC/Ch2 passband & $k_c^\text{Spitzer}$ & ${0.02967}_{-{0.00058}}^{+{0.00055}}$ & - & - \\ \qquad All passbands & $k_c$ & ${0.03075}_{-{0.00112}}^{+{0.00103}}$ & - & - \\ \quad Orbital inclination & $i_c$ & ${89.421}_{-{0.071}}^{+{0.130}}$ & $\mathcal{U}\!\left(0, 90\right)$ & deg \\ \multirow{2}{*}{\quad Eccentricity / argument of periastron} & $e_c\,\cos\omega_c$ & ${-0.007}_{-{0.032}}^{+{0.031}}$ & - & - \\ & $e_c\,\sin\omega_c$ & ${-0.014}_{-{0.051}}^{+{0.055}}$ & - & - \\ \quad RV semi-amplitude & $K_c$ & ${2.461}_{-{0.174}}^{+{0.180}}$ & - & m/s \\ \midrule Planet d &&&& \\ \quad Time of inferior conjunction & $T_{0, d}$ & ${2\,457\,585.20}\pm{0.22}$ & - & $\text{BJD}_\text{TDB}$ \\ \quad Orbital period & $P_d$ & ${8.4783}\pm{0.0025}$ & - & days \\ \multirow{2}{*}{\quad Eccentricity / argument of periastron} & $e_d\,\cos\omega_d$ & ${-0.023}_{-{0.088}}^{+{0.084}}$ & - & - \\ & $e_d\,\sin\omega_d$ & ${0.120}_{-{0.111}}^{+{0.102}}$ & - & - \\ \quad RV semi-amplitude & $K_d$ & ${1.793}_{-{0.167}}^{+{0.165}}$ & - & m/s \\ \midrule Planet e &&&& \\ \quad Time of inferior conjunction & $T_{0, e}$ & ${2\,457\,643.6}_{-{5.3}}^{+{4.0}}$ & - & $\text{BJD}_\text{TDB}$ \\ \quad Orbital period\tablefootmark{$\dagger$} & $P_e$\tablefootmark{$\dagger$} & ${102.09}_{-{0.50}}^{+{0.52}}$\tablefootmark{$\dagger$} & - & days \\ \multirow{2}{*}{\quad Eccentricity / argument of periastron} & $e_e\,\cos\omega_e$ & ${0.012}_{-{0.108}}^{+{0.113}}$ & - & - \\ & $e_e\,\sin\omega_e$ & ${-0.089}_{-{0.185}}^{+{0.219}}$ & - & - \\ \quad RV semi-amplitude & $K_e$ & ${1.536}_{-{0.180}}^{+{0.186}}$ & - & m/s \\ \midrule Star &&&& \\ \quad Stellar density & $\rho_\star$ & ${1.284}_{-{0.047}}^{+{0.046}}$ & $\mathcal{N}\!\left(1.289, 0.048\right)$ & $\rho_\odot$ \\ \quad Limb-darkening coefficients: &&&& \\ \multirow{2}{*}{\qquad 
CHEOPS passband} & $u_1^\text{CHEOPS}$ & ${0.276}_{-{0.187}}^{+{0.255}}$ & - & - \\ & $u_2^\text{CHEOPS}$ & ${0.44}_{-{0.37}}^{+{0.28}}$ & - & - \\ \multirow{2}{*}{\qquad K2 passband} & $u_1^\text{K2}$ & ${0.513}_{-{0.208}}^{+{0.193}}$ & - & - \\ & $u_2^\text{K2}$ & ${0.11}_{-{0.29}}^{+{0.31}}$ & - & - \\ \multirow{2}{*}{\qquad HST/WFC3/G141 passband} & $u_1^\text{HST}$ & ${0.203}_{-{0.084}}^{+{0.080}}$ & - & - \\ & $u_2^\text{HST}$ & ${0.306}_{-{0.118}}^{+{0.120}}$ & - & - \\ \multirow{2}{*}{\qquad Spitzer/IRAC/Ch2 passband} & $u_1^\text{Spitzer}$ & ${0.143}_{-{0.102}}^{+{0.159}}$ & - & - \\ & $u_2^\text{Spitzer}$ & ${0.049}_{-{0.130}}^{+{0.163}}$ & - & - \\ \bottomrule \bottomrule \end{tabular} \tablefoot{ Uniform priors between $a$ and $b$ are represented by $\mathcal{U}\!\left(a, b\right)$. Normal priors with mean $\mu$ and standard deviation $\sigma$ are represented by $\mathcal{N}\!\left(\mu, \sigma\right)$. The limb-darkening coefficients correspond to the quadratic model \citep{1977A&A....61..809M}: $I\!\left(\mu\right)/I_0 = 1 - u_1\left(1-\mu\right)-u_2\left(1-\mu\right)^2$, where $\mu=\sqrt{1-x^2}$ and $x$ is the normalized radial coordinate on the stellar disk ($x=0$ at the center, $x=1$ at the limb).
\tablefoottext{$\dagger$}{Note that the marginalized posterior distribution of $P_e$ has several modes spanning a large range from 79\,days to 125\,days, and that the error bars of~$\sim\!0.5\,\text{days}$ are dominated by the main mode around 102\,days.} } \end{table*} \begin{table*} \caption{Derived planetary parameters of the HD\,3167 system.} \label{tab:result_derived} \centering \begin{tabular}{lccr} \toprule Derived parameters & Symbols & Values & Units \\ \midrule \midrule Planet b &&& \\ \quad Optimal time of inferior conjunction & $T_{0,\,b}^\text{opt}$ & ${2\,458\,269.57891}_{-{0.00024}}^{+{0.00026}}$ & $\text{BJD}_\text{TDB}$ \\ \quad Impact parameter & $b_b$ & ${0.181}_{-{0.123}}^{+{0.141}}$ & $R_\star$ \\ \quad Transit duration & $T_{14,\,b}$ & ${1.6092}_{-{0.0144}}^{+{0.0172}}$ & hours \\ \quad Eccentricity\tablefootmark{$\dagger$} & $e_b$\tablefootmark{$\dagger$} & $<0.10$\tablefootmark{$\dagger$} & - \\ \multirow{2}{*}{\quad Semi-major axis} & $a_b/R_\star$ & ${4.450}_{-{0.055}}^{+{0.053}}$ & - \\ & $a_b$ & ${0.01802}\pm{0.00025}$ & AU \\ \quad Mass & $M_b$ & ${4.73}_{-{0.29}}^{+{0.28}}$ & $M_\oplus$ \\ \quad Radii &&& \\ \qquad CHEOPS passband & $R_b^\text{CHEOPS}$ & ${1.677}\pm{0.036}$ & $R_\oplus$ \\ \qquad K2 passband & $R_b^\text{K2}$ & ${1.575}_{-{0.023}}^{+{0.024}}$ & $R_\oplus$ \\ \qquad HST/WFC3/G141 passband & $R_b^\text{HST}$ & ${1.602}_{-{0.036}}^{+{0.035}}$ & $R_\oplus$ \\ \qquad Spitzer/IRAC/Ch2 passband & $R_b^\text{Spitzer}$ & ${1.700}_{-{0.078}}^{+{0.074}}$ & $R_\oplus$ \\ \qquad All passbands & $R_b$ & ${1.627}_{-{0.058}}^{+{0.083}}$ & $R_\oplus$ \\ \quad Bulk densities &&& \\ \qquad CHEOPS passband & $\rho_b^\text{CHEOPS}$ & ${5.50}_{-{0.49}}^{+{0.52}}$ & g/cm$^3$ \\ \qquad K2 passband & $\rho_b^\text{K2}$ & ${6.64}_{-{0.51}}^{+{0.52}}$ & g/cm$^3$ \\ \qquad HST/WFC3/G141 & $\rho_b^\text{HST}$ & ${6.30}_{-{0.55}}^{+{0.61}}$ & g/cm$^3$ \\ \qquad Spitzer/IRAC/Ch2 passband & $\rho_b^\text{Spitzer}$ & ${5.28}_{-{0.71}}^{+{0.87}}$ & 
g/cm$^3$ \\ \quad Equilibrium temperature\tablefootmark{$\ddagger$} & $T_{\text{eq},\,b}$\tablefootmark{$\ddagger$} & ${1777}\pm{27}$\tablefootmark{$\ddagger$} & K \\ \midrule Planet c &&& \\ \quad Optimal time of inferior conjunction & $T_{0,\,c}^\text{opt}$ & ${2\,458\,439.605096}_{-{0.000147}}^{+{0.000149}}$ & $\text{BJD}_\text{TDB}$ \\ \quad Impact parameter & $b_c$ & ${0.451}_{-{0.120}}^{+{0.078}}$ & $R_\star$ \\ \quad Transit duration & $T_{14,\,c}$ & ${4.869}_{-{0.025}}^{+{0.026}}$ & hours \\ \quad Eccentricity\tablefootmark{$\dagger$} & $e_c$\tablefootmark{$\dagger$} & $<0.15$\tablefootmark{$\dagger$} & - \\ \multirow{2}{*}{\quad Semi-major axis} & $a_c/R_\star$ & ${44.01}_{-{0.54}}^{+{0.52}}$ & - \\ & $a_c$ & ${0.1783}\pm{0.0025}$ & AU \\ \quad Mass & $M_c$ & ${10.67}_{-{0.81}}^{+{0.85}}$ & $M_\oplus$ \\ \quad Radii &&& \\ \qquad K2 passband & $R_c^\text{K2}$ & ${2.919}_{-{0.048}}^{+{0.049}}$ & $R_\oplus$ \\ \qquad HST/WFC3/G141 passband & $R_c^\text{HST}$ & ${3.017}_{-{0.033}}^{+{0.034}}$ & $R_\oplus$ \\ \qquad Spitzer/IRAC/Ch2 passband & $R_c^\text{Spitzer}$ & ${2.819}_{-{0.058}}^{+{0.056}}$ & $R_\oplus$ \\ \qquad All passbands & $R_c$ & ${2.923}_{-{0.109}}^{+{0.098}}$ & $R_\oplus$ \\ \quad Bulk densities &&& \\ \qquad K2 passband & $\rho_c^\text{K2}$ & ${2.35}_{-{0.21}}^{+{0.23}}$ & g/cm$^3$ \\ \qquad HST/WFC3/G141 & $\rho_c^\text{HST}$ & ${2.133}_{-{0.177}}^{+{0.187}}$ & g/cm$^3$ \\ \qquad Spitzer/IRAC/Ch2 passband & $\rho_c^\text{Spitzer}$ & ${2.61}_{-{0.25}}^{+{0.28}}$ & g/cm$^3$ \\ \quad Equilibrium temperature\tablefootmark{$\ddagger$} & $T_{\text{eq},\,c}$\tablefootmark{$\ddagger$} & ${565.0}_{-{8.5}}^{+{8.6}}$\tablefootmark{$\ddagger$} & K \\ \midrule Planet d &&& \\ \quad Optimal time of inferior conjunction & $T_{0,\,d}^\text{opt}$ & ${2\,457\,797.16}\pm{0.21}$ & $\text{BJD}_\text{TDB}$ \\ \quad Eccentricity\tablefootmark{$\dagger$} & $e_d$\tablefootmark{$\dagger$} & $< 0.44$\tablefootmark{$\dagger$} & - \\ \multirow{2}{*}{\quad Semi-major 
axis} & $a_d/R_\star$ & ${19.02}\pm{0.23}$ & - \\ & $a_d$ & ${0.07703}_{-{0.00108}}^{+{0.00106}}$ & AU \\ \quad Minimum mass & $M_d\,\sin i_d$ & ${5.03}\pm{0.50}$ & $M_\oplus$ \\ \quad Equilibrium temperature\tablefootmark{$\ddagger$} & $T_{\text{eq},\,d}$\tablefootmark{$\ddagger$} & ${859.5}_{-{12.9}}^{+{13.0}}$\tablefootmark{$\ddagger$} & K \\ \midrule Planet e &&& \\ \quad Eccentricity\tablefootmark{$\dagger$} & $e_e$\tablefootmark{$\dagger$} & $<0.60$\tablefootmark{$\dagger$} & - \\ \multirow{2}{*}{\quad Semi-major axis} & $a_e/R_\star$ & ${99.93}_{-{1.59}}^{+{1.65}}$ & - \\ & $a_e$ & ${0.4048}_{-{0.0074}}^{+{0.0077}}$ & AU \\ \quad Minimum mass & $M_e\,\sin i_e$ & ${9.74}_{-{1.15}}^{+{1.20}}$ & $M_\oplus$ \\ \quad Equilibrium temperature\tablefootmark{$\ddagger$} & $T_{\text{eq},\,e}$\tablefootmark{$\ddagger$} & ${374.8}_{-{7.3}}^{+{7.1}}$\tablefootmark{$\ddagger$} & K \\ \midrule \bottomrule \end{tabular} \tablefoot{ \tablefoottext{$\dagger$}{Upper limits on the orbital eccentricities are computed with a confidence probability of 99.73\%.} \tablefoottext{$\ddagger$}{Equilibrium temperatures are derived from the equation $T_\text{eq}=T_\text{eff}/\sqrt{2a/R_\star}$, which assumes black-body emission for both the planet and the star, a Bond albedo $A_B=0$, and perfect heat redistribution in the planetary atmosphere (uniform temperature).} } \end{table*} All the fitted and derived parameter values are consistent with those reported by \cite{Christiansen2017} and \cite{gandolfi_hd3167}. The inclusion of the CHEOPS, HST and Spitzer data sets allows us to significantly improve the precision on the orbital periods of planets b, c, and d by factors of $\sim40$, $>50$ and $\sim17$, respectively.
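As a cross-check, the equilibrium temperatures listed in Table~\ref{tab:result_derived} follow directly from the formula quoted in the table notes, $T_\text{eq}=T_\text{eff}/\sqrt{2a/R_\star}$, using $T_\text{eff}=5300$\,K and the scaled semi-major axes $a/R_\star$ from the same table:

```python
import math

def t_eq(t_eff, a_over_rstar):
    """Equilibrium temperature for zero Bond albedo and uniform heat
    redistribution: T_eq = T_eff / sqrt(2 a / R_star)."""
    return t_eff / math.sqrt(2.0 * a_over_rstar)

teq_b = t_eq(5300.0, 4.450)    # planet b (a/R* = 4.450) -> ~1777 K
teq_c = t_eq(5300.0, 44.01)    # planet c (a/R* = 44.01) -> ~565 K
```

Both values reproduce the tabulated equilibrium temperatures to within a few kelvin.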
We obtain a better precision on the HD\,3167\,b and c planet-to-star radii ratios in the K2 passband than the analyses of \cite{Christiansen2017} and \cite{gandolfi_hd3167}, and we improve the absolute planetary sizes by more than a factor of two thanks to the smaller uncertainty on the stellar radius. We also reduce the errors on the absolute and minimum masses of b, c and d thanks to the improved RV reduction and additional data points. These improvements on the planet masses and radii lead to an overall reduction of the uncertainty on the bulk densities of HD\,3167\,b and c by more than a factor of three (see Fig.~\ref{fig:mass-radius}). \begin{figure} \centering \includegraphics[width=\hsize, trim={0.2cm 0.4cm 0.2cm 0.2cm}, clip]{mass-radius.pdf} \caption{Mass-radius diagram. The grey points represent all the exoplanets from the Extrasolar Planets Encyclopaedia (\protect\url{http://www.exoplanet.eu/}) as of 1\,March\,2022, with masses and radii known with a precision better than 30\%. HD\,3167\,b and c are represented in the different instrument passbands analyzed in this work (CHEOPS, K2, HST/WFC3/G141 and Spitzer/IRAC/Ch2). The black dash-dotted lines indicate two iso-density profiles matching planets b and c. The colored solid and dashed lines are indicative chemical compositions as computed by \cite{zeng_2019}.} \label{fig:mass-radius} \end{figure} We detect the fourth planet HD\,3167\,e with a semi-amplitude significance $>8\sigma$ and a minimum mass of ${9.73}_{-{1.15}}^{+{1.17}} M_\oplus$. This planet was hinted at in the RV data previously available to \cite{Dalal2019}, as a $0.03M_J$ outer companion with an orbital period of 78\,days that could explain the peculiar orbital architecture of the system. The order of magnitude of both the mass and the period we derive matches their original estimates well. We note that the orbital period of this new planet has a very peculiar marginalized posterior distribution.
Indeed, the uncertainty of about 0.5\,days listed in Table~\ref{tab:result_fitted} is dominated by the main mode of the distribution. However, the MCMC solution also explores other, less likely orbital periods that nevertheless span a large range from 79\,days to 125\,days (see Fig.~\ref{fig:corner_plot_hd3167e}). We explain this distribution, and its invariance with respect to the time of inferior conjunction $T_{0,\,e}$, by the fact that the data are very unevenly sampled. Most of the data points (94\%) were taken during the first year of observation (over about 200 days) and the remaining 6\% are HARPS-N data spread over four years. Therefore, $T_{0,\,e}$ is strongly constrained by the bulk of the first-year data, which span less than two periods of planet e. The multi-modal distribution of $P_e$ reflects the uneven sampling of the RV time series by highlighting the periods that best match the scattered data. The other orbital parameters of HD\,3167\,e are well defined and not correlated with $P_e$. \begin{figure} \centering \includegraphics[width=\hsize, trim={0.1cm 0.2cm 0.25cm 0.1cm}, clip]{corner_plot_hd3167e.pdf} \caption{Corner plot of the posterior distribution of the planetary parameters of HD\,3167\,e. The orbital period shows a very peculiar distribution that is not significantly correlated with any other parameter.} \label{fig:corner_plot_hd3167e} \end{figure} The orbits of the four planets are fully consistent with circular configurations, with upper limits (99.73\% confidence) on their eccentricities of $e_b<0.11$, $e_c<0.15$, $e_d<0.45$ and $e_e<0.61$ for planets b, c, d and e, respectively. We provide average transit depths and planetary radii for planets b and c, obtained from the merged distributions over the four available instrumental passbands (CHEOPS, K2, HST/WFC3/G141, Spitzer/IRAC/Ch2). To further characterize the system, we allowed the radii of the two transiting planets to vary independently in those passbands.
We measure consistent radii for the inner planet b, as expected for a USP planet unable to retain any volatile atmosphere. However, we note a significant difference ($>3.5\sigma$) between the radius obtained for planet c in the HST/WFC3/G141 ($\lambda \sim 1.4\,\mu$m) and the Spitzer/IRAC/Ch2 ($\lambda \sim 4.5\,\mu$m) passbands (see Fig.~\ref{fig:planet_radii}). This difference could arise from broadband variations in the optical depth of the planet's atmosphere linked to its chemical composition and physical structure (\citealt{Guilluy2021,mikal-evans_hd3167c}). \begin{figure} \centering \includegraphics[width=\hsize, trim={2.5cm 0.2cm 2.2cm 0.5cm}, clip]{planet_radii.pdf} \caption{Posterior distributions of the planet-to-star radii ratios of planets b and c measured in the four instrument passbands.} \label{fig:planet_radii} \end{figure} \subsection{Stellar companions} \label{sec:compa} In order to check for stellar companions lying within the environment of HD\,3167, high angular resolution optical speckle interferometric imaging was performed. HD\,3167 was observed on 2021 June 28 UT using the ‘Alopeke speckle instrument on Gemini North (\citealt{Scott2021}). ‘Alopeke provides simultaneous speckle imaging in two narrow bands (562\,nm and 832\,nm), with output data products including a reconstructed image and robust contrast limits on companion detections (\citealt{Howell2011,Howell2016}). The night had clear skies and good seeing ($<$1.0 arcsec) during the observations. As shown in Figure~\ref{fig:imaging}, we detect no stellar companions brighter than two delta-magnitudes within 0.1” and no companions brighter than 5 to 8.5 magnitudes within the angular separation limits of 0.1” to 1.2”. Using a distance of d = 47\,pc for HD\,3167, these angular and luminosity limits on stellar companions correspond to main sequence stellar types of K6V (at 0.94 au) and M2.5V to M4.5V between 4.7 and 56.4 au.
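The conversion from angular separation to projected physical separation used above is the small-angle relation (an angle of 1\,arcsec at 1\,pc subtends 1\,au), so the separation in au is simply the angle in arcsec times the distance in pc:

```python
def projected_sep_au(sep_arcsec, distance_pc):
    """Small-angle conversion: projected separation [au] =
    angular separation [arcsec] * distance [pc]."""
    return sep_arcsec * distance_pc

# d = 47 pc for HD 3167; speckle limits at 0.1" and 1.2".
inner = projected_sep_au(0.1, 47.0)   # -> 4.7 au
outer = projected_sep_au(1.2, 47.0)   # -> 56.4 au
```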
\begin{figure} \centering \includegraphics[width=\hsize, trim={0cm 0cm 0cm 0cm}, clip]{HD3167_20210628_562_832_final.pdf} \caption{5-$\sigma$ contrast curves of HD\,3167 from Gemini North/‘Alopeke at 562\,nm (blue) and 832\,nm (red). The inset shows our reconstructed 832\,nm speckle image.} \label{fig:imaging} \end{figure} \section{Planetary internal structures} \label{sec:int_struct} Using the derived stellar and planetary properties (in particular the masses and average transit depths reported in Table~\ref{tab:result_derived}), we computed the internal structure of both transiting planets using a Bayesian analysis, following the method described in \citet{Leleu2021}. We recall here the two main elements of this method: the assumed priors and the forward model. Our forward model computes the radius of the planets as a function of hidden parameters: the mass of the solid Fe/S core, the fraction of Fe in the core, the mass of the silicate mantle and its composition (Si, Mg and Fe molar ratios), the mass of the water layer, the mass of the gas envelope (composed in this model of pure H/He), the equilibrium temperature of the planet, and the age (assumed to be the same as the age of the star). We assume in our model that the Si/Mg/Fe ratios in the bulk planet are the same as in the star (\citealt{Dorn2015}; \citealt{Thiabaud2015}). Note that recently, \citet{Adibekyan2021} have shown that these ratios are indeed correlated but may not follow a 1-to-1 correlation. Including this in the model is the subject of future work. Regarding the priors, the core, mantle and water mass fractions (relative to the non-gas part) follow a uniform prior (subject to the constraint that they add up to one), whereas the mass fraction of the H/He layer follows a prior which is uniform in log. We finally note that the gaseous (H/He) part of the planet does not influence, in our model, the `non-gas' part of the planet (core, mantle and water layer).
This means that the innermost layers of the planet are not modified by the potential compression and thermal insulation effects of the gas envelope. Fig.~\ref{fig:corner_IS} shows the resulting internal structure of both planets presented as corner plots and summarized in Table~\ref{tab:intstructure}. Planet c hosts a substantial gaseous envelope, weighing a little less than 0.2\,$M_\oplus$, whereas its fraction of water is unconstrained. We emphasize that this result depends on the assumed priors. In particular, the resulting planetary model would be more gas-rich and less water-rich if the H/He layer followed a uniform prior. One of our main findings is that the mass and radius of planet b seem to be inconsistent with a pure iron-core and silicate-mantle structure whose composition would reflect the Fe/Si/Mg ratios in the star. Indeed, the density of the planet is smaller than what would be expected for such a model. Since our model assumes the inner layers of the planet to be unaffected by the influence of the gas envelope, for planet b we underestimate the temperature of the ‘non-gas’ part of the planet. If we increase the temperature of the core and mantle layers in our model, we do observe an increase in radius of up to 2\% for pure iron-core and silicate-mantle structures matching the Fe/Si/Mg ratios of the star. However, this effect alone is not enough to explain the observed radius of the planet. We hence conclude that a light element must be present in the planet. Our fit converges toward a negligible mass fraction of gas, which is expected considering that the intense irradiation of this USP planet would lead a H/He atmosphere to be lost extremely fast. However, the mass fraction of water for our model of HD\,3167\,b is quite well constrained and nonzero.
With an equilibrium temperature in excess of 1600\,K (Table~\ref{tab:result_derived}), any water layer would be made of steam, which has been shown to be much more resilient to atmospheric loss (e.g., \citealt{Lopez2012}). It should be kept in mind that our model assumes a fully differentiated planet. It is possible that water is mixed with a magma ocean covering HD\,3167b, in which case its actual mass fraction of water would be reduced compared to the one we derive (see \cite{Dorn2021}). More detailed internal structure models accounting for this mixing, and for the existence of a dust- and metal-rich envelope, are required to better constrain the true nature of this planet. \begin{center} \begin{table} \centering \caption{Interior structure properties of planets b and c. The errors correspond to the 5\% and 95\% percentiles.} \label{tab:intstructure} \begin{tabularx}{0.67\columnwidth}{ l l X } \toprule \textbf{Property (unit)} & \multicolumn{2}{c}{\textbf{Values}} \\ \hline & \textbf{HD3167b} &\textbf{HD3167c} \\ M\textsubscript{\textit{core}}/M\textsubscript{\textit{total}} & \mbox{$0.17^{+0.12}_{-0.14}$} & \mbox{$0.14^{+0.13}_{-0.12}$} \\ M\textsubscript{\textit{water}}/M\textsubscript{\textit{total}} & \mbox{$0.12^{+0.17}_{-0.10}$} & \mbox{$0.25^{+0.22}_{-0.22}$} \\ log(M\textsubscript{\textit{gas}}) & \mbox{$-9.38^{+2.47}_{-2.36}$} & \mbox{$-0.81^{+0.27}_{-0.39}$} \\ Fe\textsubscript{\textit{core}} & \mbox{$0.90^{+0.09}_{-0.08}$} & \mbox{$0.90^{+0.09}_{-0.08}$} \\ Si\textsubscript{\textit{mantle}} & \mbox{$0.39^{+0.08}_{-0.05}$} & \mbox{$0.39^{+0.08}_{-0.05}$} \\ Mg\textsubscript{\textit{mantle}} & \mbox{$0.46^{+0.11}_{-0.10}$} & \mbox{$0.46^{+0.11}_{-0.10}$} \\ \bottomrule \end{tabularx} \end{table} \end{center} \begin{figure*} \subfloat{{\includegraphics[width = 8.5cm]{corner_small_b.png}}} \qquad \subfloat{{\includegraphics[width = 8.5cm]{corner_small_c.png} }} \caption{Corner plot showing the results on the interior composition models of HD3167b (left) and
HD3167c (right). The vertical dashed lines and the 'error bars' given at the top of each column represent the 5\% and 95\% percentiles. The internal structure parameters are the mass fraction of the core and the water layer with respect to the solid planet, the molar fraction of Si and Mg in the mantle, the molar fraction of Fe in the inner core, and the logarithm of the gas mass in Earth masses.} \label{fig:corner_IS} \end{figure*} \section{Dynamical evolution} \label{sec:dyn_evol} With planet b aligned with its host star (\citealt{Bourrier2021_3167}) and the distant planet c on a polar transit (\citealt{Dalal2019}), the HD\,3167 system is particularly rich and interesting for dynamical studies. Its planets have wide orbital separations and are far from mean motion resonances, so that their dynamics is fully secular. Moreover, the age of the system suggests that its orbital configuration is dynamically stable. The dynamical analysis by \citet{Dalal2019} suggested that planet b could have stayed aligned with the equatorial plane of the star, which has since been confirmed by \citet{Bourrier2021_3167}. They further showed that planets c and d have a low mutual inclination, and that the three planets known at that time could not, by themselves, have caused the polar orbit of planet c. \citet{Dalal2019} thus proposed that the orbits of planets c and d could have been tilted with respect to the star due to a massive outer companion, whose existence we have confirmed in the present study (Sect.~\ref{sec:pl_results}). It is thus natural to investigate whether this planet e could indeed explain the polar orbit of planet c. To gain some insight into the dynamics of the system, we consider an analytical framework describing the precession of the orbits \citep{Boue2006}. Following \cite{Boue2014}, we compute the characteristic frequencies $\nu^{k/j}$ that represent the relative influence of body $k$ on the direction of the angular momentum of body $j$.
We refer to \citet[][Sec. 5.2.]{Dalal2019} for a more precise description of the model used here. To summarize, if $\nu^{k/j}\ll\nu^{j/k}$, the angular momentum direction of $j$ is almost constant while the angular momentum of $k$ precesses around. \begin{table} \centering \caption{Characteristic precession frequencies for different interactions in the system for the current configuration as well as during the system formation. The typical relative uncertainty is~10\%. \label{tab:frequencies}} \begin{tabularx}{0.8\columnwidth}{l c c} \toprule & {\bf Old star} & {\bf Young star}\\ \midrule $P_S$ & 18 d & 3 d\\ $k_2$ & 0.018 & 0.18\\ \midrule $\nu^{b/S}\ {(\rm rad.yr^{-1})}$ & $3.94 \times 10^{-5}$ & $1.26 \times 10^{-2}$\\ $\nu^{S/b}\ {(\rm rad.yr^{-1})}$ & $2.28 \times 10^{-6}$ & $1.29 \times 10^{-4}$\\ \midrule $\nu^{b/d}\ {(\rm rad.yr^{-1})}$ & \multicolumn{2}{c}{$1.04 \times 10^{-4}$ } \\ $\nu^{d/b}\ {(\rm rad.yr^{-1})}$ & \multicolumn{2}{c}{$2.33 \times 10^{-4}$ } \\ $\nu^{d/c}\ {(\rm rad.yr^{-1})}$ & \multicolumn{2}{c}{$1.45 \times 10^{-4}$ } \\ $\nu^{c/d}\ {(\rm rad.yr^{-1})}$ & \multicolumn{2}{c}{$4.51 \times 10^{-4}$ } \\ $\nu^{e/c}\ {(\rm rad.yr^{-1})}$ & \multicolumn{2}{c}{$1.30 \times 10^{-4}$ } \\ $\nu^{c/e}\ {(\rm rad.yr^{-1})}$ & \multicolumn{2}{c}{$9.26 \times 10^{-5}$ } \\ \bottomrule \end{tabularx} \end{table} We study the dynamics of the system at two different epochs. First, in its current configuration, with a stellar rotation period of about 17 d. Second, right after the system's formation, when the star was rotating fast and its spin had a stronger influence onto planet b. The different characteristic frequencies\footnote{A similar analysis was performed by \cite{Dalal2019} but we found a typo in the code computing the frequencies. 
The main conclusions of \cite{Dalal2019} are unchanged but the planet-planet interactions were underestimated, which means that planet b is not as strongly coupled to the star as stated in that paper.} in these two configurations are summarized in Table~\ref{tab:frequencies}. While the precession frequencies ruling the planet interactions are the same in both settings, there is a significant change in the interaction between the star's spin and the planets, as can be seen from the frequencies \(\nu^{b/S}\) and \(\nu^{S/b}\). The change is due not only to the shorter rotation period of the star early on, but also to the fact that the second fluid Love number $k_2$ can be significantly larger for a fast-rotating star \citep{Becker2020}. We adopt a value of $k_2=0.18$ for the fast-rotating star, an order of magnitude larger than the expected value for HD 3167 today. \subsection{System stability} In the present system configuration we have \(\nu^{b/d},\nu^{d/b}\gg\nu^{S/b}\gg\nu^{b/S}\), which indicates that planet b's orbit precesses around the angular momentum of the outer planets and that the star plays a negligible role. Moreover, the stellar spin is dynamically unaffected by the planets. We confirm this hypothesis by running an N-body simulation using the integrator \texttt{WHFast} \citep{Rein2015a} from the library \texttt{Rebound} \citep{Rein2012a}. We include relativistic corrections as well as the influence of the stellar $J_2$ using the library \texttt{Reboundx} \citep{Tamayo2019}. The stellar spin is fixed along the $z$-axis. For this particular simulation, the initial orbits are assumed circular, and the planets c, d and e are assumed coplanar. We use initial conditions compatible with the 3D configuration determined by \cite{Bourrier2021_3167}: \(i_b=30^\circ\), \(\Omega_b=100^\circ\) and \(i_{d,c,e}=100^\circ,\Omega_{d,c,e}=0^\circ\). As a result, the mutual inclination between planet b and the rest of the system is $i_{bc}=103^\circ$.
This approximate initialization is sufficient because we only want to illustrate the typical dynamics at play. In reality, there is likely a nonzero mutual inclination between planets d, c, and e since planets d and e are not transiting. We integrate the system over $100\ {\rm kyr}$ and plot on Figure~\ref{fig:bprec} the planetary inclinations with respect to the star, as well as the mutual inclination between planets b and c. During this short integration, we observe no evolution of the eccentricities. We see that the mutual inclination between b and c, as well as the inclinations of the outer planets, are almost constant while planet b's orbital plane precesses around the orbital plane of the outer planets. We conclude that, while planet b is not strongly coupled with the star today, a primordial misalignment of the outer planets with respect to the star and planet b leads to a stable configuration, compatible with the observations. However, early in the history of the system, we have \(\nu^{S/b}\gg\nu^{b/d},\nu^{d/b},\nu^{b/S}\), which means that planet b could have been coupled with the star instead of the outer planets. If planets c and d gained their large obliquities early on, planet b would have stayed close to the stellar equator. \begin{figure} \includegraphics[width=0.9\columnwidth]{currentconf.pdf} \caption{Inclination evolution in the current system configuration, assuming planets c, d, and e are close to coplanar.\label{fig:bprec}} \end{figure} \subsection{Spin-orbit misalignment by planet e} We now investigate whether planet e could be the cause of the polar orbits of planets c and d. The characteristic frequency \(\nu^{e/c}\) is not negligible, which suggests that planet e could tilt planets c and d if it was originally inclined with respect to the star.
\cite{Boue2014} have determined that an external companion can tilt a planetary system as a whole if the coefficient \begin{equation} \beta_{{\rm KL,dc}} = \frac{m_{\rm comp}}{m_{\rm d}}\left(\frac{a_{\rm c}}{a_{\rm d}}\right)^2\left(\frac{a_{\rm c}}{b_{\rm comp}}\right)^3\ll1, \label{eq:KLcondition} \end{equation} where \(b_{\rm comp}=a_{\rm comp}\sqrt{1-e_{\rm comp}^2}\). However, planet e is too close to c to tilt the system without triggering Kozai-Lidov oscillations for planets d and c \citep{Kozai1962,Lidov1962}. Indeed, we have $\beta_{\rm KL,dc}=0.88\pm0.13$. As a result, a large mutual inclination of planet e with respect to c excites the orbital eccentricities, eventually leading to the destruction of the system. We run a numerical simulation starting with the inner planets b, d, and c on coplanar, circular orbits within the stellar equatorial plane and planet e on an orbit tilted by $80^\circ$. The simulation lasts 200 kyr and we plot on Figure \ref{fig:KL} the eccentricities and inclinations as a function of time. As expected, planets c, d, and e enter Kozai-Lidov oscillations and the eccentricities grow to values close to 0.5, which is excluded by the observations (at 2-$\sigma$, $e<$0.3 for planets d and e and $e<$0.08 for c) and is enough to trigger the dynamical instability of the system. Moreover, in that scenario planet b would remain in the plane of planets d and c, which confirms that the outer system had to get misaligned early on, when the coupling between planet b and the star was stronger. Planet e thus cannot explain the polar orbit of planet c.
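The rigidity coefficient of Equation~(\ref{eq:KLcondition}) is straightforward to evaluate. The sketch below uses illustrative round numbers of the order of the HD\,3167 values (masses in Earth masses, separations in au; these are assumptions, not the paper's exact inputs, which give $\beta_{\rm KL}=0.88\pm0.13$):

```python
import math

def beta_kl(m_comp, m_in, a_in, a_out, a_comp, e_comp=0.0):
    """Rigidity coefficient of Eq. (KLcondition):
    beta = (m_comp/m_in) (a_out/a_in)^2 (a_out/b_comp)^3,
    with b_comp = a_comp * sqrt(1 - e_comp^2). beta << 1: the companion can
    tilt the inner pair as a whole; beta ~ 1: Kozai-Lidov forcing is triggered."""
    b_comp = a_comp * math.sqrt(1.0 - e_comp**2)
    return (m_comp / m_in) * (a_out / a_in) ** 2 * (a_out / b_comp) ** 3

# Planet e acting on the d-c pair, with hypothetical round-number inputs:
beta_dc = beta_kl(m_comp=10.0, m_in=6.0, a_in=0.076, a_out=0.18, a_comp=0.40)

# A distant companion at 5 au easily satisfies beta << 1 instead:
beta_far = beta_kl(10.0, 6.0, 0.076, 0.18, 5.0)
```

With these illustrative inputs `beta_dc` is of order unity, consistent with the conclusion that planet e triggers Kozai-Lidov oscillations rather than rigidly tilting the d-c pair.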
\begin{figure} \includegraphics[width=\linewidth]{inclined_e_KL_old.pdf} \caption{Evolution of the inclinations and eccentricities assuming the current star properties and a planet e initially misaligned with the system.\label{fig:KL}} \end{figure} \subsection{Spin-orbit misalignment by an outer companion.} \begin{figure} \centering \includegraphics[width=\linewidth]{usefulcomp.pdf} \caption{Range of masses and semi-minor axes that allow a companion to tilt the outer planets of the system while preserving their mutual inclinations. The system is not destroyed for companions below the line $\beta_{{\rm KL,ce}}=1$ and tilting is possible for a frequency ratio larger than 1. The companion has a significant influence on the system if its angular momentum is larger than the system's angular momentum. Otherwise, the companion's orbit precesses around the system angular momentum and does not induce a large obliquity for the system. The green region is the 5-$\sigma$ limit set by direct imaging constraints. The purple and blue curves are the 5-$\sigma$ limits set by RVs, assuming a circular orbit and inclinations of 90 and 35$^{\circ}$.} \label{fig:usefulcomp} \end{figure} While a misaligned planet e leads to the instability of the system, we explore whether a more distant companion could explain the present configuration. Such a hypothetical companion should lie in a range of masses and semi-major axes that verify the Kozai-Lidov condition \begin{equation} \beta_{{\rm KL,ce}} = \frac{m_{\rm comp}}{m_{\rm c}}\left(\frac{a_{\rm c}}{a_{\rm e}}\right)^2\left(\frac{a_{\rm e}}{b_{\rm comp}}\right)^3\ll1, \label{eq:KLcondition-ce} \end{equation} while being able to tilt the system as a whole.
We plot on Figure \ref{fig:usefulcomp} the precession frequency ratio $\nu^{\rm co./pl.}/\nu^{\rm S/pl.}$, which represents the relative influence of a companion on the outer planets with respect to the influence of the star on the system, as a function of the semi-minor axis and the mass of a potential companion. We also plot the levels of $\beta_{{\rm KL,ce}}=1$ and $\beta_{{\rm KL,ce}}=0.1$. A companion can tilt the outer planets if condition \eqref{eq:KLcondition-ce} is met and the precession frequencies verify $\nu^{\rm co./pl.} \gg \nu^{\rm S/pl.}$. Additionally, the companion has a significant influence on the inclination of the system if its angular momentum is larger than the system's angular momentum. Otherwise, the system orbital plane remains unchanged and the companion orbital plane precesses around it. We plot this theoretical constraint in Fig.~\ref{fig:usefulcomp}. We further included in Fig.~\ref{fig:usefulcomp} the constraints derived from our direct imaging measurements (Sect.~\ref{sec:compa}). We converted luminosity differences with the K0V-type star HD\,3167 into spectral types for various separations, and then used Table 5 from \citet{Pecaut2013} to assign mean masses to these spectral types. We also assumed circular orbits to estimate the semi-minor axes (which is the most conservative assumption). While the masses adopted for a given dwarf subtype are tentative, this provides an approximate upper limit on the companion mass. The direct imaging constraints impose that any such companion orbits within $\sim$30\,au from the star and cannot be more massive than about 0.1 solar mass. Finally, we included in Fig.~\ref{fig:usefulcomp} the stringent constraints from our RV dataset. The constraints change little with the unknown orbital inclination of the companion unless it is seen nearly pole-on. The RV constraints rule out most, if not all, of the configurations where an outer companion could lead to a significant misalignment of the system.
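The angular-momentum criterion used in this comparison can be sketched as follows; the planet masses, semi-major axes, and stellar mass below are illustrative round numbers of the order of the HD\,3167 values, not the measured parameters:

```python
import math

MSUN_IN_MEARTH = 333000.0  # approximate conversion; only ratios matter here

def orbital_L(m_planet_me, a_au, mstar_msun=0.85, e=0.0):
    """Circular-orbit angular momentum in arbitrary consistent units:
    L ~ m * sqrt(M_star * a * (1 - e^2)); constant prefactors cancel in ratios."""
    return m_planet_me * math.sqrt(mstar_msun * a_au * (1.0 - e**2))

# Hypothetical (mass [M_Earth], a [au]) pairs of the order of the system values:
planets = [(5.0, 0.018), (6.0, 0.076), (10.0, 0.18), (10.0, 0.40)]
L_system = sum(orbital_L(m, a) for m, a in planets)

def companion_dominates(m_comp_me, a_comp_au, e_comp=0.0):
    """True if the companion's orbital angular momentum exceeds the planetary
    system's, i.e. it can reorient the planets' common plane rather than
    merely precess around it."""
    return orbital_L(m_comp_me, a_comp_au, e=e_comp) > L_system
```

For instance, a $\sim$0.1 solar-mass body at a few tens of au carries orders of magnitude more angular momentum than the four planets combined, while an Earth-mass companion at 1 au does not.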
We conclude that the polar orbits of the outer planets are most likely due to an early misalignment during the system formation that did not rely on a companion still present in the system. Mechanisms that do not rely on a binary companion or secular interactions have been proposed, such as magnetic coupling between the young star and the disk \citep{Lai2011,Foucart2011,Romanova2021}. \begin{figure*} \includegraphics[width=\linewidth]{HD3167_final_parhist_planets_r0.png} \caption{Posterior distributions of the initial atmospheric mass fractions for planets HD\,3167 b and c derived by \texttt{PASTA}. The light-blue line represents the distribution of the estimated present-day atmospheric mass fraction. The orange horizontal lines indicate the uninformative prior distributions. \label{fig:fatm}} \end{figure*} \section{Atmospheric evolution} \label{sec:atm_evol} To constrain the atmospheric evolution of the planets in the HD\,3167 system and the stellar rotation history of the host star, we employed the tool \texttt{P}lanetary \texttt{A}tmospheres and \texttt{S}tellar Ro\texttt{T}ation R\texttt{A}tes \citep[\texttt{PASTA};][]{bonfanti21b}. \texttt{PASTA} uses the measured system parameters and the present-day atmospheric mass fractions determined by the internal structure modeling (see Section \ref{sec:int_struct}) to return posterior probability distributions for the initial atmospheric mass fraction of each planet, further constraining the history of the stellar rotation rate. Because an estimate of the present-day atmospheric mass fraction, or at least a radius measurement, is needed to constrain the evolution of a planetary atmosphere, this tool can only be employed in its full capability on the two transiting planets HD\,3167 b and c. Since the present-day stellar rotation rate is not well defined, we employed a uniform prior ranging between 15 and 20 days.
\subsection{Transiting planets HD\,3167 b and c} Planet HD\,3167 b orbits very close to its host star, resulting in it having been subject to large amounts of X-ray and extreme ultraviolet (XUV) irradiation, particularly in the early phases of its evolution. \texttt{PASTA} predicts that this planet has lost its primary H/He-dominated atmosphere at some point in the past, and thus the code is unable to constrain the initial atmospheric mass fraction, resulting in a uniform posterior distribution (left panel of Figure \ref{fig:fatm}). For planet HD\,3167 c, \texttt{PASTA} prefers evolutionary tracks for which atmospheric mass loss did not play a significant role. This is represented by the posterior distribution of the initial atmospheric mass fraction that peaks around the present-day value (right panel of Figure \ref{fig:fatm}). However, the results also indicate that evolutionary tracks characterized by significant mass loss, though less likely, are not completely excluded. If the host star was a particularly fast rotator when it was young, it would indeed have emitted a significant amount of XUV radiation \citep{Sanz-Forcada2011}. Therefore, we explored this possibility more thoroughly. Unfortunately, the characteristics of HD\,3167 c, which is the only transiting planet in the system still holding its primordial H/He-dominated atmosphere, do not enable \texttt{PASTA} to constrain the rotation history of the star. This is represented by a rather flat posterior distribution of the stellar rotation rate after 150 Myr that we use as a proxy to illustrate the evolution of the stellar rotation rate (Figure \ref{fig:pRotIni}). The rotation rate distribution of stars with a mass comparable to that of HD\,3167 and members of open clusters with ages of about 150 Myr is bimodal \citep[e.g.,][]{Johnstone2015c}: one peak represents fast rotators at a rotation rate close to one day, while the other peak represents moderate rotators at roughly six days.
The observed distribution is shown in Figure \ref{fig:pRotIni}. We have performed additional runs with \texttt{PASTA}, imposing priors on the stellar rotation rate at 150 Myr corresponding to Gaussian fits to each of the two peaks of the distribution. Assuming the host star was a fast rotator when it was young, the relative occurrence of evolutionary tracks presenting some significant atmospheric loss increases compared to runs where the stellar rotation history is left unconstrained. However, we still obtain a posterior distribution of the initial atmospheric mass fraction which peaks close to the present-day atmospheric mass fraction, indicating that atmospheric loss most likely has not played a major role in the evolution of this planet, independently of the evolutionary history of the stellar rotation rate. \begin{figure} \centering \includegraphics[width=\hsize]{HD3167_final_pRotLin.png} \caption{Posterior distribution (blue line) of the stellar rotation rate of HD\,3167 after 150 Myr derived by \texttt{PASTA}. The purple area represents the highest posterior density (HPD) interval of the distribution. The black line represents the distribution of the stellar rotation rate of young open cluster stars with mass comparable to that of HD\,3167 based on the collection of data provided by \citet{Johnstone2015c}.} \label{fig:pRotIni} \end{figure} \subsection{Non-transiting planets HD\,3167 d and e} Since HD\,3167 d and e are not transiting, it is not possible to estimate their current atmospheric mass fraction, and thus it is not possible to use \texttt{PASTA} to infer the initial atmospheric mass fraction.
To estimate the current atmospheric content of both planets, we start by assuming that the planets accreted a primordial H/He-dominated atmosphere of \citep{mordasini2020} \begin{equation} \frac{M_{\rm env,0}}{M_{\oplus}} = 0.024 \left( \frac{M_{\rm c}}{M_{\oplus}} \right)^{2.23} \left( \frac{a}{1\,{\rm AU}} \right)^{0.72}\,, \label{eq:mordasini} \end{equation} where $M_{\rm env,0}$ is the initial envelope mass, $M_{\rm c}$ the core mass, and $a$ the planetary orbital separation. We used \texttt{PASTA} to compute the atmospheric evolution of HD\,3167 d and e starting with the atmospheric mass fraction given by Equation~(\ref{eq:mordasini}). The simulations also require an estimate of the evolution of the rotation rate of the host star. Since \texttt{PASTA} has been unable to constrain the stellar rotation history using planets b and c, we further assumed a value of the rotation rate of the host star at 150 Myr of 5.44 days. This value corresponds to the mean of the distribution of stellar rotation rates of young open cluster stars with mass comparable to HD\,3167 (Figure \ref{fig:pRotIni}). As HD\,3167 d orbits quite close to its host star and has a low mutual inclination with the transiting HD\,3167 c (Sect.~\ref{sec:dyn_evol}), we assumed the measured lower mass limit $M_{\rm d}\sin i$ to be a good approximation of the core mass. Through Equation (\ref{eq:mordasini}), we then estimate an initial atmospheric mass fraction of 0.029. \texttt{PASTA}'s evolution simulation predicts that this planet has lost all of its primordial H/He-dominated envelope via photo-evaporation. The atmospheres of planets b and d therefore seem to have had very similar evolutionary paths. HD\,3167 e, on the contrary, orbits significantly further away than HD\,3167 c, for which we already determined that hydrodynamic mass loss was only important had the star been a very fast rotator.
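Equation~(\ref{eq:mordasini}) and the derived atmospheric mass fraction can be sketched numerically. In the snippet below the core mass and semi-major axis are illustrative, not the measured values, and defining the mass fraction as $M_{\rm env,0}/(M_{\rm c}+M_{\rm env,0})$ is an assumption of this sketch:

```python
def initial_envelope_mass(m_core_me, a_au):
    """Primordial H/He envelope mass in Earth masses, Eq. (mordasini):
    M_env,0 = 0.024 * (M_c / M_Earth)^2.23 * (a / 1 au)^0.72."""
    return 0.024 * m_core_me**2.23 * a_au**0.72

def initial_atm_fraction(m_core_me, a_au):
    """Initial atmospheric mass fraction; the definition
    M_env,0 / (M_core + M_env,0) is an assumption of this sketch."""
    m_env = initial_envelope_mass(m_core_me, a_au)
    return m_env / (m_core_me + m_env)

# Illustrative super-Earth at an HD 3167 d-like separation (assumed values):
f_d_like = initial_atm_fraction(5.0, 0.076)  # a few percent
```

The relation grows steeply with core mass and distance, so a heavier planet further out (e-like) starts with a markedly larger envelope fraction than a close-in super-Earth.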
Therefore, we assumed the measured lower mass limit $M_{\rm e}\sin i$ to be a good approximation of the initial total mass of the planet after the dispersion of the protoplanetary nebula. With these assumptions, Equation (\ref{eq:mordasini}) leads to an initial atmospheric mass fraction of 0.147. Using the estimates of the initial total mass and atmospheric mass fraction, the core and envelope masses after the dispersion of the nebula can be estimated. Assuming an Earth-like density for the core, this results in an estimate of the average density of the planet at the beginning of its atmospheric evolution. This average density then changes due to atmospheric loss as the evolution progresses. As expected, \texttt{PASTA}'s evolution simulation predicts no significant mass loss for this planet, resulting in a Saturn-like density of about 0.67 g/cm$^3$. This value is, however, heavily dependent on Equation (\ref{eq:mordasini}), as it estimates the initial atmospheric mass fraction and therefore the initial core and envelope masses. \subsection{Comparison with previous studies} \citet{Kubyshkina2019} applied an earlier version of \texttt{PASTA} and \citet{bonfanti21b} the same version of \texttt{PASTA} used here on the HD\,3167 system considering the system parameters available at the time. \citet{bonfanti21b} focused on the two transiting planets b and c, while \citet{Kubyshkina2019} also investigated the atmospheric evolution of the non-transiting planet d. For the two close-in planets b and d, the two previous studies agree with our conclusion that the planets should have lost all of their primary H/He-dominated envelopes. Both \citet{Kubyshkina2019} and \citet{bonfanti21b} ran their fits considering the planetary radii, instead of the current atmospheric mass fractions, which led them to assume slightly larger atmospheric mass fractions compared to what was used here.
Furthermore, both studies adopted significantly longer present-day stellar rotation periods, with \citet{Kubyshkina2019} using a prior peaking at roughly 25 days, while \citet{bonfanti21b} used an even larger peak value of about 50 days, based on gyrochronological considerations. \citet{bonfanti21b} found that the star was likely a slow rotator, which might be related to the large value of the current stellar rotation period they considered. Along the lines of our results, they concluded that planet c has most likely retained most of its primary H/He-dominated atmosphere and, therefore, has not undergone significant mass loss. In contrast, \citet{Kubyshkina2019} concluded that the young star was a moderate-to-fast rotator, in agreement with our hypothesis. \section{Conclusions} \begin{figure} \centering \includegraphics[width=\hsize]{Population.png} \caption{Mass-period (top panel) and mass-radius (bottom) diagrams of the exoplanet population in the Earth-Neptune range. HD\,3167 planets are shown as green disks (HD\,3167 d and e are positioned at their measured minimum masses). } \label{fig:Population} \end{figure} We performed a joint analysis of transit photometry (CHEOPS, K2, HST/WFC3, Spitzer/IRAC) and radial velocimetry (HARPS-N, HARPS, APF/Levy, Keck/HIRES) to refine the bulk and orbital properties of the planets orbiting the bright and nearby star HD\,3167. New CHEOPS photometry and RV measurements were added to the published datasets, which were re-analyzed using improved techniques. We first revised the stellar age, mass and radius. The discrepancy found between the activity signal measured in the K2/RV data and the equatorial period derived from RM analysis of HD\,3167b and c can be tentatively attributed to stellar differential rotation (at a rate of $\sim$18\%) and spots located at higher latitudes than $\approxsup$50$^{\circ}$.
We confirmed the RV drift measured by \citet{Dalal2019} as HD\,3167 e, a fourth non-transiting planet with a minimum mass of $\sim$10\,M$_{\oplus}$ and a 102\,day period. This discovery sheds new light on the peculiar orbital architecture of the HD\,3167 system. The present-day system is dynamically stable, with the orbits of its four planets consistent with circular configurations, but HD\,3167 b orbits close to the stellar equatorial plane while HD\,3167 d and c are on nearly coplanar, polar orbits. Using an analytical approach to investigate the secular dynamical history of the system, we showed that the tilting of the outer planets must have happened early in the system history; otherwise planet b, initially coupled with the star, would have become coupled with planets d and c and followed their misalignment. Yet planet e cannot explain the polar orbits of planets d and c without destroying the system through Kozai-Lidov oscillations, which further implies that these three planets have low mutual inclinations. We finally explored the possibility that an unknown, more distant companion tilted the outer planetary system. Our analytical estimates, combined with constraints from velocimetry and direct imaging data (Gemini North / ‘Alopeke), however rule out this possibility unless the companion is a massive sub-solar body with a highly inclined orbit, or a star in the birth cluster that later became unbound from the system. Our revision of the system properties improves the precision on the transiting planets' radii by a factor of two. When combined with the refined planet masses, this allows us to reduce the uncertainty on their densities by a factor of three, which is of particular interest to our understanding of their nature. Our internal structure retrievals show that the low density of HD\,3167b cannot be explained by a pure iron core and silicate mantle, suggesting that the planet contains a substantial fraction of lighter elements.
It could be water, mixed with a magma ocean and/or in a steam atmosphere resilient to evaporation, or it could be a more exotic envelope made of dust and metals. In contrast, we find that HD\,3167c is a mini-Neptune hosting a substantial volatile envelope, with a gaseous mass of $\sim$0.2\,M$_{\oplus}$ or larger if the planet is water-poor. The different passbands used for transit observations allowed us to search for broadband spectral variations in the radii of these two planets. We measure consistent sizes for HD\,3167b, as expected from the absence of a volatile envelope. In contrast, we measure significant spectral variations in the size of HD\,3167c, with a smaller radius in the infrared. These results strengthen the interest in HD\,3167c and its amenability to follow-up observations at all wavelengths, to better determine its atmospheric structure and catalog its chemical content. Transit follow-up at high spectral resolution in the ultraviolet-optical domains, or phase curve measurements in the infrared, would also be of interest to disentangle the possible scenarios for the interior of HD\,3167b. We emphasize that our data analysis improves the precision on the planetary orbital periods by more than one order of magnitude, which will greatly help future transit follow-up. Finally, we use atmospheric simulations to bring additional insight into the history and nature of the HD\,3167 planets. Due to its strong irradiation, HD\,3167b lost any primordial volatile envelope shortly after its formation. In contrast, we find that atmospheric loss did not play a significant role in the evolution of HD\,3167c, regardless of the stellar evolutionary history, so that its present-day atmosphere may still trace its primordial composition.
With reasonable assumptions on the current atmospheric mass fractions of HD\,3167 d and e, we further find that planet d likely lost all of its atmosphere through photo-evaporation while planet e was unaffected and retains a substantial gaseous envelope. To summarize, our revised picture of the HD\,3167 system (Fig.~\ref{fig:Population}) consists of: \begin{itemize} \item HD\,3167: an old K-type star, initially a moderate-to-fast rotator \item HD\,3167b: a transiting ultra-short period planet with a heavyweight envelope, initially coupled with the star and thus still orbiting near the stellar equatorial plane \item HD\,3167d: a non-transiting super-Earth with no gaseous envelope, which followed an atmospheric evolution similar to that of planet b but a dynamical evolution similar to that of planets c and e \item HD\,3167c: a massive transiting mini-Neptune, which likely kept its primordial envelope and was tilted to its present polar orbit early in the system history \item HD\,3167e: a non-transiting planet that likely followed the same atmospheric and dynamical evolution as planet c, which implies that it orbits in a nearby plane and that its true mass is close to its minimum mass. \end{itemize} In-depth characterization of the HD\,3167 system and comparison with other multi-planet systems will shed more light on its origins and evolution. \begin{acknowledgements} We thank the referee for their appreciative and constructive review. We warmly thank Thibault Kuntzer for his early analysis of the HD\,3167 photometry, and Tom Mikal-Evans for sharing the HST/WFC3 broadband photometry reduced in \cite{mikal-evans_hd3167c}. This work has been carried out in the frame of the National Centre for Competence in Research PlanetS supported by the Swiss National Science Foundation (SNSF). DE acknowledges financial support from the Swiss National Science Foundation for project 200021\_200726.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (project {\sc Spice Dune}, grant agreement No 947634; project {\sc Four Aces}, grant agreement 724427; project {\sc SCORE}, grant agreement No 851555). CHEOPS is an ESA mission in partnership with Switzerland with important contributions to the payload and the ground segment from Austria, Belgium, France, Germany, Hungary, Italy, Portugal, Spain, Sweden, and the United Kingdom. The CHEOPS Consortium would like to gratefully acknowledge the support received by all the agencies, offices, universities, and industries involved. Their flexibility and willingness to explore new approaches were essential to the success of this mission. S.G.S. acknowledges support from FCT through FCT contract nr. CEECIND/00826/2018 and POPH/FSE (EC). ACC and TGW acknowledge support from STFC consolidated grant numbers ST/R000824/1 and ST/V000861/1, and UKSA grant number ST/R003203/1. YA, MJH and JAE acknowledge the support of the Swiss National Fund under grant 200020\_172746. We acknowledge support from the Spanish Ministry of Science and Innovation and the European Regional Development Fund through grants ESP2016-80435-C2-1-R, ESP2016-80435-C2-2-R, PGC2018-098153-B-C33, PGC2018-098153-B-C31, ESP2017-87676-C5-1-R, MDM-2017-0737 Unidad de Excelencia Maria de Maeztu-Centro de Astrobiologí­a (INTA-CSIC), as well as the support of the Generalitat de Catalunya/CERCA programme. The MOC activities have been supported by the ESA contract No. 4000124370. S.C.C.B. acknowledges support from FCT through FCT contracts nr. IF/01312/2014/CP1215/CT0004. XB, SC, DG, MF and JL acknowledge their role as ESA-appointed CHEOPS science team members. ABr was supported by the SNSA. This project was supported by the CNES.
The Belgian participation in CHEOPS has been supported by the Belgian Federal Science Policy Office (BELSPO) in the framework of the PRODEX Program, and by the University of Liège through an ARC grant for Concerted Research Actions financed by the Wallonia-Brussels Federation. L.D. is an F.R.S.-FNRS Postdoctoral Researcher. This work was supported by FCT - Fundação para a Ciência e a Tecnologia through national funds and by FEDER through COMPETE2020 - Programa Operacional Competitividade e Internacionalização by these grants: UID/FIS/04434/2019, UIDB/04434/2020, UIDP/04434/2020, PTDC/FIS-AST/32113/2017 \& POCI-01-0145-FEDER-032113, PTDC/FIS-AST/28953/2017 \& POCI-01-0145-FEDER-028953, PTDC/FIS-AST/28987/2017 \& POCI-01-0145-FEDER-028987. O.D.S.D. is supported in the form of work contract (DL 57/2016/CP1364/CT0004) funded by national funds through FCT. B.-O.D. acknowledges support from the Swiss National Science Foundation (PP00P2-190080). R.D.H. is funded by the UK Science and Technology Facilities Council (STFC)'s Ernest Rutherford Fellowship (grant number ST/V004735/1). MF and CMP gratefully acknowledge the support of the Swedish National Space Agency (DNR 65/19, 174/18). DG gratefully acknowledges financial support from the CRT foundation under Grant No. 2018.2323 ``Gaseous or rocky? Unveiling the nature of small worlds''. M.G. is an F.R.S.-FNRS Senior Research Associate. SH gratefully acknowledges CNES funding through the grant 837319. KGI is the ESA CHEOPS Project Scientist and is responsible for the ESA CHEOPS Guest Observers Programme. She does not participate in, or contribute to, the definition of the Guaranteed Time Programme of the CHEOPS mission through which observations described in this paper have been taken, nor to any aspect of target selection for the programme.
This work was granted access to the HPC resources of MesoPSL financed by the Region Ile de France and the project Equip@Meso (reference ANR-10-EQPX-29-01) of the programme Investissements d'Avenir supervised by the Agence Nationale pour la Recherche. ML acknowledges support of the Swiss National Science Foundation under grant number PCEFP2\_194576. PM acknowledges support from STFC research grant number ST/M001040/1. GSc, GPi, IPa, LBo, VNa and RRa acknowledge the funding support from Italian Space Agency (ASI) regulated by “Accordo ASI-INAF n. 2013-016-R.0 del 9 luglio 2013 e integrazione del 9 luglio 2015 CHEOPS Fasi A/B/C”. This work was also partially supported by a grant from the Simons Foundation (PI Queloz, grant number 327127). IRI acknowledges support from the Spanish Ministry of Science and Innovation and the European Regional Development Fund through grant PGC2018-098153-B-C33, as well as the support of the Generalitat de Catalunya/CERCA programme. GyMSz acknowledges the support of the Hungarian National Research, Development and Innovation Office (NKFIH) grant K-125015, a PRODEX Institute Agreement between the ELTE E\"otv\"os Lor\'and University and the European Space Agency (ESA-D/SCI-LE-2021-0025), the Lend\"ulet LP2018-7/2021 grant of the Hungarian Academy of Science and the support of the city of Szombathely. V.V.G. is an F.R.S-FNRS Research Associate. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. Some of the observations in the paper made use of the High-Resolution Imaging instrument ‘Alopeke obtained under Gemini LLP Proposal Number: GN/S-2021A-LP-105.
‘Alopeke was funded by the NASA Exoplanet Exploration Program and built at the NASA Ames Research Center by Steve B. Howell, Nic Scott, Elliott P. Horch, and Emmett Quigley. ‘Alopeke was mounted on the Gemini North (and/or South) telescope of the international Gemini Observatory, a program of NSF’s OIR Lab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation, on behalf of the Gemini partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigación y Desarrollo (Chile), Ministerio de Ciencia, Tecnología e Innovación (Argentina), Ministério da Ciência, Tecnologia, Inovações e Comunicações (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). \end{acknowledgements} \bibliographystyle{aa}
\section{INTRODUCTION} The dwarf planet Haumea \citep{2006ApJ...639.1238R}, its two moons \citep{2005ApJ...632L..45B,2006ApJ...639L..43B}, and its collisional family \citep{2007Nature..446..296} provide important constraints on the formation of the Kuiper belt and the outer solar system. This well-studied object is the fastest-rotating large body in the solar system \citep{2006ApJ...639.1238R} with rotational variability in color \citep{2008AJ....135.1749L,2009AJ....137.3404L}, an unexpectedly high density \citep{2014EM&P..111..127L}, and a large albedo \citep{2010Natur.465..897E}. It has two moons on dynamically excited orbits \citep[][hereafter RB09]{2009AJ....137.4766R} which have scaled mass ratios and distances similar to the Earth-Moon system. Dynamical, photometric, and spectroscopic observations of objects in the vicinity of Haumea clearly indicate a collisional family of icy fragments with similarly high albedos \citep{2007AJ....134.2160R,2008ApJ...684L.107S,2010A&A...511A..72S}. However, though the expected dispersion velocity of these fragments is of order several hundred meters per second, the observed dispersion is well constrained within $\sim$150 m s$^{-1}$. The apparent lack of high velocity ejecta is confirmed by observational surveys and dynamical studies \citep[e.g.,][]{2012ApJ...749...33F,2012MNRAS.421.1331L,2012Icar..221..106V}, though it is possible that some high velocity ejecta would be unrecognizable dynamically \citep{2011ApJ...733...40M} and/or compositionally \citep{2012A&A...544A.137C,2012AJ....143..146B}. There is no simple high-probability formation scenario that naturally explains all of these observational constraints: Haumea's rapid near-breakup rotation rate, the two moons on distant and dynamically warm orbits, and a collisional family that is an order of magnitude smaller in velocity dispersion than expected.
Though multiple explanations and variations have been proposed \citep[e.g.,][]{2008AJ....136.1079L,2009ApJ...700.1242S,2010ApJ...714.1789L,2011ApJ...733...40M,2012MNRAS.419.2315O,2013AJ....146...89C}, none have adequately and self-consistently explained all of the unique features of this interesting system and its family. Attempting to place the formation of the Haumea system in context with other similar systems in the Kuiper belt quickly leads to comparisons with Kuiper belt objects (KBOs) of similar sizes, particularly Eris, Pluto, and Makemake. Of these, Pluto is the best understood due to a wealth of observational data and the recent flyby by the New Horizons Mission \citep{2015Sci...350.1815S}. Furthermore, there are similarities between some of the theories for the formation of Haumea's satellites \citep[e.g.][]{2010ApJ...714.1789L} and for the formation of Pluto's satellites \citep[e.g.][]{2011AJ....141...35C}: both suggest a relatively large impactor with very low incoming velocity that undergoes a grazing collision to form a satellite system. With the discovery of a retinue of small satellites exterior to Charon's orbit -- now dubbed Styx, Nix, Kerberos, and Hydra -- there is renewed interest in observational constraints on the formation of the Pluto system \citep{2006Natur.439..943W,2011IAUC.9221....1S,2012IAUC.9253....1S,2015Natur.522...45S}. Standard explanations for the formation of Nix and Hydra were already problematic \citep{2006Sci...313.1107W,2008arXiv0802.2951L}, and the characteristics of Styx and Kerberos are even more puzzling \citep[][]{2012CeMDA.114..341P,2014AJ....147....8K,2014arXiv1407.1059C}. For example, in the current orbital configuration, the dynamical stability of Styx requires that Charon's eccentricity at its present semi-major axis was never above $\sim$0.035, using the circumbinary stability criterion of \citet[][see Equation 3]{1999AJ....117..621H}.
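The scale of this eccentricity bound can be checked against the \citet{1999AJ....117..621H} fitting formula. The sketch below is illustrative only: the polynomial coefficients are those of their Equation 3, while the Pluto--Charon mass ratio and the semi-major axes of Charon and Styx are rough published values assumed here, not taken from this work; the resulting bound comes out at the few-percent level, consistent in scale with the $\sim$0.035 quoted above.

```python
import numpy as np

# Holman & Wiegert (1999), Eq. 3: critical semi-major axis (in units of the
# binary separation) beyond which a circumbinary (P-type) orbit is stable.
def a_crit(mu, e):
    return (1.60 + 5.10 * e - 2.22 * e**2 + 4.12 * mu
            - 4.27 * e * mu - 5.09 * mu**2 + 4.61 * e**2 * mu**2)

mu = 0.109          # approximate Charon / (Pluto + Charon) mass ratio
a_charon = 19600.0  # approximate Charon semi-major axis, km
a_styx = 42650.0    # approximate Styx semi-major axis, km

# Largest binary eccentricity keeping Styx outside the critical radius:
e_grid = np.linspace(0.0, 0.2, 2001)
e_max = e_grid[a_crit(mu, e_grid) < a_styx / a_charon].max()
print(e_max)        # a few percent, the same scale as the ~0.035 bound
```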
Thus, the discovery of Styx combined with dynamical stability immediately precludes some of the more extreme proposed orbital histories of \citet{2014Icar..233..242C} if Styx formed concurrently with Charon \citep{2014arXiv1407.1059C}. Long-term dynamical stability can also place some of the best constraints on the masses of these small moons \citep{2012ApJ...755...17Y,2013AJ....146...89C,2015arXiv150505933P,2015Natur.522...45S}. The discovery of small moons around Pluto and their ability to add constraints to the understanding of this system suggest that all asteroid and KBO binaries and triples be searched for additional moons. We recommend the continuation of this standard practice, even when an initial companion is identified. For KBOs, satellite searches are observationally difficult for multiple reasons. First, acquiring data of sufficient depth and resolution to identify faint moons of faint KBOs usually requires a considerable amount of time at the best telescopes in the world, such as the \emph{Hubble Space Telescope} (HST) or 8-10 meter class telescopes with Laser Guide Star Adaptive Optics. The only KBOs with large amounts of continuous high-quality data are Pluto and Haumea. Second, the discovery of small moons can be frustrated by their \emph{a priori} unknown satellite orbital motion during long exposures. Faint, fast-moving moons can then evade detection even with the best data, using standard analysis methods. Therefore, an enhanced methodology to search for faint moving moons is required. In an attempt to better understand the formation of the Haumea system, we use a large set of consecutive HST observations to perform a search for very small moons around Haumea similar to those discovered around Pluto ($\S$2). To search for faint, fast-moving moons well below the single-exposure limit, we implemented the non-linear shift-and-stack method proposed by \citet[][hereafter PK10]{2010PASP..122..549P} for the discovery of KBOs ($\S$3).
Adapted to the problem of finding additional satellites, this method was both efficient and effective. Though no additional satellites of Haumea were detected ($\S$4), with careful characterization of this null detection, we set strong limits on the size and location of possible undiscovered moons ($\S$5) and discuss the implications for our understanding of Haumea's satellite system ($\S$6). \section{OBSERVATIONS} In determining the ideal set of observations for a deep satellite search, a balance must be struck between including the largest number of observations and considering the motion of putative satellites during the total observational baseline. The standard stacking method of adding images that have been co-registered to the position of the primary to enhance sensitivity to faint satellites is limited to observational arcs where the satellite's relative position remains within a region not much larger than $\sim$1 Point Spread Function (PSF) Full Width at Half Maximum (PSF FWHM). Our use of non-linear shift-and-stack can mitigate this problem significantly and allows us to perform a sensitive search on longer timescales. In particular, our HST Program 12243 observed with a wide filter for 10 consecutive orbits and is an excellent dataset for a deep satellite search; with the technique discussed below, we can search these observations even for close-in satellites which traverse a significant fraction of an orbit during the 15-hour baseline, corresponding to several PSF widths. These observations are our main focus as they are clearly the best for a deep satellite search ($\S$2.1); however, we also inspected other observations for additional satellites of Haumea ($\S$2.2). \subsection{HST 12243: 10 Orbits in July 2010} HST Program 12243 obtained 10 orbits of observations over the course of $\sim$15 hours in July 2010. This program used the Wide Field Camera 3 (WFC3) UVIS imager with the F350LP (long-pass) filter.
The primary goal of these observations was the detection of a mutual event between the inner moon Namaka and Haumea (RB09). In order to produce high cadence time series photometry of the proposed mutual event (and to avoid saturation), exposures were limited to $\sim$45 seconds. To prevent a costly memory buffer download, only a 512x512 subarray of the full WFC3 camera was used with a field of view of $\sim$20.5 arcseconds. HST tracked at Haumea's rate of motion (except for controlled dithering) to maintain its position near the center of the field of view throughout the observations. The geocentric distance to Haumea at time of observations was 50.85 AU. At this distance, 1" corresponds to 36900 km, 1 WFC3 pixel (0.04 arcseconds) corresponds to 1475 km, and the entire subarray field of view corresponds to $\sim$750000 km. Parts of the last two orbits were affected by the South Atlantic Anomaly. This caused a portion of these orbits to lose data entirely, and another portion was severely affected by cosmic rays and loss of fine pointing precision. The worst-affected frames were discarded for the purpose of the satellite search, leaving 260 individual exposures. The center of Haumea was identified by eye in combination with a 2-d Gaussian fitting routine. With these well-determined preliminary Haumea locations, all images were co-registered to Haumea's position. In this Haumea-centric frame, cosmic rays and hot pixels were identified by significant changes in brightness at a particular position using robust median absolute deviation filters. A detailed and extensive image-by-image investigation of cosmic rays by eye confirmed that this method was very accurate at identifying cosmic rays and other anomalies.
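The flagging step just described can be sketched as follows; the $5\sigma$ threshold and array conventions here are illustrative assumptions, not the exact values used in the reduction.

```python
import numpy as np

def mad_mask(cube, nsigma=5.0):
    """Flag cosmic rays and hot pixels in a stack of co-registered images.

    cube: array of shape (n_images, ny, nx), already registered on Haumea.
    Returns a boolean mask, True where a pixel deviates strongly from the
    per-position median taken across the stack.
    """
    med = np.median(cube, axis=0)                # per-pixel median over the stack
    mad = np.median(np.abs(cube - med), axis=0)  # median absolute deviation
    sigma = 1.4826 * mad                         # MAD -> Gaussian-sigma scale
    sigma = np.where(sigma > 0, sigma, np.inf)   # guard against dead pixels
    return np.abs(cube - med) > nsigma * sigma
```

Because only per-position outliers in time are flagged, a source present at a similar level in most frames is left untouched, consistent with the known bodies not being flagged.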
Furthermore, the automatic routine did not flag the known objects (Haumea and its two moons: Hi'iaka and Namaka) as cosmic rays, nor were any other specific localized regions identified for consistent masking (i.e., putative additional satellites were not removed). TinyTim software\footnote{\texttt{http://www.stsci.edu/hst/observatory/focus/TinyTim}} was used to generate local point-spread function (PSF) models. As in RB09, these PSF models were then fit to the three known objects using standard $\chi^2$ minimization techniques \citep{2009ASPC..411..251M}. This identified the best-fit locations and heights of the scaled PSFs, with bad pixels masked and thus not included in the $\chi^2$ calculation. The astrometric positions of the known satellites relative to Haumea seen by this method were in clear accord with their projected orbital motion from RB09. Despite Namaka's nearness to Haumea (which was purposely chosen, as the goal of the observations was to observe a mutual event), it is easily distinguishable in the first few orbits. The best fit PSFs are visually inspected and found to be good fits for all images. These PSFs are then subtracted, removing a large portion of the three signals, but leaving non-negligible residuals; these residuals are caused by imperfect PSFs (note that Haumea may be marginally resolved in some images) and standard Poisson noise. Using the updated best-fit centers of Haumea, these PSF-subtracted images are re-coregistered to Haumea's best-fit location (though the additional shifts from the preliminary 2-d Gaussian centers are very small). As described below, these same PSF-subtracted images are used to perform the non-linear shift-and-stack. Moons with negligible motion relative to Haumea can be identified in this image by stacking all the observations in the Haumea-centered reference frame (including fractional pixel shifts implemented by IDL's \texttt{fshift}).
Throughout this investigation, we create stacks by performing a pixel-by-pixel median of the images, which is less sensitive to cosmic rays, bad pixels, and errors in the PSF subtraction of the known bodies. This results in a small decrease in sensitivity, as, assuming white noise, the noise level is a factor of $\sqrt{\frac{\pi}{2}}$ larger when using the median instead of the mean. This will correspond to only a $\sim 20\%$ difference in brightness sensitivity, or a $\sim 10\%$ difference in radius, and we find this acceptable so that the other effects mentioned above can be mitigated. A portion of this stacked image around Haumea is shown in Figure \ref{stationary}. Detailed investigation of this deep stack by eye by each co-author yielded no clear satellite candidates. The median stacks were also automatically searched using the IDL routine \texttt{find}, which uses a convolution filter with FWHM of $1.6$ pixels to identify positive brightness anomalies. Detections with SNR of 5 or greater were examined; none were found that were consistent with an additional satellite (e.g., having PSF-like shape). Scaling from the SNR of Haumea and using a more conservative detection limit SNR of 10, this limits non-moving satellites to fainter than about $V\simeq27.1$, corresponding to Haumea satellites with radii less than about $8$ km (see $\S$5). \begin{figure} \centering \plotone{f1.eps} \caption{A portion of the median stack of 260 images from 10 orbits of HST WFC3 data (Program 12243). Individual images are co-registered to be stationary in the Haumea-centric frame, with best-fit TinyTim PSFs for Haumea, Hi'iaka and Namaka subtracted. The brightness has been stretched significantly to highlight the residuals. These residuals will limit sensitivity near Haumea, but the diffraction spikes and the majority of the PSFs have been removed. Above and to the left of Haumea lie the residuals from Hi'iaka which is 1.23" away (45600 km projected distance) in this stack.
Vertical darker columns are due to minor uncorrected pixel sensitivity. The image is aligned so that Astronomical North is up. } \label{stationary} \end{figure} \subsection{Other Observations} HST has observed Haumea during many programs for multiple reasons. Program 11518 was proposed to obtain astrometry of both moons and is 5 independent orbits of observations spread over 2 weeks (RB09). Although it would be interesting to investigate the possibility of combining these data in a long-baseline non-linear shift-and-stack, given the existence of other more sensitive datasets, we investigated only the single-orbit median stacked images. Motion during a single 45-minute HST orbit is small compared to the PSF width, even for the shortest satellite orbital periods. HST Program 11971 was 5 consecutive orbits and HST Program 12004 was 7 consecutive orbits, both attempts to observe the last satellite-satellite mutual events. The latter program was within a few weeks of the HST 4th Servicing Mission but was still executed. Unfortunately, for 6.5 of the 7 orbits, the STIS shutter was closed and no on-sky data were taken. For the 5-orbit Wide Field Planetary Camera 2 observation of Program 11971, we median-stacked images centered on Haumea and searched for additional sources by eye and using IDL's \texttt{find} as described above. We investigated stacks of individual orbits and of the entire 5-orbit sequence and found no sources consistent with additional satellites. Though the non-linear shift-and-stack method below could fruitfully be applied to these observations, the WFC3 observations are considerably deeper and we opted to focus on our best dataset. Finally, we obtained some long-duration ($\sim$5 hours) observations of Haumea using the Laser Guide Star Adaptive Optics system at the Keck Observatory. Co-registered stacks of this data also showed no clear additional satellites, though the known satellites were very easily detected. 
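As a cross-check, the plate-scale figures quoted in $\S$2.1 follow directly from the geocentric distance under the small-angle approximation:

```python
import math

AU_KM = 1.495978707e8                 # kilometers per astronomical unit
ARCSEC_RAD = math.pi / (180 * 3600)   # radians per arcsecond

distance_km = 50.85 * AU_KM           # geocentric distance at the epoch of observation

km_per_arcsec = distance_km * ARCSEC_RAD   # s = d * theta for small angles
km_per_pixel = km_per_arcsec * 0.04        # WFC3 UVIS pixel scale, arcsec/pixel
field_km = km_per_arcsec * 20.5            # 512x512 subarray field of view

print(km_per_arcsec)   # ~36900 km per arcsecond
print(km_per_pixel)    # ~1475 km per pixel
print(field_km)        # ~750000 km across the subarray
```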
\section{METHODS} For the detection of faint bodies, with signal-to-noise ratio (SNR) per image of $\lesssim$5, a useful approach is the co-addition (``stacking'') of multiple images. With the 260 images in our dataset, this method can increase the SNR by $\sim$$\sqrt{\frac{2}{\pi}}\sqrt{260} \approx 13$, thereby searching for satellites with radii $\sim\sqrt{13}\approx 3.6$ times smaller than could be detected in a single image. If the object does not remain apparently stationary (within $\lesssim$1 FWHM) over the course of the observation, the simple co-addition will result in insufficient overlap between images to yield the expected increase in SNR. If the motion of the object is known, images can be first shifted to compensate for this motion, and the images added with the object localized, regaining nearly the full sensitivity: this is the meaning of ``shift-and-stack''. Linear searches with shift-and-stack have been used to discover satellites in the past \citep{2004Icar..169..474K,2004Natur.430..865H, 2013DPS....4520601S} although these searches did not need to use the non-linear shift-and-stack method we employ below. In a situation where the motion is unknown, such as a broad search for KBOs, a large set of possible paths on the sky can be considered, and each path independently used as the basis for a shift-and-stack, as described by PK10. A composite image results from each proposed orbital path (which we call a ``sky track''), and each stack can be searched for faint satellites which emerge from the noise due to shifting the image accurately enough to (mostly) compensate for its motion. To minimize statistical false-positives and to increase computational tractability, it is important to identify a near-minimal number of sky tracks that will faithfully reproduce all the possible motions without performing redundant searches.
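The stacking gain quoted at the start of this section, including the $\sqrt{2/\pi}$ median penalty discussed in $\S$2.1, can be verified with a short Monte Carlo sketch (white Gaussian noise assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
n_images, n_trials = 260, 20000

# Compare the noise of the mean and of the median of n_images Gaussian samples.
samples = rng.normal(0.0, 1.0, (n_trials, n_images))
noise_mean = samples.mean(axis=1).std()
noise_median = np.median(samples, axis=1).std()

penalty = noise_median / noise_mean      # -> sqrt(pi/2) ~ 1.25 for large n
snr_gain = np.sqrt(n_images) / penalty   # ~ sqrt(2/pi) * sqrt(260) ~ 13
radius_factor = np.sqrt(snr_gain)        # flux ~ radius^2, so ~3.6x smaller radii

print(penalty, snr_gain, radius_factor)
```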
PK10 suggested an algorithm for identifying the most important non-redundant set of sky tracks, which we fruitfully employ: generate a large number of random sky tracks based on the full range of expected motion (within desired search parameters) and then remove tracks that are similar to one another. We have adapted this technique for our search. It is important to note a distinction between a general KBO search and a satellite search that is largely ignored in the method presented here. This distinction is that, in a broad KBO search, a sky track could be valid for any part of an image; that is, there is little correlation between position and motion. This is not the case for a satellite orbiting a given primary, for which a specific motion applies only to a small spatial region. The more highly curved tracks are, the more specific to a particular region they are --- a curved orbital arc translated to the other side of Haumea would not make physical sense. The method described below involves shifting and stacking the entirety of each image, and searching the whole of the composite image, when in fact the track upon which the shift-and-stack is based applies to only a small subset of each image. In addition to the computational cost of shifting and searching larger images than is necessary, this overuse of the images could potentially result in an increase in statistical false-positives. However, neither of these effects manifests in a noticeable way --- neither computation time nor an abundance of false-positives limits our search method. This suggests that we are near the optimal minimum number of sky tracks searched, or have at least reached an acceptably small number. Discussed in greater detail below, an overview of our search algorithm is as follows: \begin{enumerate} \item Generate a large bank of physically reasonable putative sky tracks by randomly selecting from plausible Keplerian satellite orbital parameters.
\item Fit each sky track with non-linear polynomials in time (shift rates). If the shift rates for two distinct sky tracks are similar enough (quantified below), discard one. \item Continue searching for sky tracks until a nearly-complete non-redundant set is identified. \item For each track, create a composite image. This is done by overlaying the dataset (in our case, 260 images) upon itself, with the images shifted by the appropriate shift rates such that an object on that track will appear in the same place in each image. Co-add the images into one composite. \item Search each composite image for satellite candidate sources. \end{enumerate} The use of non-linear polynomial fits allows the shift rates to more accurately capture curved orbits than simple linear fits. For the motion of even the fastest detectable Haumean satellites over the timescale of our observations, we find that quadratic fits to the x and y positions are always sufficient. Note that the polynomial fits are included for convenience in describing the sky tracks; the actual positions of a putative satellite could be used, but the difference between the actual positions and the best-fit quadratic approximation was negligible. Including non-linear rates is often expected to greatly expand the number of dissimilar shift rates to the point of computational impracticality, but we find that an appropriate criterion for similarity of shift rates easily permits the inclusion of quadratic rates. \subsection{Generation of Sky Tracks} In a typical shift-and-stack search for KBOs, the putative sky tracks are selected from a grid of the six degrees of freedom needed to describe an object in a Keplerian orbit \citep[PK10, ][]{2004AJ....128.1364B}.
For the purposes of a KBO satellite, particularly that of a primary with other known satellites, it is convenient to instead sample the space of Keplerian orbital elements relative to the primary: semi-major axis ($a$), eccentricity ($e$), inclination ($i$), longitude of the ascending node ($\Omega$), argument of periapse ($\omega$), and mean anomaly at epoch ($M$). Sampling in this space allows for direct control over the types of orbits that are searched, making it straightforward to exclude unphysical motions. In our case, we also benefit from a well-known mass of the primary; if this is not known, a variety of plausible values could be sampled for the generation of sky tracks. For this search, $a$ and $e$ were randomly sampled from orbits with semi-major axes between 5310 and 368000 km and eccentricities less than 0.5, while the orbital angles $i$, $\Omega$, $\omega$ and $M$ were allowed to assume any value. All parameters were chosen from the sample space uniformly, with the exception of $a$, which was sampled on a log scale to increase the likelihood of sampling an orbit in the regime of fast-moving satellites. The lower bound on $a$ is a constraint imposed by our sensitivity of detection. This limit corresponds to 3.75 pixels (15 milliarcseconds) on the WFC3, at which distance from the center of Haumea the subtraction noise is considerable enough to make reliable detections difficult (see Figure \ref{stationary}). The upper bound on $a$ is set much larger than the semi-major axes needed to shift images (as opposed to investigation of the unshifted stack). At a distance where the satellite's maximum velocity would cause it to travel less than one PSF FWHM over the course of the 15 hour observational period (here $\sim$27 m s$^{-1}$), shift-and-stack is unnecessary, giving the upper limit of $a \simeq 150,000$ km in our search.
This semi-major axis is $\sim$3 times the semi-major axis of Hi'iaka, whose motion in these frames is detectable, but $\lesssim$0.5 pixels. For the upper limit on $a$, we doubled this number to be conservative. \label{notmoving} Much of this orbital parameter space can be excluded on physical grounds, reducing the number of shift rates necessary to well-sample the space. Any putative orbit which crossed paths with the known satellites was rejected, as was any orbit with periapsis less than $3000$ km. These weak restrictions on orbital elements did not appreciably affect the selection of shift-and-stack parameters and additional tests (described below) show that we are sensitive to objects on practically any orbit with semi-major axis $\gtrsim$10000 km. \subsection{Non-linear Fitting and Shift Rate Similarity} Having created a bank of physically plausible orbits, we then generate a set of shift rates with which to create composite images to search for satellites. Orbital parameters were converted into sky coordinates relative to Haumea, right ascension ($\Delta$RA) and declination ($\Delta$Dec), for each image, as described in RB09. We assumed an instantaneous Keplerian orbit for the position of the satellite as this is an excellent approximation over the course of our observations. In our case, orbital acceleration was quite important, as we desired to search orbital periods down to $\sim$40 hours, of which the 15-hour observational arc is a sizable fraction. Therefore, the sky positions $\Delta$RA and $\Delta$Dec were fit with quadratic polynomials in time, which we found were sufficient to accurately describe the non-linear motion in every case. In order to minimize the number of sky tracks, we eliminated tracks which were similar to one another, as suggested by PK10. To determine if two tracks were similar, we focused on the final requirement that the shift rates localize the flux of a satellite so that it can be identified in the stacked image.
If the flux of a satellite traveling along the second orbit would be well-localized by the shift rates of the first, then there would be considerable overlap of the flux between images when shifted according to the rates of the first. This criterion can be quantified by calculating the overlap fraction between two shift-and-stack rates using the reasonable assumption that the WFC3 PSF is nearly Gaussian, with FWHM of 0.067 arcseconds ($\approx$1.7 pixels). For a pair of shift rates, the overlap was defined for each image in the dataset as the integral of the product of two such Gaussians separated by the difference in the two rates ($\Delta$RA and $\Delta$Dec) at the time of that image. We call this the overlap between two orbits as it is calculated from the product of two overlapping PSFs, but it is distinct from the concept of the overlap in co-added images. If the median overlap (normalized to 1 for perfect coincidence) was greater than a pre-specified threshold, it was considered that a sufficient fraction of the flux of the proposed satellite would have been collected by the stack of an existing sky track, and the new track was rejected as unnecessary. The goal is to build up a bank of sky tracks known to be mutually distinct. After accepting the first track into the bank, each subsequent track was compared to the previously selected tracks in the bank using the above overlap criterion. We experimented with different overlap threshold criteria and found the overall results mostly insensitive to the specific value chosen. In general, we required a median overlap of less than 0.7 with each previously accepted shift-and-stack track to accept the proposed track as distinct enough to add to the bank. By drawing from a large set of orbits covering the desired search space, this method efficiently builds a bank of mutually distinct shift rates that are also the most relevant (PK10). However, unlike a grid search, random orbital draws can continue indefinitely.
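For two circular Gaussians of common width $\sigma$, the overlap integral above has the closed form $\exp(-\Delta^2/4\sigma^2)$ for center separation $\Delta$ (a standard Gaussian-product identity, used here as an assumption rather than taken from the text), which makes the rejection threshold easy to interpret:

```python
import numpy as np

FWHM = 1.7                                         # WFC3 PSF FWHM in pixels (~0.067")
sigma = FWHM / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # ~0.72 pixels

def overlap(delta):
    """Normalized integral of the product of two circular Gaussians of width
    sigma separated by delta pixels; equals 1 at delta = 0."""
    return np.exp(-delta**2 / (4.0 * sigma**2))

# Separation at which the overlap falls to the 0.7 rejection threshold:
delta_crit = 2.0 * sigma * np.sqrt(np.log(1.0 / 0.7))
print(delta_crit)   # ~0.9 pixels: tracks staying closer than this are merged
```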
Thus, we also require a ``stopping criterion'' to decide when the bank is large enough for practical use. To determine the upper limit on the number of necessary shift rates, we noted that the sample space saturates quickly; that is, the rate of acceptance drops off drastically after 10-15 shift rates are chosen. Consequently, the number of orbits rejected between successive accepted rates grows very quickly. Our criterion for a dense sampling was that the number of rejected rates between successive accepted rates was at least equal to the total number of rates rejected so far. Put another way, the selection was stopped when the acceptance of each new shift rate required doubling the number of sampled orbits, which typically occurred after testing hundreds of thousands of random orbits. This is an exponentially slow process, which suggests that once past this threshold, we have reached the limits of our rate selection method. Practically speaking, we found that this stopping criterion still generated enough shift rates to recover injected sources with a complete variety of orbits. Together with the above ranges for satellite orbital parameters, this method yielded only $\sim$35 sufficiently distinct orbits over which to search. Considering the size of the parameter space (linear and quadratic terms for shifts in both x and y directions), it might seem surprising that so small a set of potential orbits spans the space. However, a large portion of the space consists of short, almost entirely linear tracks, where the quadratic corrections are of limited importance. The only strongly quadratic orbits are very near to Haumea, which also have large linear rates. In other words, there are strong correlations between the allowed linear and quadratic coefficients of physically-plausible tracks. The result is a relatively small number of non-linear shift rates that efficiently cover the desired search space (PK10); these tracks are illustrated in Figure \ref{tracks}. 
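The full selection loop of $\S\S$3.1--3.2 --- random draws, quadratic fits, overlap-based rejection, and the doubling stopping rule --- can be summarized in the sketch below. For self-containment, the Keplerian-to-sky conversion of RB09 is replaced by a toy circular-orbit model, and a draw cap and rejection floor are added so the toy version runs quickly; none of these stand-ins are from the original analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(-7.5, 7.5, 260) * 3600.0            # 260 exposure times over 15 h, s
GM = 267.0                                          # ~G M_Haumea, km^3 s^-2 (approx.)
KM_PER_PIX = 1475.0
SIGMA = 1.7 / (2.0 * np.sqrt(2.0 * np.log(2.0)))    # PSF sigma in pixels

def random_track():
    """Toy stand-in for a Keplerian draw: a circular orbit with random phase
    and inclination, log-uniform in semi-major axis (Section 3.1 ranges)."""
    a = 10.0 ** rng.uniform(np.log10(5310.0), np.log10(368000.0))   # km
    n = np.sqrt(GM / a**3)                          # mean motion, rad/s
    phase = rng.uniform(0.0, 2.0 * np.pi)
    cosi = rng.uniform(-1.0, 1.0)
    x = a * np.cos(n * t + phase) / KM_PER_PIX
    y = a * np.sin(n * t + phase) * cosi / KM_PER_PIX
    x -= x[t.size // 2]                             # displacement relative to the
    y -= y[t.size // 2]                             # middle exposure (cf. Fig. 2)
    # Quadratic shift rates in time, as in Section 3.2:
    return np.polyfit(t, x, 2), np.polyfit(t, y, 2)

def median_overlap(track1, track2):
    """Median over exposures of the normalized Gaussian PSF overlap."""
    (px1, py1), (px2, py2) = track1, track2
    dx = np.polyval(px1, t) - np.polyval(px2, t)
    dy = np.polyval(py1, t) - np.polyval(py2, t)
    return np.median(np.exp(-(dx**2 + dy**2) / (4.0 * SIGMA**2)))

bank, rejected_total, rejected_since = [], 0, 0
for _ in range(5000):             # draw cap: an assumption to keep the sketch fast
    cand = random_track()
    if all(median_overlap(cand, b) < 0.7 for b in bank):
        bank.append(cand)
        rejected_since = 0
    else:
        rejected_total += 1
        rejected_since += 1
        # Stopping rule of Section 3.2: stop once the current run of rejections
        # matches all earlier rejections combined (a 200-draw floor is added so
        # the toy model does not stop prematurely).
        if rejected_total >= 200 and 2 * rejected_since >= rejected_total:
            break
print(len(bank))                  # a modest bank of mutually distinct tracks
```

As in the text, distant slow orbits collapse into a handful of nearly stationary tracks, so the accepted bank stays small.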
Contrasted with the orbital element motivated sampling presented here, the number of shift rates in the case of a quadratic sky motion grid search would have been much larger. \begin{figure} \centering \plotone{f2.eps} \caption{The non-linear shift-and-stack rates. Arcs show the displacement of images (relative to position at the middle image) over the course of the 15 hour observation. Each arc represents a different shift rate or ``sky track.'' Horizontal and vertical axes show differential right ascension ($\Delta$RA) and declination ($\Delta$Dec) in arcseconds and pixels (1 pixel = 0.04 arcseconds = 1475 km). The circle at bottom left has diameter of 0.067 arcseconds, the FWHM of WFC3's PSF. Following the method suggested by PK10, we generate random non-linear shift rates out of Keplerian orbital elements. We reject as duplicates rates for which the overlapping PSFs would catch at least 70\% of the flux if moving at the same rate as an existing orbit (see $\S$3.2). This method requires only $\sim$35 non-linear rates to cover the vast majority of parameter space. The ``sky tracks'' associated with these rates are mostly symmetric about the origin as seen above, with slight asymmetries arising from the projection of eccentric orbits into the skyplane, and the variation in orbital speed throughout an orbit. Almost all rates are substantially quadratic, which shows the importance of the non-linear approach. As can be seen, the use of quadratic shift rates allowed us to probe the region near Haumea where satellites would execute sizable fractions of an orbit during the 15-hour observation. Implantation of artificial sources on orbits randomly drawn from the same Keplerian elements showed an excellent recovery rate (see Figures \ref{avsb} and \ref{svsb}).} \label{tracks} \end{figure} \subsection{Creation of Composite Image} With our bank of non-degenerate sky tracks, we can now perform the non-linear shift-and-stack procedure.
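A minimal sketch of this per-track stacking, with scipy's spline-based subpixel shift standing in for IDL's \texttt{fshift} (the sign convention and NaN edge handling are likewise assumptions of this sketch):

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def composite_for_track(cube, t, coeffs_x, coeffs_y):
    """Shift each PSF-subtracted, Haumea-registered image along a quadratic
    track and median-combine, so a source on that track stacks coherently.

    cube               : (n_images, ny, nx) array of registered images
    t                  : (n_images,) observation times
    coeffs_x, coeffs_y : quadratic polynomial coefficients (numpy ordering)
    """
    dx = np.polyval(coeffs_x, t)    # track offsets in pixels at each time
    dy = np.polyval(coeffs_y, t)
    shifted = [subpixel_shift(img, (-y, -x), order=3, mode="constant", cval=np.nan)
               for img, x, y in zip(cube, dx, dy)]
    # Pixel-by-pixel median, ignoring regions shifted out of the frame:
    return np.nanmedian(np.stack(shifted), axis=0)
```

A source whose motion matches the track lands on the same pixel in every shifted frame and survives the median, while one stacked with the wrong track is diluted.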
Each track corresponds to a specific set of $\Delta$RA and $\Delta$Dec values of a putative satellite relative to Haumea. We used \texttt{adxy}, a routine from the IDL Astro Library which uses astrometric data from the image headers, to convert these sky coordinates into on-image pixel positions, thus yielding the desired pixel shifts. The prepared images were shifted (including fractional pixel shifts implemented by IDL's \texttt{fshift}) and stacked using the pixel-by-pixel median of the images as described above. In preparation for the automated search, many images were investigated in detail by eye. \subsection{Sensitivity} To test the sensitivity of this method, artificial sources were implanted into the images with a range of random brightnesses. Their positions and rates of motion were determined by orbits randomly drawn from the same space mentioned above (but without restriction of non-crossing orbits with the known moons). The implants were generated by scaling from the actual PSF of Haumea (when brightest). This source was implanted into the images at the pixel positions corresponding to the randomly-chosen orbit. A subimage of 200 x 200 pixels was used for the search: the outer reaches of this subimage have objects that are practically not moving (see $\S$\ref{notmoving}) and any object beyond this region would have the same detectability threshold as stationary objects. This was done for a large number of orbits, with a new set of images created for each. Stacks were generated in the exact same manner as the real images, with the same $\sim$35 shift rates making new median stacks for each new set of images. These stacks were inspected using the same automated search routine (IDL's \texttt{find}). To distinguish detections of implanted sources from the detection of the three known bodies, we examined the output of the search for the sets of stacks with no sources implanted.
All detections here were due to known bodies, and the positions were used to establish a mask with three regions, one for each known body, to reject detections that were not due to implanted sources. In this way, detections could be automatically classified as a recovery of an implant or as a false positive due to the known bodies. These automated classifications were extensively verified with an investigation that included searching by eye and found to be very robust. Due to the application of a threshold SNR by the \texttt{find} routine, objects in the vicinity of Haumea, while still far enough and bright enough to be seen by eye, may be rejected by the routine itself (not our masks). The presence of the primary nearby leads to an artificially high computed background noise level, which reduces the computed SNR significantly, causing the object to appear below threshold. Any stacks with sources at risk of being left undetected due to this effect were searched by eye by multiple coauthors, and any that were detected in this process were considered to be recovered for the purposes of our results, shown below. The success rates of finding implanted objects place constraints on additional satellites of Haumea: any recovered implanted source represents a satellite that, had it existed, would have been detected, and so we can say with reasonable certainty that no such satellite is present in the Haumea system. \section{RESULTS} The implantation and successful recovery of faint moving sources clearly indicated the effectiveness of our non-linear shift-and-stack method. Nevertheless, we did not detect any additional satellites around Haumea and no candidate satellites were found that were worthy of additional investigation. A careful characterization of this null result is able to place strong limits on the brightness and separation of undiscovered Haumean satellites. These limits are summarized by Figures \ref{avsb} and \ref{svsb}, which show the results of our search for each implanted source.
The source is either recovered, rejected (for being too close to one of the three known objects, usually Haumea), or ``missed'' because it was too faint to be detected, or because it fell off the 200 $\times$ 200 subimage that was searched. Note that the ``rejected'' category is primarily composed of objects that were not clearly detected by the automated routine but were detected in a blind search by eye by multiple coauthors; these consist entirely of objects that are $\lesssim$0.2" (5 pixels) from Haumea. Figure \ref{avsb} plots semi-major axis against brightness of the sources as a fraction of that of Haumea, while Figure \ref{svsb} shows the projected distance (in arcseconds) of the moving sources versus the brightness. Assuming the same albedo ($p \simeq 0.7$) as Haumea, the relative brightness corresponds to the radius of a spherical satellite, which is also shown. \begin{figure} \centering \plotone{f3.ps} \caption{(Color online) Results from the sensitivity survey. The figure shows implanted sources that were either recovered (blue stars), not recovered (red crosses), or recovered but rejected due to confusion with existing sources (green squares). The horizontal axis is the semi-major axis of implanted objects in thousands of kilometers. The left-hand vertical axis is brightness relative to Haumea (when brightest). The right-hand vertical axis is the radius of a spherical satellite assuming an albedo ($\sim$0.7) similar to Haumea's. Diamonds represent the known satellites Namaka and Hi'iaka; the vertical lines are guides to the eye at their respective semi-major axes. Purple triangles represent the moons of Pluto --- Nix, Hydra and Kerberos --- according to brightness relative to the primary. (The smallest moon Styx, with brightness approximately $6\times 10^{-6}$ that of Pluto, is below the range of brightness represented on this figure.) Because of differences in geocentric distance and albedo, the approximate radius does not directly apply to these three points.
Figure \ref{svsb} is similar but shows distance in projected separation instead of semi-major axis. Bounds on brightness and semi-major axis were chosen as described in $\S$3.1. The unrecovered implantations at semi-major axis $\gtrsim 200\times 10^3$ km are not found because their distance from Haumea often places them outside the subimages searched. This figure shows that satellites with radii as low as $\sim$8 km would be detectable in much of the space searched, and that our lower detection limit on semi-major axis is limited by the properties of the dataset, not by the sensitivity of the non-linear shift-and-stack technique. Nix and Hydra-like objects would be detected around Haumea, while Styx and Kerberos-like objects would still be too faint, mostly due to Haumea's greater distance (50 AU compared to Pluto's 30 AU). } \label{avsb} \end{figure} \begin{figure} \centering \plotone{f4.ps} \caption{(Color online) Results from the sensitivity survey. The lower horizontal axis is the sky-plane projected separation from Haumea in arcseconds, while the upper axis gives the approximate projected distance from Haumea in thousands of kilometers. The vertical axes are brightness and radius of implanted sources, as described in the caption to Figure \ref{avsb}. Symbols connected by horizontal lines show the maximum and minimum apparent distance from Haumea of the implanted object during the 15-hour ``observation.'' As in Figure \ref{avsb}, implanted sources were either recovered (blue stars), not recovered (red crosses), or recovered but rejected due to confusion with existing sources (green squares). Note that implantations at separations greater than 4 arcseconds are unrecovered because they fall outside the region of the $\sim$4" subimages that were searched (see $\S$5.3). The vertical dashed line is a guide to the eye for this rough cutoff.
No sources were implanted at separations larger than 10 arcseconds, corresponding to the upper limit on semi-major axis shown in Figure \ref{avsb}. Diamonds represent Hi'iaka and Namaka as they appear in the observation; Namaka's separation is only given for the first four orbits, where its presence is measured reliably enough for precise astrometry; as these observations were designed to catch Namaka in a mutual event, its projected separation would approach very low values if all ten orbits were included.} \label{svsb} \end{figure} \section{DISCUSSION} The constraints on undiscovered Haumean satellites can be divided into three categories based on orbital semi-major axis: close-in satellites ($a$ $\lesssim$ 10000 km), intermediate satellites (10000 km $\lesssim$ $a$ $\lesssim$ 350000 km), and distant satellites ($a$ $\gtrsim$ 350000 km). \subsection{Limits on Close-in Satellites} At a semi-major axis of $\sim$10000 km, the maximum separation of a satellite from Haumea would be 6.9 WFC3 pixels. Within 7 pixels ($\lesssim 4$ PSF FWHM) of Haumea, it is very difficult to recover objects due to imperfect subtraction of Haumea's PSF. It is possible that an empirical PSF subtraction would perform better for recovering very close-in satellites, but we do not consider such an approach here. As can be seen in Figure \ref{svsb}, there is the expected anti-correlation between the brightness of an object that can be recovered and its separation from Haumea: close in, only brighter objects can be found. However, there are dynamical reasons to expect that this region is nearly devoid of satellites. Due to Haumea's highly triaxial shape, the orbital region near Haumea is strongly perturbed and long-term stable orbits are difficult to maintain. According to \citet[][]{1994Icar..110..225S}, periods less than about 10 times the spin period are unlikely to be stable due to primary-spin-satellite-orbit resonances.
In Haumea's case, this is exacerbated by the additional effects of tidal evolution and other dynamically excited satellites \citep[][]{1999AJ....117..603C,2013AJ....146...89C,2014arXiv1407.1059C}. An orbital period 10 times the spin period corresponds to a semi-major axis of about 5000 km (about 5 times the long-axis radius of Haumea). While this is about twice as distant as the Roche radius, we consider it the inner limit for long-term dynamical stability. Even if satellites were originally found in such short orbits, it is possible that long-term tidal evolution would have moved them to a more detectable distance. A detailed analysis by \citet{2013AJ....146...89C} calls into question the originally proposed idea that the satellites tidally evolved outwards from orbits near the Roche lobe. While extensive tidal evolution might not have taken place, it is worth noting that scaling the tidal evolution from the properties of the other satellites \citep{2005ApJ...632L..45B} indicates that even for the smallest satellites we could have detected (which evolve the shortest distance due to tides), tidal evolution would have placed them near or beyond the $\sim$5000 km detection threshold. There remains a range of semi-major axes from 5000--10000 km that could potentially harbor very small undetected satellites, which would be somewhat protected from dynamical and tidal instability. By lying well within Haumea's PSF, these satellites would also generally evade detection. Furthermore, some satellites would not have been detected if they had an orbital phase placing them at undetectably small distances (although this is mitigated somewhat by observations at a variety of times). Overall, it is difficult to hide stable inner ($a \lesssim$10000 km) Haumean satellites with radii $\gtrsim$30 km.
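The two scales quoted in this subsection can be checked with Kepler's third law and simple geometry. The system mass, spin period, geocentric distance and pixel scale below are standard literature values adopted for this sketch, not parameters taken from the analysis itself.

```python
import math

G = 6.674e-11                     # m^3 kg^-1 s^-2
M_HAUMEA = 4.006e21               # system mass in kg (RB09 value, assumed)
SPIN_PERIOD_S = 3.9155 * 3600.0   # Haumea spin period in s (assumed)
AU_KM = 1.496e8
GEO_DIST_AU = 50.0                # approximate geocentric distance (assumed)
WFC3_PIX = 0.04                   # WFC3/UVIS plate scale, arcsec/pixel (assumed)

def kepler_semimajor_km(period_s, m_kg=M_HAUMEA):
    """Semi-major axis for a given orbital period; satellite mass neglected."""
    return (G * m_kg * period_s**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0) / 1e3

def max_separation_pixels(a_km, dist_au=GEO_DIST_AU, pix=WFC3_PIX):
    """Maximum on-sky separation (in pixels) for an orbit of semi-major axis a_km."""
    arcsec = a_km / (dist_au * AU_KM) * 206265.0
    return arcsec / pix

# ~5000 km inner stability limit from P = 10 x spin period
a_limit = kepler_semimajor_km(10.0 * SPIN_PERIOD_S)
# ~6.9 pixel maximum separation for a 10000 km orbit
sep = max_separation_pixels(10000.0)
```

With these inputs, the ten-spin-period orbit lands at roughly 5100 km and a 10000 km orbit subtends about 6.9 WFC3 pixels, consistent with the numbers quoted above.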
\subsection{Intermediate Satellites} At semi-major axes between about 10000 and 350000 km lies the region near where the other two moons are detected (at semi-major axes of 25600 km for Namaka and 49900 km for Hi'iaka, RB09). At this distance, contamination from Haumea is negligible and the main limitations to detecting satellites are insufficient SNR or falling beyond the edge of the image. By using the non-linear shift-and-stack we maximize the search depth, particularly closer to Haumea. The search depth can be reported as relative brightness (in magnitudes and flux) and as the radius of a spherical satellite assuming the same albedo as Haumea. As is usual for such deep searches, the recovery rate is a function of magnitude (Figures \ref{avsb} and \ref{svsb}). We reliably detect satellites at $-9.2$ magnitudes (0.0002 relative brightness, radius of 10 km), our recovery rate is roughly 50\% at $-10$ magnitudes (0.0001 relative brightness, radius of 8 km), and our best case recovery is at $-10.4$ magnitudes (0.00007 relative brightness, radius of 6 km). Following typical practice, we summarize the recovery depth using the 50\% recovery rate. Note that it is possible that the albedo of the satellites is even higher; using Haumea family member 2002 TX300's measured albedo of 0.9 \citep{2010Natur.465..897E} instead of Haumea's presumed 0.7 albedo \citep{2014EM&P..111..127L} would imply a radius detection threshold of only $\sim$7 km (or $\sim$5 km in the best case). While close approaches to Hi'iaka and Namaka as projected on the sky would result in a missed detection for faint objects, this is generally unlikely (even for orbits coplanar with the known satellites, which are near edge-on, RB09).
Close approaches to Hi'iaka and Namaka are extremely unlikely to happen at more than one epoch\footnote{Unlike irregular satellites of the giant planets, long-term tidal stability precludes Hi'iaka or Namaka from being binaries themselves.}, thus any missed detection would be mitigated for moderately bright objects by the non-detection of satellites in other datasets. We therefore expect that this region of the Haumean system does not contain undiscovered satellites larger than $\sim$8 km in radius. Our results compare favorably with the current state of knowledge regarding the small satellites of Pluto. From the New Horizons flyby, we now have detailed knowledge of the albedos (about 0.5) and sizes of the small satellites: $\sim$10 km for Styx and Kerberos and $\sim$40 km for Nix and Hydra \citep{2015Sci...350.1815S}. As Figure \ref{avsb} shows, we predict that a satellite of apparent magnitude relative to Haumea similar to that of Hydra or Nix around Pluto ($-8.7$ and $-9.2$ magnitudes, respectively) would fall above our detection limit. With the higher expected albedo (0.7) of Haumean satellites, we would have detected objects as large as Styx and Kerberos. We conclude that Haumea very likely does not have small satellites similar to Pluto's.
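The magnitude-to-size conversions quoted in this subsection follow from the standard flux-ratio relations for spheres of a given albedo. The 800 km effective radius for Haumea used below is an illustrative round number consistent with the quoted thresholds, not a value adopted by the paper.

```python
import math

def flux_ratio(delta_mag):
    """Relative brightness from a magnitude difference; satellite magnitudes
    relative to the primary are quoted as negative numbers here."""
    return 10.0 ** (0.4 * delta_mag)   # e.g. delta_mag = -10 -> 1e-4

def satellite_radius_km(rel_brightness, primary_radius_km=800.0,
                        p_primary=0.7, p_satellite=0.7):
    """Radius of a sphere with the given flux ratio to the primary:
    f = (r/R)^2 (p_sat/p_pri)  =>  r = R sqrt(f p_pri / p_sat)."""
    return primary_radius_km * math.sqrt(rel_brightness * p_primary / p_satellite)
```

With these relations, the $-10$ magnitude recovery limit corresponds to a relative brightness of $10^{-4}$ and a radius near 8 km, and raising the assumed satellite albedo to 0.9 lowers the threshold radius to about 7 km, as stated above.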
\begin{deluxetable*}{llrrrrrc} \label{magnitudes} \tabletypesize{\footnotesize} \tablewidth{0pt} \tablecaption{Summary of Estimated Properties of Dwarf Planet Satellites} \tablehead{\colhead{Object} & \colhead{Satellite} & \colhead{Relative Brightness} & \colhead{$H_{sat}$\tablenotemark{a}} & \colhead{$V_{sat}$\tablenotemark{a}} & \colhead{Radius\tablenotemark{b}} & \colhead{$a$\tablenotemark{c}} & \colhead{Ref}\\ & & (magnitudes) & & & (km) & ($10^3$ km) &} \startdata Haumea & Hi'iaka & $-3.3$ & $3.4$ & $20.5$ & 200 & $50$ & 1 \\ Haumea & Namaka & $-4.6$ & $4.7$ & $21.8$ & 150 & $26$ & 1\\ Haumea & ``close'' upper limit & $-6.7$ & $6.8$ & $23.9$ & 30 & $\lesssim$10 & 2\\ Haumea & ``intermediate'' upper limit & $-10.0$ & $10.1$ & $27.6$ & 8 & 10-350 & 2\\ Haumea & ``distant'' upper limit & $-6.2$ & $6.3$ & $23.4$ & 40 & $\gtrsim$350 & 2\\ \hline Pluto & Charon & $-2.6$ & 1.9 & 16.6 & 350 & $20$ & 3 \\ Pluto & Hydra & $-8.7$ & 8.0 & 22.7 & 41 & $64$ & 3 \\ Pluto & Nix & $-9.2$ & 8.5 & 23.2 & 35 & $49$ & 3 \\ Pluto & Kerberos & $-12$ & 11 & 26 & 12 & $59$ & 4 \\ Pluto & Styx & $-13$ & 12 & 27 & 11 & $42$ & 5 \\ \hline Eris & Dysnomia & $-6.7$ & 5.5 & 25.4 & 60 & $37$ & 6 \\ Eris & ``close'' upper-limit & $-5.8$ & 4.6 & 24.5 & 80 & $\gtrsim$18 & 6 \\ Eris & ``distant'' upper-limit & $-8.2$ & 7.0 & 26.9 & 30 & $\gtrsim$37 & 6 \\ \hline Makemake & S/2015 (136472) 1 & $-7.8$ & 7.4 & 24.7 & 25 & $\sim$100 & 8\\ Makemake & upper-limit &$-10$ & 9.6 & 26.9 & 8 & $\gtrsim$30 & 7,8 \\ \enddata \tablecomments{Magnitudes and semi-major axes of bodies in KBO systems. The relative magnitude of the faintest detectable bodies in our search is -10, comparable to that of Hydra and Nix. For Eris and Makemake, values are more approximate and/or interpolated from published estimates. 
We do not list the large number of KBO binaries \citep[e.g.][]{2008ssbn.book..345N} or the KBO triple 1999 TC36 \citep{2010Icar..207..978B} since the formation of these systems appears to be distinct from processes associated with dwarf planets. In particular, these binaries tend to be of nearly equal brightness without known small additional companions.} \tablenotetext{a}{Approximate absolute magnitude ($H$) or approximate apparent magnitude in a typical optical filter ($V$) of the satellite. These are calculated by combining the relative magnitude with the absolute and typical apparent magnitudes of the KBOs from JPL Horizons. These are meant mostly for illustration purposes and generally have significant uncertainties of $\lesssim$1 magnitude.} \tablenotetext{b}{Radius estimate in kilometers, listed for illustration purposes only. Quoted radii for the highly ellipsoidal small satellites of Pluto are volumetric means (S. Porter, pers. comm.). Note that these have albedos of 0.5, somewhat less than assumed for Haumea's moons. For simplicity and ease of inter-comparison, the observed moons of Eris and Makemake are given an estimated albedo of 0.7 like the Haumea moons. The actual albedos and sizes of these moons are not well constrained.} \tablenotetext{c}{Approximate semi-major axis in units of thousands of kilometers. For upper limits, this is the approximate range of semi-major axes where the limit applies. The discovery of S/2015 (136472) 1 by \citet{2016arXiv160407461P} within the magnitude and distance ``upper-limit'' quoted by \citet{2008ssbn.book..335B} is easily attributed to the difficulty of detecting moons with small semi-major axes and/or edge-on orbits in single-epoch observations, when the actual on-the-sky separation is often small enough to render the moon indistinguishable from the primary \citep{2016arXiv160407461P}. The upper limits reported here should be understood with that caveat.
} \tablerefs{ (1) RB09 \citep{2009AJ....137.4766R} \quad (2) $\S$4, this paper \quad (3) \citet{2006Natur.439..943W} \quad (4) \citet{2011IAUC.9221....1S} \quad (5) \citet{2012IAUC.9253....1S} \quad (6) \citet{2007Sci...316.1585B} \quad (7) \citet{2008ssbn.book..335B} \quad (8) \citet{2016arXiv160407461P} } \end{deluxetable*} \subsection{Distant Satellites} Satellites with semi-major axes beyond 350000 km may not have been detected in the Program 12243 WFC3 data due to the small field of view employed for the subarray observations. Other HST and Keck observations that were not as deep covered a larger area and were also searched for satellites. We estimate that satellites larger than about 40 km in radius (again assuming an albedo similar to Haumea's) would have been detected even several tens of arcseconds away by, e.g., the WFPC2 observations (with a field of view of 162"). Because the motion of satellites in this region is negligible over the relevant timescales, the shift-and-stack method is not necessary. Using half the size of Haumea's Hill sphere at perihelion as an estimate of the full region of stable satellites \citep{2008AJ....136.2453S}, the semi-major axis of the most distant stable satellites would be about 4.6 $\times$ $10^6$ km or 124". About half of this volume has been covered down to 40 km in radius. For comparison, the Program 12243 deep observations covered separations up to about 10" around Haumea, or 350000 km. Thus, this limit on very small intermediate-range satellites corresponds to only about 0.5\% of the stable region by projected area. \section{CONCLUSIONS} By efficient application of the PK10 method for non-linear shift-and-stack and recovery of known implanted sources, we have strongly limited the possibility of undetected satellites in orbit around Haumea. As Figure \ref{svsb} shows, we detect no satellites larger than $\sim$8 km in radius with separations between 10000 and 350000 km.
This same region around Pluto contains Charon and 4 small satellites which, by size, would all have been detected in this search. Nearer to Haumea, diffraction limits make distinguishing small satellites difficult, but there are dynamical reasons to expect that this region is mostly unpopulated. Further from Haumea, other observations would have detected satellites larger than $\sim$40 km in radius within much of the entire region of possible stable satellites. Significant improvement in the detection limits on smaller satellites would require extensive observations that are unlikely in the foreseeable future, until perhaps deep observations with the James Webb Space Telescope. Though Pluto contains multiple small moons and some formation theories \citep[e.g.,][]{2008arXiv0802.2951L} predict them in the Haumea system, we find no additional Haumean moons. Considering upper limits from other studies (summarized in Table 1), Nix/Hydra analogues would have been discovered if present around Makemake, and they would be near the detection threshold around Eris. As the properties of the dwarf planet satellite systems differ significantly, it was not anticipated that Pluto's small satellites would necessarily find counterparts around Haumea, though it seems that Makemake may have a satellite of similar size \citep{2016arXiv160407461P}. Our null result affirms that, for the time being, Pluto is the only known KBO with a retinue of small satellites, though such satellites could have been detected or nearly detected around all four dwarf planets. This implies that the satellite systems may result from somewhat different formation pathways, although all the dwarf planet satellites are probably connected with a collisional formation. Pluto's small satellite system may be connected with Charon since, from a dynamical perspective, the other dwarf planet satellites are more like small moons compared to the near-equal-sized Pluto-Charon binary.
We demonstrate that the non-linear shift-and-stack is a valuable tool for satellite searches. Utilizing the application techniques developed herein, this method can sufficiently capture the nonlinearity of the orbits of fast-moving satellites close to the primary. We have applied this technique to the regime of searching for sub-threshold satellites around Haumea, but it could also be used for other long-observation datasets (PK10). Besides the discovery of new moons, it has promise for improving astrometric parameters for known faint moving satellites (e.g., precovery observations of Styx and Kerberos). The tractability of the non-linear shift-and-stack also promotes the possibility of applying it to the general search for KBOs, as originally proposed by PK10. Other applications for improving sensitivity are also possible, e.g., searching for moving exoplanets in direct imaging campaigns \citep{2013ApJ...771...10M}. To facilitate further analyses, all data and source codes used in this project are available upon request. The sensitivity and tractability of the method presented in this work suggest that, when appropriate, it should be applied to other satellite searches in the solar system. The non-detection of small satellites around Haumea increases our understanding of this intriguing object and contributes to our understanding of the formation and evolution of multiple KBO systems. \acknowledgements We thank Alex Parker, Danielle Hastings, and the anonymous referee for discussions and suggestions that improved the manuscript. DR acknowledges the support of a Harvard Institute for Theory and Computation Fellowship. This work is based on NASA/ESA Hubble Space Telescope Program 12243. Support was provided by NASA through grant HST-GO-12243 from the Space Telescope Science Institute (STScI), which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. \bibliographystyle{apj}
\section{Introduction} Event shape variables are known to exhibit rather large hadronisation effects, not treated by purely perturbative calculations. It has been shown \cite{Webber:1994zd} that the size of these corrections varies as $(\Lambda/Q)$, with $Q$ being the hard scale of the process, for most of the event shape observables. This article focuses on power corrections in the approach initiated by Dokshitzer and Webber \cite{Dokshitzer:1995zt}, since this is the most complete theory and the one most often used in comparisons to data. There is a wealth of studies performed in \mbox{$e^+e^-$~} annihilation which give support to this concept, cf.\ contribution {\bf R002} in these proceedings. It is interesting to extend these studies to deep-inelastic scattering (DIS), in order to investigate the universality of the ansatz and to check for possible modifications of the hadronisation process due to the presence of a proton remnant. After power corrections to mean values of event shape variables became available \cite{Webber:1995ka}, the H1 and later also the ZEUS Collaboration published analyses which corroborated this concept \cite{Adloff:1997gq,Chekanov:2002xk}. A comprehensive review of the subject of event shapes can be found in \cite{Dasgupta:2003iq}. When comparing $ep$ scattering to \mbox{$e^+e^-$~} annihilation, one moves from an $s$-channel to a $t$-channel exchange, with the negative four-momentum transfer $Q$ corresponding to the centre-of-mass energy $\sqrt s$. At HERA a large range of the scale is available in a single experiment, typically spanning $5 < Q < 115\,{\rm GeV}$. The theoretical treatment is complicated by initial state singularities related to the incoming proton, which are absorbed in the parton density functions (pdf).
In order to reject the proton dissociation part of an event (the remnant), the event shapes are calculated in the Breit frame of reference, where the separation between particles from the hard scattering and the remnant is clearest. The boost to the Breit frame is determined by the event kinematics, and the boosted particles are separated into hemispheres with pseudorapidity $\eta<0$ (current hemisphere) and $\eta>0$ (remnant hemisphere). Usually only particles of the current hemisphere enter the event shape definition, where this hemisphere resembles to some extent one half of an \mbox{$e^+e^-$~} event. In order to ensure infrared safety at all orders, a minimal energy in the current hemisphere is required as part of the observable's definition, e.g.\ $E_{CH}>Q/10$. \section{Event Shape Variables} The observables covered in the following may be separated into three categories: 2-jet event shapes, 3-jet event shapes and jet rates. The term ``$n$-jet'' denotes that at least $n$ particles in the final state are needed for a non-vanishing value of the variable. Note that particles from the proton remnant are not included in $n$. Common examples of 2-jet event shapes are thrust, jet broadening, jet mass and the C-parameter. For this class of observables the most advanced theory predictions are available; therefore, many analyses concentrate on them. 3-jet variables which have been proposed are the out-of-plane momentum and the azimuthal correlation. While the theoretical predictions for these variables are less complete, additional insights into power corrections are possible due to the sensitivity to hadronisation from a gluon and applications to hadron-hadron collisions. Closely related to event shapes are jet rates, which make use of a jet clustering algorithm such as the $k_t$ or the JADE algorithm.
Here the algorithm is applied to particles in both hemispheres, and the $n$-jet rate is defined as the maximum value of the cut-off parameter $y_{\rm cut}$ at which the event is clustered to $n+1$ jets ($+1$ denotes the proton remnant). A distinction particular to DIS can be made for 2-jet event shapes: between those making use of the exchanged boson axis (mostly that of a virtual photon) in their definition and those without that reference axis, as in the definitions used in \mbox{$e^+e^-$~} annihilation. For thrust and jet broadening both variants are investigated, whereby the explicit use of the boson direction implies sensitivity to radiation into the remnant hemisphere through recoil effects on the current quark~\cite{Dasgupta:2002dc}. The thrust variable $\tau$\/ with respect to the boson axis is defined as \begin{eqnarray} \tau=1-T\quad\mathrm{with}\quad T&=&\frac{\sum_h|\vec p_{z,h}|}{\sum_h|\vec p_h|} \ , \label{eqn:thrust} \end{eqnarray} and the thrust variable $\tau_C$ as \begin{eqnarray} \tau_C=1-T_C\quad\mathrm{with}\quad T_C&=&\max_{\vec n_T} \frac{\sum_h|\vec p_h\cdot \vec n_T|}{\sum_h|\vec p_h|} \ , \label{eqn:thrustc} \end{eqnarray} where the direction $\vec n_T$ maximises the sum of the longitudinal momenta of all particles in the current hemisphere along this axis. The jet broadening is defined as \begin{eqnarray} B&=&\frac{\sum_h|\vec p_{t,h}|}{2\sum_h|\vec p_h|} \ . \label{eqn:bparameter} \end{eqnarray} The squared jet mass is calculated as \begin{eqnarray} \rho&=&\frac{(\sum_h E_h)^2 - (\sum_h\vec p_h)^2}{(2\sum_h|\vec p_h|)^2}. \label{eqn:jetmass} \end{eqnarray} In the following the symbol $\rho_0$ is used, which indicates that in the above definition the hadrons are treated as massless, replacing the energy $E_h$ by the modulus of the 3-momentum $|\vec p_h|$. This adjustment is made since the theoretical predictions assume the partons to be massless. Mass effects can be huge, especially for the jet mass observable.
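The size of the mass effect on the jet mass can be made concrete with a small numerical sketch (momenta in GeV; the single-hadron configuration and the pion mass below are illustrative choices, not taken from any analysis discussed here):

```python
import numpy as np

def jet_mass(p, m=None):
    """Squared jet mass from an (n, 3) array of current-hemisphere momenta.
    With masses m, E_h = sqrt(|p_h|^2 + m_h^2) gives rho; with m=None the
    hadrons are treated as massless, giving rho_0."""
    mod = np.linalg.norm(p, axis=1)
    E = mod if m is None else np.sqrt(mod**2 + np.asarray(m, dtype=float)**2)
    P = p.sum(axis=0)
    return (E.sum()**2 - P @ P) / (2.0 * mod.sum())**2

p = np.array([[0.0, 0.0, -0.5]])   # one 0.5 GeV hadron along the boson axis
rho0 = jet_mass(p)                  # massless convention: exactly zero
rho = jet_mass(p, m=[0.1396])       # pion mass: a sizeable non-zero jet mass
```

Even this single-hadron example shows why the massless $\rho_0$ convention is used when comparing to massless-parton predictions: the hadron mass alone generates a non-vanishing $\rho$.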
The $C$-parameter is defined as \begin{eqnarray} C&=&\frac{3}{2}\frac{\sum_{h,h'}|\vec p_h||\vec p_{h'}|\sin^2\theta_{hh'}} {(\sum_h|\vec p_h|)^2} \ , \label{eqn:cparameter} \end{eqnarray} where $\theta_{hh'}$ is the angle between particles $h$ and $h'$. Note that in all these definitions the momenta are defined in the Breit frame and the sums extend over all particles in the current hemisphere. The out-of-event-plane momentum is defined as \begin{eqnarray} K_{\textrm{out}}={\sum_h}' |p_h^{\textrm{out}}|. \end{eqnarray} Results are presented in terms of $K_{\textrm{out}}/Q$. Here $p_h^{\textrm{out}}$ is the out-of-plane momentum of the hadron $h$, with the event plane defined by the proton momentum $\vec P$ in the Breit frame and the unit vector $\vec n$ which enters the definition of thrust major: \begin{eqnarray} T_M=\max_{\vec n}\frac{1}{Q}{\sum_h}' |\vec p_h\cdot \vec n|,\qquad \vec n\cdot \vec P=0 \ . \end{eqnarray} To avoid measurements in the beam region, the sum indicated by ${\sum_h}'$ extends over all hadrons with pseudorapidity $\eta<3$ in the Breit frame. The restriction to only the current hemisphere ($\eta<0$), as for the 2-jet shapes, would be too restrictive because of the extended phase space available for three partons. For reasons discussed in \cite{Banfi:2001ci}, only events with $p_t\sim Q$ should be selected, which is accomplished by a cut on the (2+1)-jet resolution $y_2$ defined by the $k_t$ clustering algorithm: $0.1<y_2<2.5$. The azimuthal correlation between the hadrons labelled $h$ and $i$ is defined as \begin{displaymath} \chi=\pi-|\phi_h-\phi_i| \ , \end{displaymath} where the observable is constructed by summing over all hadron pairs with a weight \begin{displaymath} w=\frac{p_{th}p_{ti}}{Q^2} \ . \end{displaymath} The azimuth of hadron $h$ in the Breit frame is denoted by $\phi_h$. Predictions of mean values of 2-jet event shapes are available up to next-to-leading order (NLO) in the strong coupling, together with power corrections.
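For concreteness, the closed-form 2-jet definitions above (Eqs.~\ref{eqn:thrust}, \ref{eqn:bparameter}, \ref{eqn:jetmass} and \ref{eqn:cparameter}, in the massless $\rho_0$ convention) can be evaluated from an array of Breit-frame current-hemisphere momenta. This is a minimal sketch, with the $z$-axis taken along the boson direction:

```python
import numpy as np

def event_shapes(p):
    """tau, B, rho_0 and C from an (n, 3) array of current-hemisphere
    momenta in the Breit frame (z along the boson axis, hadrons massless)."""
    mod = np.linalg.norm(p, axis=1)
    psum = mod.sum()
    tau = 1.0 - np.abs(p[:, 2]).sum() / psum                 # thrust w.r.t. boson axis
    B = np.hypot(p[:, 0], p[:, 1]).sum() / (2.0 * psum)      # jet broadening
    P = p.sum(axis=0)
    rho0 = (psum**2 - P @ P) / (2.0 * psum)**2               # massless jet mass
    cos_th = (p @ p.T) / np.outer(mod, mod)
    C = 1.5 * (np.outer(mod, mod) * (1.0 - cos_th**2)).sum() / psum**2
    return tau, B, rho0, C
```

In the pencil-like limit of a single hadron along the axis all four variables vanish, as they should; $\tau_C$ is omitted because it requires a maximisation over the axis $\vec n_T$.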
Mean values of 2-jet rates for the JADE and the $k_t$ algorithm have been compared to NLO calculations; however, the power corrections for these observables are not completely known. Soft gluon resummations to next-to-leading-logarithmic (NLL) precision have been performed for distributions of 2-jet and 3-jet event shapes. These have been matched to NLO fixed order predictions for the 2-jet event shapes only; power corrections are known for both. Distributions of jet rates can only be compared to NLO alone, as neither matched soft gluon resummations nor power corrections are available yet. \section{Mean Values} In the Dokshitzer and Webber approach, the mean value of an event shape variable is modified through non-perturbative effects by an additive constant~\cite{Dokshitzer:1995zt} \begin{equation} \mean{F} = { \mean{F}^{\textrm{\scriptsize pQCD}}}+a_F{ \mathcal{P}}, \end{equation} where $a_F$ is of order one and can be calculated perturbatively. The power correction term $\mathcal{P}$ is assumed to be universal for all event shape variables. It is proportional to $1/Q$ and evaluated to be \begin{equation} \mathcal{P}=\frac{16}{3\pi}\mathcal{M}\frac{\mu_I}{Q} \left [\alpha_0(\mu_I)-\alpha_s(Q)-\frac{\beta_0}{2\pi} \left(\ln \frac{Q}{\mu_I}+\frac{K}{\beta_0}+1\right) \alpha_s^2(Q) \right ] \; , \label{pformula} \end{equation} where $\beta_0=11-2\,n_f/3$, $K = 67/6 -\pi^2/2-5\, n_f/9$, and $n_f=5$ is the number of active flavours. The so-called Milan factor $\mathcal{M}\simeq0.95$ in the $\overline{\mbox{\rm MS}}$\ scheme ensures universality at the two-loop level~\cite{Dasgupta:1998xt}. This ansatz results in only one single non-perturbative parameter $\alpha_0 = \mu_I^{-1}\int_0^{\mu_I} \alpha_\mathrm{eff}(k)\,\mathrm{d}k$, the first moment of the effective coupling integrated over the low-scale region up to the matching scale $\mu_I$. $\alpha_0$ has to be determined from data and is expected to be around $0.5$.
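As a numerical illustration, Eq.~(\ref{pformula}) can be evaluated directly; the values $\alpha_0=0.5$, $\mu_I=2$ GeV and the $\alpha_s(Q)$ inputs in the test are conventional choices assumed for this sketch, not fit results:

```python
import math

def power_correction(Q, alpha_s, alpha_0=0.5, mu_I=2.0, nf=5, milan=0.95):
    """Dokshitzer-Webber power-correction term P as a function of the
    hard scale Q (GeV) and alpha_s(Q), following Eq. (pformula)."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    K = 67.0 / 6.0 - math.pi**2 / 2.0 - 5.0 * nf / 9.0
    bracket = (alpha_0 - alpha_s
               - beta0 / (2.0 * math.pi)
               * (math.log(Q / mu_I) + K / beta0 + 1.0) * alpha_s**2)
    return 16.0 / (3.0 * math.pi) * milan * (mu_I / Q) * bracket
```

The $\mu_I/Q$ prefactor dominates the scale dependence, so $\mathcal{P}$ falls roughly as $1/Q$ across the HERA range.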
For the perturbative part of the calculation the NLO programs DISENT \cite{Catani:1997vz} and DISASTER++ \cite{Graudenz:1997gv} have been used. The uncertainty due to missing higher orders of the perturbative series is, by convention, estimated by a variation of the renormalisation scale by a factor of two ($\mu_r=Q/2$ and $\mu_r=2Q$). \subsection{2-jet Event Shapes} \begin{figure} \includegraphics[width=100mm]{d05-225f3.eps} \caption{Mean values of event shape variables corrected to the hadron level as a function of the scale $Q$. The data, presented with statistical errors (inner bars) and total errors (outer bars), are compared with the results of NLO QCD fits including power corrections (PC). The dashed curves show the NLO QCD contribution to the fits \cite{Aktas:2005tz}.} \label{f5} \end{figure} The H1 Collaboration measured the mean values of five event shapes as a function of $Q$ \cite{Aktas:2005tz}, shown in Fig.~\ref{f5}. A steady decrease of the means with rising $Q$ is observed, and a good description of the data is obtained by a fit of DISASTER++ supplemented by power corrections \cite{Dasgupta:1998ex,Dasgupta:1998xt}. For comparison the fixed order contribution alone is also given, which demonstrates the importance of the power corrections, especially for the non-photon-axis variables $\tau_c$, $\rho_0$ and the C-parameter. Earlier predictions of the jet broadening \cite{Dokshitzer:1998pt} were not able to describe the data because of the more involved interplay between perturbative and non-perturbative effects for this observable. \begin{figure} \includegraphics[width=100mm]{zeusmeans.eps} \caption{The mean values of event-shape variables as a function of $Q$. The solid lines are the results of the fit to the data of the predictions of the sum of NLO pQCD calculations from DISASTER++ and the power corrections.
The dashed lines are the DISASTER++ contribution to the fit alone \cite{Everett:2004fg}.} \label{f6} \end{figure} Similar results are obtained by the ZEUS Collaboration \cite{Everett:2004fg}, shown in Fig.~\ref{f6}. Note the logarithmic abscissa and the event shape labels, which differ from those used by H1: $1-T_T$ corresponds to $\tau_c$, $1-T_\gamma$ to $\tau$, $B_\gamma$ to $B$ and $M^2$ to $\rho_0$. Surprisingly, the fitted power correction for the thrust with respect to the boson axis ($1-T_\gamma$) turns out to be slightly negative. The fitted values of $\alpha_s(m_Z)$ and $\alpha_0$ are shown in Figs.~\ref{f7} and \ref{f8}. \begin{figure} \includegraphics[width=110mm]{zeusmeansfit.eps} \caption{Extracted parameter values for ($\alpha_s$,$\alpha_0$) from fits to the mean values of the shape variables. The vertical line and the shaded area indicate the world average of $\alpha_s(m_Z)$ \cite{Everett:2004fg}.} \label{f7} \end{figure} \begin{figure} \includegraphics[width=100mm]{d05-225f7.eps} \caption{Results of fits to the mean values of $\tau$, $B$, $\rho_0$, $\tau_C$ and the $C$-parameter in the $(\alpha_s,\alpha_0)$ plane. The $ 1\sigma$ contours correspond to $\chi^2 = \chi^2_{\rm min}+1$, including statistical and experimental systematic uncertainties \cite{Aktas:2005tz}. The value of $\alpha_s$ (vertical line) and its uncertainty (shaded band) are taken from \cite{Bethke:2004uy}.} \label{f8} \end{figure} The $1\sigma$ contours denote experimental errors alone, which are only at the few-percent level. These errors are mainly caused by imperfect knowledge of the electromagnetic energy scale of the calorimeter, which enters the boost to the Breit frame of reference. Not shown are the sizeable theoretical errors (dominated by scale uncertainties due to missing higher orders) of about ten percent.
In both analyses $\alpha_0$ is found to be universal at the $10\%$ level, and the spread of the fitted $\alpha_s$ is larger than expected from the experimental errors. There are some differences between the results of the two collaborations, e.g.\ the values of $\alpha_0$ fitted by H1 cluster around a slightly higher value than those from ZEUS. However, the overall pattern looks quite similar, with $\tau$ having a large error and a strong correlation between the fitted parameters. Also the observables which make no use of the virtual boson axis ($\tau_c$, $\rho_0$ and the $C$-parameter) prefer rather high values of $\alpha_s(m_Z)$ (compared to the world mean), in contrast to the broadening $B$, which exhibits the lowest values in both analyses. In contrast to \mbox{$e^+e^-$~} annihilation, where the perturbative coefficients are constant, in DIS they depend on $x$ according to the parton density functions and the accessible $x$-range at different values of $Q$. Variables defined with respect to the boson axis, like $\tau$, are expected to show a stronger dependence on $x$, which is demonstrated in Fig.~\ref{f9}. \begin{figure} \includegraphics[width=140mm]{d99-193f4.eps} \caption{Mean values of $\tau$ (left) and $C$ (right) versus $Q$ in four different bins of $x$ calculated with DISENT. The lines connect the means belonging to the same $x$ bin \cite{Adloff:1999gn}.} \label{f9} \end{figure} While the perturbative contributions are expected to depend on $x$, the power corrections must not, in order for the concept to work. A former analysis by the ZEUS Collaboration \cite{Chekanov:2002xk} observed an $x$-dependent power correction parameter $\alpha_0$ for $\tau$, $\tau_c$ and even $\rho_0$; however, this observation was not confirmed by more recent investigations. \subsection{Jet Rates} Jet rates show in general smaller hadronisation corrections than event shapes.
Power corrections $\propto (\Lambda/Q)^{2p}$ have been proposed for the JADE and the $k_t$ jet algorithms, with suggested values of $p=1/2$ and $p=1$, respectively. Reliable calculations of the coefficients $a_f$ have not been presented yet. In an H1 analysis \cite{Adloff:1999gn} an attempt was made to fit these coefficients together with $\alpha_s$ and $\alpha_0$; the resulting theory curves are shown in Fig.~\ref{f10}. \begin{figure} \includegraphics[width=130mm]{d99-193f6.eps} \caption{Mean values of $y_{fJ}$ and $y_{kt}$ as a function of $Q$. The error bars represent statistical and systematic uncertainties. Upper part: The full line corresponds to a fit of the pQCD calculation without power corrections. Lower part: The full line corresponds to a power correction fit according to the Dokshitzer-Webber approach. The dashed line shows the pQCD contribution of DISENT in these fits \cite{Adloff:1999gn}.} \label{f10} \end{figure} While the data are described quite well by the fit, the hadronisation corrections turn out to be negative and the fitted values of $\alpha_s$ are unphysical. Since the effect of hadronisation is so small for this class of variables, even more precise experimental data could help to resolve this issue. \section{Distributions} Compared to mean values, the study of the spectra of event shape variables offers several advantages. Firstly, the shape of the distributions is governed by QCD and thus offers more information for fits. Secondly, when fitting it is possible to restrict the range used to an interval where the theory prediction is reliable. \subsection{2-jet Event Shapes} Towards low values of the shape variables, terms due to soft gluon radiation become dominant; these are not included in the NLO prediction used for the mean values. A significant improvement of the description is obtained with a resummation of these soft gluons, matched to the NLO part.
Such a calculation is available in DIS for the 2-jet variables $\tau$, $B$, $\tau_c$, $\rho_0$ and the $C$-parameter. \begin{figure} \includegraphics[width=64mm]{B_p_pt.eps} \includegraphics[width=64mm]{B_p_ptnp.eps} \caption{Spectra of the jet broadening for $\mean{Q}=15{\,\rm GeV}$. The prediction of NLOJET++ with soft gluon resummation (left) needs to be supplemented by power corrections in order to describe the data (right).} \label{f11} \end{figure} Fig.~\ref{f11} shows, as an example, that a large part of the central region of a spectrum is nicely reproduced by the theory, even at a rather low scale of $Q=15{\,\rm GeV}$, where hadronisation effects are not small. The hadronisation correction in the Dokshitzer and Webber approach results in a shift of the differential distributions \begin{equation} \frac{1}{\sigma_{\mathrm{tot}}}\frac{\mathrm{d}\sigma(F)}{\mathrm{d} F}=\frac{1}{\sigma_{\mathrm{tot}}}\frac{{ \mathrm{d}\sigma^{\mathrm{pQCD}}}(F-a_F\mathcal{P})}{\mathrm{d} F}, \end{equation} with the same coefficient $a_F$ and power correction $\mathcal{P}$ as for the mean values. This shift cannot be valid over the whole spectrum; at low values of $F$ it may be applied only for $F \gg a_F\mathcal{P} \sim \mu_I/Q$~\cite{Dokshitzer:1997ew}. Moreover, at large values of $F$ higher order corrections are substantial and the perturbative calculation is not reliable. In consequence, for each analysis one has to decide on the interval used for fits, depending on the observable and $Q$. For the upper bounds the values given in \cite{Dasgupta:2002dc} are used, while for the lower bounds of the fit intervals different methods are applied. In the case of the jet broadening the picture is more complicated, as the shift is supplemented by a squeeze \cite{Dasgupta:2001eq}. The calculations shown in the following were performed using DISASTER++ for the fixed order part and the DISRESUM package \cite{Dasgupta:2002dc} containing the resummed contributions and the power corrections.
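The Dokshitzer--Webber shift of a differential distribution can be illustrated with a short numerical sketch. The toy spectrum and the values of $a_F$ and $\mathcal{P}$ below are purely illustrative assumptions, not the DISRESUM implementation or fitted results:

```python
import numpy as np

def pqcd_spectrum(F):
    """Toy stand-in for the perturbative distribution (1/sigma) dsigma/dF:
    an illustrative, steeply falling function of the shape variable F > 0."""
    return np.exp(-8.0 * F) / F**0.5

def shifted_spectrum(F, a_F, P):
    """Power-corrected spectrum: the perturbative distribution evaluated
    at the shifted argument F - a_F * P (valid only for F >> a_F * P)."""
    return pqcd_spectrum(F - a_F * P)

# Illustrative numbers (assumptions): coefficient a_F = 2 and a power
# correction P ~ mu_I/Q at Q = 15 GeV, chosen so that a_F * P = 0.04.
a_F, P = 2.0, 0.02

F = np.linspace(0.1, 0.5, 9)
print(shifted_spectrum(F, a_F, P))  # spectrum moved towards larger F
```

Because the toy spectrum falls steeply, evaluating it at $F - a_F\mathcal{P}$ raises the distribution at fixed $F$, i.e.\ the whole spectrum is displaced towards larger values of the shape variable, mimicking the hadronisation effect.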
In a recent publication \cite{Aktas:2005tz} the H1 Collaboration has measured the differential distributions of the five event shapes quoted above, shown together with a QCD fit in Figs.~\ref{f1} and \ref{f2}. \begin{figure} \includegraphics[width=130mm]{h1distributions1.eps} \caption{ Normalised event shape distributions corrected to the hadron level for $\tau_C$, $\tau$ and $B$. The data are presented with statistical errors (inner bars) and total errors (outer bars). The measurements are compared with fits based on an NLO QCD calculation including resummation (NLL) and supplemented by power corrections (PC). The fit results are shown as solid lines and are extended as dashed lines to those data points which are not included in the QCD fit \cite{Aktas:2005tz}.} \label{f1} \end{figure} \begin{figure} \includegraphics[width=130mm]{h1distributions2.eps} \caption{Normalised event shape distributions corrected to the hadron level for $\rho_0$ and the $C$-parameter. For details see the caption of Fig.~\ref{f1}.} \label{f2} \end{figure} The data cover a wide range of $\langle Q \rangle = 15 - 116~{\,\rm GeV}$. Except for the highest $Q$ bins, the precision of the measurements is not statistically limited. For each variable the shape of the spectra changes considerably with increasing $Q$, becoming narrower and evolving towards low values. The results of the combined fit for $\alpha_0$ and $\alpha_s(m_Z)$ are displayed in Fig.~\ref{f3}. \begin{figure} \includegraphics[width=110mm]{h1fitsdistributions.eps} \caption{Fit results to the differential distributions of $\tau$, $B$, $\rho_0$, $\tau_C$ and the $C$-parameter in the $(\alpha_s,\alpha_0)$ plane \cite{Aktas:2005tz}. The $1\sigma$ contours correspond to $\chi^2 = \chi^2_{\rm min}+1$, including statistical and experimental systematic uncertainties.
The value of $\alpha_s$ (vertical line) and its uncertainty (shaded band) are taken from \cite{Bethke:2004uy}.} \label{f3} \end{figure} The quality of the fits, expressed in terms of $\chi^2$ per degree of freedom, is found to be between 0.5 and 1.4. It was checked that the results are stable against the omission of data points at the edges of the fit intervals. For all event shape variables consistent values of $\alpha_s(m_Z)$ and $\alpha_0$ are found, with a maximum difference of about two standard deviations between $\tau$ and $C$. The fitted results for the five observables appear to be grouped into those with ($\tau$, $B$) and those without ($\tau_c$, $\rho_0$ and $C$) reference to the boson axis in their definition. A strong negative correlation between $\alpha_s(m_Z)$ and $\alpha_0$ is observed for all variables. The values of the strong coupling $\alpha_s(m_Z)$ are in good agreement with the world average~\cite{Bethke:2004uy}, shown for comparison as the shaded band. The non-perturbative parameter $\alpha_0\simeq 0.5$ is confirmed to be universal within $10\%$. A combined analysis of all event shape variables yields \begin{eqnarray} \alpha_s(m_Z) & = & 0.1198 \pm 0.0013\ ({\rm exp})\ ^{+0.0056} _{-0.0043}\ ({\rm theo}) \ , \nonumber \\[1ex] \alpha_0 & = & 0.476 \pm 0.008 \ ({\rm exp})\ ^{+0.018} _{-0.059}\ ({\rm theo}) \ , \nonumber \end{eqnarray} where the theoretical error is derived from the renormalisation scale uncertainty. The total errors are dominated by this renormalisation scale uncertainty, which suggests that missing higher order terms in the perturbative calculation are important. A less consistent picture with respect to the universality of $\alpha_0$ is provided by preliminary results from the ZEUS Collaboration, shown in Fig.~\ref{f12}. \begin{figure} \includegraphics[width=110mm]{zeusdistrfit.eps} \caption{Extracted parameter values for ($\alpha_s$,$\alpha_0$) from fits to differential distributions of shape variables \cite{Everett:2004fg}.
The vertical line and the shaded area indicate the world average of $\alpha_s(m_Z)$.} \label{f12} \end{figure} While the fitted values of $\alpha_s(m_Z)$ are found to agree with the world average, a rather low value of $\alpha_0$ is obtained for the $C$-parameter, which is not covered by the experimental errors. However, only for $\tau$ and $B$ is the value of $\chi^2$ per degree of freedom around unity; for the remaining variables it is around five. So the question arises whether the boundaries of the fit were stretched beyond the region where the theory is predictive. It is surprising that almost no (or even negative) hadronisation was found for the thrust (with respect to the boson axis) in the case of the ZEUS mean value analysis ($\alpha_0<0.3$ in Fig.~\ref{f6}), while the differential distributions extracted from the same data do not show this property (with $\alpha_0>0.5$). The event shape spectra used for the fit appear to be consistent with those from H1, but there are some differences in the treatment of the systematic errors. Moreover, there is some freedom when choosing options in the matching of the fixed order and the resummed part of the calculation and in deciding on proper fit intervals. Still, both analyses agree with respect to several findings: $\alpha_s(m_Z)$ is compatible with the world mean (which is not the case for the event shape mean values), $\alpha_s(m_Z)$ and $\alpha_0$ have a negative correlation coefficient, and the best description (quantified by $\chi^2$/d.o.f.) is obtained for the boson axis variables. In order to draw more decisive conclusions about the validity of power corrections, a consistent handling of the theory parameters, fit intervals and systematic errors would be desirable. Also, reduced theory uncertainties would help considerably, e.g.\ through a refined combination procedure \cite{Jones:2003yv} or even an NNLO calculation.
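The evolution between the coupling at the measured scales and the quoted $\alpha_s(m_Z)$ is governed by the QCD renormalisation group equation. A minimal one-loop sketch follows; the inputs $\alpha_s(m_Z)=0.118$ and $n_f=5$ are illustrative assumptions, and the actual analyses use higher-order running:

```python
import math

def alpha_s_1loop(Q, alpha_s_mz=0.118, m_z=91.1876, n_f=5):
    """One-loop QCD running coupling: exact solution of the
    renormalisation group equation d(alpha_s)/d(ln Q^2) = -b0 * alpha_s^2."""
    b0 = (33.0 - 2.0 * n_f) / (12.0 * math.pi)
    return alpha_s_mz / (1.0 + b0 * alpha_s_mz * math.log(Q**2 / m_z**2))

# Roughly the <Q> range covered by the HERA event shape data.
for Q in (15.0, 30.0, 91.1876, 116.0):
    print(f"alpha_s({Q:7.1f} GeV) = {alpha_s_1loop(Q):.4f}")
```

The coupling decreases logarithmically with $Q$, which is the behaviour exploited when individual $\alpha_s(Q)$ fit results are combined into a single $\alpha_s(m_Z)$.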
In the case of the H1 analysis, the power corrections are assumed to be universal, and a fit as a function of $Q$ according to the renormalisation group equation is performed, with the resulting scale dependence of $\alpha_s(Q)$ shown in Fig.~\ref{f16}. \begin{figure} \includegraphics[width=90mm]{h1runas.eps} \caption{The strong coupling $\alpha_s$ as a function of the scale $Q$. The individual fit results, shown as points with error bars, are obtained from fits to the differential distributions in $\tau_C$, $\tau$, $B$, $\rho_0$ and $C$ within each $Q$ bin. The errors represent the total experimental uncertainties. For each event shape observable a value of $\alpha_s(m_Z)$ is indicated in the plot, determined from a fit to the $\alpha_s(Q)$ results using the QCD renormalisation group equation. The corresponding fit curves are shown as full lines. The shaded bands represent the uncertainties on $\alpha_s(Q)$ from renormalisation scale variations \cite{Aktas:2005tz}.} \label{f16} \end{figure} \subsection{Jet Rates} \begin{figure} \includegraphics[width=64mm]{H1prelim-03-033.fig7.eps} \includegraphics[width=64mm]{H1prelim-03-033.fig8.eps} \caption{The jet rate $y_2$ (left) and $y_3$ (right) \cite{Kluge:2003sa} compared to a calculation based on NLOJET++, where hadronisation corrections are determined with RAPGAP.} \label{f13} \end{figure} The longitudinally invariant $k_t$ algorithm is the de-facto standard for jet analyses in DIS. For this algorithm in its exclusive mode, the H1 Collaboration determined distributions of 2-jet and 3-jet rates \cite{Kluge:2003sa} as a function of $Q$, shown in Fig.~\ref{f13}. Since the power corrections to these observables have not yet been calculated, the data are compared to an NLO calculation, corrected for hadronisation effects using the Lund string model as implemented in the RAPGAP \cite{Jung:1995gf} event generator.
At scales $Q>25{\,\rm GeV}$ a good description is provided for both jet rates; only at lower values are deviations found, which might be due to an insufficient treatment of the hadronisation. However, as for the 2-jet event shapes, a soft gluon resummation matched to the fixed order part might be important. Unfortunately, such a calculation is not available yet. \subsection{3-jet Event Shapes} Compared to 2-jet event shapes, the 3-jet variables differ in that they are sensitive to large-angle emissions between hard jets. It has also been proposed to use such variables in a hadron-hadron collision environment. Preliminary measurements from the H1 \cite{Kluge:2003sa} and ZEUS \cite{Everett:2004fg} Collaborations are shown in Figs.~\ref{f14} and \ref{f15}. \begin{figure} \includegraphics[width=90mm]{zeuskout.eps} \caption{$K_{out}$ distribution compared to LEPTO and the LO+NLL+PC calculation shown in bins of \mbox{$100{\,\rm GeV}^2<Q^2<500{\,\rm GeV}^2$} and {$500{\,\rm GeV}^2<Q^2<800{\,\rm GeV}^2$} \cite{Everett:2004fg}.} \label{f14} \end{figure} \begin{figure} \includegraphics[width=80mm]{H1prelim-03-033.fig10.eps} \caption{Measured values of the azimuthal correlation \cite{Kluge:2003sa}. The data are compared with results from the RAPGAP event generator.} \label{f15} \end{figure} The theory predictions are not yet at the level of precision reached for the 2-jet variables. A rather good description of the azimuthal correlation is obtained with the RAPGAP event generator, which relies on parton showers and the Lund string model. The out-of-plane momentum is compared to a LO+NLL calculation with power corrections, which performs reasonably well at high $Q^2$. Some shift with respect to the data is seen, but one should keep in mind that the fixed order part of the calculation is only at leading order.
As DIS data of good precision are available for both observables, a calculation at NLO+NLL precision would be highly desirable in order to draw further conclusions about power corrections. \section{Summary} Concerning the mean values of 2-jet event shapes, overall consistent findings are reported by H1 and ZEUS, with support for universal power corrections. A rather large spread in the fitted values of $\alpha_s(m_Z)$ and the large scale uncertainties of the fitted results suggest that the currently available fixed order calculations at NLO are not sufficient for this application. For distributions of 2-jet event shapes soft gluon resummations are being used. In a recent H1 publication the fits to the spectra resulted in compatible values of $\alpha_0$ and the strong coupling among the five studied observables. In addition the obtained values of $\alpha_s(m_Z)$ were found to be consistent with the world mean. A somewhat larger spread in $\alpha_0$ is reported by a ZEUS preliminary result, but again the fitted values of $\alpha_s(m_Z)$ are compatible with the world mean. Discrepancies in the findings of the two HERA experiments can be at least partly attributed to uncertainties in the determination of the fit intervals and to differing matching prescriptions (between the fixed order and the resummed calculations), as the measured data appear to be compatible with each other. Within the H1 analysis it was shown that the results were not sensitive to changes in the fit interval. The topic of distributions of 3-jet event shapes is experimentally more difficult because of reduced statistics and degraded detector resolution. However, as good DIS data are available for two of these variables, a perturbative NLO+NLL calculation could, when available, allow for a stringent test of power corrections beyond 2-jet variables. In the case of the jet rates, besides an NLO+NLL calculation, more insight into the power correction coefficients is needed to draw further conclusions.
On the other hand, the observed universality of $\alpha_0$ for both distributions and mean values of the 2-jet event shape variables already gives strong support to the concept of power corrections. NB: After this workshop the ZEUS Collaboration made available new results on event shapes \cite{Chekanov:2006hv}. The main characteristics of the fitted results have not changed considerably compared to the preliminary results shown in this article. \subsection*{Acknowledgments} \label{Ack} I would like to thank Mrinal Dasgupta, Yuri Dokshitzer and Gavin Salam for the organisation of this stimulating workshop. Also I would like to thank Hans-Ulrich Martyn for useful comments on this manuscript and for the collaboration on the subject of event shapes over the last years.
\section{Introduction} Fast Radio Bursts (FRBs) are a new class of millisecond-timescale, decimeter-wavelength radio transients that potentially occur at cosmological distances (Lorimer et al.\,2007; Thornton et al.\,2013; Spitler et al.\,2014). The dispersion measures of FRBs exceed the expected contribution from the Galactic Interstellar Medium (ISM) by factors of several, leading to the suggestion that the Inter-Galactic Medium (IGM) contributes the bulk of the dispersion measure. Comparison against IGM models (Ioka 2003; Inoue 2004; see also McQuinn 2013) implies that these bursts are detected at redshifts up to $z \approx 1$. Moreover, at least two of the bursts exhibit clear evidence of temporal smearing caused by electron density inhomogeneities that must be external to the Galaxy; the $\sim 1\,$ms timescale of the observed smearing exceeds the expected Galactic contribution by at least two orders of magnitude (Cordes \& Lazio 2002 (NE2001)). A puzzling characteristic of the FRB population is that the detection rate increases with Galactic latitude. This result is based on the dearth of FRB detections in the low- and mid-latitude components of the 1.4\,GHz Parkes High-Time Resolution Universe (HTRU) survey (Keith et al. 2010), despite the large fraction of survey time spent at these latitudes. Petroff et al. (2014) conclude that, even after taking into account selection effects such as enhanced sky temperature, increased dispersion measure smearing and excess scattering at low Galactic latitudes, the FRB distribution is non-isotropic with 99\% confidence (see also Burke-Spolaor \& Bannister 2014). It has been suggested that interstellar scattering offers an explanation for this latitude dependence (Petroff et al.\,2014). The specific suggestion is that diffractive interstellar scintillation amplifies the emission from some FRBs so that events which would ordinarily be too weak to be detected are pushed above the threshold of detectability. 
Similar suggestions have been advanced in other contexts, notably in relation to the effect of diffractive scintillation on SETI signals (Cordes, Lazio \& Sagan 1997) and the effect of gravitational lensing on quasar number counts (Turner 1980). There are several {\it prima facie} motivations to favour this suggestion. \begin{itemize} \item[1] The implied distribution appears unphysical if it is intrinsic to the progenitor population. Extragalactic sources should exhibit no latitude dependence, whereas Galactic sources confined to the disk should exhibit an excess of events at low latitudes. One can imagine a Galactic population consisting of a very local population or, conversely, a population in an extended halo, but even these populations show some bias towards low Galactic latitudes (see, e.g., the debate over the progenitor population of gamma-ray bursts in the 1990s as summarised by Fishman \& Meegan (1995)). \item[2] No known absorption effect can quench the radiation in such a latitude-dependent manner at the frequencies at which FRB radiation is detected. Free-free absorption can in principle influence the properties of the observed population, but this is known not to be an important consideration for other types of Galactic populations at this frequency (e.g. pulsars). \item[3] Interstellar scintillation qualitatively explains the observed Galactic latitude dependence. The decorrelation bandwidth of the scintillation at high Galactic latitudes approximately matches the value required for scintillation to amplify the radiation; specifically it can be comparable to or larger than the observing bandwidth of the HTRU survey. Large amplifications due to scintillation are only possible under this circumstance.
For the stronger scintillations closer to the Galactic plane where the decorrelation bandwidth is considerably smaller than the observing bandwidth, the average over a large number of diffractive scintillations across the observing band would result in no nett amplification of the signal. Thus, at low Galactic latitudes the observed flux densities would approach the intrinsic flux densities of the events themselves, which may not exceed the threshold for detectability with HTRU. \end{itemize} In this paper we examine the scintillation hypothesis critically. A particular implication is that the scintillation should alter the observed FRB flux density distribution in a specific manner. We derive the observed flux density distribution in terms of the intrinsic flux density distribution and the probability distribution of the amplification provided by diffractive scintillation. A quantitative model is introduced in \S\ref{sec:ScintEnhance}. The discussion is confined to detections made at the Parkes radiotelescope, which account for all but one of the FRBs detected to date, since it is not yet possible to ascertain whether detections at other telescopes exhibit a similar Galactic latitude dependence. In \S\ref{sec:Compare} we assess this model as an explanation of the FRB Galactic latitude dependence and examine its implications for FRB surveys. Our conclusions are presented in \S\ref{sec:Conclusions}. \section{Scintillation enhancement of FRB flux densities} \label{sec:ScintEnhance} We consider the amplification of radiation due to diffractive scintillation caused by the turbulent ISM of our Galaxy as a possible explanation of the observed latitude dependence of FRB detections. For diffractive scintillation to viably enhance the radiation observed from an FRB two conditions must be satisfied: First, the decorrelation bandwidth of the interstellar diffractive scintillations must be comparable to or larger than about half the observing bandwidth. 
The random amplifications due to diffractive scintillation are correlated only over a finite decorrelation bandwidth, $\Delta \nu_{\rm dc}$. If the observing bandwidth, $\Delta \nu_b$, spans a large number of decorrelation bandwidths, no nett enhancement would be observed. At observing frequencies $1.2-1.5\,$GHz where FRB detections have been made, large decorrelation bandwidths are typically only encountered at high Galactic latitudes. For instance, the NE2001 (Cordes \& Lazio 2002) scattering model predicts that the decorrelation bandwidth is in the range $\Delta \nu_{\rm dc} = 1.5-3.5\,$MHz at $|b| \approx 30^\circ$ at 1.5\,GHz; this is much smaller than the $\Delta \nu_b = 320\,$MHz used in the FRB detections made with the Parkes multibeam receiver. The NE2001 model is poorly constrained at high Galactic latitudes and there is substantial evidence that regions of weaker scattering, with commensurately larger decorrelation bandwidths, do in fact exist at high Galactic latitudes. This is derived, for instance, from the scattering properties of intra-day variable quasars viewed along lines of sight $|b|>30^\circ$ off the Galactic plane. The $\approx 3$-$5$\,GHz transition frequency between weak and strong scintillation deduced for many well-studied intra-day variable quasars (Dennett-Thorpe \& de Bruyn 2003; Macquart et al.\,2000; Kedziora-Chudczer et al.\,1997; Bignall et al.\,2003) implies decorrelation bandwidths of $40$-$190$\,MHz at 1.5\,GHz. There are several high latitude pulsars with distances $\ga 300\,$pc (i.e.\,above most of the turbulent electron layer) with values of $\Delta \nu_{\rm dc}$ at 1.4\,GHz which exceed $\approx 300\,$MHz, notably B1237$+$25 and B0031$-$07 (Bhat, Gupta \& Rao 1998). Moreover, the absence of discernible frequency structure in the observed spectra of the brightest reported FRBs (Thornton et al.
2013) itself provides evidence that the decorrelation bandwidth must either be smaller than the channel resolution (390\,kHz), or must be larger than the 320\,MHz HTRU observing bandwidth for the lines of sight probed by the FRBs detected to date. \\ The second condition is that scattering by turbulence external to our Galaxy must not corrupt the properties of the incident radiation in a manner that destroys the ability of the ISM to amplify the signal. This effect is relevant because it appears that at least two FRBs are subject to significant scattering beyond our Galaxy: the $\tau \sim 1\,$ms pulse smearing observed in both the Lorimer burst and FRB\,110220 is $\sim 10^4$ times larger than can be plausibly accounted for by scattering within the Galaxy (Macquart \& Koay 2013; Luan \& Goldreich 2014). The essential requirement is that the extragalactic scattering does not alter the coherence properties of the radiation in a manner that prevents interstellar scattering from modulating the incident signal. In practice, this means that the spatial coherence length of the wavefront scattered by extragalactic turbulence, $s_{\rm IGM}$, is no smaller than the size of the scattering disk associated with the interstellar scintillations, $s_{\rm scat}$. For $\Delta \nu_{\rm dc}$ of $300\, \Delta_{300} \,$MHz at 1.4\,GHz and an effective distance to the interstellar turbulence of $D_{\rm kpc}\,$kpc, the size of the scattering disk is $s_{\rm scat} = 2 \times 10^9 \,\Delta_{300}^{-1/2} D_{\rm kpc}^{1/2}\,$m. 
We estimate $s_{\rm IGM}$ using eqs.(12) \& (14) of Macquart \& Koay (2013) at 1.4\,GHz to be \begin{eqnarray} s_{\rm IGM} = \frac{\lambda}{2 \pi \sqrt{c \, \tau (1+z_L)} } \left( \frac{D_{\rm L} D_{\rm S}}{ D_{\rm LS} } \right)^{1/2} \sim \left( \frac{\tau}{1\,{\rm ms}} \right)^{-1/2} \begin{cases} 3 \times 10^8 \left( \frac{ D_S}{1\,{\rm Gpc}} \right)^{1/2}\,{\rm m}, & D_{\rm LS} \approx D_{\rm L}, \\ 3 \times 10^{11} \left( \frac{D_L}{1\,{\rm Gpc}} \right) \left( \frac{ D_{\rm LS}}{1\,{\rm kpc}} \right)^{-1/2}\,{\rm m}, & D_{\rm LS} \ll D_{\rm S}, \\ \end{cases} \end{eqnarray} where the scattering occurs at an angular diameter distance $D_{\rm L}$ (with associated redshift $z_L$) from the observer due to an event at a distance $D_{\rm S}$, and the angular diameter distance between the source and the scattering plane is $D_{\rm LS}$. Since $s_{\rm IGM} \ga s_{\rm scat}$, we see that in most cases the effect of extragalactic scattering is insufficient to corrupt the radiation incident upon the interstellar medium. This condition is easily met if the scattering is associated with the host galaxy (i.e. $D_{\rm LS} \ll D_{\rm S}$). The one possible exception is if the scattering time substantially exceeds a millisecond and if the extragalactic scattering takes place at distances intermediate between the observer and the FRB itself. We also note in passing that there is a decorrelation bandwidth associated with the extragalactic scattering. However, its value of $\sim 1\,$kHz for scattering times $\sim 1\,$ms is much smaller than the spectral resolution of the HTRU filterbank, rendering these spectral variations unobservable. In summary, we conclude that both conditions are plausibly satisfied for most of the FRB population detected to date, and it is therefore possible in principle that interstellar diffractive scintillation has a substantial effect on the observed flux densities of detected FRBs.
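As a numerical cross-check of the condition $s_{\rm IGM} \ga s_{\rm scat}$, the scaling relations quoted above can be evaluated directly. The default parameter values in this sketch are illustrative choices, not fitted quantities:

```python
def s_scat(delta_nu_dc_MHz=300.0, D_kpc=1.0):
    """Size of the interstellar scattering disk in metres at 1.4 GHz:
    s_scat = 2e9 * (dnu_dc / 300 MHz)^(-1/2) * (D / 1 kpc)^(1/2) m."""
    return 2e9 * (delta_nu_dc_MHz / 300.0) ** -0.5 * D_kpc ** 0.5

def s_igm_host(tau_ms=1.0, D_L_Gpc=1.0, D_LS_kpc=1.0):
    """Wavefront coherence length for scattering in the host galaxy
    (D_LS << D_S): ~ 3e11 * (tau/1 ms)^(-1/2) * (D_L/1 Gpc) * (D_LS/1 kpc)^(-1/2) m."""
    return 3e11 * tau_ms ** -0.5 * D_L_Gpc * D_LS_kpc ** -0.5

def s_igm_halfway(tau_ms=1.0, D_S_Gpc=1.0):
    """Coherence length when the scattering occurs roughly halfway
    (D_LS ~ D_L): ~ 3e8 * (tau/1 ms)^(-1/2) * (D_S/1 Gpc)^(1/2) m."""
    return 3e8 * tau_ms ** -0.5 * D_S_Gpc ** 0.5

print(f"s_scat       = {s_scat():.2e} m")
print(f"s_IGM (host) = {s_igm_host():.2e} m")  # exceeds s_scat by ~2 orders of magnitude
print(f"s_IGM (mid)  = {s_igm_halfway():.2e} m")
```

For the host-galaxy case the coherence length comfortably exceeds the interstellar scattering disk, consistent with the argument above; the intermediate-distance case is the marginal one, in line with the exception noted for long scattering times.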
\subsection{Flux density distribution} To determine the flux density distribution of events incident upon the telescope, we must model both the effect of the scintillations and the initial (unperturbed) flux density distribution of the transient events. In the regime of strong diffractive scintillation, the probability of observing an amplification, $a$, over the mean source flux density from a single scintle is (Mercier 1962; Salpeter 1967, see also Narayan \& Goodman 1989; Gwinn et al. 1998) \begin{eqnarray} p_{a,1}(a) = \exp \left( - a \right). \label{pa1} \end{eqnarray} In the opposite limit in which a large number of scintles, $N$, contribute to the flux density across a finite observing bandwidth the central limit theorem implies the amplification distribution approaches a normal distribution, \begin{eqnarray} p_{a,N}(a) = \frac{1}{\sqrt{2 \pi \sigma_N^2}} \exp \left( - \frac{(a-1)^2}{2 \sigma_N^2}\right), \label{paN} \end{eqnarray} with a mean of unity and a standard deviation $\sigma_N \sim 1/\sqrt{N}$. The distribution of observed event flux densities subject to enhancement by scintillation is computed by examining the probability density of the variate $Z = S_\nu \times a$, the product of the amplification with $S_\nu$, the intrinsic burst flux density. For a differential distribution of event flux densities, $p_S (S_\nu)$, between some minimum and maximum flux densities $S_{\rm min}$ and $S_{\rm max}$ respectively, the observed distribution is \begin{eqnarray} p_Z (Z) &=& \int_{S_{\rm min}}^{S_{\rm max}} p_S(S_\nu) p_{a,1} \left( \frac{Z}{S_\nu} \right) \frac{dS_\nu}{S_\nu} \label{pZeq}. 
\label{VariateMultiply} \end{eqnarray} To understand the implications of this basic result, we parameterize the differential flux density distribution in terms of a power law, \begin{eqnarray} p_S(S_{\nu}) = \begin{cases} K S_{\nu}^{-5/2 + \delta} & S_{\rm min} < S_{\nu} < S_{\rm max}, \\ 0 & \mbox{otherwise}, \\ \end{cases} \qquad K = \frac{r \left(\frac{3}{2} - \delta \right)}{S_{\rm min}^{-{3/2}+\delta} - S_{\rm max}^{-3/2+\delta}}, \label{pS} \end{eqnarray} where $r$ is the overall detection rate integrated over the entire flux range over which events occur, $S_{\rm min} < S_{\nu} < S_{\rm max}$. A finite flux density cutoff at $S_{\rm max}$ is introduced as a means of investigating the effect of scintillation enhancement if a steep decline is evident at high flux densities, addressing an aspect of the specific suggestion made in Petroff et al.\,(2014). However, we remark that $S_{\rm max}$ is a free parameter, and may be taken to be arbitrarily large. Indeed, the physical origin of such a cutoff is unclear: a more physical parameterisation would consider events distributed between lower and upper cutoff intrinsic luminosities in conjunction with some distribution of event distances. Even for an extreme luminosity distribution in which FRBs were standard candles, the distribution would exhibit no sharp cutoff if the events were homogeneously distributed. In general, the value of $S_{\rm max}$ is finite if FRBs occur only at cosmological distances (i.e. at a certain minimum distance) and there is a finite maximum burst luminosity. The distribution scales as $S_{\nu}^{-5/2}$ if the progenitor population does not evolve as a function of redshift and effects due to the curvature of spacetime across cosmological distances are neglected.
The factor $\delta$ takes into account evolution in the progenitor population and the non-Euclidean geometry of the Universe at $z \ga 1$: similar effects in the quasar population give rise to number counts with $-0.5 \la \delta \la 0.5$ (see, e.g., Wall 1980; Wall 1994). Evaluating eq.(\ref{VariateMultiply}) for enhancement by a single scintle, we obtain \begin{eqnarray} p_{Z,1}(Z) &=& K Z^{-5/2 + \delta} \left[ \Gamma_{5/2 -\delta} \left( \frac{Z}{S_{\rm max}} \right) - \Gamma_{5/2 -\delta} \left( \frac{Z}{S_{\rm min}} \right) \right] , \label{FullS0soln} \\ &\approx& K \begin{cases} S_{\rm min}^{-5/2 + \delta}/(\frac{5}{2} - \delta) & Z \ll S_{\rm min}, \\ Z^{-\frac{5}{2} +\delta} \Gamma \left( \frac{5}{2} - \delta \right) & S_{\rm min} \ll Z \la S_{\rm max}, \\ S_{\rm max}^{-\frac{5}{2}+\delta} \left( \frac{Z}{S_{\rm max}} \right)^{-1} e^{\frac{-Z}{S_{\rm max}} } & Z \gg S_{\rm max}, \\ \end{cases} \end{eqnarray} where $\Gamma_{a}(Z) = \int_Z^\infty t^{a-1} e^{-t} dt$ is the incomplete gamma function. The effect of scintillation is to enhance the event rate for all flux densities $Z \gg S_{\rm min}$, as illustrated in Figure \ref{fig:pZ}. It introduces three significant effects: (i) it extends the distribution beyond $S_{\rm max}$ into an exponentially-decreasing tail, thus pushing a small fraction of events near $S_{\rm max}$ to yet higher flux densities, (ii) it enhances the event rate over the range $S_{\rm min} \ll Z \la S_{\rm max}$ by a factor $\Gamma(5/2-\delta)$ but the distribution retains the same power-law index as the original distribution, and (iii) it draws a portion of the low flux density distribution near $S_{\rm min}$ into a flat region that extends down to zero. It is straightforward to verify that, as expected, the event rate integrated over the distribution remains identical to the original rate. 
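The closed form in eq.(\ref{FullS0soln}) and the conservation of the integrated rate can be verified numerically. The following sketch (ours, not part of the paper; the parameter values $\delta=0$, $S_{\rm min}=1$, $S_{\rm max}=100$, $r=1$ are illustrative assumptions only) compares the incomplete-gamma expression against direct evaluation of the product-variate integral, eq.(\ref{pZeq}):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammaincc

# Illustrative parameters (our choice): delta = 0, S_min = 1, S_max = 100, r = 1
delta, S_min, S_max, r = 0.0, 1.0, 100.0, 1.0
a = 2.5 - delta                          # power-law index 5/2 - delta
K = r * (1.5 - delta) / (S_min**(delta - 1.5) - S_max**(delta - 1.5))

def Gamma_inc(s, x):
    """Upper incomplete gamma function Gamma_s(x)."""
    return gammaincc(s, x) * gamma(s)

def p_Z1(Z):
    """Closed-form single-scintle distribution, eq. (FullS0soln)."""
    return K * Z**(-a) * (Gamma_inc(a, Z / S_max) - Gamma_inc(a, Z / S_min))

def p_Z1_numeric(Z):
    """Direct evaluation of the product-variate integral, eq. (pZeq)."""
    f = lambda S: K * S**(-a) * np.exp(-Z / S) / S
    return quad(f, S_min, S_max, epsabs=1e-14, epsrel=1e-10)[0]

for Z in (0.1, 3.0, 30.0, 300.0):
    assert np.isclose(p_Z1(Z), p_Z1_numeric(Z), rtol=1e-6)

# Scintillation redistributes events in flux density but conserves the total rate
rate = quad(p_Z1, 1e-4, 3000.0, limit=200)[0]
print(rate)  # close to r = 1
```

The substitution $t = Z/S_\nu$ maps eq.(\ref{pZeq}) exactly onto the incomplete-gamma form, so the two evaluations agree to integration precision.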
The distribution extensions (i) and (iii) are, respectively, attributable to the fact that scintillation amplifies the flux density of a small fraction of events near $S_{\rm max}$ and that it similarly de-amplifies a fraction of events near $S_{\rm min}$. The enhancement associated with (ii) is more subtle, and may be regarded as the effect of Eddington bias: because the distribution decreases steeply with flux density, any effect that scatters objects in flux density preferentially scatters more objects from low to high flux density than vice versa. It follows that the greater the steepness of the distribution (i.e. the greater the value of $-\delta$), the greater the enhancement in event rate, $\Gamma(5/2-\delta)$, due to this bias. \begin{figure} \centerline{\epsfig{file=pZ.eps, scale=0.7}} \caption{The distribution of observed flux densities $p_Z(Z)$ (blue solid line) for an initial flux density distribution (purple dashed line) that is nonzero over the range $S_{\rm min} < Z < S_{\rm max}$ and with $\delta =0$. The effect of the diffractive scintillations is to draw out the high end of the distribution into a tail that decreases like $Z^{-1} \exp(-Z/S_{\rm max})$, to increase the differential event counts over the range $S_{\rm min} \ll Z \la S_{\rm max}$ and to extend the low luminosity component of the distribution to zero flux density. 
} \label{fig:pZ} \end{figure} For completeness, we also consider the flux density distribution where a large number of scintles, $N$, contribute to the overall measurement of the flux density across the observing bandwidth, using eq.(\ref{paN}): \begin{eqnarray} p_{Z,N}(Z) = K Z^{-\frac{5}{2}+\delta} \left\{ \frac{2^{-\frac{1}{4} - \frac{\delta}{2}}}{\sqrt{ \pi}} \sigma_N^{\frac{1}{2} - \delta} \left[ \sigma_N \Gamma \left(\frac{5 - 2 \delta }{4}\right) \, _1F_1\left(\frac{2 \delta -3}{4};\frac{1}{2};\frac{-1}{2 \sigma_N ^2}\right)+\sqrt{2} \Gamma \left(\frac{7- 2 \delta }{4}\right) \, _1F_1\left(\frac{2 \delta -1}{4};\frac{3}{2};\frac{-1}{2 \sigma_N ^2}\right) \right] \right\}, \end{eqnarray} where $_1 F_1$ is a confluent hypergeometric function, and we have taken the limits $S_{\rm min}=0$ and $S_{\rm max} \rightarrow \infty$ in order to make the problem analytically tractable. The expression in the curly brackets represents the correction to the event rate over the non-scintillating signal. In the limit $N \rightarrow \infty$ this distribution approaches the intrinsic distribution given by eq.(\ref{pS}). For finite values of $N$, there is still some small increase in the event rate over the range $S_{\rm min} \ll Z \la S_{\rm max}$, but this diminishes rapidly with $N$, as shown in Figure \ref{fig:approachLim}. The important point is that the overall enhancement is less than $\approx 5$\% for the small scintillation bandwidths typical of diffractive scintillation closer than $30^\circ$ to the Galactic plane where one has $N \ga 30$ for typical HTRU observing parameters. 
\begin{figure} \centerline{\epsfig{file=LimitingAmp.eps, scale=0.5}} \caption{The amplification of the event rate with the number of scintles that contribute to the flux density in the limit in which $N$ is large and $S_{\rm min} \rightarrow 0$ and $S_{\rm max} \rightarrow \infty$.} \label{fig:approachLim} \end{figure} \subsection{Flux density distribution measured with a single-pixel or multibeam receiver} A complication arises in comparing the observed flux density distribution of FRBs with the foregoing results because all FRB detections have hitherto been made with non-interferometric radio telescopes. Each event is detected at an unknown angular distance from the beam centre, and the observed signal-to-noise is attenuated by an unknown amount according to the beam shape. However, although the effect of beam attenuation is unknown for any given event, it is nonetheless possible to derive its statistical effect on a population of events. We quantify this effect here. Our analysis is informed by two aspects of the HTRU survey: \begin{itemize} \item[1] With the exception of the Lorimer burst (Lorimer et al. 2007), each FRB was detected in only a single beam of the Parkes multibeam receiver, with no detections in adjacent beams at a significance exceeding $\approx 5 \sigma$. The beam centres of the Parkes multibeam receiver are placed two full-width-at-half-maximum (FWHM) beamwidths apart, hence we deduce that every detected FRB occurred at an angular separation no greater than this distance from the beam centre. This argument is not rigorously true for events detected by one of the outer multibeam receivers because the event may have occurred on the outward edge of the beam. 
However, the number of FRBs detected in the outer beams of the Parkes multibeam receiver compared to those found in the inner beams is inconsistent with detections far beyond the primary beam pattern.\\ \item[2] The beam shape is frequency dependent, and if the FRB occurred beyond the first null of the beam at any frequency within the 1180-1500\,MHz observing range, the spectrum would exhibit extremely large gradients across the band and may even reverse slope. The absence of such spectral features in Parkes FRBs reported to date suggests that no event was detected at Parkes at an angular separation close to the first beam null or beyond it. (The FRB detected by Spitler et al.\,(2014) at Arecibo demonstrates the large spectral gradients possible when the event is detected far from the beam centre.) \end{itemize} We would like to compute the probability distribution of the attenuation. We consider the attenuation associated with events whose locations fall within the half power point of a Gaussian beam, $B_{\rm gauss}(\theta ) = \exp ( - \theta^2/2 \theta_b^2 )$, where $\theta_b$ is a measure of the beamwidth. The beam shape of every one of the 13 Parkes beams is distinct, because each off-axis beam suffers from effects of asymmetry caused by coma and diffraction of the radiation around the telescope feed legs. However, the effects of beam asymmetry are mitigated substantially by the fact that we are only interested in events that occur well within the first beam null. Given the large uncertainties associated with the measured FRB flux density distribution at present, we are justified in deriving the attenuation probability distribution only to first order assuming this approximation. 
The probability that an event occurs a distance $\theta$ from the beam centre is proportional to $2 \pi \theta d\theta$, and the normalised probability distribution for an event that occurs out to a maximum distance $\theta_{\rm edge}$ is $p_\theta (\theta) d\theta= 2 \theta d\theta / \theta_{\rm edge}^2$. The probability of detecting an event with beam attenuation factor, $y$, is the probability of obtaining that value of $\theta$ which corresponds to $y=B (\theta)$. We derive the distribution of beam attenuations by changing the variable of the probability distribution $p_\theta(\theta)$ to $B(\theta)$, and the attenuation probability density $P_y(y)$ is derived from $p_\theta$ using the relation, \begin{eqnarray} P_y(y) = \frac{p_\theta [g(y)] }{\left| B' [g(y)] \right|}, \end{eqnarray} where $g(y) = \theta$ is the inverse function of $B(\theta)$. Considering that event detections are confined to within two half-beamwidths from the pointing centre, namely $\theta_{\rm edge} = 2 \theta_b \sqrt{2 \ln 2}$, the resulting attenuation distribution is, \begin{eqnarray} P(y) = \frac{1}{(4 \ln 2)\, y}, \qquad \frac{1}{16} <y< 1. \end{eqnarray} The cutoff at $y=1/16$ corresponds to the fraction of power received two half-beamwidths from the pointing centre. The average attenuation is $\langle y \rangle = (15/16)/(4 \ln 2) \approx 0.34$. The equivalent flux-density, ${\cal S}$, at which any given event is detected is proportional to the product of two random variates: the incident flux density (after amplification by scintillation) and the telescope attenuation, $y$. 
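The attenuation statistics follow directly from the geometry and can be checked by a short Monte Carlo sketch (ours, not from the paper; $\theta_b = 1$ is an arbitrary illustrative unit). Events are placed uniformly over the beam area out to $\theta_{\rm edge} = 2\theta_b\sqrt{2\ln 2}$ and attenuated by the Gaussian beam:

```python
import numpy as np

rng = np.random.default_rng(1)
theta_b = 1.0                                        # arbitrary beamwidth unit
theta_edge = 2.0 * theta_b * np.sqrt(2.0 * np.log(2.0))

# p(theta) proportional to theta  =>  theta = theta_edge * sqrt(u), u uniform on [0,1]
theta = theta_edge * np.sqrt(rng.uniform(size=200_000))
y = np.exp(-theta**2 / (2.0 * theta_b**2))           # beam attenuation factor

# Analytic results: support 1/16 < y < 1 and <y> = (15/16)/(4 ln 2)
y_mean_analytic = (15.0 / 16.0) / (4.0 * np.log(2.0))
print(y.min(), y.mean(), y_mean_analytic)
```

The sampled minimum reproduces the $y=1/16$ cutoff and the sample mean converges to the analytic value $\langle y \rangle = \int_{1/16}^{1} y\,P(y)\,dy = (15/16)/(4\ln 2)$.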
For a population whose flux densities are initially distributed according to a power law, altered by diffractive scintillation (see eq.(\ref{FullS0soln})), and then subject to the random effects of beam attenuation, the measured equivalent flux density distribution is, using eq.(\ref{VariateMultiply}), \begin{eqnarray} p_{\rm S/N} \left( {\cal S} \right) &=& \frac{1}{4 \ln 2} \int_{1/16}^1 p_{Z,1} \left( \frac{{\cal S}}{y} \right) \frac{dy}{y^2} \\ &=& \frac{K {\cal S}^{-5/2 + \delta}}{ \left( \frac{3}{2} - \delta \right) 4 \ln 2} \left[ G({\cal S},S_{\rm max}) - G({\cal S} ,S_{\rm min}) \right], \\ &\null& \qquad \mbox{where } G({\cal S},S_0) = \Gamma_{\frac{5}{2} -\delta} \left( \frac{\cal S}{S_0} \right) - 2^{4 \delta-6} \Gamma_{\frac{5}{2} -\delta} \left( \frac{16 \,{\cal S}}{S_0} \right) + \left( \frac{{\cal S}}{S_0}\right)^{\frac{3}{2}-\delta} \left( e^{-\frac{16 {\cal S}}{S_0}} - e^{-\frac{{\cal S}}{S_0}} \right). \end{eqnarray} The behaviour of this distribution is approximated as follows: \begin{eqnarray} p_{S/N} \left( {\cal S} \right) \approx \frac{K \, {\cal S}^{-5/2+\delta}}{4 \ln 2} \begin{cases} \frac{ 15 }{5/2-\delta} {\cal S}^{5/2-\delta} \left[ S_{\rm min}^{-5/2+\delta} - S_{\rm max}^{-5/2+\delta} \right] & {\cal S} \ll S_{\rm min}, \\ \Gamma \left( \frac{3}{2} - \delta \right) \left[ 1 - 16^{-3/2 + \delta} \right] & S_{\rm min} \ll {\cal S} \la S_{\rm max}, \\ \left( \frac{\cal S}{S_{\rm max}} \right)^{1/2-\delta} \left[ e^{-{\cal S}/S_{\rm max}} - \frac{1}{16} e^{-16 {\cal S}/S_{\rm max}} \right] & {\cal S} \ga S_{\rm max}. \\ \end{cases} \label{SmeasApprox} \end{eqnarray} A plot of the distribution of the equivalent flux densities, $p_{S/N} ({\cal S})$, is shown in Figure \ref{fig:AttenDistn} along with the intrinsic (scintillation-affected) flux density distribution $p_{Z,1}(Z)$ and the analytic approximation to the distribution described by eq.(\ref{SmeasApprox}). 
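The closed form built from $G({\cal S},S_0)$ can likewise be checked against the direct convolution of $p_{Z,1}$ with the attenuation distribution $P(y)=1/(4\ln 2\, y)$. The sketch below (ours; the parameters $\delta=0$, $S_{\rm min}=1$, $S_{\rm max}=100$, $r=1$ are illustrative assumptions) performs this comparison numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammaincc

delta, S_min, S_max, r = 0.0, 1.0, 100.0, 1.0        # illustrative choices
a = 2.5 - delta
K = r * (1.5 - delta) / (S_min**(delta - 1.5) - S_max**(delta - 1.5))
FOURLN2 = 4.0 * np.log(2.0)

def Gamma_inc(s, x):
    return gammaincc(s, x) * gamma(s)                # upper incomplete gamma

def p_Z1(Z):                                         # eq. (FullS0soln)
    return K * Z**(-a) * (Gamma_inc(a, Z / S_max) - Gamma_inc(a, Z / S_min))

def G(S, S0):                                        # kernel defined in the text
    x = S / S0
    return (Gamma_inc(a, x) - 2.0**(4 * delta - 6) * Gamma_inc(a, 16.0 * x)
            + x**(1.5 - delta) * (np.exp(-16.0 * x) - np.exp(-x)))

def p_SN_closed(S):                                  # closed-form convolution
    return K * S**(-a) / ((1.5 - delta) * FOURLN2) * (G(S, S_max) - G(S, S_min))

def p_SN_numeric(S):                                 # direct beam convolution
    f = lambda y: p_Z1(S / y) / y**2
    return quad(f, 1.0 / 16.0, 1.0, epsabs=1e-14, epsrel=1e-10)[0] / FOURLN2

for S in (0.5, 5.0, 50.0):
    assert np.isclose(p_SN_closed(S), p_SN_numeric(S), rtol=1e-6)
```

The agreement confirms that the $G$-function expression is an exact evaluation of the attenuation integral, not an approximation.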
In the regime $S_{\rm min} \ll {\cal S} \la S_{\rm max}$ the slope of the distribution remains the same as the underlying distribution. However, in the regime ${\cal S} \gg S_{\rm min}$ the distribution of equivalent flux densities is systematically lower than the original distribution, $p_{Z,1}$, as a result of the fact that each event detected is subject to some degree of attenuation. In essence, this is because no event occurs precisely at the beam centre. For a given attenuation $1/16 < y <1$, the number of events detected at a given flux density ${\cal S}$ is the number of events whose flux density incident on the telescope is actually ${\cal S}/y$. \begin{figure} \centerline{\epsfig{file=AttenuatedDistribution.eps,scale=0.7} \epsfig{file=AttenuatedDistributionZoom.eps,scale=0.7}} \caption{A graphic illustration of the effect of beam attenuation on the distribution of observed event flux densities. The distribution $p_{S/N}({\cal S})$ (purple) is the distribution measured at the telescope for an intrinsic distribution $p_{Z,1}(Z)$ (given by eq.\,(\ref{FullS0soln}), blue curve), taking into account the fact that events may be detected anywhere within the half-power point of the beam. The calculation assumes a Gaussian beam shape. The analytic approximations to $p_{S/N}({\cal S})$ in the limits ${\cal S} \ll S_{\rm min}$, $S_{\rm min} \ll {\cal S} \la S_{\rm max}$ and ${\cal S} \ga S_{\rm max}$, given by eq.\,(\ref{SmeasApprox}), are shown by the dashed line. The beam-attenuated flux density distribution follows the same slope as the underlying distribution over the range $S_{\rm min} \ll {\cal S} \la S_{\rm max}$ but is offset from it. The green curve shows the flux density distribution for events whose flux densities are not affected by scintillation. 
The right plot displays a zoomed region of the left panel to demonstrate the difference in the amplitudes of the various distributions.} \label{fig:AttenDistn} \end{figure} For comparison with the scintillation-enhanced distribution, we derive the equivalent flux density distribution for events that are not subject to appreciable enhancement by diffractive scintillation. As noted above, this situation applies to events located within $\sim 30^\circ$ of the Galactic plane, where a large number of scintles contribute to the observed flux density, since the decorrelation bandwidth of the diffractive scintillations is much smaller than the observing bandwidth of the HTRU survey. The S/N distribution is \begin{eqnarray} p_{S/N}({\cal S}) &=& \frac{1}{4 \ln 2} \int_{1/16}^1 p_S \left( \frac{\cal S}{y}\right) \frac{dy}{y^2} \\ &=& \frac{K \, {\cal S}^{-5/2+\delta} }{4 \ln 2 \left(\frac{3}{2} - \delta \right)} \begin{cases} \left( \frac{S_{\rm min}}{\cal S} \right)^{-3/2+\delta} - 16^{-3/2+\delta} & S_{\rm min}/16 < {\cal S} \la S_{\rm min}, \\ 1 - 16^{-3/2+\delta} & S_{\rm min} \la {\cal S} < S_{\rm max}/16, \\ 1- \left( \frac{S_{\rm max}}{\cal S} \right)^{-3/2+\delta} & S_{\rm max}/16 < {\cal S} < S_{\rm max}. \\ \end{cases} \label{pSnoScint} \end{eqnarray} Comparison of eqs.(\ref{SmeasApprox}) and (\ref{pSnoScint}) in the regime $S_{\rm min} \ll {\cal S} \la S_{\rm max}$ reveals the ratio of the scintillation-enhanced to the unenhanced flux density distribution is $\Gamma(5/2-\delta)$. \section{Discussion} \label{sec:Compare} There are two mechanisms by which interstellar scintillation can enhance the detection rate of FRBs at a given flux density, and we discuss the viability of each in turn as an explanation of the disparity in FRB event rates between high and low Galactic latitudes. The first mechanism is relevant when there is an upper bound to the distribution of FRB flux densities. 
The effect of enhancement by a single diffractive scintle is to extend the distribution beyond this cutoff into an exponentially-decreasing tail. Since this tail falls faster than any power law, the mechanism is only of practical relevance if the cutoff is extremely sharp. We disfavour this mechanism as a practical explanation of the FRB Galactic latitude dependence because it is not clear how the necessary sharp cutoff could arise in practice. Moreover, the prediction that the differential source counts follow an exponential distribution appears to be at variance with the observed distribution of FRB S/N values. The second mechanism gives rise to an increase in the detection rate but maintains the intrinsic slope of the differential event rate distribution. It does not rely on the presence or otherwise of an upper cutoff in the distribution. Rather, this enhancement comes at the expense of a depressed event rate at the low flux density end of the distribution. The phenomenon may be interpreted as an effect of Eddington bias. The effect of scintillation is to mix populations with different initial flux densities. Scintillation is equally likely to amplify the radiation as it is to de-amplify it. However, if the initial flux density distribution is sufficiently steep, the absolute number of low flux density sources redistributed to higher flux densities greatly exceeds the number of high flux density sources redistributed to low flux densities. When the initial population follows a power law in flux density, the nett effect is to increase the event rate of sources at high flux density relative to those with flux densities near the lower flux density cutoff of the distribution, $S_{\rm min}$. For flux densities $S_{\rm min} \ll S_\nu \la S_{\rm max}$ the distribution retains the same power-law index as the initial distribution, but the differential event rate is increased by a factor $\Gamma(5/2 - \delta)$. 
The nett enhancement is a factor of $\approx 30$\% for a population whose event rate scales as $S_\nu^{-5/2}$ (i.e. for a non-evolving population and neglecting cosmological effects, $\delta =0$). However, as shown in Figure \ref{fig:enhance}, the enhancement is extremely sensitive to the slope of the distribution: a steeper distribution with $\delta = -1/2$ yields an enhancement of a factor of 2.0, while $\delta = -1$ increases the event rate by a factor of 3.3 over the initial rate. On the other hand, the effect works in the opposite direction for distributions shallower than $S_\nu^{-2}$, with $\delta = 1/2$ yielding no nett enhancement in event rate, and $\delta = 1$ causing an $\approx 11$\% decrement in the event rate. Diffractive interstellar scintillation explains in principle the observed disparity in the detection rate of FRBs at high and low Galactic latitudes. Accepting this explanation as viable, we can then infer limits on the steepness of the FRB event rate flux density distribution. Of the 9 FRBs detected to date by the HTRU survey, only 2 have been detected at latitudes $<30^\circ$ and yet the on-sky time at these low latitudes is 23\% higher than at high latitudes. This leads to a 4.7:1 ratio between FRB rates above and below $30^\circ$ (see also Petroff et al.\,2014). Although subject to Poisson statistics, we can place a lower bound on the ratio of high- to low-latitude event rates of 3:1. This in turn implies that the FRB event rate distribution must scale more steeply than $S_\nu^{-3.4}$. The strong departure from the source count index associated with a homogeneously distributed population (whose index is $\alpha=2.5$) suggests that the parent population of FRBs evolves strongly over cosmic time. Although we do not compare here the implied source count distribution against the predictions of specific models, we remark that such evolution would not be unexpected in a model in which the parent population is tied to the star formation rate. 
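The quoted enhancement factors and the inferred source-count slope follow from elementary gamma-function arithmetic, which can be verified directly (a sketch of ours, not part of the paper):

```python
from math import gamma
from scipy.optimize import brentq

# Enhancement factor Gamma(5/2 - delta) for the delta values quoted in the text
assert abs(gamma(2.5) - 1.329) < 1e-3    # delta = 0    -> ~30% enhancement
assert abs(gamma(3.0) - 2.0) < 1e-12     # delta = -1/2 -> factor 2.0
assert abs(gamma(3.5) - 3.323) < 1e-3    # delta = -1   -> factor ~3.3
assert abs(gamma(2.0) - 1.0) < 1e-12     # delta = 1/2  -> no nett enhancement
assert abs(gamma(1.5) - 0.886) < 1e-3    # delta = 1    -> ~11% decrement

# A >3:1 latitude rate disparity requires Gamma(5/2 - delta) >= 3;
# solving Gamma(x) = 3 for x = 5/2 - delta gives the limiting source-count index
index = brentq(lambda x: gamma(x) - 3.0, 3.0, 4.0)
print(index)  # ~3.4, i.e. an event rate distribution steeper than S^-3.4
```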
\subsection{Implications for FRB detection rates} The model poses an important implication for the FRB progenitor population. In particular, the FRB rate derived by Thornton et al. (2013) was based on the observed rate at high Galactic latitudes. As subsequent observations have shown (Petroff et al. 2014) the observed rate at low Galactic latitudes is about 4 times lower. Our model shows that the rate at low latitudes is a better reflection of the true rate, and the Thornton rate should therefore be revised down by a factor of $\sim$4. At low observing frequencies, the boost in the rate due to scintillation does not occur because $\Delta \nu_{\rm dc}$ is very much smaller than at 1.4~GHz, and so the revised rate is more appropriate. Coupled with our uncertain knowledge of the spectral index of FRBs, this may explain the lack of FRB discoveries so far at low frequencies. The model also predicts a steep flux density distribution with the implication that an order of magnitude increase in the event rate at 1.4~GHz could be achieved with a factor 2.6 increase in sensitivity and that sensitivity is more important than field-of-view. The Arecibo telescope has the highest sensitivity for FRB searches; we estimate that the detection rate using the Arecibo multibeam at 1.4 GHz should be some 14 times that of Parkes surveys according to our model. At high latitudes Arecibo should detect $\sim$1 FRB per day and should detect $\sim$1 FRB every 4 days at low latitudes. Spitler et al. (2014) detected 1 FRB in 12 days observing at low latitudes with Arecibo, not inconsistent with our prediction especially given the difficulty of disentangling FRBs from pulsar phenomena at these latitudes. Our model also bodes well for the future of FRB surveys with interferometers such as the JVLA, ASKAP (Macquart et al.\,2010) and the SKA, provided the searches can be done coherently at high time resolution over the entire field of view. 
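The claimed trade-off between sensitivity and field of view follows from the scaling of the integrated event rate above a threshold $S_0$, which goes as $S_0^{-3/2+\delta}$. A back-of-envelope check (ours; the value $\delta \approx -0.9$ is an assumption taken from the $>3$:1 latitude disparity argument above):

```python
# Integrated event rate above threshold S_0 scales as S_0^(-3/2 + delta).
delta = -0.9                      # assumed slope from the latitude-rate disparity
slope = 1.5 - delta               # = 2.4
boost = 2.6**slope                # rate gain from a factor 2.6 sensitivity gain
print(boost)                      # ~10, i.e. an order of magnitude
```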
The progression from the weak- to strong-scintillation limit is expected to follow a trend from high to low Galactic latitudes. However, we do not present a detailed prediction of the dependence of the event rate on Galactic latitude. The low number of FRB detections does not yet merit such a detailed comparison. Moreover, we do not expect there to be a straightforward mapping between enhancement and Galactic latitude. The turbulent scattering properties of the ISM are known to be highly inhomogeneous and, while there is a general trend to decreasing scattering strength with Galactic latitude, they also depend on Galactic longitude and other details particular to each individual line of sight. This is particularly pertinent here because there is a strong selection bias to detect only FRBs that are subject to weaker diffractive scintillation; in other words, we expect FRBs to be detected preferentially along sight-lines with anomalously weak scattering. \begin{figure} \centerline{\epsfig{file=Enhancement.eps, scale=0.7 }} \caption{The enhancement in the event rate for FRBs subject to diffractive scintillation by a single scintle across the observing band as a function of $\delta$ for a flux density distribution scaling as $S_\nu^{-5/2+\delta} \equiv S_\nu^{-\alpha}$.} \label{fig:enhance} \end{figure} \section{Conclusion} \label{sec:Conclusions} Our conclusions are as follows: \begin{itemize} \item[--] Galactic diffractive interstellar scintillation can explain the observed disparity in event rates of FRBs between high and low Galactic latitude without altering the slope of the flux density distribution. The enhancement caused by scintillation is greatest when the decorrelation bandwidth of the scintillation is comparable to or greater than the observing bandwidth. As the decorrelation bandwidth decreases, the enhancement in the event rate caused by scintillation diminishes. 
\item[--] The maximum enhancement in event rate over the intrinsic event rate is $\Gamma(5/2-\delta)$ for an event flux density distribution that scales as $S_\nu^{-5/2+\delta}$. A consequence of the model is that the frequency of FRB progenitors is a factor $\approx \Gamma(5/2 -\delta)$ lower than supposed based on the observed FRB event rate, which is dominated by the rate count at high Galactic latitudes. Given the $> $3:1 event rate disparity between high and low-latitude FRBs, the model places a limit of $\delta < -1$ on the steepness of the flux density distribution. \item[--] The steepness of the event rate distribution implied by the scintillation model implies that only a minor improvement in sensitivity is required to detect FRBs at substantially higher rates. The integrated event rate for a telescope with a field of view $\Omega$ and sensitive to bursts down to a flux density $S_0$ scales as $\Omega \, S_0^{-3/2+\delta}$. This strong dependence on $S_0$ argues that sensitivity is a more important criterion for the detection of FRBs than instantaneous field of view. \end{itemize} \section*{Acknowledgments} Parts of this research were conducted by the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020. We are grateful to Steve Spangler whose insightful review improved this manuscript. We acknowledge stimulating conversations with Ue-Li Pen and Bryan Gaensler.
\section{\label{sec:intro}Introduction} Excitation energy transfer (EET) processes represent an important subclass of transport phenomena in open quantum systems \cite{May2000}. Studies of EET processes in condensed matter, artificial nano-materials and biological systems inspire applied renewable energy research, as well as provide fundamental insights into important natural processes. The latter is especially true for biological research. In the field of photosynthesis, soon after it was recognized in the 1990s that closely packed aggregates of (bacterio)chlorophylls are responsible for light energy harvesting and initial energy transport, EET processes became the prime subject of theoretical studies. A consistent picture of EET dynamics in photosynthetic molecular aggregates has emerged towards the end of the past century \cite{VanAmerongen2000,May2000}. The field has drawn heavily from the previous advances in experimental laser science \cite{Fleming1986,Demtroder2008}, the theory of time resolved spectroscopy \cite{Mukamel1995Book}, the theory of transfer processes in molecular crystals \cite{Silinsh1994} and the whole development in the field of dissipative dynamics \cite{Weiss2008,Fain2000}. Master equations for the reduced density matrix (RDM), mostly in Markovian, but also in non-Markovian formulations \cite{Cerrillo2014}, often in conjunction with response function theory of non-linear ultrafast spectroscopy \cite{Mukamel1995Book}, form the basis of most of the successful theories of EET processes in photosynthesis \cite{Chenu2015}. Depending on the type of the molecular system, various types of master equations describe the EET processes successfully. In general, there are two limits in which master equations provide correct dynamics. Both these limits are defined by the region of validity of certain perturbation theories. 
If, for instance, the resonance coupling between two molecular systems can be considered a small parameter, one arrives at a set of RDM equations containing rates similar to the well-known F\"{o}rster resonance energy transfer rates \cite{Foerster1946,Foerster1948}. If, on the other hand, the system--bath coupling can be considered a small parameter, one arrives at master equations related to those proposed originally by A. G. Redfield in the framework of nuclear magnetic resonance \cite{Redfield1965}. The original result of F\"{o}rster can be conveniently written in terms of experimentally accessible quantities (see e.g. \cite{VanAmerongen2000}) and it has gained substantial popularity due to its intuitive character. In molecular complexes, simple predictions of the F\"{o}rster theory often fail \cite{Mukai1999,Scholes2000,Zigmantas2006}, but generalizations of the original ideas to interactions between whole molecular complexes are straightforward \cite{Mukai1999,Scholes2001,Jang2004}. Similarly, Redfield equations adapted for molecular aggregates proved to be an extremely versatile tool \cite{VanAmerongen2000,May2000,Valkunas2013}. The Frenkel exciton model in conjunction with the Redfield equations, often in combination with the F\"{o}rster theory, is behind a substantial part of the qualitative insight we have into the inner workings of photosynthesis (see e.g. \cite{Novoderezhkin2004,Novoderezhkin2011,Zigmantas2006,Cho2005,Adolphs2006}). The two general theories described above are often stretched beyond their formal region of validity, even though they rarely cease to deliver meaningful physical insights. 
Despite current attention to fine effects of underdamped intramolecular vibrational modes \cite{Christensson2012,Tiwari2013,Chin2013,Womick2011,Butkus2014a} and to details of spectral density shapes \cite{Kreisbeck2012}, and despite recent efforts to introduce a paradigm shift in understanding of the origin of the photosynthetic EET efficiency \cite{Ball2011,Scholes2010,Wolynes2009}, the overall picture of this process, as drawn by the master equations of the F\"{o}rster and Redfield types, remains valid \cite{Strumpfer2012} (see e. g. \cite{Wilkins2015} for quantitative results). Recent theoretical advances enabling exact numerical solutions of some types of energy transfer problems e.g. by Hierarchical equations of motion (HEOM) \cite{Ishizaki2009,Ishizaki2009a,Ishizaki2009b} or by the Time evolving density with orthogonal polynomial algorithm (TEDOPA) approach \cite{Oviedo-Casado2016,Rosenbach2016}, also do not change the picture qualitatively, but rather improve quantitative aspects of our understanding. The approximate F\"{o}rster/Redfield theories will continue to play an important role in our understanding of EET phenomena for years to come. A testimonial to this are recent works attempting to improve their accuracy in various regimes of approximation \cite{Jang2002,Jang2008,Jang2011,Banchi2013,Kimura2016c,Chang2013,Sun2016a}. Our effort, presented in this work, can be understood as an integral part of this trend. We are motivated by our recent study of ultrafast energy transfer between the bright $S_{2}$ state of a carotenoid molecule and the second lowest excited state, the so-called $Q_{x}$ state, of (bacterio)chlorophyll ((B)Chl) and related molecules \cite{Perlik2015}. By means of an explicit non-perturbative treatment of carotenoid vibrational modes, it was possible to show in Ref. 
\cite{Perlik2015} that multi-vibrational-quantum transitions on the carotenoid are responsible for the measured sub 100 fs energy transfer time from carotenoid to chlorophyll. This transition is so fast that it outcompetes even the internal relaxation from $S_{2}\rightarrow S_{1}$ in the carotenoid. Any such explicit inclusion of vibrational degrees of freedom (DOF) into the Hamiltonian becomes quickly numerically too expensive when the size of the molecular system grows. It would therefore be of interest to see if methods treating nuclear DOF as a bath can reproduce the above mentioned ultrafast EET rates. The multi-quantum character of the transitions suggests that any theory of energy transfer in this system has to go beyond second order in system-bath interaction. This excludes ordinary Redfield theory from consideration, and suggests F\"{o}rster and Modified Redfield theories as the possible candidates, due to their (partial in case of Modified Redfield) non-perturbative character with respect to system--bath coupling. In their standard formulations (see e.g. \cite{Valkunas2013} Chap. 11.5 and 11.6) it is assumed that the state of the bath corresponding to the initial condition of the EET is adapted to the excited state potential energy surfaces (PES). This assumption works reasonably well for the F\"{o}rster theory and slow EET rates, where the initial equilibration of the bath in the excited state occurs much faster than the subsequent/competing EET. In such a formulation the equations are, strictly speaking, valid on time-scales on which short time processes, such as bath reorganization or dephasing of coherences, are already over. In case of Modified Redfield theory we find the same limitation. Modified Redfield theory was, moreover, derived only for populations of delocalized excitonic levels \cite{Zhang1998,Yang2002}. 
While the reason for not treating coherence elements of the density matrix is technical (the derivation within the projection operator technique is only possible with an operator projecting on the diagonal elements of the density matrix only), in the time interval in which the theory is valid, the coherence elements of the RDM should be zero anyway. Standard formulations of the F\"{o}rster and Modified Redfield theories are therefore not suitable for the problem of $S_{2}\rightarrow Q_{x}$ transition, because the bath DOF of the carotenoid, when the system is excited to $S_{2}$, have no time to equilibrate. Not only does the transfer from $S_{2}$ to $Q_{x}$ occur fast, the depopulation of $S_{2}$ due to other competing channels is also ultrafast. It is therefore much more reasonable to assume that the bath is close to the state in which it was right after the excitation (experiment is performed with ultrashort pulses). This excited bath state corresponds, in Condon approximation, to the bath equilibrium established prior to excitation on the electronic ground-state. On the excited state PES, such initial state of the bath represents a highly non-equilibrium state, and it is contrasted here with the usual electronic excited state equilibrium assumed in the standard formulations of the two relaxation theories. Our task is therefore to derive the non-equilibrium equivalents of the F\"{o}rster and Modified Redfield theories. The paper is organized as follows: First the theoretical background is reviewed in Sec.~\ref{sec:theoretical_background}. It starts with a derivation of a quantum master equation in Sec.~\ref{sec:deriv_QME} using the projection operator formalism under the assumption that the bath is not in thermal equilibrium initially. Then the general expression is specified for the cases of F\"{o}rster and Modified Redfield population transfer in Sec.~\ref{sec:Foerster_transfer} and \ref{sec:modified_Redfield_transfer}, respectively. 
The results of model calculations, in particular a comparison of the time-dependencies of F\"{o}rster and Modified Redfield rates obtained from standard and non-equilibrium approaches, the extraction of transfer rates from the population dynamics of a system with additional relaxation channels, and the effects of finite excitation pulse width, are discussed in Sec.~\ref{sec:results_discussion}. Special attention is paid to the donor--acceptor energy gap dependence of the transfer rates, which is investigated in Sec.~\ref{sec:frequency_dependence_transfer_rates}. An overview of the main aspects of this article is given in the Conclusions, Sec.~\ref{sec:Conclusions}. \section{Theoretical background} \label{sec:theoretical_background} \subsection{Derivation of quantum master equation using projection operator formalism} \label{sec:deriv_QME} Let us consider a dimer donor--acceptor system which undergoes population transfer subsequent to electronic excitation from the ground state to the singly excited state of the donor. We assume that the initial state of the donor can be factorized into an electronic part, represented by the electronic ground state of the donor, and an equilibrium state of the bath. The bath includes both the intramolecular nuclear DOF of the donor molecule and the DOF of the environment of the donor. The factorization assumption is reasonable, as we assume that the energy gap is optical, and no thermal electronic excitation can therefore exist. The effect of electronic excitation on the bath DOF corresponds to a displacement of the bath oscillators. The total Hamiltonian of the dimer (including its environmental DOF) can be decomposed into a reference Hamiltonian $H^{0}$ and a perturbation Hamiltonian $H'$. Different choices of the reference and perturbation Hamiltonians enable us to apply perturbation theory with validity in different ranges of system parameters.
The Liouville equation for the time evolution of the density matrix $\rho$ is formulated in the interaction picture with respect to $H'$, with the Liouville operator ${\cal L}_I(t)$ including the time-dependence of the perturbation Hamiltonian under the influence of the reference Hamiltonian $H^{0}$: \begin{equation} \dot{\rho}(t)=-i {\cal L}_I(t) \rho(t)=-i \left[ H'(t), \rho(t) \right], \end{equation} where $H'(t)=\exp(iH^0t/\hbar)H'\exp(-iH^0t/\hbar)$. To facilitate a systematic treatment of the combined dynamics of system and bath, the bath degrees of freedom can be traced out by applying a projection operator \cite{Yang2002}. This projection operator combines the projection on the eigenstates $|a\rangle$ of the system, the thermal equilibrium bath density matrix $\rho_{b,eq,g}$ of the electronic ground state, and the trace over the bath, $Tr_q$, in terms of \begin{equation} \label{eq:projection_operator} {\cal P} A=\sum_a \rho_{b,eq,g} | a \rangle \langle a | Tr_q \{ A_{aa} \}. \end{equation} Furthermore, by defining a complementary projection operator ${\cal Q}=1-{\cal P}$ and by inserting the identity ${\cal P}+{\cal Q}$ after ${\cal L}_I$ \cite{May2000}, we obtain the coupled differential equations \begin{equation} \label{eq:Liouville_equation_projectors_P_Q} \begin{split} {\cal P} \dot{\rho}(t)&=-i {\cal P} {\cal L}_I(t) ({\cal P}+{\cal Q}) \rho(t), \\ {\cal Q} \dot{\rho}(t)&=-i {\cal Q} {\cal L}_I(t) ({\cal P}+{\cal Q}) \rho(t). \end{split} \end{equation} Inserting the solution of the second component of Eq.~(\ref{eq:Liouville_equation_projectors_P_Q}) into the first component yields the Nakajima-Zwanzig identity (see e.g. Ref.
\cite{Valkunas2013}) \begin{equation} \label{eq:Nakajima_Zwanzig_identity} {\cal P} \dot{\rho}(t)=-i {\cal I}_{NZ}(t)-i {\cal L}_{NZ}(t) {\cal P} \rho(t)-{\cal K}_{NZ}(t,{\cal P} \rho), \end{equation} where in contrast to the terms ${\cal I}_{NZ}(t)$ and ${\cal L}_{NZ}(t)$ the term ${\cal K}_{NZ}(t,{\cal P} \rho)$ includes a convolution of the time evolution of the density matrix with a memory kernel. However, Eq.~(\ref{eq:Nakajima_Zwanzig_identity}) can be recast into a convolutionless form \begin{equation} \label{eq:Nakajima_Zwanzig_identity_convolution_free} {\cal P} \dot{\rho}(t)=-i {\cal I}_{CL}(t)-{\cal K}_{CL}(t) {\cal P} \rho(t), \end{equation} which after taking the trace corresponds to \begin{equation} \label{eq:QME_solution} \begin{split} &\sum_a | a \rangle \langle a | Tr_q \{ \dot{\rho}_{aa}(t) \}= \\ &-i \sum_a | a \rangle \langle a | Tr_q \{ ({\cal L}_I(t) \rho_{b,eq,g})_{aa} \} \sigma_{I}(0) \\ &-\int_0^t d\tau \sum_a | a \rangle \langle a | Tr_q \{ ({\cal L}_I(t) {\cal Q} {\cal L}_I(\tau) \rho_{b,eq,g})_{aa} \} \sigma_{I}(t). \end{split} \end{equation} By identifying the remaining terms as \begin{equation} \label{eq:QME_R-term} \begin{split} &{\cal R}(t)={\cal K}_{CL}(t) \\ &=\int_0^t d\tau \sum_a | a \rangle \langle a | Tr_q \{ ({\cal L}_I(t) {\cal Q} {\cal L}_I(\tau) \rho_{b,eq,g})_{aa} \}, \end{split} \end{equation} and \begin{equation} \label{eq:QME_I-term_first_order} {\cal I}(t)=-i {\cal I}_{CL}(t)=-i \sum_a | a \rangle \langle a | Tr_q \{ ({\cal L}_I(t) \rho_{b,eq,g})_{aa} \} \sigma_{I}(0), \end{equation} the quantum master equation (QME) for the system density matrix $\sigma_I=\sum_a | a \rangle \langle a | Tr_q \{ \rho_{aa} \}$ becomes \cite{Jang2008} \begin{equation} \label{eq:QME_system} \dot{\sigma}_I(t)=-{\cal R}(t)\sigma_I(t)+{\cal I}(t). \end{equation} Note that $\sigma_I(t)$ remains diagonal during the time evolution because all operators in Eq.~\ref{eq:QME_system} are diagonal by definition. 
Under the assumption that the perturbation Hamiltonian $H'$ entering ${\cal L}_I$ is off-diagonal in the basis of the electronic eigenstates, it is possible to formulate ${\cal R}(t)$ as a product of two components. One of them is a population transfer superoperator \cite{Seibt2014} acting only on the system density matrix, and the other one accounts for the influence of the bath. The inhomogeneous term ${\cal I}(t)$ includes the commutator of the electronic component of $H'$ with $\sigma_I(0)$. \subsection{Calculation of F\"{o}rster transfer rates} \label{sec:Foerster_transfer} For the case of F\"{o}rster transfer between a donor and an acceptor, where the resonance Coulomb couplings $J_{nm}$ between the molecular transitions on molecules $n$ and $m$ are small compared to the system--bath coupling, the matrix elements of the perturbation Hamiltonian read as \begin{equation} H'_{mn}=J_{m n} |m \rangle \langle n |. \end{equation} Besides the standard description of F\"{o}rster theory, a non-equilibrium generalization has been derived previously \cite{Jang2002}, however without applying the cumulant expansion to obtain a compact line-shape-function-based formulation. We follow the derivation steps given in \cite{Yang2002} under the modified assumption that the bath in the excited electronic state is not equilibrated initially. It rather corresponds to the equilibrium bath related to the electronic ground state. To formulate rate expressions for further evaluation via cumulant expansion, Eq.~(\ref{eq:QME_solution}) is expressed in terms of tensor elements accounting for population transfer between an initially populated donor molecule (index $m$) and an acceptor (index $n$) as \begin{equation} \label{eq:Foerster_rate_homogeneous_term_matrix_elements_general_expression} \begin{split} &{\cal R}_{nn mm}(t)=2 \Re \int^{t}_{0} {\rm d}t' Tr_q \{ \exp(i H_{m}^{0} t) H'_{m n} \\ &\times \exp(-i H_{n}^{0} t) \exp(i H_{n}^{0} t') H'_{n m} \exp(-i H_{m}^{0} t') \rho_{b,eq,g} \}.
\end{split} \end{equation} In contrast to Eq.~(\ref{eq:QME_R-term}), in Eq.~(\ref{eq:Foerster_rate_homogeneous_term_matrix_elements_general_expression}) we take twice the real part of the expression, as the complex conjugate tensor element yields the complex conjugate contribution. Furthermore, in deriving Eq.~(\ref{eq:Foerster_rate_homogeneous_term_matrix_elements_general_expression}) we assume that only the unity operator entering ${\cal Q}$ remains, as the term containing ${\cal P}$ becomes zero. Because of the off-diagonal form of the perturbation Hamiltonian and the initially diagonal form of the density matrix, the selection of diagonal elements by the projection operator given in Eq.~(\ref{eq:projection_operator}) makes the respective expression vanish. The Hamiltonian operators $H_k$ of the donor ($k=m \in \{ 1,2 \}$) and the acceptor ($k=n \in \{ 1,2 \}, n \neq m$) contain electronic excitation energies $e^0_k$, reorganization energies $l_k$, bath phonon energies $e^{ph}_k$ and energy gap coordinates $u_k$ associated with system--bath coupling \cite{Yang2002}. The bath component of ${\cal R}_{nn mm}(t)$ can be formulated in terms of time-ordered exponentials containing integrals over energy gap coordinates in the interaction picture (see e.g. \cite{Mukamel1995Book}) \begin{equation} \label{eq:time_evolution_energy_gap_coordinates} u_k(\tau)=\exp(i e^{ph}_k \tau) u_k \exp(-i e^{ph}_k \tau). \end{equation} In the framework of the second-order cumulant expansion, the line shape functions can be identified as \begin{equation} \label{eq:line_shape_function_correlation_function} g_{k}(\tau)=\int_{0}^{\tau} d\tau' \int_{0}^{\tau'} d\tau'' Tr_q \{ u_k(\tau'') u_k(0) \} \end{equation} under the assumption that the bath fluctuations associated with the singly excited states of donor and acceptor can be considered as uncorrelated, so that the cumulant expansion only yields line shape functions related to either the donor or the acceptor.
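For the numerical work below, the line shape functions are most conveniently generated from the bath spectral density via Eq.~(\ref{eq:line_shape_function}). The following Python sketch is illustrative only (assumptions: $\hbar=1$, frequencies in rad/fs, bath parameters as in Sec.~\ref{sec:time_dependence_transfer_rates}); since $J_i(-\omega)=-J_i(\omega)$, both integrands are even in $\omega$ and the integrals can be folded onto $\omega>0$.

```python
import numpy as np

CM = 2.0 * np.pi * 2.99792458e-5          # 1 cm^-1 expressed in rad/fs

def trapz(y, x):
    # simple trapezoidal rule (avoids NumPy-version differences)
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def J_total(w, lam_bo, Lam_bo, lam_uo=0.0, w_uo=1.0, gam_uo=1.0):
    # overdamped Brownian oscillator plus optional underdamped mode,
    # cf. Eqs. (spectral_density_Brownian_oscillator) and
    # (spectral_density_underdamped_oscillator); arguments in rad/fs
    j = 2.0 * lam_bo * w * Lam_bo / (w**2 + Lam_bo**2)
    j += 2.0 * lam_uo * w_uo**2 * w * gam_uo / ((w**2 - w_uo**2)**2 + (w * gam_uo)**2)
    return j

def lineshape(ts, Jfun, kT, wmax=20000.0 * CM, nw=200001):
    # g(t) by quadrature of Eq. (line_shape_function), folded onto w > 0
    w = np.linspace(wmax / nw, wmax, nw)
    J = Jfun(w)
    g = []
    for t in ts:
        re = (1.0 - np.cos(w * t)) / w**2 / np.tanh(w / (2.0 * kT)) * J
        im = (np.sin(w * t) - w * t) / w**2 * J
        g.append((trapz(re, w) + 1j * trapz(im, w)) / np.pi)
    return np.array(g)

# carotenoid low-frequency bath: lambda = 67 cm^-1, tau = 30 fs, T = 293 K
lam, Lam, kT = 67.0 * CM, 1.0 / 30.0, 203.6 * CM
g = lineshape([300.0, 400.0], lambda w: J_total(w, lam, Lam), kT)
slope = (g[1].imag - g[0].imag) / 100.0   # approaches -lambda at long times
```

As a consistency check, the long-time slope of $\Im g_i(t)$ reproduces $-\lambda_{BO,i}$, i.e. the imaginary part of the line shape function grows linearly with the reorganization energy as prefactor at long times.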
Eq.~(\ref{eq:QME_R-term}) then becomes \begin{equation} \label{eq:Foerster_rate_homogeneous_term_line_shape_functions} \begin{split} &{\cal R}_{nn mm,bath,noneq}(t)= 2 |J_{mn}|^2 \\ &\Re \bigg( \int_0^t dt' \exp(i (e^0_m-e^0_n) (t-t')) \exp(i (l_m-l_n) (t-t')) \\ &\exp \left( -g_{n}(t-t')-g_{m}(t-t') \right. \\ &\left. +2i\Im(g_{m}(t))-2i\Im(g_{m}(t')) \right) \bigg). \end{split} \end{equation} For comparison, the analogous expression in the case of the standard F\"{o}rster approach reads \cite{Yang2002} \begin{equation} \label{eq:standard_Foerster_rate_homogeneous_term_line_shape_functions} \begin{split} &{\cal R}_{nn mm,bath,std}(t)= 2 |J_{mn}|^2 \\ &\Re \bigg( \int_0^t dt' \exp(i (e^0_m-e^0_n) (t-t')) \exp(i (-l_m-l_n) (t-t')) \\ &\exp \left( -g_{n}(t-t')-g_{m}(t-t') \right) \bigg). \end{split} \end{equation} The F\"{o}rster transfer rate can also be expressed in terms of an integral over the product of absorption and complex conjugate emission component, which in the case of the standard description correspond to \begin{equation} A_n(t')=\exp(-i (e^0_n+l_n) t') \exp \left(-g_{n}(t')\right) \end{equation} and \begin{equation} F_m(t')=\exp(-i (e^0_m-l_m) t') \exp \left(-g^{*}_{m}(t')\right), \end{equation} respectively. The rate expression then becomes \begin{equation} \label{eq:standard_Foerster_rate_product_absorption_emission} {\cal R}_{nn mm,bath,std}(t)= 2 |J_{mn}|^2 \Re \left( \int_0^t dt' F^{*}_m(t') A_n(t') \right). \end{equation} Note that in the limit of large time argument $t-t'$, the integrand functions from Eqs.~(\ref{eq:Foerster_rate_homogeneous_term_line_shape_functions}) and (\ref{eq:standard_Foerster_rate_homogeneous_term_line_shape_functions}) become equivalent, because $\lim_{\tau \to \infty}\dot{g}_k(\tau)=-il_k$ \cite{Zhang1998}. However, this limit is not applicable for small values of $t-t'$, so that the different approaches lead to different results for $t'$ approaching the upper integration border.
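The overlap form of Eq.~(\ref{eq:standard_Foerster_rate_product_absorption_emission}) can be evaluated in a few lines. The Python sketch below is a minimal illustration, not the production calculation: it sets $\hbar=1$, works in rad/fs, and replaces the general line shape of Eq.~(\ref{eq:line_shape_function_correlation_function}) by the closed-form high-temperature overdamped Brownian oscillator expression $g(t)=(2\lambda k_BT/\Lambda^2 - i\lambda/\Lambda)(e^{-\Lambda t}+\Lambda t-1)$, an assumed stand-in; the parameter values follow Sec.~\ref{sec:time_dependence_transfer_rates}.

```python
import numpy as np

CM = 2.0 * np.pi * 2.99792458e-5          # 1 cm^-1 in rad/fs (hbar = 1)

def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def g_ht(t, lam, Lam, kT):
    # closed-form high-temperature overdamped Brownian oscillator line shape
    # (an assumed stand-in for Eq. (line_shape_function_correlation_function))
    return (2.0 * lam * kT / Lam**2 - 1j * lam / Lam) * (np.exp(-Lam * t) + Lam * t - 1.0)

def foerster_rate_std(t, J, e_don, l_don, g_don, e_acc, l_acc, g_acc, n=6000):
    # R(t) = 2 |J|^2 Re int_0^t dt' F*_don(t') A_acc(t'),
    # cf. Eq. (standard_Foerster_rate_product_absorption_emission)
    tp = np.linspace(0.0, t, n)
    F = np.exp(-1j * (e_don - l_don) * tp) * np.exp(-np.conj(g_don(tp)))  # emission
    A = np.exp(-1j * (e_acc + l_acc) * tp) * np.exp(-g_acc(tp))           # absorption
    return 2.0 * abs(J)**2 * np.real(trapz(np.conj(F) * A, tp))

# parameters: J = -119 cm^-1, site gap 100 cm^-1, donor bath 67 cm^-1 / 30 fs,
# acceptor bath 60 cm^-1 / 47 fs, T = 293 K (kT ~ 203.6 cm^-1)
kT = 203.6 * CM
J = -119.0 * CM
e1, l1, L1 = 100.0 * CM, 67.0 * CM, 1.0 / 30.0    # donor (molecule 1)
e2, l2, L2 = 0.0, 60.0 * CM, 1.0 / 47.0           # acceptor (molecule 2)
g1 = lambda t: g_ht(t, l1, L1, kT)
g2 = lambda t: g_ht(t, l2, L2, kT)

k_12 = foerster_rate_std(300.0, J, e1, l1, g1, e2, l2, g2)  # downhill transfer
k_21 = foerster_rate_std(300.0, J, e2, l2, g2, e1, l1, g1)  # uphill back-transfer
```

For a downhill site-energy gap the forward rate exceeds the uphill back-transfer rate, in line with the detailed-balance-like asymmetry expected for this kinetic scheme.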
An approach analogous to the derivation of the homogeneous term given in Eq.~(\ref{eq:Foerster_rate_homogeneous_term_line_shape_functions}), applied to the inhomogeneous term from Eq.~(\ref{eq:QME_I-term_first_order}), leads to \begin{equation} \label{eq:Foerster_rate_homogeneous_term_second_order} \begin{split} &{\cal I}_{nn mm,bath}(t)= 2 J_{mn} \Im \left( \exp(-i (e^0_m-e^0_n) t) \right. \\ &\left. \exp(-i (l_m-l_n) t) \exp(-g^{*}_{n}(t)-g_{m}(t)) \right). \end{split} \end{equation} The evolution of the system can be treated separately by introducing a superoperator for population transfer from state $m$ to state $n$, which can be expressed in terms of matrix elements of an operator $\Theta$ with \begin{equation} \label{eq:definition_Theta_Foerster} \Theta_{mn}=|m \rangle \langle n|. \end{equation} This off-diagonal operator accounts for the influence of the electronic component of the perturbation Hamiltonian. The matrix elements of the relaxation superoperator are \cite{Seibt2014} \begin{equation} {\cal K}_{nnmm} \; \bullet = [\Theta_{mn},\Theta_{nm} \; \bullet - \bullet \; \Theta_{mn}]. \end{equation} These matrix elements enter in ${\cal R}_{nn mm}(t)={\cal R}_{nn mm,bath}(t) {\cal K}_{nnmm}$. Note that the selection of diagonal elements in the system eigenbasis according to the definition of the projection operator from Eq.~(\ref{eq:projection_operator}) implicitly enters in the relaxation superoperator. For the inhomogeneous term, one obtains ${\cal I}_{nn mm}(t)=\sum_a | a \rangle \langle a | \left( {\cal I}_{nn mm,bath}(t) [\Theta_{nm}, \sigma_I(0)] \right)_{aa}$. Selecting diagonal elements from the off-diagonal commutator expressions makes ${\cal I}_{nn mm}(t)$ vanish. The rate equation can then be formulated as \begin{equation} \label{eq:rate_equation} \dot{\sigma}_{I,nn}(t)=-\sum_m {\cal R}_{nnmm}(t)\sigma_{I,mm}(t).
\end{equation} \subsection{Calculation of Modified Redfield transfer rates} \label{sec:modified_Redfield_transfer} In the case that the system--bath coupling is sufficiently small, it can be treated in a perturbative way, while the resonance coupling enters the description of the system via a transformation to the so-called exciton basis. The exciton eigenstates $|k\rangle$ are expressed in terms of linear combinations of the localized singly excited states $|n\rangle$ as \begin{equation} \label{eq:transformation_exciton_basis} | k \rangle = \sum_n j_{kn} | n \rangle. \end{equation} The reference Hamiltonian contains exciton eigenenergies $E_k^0$, phonon energies $e_n^{ph}$, reorganization energies $l_n$ and energy gap coordinates $u_n$, where $l_n$ and $u_n$ are weighted by products of transformation coefficients $a_{k_1 k_2}(n)=j_{k_1 n} j_{k_2 n}, \{ k_1, k_2 \} \in \{ k, k' \}$ with equal indices. It reads as \begin{equation} \label{eq:reference_Hamiltonian} H_k^0=\left[ E_k^0 + \sum_n (a_{kk}(n) l_n + e_n^{ph} + a_{kk}(n) u_n) \right] \left| k \rangle \langle k \right|. \end{equation} In contrast, the perturbation Hamiltonian \begin{equation} \label{eq:perturbation_Hamiltonian} H'_{k k'}=H_{k k'}^{el-ph}=(1-\delta_{k k'}) \left[ \sum_n a_{kk'}(n) u_n \right] \left| k \rangle \langle k' \right| \end{equation} contains products of transformation coefficients with different indices. The rate can be expressed in terms of line shape functions $g_{n,k_1 k_2 k_3 k_4}(\tau)=a_{k_1 k_2}(n) a_{k_3 k_4}(n) g_{n}(\tau)$ and reorganization energies $l_{n,k_1 k_2 k_3 k_4}=a_{k_1 k_2}(n) a_{k_3 k_4}(n) l_{n}$ with the shorthand notations $g_{k_1 k_2 k_3 k_4}(\tau)=\sum_n g_{n,k_1 k_2 k_3 k_4}(\tau)$ and $l_{k_1 k_2 k_3 k_4}=\sum_n l_{n,k_1 k_2 k_3 k_4}$. Details of the derivation are given in the Supporting Information.
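For a dimer, the transformation coefficients $j_{kn}$ of Eq.~(\ref{eq:transformation_exciton_basis}) follow from diagonalizing the $2\times2$ electronic Hamiltonian, and the products $a_{kk'}(n)=j_{kn}j_{k'n}$ control the strength of the perturbation in Eq.~(\ref{eq:perturbation_Hamiltonian}). A short Python sketch (energies in cm$^{-1}$; the two gap values are those used in the model calculations of this work) makes the scaling of the off-diagonal weight with the site-energy gap explicit:

```python
import numpy as np

def exciton_basis(e1, e2, J):
    # diagonalize the 2x2 electronic Hamiltonian; row k of the returned
    # coefficient matrix holds j_{kn} of Eq. (transformation_exciton_basis)
    E, C = np.linalg.eigh(np.array([[e1, J], [J, e2]]))
    return E, C.T

J = -119.0                      # resonance coupling in cm^-1 (from the text)
for gap in (100.0, 1500.0):     # site-energy gaps used in the model calculations
    E, j = exciton_basis(0.0, -gap, J)
    a_offdiag = j[0, 0] * j[1, 0]        # a_{kk'}(n) = j_{kn} j_{k'n}
    print(gap, E[1] - E[0], abs(a_offdiag))
```

The computed exciton splitting for the $\unit[-100]{cm^{-1}}$ gap reproduces the $\approx\unit[260]{cm^{-1}}$ exciton energy gap quoted in Sec.~\ref{sec:time_dependence_transfer_rates}, and $|a_{kk'}(n)|$ drops roughly sixfold between the two gap values, which is what renders the perturbative treatment of the system--bath coupling increasingly well justified for strongly heterogeneous dimers.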
As in the case of the F\"{o}rster description, also in the integrand of the Modified Redfield rate expression we can identify an absorption component $A_{k}$ and an emission component $F_{k'}$, the latter taken as complex conjugate. However, there is also an additional component $N_{k k'}$ consisting of line shape function derivatives, so that the rate expression reads as \begin{equation} \label{eq:modified_Redfield_rate_R} \begin{split} &{\cal R}_{kk k'k',bath,noneq}(t)= 2 \Re \left( \phantom{\int} \right. \\ &\left. \int_0^t dt' F^{*}_{k'}(t,t') A_{k}(t,t') N_{k k'}(t,t') \right), \end{split} \end{equation} with \begin{equation} \label{eq:fluorescence_modified_Redfield_noneq} \begin{split} &F_{k'}(t,t')= \exp(-i E_{k'}^0 (t-t')) \\ &\exp(-i l_{k'k'k'k'} (t-t')) \exp \left( -g^{*}_{k'k'k'k'}(t-t') \right. \\ &\left. -2i\Im(g_{k'k'k'k'}(t))+2i\Im(g_{k'k'k'k'}(t')) \right), \end{split} \end{equation} \begin{equation} \label{eq:absorption_modified_Redfield_noneq} \begin{split} &A_{k}(t,t')= \exp(-i E_{k}^0 (t-t')) \\ &\exp(-i l_{kkkk} (t-t')) \exp \left( -g_{kkkk}(t-t') \right), \end{split} \end{equation} and \begin{equation} \label{eq:Nterm_modified_Redfield_rate_noneq} \begin{split} &N_{k k'}(t,t')= \exp \left( 2 g_{kkk'k'}(t-t') \right. \\ &\left. -2i \Im(g_{kkk'k'}(t))+2i\Im(g_{kkk'k'}(t')) \right) \\ &\times \{ \ddot{g}_{k'kkk'}(t-t') \\ &-[\dot{g}_{k'kk'k'}(t-t')-\dot{g}_{k'kkk}(t-t') \\ &-2i\Im(\dot{g}_{k'kk'k'}(t))] \times [\dot{g}_{k'k'kk'}(t-t') \\ &-\dot{g}_{kkkk'}(t-t')-2i \Im(\dot{g}_{k'k'kk'}(t'))] \}. \end{split} \end{equation} The standard Modified Redfield rate can be expressed as \begin{equation} \label{eq:standard_modified_Redfield_rate_R} \begin{split} &{\cal R}_{kk k'k',bath,std}(t)= 2 \Re \left( \phantom{\int} \right. \\ &\left. 
\int_0^t dt' \tilde{F}^{*}_{k'}(t,t') \tilde{A}_{k}(t,t') \tilde{N}_{k k'}(t,t') \right), \end{split} \end{equation} with \begin{equation} \label{eq:fluorescence_modified_Redfield_std} \begin{split} &\tilde{F}_{k'}(t,t')= \exp(-i E_{k'}^0 (t-t')) \\ &\exp(+i l_{k'k'k'k'} (t-t')) \exp \left( -g^{*}_{k'k'k'k'}(t-t') \right), \end{split} \end{equation} \begin{equation} \label{eq:absorption_modified_Redfield_std} \begin{split} &\tilde{A}_{k}(t,t')= \exp(-i E_{k}^0 (t-t')) \\ &\exp(-i l_{kkkk} (t-t')) \exp \left( -g_{kkkk}(t-t') \right), \end{split} \end{equation} and \begin{equation} \label{eq:Nterm_modified_Redfield_rate_std} \begin{split} &\tilde{N}_{k k'}(t,t')=\exp(2i l_{kkk'k'} (t-t')) \\ &\exp ( +2 g_{kkk'k'}(t-t') ) \times \{ \ddot{g}_{k'kkk'}(t-t') \\ &-[\dot{g}_{k'kk'k'}(t-t')-\dot{g}_{k'kkk}(t-t') \\ &+2i l_{k'kk'k'}] \times [\dot{g}_{k'k'kk'}(t-t') \\ &-\dot{g}_{kkkk'}(t-t')+2i l_{k'k'kk'}] \}. \end{split} \end{equation} As in the F\"{o}rster description, the integrand functions of the rate expressions from the non-equilibrium and standard treatments become identical in the limit of large time arguments $t-t'$. Furthermore, as in the F\"{o}rster case, the inhomogeneous term vanishes, and the homogeneous component of the population transfer dynamics can be described by introducing a population transfer superoperator. The respective expressions given in Eqs.~(\ref{eq:definition_Theta_Foerster})--(\ref{eq:rate_equation}) are also valid in the Modified Redfield case after replacing $n$ and $m$ with $k$ and $k'$. \section{Results and discussion} \label{sec:results_discussion} \subsection{Time-dependence of transfer rates} \label{sec:time_dependence_transfer_rates} In this section we calculate transfer rates for a model donor--acceptor system motivated by previously studied carotenoid-chlorophyll and carotenoid-purpurin dyads \cite{Perlik2015}. The dyads in Ref.
\cite{Perlik2015} are strongly heterogeneous dimers with large donor--acceptor energy gaps compared to the excitonic coupling. Because we concentrate on studying our newly developed rate theory in this work, we choose the system parameters with a certain freedom to demonstrate the properties of the rates. The parameters of the dyads from Ref. \cite{Perlik2015} are to be taken as a motivation only. Nevertheless, to distinguish the different characters of the molecules composing the dimer, we refer to them as carotenoid (Car) and chlorophyll (Chl), respectively. All calculations are performed at room temperature, 293 K, and we ignore the so-called static disorder of the transition energies of the molecules. The resonance coupling is set to $J=\unit[-119]{cm^{-1}}$ as in Ref. \cite{Perlik2015}. Let us first neglect the underdamped oscillations characteristic of the carotenoid energy gap correlation function. Such underdamped oscillations with reorganization energy $\lambda_{UO,i}$, central frequency $\omega_{UO,i}$ and damping constant $\gamma_{UO,i}$ can be included in terms of a spectral density \begin{equation} \label{eq:spectral_density_underdamped_oscillator} J_{UO,i}(\omega)=2 \lambda_{UO,i} \frac{\omega^2_{UO,i}\omega \gamma_{UO,i}}{(\omega^2-\omega^2_{UO,i})^2+\omega^2\gamma_{UO,i}^2}, \end{equation} where $i \in \{ \rm{Car},\rm{Chl} \}$. To neglect this spectral density contribution, we set $\lambda_{UO,Car}$ to zero at first, while $\lambda_{UO,Chl}$ is always taken as zero in this work. We describe the energy gap fluctuations of both components of the dimer by low-frequency overdamped Brownian oscillator spectral densities with reorganization energy $\lambda_{BO,i}$ and damping constant $\Lambda_{BO,i}$ inversely proportional to the decay time $\tau_{BO,i}$ \begin{equation} \label{eq:spectral_density_Brownian_oscillator} J_{BO,i}(\omega)=2 \lambda_{BO,i} \frac{\omega \Lambda_{BO,i}}{\omega^2+\Lambda_{BO,i}^2}, \; i \in \{ \rm{Car},\rm{Chl} \}.
\end{equation} We choose $\lambda_{BO,\rm{Car}}=\unit[67]{cm^{-1}}$, $\lambda_{BO,\rm{Chl}}=\unit[60]{cm^{-1}}$, $\tau_{BO,\rm{Car}}=\unit[30]{fs}$ and $\tau_{BO,\rm{Chl}}=\unit[47]{fs}$ for the calculations. The line shape function components are obtained from the total spectral density $J_{i}(\omega)=J_{UO,i}(\omega)+J_{BO,i}(\omega)$ via the standard formula \begin{equation} \label{eq:line_shape_function} \begin{split} g_{i}(t)&=\frac{1}{2 \pi} \int^{\infty}_{-\infty} d \omega \frac{1-\cos(\omega t)}{\omega^2} \coth \left( \frac{\omega}{2 k_B T} \right) J_{i}(\omega) \\ &+\frac{i}{2 \pi} \int^{\infty}_{-\infty} d \omega \frac{\sin(\omega t)-\omega t}{\omega^2} J_{i}(\omega), \end{split} \end{equation} where $i \in \{ \rm{Car},\rm{Chl} \}$. To demonstrate the differences between the time-dependencies of rates calculated under the standard equilibrium and the non-equilibrium bath conditions, we first assume the energy gap between the dimer site energies to take the value $\omega_{21}=\unit[-100]{cm^{-1}}$. This value is comparable with the reorganization energies of the Brownian oscillators. \begin{figure}[h] \includegraphics*[width=\columnwidth]{Figure1.eps} \caption{\label{fig:combined_figure_time_dependent_rates_without_vibrations} Upper row: Time-dependence of population transfer rates from the standard formulation of the (a) F\"{o}rster and (b) Modified Redfield approach. Lower row: Time-dependence of population transfer rates from the non-equilibrium formulation of the (c) F\"{o}rster and (d) Modified Redfield approach. The black and red lines correspond to $k_{1 \to 2}$ and $k_{2 \to 1}$ in the F\"{o}rster case and to $k_{\alpha \to \beta}$ and $k_{\beta \to \alpha}$ in the Modified Redfield case, respectively. The difference between site energies was chosen as $\omega_{21}=\unit[-100]{cm^{-1}}$. Intramolecular vibrations of the carotenoid were not taken into account in terms of underdamped oscillators.
The other parameters are specified in Sec.~\ref{sec:time_dependence_transfer_rates}. } \end{figure} Time-dependent rates from both the F\"{o}rster and the Modified Redfield approach are shown in Fig.~\ref{fig:combined_figure_time_dependent_rates_without_vibrations}. The two theories are formulated in different bases and they refer to transitions between states of different kinds. We denote states localized on individual molecules (local basis) by numbers ($1,2$) and the delocalized eigenstates of the dimer (excitonic basis) by Greek letters ($\alpha,\beta$). For the definition of these states, see the Supporting Information. The standard F\"{o}rster theory (stFT) \cite{Yang2002} results in the rates $k_{1 \to 2}=-{\cal R}_{22 11}$ and $k_{2 \to 1}=-{\cal R}_{11 22}$ corresponding to the black and red line in Fig.~\ref{fig:combined_figure_time_dependent_rates_without_vibrations} (a), respectively. Both rates reach their asymptotic values already at about $\unit[100]{fs}$. These asymptotic values are different from those of the rates $k_{\alpha \to \beta}=-{\cal R}_{\beta \beta \alpha \alpha}$ and $k_{\beta \to \alpha}=-{\cal R}_{\alpha \alpha \beta \beta}$ obtained from the standard Modified Redfield theory (stMRT) \cite{Yang2002} (see Fig.~\ref{fig:combined_figure_time_dependent_rates_without_vibrations} (b)). Even though the detailed balance condition in the strict sense only applies to the rates obtained from standard Redfield theory (stRT), the ratio of the asymptotic values of the rates from Fig.~\ref{fig:combined_figure_time_dependent_rates_without_vibrations} (b) exhibits a deviation of less than $\SI{2}{\percent}$ from the thermal population ratio of the exciton states with an energy gap of $\unit[260]{cm^{-1}}$ for the given values of $J$ and $\omega_{21}$.
Similar findings are obtained for the time-dependencies of the non-equilibrium F\"{o}rster theory (noneqFT) rates and the non-equilibrium Modified Redfield theory (noneqMRT) rates, as shown in the subfigures (c) and (d) of Fig.~\ref{fig:combined_figure_time_dependent_rates_without_vibrations}, respectively. In the case of noneqMRT the deviation from detailed balance is even smaller. The noneqFT rates take longer to reach their asymptotic values than those of the stFT. This effect is clearly recognizable by comparing the back-transfer rates (red lines). Note that the finding of similar asymptotic values for the rates from the standard and the non-equilibrium approach cannot be generalized, as discussed in Sec.~\ref{sec:theoretical_background}. By comparing Fig.~\ref{fig:combined_figure_time_dependent_rates_without_vibrations} (b) and Fig.~\ref{fig:combined_figure_time_dependent_rates_without_vibrations} (d) one finds differences between noneqMRT and stMRT in the oscillatory dynamics during the early time evolution. Damping of these oscillations indicates bath equilibration. At the upper border of the displayed time interval the rates from noneqMRT and stMRT reach similar values. Close to this upper interval border the oscillations of the rates are almost completely damped out, so that the rates can be approximately considered as asymptotic. \begin{figure}[h] \includegraphics*[width=\columnwidth]{Figure2.eps} \caption{\label{fig:combined_figure_time_dependent_rates_with_vibrations} F\"{o}rster and Modified Redfield rates displayed in analogy to Fig.~\ref{fig:combined_figure_time_dependent_rates_without_vibrations}, however with inclusion of intramolecular vibrations in the calculation (parameters specified in Sec.~\ref{sec:time_dependence_transfer_rates}) and under the assumption of an energy gap $\omega_{21}=\unit[-1500]{cm^{-1}}$. } \end{figure} In the next step we include intramolecular vibrational modes characteristic of the carotenoid energy gap correlation function.
We introduce an underdamped oscillator with reorganization energy $\lambda_{UO,Car}=\unit[1800]{cm^{-1}}$, vibrational frequency $\omega_{UO,Car}=\unit[1390]{cm^{-1}}$ and damping constant $\gamma_{UO,Car}$ inversely proportional to the decay time $\tau_{UO,Car}=\unit[200]{fs}$. For simplicity, our model carotenoid exhibits only one effective high-frequency mode. We again study the difference between the rates obtained from the standard and our non-equilibrium approaches. We increase the gap between the donor and acceptor site energies to $\omega_{21}=\unit[-1500]{cm^{-1}}$. This energy corresponds more closely to the actual case of the carotenoid-chlorophyll dyad \cite{Perlik2015}. The time-dependencies of the rates are now more influenced by the presence of the intramolecular carotenoid vibrations than by the low-frequency Brownian oscillator contributions of the environment. The resulting stFT rates are displayed in Fig.~\ref{fig:combined_figure_time_dependent_rates_with_vibrations} (a). Similar to the results from calculations without vibrations, the respective rates converge toward asymptotic values. Rudimentary oscillations are recognizable only up to about $\unit[100]{fs}$. Analogous findings are obtained for the Modified Redfield rates. The rate $k_{\alpha \to \beta}$ reaches a larger asymptotic value than the corresponding F\"{o}rster rate $k_{1 \to 2}$ (see Fig.~\ref{fig:combined_figure_time_dependent_rates_with_vibrations} (b)). If a non-equilibrium description is chosen, damped oscillations appear throughout the considered time interval (see Fig.~\ref{fig:combined_figure_time_dependent_rates_with_vibrations} (c) and (d)). These oscillations can be attributed to the intramolecular vibrations of the carotenoid. By considering the evolution of the average of the non-equilibrium rates and disregarding the deviations caused by the oscillations, one finds that this average also approaches an asymptotic value.
However, the convergence of the average rate takes place considerably more slowly than in the case of the standard description, thereby indicating the equilibration process. As the excitonic coupling is much smaller than the reorganization energies of the monomer components, the criterion for applicability of the Modified Redfield approach seems not to be fulfilled at first sight. However, a more careful examination shows that it is not the size of the reorganization energies by themselves, but rather the size of the related off-diagonal system--bath coupling elements that matters for estimating whether the criterion for applicability of the Modified Redfield approach is fulfilled. The off-diagonal system--bath coupling scales with a product of the coefficients of the transformation between the localized basis and the exciton basis, which becomes smaller when the energy gap between donor and acceptor increases. Therefore, in the case of the parameter values specified in the discussion above, the requirements for the Modified Redfield approximation are likely to apply, rather than those of the F\"{o}rster approximation. \subsection{Extraction of transfer rates from population dynamics with laser pulse effects} \label{sec:extraction_transfer_rates} When considering underdamped intramolecular vibrational modes, the transfer rates $k_{2 \to 1}$ and $k_{\beta \to \alpha}$ obtained from the two non-equilibrium approaches may exhibit negative values in the very early time evolution up to about $\unit[20]{fs}$. If this effect is not sufficiently compensated by the complementary rates $k_{1 \to 2}$ and $k_{\alpha \to \beta}$, it can lead to unphysical populations outside the range between 0 and 1. The presence of such unphysical populations seems to be related to the positivity issue, reported previously for Markovian quantum master equations and explained by transient non-Markovian effects before sufficient relaxation of the bath has taken place \cite{Suarez1992,Gaspard1999,Cheng2005}.
These effects can average out under the smoothing influence of additional relaxation channels and excitation by a laser pulse with finite width, as discussed below. \begin{figure}[h] \includegraphics*[width=\columnwidth]{Figure3.eps} \caption{\label{fig:level_scheme} Level scheme of a donor--acceptor complex of carotenoid and chlorophyll, where, in the F\"{o}rster description, after electronic excitation from $S_0$ to $S_2$, intermolecular population transfer from $S_2$ to $Q_x$ is facilitated by resonant emission of the carotenoid and absorption of the chlorophyll component. The fast competing intramolecular population transfer channels $S_2 \to S_1$ and $Q_x \to Q_y$ limit the carotenoid emission to the early stage of the equilibration process in $S_2$. } \end{figure} To describe a donor--acceptor complex of carotenoid and chlorophyll molecules by our dimer model system, we identify the states $1$ and $2$ with the $S_2$ state of the carotenoid and the $Q_x$ state of the chlorophyll component, respectively. Decay of the populations in $S_2$ and $Q_x$ accounts for intramolecular population transfer from $S_2$ to $S_1$ and from $Q_x$ to $Q_y$, as sketched in Fig.~\ref{fig:level_scheme}. These competing population transfer channels allow intermolecular population transfer only at an early stage of the equilibration process in $S_2$. Furthermore, effects of the finite pulse width of the electronic excitation from the carotenoid ground state $S_0$ to $S_2$ also play a role in the population dynamics.
Under the assumptions that initially only $S_0$ is populated and that the pulses are weak enough not to induce a significant depopulation of $S_0$, the dynamics of the populations $p_{S_2}$, $p_{S_1}$, $p_{Q_x}$ and $p_{Q_y}$ can be expressed in terms of the rates $k_{S_2 Q_x}$, $k_{S_2 S_1}$ and $k_{Q_x Q_y}$ as \begin{equation} \begin{split} &\dot{\vec{p}}(t') =\frac{d}{dt'} \left( \begin{array}{cccc} p_{S_2}(t') \\ p_{S_1}(t') \\ p_{Q_x}(t') \\ p_{Q_y}(t') \end{array} \right) \\ &=\left( \begin{array}{cccc} -k_{S_2 Q_x}(t')-k_{S_2 S_1} & 0 & 0 & 0 \\ k_{S_2 S_1} & 0 & 0 & 0 \\ k_{S_2 Q_x}(t') & 0 & -k_{Q_x Q_y} & 0 \\ 0 & 0 & k_{Q_x Q_y} & 0 \end{array} \right) \\ &\left( \begin{array}{cccc} p_{S_2}(t') \\ p_{S_1}(t') \\ p_{Q_x}(t') \\ p_{Q_y}(t') \end{array} \right). \end{split} \end{equation} Inclusion of effects caused by an excitation pulse with time-dependent amplitude $A_{pulse}(t)$ leads to \begin{equation} \vec{p}(t)=\int_{0}^{t} d\tau |A_{pulse}(\tau)|^2 \int_{\tau}^{t} dt' \dot{\vec{p}}(t'); \;\; p_i(\tau)=\delta_{i,S_2}. \end{equation} To demonstrate the smoothing influence of a finite pulse, the populations $p_{\alpha}$ (black line) and $p_{\beta}$ (red line) resulting from the non-equilibrium Modified Redfield rates shown in Fig.~\ref{fig:combined_figure_time_dependent_rates_with_vibrations} are plotted in Fig.~\ref{fig:demonstration_influence_finite_pulse_width} together with the corresponding populations from dynamics under the influence of a finite excitation pulse (green and blue line). Note that in this calculation, additional relaxation channels have not been taken into account to keep the comparison simple.
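The kinetic scheme above is easily integrated numerically. The Python sketch below is an illustration only: it replaces the calculated time-dependent rate $k_{S_2 Q_x}(t')$ by an assumed constant $(150\,\mathrm{fs})^{-1}$ and takes the $\delta$-pulse limit instead of the finite-pulse convolution. For a constant rate, the branching-ratio construction $k_{S_2 S_1}\,p_{Q_y}(t)/p_{S_1}(t)$ used below recovers the input transfer rate exactly at long times.

```python
import numpy as np

def propagate(k_s2qx, k_s2s1, k_qxqy, dt=0.1, T=2000.0):
    # RK4 integration of the four-level scheme (S2, S1, Qx, Qy);
    # k_s2qx may be an arbitrary function of time
    def deriv(t, p):
        k = k_s2qx(t)
        return np.array([-(k + k_s2s1) * p[0],
                         k_s2s1 * p[0],
                         k * p[0] - k_qxqy * p[2],
                         k_qxqy * p[2]])
    p = np.array([1.0, 0.0, 0.0, 0.0])      # delta-pulse excitation into S2
    nsteps = int(round(T / dt))
    for i in range(nsteps):
        t = i * dt
        k1 = deriv(t, p)
        k2 = deriv(t + dt / 2, p + dt / 2 * k1)
        k3 = deriv(t + dt / 2, p + dt / 2 * k2)
        k4 = deriv(t + dt, p + dt * k3)
        p = p + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return p

# time constants from the text: S2 -> S1 in 95 fs, Qx -> Qy in 20 fs;
# the constant transfer rate (150 fs)^-1 is an illustrative assumption
k_s2s1, k_qxqy = 1.0 / 95.0, 1.0 / 20.0
p = propagate(lambda t: 1.0 / 150.0, k_s2s1, k_qxqy)
k_avg = k_s2s1 * p[3] / p[1]
```

Because the columns of the rate matrix sum to zero, the total population is conserved by construction, which serves as a convenient numerical check of the propagation.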
\begin{figure}[h] \includegraphics*[width=\columnwidth]{Figure4.eps} \caption{\label{fig:demonstration_influence_finite_pulse_width} Time-dependence of populations $p_{\alpha}$ (black line) and $p_{\beta}$ (red line) resulting from the non-equilibrium Modified Redfield rates shown in Fig.~\ref{fig:combined_figure_time_dependent_rates_with_vibrations} together with the corresponding populations extracted from dynamics under the influence of a finite excitation pulse (green and blue line). Note that additional relaxation channels have not been taken into account in this calculation because their influence in decreasing $p_{\alpha}$ and $p_{\beta}$ complicates the comparison. } \end{figure} If the additional relaxation channels are included, an averaged population transfer rate can be obtained as \begin{equation} \label{eq:transfer_rate_from_population_dynamics} k_{S_2 Q_x,avg}=\lim_{t \to \infty} k_{S_2 S_1} \frac{p_{Q_y}(t)}{p_{S_1}(t)}. \end{equation} Besides the calculated time-dependent transfer rate $k_{S_2 Q_x}$, additional relaxation channels between $S_2$ and $S_1$ with a time constant of $\unit[95]{fs}$ and between $Q_x$ and $Q_y$ with a time constant of $\unit[20]{fs}$ were assumed. To obtain a realistic description, the FWHM of the pulse was taken as $\unit[19.2]{fs}$, in agreement with the pulse width in the experiment. \subsection{Dependence of the transfer rates on the donor-acceptor energy gap} \label{sec:frequency_dependence_transfer_rates} \begin{figure}[h] \includegraphics*[width=\columnwidth]{Figure5.eps} \caption{\label{fig:em_Car_abs_Chl_lambdaBO_factor_1e-1} Emission spectrum of carotenoid (blue line) and absorption spectrum of chlorophyll (red line) with parameters specified in Sec.~\ref{sec:time_dependence_transfer_rates}. } \end{figure} Investigation of the relaxation rate dependence on the donor--acceptor energy gap by evaluating Eq.~(\ref{eq:transfer_rate_from_population_dynamics}) allows further interpretation of the underlying processes.
For the interpretation of the results from the F\"{o}rster approach, it is useful to compare the energy gap dependence of the rates with the overlap integral of the absorption spectrum of the acceptor (chlorophyll) and the emission spectrum of the donor (carotenoid) as a function of the difference between the electronic excitation energies (see Eq.~(\ref{eq:standard_Foerster_rate_product_absorption_emission}) for the respective expression in the time domain). From now on, all energy gap dependencies discussed in this paper will be understood as dependencies on the energy differences between the localized states of the donor (carotenoid) and acceptor (chlorophyll) molecules. This definition will be followed even in cases where substantial delocalization exists and localized states are not spectroscopically addressable. Due to the visual similarity between ordinary absorption spectra and the plots of the energy gap dependence of the spectral overlap and the rates presented below, we use the usual spectroscopic vocabulary for their description. For convenience we therefore speak of peaks, bands, their intensities, etc.\ also when describing energy gap dependencies of relaxation rates. \begin{figure}[h] \includegraphics*[width=\columnwidth]{Figure6.eps} \caption{\label{fig:combined_figure_frequency_dependence_rates_lambdaBO_factor_1e-1_displayed_frequency_range_changed} Left column: Energy gap dependence of population transfer rates obtained from standard formulation of (a) F\"{o}rster ($k_{1 \to 2}$) and (b) Modified Redfield approach ($k_{\alpha \to \beta}$). Together with the F\"{o}rster rate the dependence of the overlap of absorption and emission spectrum from Fig.~\ref{fig:em_Car_abs_Chl_lambdaBO_factor_1e-1} as a function of the difference between the electronic excitation energies of acceptor and donor is shown.
Right column: Energy gap dependence of population transfer rates from non-equilibrium formulation of (c) F\"{o}rster and (d) Modified Redfield approach. The parameters are specified in Sec.~\ref{sec:time_dependence_transfer_rates}. } \end{figure} In Fig.~\ref{fig:em_Car_abs_Chl_lambdaBO_factor_1e-1}, the emission spectrum of our carotenoid model and the absorption spectrum of our chlorophyll model are shown as blue and red lines, respectively. The overlap integral as a function of the energy gap between the electronic excitation energies of acceptor and donor is shown in Fig.~\ref{fig:combined_figure_frequency_dependence_rates_lambdaBO_factor_1e-1_displayed_frequency_range_changed} (a) as a red curve. In the same subfigure the energy gap dependence of the stFT rate given by Eq.~(\ref{eq:standard_Foerster_rate_homogeneous_term_line_shape_functions}) is displayed as a black curve. With respect to the peak positions and relative values of maxima both results agree well. In the noneqFT rates, the distribution of relative peak intensities changes. In particular, recognizable peaks also appear at positive values of the difference between the electronic excitation energies of acceptor and donor (see Fig.~\ref{fig:combined_figure_frequency_dependence_rates_lambdaBO_factor_1e-1_displayed_frequency_range_changed} (c)). These findings can be explained in an illustrative way by the potential diagrams shown in Fig.~\ref{fig:sketch_Foerster_transfer}, where donor emission (left hand side) and acceptor absorption (right hand side) are sketched. The solid blue arrow is related to emission in the case of the standard description, whereas the dashed blue arrow illustrates emission from a non-equilibrium initial state. In the stFT treatment, donor emission is assumed to take place from the equilibrated excited state. 
Therefore, the largest possible frequency of a vibrational peak in the emission spectrum corresponds to the difference between the minima of the excited state potential and the ground-state potential, i.e.\ to the electronic excitation energy. In the acceptor absorption spectrum the energetic position of the single peak corresponds to the electronic excitation energy as well. Thus, an overlap between absorption and emission spectrum can only be obtained if the difference between the electronic excitation energies of acceptor and donor is smaller than zero (or slightly larger than zero by an amount determined by the peak widths). In the case of noneqFT rates, the emission can take place already at an early stage of bath relaxation, so that the energetic position of vibrational bands in the emission spectrum can become larger than the electronic excitation energy of the carotenoid. Therefore, peaks of the energy gap dependent rate can also appear at a positive difference between the electronic excitation energies of acceptor and donor. \begin{figure}[h] \includegraphics*[width=\columnwidth]{Figure7.eps} \caption{\label{fig:sketch_Foerster_transfer} Potential diagrams of donor (left hand side) and acceptor (right hand side) for the illustration of F\"{o}rster transfer with standard and non-equilibrium treatment. } \end{figure} The dependencies of the Modified Redfield rates on the gap between the electronic excitation energies in the site basis are shown on the right hand side of Fig.~\ref{fig:combined_figure_frequency_dependence_rates_lambdaBO_factor_1e-1_displayed_frequency_range_changed} on the same scale as the corresponding results from the F\"{o}rster approach. Even for equal site energies in the localized basis, i.e.\ at an energy gap of zero, the excitonic coupling leads to a splitting between the exciton states, which have delocalized character in this case. 
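The splitting just mentioned can be made explicit for the two-site electronic Hamiltonian of the dimer. Writing $\Delta$ for the gap between the localized excitation energies and $J$ for the excitonic coupling (symbols introduced here only for this illustration), diagonalization gives

```latex
H_{\mathrm{el}} =
\begin{pmatrix} 0 & J \\ J & \Delta \end{pmatrix},
\qquad
E_{\alpha,\beta} = \frac{\Delta}{2} \mp \frac{1}{2}\sqrt{\Delta^{2} + 4J^{2}},
\qquad
E_{\beta} - E_{\alpha} = \sqrt{\Delta^{2} + 4J^{2}}
\xrightarrow{\,\Delta \to 0\,} 2J,
```

so that even at zero site-energy gap the exciton states remain split by twice the excitonic coupling, with maximal delocalization.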
In the limit of energy gaps far exceeding the excitonic coupling, the upper and lower exciton states assume the character of the upper and lower localized states, respectively, with only a small mixing contribution from the other localized state. Therefore, at energy gaps further in the negative region the results from the F\"{o}rster and Modified Redfield descriptions become more similar than in the region close to zero. In both the F\"{o}rster and Modified Redfield rates, the relative intensities of the vibrational bands at more negative energy gaps become larger in the non-equilibrium treatment than in the standard treatment. This change of the relative intensity of the vibrational bands reflects the relative oscillator strengths of the transitions from the populated excited-state levels to the vibrational levels of the electronic ground state; which levels are populated depends on whether equilibration has taken place or not. Besides these similarities, the Modified Redfield rates also exhibit remarkable differences compared to the F\"{o}rster rates: In the region of energy gaps close to zero an intense band appears, which is cut off in the panels on the right hand side of Fig.~\ref{fig:combined_figure_frequency_dependence_rates_lambdaBO_factor_1e-1_displayed_frequency_range_changed} for easier comparison with the F\"{o}rster rates, but displayed in its full height in the corresponding subfigures of Fig.~\ref{fig:combined_figure_frequency_dependence_Redfield_rates_lambdaBO_factor_1e-1}. Such a peak does not appear in the F\"{o}rster rates. This difference stems from the influence of the factor consisting of line shape function derivative terms in the Modified Redfield rate expressions (see Eqs.~(\ref{eq:modified_Redfield_rate_R}) and (\ref{eq:standard_modified_Redfield_rate_R})).
The second-derivative term in this factor corresponds to the correlation function between fluctuations attributed to different exciton states and facilitates transfer between the latter, provided that it contains frequency components resonant with the effective energy gap between the exciton states. This consideration leads to the following conclusion: Population transfer in the framework of F\"{o}rster theory is facilitated by the resonance coupling between configurations with electronic excitation localized on donor or acceptor, while the bath only plays a role in tuning the energy gap. In the noneqFT, differences in rates compared to the results from the stFT approach originate from the inclusion of the bath equilibration process in the excited state of the donor and the resulting modification of the transition frequency of donor emission. In contrast, the bath in the Modified Redfield treatment not only contributes by modifying the effective energy gap between the exciton states during equilibration, but also facilitates population transfer directly through bath DOF explicitly coupling the initial and final states. \begin{figure}[h] \includegraphics*[width=\columnwidth]{Figure8.eps} \caption{\label{fig:combined_figure_frequency_dependence_Redfield_rates_lambdaBO_factor_1e-1} Energy gap dependence of population transfer rates $k_{1 \to 2}$ obtained from Modified Redfield standard approach (black line), Modified Redfield non-equilibrium approach (red line) and standard Redfield treatment (blue line) with the parameters specified in the caption of Fig.~\ref{fig:combined_figure_frequency_dependence_rates_lambdaBO_factor_1e-1_displayed_frequency_range_changed}. } \end{figure} In the following we will explain findings in the energy-gap dependence of the Modified Redfield rates by referring to terms in the respective rate equations given in Eqs.~(\ref{eq:modified_Redfield_rate_R}) and (\ref{eq:standard_modified_Redfield_rate_R}).
In particular, the explanation of the intense band at zero energy gap in the Modified Redfield rates and of the increased intensities of the neighboring side bands compared to the F\"{o}rster rates requires closer consideration. Note, however, that for energy gaps approaching zero the Modified Redfield description becomes less appropriate and that selection of different parameters can lead to significant changes of the relative band intensity in this region. Nevertheless, the differences between rates from the stMRT or noneqMRT description and from the stRT description, which we are going to discuss in the framework of our model assumptions and parameters, are of general validity. In Fig.~\ref{fig:combined_figure_frequency_dependence_Redfield_rates_lambdaBO_factor_1e-1}, the dependence of the stMRT and noneqMRT rates (black and red line, respectively) and of the stRT rate (blue line) on the donor--acceptor energy gap is shown. While in the stMRT and noneqMRT rates the peak in the region of zero energy gap has an amplitude about one order of magnitude larger than the side bands, in the stRT rate the amplitude of this band is comparable to that of the side band at an energy gap corresponding to the underdamped oscillator frequency. The broadening of the side band in stRT mainly stems from the extraction of the rate from the population dynamics, whereas in the corresponding bands of the stMRT and noneqMRT rates, a substantial broadening contribution is already included in the rate expressions themselves. We will now give some interpretations of features in the energy-gap dependence of the rates by drawing attention to terms in the rate expressions. In the considered case of zero energy gap between the localized states, the coefficients for transformation to the exciton basis become equal in absolute value, which indicates maximal delocalization of the exciton states.
As a consequence, in the stMRT and noneqMRT rates, the combination of all complex exponentials containing line shape functions, reorganization energies and exciton eigenenergies becomes equal to one. The rate expression integrand reduces to two terms. The first term consists of a second-derivative line shape function expression, i.e.\ a bath correlation function, while the other involves first-derivative line shape function components and reorganization energies. We will denote the first term as ``correlation function term'', and the second term as ``line-shape-function derivative term''. Note that we continue to consider the special case of equal site energies. The integral over the correlation function term in Eqs.~(\ref{eq:modified_Redfield_rate_R}) and (\ref{eq:standard_modified_Redfield_rate_R}) can be identified with the stRT rate \cite{Pisliakov2006}, as it yields the Fourier component of the correlation function at the excitonic splitting frequency. The line-shape-function derivative term reduces to a time-independent product of reorganization energies in the case of the stMRT rates, which is multiplied by a complex exponential oscillating with the excitonic splitting frequency. Its integration therefore yields an oscillating contribution to the transfer rate. In the noneqMRT rate the situation is similar, at least under the assumption of a linearization of the imaginary part of the line shape functions in the high-temperature approximation \cite{Mukamel1995Book}. Note that, in both stMRT and noneqMRT rates, a slight difference between the acceptor and donor site energies can already be sufficient to lead to convergence of the integral expression, because of the additional appearance of real parts of first line-shape-function derivatives which account for the dissipative influence of fluctuations in the transitions between the pure exciton states.
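The identification of the correlation function term's integral with a Fourier component at the excitonic splitting frequency can be checked numerically in the simplest case of an exponentially decaying correlation function. In the Python sketch below, the form $C(t) = A\,e^{-\gamma t}$ and all parameter values are illustrative assumptions (a high-temperature classical model, not the full line-shape-function expressions of the paper); the one-sided Fourier transform then reduces to an analytic Lorentzian:

```python
import cmath

# One-sided Fourier transform of an exponentially decaying bath correlation
# function C(t) = A * exp(-gamma * t) at the excitonic splitting frequency
# omega; analytically this is the Lorentzian 2*A*gamma / (gamma^2 + omega^2).
# The exponential form and the numbers are illustrative assumptions.
A, gamma, omega = 1.0, 0.5, 2.0   # arbitrary units

dt = 1e-4
n_steps = int(40.0 / gamma / dt)  # integrate far beyond the decay time 1/gamma
k_num = 0.0
for n in range(n_steps):
    t = n * dt
    # k = 2 Re \int_0^inf C(t) exp(i omega t) dt, left-Riemann discretization
    k_num += 2.0 * (A * cmath.exp((-gamma + 1j * omega) * t)).real * dt

k_ana = 2.0 * A * gamma / (gamma**2 + omega**2)
print(k_num, k_ana)  # the two values agree to the discretization error
```

The Lorentzian dependence on $\omega$ is the familiar Redfield-type resonance condition: the rate contribution is largest when the splitting frequency lies within the spectral width $\gamma$ of the bath fluctuations.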
Because the integral over the line-shape-function derivative term does not converge to a constant value in the case of zero energy gap, its interplay with the correlation function term in governing the relaxation dynamics gains importance. In the correlation function term, the integration is expected to yield a constant for non-zero temperature at long times. If the respective asymptotic rate contribution is sufficiently large, population transfer takes place fast enough, so that the oscillating rate contribution of the line-shape-function derivative term contributes significantly only within a finite time window given by the timescale of the resulting population transfer dynamics. The influence of competing channels with phenomenological rate constants has a similar effect. At zero energy gap we thus have two terms, one of which is of the same order as the corresponding stRT prediction. Our finding of a relatively large value of the stMRT and noneqMRT rates at zero energy gap can therefore be attributed to the transient influence of the oscillating rate contribution of the line-shape-function derivative term in the course of the relaxation process. For non-zero energy gap between the localized states the influence of fluctuations on the transfer process enters in first order and leads to peaks at negative energy gaps in the energetic regions of the Brownian and underdamped oscillator components. In contrast to the stRT rate, the stMRT and noneqMRT rates exhibit further side bands progressing towards the increasingly negative frequency range, as in the F\"{o}rster description. These side bands in the MRT rates can be attributed to the involvement of higher-order system-bath coupling terms in the population transfer process. Such effects are taken into account in the framework of the second-order cumulant expansion and enter in terms of the exponentials with line-shape-function arguments in the respective rate expressions.
The vibrational structures in the energy gap dependencies of transfer rates indicate the role of intramolecular vibrations of the carotenoid in establishing resonant energy transfer. While in the case of F\"orster transfer it is obvious that not only the vibrational structure of the excited state but also that of the electronic ground state plays a role in this respect, in the Modified Redfield case the situation is more involved. There, the information about coupling between excited state configurations and its connection to the bath dynamics is hidden by the description in the exciton basis. Even though intermediate transitions to the electronic ground state are not commonly associated with the picture of transfer between exciton states, the dynamics of the electronic ground state implicitly enters in the line shape functions. For an energy gap of roughly $\unit[-1500]{cm^{-1}}$, which can be considered as a realistic value for the dyad, the noneqMRT approach results in a rate larger than the corresponding F\"{o}rster rate by about a factor of 2. As mentioned previously, the size of the energy gap relative to the excitonic coupling determines how much the localized states contribute to an exciton state. This participation ratio, which also enters as a scaling of the off-diagonal fluctuations, is quantified by a product of coefficients from the transformation between the localized basis and the exciton basis. In this way it influences the criterion of whether the Modified Redfield approach yields an appropriate description. For a large enough energy gap, the Modified Redfield approach can even be preferable in cases where the reorganization energies of bath components (including pseudo-modes, which enter in terms of underdamped oscillators) are much larger than the excitonic coupling.
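The coefficient product that scales the off-diagonal fluctuations can be made concrete for the dimer: diagonalizing the $2\times 2$ electronic Hamiltonian gives $|c_1 c_2| = J/\sqrt{\Delta^2 + 4J^2}$, maximal ($1/2$) at zero gap and falling off as $J/|\Delta|$ for gaps far exceeding the coupling. The short Python sketch below illustrates this; $\Delta$ and $J$ are generic symbols for the site-energy gap and excitonic coupling, and the numerical values are illustrative only:

```python
import math

def mixing_product(delta, J):
    """Product |c1*c2| of the exciton-transformation coefficients for a dimer
    with site-energy gap delta and excitonic coupling J (2x2 Hamiltonian)."""
    # mixing angle of the 2x2 diagonalization: tan(2*theta) = 2J / delta
    theta = 0.5 * math.atan2(2.0 * J, delta)
    c1, c2 = math.cos(theta), math.sin(theta)
    return abs(c1 * c2)  # equals J / sqrt(delta^2 + 4 J^2)

J = 100.0  # illustrative coupling, e.g. in cm^-1
print(mixing_product(0.0, J))      # maximal delocalization: ~0.5
print(mixing_product(1500.0, J))   # nearly localized limit: ~J/|delta|
```

The rapid decrease of this product with growing $|\Delta|$ is what suppresses the direct bath-mediated coupling between exciton states and brings the Modified Redfield results close to the F\"{o}rster ones at large gaps.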
At the same time, the involvement of bath fluctuations in population transfer for non-zero energy gaps leads to an enhancing influence of such bath components on the transfer rate. Strong system-bath coupling results in an increase of the intensity of side bands in the energy gap dependence of the rates (particularly recognizable for lower-order side bands) compared to the F\"{o}rster description. Although we do not explicitly account for vibronic coupling, our present study confirms the tendencies reported in Ref.~\cite{Perlik2015}, where larger rates were obtained than estimates based on F\"{o}rster theory predict. As discussed in the present article, such an increased rate can also be explained by Modified Redfield theory. Ground state vibrations were found to play an important role for the transfer efficiency by tuning the energy difference between the initial and final states of the transfer process. Such an influence is easily understood based on the F\"{o}rster description; however, the same picture is valid in the Modified Redfield description. In the case of a resonant donor-acceptor energy gap, the involvement of intramolecular vibrational modes enhances energy transfer rates by accepting excess energy corresponding to multiple vibrational quanta. Crucially, we find that our noneqMRT captures this behavior without treating intramolecular vibrational modes explicitly in the Hamiltonian. \section{Conclusions} \label{sec:Conclusions} We developed a rate description of population transfer under the assumption that the bath degrees of freedom of the donor molecule, after its electronic excitation, are in a non-equilibrium state with respect to the excited state potential energy surface. We derived expressions for the population transfer rates by generalizing the standard F\"{o}rster and Modified Redfield descriptions.
For a model system similar to a carotenoid-chlorophyll dyad from our previous study, we compared the time dependencies of energy transfer rates obtained from the standard equilibrium treatment and from our non-equilibrium generalization, and we related the differences to the process of equilibration. The carotenoid-chlorophyll system is an appropriate example for the application of our non-equilibrium description because the short timescales of intramolecular population transfer through competing channels limit the intermolecular population transfer to an early stage of the bath equilibration. We extracted experimentally relevant rate values from the ratio of asymptotic populations of states populated through competing relaxation channels, and we studied their dependence on the donor-acceptor energy gap. These dependencies reveal and confirm the important role of ground-state vibrational levels of the donor in establishing the resonance condition for ultrafast energy transfer. Crucially, these conclusions were reached without the need to explicitly incorporate underdamped vibrational modes into the system Hamiltonian. This was in turn enabled by the non-perturbative character of our non-equilibrium rate description. \section*{Supplementary material} See supplementary material for the detailed derivation of the non-equilibrium Modified Redfield rates. \section*{Acknowledgments} This work was supported by the Czech Science Foundation (GACR) grant no. 14-25752S and by the Impuls grant in Physics from the Neuron Fund for Support of Science.
\section{Introduction} X-ray photoelectron spectroscopy (XPS) is a very common {\em in-situ} and {\em ex-situ} tool used in modern laboratories to probe the stoichiometry of a given material, as well as the oxidation states and local chemical environment of a given element \cite{XPS_general, XPS_general2, CoreLevel, Hufner}. As the core levels of different chemical elements easily differ by tens to hundreds of electron-volts (eV), the peaks in the photoelectron distribution as a function of kinetic energy provide us with information on which chemical elements are present and, to a very good approximation, their relative abundance in a sample. When focusing on the photoelectron signals coming from one particular core level of one particular element, the different local environments around the targeted ions can result in a multi-peak structure, typically within an energy range of about 10 eV, from which the oxidation states of the probed element can be inferred \cite{PhysRevB.38.6084, Gonzalez-Elipe_1988_Si, Miller_2002_C}. A more sophisticated aspect of XPS is the electron screening due to the created core hole \cite{CoreLevel}: once a photoelectron is generated, the sample is left with a core hole (positively charged) that modifies the potential of valence electrons. The response of valence electrons to the core hole is usually referred to as the final-state effect, in the sense that the observed spectrum does not really correspond to that of the neutral sample before being irradiated, but rather to the energy spectrum in the presence of a core hole. The typical lifetime of a core hole is about $10^{-15}$ s \cite{CoreLevel}, which results in an energy broadening of $\sim$0.1 eV. Accordingly, peak features that are larger than 0.1 eV in the core-hole spectrum can, in principle, be observed and resolved. 
The final-state effect introduces even more features and complexities to the XPS spectrum, as electron correlation is essential to the process of core-hole screening. For example, the XPS spectrum of a metallic system typically has an asymmetric shape (orthogonality catastrophe) when the scattering off the core-hole potential is taken into account \cite{PhysRevLett.18.1049, Mahan, Doniach}. In addition, if the targeted ion has degenerate localized orbitals (such as $3d$ or $4f$ orbitals) in a metallic phase, a uniform system also displays multiple XPS peaks. To properly describe such systems theoretically, an Anderson impurity model including both localized correlated orbitals and uncorrelated bath orbitals is required \cite{Kotani_1974, PhysRevB.28.4315, CoreLevel}. For the transition metal (TM) oxides, the valence states have to include both oxygen $2p$ and TM $d$ orbitals, as their energy difference and their mutual hopping amplitude are comparable in energy. Therefore, a minimal model for XPS spectra of transition metal oxides includes a TM-O$_6$ cluster \cite{PhysRevB.33.8060, PhysRevB.45.1612,Kotani_93, CoreLevel}. Although complicated, the XPS spectrum, once properly interpreted, provides a good estimate of material-specific parameters such as the inter-site hopping amplitude $t$ and the Hubbard on-site repulsion $U$. In this paper, we reexamine the origin of the multi-peak structure in the XPS spectra of nominally $d^1$ transition metal oxides including NbO$_2$, SrVO$_3$ \cite{SVO}, and LaTiO$_3$, as well as that of lightly $n$-doped $d^0$ SrTiO$_3$ (STO) \cite{PhysRevB.83.035410,/content/aip/journal/apl/100/26/10.1063/1.4731642, /content/aip/journal/jap/116/4/10.1063/1.4891225}. In particular, we propose a cluster-bath model and argue that it is the final-state effect rather than the presence of multiple oxidation states that accounts for the observed multi-peak XPS structure in these materials.
Based on our interpretation, the multiple XPS peaks are intrinsic to the materials, and do not necessarily imply the existence of spatially localized ions with different oxidation states or of separate phases. The rest of the paper is organized as follows. In Section II we give a brief overview of the XPS core level spectra of these four oxides. In particular, we distinguish between the initial-state effect and the final-state effect. In Section III we present our experimental results and point out their common features and their implications. In Section IV we provide a simple model to illustrate the final-state effect, which is crucial to reconciling the seemingly conflicting observations. Several experimental results are discussed accordingly. The key dimensionless parameter determining the relative importance of initial-state and final-state effects is identified. A brief conclusion is given in Section V. In the Appendices we provide the details of our calculations. \section{Overview of XPS} In an XPS experiment, photons of energy $h \nu$ are directed at the sample and photoelectrons of kinetic energy $E_{kin}$ come out [see Fig.~\ref{fig:XPS_illu}(a)]. Energy conservation requires that \beq h \nu + E_{GS} (N) = E_{kin} + E_{core} (N-1) + \phi. \label{eqn:XPS_basic} \eeq Here $E_{GS} (N)$ is the ground state energy of the sample with the filled core level, $E_{core} (N-1)$ is the energy with a core hole ($N-1$ is used to denote the presence of a core hole), and $\phi$ is the work function. By shifting the kinetic energy by $E_{kin} \rightarrow \omega = E_{kin} + \phi - h\nu$, the photoelectron intensity as a function of $\omega$ is given by \beq \begin{split} \rho (\omega) &= \sum_n |\langle n (N-1) | c |GS \rangle|^2 \times \delta(\omega - [E_{GS} (N) - E_{core,n} (N-1) ] \,) \\ &= \frac{1}{\pi} \, \mathrm{Im} \, \langle GS| c^{\dagger} \left[ \omega - (E_{GS} (N) - H_{tot}) - i \delta \right]^{-1} c|GS \rangle.
\end{split} \label{eqn:XPS_rho} \eeq Here $c^{\dagger}$ is the creation operator of a core electron, $|GS \rangle$ and $| n (N-1) \rangle$ are, respectively, the ground state without a core hole, and the eigenstates with a core hole \cite{CoreLevel, Hufner,PhysRevB.28.4315}. Once $H_{tot}$ is specified, the second line of Eq.~\eqref{eqn:XPS_rho} is used to compute the XPS spectra. Note that $\rho(\omega)$ is non-zero only when $\omega = E_{GS} (N) - E_{core,n} (N-1)$ for some eigenstate $n$. What the XPS spectrum reflects is the core-hole energy spectrum weighted by the matrix element $|\langle n (N-1) | c |GS \rangle|^2$. The XPS spectrum is also routinely plotted as a function of binding energy $E_B$, defined as $E_B \equiv h \nu - \phi - E_{kin} = -\omega$ \cite{binding}. For the purpose of this work, the constant energy shift is not important and we focus only on the dependence of the spectrum on the ``relative binding energy'' or ``relative kinetic energy''. Conventionally, one distinguishes between the initial-state and final-state effects in the XPS spectrum \cite{Hufner, CoreLevel}. For the initial-state effect [Fig.~\ref{fig:XPS_illu}(b)], the valence electrons are not affected by the created core hole. In this case the XPS peak position is determined by the core-level energy $\epsilon_c$ only. Within this scenario, any observed multi-peak structure in the measured XPS spectrum implies that the targeted ions (from which the photoelectrons are ejected) experience different environments within the same sample. For example, at the Si/SiO$_2$ interface, the observed multiple peaks in the Si $2p$ spectrum, which correspond to different Si oxidation states (from Si$^{0+}$ to Si$^{4+}$), are used to deduce and quantify the formation of SiO$_x$ at the interface \cite{PhysRevB.38.6084}. For the final-state effect [Fig.~\ref{fig:XPS_illu}(c)], the valence electrons do feel and respond to the potential caused by the creation of a core hole.
In this case a spatially uniform system can also lead to additional peak structure around $\epsilon_c$ in the XPS spectrum. A classic example is CeNi$_2$, which is a nominally $f^0$ material but displays three XPS peaks (from the Ce $3d$ core level), identified as $f^0$, $f^1$, $f^2$ \cite{PhysRevB.27.7330}. It was realized by Kotani and Toyozawa \cite{Kotani_1974, Kotani_99}, and by Gunnarsson and Sch\"onhammer \cite{PhysRevB.28.4315}, that the multiple peaks in this material originate from the final-state effect, where the response of the valence electrons to the presence of a core hole, especially the core-hole-induced energy change of the Ce $4f$ levels, plays an important role. Simply put, for the initial-state effect, the ions of different nominal charges {\em preexist} in the sample; for the final-state effect, the ions of different nominal charges are {\em created} after the applied photons produce core holes. We believe the experimentally observed multi-peak structure in nominally $d^1$ and $n$-doped $d^0$ transition metal oxides should be understood as being due to the final-state effect. In the following we shall provide the experimental and theoretical analysis that leads to this conclusion. The key parameter determining the relative importance between initial-state and final-state effects will be discussed in Section IV.E. \section{Experiments and key features} In order to properly analyze the intrinsic XPS spectra of nominally $d^1$ transition metal oxides, we need to be able to grow single phase, crystalline layers of these materials and then measure their XPS spectra without exposing the samples to air, as these materials are not thermodynamically stable in ambient conditions and will slowly oxidize. The samples of NbO$_2$, SrVO$_3$, and LaTiO$_3$, as well as SrTiO$_3$ with several dopants, are grown in a molecular beam epitaxy (MBE) chamber and then transferred {\em in situ} to a high resolution photoemission chamber.
The two chambers are connected by an ultrahigh vacuum transfer line with a base pressure of $<1 \times 10^{-9}$ Torr, allowing for sample transfer between the growth and analysis chamber within 5 min. The photoemission chamber consists of a monochromated Al K$\alpha$ photon source ($h \nu$ = 1486.6 eV) and a VG Scienta R3000 analyzer. XPS spectra of the valence band, O $1s$, Nb $3d$, V $2p$, Ti $2p$, Sr $3d$, and La $3d$ are taken (as appropriate) at a pass energy of 100 eV with an analyzer slit setting of 0.4 mm, resulting in an overall instrumental resolution of 350 meV (primarily limited by the energy resolution of the x-ray source). The analyzer is calibrated such that the Fermi level of a clean silver foil is at a binding energy of 0.00 eV and the Ag $3d_{5/2}$ core level is at 368.28 eV. Undoped SrTiO$_3$ is nominally $d^0$, while the remaining three materials are nominally $d^1$: in the ionic limit, SrTiO$_3$ has no electron occupying the Ti $3d$ orbital; NbO$_2$ has one electron occupying the Nb $4d$ orbital; SrVO$_3$ and LaTiO$_3$ have one electron occupying the V $3d$ and Ti $3d$ orbital, respectively. NbO$_2$ films are grown on 111-oriented SrTiO$_3$ substrates as described in more detail elsewhere \cite{Posadas_NbO2.APL}. Both SrVO$_3$ and LaTiO$_3$ films are grown on 100-oriented SrTiO$_3$ substrates at a temperature of 600-800$^{\circ}$C using co-deposition of matched metal fluxes in the presence of between $3\times 10^{-9}$ to $2\times 10^{-8}$ Torr of molecular oxygen with a total growth rate of $\sim$0.4 nm/min. All films reported here are crystalline as-deposited, with pseudo-rutile structure for NbO$_2$ \cite{Posadas_NbO2.APL} and perovskite structure for SrVO$_3$ and LaTiO$_3$, as determined by reflection high energy electron diffraction (RHEED). We systematically vary the oxygen pressure during growth to determine the conditions that would result in the ideal O:Nb, O:Ti, and O:V ratios in the films. 
The transition metal to oxygen ratios are determined from the integrated intensities of the relevant XPS core level spectra (O $1s$ for oxygen) and the appropriate atomic sensitivity factors; we also verify that the Sr:V and La:Ti ratios are very close to one. The atomic sensitivity factors used are empirical values as reported by Wagner et al. \cite{Wagner.ASF2, Wagner.ASF} and adjusted to give ideal oxygen to metal ratios for the compounds Nb$_2$O$_5$, V$_2$O$_5$, and undoped SrTiO$_3$. In the following, we present our experimental results for the transition metal core level spectra for single phase, nominally $d^1$ materials, and for $n$-doped SrTiO$_3$, as measured using {\em in situ} XPS. All materials are sufficiently conductive at room temperature such that there is negligible ($<0.1$ V) sample voltage during the measurement. For each material, we show core level spectra for an under-oxidized, an optimally oxidized, and an over-oxidized sample for comparison. The detailed results for each material are presented in the following sections. In the Supplementary Material \cite{Supplementary}, we provide RHEED data for stoichiometric SrVO$_3$, LaTiO$_3$, and NbO$_2$ to further demonstrate our sample quality. \subsection{NbO$_2$} The Nb $3d$ core level in Nb$_2$O$_5$ is located at a binding energy of 207.7 eV (Nb 3$d_{5/2}$) and has a spin-orbit pair at 2.7 eV higher binding energy (Nb $3d_{3/2}$). To model the Nb $3d$ multi-peak structure in NbO$_2$, we assume that the spin-orbit pairs are of the same width and that their separation is the same as in Nb$_2$O$_5$. Two or three pairs of peaks (pseudo-Voigt line shape) are used as needed to fit the data. For the optimally oxidized case (O/Nb = 2.0) shown in Fig.~\ref{fig:NbO2} (a), we find two components. The first component has a binding energy of 206.5 eV with a width of 0.9 eV, while the second component has a binding energy of 207.5 eV and a larger width of 1.8 eV.
If we assign the 207.5 eV feature to be the $d^0$ component, the $d^0$ component is 55\% of the integrated intensity while the $d^1$ component is 45\% of the integrated intensity. If we electron dope the system by removing oxygen to form an under-oxidized NbO$_2$ phase [Fig. ~\ref{fig:NbO2} (b)] with O/Nb = 1.9, we find that both the $d^0$ component at 207.3 eV and the $d^1$ component at 206.0 eV decrease slightly in relative amount to 52\% and 40\% of the signal. A new component ($d^2$) emerges at a binding energy of 204.5 eV with a relative amount of 8\%. On the other hand, if we add excess oxygen to the system and form over-oxidized NbO$_2$ [Fig. ~\ref{fig:NbO2} (c)] with O/Nb = 2.1, the shape of the spectrum changes qualitatively. The $d^0$ component (at 207.6 eV) becomes sharper (width of 1.5 eV) and increases to 62\%, while the $d^1$ component at 205.9 eV (width of 1.1 eV) drops to 38\%. \subsection{SrVO$_3$} For SrVO$_3$, we look at the V $2p$ core level. For comparison, in pure V$_2$O$_5$, the V $2p_{3/2}$ peak is located at a binding energy of 517.9 eV, with the $2p_{1/2}$ spin-orbit pair located at 7.4 eV higher binding energy. The $2p_{1/2}$ peak is significantly broader than the $2p_{3/2}$ peak due to Coster-Kronig transitions. To model V $2p$ spectra, the widths of all $2p_{3/2}$ components are constrained to be the same and the widths of all $2p_{1/2}$ peaks are also constrained to be the same. There is no restriction on the relative widths of the $2p_{3/2}$ and $2p_{1/2}$ peaks within each component, however. The $2p_{3/2}$ to $2p_{1/2}$ separation of each component is also fixed to be the same as that of V$_2$O$_5$. Three sets of spin-orbit pairs of peaks are used to fit all the SrVO$_3$ data. Because the O $1s$ core level is near the V $2p$ levels, O $1s$ signals are also collected in the same measurement and included in the fitting. 
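The constrained doublet fitting used throughout this section can be illustrated with a minimal numerical sketch (assuming NumPy and SciPy are available; all peak positions, widths, and amplitudes below are synthetic, the 1:2 branching ratio is the statistical value for $p$ levels, and, for brevity, both spin-orbit partners share one width, whereas the actual fits allow the $2p_{1/2}$ peaks to be broader):

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(x, center, fwhm, eta=0.5):
    """Pseudo-Voigt profile: linear mix of Lorentzian and Gaussian of equal FWHM."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-0.5 * ((x - center) / sigma) ** 2)
    lorentz = 1.0 / (1.0 + ((x - center) / (fwhm / 2.0)) ** 2)
    return eta * lorentz + (1.0 - eta) * gauss

def doublet(x, amp, be, fwhm, split=7.4, branch=0.5):
    """Spin-orbit pair: 2p_3/2 at `be` plus 2p_1/2 at `be + split` (1:2 branching)."""
    return amp * (pseudo_voigt(x, be, fwhm)
                  + branch * pseudo_voigt(x, be + split, fwhm))

def model(x, a0, be0, a1, be1, fwhm):
    """Two components (d0 and d1) sharing one width and one fixed splitting."""
    return doublet(x, a0, be0, fwhm) + doublet(x, a1, be1, fwhm)

# Synthetic "data": d0 component at 517.9 eV and d1 at 516.2 eV binding energy.
x = np.linspace(510.0, 530.0, 2000)
y = model(x, 1.0, 517.9, 0.45, 516.2, 1.5)

# Fit from a deliberately perturbed starting guess.
popt, _ = curve_fit(model, x, y, p0=[0.8, 518.3, 0.6, 515.8, 1.2])
a0, be0, a1, be1, fwhm = popt
frac_d0 = a0 / (a0 + a1)  # equal line shapes, so amplitudes track areas
```

Because the components share a single line shape, the fitted amplitude ratio equals the area ratio, which is how the relative concentrations quoted in this section are obtained.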
For the optimally oxidized case with O/V = 3.0 [Fig.~\ref{fig:SrVO3} (a)], the spectrum consists of three distinct components. The widths of the $2p_{3/2}$ peaks are 1.5 eV. The first peak has a binding energy of 517.9 eV ($d^0$) with a relative concentration of 60\%. The second peak ($d^1$) has a binding energy of 516.2 eV with a relative concentration of 27\%. The third peak ($d^2$) has a binding energy of 514.5 eV with a relative concentration of 13\%. Reducing the O/V ratio to 2.7 [Fig.~\ref{fig:SrVO3} (b)] results in a significant decrease in the $d^0$ component at 518.1 eV to 36\%. The $d^1$ component at 516.2 eV increases to 36\% while the $d^2$ component at 514.7 eV increases to 28\%. On the other hand, slightly over-oxidizing the SrVO$_3$ to have an O/V ratio of 3.1 [Fig.~\ref{fig:SrVO3} (c)] alters the relative amounts of the three components to 65\% for $d^0$, 22\% for $d^1$, and 13\% for $d^2$. \subsection{LaTiO$_3$} For LaTiO$_3$, we use the Ti $2p$ core level. The Ti $2p_{1/2}$ level is significantly wider than the $2p_{3/2}$ level due to Coster-Kronig transitions. We model the Ti $2p$ spectra using the same kind of constraints on widths and spin-orbit separation as in the V $2p$ modeling. For comparison, the Ti $2p_{3/2}$ level of stoichiometric SrTiO$_3$ (Ti$^{4+}$) is located at 458.9 eV with a $2p_{3/2}$ to $2p_{1/2}$ separation of 5.6 eV. Two or three pairs of peaks are used to model the LaTiO$_3$ Ti $2p$ spectra as needed. For the optimally oxidized sample with O/Ti = 3.0 [Fig.~\ref{fig:LaTiO3} (a)], there are two components. The first one ($d^0$) is located at a binding energy of 458.4 eV with a relative concentration of 45\%. The second component ($d^1$) is located at a binding energy of 456.9 eV with a relative concentration of 55\%. The widths of both $2p_{3/2}$ peaks are 1.7 eV.
When LaTiO$_3$ is under-oxidized to yield an O/Ti ratio of 2.8 [Fig.~\ref{fig:LaTiO3} (b)], we see the emergence of a third component ($d^2$) with a binding energy of 454.9 eV and a relative amount of 8\%. The other two components are both slightly reduced in amount to 40\% for $d^0$ and 52\% for $d^1$. For the slightly over-oxidized case, with O/Ti = 3.1 [Fig.~\ref{fig:LaTiO3} (c)], we see a significant increase in the $d^0$ component to 69\% with a slight shift in binding energy to 458.9 eV. The $d^1$ component (at binding energy 457.0 eV) correspondingly decreases to 31\%. \subsection{$n$-doped SrTiO$_3$} In stoichiometric SrTiO$_3$, the $2p_{3/2}$ peak shows a single feature about 1 eV wide with no shoulder \cite{note1}. The $2p_{1/2}$ peak is significantly broader than the $2p_{3/2}$ peak due to Coster-Kronig transitions \cite{Coster_Kronig}. Fig.~\ref{fig:XPS_STO} shows the Ti $2p$ XPS spectra for 15\% La doped SrTiO$_3$ (Sr$_{1-x}$La$_{x}$TiO$_3$) \cite{PhysRevLett.70.2126,/content/aip/journal/apl/100/26/10.1063/1.4731642, /content/aip/journal/jap/116/4/10.1063/1.4891225}, 10\% Nb doped SrTiO$_3$ (SrTi$_{1-x}$Nb$_{x}$O$_3$) \cite{PhysRevB.61.12860}, and oxygen-deficient SrTiO$_3$ (SrTiO$_{3-x}$) \cite{NatMetal_STO_2D, Hatch_13, Rice_2014}. In all these $n$-doped SrTiO$_3$ samples, a small shoulder located about 1.5 eV lower than the Ti$^{4+}$ peak emerges, and is typically interpreted as a Ti$^{3+}$ ($d^1$) peak. Two important features should be pointed out. First, the position and strength of the Ti$^{3+}$ peak are not sensitive to the photoelectron emission angle (not shown), indicating that this signal is not a surface effect. Second, the position of the Ti$^{3+}$ peak is dopant-independent, indicating that this peak is very likely to be intrinsic to doped SrTiO$_3$. We will show in the next section that these two observations are consistent with the final-state interpretation.
\subsection{Common features of $d^1$ transition metal oxide spectra} We summarize this section by pointing out the key common features of the XPS spectra of these $d^1$ transition metal oxides: the transition metal core level spectra of these materials all display at least two, and sometimes three, distinct components (where a component refers to a pair of peaks related by spin-orbit coupling); a single component is never observed, even in the optimally oxidized single phase films. These XPS peaks can be assigned as $d^0$ (Nb$^{5+}$, V$^{5+}$, Ti$^{4+}$), $d^1$ (Nb$^{4+}$, V$^{4+}$, Ti$^{3+}$), and $d^2$ (Nb$^{3+}$, V$^{3+}$, Ti$^{2+}$) oxidation states. As a general trend, electron doping (via oxygen vacancies) increases the intensity of the $d^2$ peak at the expense of the $d^0$ and $d^1$ peaks, whereas hole doping (via oxygen excess) increases the intensity of the $d^0$ peak and decreases that of the $d^2$ peak, if present. Based on the initial-state effect, one might naively infer from the XPS results that the optimally oxidized samples contain significant amounts of regions of different oxidation states (such as Nb$_2$O$_5$, which is nominally $d^0$). However, this interpretation is not consistent with RHEED from the samples, which should clearly show the presence of incommensurate monoclinic/amorphous Nb$_2$O$_5$ or pyrochlore La$_2$Ti$_2$O$_7$/Sr$_2$V$_2$O$_7$ phases, if they are present in such large amounts. Quantitatively, if we assume the peak intensity of a particular component is proportional to the abundance of that particular oxidation state, this implies that roughly one half of the sample on average is in the highest oxidation state. For example, from the XPS of SrVO$_3$ [Fig.~\ref{fig:SrVO3}], one expects 60\% of the sample to consist of pyrochlore Sr$_2$V$_2$O$_7$, which should be, but is not, reflected in the diffraction data, which still shows a single phase, epitaxial 100-oriented perovskite film.
It should also be noted that the oxygen to transition metal ratio has been carefully controlled during growth (as described above), spanning the range from under-oxidized to over-oxidized. Furthermore, we also note that growing at very low oxygen pressures that result in an oxygen to metal ratio significantly less than the ideal value still results in the presence of a peak that is associated with the $d^0$ oxidation state. The presence of a strong $d^0$ peak in stoichiometric SrVO$_3$ has been interpreted by Takizawa et al. \cite{PhysRevB.80.235104, Takizawa_thesis} as being due to excess oxygen (forming V$^{5+}$) decorating the surface of SrVO$_3$, resulting in a $\sqrt{2}\times \sqrt{2}$ reconstruction pattern. As shown in the Supplementary Material \cite{Supplementary}, we also observe the surface reconstruction in RHEED. By comparing the XPS spectra before and after Ar sputtering (which removes the surface atoms), we conclude that both the surface reconstruction (initial-state effect) and the final-state effect contribute to the multi-peak structure in the case of SrVO$_3$. In a vacuum-cleaved single crystal of SrVO$_3$, only a weak $d^0$ feature is observable \cite{Eguchi2007421}. The seemingly conflicting results from the XPS data and the single phase nature of the optimally oxidized films can be naturally reconciled if the occurrence of the multi-peak structure in the XPS spectra is intrinsic to these $d^1$ materials (i.e. the spatially uniform $d^1$ system by itself displays multiple peaks in XPS). In the next section we argue that this is indeed the case once the final-state effect is considered, and provide a simple model to illustrate this point. \section{Model and theoretical analysis} \subsection{Model and parameters} To explain the observed multi-peak structure in the XPS spectra, we propose a cluster-bath model which resembles those proposed in Refs.~\cite{PhysRevB.76.035101, PhysRevB.78.075103}.
It contains three parts [see Fig.~\ref{fig:ClusterXPS_bath_cubic}(a)]: \beq H_{tot} = H_{cluster} + H_{bath} + H_{cl-bath}. \label{eqn:H_tot} \eeq $H_{cluster}$ describes a TM-O$_6$ (TM can be Ti, V, or Nb) cluster that includes at least five TM $d$ and three O $2p$ orbitals (for each of six oxygen atoms). We point out that our model does not qualitatively distinguish between the $3d$ and $4d$ orbitals, or $3d$ orbitals of different chemical elements: they only correspond to different parameters in the model. When taking the cubic symmetry into account, only ten of twenty-three total orbitals couple to one another \cite{12_orbital}. Using $\Gamma$ to label the orbital symmetry (three $t_{2g}$ $xy$, $yz$, $zx$ and two $e_g$ $3z^2-r^2$, $x^2-y^2$ orbitals), the cluster Hamiltonian \cite{Cluster_note} is \beq \begin{split} H_{cluster} &= \sum_{\Gamma,\sigma} \left\{ \epsilon_p (\Gamma) n_{p, \Gamma,\sigma} + \epsilon_d (\Gamma) n_{d, \Gamma,\sigma} + V(\Gamma) [d^{\dagger}_{\Gamma,\sigma} p_{\Gamma,\sigma} + H.c.] \right\} \\ &+ \frac{U}{2} \sum_{(\Gamma, \sigma) \neq (\Gamma', \sigma')} n_{d, \Gamma,\sigma} n_{d, \Gamma',\sigma'} - U_{dc} (1-n_{core}) \sum_{\Gamma, \sigma} n_{d, \Gamma, \sigma} + \epsilon_c n_{core}. \label{eqn:dp_cluster} \end{split} \eeq Here $n_{d, \Gamma,\sigma} = d^{\dagger}_{\Gamma,\sigma} d_{\Gamma,\sigma}$, $n_{p, \Gamma,\sigma} = p^{\dagger}_{\Gamma,\sigma} p_{\Gamma,\sigma} $ are respectively the TM $d$ and O $2p$ number operators for the orbital labeled by $(\Gamma, \sigma)$ ($\sigma$ labels the spin). $\epsilon_d (\Gamma) $ and $\epsilon_p (\Gamma)$ are energies of TM $d$ and O $2p$ orbitals, and $V(\Gamma)$ describes their hybridizations. $U$ is the energy cost when the $3d$ occupation of the TM atom is more than one. $n_{core}$ is the number operator of the core level and $\epsilon_c$ the core-level energy (approximately $-459.0$ eV for Ti $2p$, $-518.0$ eV for V $2p$, $-208.0$ eV for Nb $3d$). 
The term with $U_{dc}$ approximates how valence electrons respond to the core hole: in the presence of a core hole (i.e. $\langle n_{core} \rangle = 0$), all TM $d$ levels are shifted down by $U_{dc}$ to screen the core hole. By fitting to published experimental data from XPS of SrTiO$_3$, ellipsometry, and angle-resolved photoemission spectroscopy (ARPES) \cite{Kotani_93, Zollner, J.Appl.Phys.90.6156, Hatch_13, PhysRevB.79.113103}, we take $\epsilon_d (e_g) = 2.0$ eV, $\epsilon_d (t_{2g}) = 0$ eV, $\epsilon_p (\Gamma) = -3.0$ eV, $V (e_g) = 2.5$ eV, $V (t_{2g}) = -1.3$ eV, $U=6.0$ eV, and $U_{dc} = 8.0$ eV \cite{Fitting}. This problem can be solved exactly by the technique introduced by Gunnarsson and Sch\"onhammer \cite{PhysRevB.28.4315, key_trick}, and the details are provided in Appendix B. For the metallic phase, the occupation of each local orbital fluctuates. To capture this effect, we further introduce a set of bath orbitals, which simulate the role of TMO conduction bands, coupling to each $d$ orbital. The Hamiltonians involving the bath are \beq \begin{split} H_{bath} + H_{cl-bath} &= \sum_{\Gamma,\sigma} \left[ \int \epsilon \,b^{\dagger}_{\epsilon \Gamma \sigma} b_{\epsilon \Gamma \sigma} d\epsilon + \int [V(\epsilon, \Gamma) d^{\dagger}_{\Gamma \sigma} b_{\epsilon \Gamma \sigma} +H.c.]d\epsilon \right], \\ \pi |V(\epsilon, \Gamma)|^2 &= \frac{2V^2}{B^2} \sqrt{B^2 - (\epsilon-\epsilon_0)^2}. \end{split} \label{eqn:bath_coupling} \eeq Here $ b_{\epsilon \Gamma \sigma}$ denotes the bath orbitals of energy $\epsilon$, orbital symmetry $\Gamma$ and spin $\sigma$. Inclusion of the bath introduces charge fluctuation in the cluster (via exchange of particles with the bath) that is used to model the fluctuation in the occupation of the local orbitals in the metallic phase \cite{PhysRevB.28.4315} (see Appendix A for a simple explanation).
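As a quick consistency check on the semicircular coupling in Eq.~\eqref{eqn:bath_coupling}, the sketch below (assuming NumPy; the values $\epsilon_0 = 2.0$ eV, $B = 2.0$ eV, $V = 0.3$ eV are the bath parameters quoted in the text) verifies numerically that the coupling integrates to the total strength $\int |V(\epsilon, \Gamma)|^2 \, d\epsilon = V^2$:

```python
import numpy as np

# Semicircular cluster-bath coupling:
#   pi * |V(eps)|^2 = (2 V^2 / B^2) * sqrt(B^2 - (eps - eps0)^2)
eps0, B, V = 2.0, 2.0, 0.3  # eV; values used in the text

eps = np.linspace(eps0 - B, eps0 + B, 200001)
coupling_sq = (2.0 * V**2 / (np.pi * B**2)) * np.sqrt(B**2 - (eps - eps0)**2)

# Trapezoidal integration; the exact sum rule is  integral of |V|^2 deps = V^2,
# since the semicircle of radius B has area pi * B^2 / 2.
total = np.sum(0.5 * (coupling_sq[1:] + coupling_sq[:-1]) * np.diff(eps))
```

This sum rule is why the overall scale of the bath coupling is set by the single parameter $V$, independent of the bandwidth $2B$.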
We use $\epsilon_0 = 2.0$ eV, $B = 2.0$ eV (so the bath levels range from 0 to 4.0 eV, roughly the SrTiO$_3$ conduction bandwidth) to approximate the SrTiO$_3$ conduction bands, and take $V=0.3$ eV, which is approximately the effective hopping between two adjacent Ti $3d$ orbitals \cite{Zollner,PhysRevB.87.161102}. It turns out that the exact value of $V$ plays a relatively minor role in the XPS spectrum (see Appendix B). In the calculation, we introduce the chemical potential $\mu$ to specify the number of total electrons (filling) in the whole system (bath and cluster): all bath levels below $\mu$ are filled. Qualitatively, a larger $\mu$ corresponds to a larger average $d$ occupation in the bulk material. To extract the essential features of these $d^1$ materials, we only vary $\mu$ but keep all other parameters fixed. In other words, the valence levels of Ti $3d$, V $3d$, and Nb $4d$ are not distinguished in our simulation. The XPS spectrum is calculated using Eq.~\eqref{eqn:XPS_rho}, and the details are given in Appendix B. \subsection{Results from an isolated cluster} Before discussing our results using the total Hamiltonian Eq.~\eqref{eqn:H_tot}, we first present the results from the isolated cluster (zero impurity-bath coupling). In particular, we shall identify the origin of each peak. Fig.~\ref{fig:d-pcluster}(c) shows the XPS spectrum of an isolated cluster -- ten electrons are filled to mimic the nominally $d^0$ system. There are three pronounced peaks, labeled as $|L \rangle$, $|M \rangle$, and $|U \rangle$ referring to their relative lower, middle, and upper binding energies. These features can be understood by considering the following three states $|d^{0} \underline{L}^0\rangle$, $|d^{1} \underline{L}^1\rangle$, and $|d^{2} \underline{L}^2\rangle$ \cite{Kotani_93, CoreLevel}.
Here $|d^{0} \underline{L}^0\rangle$ represents the ``reference'' state where all O $2p$ orbitals are filled, and $|d^{i} \underline{L}^i\rangle$ represents the state of $i$ particle-hole (p-h) pairs with respect to $|d^{0} \underline{L}^0\rangle$ [see Fig.~\ref{fig:d-pcluster}(a) for illustration]. Without the core hole, the ground state $|GS \rangle$ is a linear combination of these three states. States with a larger number of p-h pairs are significantly less important due to the on-site energy $U$. In the presence of a core hole (we use $|d^{i} \underline{L}^i \underline{c}\rangle$ to denote states in the presence of a core hole), the relative energies of these three states change, and the resulting core-hole eigenstates (including the d-p hybridization) are labeled as $|L \rangle$, $|M \rangle$, $|U \rangle$. These three lowest eigenstates account for the three pronounced peaks in the computed spectrum. From Eq.~\eqref{eqn:XPS_rho}, the peak strength is given by $|\langle X| c |GS \rangle|^2$ for $X=L,M,U$. We emphasize that, due to the strong d-p hybridization, all core-hole eigenstates $|L \rangle$, $|M \rangle$, $|U \rangle$ have significant $|d^{i} \underline{L}^i \underline{c}\rangle$ ($i=0,1,2$) components.
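The three-peak mechanism can be reproduced with a minimal numerical sketch in the truncated basis $\{|d^{0} \underline{L}^0\rangle, |d^{1} \underline{L}^1\rangle, |d^{2} \underline{L}^2\rangle\}$ (assuming NumPy; a hypothetical single effective hybridization $V_{eff}$ stands in for the $\Gamma$-resolved couplings, the scales $\Delta = \epsilon_d - \epsilon_p = 3$ eV, $U = 6$ eV, $U_{dc} = 8$ eV follow the parameters quoted earlier, and, in the sudden approximation, the core operator $c$ acts trivially on the valence configurations):

```python
import numpy as np

# Configuration energies in the basis {|d0 L0>, |d1 L1>, |d2 L2>}.
Delta, U, Udc, Veff = 3.0, 6.0, 8.0, 2.0  # eV; illustrative scales

def hamiltonian(core_hole):
    # With a core hole, each d electron is lowered by Udc (valence screening).
    shift = Udc if core_hole else 0.0
    diag = [0.0,
            Delta - shift,
            2.0 * Delta + U - 2.0 * shift]
    H = np.diag(diag)
    H[0, 1] = H[1, 0] = Veff  # d-p hybridization mixes adjacent configurations
    H[1, 2] = H[2, 1] = Veff
    return H

# Ground state |GS> without the core hole.
w0, v0 = np.linalg.eigh(hamiltonian(core_hole=False))
gs = v0[:, 0]

# Core-hole eigenstates |L>, |M>, |U>; since c acts trivially on the valence
# basis, the peak weights are the overlaps |<X|GS>|^2 at energies E_X - E_GS.
wf, vf = np.linalg.eigh(hamiltonian(core_hole=True))
weights = (vf.T @ gs) ** 2
binding = wf - w0[0]
```

Diagonalizing the $3\times 3$ Hamiltonian with and without the core hole yields three peaks whose weights sum to one and are all appreciable: a single, spatially uniform cluster already produces a multi-peak spectrum.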
Comparing with the experimentally observed XPS SrTiO$_3$ spectrum [Fig.~\ref{fig:XPS_STO}], we note that: (i) the strongest peak $|L\rangle$ is conventionally assigned as the Ti$^{4+}$ ($d^0$) $2p_{3/2}$ peak; (ii) the weak peak $|M \rangle$ is buried under the $2p_{1/2}$ peak caused by the spin-orbit coupling of the core electron, and is not observed; (iii) the calculated $|U\rangle$ peak corresponds to the charge transfer satellite feature at a binding energy of approximately 471.0 eV \cite{Kotani_93,Comparison_TiO2_STO} and appears to be much sharper than that in the experiment, because we neglect the coupling between valence electrons and the core spin that provides additional decay channels for states of higher binding energies \cite{Kotani_93,core_spin_broadening}. In the following discussion we only focus on the strongest and lowest peak, which is the one used to determine the different oxidation states. \subsection{Results including bath} Inclusion of the bath introduces charge fluctuation in the cluster, as the cluster can now exchange particles with the bath orbitals (see Appendix A). More specifically, instead of the fixed number of electrons in the cluster, the total ground state wave function has a general form \beq |GS \rangle = \sum_{i=0} \alpha_i |n_{cl} + i \rangle_{cl} \otimes |n_{b} - i \rangle_{bath} \otimes |1 \rangle_{core}. \label{eqn:particle_fluctuation} \eeq Here $|n_{cl} + i \rangle_{cl} \otimes |n_{b} - i \rangle_{bath} \otimes |1 \rangle_{core} $ represents a state which has $n_{cl} + i$ particles in the cluster, $n_{b} - i $ particles in the bath and a filled core level. Using the same notation as Eq.~\eqref{eqn:particle_fluctuation}, the isolated cluster calculation presented in the previous subsection only has $|n_{cl}=10 \rangle_{cl} \otimes |n_{b} \rangle_{bath} \otimes |1 \rangle_{core}$, with $|n_{cl}=10 \rangle_{cl}$ including all possible $|d^{i} \underline{L}^i\rangle$ ($i$=0 to 10 in principle) components. 
When exchanging particles with the bath, states such as $|n_{cl}=11 \rangle_{cl}$ ($|d^{i+1} \underline{L}^i\rangle$, $i =0$ to 9), $|n_{cl}=12 \rangle_{cl}$ ($|d^{i+2} \underline{L}^i\rangle$, $i =0$ to 8) also contribute to $|GS \rangle$. Similarly, in the presence of a core hole, the $n$th eigenstate with energy $E_{core,n} (N-1)$ has the general form \beq |n (N-1) \rangle = \sum_{i=0} \beta^{(n)}_i |n_{cl} + i \rangle_{cl} \otimes |n_{b} - i \rangle_{bath} \otimes |0 \rangle_{core}. \label{eqn:particle_fluctuation_minus} \eeq Applying Eq.~\eqref{eqn:particle_fluctuation} and Eq.~\eqref{eqn:particle_fluctuation_minus} to Eq.~\eqref{eqn:XPS_rho}, the XPS spectrum displays peaks at $\omega_n = E_{GS} (N) - E_{core,n} (N-1)$ with weight $|\sum_i \beta^{(n)}_i \alpha_i |^2$. From this general analysis, we see that including the charge fluctuation naturally leads to multiple XPS peaks, which correspond to different particle numbers in the cluster. In Fig.~\ref{fig:ClusterXPS_bath_cubic}(c) we show the calculated XPS spectra for $\mu=0.2, 1.0, 1.5, 2.0$, and 2.5 eV. Starting from the highest chemical potential, the $\mu=2.5$ eV XPS spectrum shows three distinct peaks. Analysis of the wave functions shows that they correspond to $|n_{cl}=10 \rangle_{cl}$, $|n_{cl}=11 \rangle_{cl}$ and $|n_{cl}=12 \rangle_{cl}$ in Eq.~\eqref{eqn:particle_fluctuation_minus}, and they are therefore labeled as $d^0$, $d^1$, $d^2$, respectively. Using the notation within the isolated cluster, the $d^0$, $d^1$ and $d^2$ peaks come from states of $|L\rangle$ ($\in |n_{cl}=10 \rangle_{cl}$), $|d^1 \underline{L}^0 \underline{c} \rangle $ ($\in |n_{cl}=11 \rangle_{cl}$), $|d^2 \underline{L}^0 \underline{c} \rangle$ ($\in |n_{cl}=12 \rangle_{cl}$) respectively, as shown in Fig.~\ref{fig:ClusterXPS_bath_cubic}(b). Decreasing $\mu$ reduces the intensities of the $d^2$ and $d^1$ peaks but increases that of the $d^0$ peak.
This is because lowering the chemical potential decreases the probability of adding electrons to the cluster from the bath, resulting in smaller $|n_{cl}=11 \rangle_{cl}$ and $|n_{cl}=12 \rangle_{cl}$ components in $|GS \rangle$ and consequently weaker $d^1$ and $d^2$ peak intensities. However, we stress that once the cluster and bath can exchange particles, a single $d^1$ XPS peak is never obtained in our calculation; the $d^0$ peak is always present. \subsection{Discussion} \subsubsection{Comments on experiments} We now discuss several experiments based on the calculation. The main conclusion from our model calculation is that, once charge fluctuation is taken into account, the nominally $d^1$ or $n$-doped $d^0$ transition metal oxides are expected to display multiple peaks in their XPS spectra, even in the absence of other oxidation states. In other words, our theory implies that a multi-peak structure in XPS is {\em general} for these materials if charge fluctuation cannot be neglected. We first discuss three observations in $n$-doped SrTiO$_3$ samples based on the general consequences of the final-state interpretation. First, the Ti$^{3+}$ peak position is dopant independent and is an intrinsic property of the Ti atom, or more precisely the TiO$_6$ cluster. Indeed, in lightly $n$-doped SrTiO$_3$, the Ti$^{3+}$ peaks all appear in the same position relative to the Ti$^{4+}$ peak \cite{PhysRevB.83.035410, /content/aip/journal/jap/116/4/10.1063/1.4891225,/content/aip/journal/apl/100/26/10.1063/1.4731642} (Fig.~\ref{fig:XPS_STO}). Special attention is paid to Nb-doped SrTiO$_3$ (or Nb-doped TiO$_2$ \cite{PhysRevB.61.13445}), where even in the ionic limit, there can only be Nb$^{4+}$ ions (i.e. Nb keeps one $4d$ electron), but not Ti$^{3+}$ ions.
Within our interpretation, the Nb gives its $4d$ electron to the conduction band, resulting in a metallic state and nominally Nb$^{(5-x)+}$ and Ti$^{(4-x)+}$ ions (instead of Nb$^{4+}$ and Ti$^{4+}$), with the Ti$^{(4-x)+}$ ions providing the XPS Ti$^{3+}$ signal. Second, the Ti$^{3+}$/(Ti$^{4+}$+Ti$^{3+}$) ratio is routinely used to estimate the dopant concentration, and gives very reasonable values, which are consistent with other experiments such as Hall measurements and Rutherford backscattering for low to moderate doping \cite{PhysRevB.83.035410, /content/aip/journal/jap/116/4/10.1063/1.4891225,/content/aip/journal/apl/100/26/10.1063/1.4731642}. According to our theory, this is possible because Ti$^{4+}$ is the highest oxidation state and contains only one main peak. The Ti$^{3+}$ signal therefore appears as an extra, distinct side peak when the average Ti oxidation state is reduced via doping. For a nominally $d^1$ system (to be discussed shortly), multi-oxidation peaks exist intrinsically in the first place, and doping does not introduce a new peak. Also, we expect that using the Ti$^{3+}$/(Ti$^{4+}$+Ti$^{3+}$) ratio always slightly underestimates the dopant concentration, as the nominally pure Ti$^{3+}$ material already has a significant Ti$^{4+}$ signal. This is consistent with the results in Ref.~\cite{/content/aip/journal/jap/116/4/10.1063/1.4891225}. Finally, one cannot really distinguish the initial-state and final-state effects based solely on the XPS spectrum. Either spatially localized Ti$^{3+}$ ions or a uniformly distributed Ti$^{(4-x)+}$ can account for the XPS Ti$^{3+}$ peaks. The key difference between these two scenarios is that the former implies the presence of an in-gap state, whereas the latter does not. To differentiate between them, one should probe the valence states to see if there is an in-gap signal. In oxygen-deficient SrTiO$_3$, an in-gap signal is observed in ARPES \cite{OV_Arpes_2002, NatMetal_STO_2D, Hatch_13}.
In this case the XPS Ti$^{3+}$ peak can be due to the presence of localized Ti$^{3+}$ ions. We note that in the literature, an oxygen vacancy is suggested to be a single donor \cite{Hou_10, PhysRevLett.111.217601}, which would result in nominally localized Ti$^{3.5+}$ ions (we favor this view). Within the final-state effect, Ti$^{3.5+}$ ions also lead to a separate XPS Ti$^{3+}$ peak. It is worth noting that at the LaAlO$_3$/SrTiO$_3$ interface, the oxygen vacancies are responsible for the majority of the charge carriers \cite{PhysRevB.75.121404,PhysRevLett.102.176805}. However, the x-ray absorption spectrum does not indicate the existence of Ti$^{3+}$ ions \cite{PhysRevLett.102.166804,PhysRevLett.111.087204}. For the nominally $d^1$ TMOs, all the optimally oxidized $d^1$ samples we have grown (as well as vacuum-cleaved single crystal Ti$_2$O$_3$ \cite{Comparison_TiO2_STO}) display XPS spectra with multiple components. If the multi-component structure is viewed as being caused by the final-state effect, the existence of these multiple components does not require the presence of different oxidation states in the sample. Even though the XPS data show multiple components, the systematic way in which the oxygen content is controlled in the growth experiments, in combination with the single phase RHEED patterns observed, precludes the existence of different oxidation environments in the optimally oxidized samples; the final-state interpretation naturally reconciles this seeming conflict. Moreover, our calculation shows the same doping dependence of the relative peak intensities: increasing the electron doping decreases the $d^0$ peak intensity and causes an increase in the intensity of the $d^2$ peak.
This qualitative agreement between theory and experiment leads us to believe that the multi-peak structure in the single phase $d^1$ transition metal oxides actually originates from the final-state effect and is intrinsic. Certainly, as mentioned previously, one cannot rule out the initial-state effect, and ions of higher oxidation states (V$^{5+}$ for example) may exist at or near the surface of the sample. As observed in some vanadates \cite{Eguchi2007421,PhysRevB.80.235104, Takizawa_thesis}, these ions also result in a $d^0$ signal. However, we notice that even if these ions do exist, the $d^0$ signals appear to be too strong ($d^0$ and $d^1$ peaks are of comparable strength) to be interpreted as being solely from them. In fact, we believe that in SrVO$_3$ the surface reconstruction (initial-state effect) and the final-state effect {\it both} contribute to the observed $d^0$ peak (see the Supplementary Material \cite{Supplementary}). \subsubsection{Limitations of the theory} There are two uncertainties in our model which make a more quantitative analysis difficult. First, it is not easy to map the chemical potential $\mu$ to the average $d$ occupancy in the bulk material. Second, the energy distribution of the bath orbitals and the cluster-bath coupling are also hard to determine. However, the multi-peak structure is insensitive to these uncertainties (see Appendix B). Namely, as long as there are particle exchanges between the cluster and the bath, there are multiple peaks in the XPS spectrum. For this reason we believe the conclusions drawn from our model are qualitatively correct. \subsubsection{Charge fluctuation} We now discuss the origin and the importance of charge fluctuation. The charge fluctuation cannot be neglected in the metallic state, where particle exchange with the Fermi sea causes fluctuation in the occupation of local orbitals \cite{PhysRevB.28.4315}.
Accordingly, charge fluctuations in doped or metallic samples should not be neglected, and multiple XPS peaks in these samples are expected (and indeed observed) \cite{PhysRevB.61.13445}. For undoped, nominally insulating $d^1$ materials, the criterion of being metallic is not always satisfied at low temperature. Uncorrelated $d^1$ materials are expected to be band metals. The samples we have studied, NbO$_2$ and LaTiO$_3$, are both metallic at high temperature and undergo a metal-to-insulator transition at 1080 K (of Peierls type) \cite{0295-5075-58-6-851} and 125 K (of Mott type) \cite{PhysRevLett.69.1796}, respectively; SrVO$_3$ is intrinsically metallic \cite{PhysRevLett.104.147601}. Note that SrVO$_3$ already shows the $d^2$ peak in the optimally oxidized sample [Fig.~\ref{fig:SrVO3} (a)], indicating its relatively strong charge fluctuation due to its metallic nature. Specific to our experimental conditions, all $n$-doped SrTiO$_3$ samples are metallic at room temperature. LaTiO$_3$ is already metallic at room temperature, which easily allows for charge fluctuation. For NbO$_2$, the sample is still nominally insulating at room temperature, but its relatively small band gap of $\sim$1.0 eV \cite{Posadas_NbO2.APL} likely results in a non-negligible concentration of electrons in the conduction band at room temperature. The fact that no sample charging is observed during XPS measurements indicates that there is sufficient conductivity in the samples at room temperature (sufficient thermally excited carriers in the conduction band) to allow for charge fluctuation to occur. Therefore, although the charge fluctuation in the undoped, nominally insulating $d^1$ materials can be weaker than in the doped samples, we believe it is still non-negligible. \subsection{Relative importance of the initial-state and final-state effects} We would like to conclude our theoretical analysis by addressing the relative importance of the initial-state and final-state effects.
From Eq.~\eqref{eqn:dp_cluster}, we see that the valence screening is described by the parameter $U_{dc}$, which is the strength of the core-hole-induced attractive potential. If $U_{dc} = 0$, then valence band electrons do not feel the existence of the core hole, and thus no final-state effect is involved. With this observation, we propose that the dimensionless parameter $\xi = U_{dc}/W$, with $W$ the typical energy scale of the valence bandwidth, can be used to characterize the relative importance of the initial-state and final-state effects: large $\xi$ favors the final-state effect; small $\xi$ favors the initial-state effect. As the bandwidth is proportional to the electron hopping $t$, we can roughly regard $1/U_{dc}$ as the time scale to create a core hole, and $1/t$ as the time scale for a conduction electron to move to screen the core hole. Therefore the inverse of $\xi$ ($1/\xi$) essentially describes how efficiently (quickly) the conduction electrons screen the core hole. For a fixed $U_{dc}$ (about 10 eV \cite{CoreLevel}), materials with a large/small valence bandwidth favor the initial-state/final-state effect. With this picture, we comment on the established interpretations of XPS spectra. For covalent materials such as carbon and silicon, the initial-state effect appears to be dominant, and the multi-peak structure is used appropriately to signal the existence of different oxidation phases \cite{Miller_2002_C, PhysRevB.38.6084}. Consistent with our argument, diamond-structure C and Si indeed have relatively large valence bandwidths of approximately 20 eV \cite{PhysRevB.41.3048, Bassani} and 12 eV \cite{PhysRevB.10.5095}, respectively, favoring the initial-state effect. For materials with valence electrons in localized orbitals (rare earth $4f$) such as lanthanum and cerium \cite{Kotani_1974, Kotani_99, PhysRevB.28.4315}, it is the final-state effect which dominates.
For these materials the XPS multi-peak structure is not attributed to the oxidation states, but can be used to determine material-specific model parameters by comparing to a model calculation \cite{CoreLevel}. A typical bandwidth of $f$-orbitals is about 4 eV \cite{PhysRevB.27.7330, PhysRevB.46.3458, PhysRevB.85.125134}, which favors the final-state effect. In terms of valence bandwidth, the early transition metal oxides lie between these two classes of materials (about 6 to 8 eV \cite{Zollner,PhysRevB.79.113103}). As the experimental results for carefully grown samples from different probing techniques fit the final-state effect better (see also Refs.~\cite{PhysRevB.76.035101, PhysRevB.78.075103}), we believe the final-state effect is also the dominant one in the transition metal oxides. Taking $U_{dc}$ to be 10 eV, we summarize the origin of the multi-peak structure in XPS for the materials mentioned above in Table \ref{table:xi}. \begin{table}[h] \begin{tabular}{ l | l l l } Material & valence bandwidth ($W$) & $\xi = U_{dc}/W$ & origin of multiple peaks \\ \hline diamond carbon & 20 eV & 0.5 & initial-state \\ diamond silicon & 12 eV & 0.83 & initial-state \\ SrTiO$_3$ & 6 eV & 1.67 & final-state \\ CeNi$_2$ & 4 eV & 2.5 & final-state \end{tabular} \caption{The origin of the multi-peak structure in XPS for various materials. The value for SrTiO$_3$ is representative of the $d^1$ materials studied in this paper. $U_{dc}$ is taken to be 10 eV.} \label{table:xi} \end{table} \section{Conclusions} We investigate the origin of the observed XPS multi-peak structure of single-phase nominally $d^1$ transition metal oxides including NbO$_2$, SrVO$_3$, LaTiO$_3$, and lightly $n$-doped SrTiO$_3$. Experimentally, we find that the XPS spectra (specifically the photoelectrons from Nb $3d$, V $2p$, Ti $2p$ core levels) of these materials all display at least two, and sometimes three pairs of peaks, which can be consistently assigned as $d^0$, $d^1$, and $d^2$ oxidation states.
For lightly $n$-doped SrTiO$_3$, a weak $d^1$ shoulder, whose energy position is independent of the dopants, appears with respect to the main $d^0$ peak. For nominally $d^1$ transition metal oxides, electron doping increases the intensity of the $d^2$ peak but decreases that of the $d^0$ peak, whereas hole doping reverses this trend. A single $d^1$ peak is never observed, even in single phase samples. In particular, the $d^0$ peak always exists even in the electron doped samples where stoichiometric analysis shows strong oxygen-deficiency and diffraction shows no secondary phases, strongly indicating that the multi-peak structure is intrinsic to these materials. Theoretically, we construct and solve a cluster-bath model, and explicitly demonstrate that the final-state effect (i.e. the valence response to the created core hole) naturally leads to the multiple peaks in the XPS spectrum even in a spatially uniform system. Moreover, the relative peak strength as a function of doping is qualitatively consistent with the experimental observation. The combination of experimental and theoretical analysis leads us to conclude that the multi-peak structure in the nominally $d^1$ transition metal oxides is intrinsic, and does not necessarily imply the existence of spatially isolated (or clustered) $d^0$ and $d^2$ ions in a sample. Using the same analysis, we argue that the ratio between the local screening potential and the valence bandwidth is the key dimensionless parameter that determines the relative importance between initial-state and final-state effects. To establish the existence of different oxidation phases in a sample, further spatially-resolved probing techniques involving the valence electrons are needed. For this reason, investigating the final-state effect in x-ray absorption spectroscopy can be very helpful. \section*{Acknowledgements} C.L. thanks Jeroen van den Brink, Nicholas Plumb, and Ralph Claessen for encouraging and enlightening conversations. 
We thank Miri Choi ((La,Sr)TiO$_3$), Daniel Groom (LaTiO$_3$) and Kristy Kormondy ((La,Sr)VO$_3$) for help in growth optimization, and Andy O'Hara and Allan MacDonald for insightful comments. Support for this work was provided through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences under award number DESC0008877.
\section{Introduction} The experimental search for the QCD deconfinement phase transition in ultrarelativistic heavy-ion collisions will enter a new stage when the Relativistic Heavy Ion Collider (RHIC) at Brookhaven provides data complementary to those from the CERN SPS \cite{qm97}. It is desirable to have a continuum field-theoretical modeling of quark deconfinement and chiral restoration at finite temperature and density (or chemical potential $\mu$) that can be extended also to hadronic observables in a rapid and transparent way. Significant steps in this direction have recently been taken through a continuum approach to QCD$_{T,\mu}$ based on the truncated Dyson-Schwinger equations (DSEs) within the Matsubara formalism~\cite{bbkr,brs,mrs}, and a recent review~\cite{br98} is available. A most appealing feature of this approach to modeling nonperturbative QCD$_{T,\mu}$ is that dynamical chiral symmetry breaking {\it and} confinement are embodied in the model gluon 2-point function constrained by chiral observables at \mbox{$T=\mu=0$}, and no new parameters are needed for the extension to \mbox{$T,\mu >0$}. Approximations introduced by a specific truncation scheme for the set of DSEs can be systematically relaxed. However, due to the breaking of $O(4)$ symmetry and the number of discrete Matsubara modes needed, the finite $T,\mu$ extension of realistic DSE models entails the solution of a complicated set of coupled integral equations. The generation of hadronic observables from such solutions, although a straightforward adaptation of the approach~\cite{T97rev} found to be successful at \mbox{$T=\mu=0$}, adds further to the difficulties. In the separable model we study here, detailed realism is sacrificed in the hope that the dominant and essential features may be captured in a simple and transparent format.
To this end we simplify an existing \mbox{$T=\mu=0$} confining separable interaction Ansatz~\cite{b+} to produce a gaussian separable model for \mbox{$T,\mu >0$}~\cite{bkt}. \section{Confining separable Dyson-Schwinger equation model} In a Feynman-like gauge where we take $D_{\mu\nu}=\delta_{\mu\nu}D(p-q)$ to be the effective interaction between quark colored vector currents, the rainbow approximation to the DSE for the quark propagator $S(p)=[i \rlap/p A(p) + B(p) + m_0]^{-1}$ yields in Euclidean metric \begin{eqnarray} B(p) &=& \frac{16}{3} \int \frac{d^4q}{(2\pi)^4} D(p-q) \frac{B(q)+m_0}{q^2A^2(q)+\left[ B(q)+m_0\right]^2} \,\,\, , \\ \left[ A(p)-1 \right] p^2 &=& \frac{8}{3} \int \frac{d^4q}{(2\pi)^4} D(p-q) \frac{(p\cdot q) A(q)}{q^2A^2(q)+\left[ B(q)+m_0\right]^2} \,\,\, . \end{eqnarray} We study a separable interaction given by~\cite{b+} \begin{equation} D(p-q) = D_0~ f_0(p^2)f_0(q^2) + D_1~ f_1(p^2)(p\cdot q)f_1(q^2)~, \label{model} \end{equation} where $D_0, D_1$ are strength parameters and the form factors, for simplicity, are here taken to be $f_i(p^2) = \mbox{exp}(-p^2/\Lambda_i^2)$ with range parameters $\Lambda_i$. It is easily verified that if $D_0$ is non-zero, then \mbox{$B(p)=\Delta m~f_0(p^2)$}, and if $D_1$ is non-zero, then \mbox{$A(p)=1+\Delta a~f_1(p^2)$}. The DSE then reduces to nonlinear equations for the constants $\Delta m$ and $\Delta a$. The form factors should be chosen to simulate the $p^2$ dependence of $A(p)$ and $B(p)$ from a more realistic interaction. We restrict our considerations here to the rank-1 case where $D_1=0$ and \mbox{$A(p)=1$}. The parameters $D_0$, $\Lambda_0$ and $m_0$ are used to produce reasonable $\pi$ and $\omega$ properties as well as to ensure that the produced $B(p)$ has a reasonable strength, with a range $\Lambda_0 \sim 0.6 \dots 0.8$ GeV to be realistic~\cite{b+}. If there are no solutions to $p^2A^2(p)+(B(p)+m_0)^2=0$ for real $p^2$ then the quarks are confined.
If in the chiral limit ($m_0=0$) there is a nontrivial solution for $B(p)$, then chiral symmetry is dynamically broken. Both phenomena can be implemented in this separable model. In the chiral limit, the model is confining if $D_0$ is strong enough to make $\Delta m/\Lambda_0\ge1/\sqrt{2{\rm e}}$. Thus, for a typical range $\Lambda_0$, confinement occurs with $M(p\approx 0)\ge 300$ MeV. Mesons as $q \bar q$ bound states are described by the Bethe-Salpeter equation which in the ladder approximation for the present approach is \begin{equation} - \lambda(P^2) \Gamma(p,P) = \case{4}{3} \int \frac{d^4q}{(2\pi)^4} D(p-q) \gamma_\mu S(q_+) \Gamma(q,P)S(q_-) \gamma_\mu~, \label{bs} \end{equation} where \mbox{$q_\pm = q \pm P/2$} and $P$ is the meson momentum. The meson mass is identified from \mbox{$\lambda(P^2=-M^2)=$}$1$. With the rank-1 separable interaction, only the $\gamma_5$ and the $\gamma_5 \mbox{$\not \! P$}$ covariants contribute to the $\pi$~\cite{b+}, and here we retain only the dominant term $\Gamma_\pi(p,P) = i \gamma_5 E_\pi (p,P)$. For the vector meson, the only surviving form is \mbox{$\Gamma_{\rho \mu}(p,P) =$} \mbox{$\gamma_\mu^T(P) E_\rho (p,P)$}, with $ \gamma_\mu^T(P)$ being the projection of $\gamma_\mu$ transverse to $P$. The separable solutions have the form $E_i (p,P)=f_0(p^2) C_i(P^2), ~~i=\pi, \rho~$, where the $C_i$ factor out from Eq.~(\ref{bs}). In the limit where a zero momentum range for the interaction is simulated by \mbox{$f_0^2(q^2)\propto $}\mbox{$ \delta^4(q)$}, the expressions for the various BSE eigenvalues $\lambda(P^2)$ reduce to those of the Munczek and Nemirovsky~\cite{mn} model which implements extreme infrared dominance via \mbox{$D(p-q) \propto \delta^{(4)}(p-q)$}. The correspondence is not complete because the quark DSE solution in this model has $A(p)\neq 1$. The \mbox{$T,\mu >0$} generalization of this infrared dominant (ID) model has been studied recently~\cite{brs,mrs}.
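In the rank-1 case the gap equation collapses to a one-dimensional fixed-point problem for $\Delta m$, which is easy to solve numerically. The sketch below (our own illustration, not the authors' code) iterates the chiral-limit equation in units of $\Lambda_0$ for $D_0\Lambda_0^2 = 128$ and checks the confinement criterion $\Delta m/\Lambda_0 \ge 1/\sqrt{2{\rm e}} \approx 0.43$:

```python
import math

def gap_rhs(dm, D0=128.0, m0=0.0, n=2000, qmax=6.0):
    """RHS of the rank-1 gap equation in units of Lambda_0:
    dm = (16/3) * D0 / (8 pi^2) * Int_0^inf q^3 f0(q^2) M(q^2) / (q^2 + M^2) dq,
    with f0(q^2) = exp(-q^2) and M(q^2) = m0 + dm * f0(q^2)."""
    h = qmax / n
    total = 0.0
    for i in range(1, n + 1):          # simple midpoint quadrature
        q = (i - 0.5) * h
        f0 = math.exp(-q * q)
        M = m0 + dm * f0
        total += q**3 * f0 * M / (q * q + M * M)
    return (16.0 / 3.0) * D0 / (8.0 * math.pi**2) * total * h

# Fixed-point iteration for the chiral-limit mass gap.
dm = 0.5
for _ in range(80):
    dm = gap_rhs(dm)

threshold = 1.0 / math.sqrt(2.0 * math.e)
print(f"Delta m / Lambda_0 = {dm:.3f},  confinement threshold = {threshold:.3f}")
assert dm > threshold   # no real-p^2 mass pole: quarks are confined
```

With $\Lambda_0 = 0.687$ GeV the threshold corresponds to $M(p\approx 0) \approx 0.3$ GeV, consistent with the statement above.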
\section{Pion and rho-meson properties} \label{sec:mesons} With parameters $m_0/\Lambda_0=0.0096$, $D_0 \Lambda_0^2=128$ and $\Lambda_0=0.687$ GeV, the present Gaussian separable (GSM) model yields $M_\pi=0.14$ GeV, $M_\rho=M_\omega=0.783$ GeV, $f_\pi=0.104$ GeV, a chiral quark condensate $\langle \bar q q\rangle^{1/3}=-0.248$ GeV, and a $\rho-\gamma$ coupling constant $g_\rho=5.04$. The generalization to $T\neq 0$ is systematically accomplished by transcription of the Euclidean quark 4-momentum via \mbox{$q \rightarrow$} \mbox{$ q_n =$} \mbox{$(\omega_n, \vec{q})$}, where \mbox{$\omega_n=(2n+1)\pi T$} are the discrete Matsubara frequencies. The obtained $T$-dependence of the mass gap $\Delta m(T)$ allows for a study of the deconfinement and chiral restoration features of this model. We find that both occur at \mbox{$T_c=$} 146~MeV where, in the chiral limit, both $\Delta m(T)$ and $\langle \bar{q}q \rangle^0$ vanish sharply as \mbox{$(1-T/T_c)^\beta$} with the critical exponent having the mean field value \mbox{$\beta=1/2$}. For the $\bar q q$ meson modes, the $O(4)$ symmetry is broken and the type of mass shell condition employed must be specified. If there is a bound state, the associated pole contribution to the relevant $\bar q q$ propagator or current correlator will have a denominator proportional to \begin{equation} 1-\lambda(\Omega_m^2, \vec{P}^2) \; \propto \; \Omega_m^2 + \vec{P}^2 +M^2(T)~. \label{dennom} \end{equation} We investigate the meson mode eigenvalues $\lambda$ using only the lowest meson Matsubara mode ($\Omega_m=0$) and the continuation \mbox{$\vec{P}^2\longrightarrow -M^2$}. The masses so identified are spatial screening masses corresponding to a behavior $\exp(-M x)$ in the conjugate 3-space coordinate $x$ and should correspond to the lowest bound state if one exists. 
\begin{figure}[h] \centerline{ \psfig{figure=piVIbig.eps,width=5cm,height=5.8cm,angle=-90} \psfig{figure=rhoVItbig.eps,width=5cm,height=5.8cm,angle=-90} } \caption{\label{pirho} $T$-dependence of $\bar q q$ meson properties for the rank-1 separable model up to \mbox{$T_c=146$}~MeV. Left panel: Spatial $M_\pi(T)$ and quantities associated with mass relations as dictated by chiral symmetry; Right panel: Spatial masses $M_\rho^T(T)$ and $M_\rho^L(T)$ along with the \mbox{$\rho^0 \rightarrow e^+ e^-$} decay width and the associated vector coupling constant $g_{\rho}(T)$. } \end{figure} The obtained $\pi$ and $\rho$ masses are displayed in Fig.~\ref{pirho} and are seen to be only weakly $T$-dependent until near \mbox{$T_c=146$}~MeV. This result for $M_\pi (T)$ reproduces the similar behavior obtained from the ladder-rainbow truncation of the DSE-BSE complex with a more realistic interaction~\cite{bbkr}. The qualitative behavior obtained for the 3-space transverse and longitudinal masses $M_\rho^T(T), M_\rho^L(T)$ agrees with that reported~\cite{mrs} for the limiting case of the zero momentum range or ID model. To explore the extent to which the model respects the detailed constraints from chiral symmetry, we investigate the exact QCD pseudoscalar mass relation~\cite{MRT98} which, after extension to \mbox{$T>0$}, is \begin{equation} \label{gen-GMOR} M_\pi^2(T) \, f_\pi(T) = 2 m_0 \, r_P(T)~. \end{equation} Here $r_P$ is the residue at the pion pole in the pseudoscalar vertex, and in the present model, is given by \begin{equation} \label{rp} i r_P(T) = N_c \; T \sum_n {\rm tr}_s \int \frac{d^3q}{(2\pi)^3}\, \gamma_5 S(q_n+\case{\vec{P}}{2}) \Gamma_\pi (q_n;\vec{P}) S(q_n-\case{\vec{P}}{2})\,. \end{equation} The relation in Eq.~(\ref{gen-GMOR}) is a consequence of the pion pole structure of the isovector axial Ward identity which links the quark propagator, the pseudoscalar vertex and the axial vector vertex~\cite{MRT98}.
In the chiral limit, \mbox{$r_P \rightarrow$} \mbox{$ \langle \bar{q} q\rangle^0/f_\pi^0$} and Eq.(\ref{gen-GMOR}), for small mass, produces the Gell-Mann--Oakes--Renner (GMOR) relation. The exact mass relation, Eq.~(\ref{gen-GMOR}), can only be approximately satisfied when the various quantities are obtained approximately such as in the present separable model. The error can be used to assess the reliability of the present approach to modeling the behavior of the pseudoscalar bound state as the temperature is raised towards $T_c$. Our findings are displayed in Fig.~\ref{pirho}. There the solid line represents $r_P(T)$ calculated from the quark loop integral in Eq.~(\ref{rp}); the dotted line represents $r_P$ extracted from the other quantities in Eq.~(\ref{gen-GMOR}). It is surprising that the separable model obeys this exact QCD mass relation to better than 1\% for the complete temperature range up to the restoration of chiral symmetry. Also evident from Fig.~\ref{pirho} is that $M_\pi(T)$ and the chiral condensate are largely temperature independent until within about $0.8~T_c$ whereafter $M_\pi$ rises roughly as fast as the condensate falls. We have also investigated the (approximate) GMOR relation for the present model. The quantity \mbox{$ \langle \bar{q} q\rangle^0/N_\pi^0$} is displayed in Fig.~\ref{pirho} as the long-dashed line, and if the GMOR relation were exactly obeyed, this would coincide with \mbox{$M_\pi^2\,f_\pi/2m$} which is the dotted line. The quantity $N_\pi^0$ enters here via its role as the normalization constant of the chiral limit $\pi$ BS amplitude \mbox{$E_\pi^0(p^2)=$} \mbox{$i\gamma_5 B_0(p^2)/N_\pi^0$}. If all covariants for the pion were to be retained and the axial vector Ward identity were obeyed, one would have \mbox{$N_\pi=f_\pi$}~\cite{MRT98}. 
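Plugging the $T=0$ numbers quoted in Sec.~\ref{sec:mesons} into these relations gives a quick back-of-the-envelope consistency check (our own sketch, using $m_0 = 0.0096\,\Lambda_0$ and the magnitude of the quoted condensate):

```python
# T = 0 consistency check of the pseudoscalar mass relations, using the
# parameter values quoted in the text (all in GeV).
Lambda0 = 0.687
m0 = 0.0096 * Lambda0          # current quark mass
M_pi = 0.14
f_pi = 0.104
qbarq = 0.248**3               # |<qbar q>|, chiral condensate magnitude

r_P_exact = M_pi**2 * f_pi / (2.0 * m0)   # from M_pi^2 f_pi = 2 m0 r_P
r_P_gmor = qbarq / f_pi                   # chiral-limit (GMOR) estimate of r_P

print(f"r_P from exact mass relation : {r_P_exact:.4f} GeV^2")
print(f"r_P from GMOR estimate       : {r_P_gmor:.4f} GeV^2")
print(f"relative difference          : {abs(r_P_exact / r_P_gmor - 1.0) * 100:.1f} %")
```

The two estimates differ by roughly five percent, in line with the accuracy of the GMOR relation reported for this model.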
The results in Fig.~\ref{pirho} indicate that the GMOR relation contains an error of about $5$\% when compared either to the exact mass relation or to the quantities produced by the separable model and that this is temperature-independent until about $0.9~T_c$. It should be noted that $f_\pi^0, N_\pi^0$ and $ \langle \bar{q} q\rangle^0$ are equivalent order parameters near $T_c$ and have weak $T$-dependence below $T_c$. A consequence is that $M_\pi^2 \, f_\pi$, $r_P$ and \mbox{$ \langle \bar{q} q\rangle^0/N_\pi^0$} are almost $T$-independent and so are the estimated errors for the two mass relations linking these quantities. Since we obtain $M_\pi$ and $f_\pi$ from the model BSE solutions at finite current quark mass, $f_\pi$ does not exactly decrease to zero and $M_\pi$ does not exactly diverge at $T_c$. Vector mesons play an important role as precursors to di-lepton events in relativistic heavy-ion collisions and it is important to explore the intrinsic $T$-dependence of electromagnetic and leptonic vector coupling constants that can arise from the quark-gluon dynamics that underlies the finite extent of the vector $\bar q q$ modes. The present model provides a simple framework for such investigations. The electromagnetic decay constant $g_\rho(T)$ that describes the coupling of the transverse $\rho^0$ to the photon is given by~\cite{IKR99} \begin{eqnarray} \frac{{M^T_\rho}^2(T)}{g_\rho(T)} &=& \case{N_c}{3}\, T \sum_n {\rm tr}_s \int \frac{d^3q}{(2\pi)^3}\, \gamma_\mu S(q_n+\case{\vec{P}}{2}) \Gamma_\mu^T(q_n;\vec{P}) S(q_n-\case{\vec{P}}{2})~, \label{rhophoton} \end{eqnarray} after accounting for the normalization~\cite{T97rev} of the present BS amplitudes. The electromagnetic decay width of the transverse $\rho$ mode is calculated from \begin{eqnarray} \Gamma_{\rho^0 \rightarrow e^+\,e^-}(T) &=& \frac{4\pi\,\alpha^2\,M^T_\rho(T)}{3\;g_\rho^2(T)}~. 
\end{eqnarray} At \mbox{$T=0$} the experimental value is \mbox{$\Gamma_{\rho^0\rightarrow e^+e^-}(0) =$} \mbox{$6.77$}~keV corresponding to the value \mbox{$g_\rho(0) =5.03$}. Our results for $g_\rho(T)$ and \mbox{$\Gamma_{\rho^0\rightarrow e^+e^-}(T)$} are displayed in Fig.~\ref{pirho}. This electromagnetic width of the 3-space transverse $\rho$ increases with $T$ and reaches a maximum of 1.72 times the $T=0$ width at about $0.9~T_c$. An increasing electromagnetic width for the $\rho$ has been found empirically to be one of the possible medium effects that influence the heavy-ion dilepton spectrum~\cite{sb}. \section{Equation of state (EOS) for quark matter} The thermodynamical properties of the confining quark model and in particular the EOS and the phase diagram can be obtained from the grand canonical thermodynamical potential $\Omega(T,V,\mu)=T \ln Z(T,V,\mu)=-p(T,\mu)V$, where the contributions to the pressure (for a homogeneous system) \begin{equation} \label{ptot} p(T,\mu)=p_{\rm cond}(T,\mu)+p_{\rm kin}(T,\mu)+p_0 \end{equation} are obtained from a mean-field approximation to the Euclidean path integral representation of the grand canonical partition function $Z(T,V,\mu)$. In the rank-one separable gluon propagator model, the condensate contribution is $p_{\rm cond}(T,\mu)=3\,\Delta m(T,\mu)^2/(16\,D_0)$ and the kinetic part of the quark pressure is given by \begin{equation} \label{pkin} p_{\rm kin}(T,\mu)=2 N_c N_f \int\frac{d^3 k}{(2\pi)^3} T \sum_{n} \ln \left(\frac{k_n^2+M^2(k_n^2)}{k_n^2+m_0^2}\right)+p_{\rm free}(T,\mu)~. \end{equation} In Eq. (\ref{pkin}), $k_n^2=[(2n+1)\pi T + i \mu]^2 + {\bf k}^2$, $M(k_n^2)=m_0+\Delta m(T,\mu) f_0(k_n^2)$ and the divergent 3-momentum integration has been regularized by subtracting the free quark pressure and adding it in the well-known \cite{kapusta} regularized form $p_{\rm free}(T,\mu)$. 
The pressure contribution $p_0$ is found such that the total pressure (\ref{ptot}) at the phase boundary in the $T,\mu$-plane vanishes, see Fig. \ref{eos}. While investigating this EOS for the separable confining quark model defined above we have observed that, as a function of the coupling parameter $D_0$, an instability ${\rm d}(p_{\rm cond}+p_{\rm kin})/{\rm d}T<0$ occurs when the criterion for confinement (absence of quasiparticle mass poles) is fulfilled. The physical quark pressure in the confinement domain of the phase diagram vanishes, see Fig. \ref{pres_T}. The results for the EOS and the phase diagram can be compared to those for the zero momentum range model \cite{brs} with the important modification that in the present finite range model the tricritical point is obtained at finite chemical potential whereas with zero range it was found on the $\mu=0$ axis of the phase diagram, see Fig. \ref{eos}. The location of the tricritical point could be experimentally verified in CERN-SPS experiments provided that changes in the pion momentum correlation function could be detected as a function of the beam energy \cite{misha}. \begin{figure}[ht] \centerline{ \psfig{figure=r1_pres.eps,width=8cm,height=7cm,angle=-90} } \caption{\label{pres_T}The quark matter pressure as a function of temperature for the separable confining model. The kinetic part (long dashed line) is overcompensated by the condensate part (dot-dashed line) resulting in an instability ${\rm d} (p_{\rm cond}+p_{\rm kin})/ {\rm d} T < 0$ (dotted line) for $T<T_c=146$~MeV which is characteristic for confining quark models. The physical pressure of quark matter (solid line) vanishes in this region. For comparison, the free quark matter pressure is shown by the dashed line.} \end{figure} A particularly interesting phenomenological application is the $T=0$ quark matter EOS which is a necessary element for studies of quark deconfinement in neutron stars.
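For orientation, the $T=0$ transition point of a bag-model EOS can be estimated by treating the chirally restored phase as a free massless two-flavor quark gas, $p(\mu) = g\mu^4/(24\pi^2) - B$ with $g = 2N_cN_f$ (a sketch ignoring residual interactions; $\mu$ denotes the quark chemical potential):

```python
import math

HBARC = 0.19733              # GeV*fm, converts fm^-1 to GeV
B = 0.075 * HBARC**3         # bag constant 75 MeV/fm^3 expressed in GeV^4

Nc, Nf = 3, 2
g = 2 * Nc * Nf              # spin x color x flavor degeneracy

# Deconfined-phase pressure minus bag constant vanishes at the transition:
#   p(mu_c) = g * mu_c^4 / (24 * pi^2) - B = 0.
mu_c = (24.0 * math.pi**2 * B / g) ** 0.25

print(f"bag constant B = {B:.3e} GeV^4")
print(f"mu_c = {mu_c * 1000:.0f} MeV")
```

With $B = 75$ MeV/fm$^3$ this free-gas estimate lands within a few percent of the transition point $\mu = 0.337$ GeV quoted for the GSM in Fig.~\ref{eos}.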
The present Gaussian separable model leads~\cite{bb99} to a bag model EOS for quark matter with a bag constant $B(T=0)=150$ MeV/fm$^3$ for the parameter set ($D_0~\Lambda_0^2=128$) employed in Sec.~\ref{sec:mesons}. A second parameter set ($D_0~\Lambda_0^2=97$) that is also confining and provides an equally good description of the same $\pi$ and $\rho$ properties produces $B(T=0)=75$ MeV/fm$^3$; see also Fig. \ref{eos}. More stringent constraints on the low-temperature EOS will require the inclusion of hadronic excitations including the nucleon. \begin{figure}[ht] \centerline{ \psfig{figure=eos.eps,width=6cm,height=5cm,angle=0} \psfig{figure=phase0.eps,width=6cm,height=5cm,angle=0} } \caption{\label{eos}Left panel: pressure (solid line) and energy density (long dashed line) vs. chemical potential at \mbox{$T=0$} for the GSM with $\Lambda_0=0.756$ GeV and $D_0\Lambda_0^2=97.0$ in the chiral limit. The phase transition from the confining quark matter phase to the deconfined one with restored chiral symmetry occurs at $\mu=0.337$ GeV and is first order. The results are coincident with a bag model EOS (dashed and dotted lines) for a bag constant $B=75$ MeV/fm$^3$. Right panel: quark matter phase diagram for the GSM with $\Lambda_0=0.687$ GeV and $D_0\Lambda_0^2=128$. Along the dashed line the pressure vanishes, the solid line separates the chiral symmetric phase from the broken one. A tricritical point is obtained at $T\sim 127$ MeV, $\mu\sim 120$ MeV.} \end{figure} \section{Deconfinement in rotating neutron stars} For the discussion of deconfinement in neutron stars, it is crucial to go beyond the mean field description and to include into the EOS also the hadronic bound states (neutrons, protons, mesons) in the confined phase. This task has not yet been solved and therefore, we adopt for this phase a Walecka model as it is introduced in \cite{kapusta}. 
In constructing the phase transition to the deconfined quark matter as described by the GSM with a parameter set leading to $B=75$ MeV/fm$^3$ we have to obey the constraints of global baryon number conservation and charge neutrality \cite{glendenning}. The composition of the neutron star matter is also constrained by the processes which establish $\beta$-equilibrium in the quark phase ($d\to u+e^-+\bar\nu$) and in the hadronic phase ($n\to p+e^-+\bar\nu$). For the given EOS we obtain a deconfinement transition of first order where the hadronic phase is separated from the quark matter one by a mixed phase in the density interval $1.39\le n/n_0\le 2.37$, where $n_0=0.16~{\rm fm}^{-3}$ is the nuclear saturation density~\cite{c+}. All observable consequences should be discussed for rapidly rotating compact objects and therefore we have studied these rotating configurations using this model-EOS with a deconfinement transition. The result, shown in Fig. \ref{starmass}, demonstrates that within the present approach a deconfinement transition in compact stars is compatible with constraints on radius and mass recently derived from the observation of QPOs in low-mass X-ray binaries (LMXBs)~\cite{lamb}. \begin{figure}[ht] \centerline{ \psfig{figure=starmass.eps,width=10cm,height=5.8cm,angle=0} } \caption{\label{starmass} The mass $M$ as a function of the equatorial radius $R_e$ (left panel) and the central density (right panel) for a neutron star with deconfinement transition. Rotating configurations (dashed lines) with maximum rotation frequency are connected with the static ones (solid lines) by lines of constant baryon number $N/N_\odot=1.3,~1.55,~1.8,~2.14$, respectively.
The occurrence of an extended quark matter core in the compact star is compatible with recently derived constraints on maximum mass and radius from QPO observations in low-mass X-ray binaries, see text.} \end{figure} The basic quantity for the study of a deconfinement transition in rotating compact stars is the moment of inertia which governs the rotation and thus the spin-down characteristics. Changes in the angular velocity $\Omega(t)$ as a function of time can occur, e.g., due to magnetic dipole radiation~\cite{frido} or mass accretion~\cite{c+}. During the time evolution of an isolated pulsar, the deviation of the braking index from $n(\Omega)=3$ can signal not only the occurrence of a quark matter core, but also its size~\cite{c+}. We have found that in LMXBs with mass accretion at conserved angular momentum the occurrence of a quark matter core would reflect itself in a change from a spin-down to a spin-up era~\cite{c+}, see Fig. \ref{spin}. \begin{figure}[ht] \centerline{ \psfig{figure=braking.eps,width=5cm,height=5.8cm,angle=-90} \psfig{figure=spinflip.eps,width=5cm,height=5.8cm,angle=-90} } \caption{\label{spin} Deconfinement signals from the pulse timing of rapidly rotating compact stars. Left panel: The braking index deviates from $n=3$ when during the spin-down evolution of a pulsar a quark matter core is formed which entails a change in the moment of inertia. The larger the radius of the quark core, the more pronounced the signal. Right panel: The spin-down evolution of a compact star with mass accretion at constant angular momentum $J$ flips to a spin-up behaviour at the onset of deconfinement.} \end{figure} More detailed investigations of these scenarios have to be performed with a more realistic EOS, in particular for the hadronic phase.
The possibility of a $T=0$ quark matter EOS which corresponds to a bag model EOS with small bag constants of the order of $B=70$ MeV/fm$^{3}$ is an important result of the study of the confining quark model which bears interesting consequences for the study of further nontrivial phases in high-density QCD, e.g. (color) superconductivity. \section{Superconducting quark matter} The possible occurrence of a superconducting quark matter phase~\cite{bl84} has recently been reconsidered on the basis of nonperturbative approaches to the effective quark 4-point interaction~\cite{br98,alford,rapp,carter}, and critical temperatures of the order of $50$ MeV with diquark pairing gaps of $\approx 100$ MeV have been obtained. So, if quark matter occurs in the interior of compact stars as advocated in the previous section, it would have to be realised in such a superconducting phase. Deconfinement in compact stars can thus result in effects on the magnetic field structure~\cite{bss} as well as the cooling curves of pulsars. Contrary to previous estimates~\cite{bl84}, low temperature quark matter is a superconductor of the second kind and thus the magnetic field can penetrate into the quark core in Abrikosov vortices and does not decay on timescales shorter than $10^7$ years. Thus the occurrence of a superconducting quark matter phase in compact stars does not contradict observational data~\cite{bss}. The recently developed nonperturbative approaches to diquark condensates in high-density quark matter~\cite{alford,rapp,carter} can be further constrained by studying the consequences for cooling curves of pulsars~\cite{BKSV} which have to be consistent with the observational data~\cite{tsuruta}. \section{Conclusions} A simple confining separable interaction Ansatz for the rainbow-ladder truncated QCD Dyson-Schwinger equations is found capable of modeling $\bar q q$ meson states at \mbox{$T>0$} together with quark deconfinement and chiral restoration.
Deconfinement and chiral restoration are found to both occur at $T_c=146$ MeV. The spatial screening masses for the meson modes are obtained. We find that, until near $T_c$, $M_\pi(T)$ and $f_\pi(T)$ are weakly $T$-dependent and that this model obeys the exact QCD pseudoscalar mass relation to better than 1\%. The GMOR relation is found to be accurate to within 5\% until very near $T_c$. For the vector mode, the 3-space transverse and longitudinal masses $M_\rho^T(T)$ and $M_\rho^L(T)$ are weakly $T$-dependent while the width for the electromagnetic decay \mbox{$\rho^0\rightarrow e^+e^-$} is found to increase to 1.72 times the \mbox{$T=0$} width. The equation of state (EOS) for the model is investigated in the $T-\mu$ plane and it shows a tricritical point at $T= 127~{\rm MeV}, ~\mu= 120~{\rm MeV}$. At $T=0$ the EOS can be given the form of a bag model where a broad range of bag constants $B=75\dots 150$ MeV/fm$^3$ is obtained consistent with possible parametrizations of $\pi$ and $\rho$ observables. The consequences for the deconfinement transition in rapidly rotating neutron stars are considered and a new signal from the pulsar timing in binary systems with mass accretion is suggested. The model EOS under consideration meets the new constraints for maximum mass and radius recently derived from QPO observations. Within the present model, quark matter below $T_c\sim 50$ MeV is a superconductor of the second kind and it is suggested that the magnetic field in a neutron star forms an Abrikosov vortex lattice which penetrates into the quark matter core and thus, in accordance with observation, does not decay on timescales of $10^4$ years as previously suggested. \section*{Acknowledgments} P.C.T. acknowledges support by the {\sc National Science Foundation} under Grant No. INT-9603385 and the hospitality of the University of Rostock where part of this work was conducted. The work of D.B.
has been supported in part by the Deutscher Akademischer Austauschdienst (DAAD) and by the Volkswagen Stiftung under grant No. I/71 226. The authors thank Yu. Kalinovsky, P. Maris, C.D. Roberts and S. Schmidt for discussions and criticism. \section*{References}
\section{Introduction} The standard model (SM) has passed many tests from various experiments from the atomic physics scale up to a couple of TeV. Still it is well known that the SM has to be extended in order to accommodate the neutrino masses and mixings, the baryon asymmetry of the universe (BAU) and cold dark matter (CDM). The most economic explanation for the first two problems would be leptogenesis \cite{leptogenesis,lepto_review}, whereas there are many models for CDM in particle physics \cite{dm_review,Bergstrom:2000pn,Bertone:2004pz,Feng:2010gw}. For CDM physics, one of the puzzles is how CDM can be absolutely stable or very long lived. If unstable, the lifetime of CDM should be far longer than the age of the universe, say $\tau \gtrsim 10^{26-30}$ sec \cite{Ackermann:2012qk}. Otherwise its decay would produce too much $X$($\gamma$)-ray or neutrino flux to match observation. Still this lower bound of the CDM lifetime is far less than the lower bound on the proton lifetime, the reason for which remains one of the mysteries in particle physics. The required longevity of the dark matter (DM) can be guaranteed by a symmetry. If the symmetry is global, it can be broken by gravitational effects, and there can be dangerous operators suppressed by the Planck scale ($M_{\rm P}$), such as \begin{equation} - \mathcal{L}_{\rm decay} = \left\{ \begin{array}{ll} \frac{\lambda_{X, \rm non}}{M_{\rm P}} ~ X F_{\mu \nu} F^{\mu \nu} & \textrm{for bosonic~DM} \ X \\ & \\ \frac{\lambda_{\psi, \rm non}}{M_{\rm P}} ~\overline{\psi} \left( \slashed{D} \ell_{Li} \right) H^\dag & \textrm{for fermionic~DM} \ \psi \end{array} \right. \end{equation} where $\lambda_{X, \rm non} = \mathcal{O}(e^2)$ and $\lambda_{\psi, \rm non} \sim \mathcal{O}(1)$ are the couplings associated with the non-renormalizable operators, $X$ and $\psi$ are bosonic and fermionic dark matter candidates, and $\ell_{Li}$ and $H$ are the SM lepton and Higgs, respectively.
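To see why such operators are dangerous, a rough dimensional estimate suffices: the dimension-five bosonic operator gives $\Gamma \sim \lambda^2 m_X^3/(4\pi M_{\rm P}^2)$ (a sketch with an assumed $\mathcal{O}(1/4\pi)$ phase-space factor; exact prefactors are order one), so the lifetime falls steeply with the DM mass:

```python
import math

M_P = 1.22e19            # Planck mass in GeV
HBAR_GEV_S = 6.582e-25   # hbar in GeV*s (converts 1/GeV to seconds)
lam = 0.09               # lambda ~ e^2 for the X F F operator

def lifetime_seconds(m_x_gev):
    """Rough lifetime for X -> gamma gamma via the dimension-5 operator:
    Gamma ~ lam^2 m^3 / (4 pi M_P^2), up to O(1) factors."""
    gamma = lam**2 * m_x_gev**3 / (4.0 * math.pi * M_P**2)
    return HBAR_GEV_S / gamma

for m in (1e-5, 1e-3, 1e2):   # 10 keV, 1 MeV, 100 GeV
    print(f"m_X = {m:.0e} GeV  ->  tau ~ {lifetime_seconds(m):.1e} s")
```

With these numbers $m_X \sim 10$ keV gives $\tau \sim 10^{32}$ s, above the $10^{26\text{-}30}$ s bound, while an electroweak-scale mass gives $\tau \sim 10^{11}$ s, far shorter than the age of the universe.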
In this case, dark matter cannot be stable enough unless its mass is small, e.g., $m_X \lesssim \mathcal{O}(10) \mathinner{\mathrm{keV}}$ for bosonic CDM $X$, or $m_\psi \lesssim \mathcal{O}(1) \mathinner{\mathrm{GeV}}$ for fermionic CDM $\psi$. It may be possible to have such light dark matter, though it may not be theoretically so natural. Axions or keV-scale sterile neutrinos are good examples of DM whose longevity is guaranteed by global symmetries. On the other hand, the argument above implies that it is highly unlikely that an electroweak (EW) scale CDM is long lived or stable due to a global symmetry. In contrast to global symmetries, local symmetries other than the SM gauge group often appear in theories beyond the SM, and an unbroken one would guarantee the absolute stability of dark matter \footnote{If a $Z_2$ discrete symmetry is the remnant of a broken local symmetry, it can be used to guarantee the stability of dark matter, as often appears in the literature. However, a realistic model should then contain extra fields and couplings, as discussed in Refs.~\cite{koz2,koz2-1,progress_broken}.}. For example, gauge groups in superstring theory have very large ranks, e.g. $SO(32)$ or $E_8 \times E_8^{'}$. At low energy, these gauge groups may be broken down to a product of the SM gauge group ($SU(3)_C \times SU(2)_L \times U(1)_Y$) and some other group, and the latter may play the role of the dark gauge group we consider in this paper. The presence of an unbroken extra local symmetry implies the existence of massless gauge boson(s). Since such a boson is a carrier of a long range force \footnote{We assume that the local dark symmetry is not confining.
The confining non-Abelian hidden sector gauge interaction was considered in Refs.~\cite{ko_hidden_qcd1,ko_hidden_qcd2,ko_hidden_qcd_proceeding1,ko_hidden_qcd_proceeding2,ko_hidden_qcd_proceeding3,ko_hidden_qcd_proceeding4}.}, the massless dark gauge boson(s) could have significant effects on structure formation via self-interactions of dark matter \cite{Carlson:1992fn,deLaix:1995vi,Spergel:1999mh}. On the one hand, the dark gauge interaction is highly constrained by various properties of small and large scale dark matter halos \cite{Buckley:2009in}. On the other hand, it can provide a solution to the small scale puzzles of the collisionless CDM scenario (e.g. cored density profiles \cite{deBlok:1997ut,Oh:2010ea,deNaray:2011hy,Walker:2011zu} and low concentrations of massive sub-halos \cite{BoylanKolchin:2011dk}) without conflicting with constraints from large scale structure \cite{Vogelsberger:2012ku}. The massless dark gauge boson(s) could also contribute to the radiation density of the Universe, in addition to the thermal relic neutrinos of 3 species. The recent WMAP 9-year data analysis showed that the number of relativistic degrees of freedom is \cite{Hinshaw:2012fq} \begin{equation} N_{\rm eff}^{\rm obs} = 3.84 \pm 0.40 \ {\rm at} \ 68 \% \ {\rm CL}. \end{equation} Although this is consistent with the case of three active standard model neutrinos only ($N_{\rm eff}^{\rm SM} = 3.046$), some amount of extra radiation is still allowed, and it could come from light sterile neutrinos \cite{Abazajian:2012ys}, a hidden photon \cite{Holdom:1985ag} or axions \cite{Peccei:1977hh,Peccei:1977ur,Weinberg:1977ma,Wilczek:1977pj,Kim:1979if,Shifman:1979if,Zhitnitsky:1980tq,Dine:1981rt}. There is a considerable amount of literature on these possibilities.
Meanwhile, the dark sector can communicate with the SM sector via the Higgs portal interaction ($H^\dagger H$), which is quite often used in dark matter physics~\cite{EFT,EFT2,EFT_DLMQ,EFT_pseudo,Kim:2008pp,SFDM1} (see also \cite{Chu:2011be}, where DM production from SM particles via kinetic mixing and Higgs portals was analyzed). Another portal interaction can be provided by heavy RH neutrinos \footnote{The operators $\tilde{H} l_{Li}$ are also SM gauge singlets, like $H^\dagger H$, and could be portals to other singlets from the hidden sector. We do not consider this possibility because these operators have dimension $5/2$, and thus cannot have renormalizable couplings with composite operators made of hidden sector fields charged under the dark symmetry. Instead, we trade $\tilde{H} l_{Li}$ for the lower dimension operators $N_{Ri}$ in this paper.}, which are singlets under the SM gauge group~\cite{Cosme:2005sb,Gu:2009yy,Gu:2009hj,An:2009vq,Chun:2010hz,Falkowski:2011xh}. These singlet portal interactions are natural extensions of the SM within the framework of renormalizable quantum field theory, and allow rich phenomenology in both the dark matter and Higgs sectors, as we will show in the subsequent sections. Based on this line of argument, in this paper we consider an extension of the SM in which a local $U(1)_X$ dark symmetry is introduced to guarantee the stability of dark matter. The minimal particle content and renormalizable interactions are completely fixed once portal interactions via the Higgs and right-handed (RH) neutrinos are allowed. These extensions make it possible to accommodate neutrino masses and mixings, leptogenesis for the BAU, (a)symmetric dark matter and dark radiation. In addition to this rich physics, a Higgs inflation scenario can also be realized if large non-minimal couplings of the scalar fields to gravity are introduced, and a high enough reheating temperature after inflation sets a proper initial condition for the subsequent leptogenesis.
Before we proceed to the main discussion, let us make two comments. If we considered a spontaneously broken $U(1)_X$ by introducing a new $U(1)_X$-charged scalar $\phi$ with $\langle \phi (x) \rangle \neq 0$, there would appear a new neutral scalar $h_X$ from the radial component of $\phi$. This new neutral scalar would mix with the SM Higgs $h$, resulting in two neutral Higgs-like scalar bosons. Since $h_X$ is an SM singlet scalar, the scalar boson sector would be similar to the case of Ref.~\cite{SFDM1}. There, it was argued that the Higgs signal strength is always smaller than unity independently of the decay channel, due to the mixing between the SM Higgs and the singlet scalar, and also to possible decays of the scalar bosons into pairs of CDM particles. This case would be strongly disfavored if the currently reported enhancement in $H\rightarrow \gamma\gamma$ persists in future analyses. Another issue in the spontaneously broken dark symmetry case is the stability or longevity of the dark matter candidate. Nonrenormalizable operators suppressed by powers of (at least) $M_{\rm P}$, and even renormalizable operators in the scalar dark matter case, would in general make the CDM decay, as long as electric charge, energy-momentum and total angular momentum are conserved~\cite{koz2,progress_broken}. One has to make a judicious choice of dark charge assignments in order to avoid these problems. We postpone a detailed study of the spontaneously broken dark symmetry case to the future \cite{progress_broken}, although we describe its qualitative features in Table~2 of Sec.~8. This paper is organized as follows. In Sec.~2, we define the model Lagrangian assuming the local gauge symmetry $SU(3)_C \times SU(2)_L \times U(1)_Y \times U(1)_X$ as the underlying gauge symmetry, where $U(1)_X$ is the unbroken dark symmetry which guarantees the stability of the dark matter.
The right-handed neutrino singlet fields $N_{Ri}$ are also included for the seesaw mechanism and leptogenesis. In Sec.~3, after identifying the dark matter component of our model Lagrangian, we consider various constraints on the model from large and small scale structure formation, vacuum stability and the absence of Landau poles up to the Planck scale, the direct detection cross section, and indirect signatures. In Sec.~4, we calculate the amount of dark radiation in our model, which originates from the massless dark photon. Leptogenesis from RH neutrino decays is discussed in Sec.~5. The possibility of Higgs inflation assisted by scalar dark matter is discussed in Sec.~6. Collider phenomenology of the Higgs boson and scalar dark matter is presented in Sec.~7. Some variations of our model are described in Sec.~8, with special emphasis on the nature of the CDM and singlet portals, the number of Higgs-like neutral scalar bosons, extra dark radiation, and the Higgs signal strengths. We discuss a few miscellaneous issues in Sec.~9, including a comparison of our model with other models in the literature and the effects of nonrenormalizable operators. Finally, we summarize the results in Sec.~10. Explicit expressions for the thermally averaged cross sections for the processes relevant to our discussion are presented in the Appendix. \section{The Model} As explained in the Introduction, we assume that dark matter lives in a hidden sector and is stable due to an unbroken local $U(1)_X$ dark gauge symmetry. All the SM fields are taken to be $U(1)_X$ singlets. Assuming that the RH neutrinos are portals to the hidden sector, we need both a scalar ($X$) and a Dirac fermion ($\psi$) with the same nonzero dark charge (see Table~1). Then the composite operator $\psi X^\dagger$ becomes a gauge singlet and thus can couple to the RH neutrinos $N_{Ri}$ \footnote{If we did not assume that the RH neutrinos are portals to the dark sector, we would not have to introduce both $\psi$ and $X$ in the dark sector.
This case is discussed briefly in Sec.~8.}. With these assumptions, we can write the most general renormalizable Lagrangian as follows: \begin{equation} \label{Lagrangian} \mathcal{L} = \mathcal{L}_{\rm SM} + \mathcal{L}_X + {\mathcal{L}_\psi} + \mathcal{L}_{\rm kin-mix} + \mathcal{L}_{\rm H-portal} + \mathcal{L}_{\rm RHN-portal} \end{equation} where $\mathcal{L}_{\rm SM}$ is the standard model Lagrangian and \begin{eqnarray} \label{LX} \mathcal{L}_X &=& {\left| \left( \partial_\mu + i g_X q_X \hat{B}'_\mu \right) X \right|^2} - \frac{1}{4} \hat{B}'_{\mu \nu} \hat{B}'^{ \mu \nu} - m_X^2 X^\dag X - \frac{1}{4} \lambda_X \left( X^\dag X \right)^2 , \\ \mathcal{L}_\psi &=& i \bar{\psi} \gamma^\mu \left( \partial_\mu + i g_X q_X \hat{B}'_\mu \right) \psi - m_\psi \bar{\psi} \psi , \\ \label{kin-mix} \mathcal{L}_{\rm kin-mix} &=& - \frac{1}{2} \sin \epsilon \hat{B}'_{\mu \nu} \hat{B}^{\mu \nu} , \\ \label{Hportal} \mathcal{L}_{\rm H-portal} &=& - \frac{1}{2} \lambda_{HX} X^\dag X H^\dag H , \\ \label{RHNportal} - \mathcal{L}_{\rm RHN-portal} &=& \frac{1}{2} M_i \overline{N_{Ri}^C} N_{Ri} + \left[ Y_\nu^{ij} \overline{N_{Ri}} \ell_{Lj} H^\dag + \lambda^i \overline{N_{Ri}} \psi X^\dag + \textrm{H.c.} \right]. \end{eqnarray} Here $g_X$, $q_X$, $\hat{B}'_\mu$ and $\hat{B}'_{\mu \nu}$ are the gauge coupling, $U(1)_X$ charge, gauge field and field strength tensor of the dark $U(1)_X$, respectively. $\hat{B}_{\mu \nu}$ is the gauge field strength of the SM $U(1)_Y$. We assume \begin{equation} m_X^2 > 0, \quad \lambda_X > 0, \quad \lambda_{HX} > 0 \end{equation} so that the local $U(1)_X$ remains unbroken and the scalar potential is bounded from below at tree level \footnote{Quantum corrections to the scalar potential will be discussed in Sec.~3.2.}. Either $X$ or $\psi$ is absolutely stable due to the unbroken local $U(1)_X$ gauge symmetry, and will be responsible for the present relic density of nonbaryonic CDM.
In our model, there is a massless dark photon which couples to the SM $U(1)_Y$ gauge field through kinetic mixing. One can diagonalize the kinetic terms via the linear transformation \cite{Holdom:1985ag} \begin{equation} \left( \begin{array}{l} \hat{B}_\mu \\ \hat{B}'_\mu \end{array} \right) = \left( \begin{array}{cc} 1 / \cos \epsilon & 0 \\ - \tan \epsilon & 1 \end{array} \right) \left( \begin{array}{l} B_\mu \\ B'_\mu \end{array} \right). \end{equation} In this basis, the SM $U(1)_Y$ gauge coupling is redefined as $g_Y = \hat{g}_Y / \cos \epsilon$, and the hidden photon does not couple to the SM fields. However, the dark sector fields now couple to the SM photon and $Z$-boson. In the small mixing limit, the couplings are approximated as \begin{equation} \label{DM-SM-int} \mathcal{L}_{\rm DS-SM} = {{\bar{\psi} i \gamma^\mu \left[ \partial_\mu - i g_X q_X t_\epsilon \left( c_W A_\mu - s_W Z_\mu \right) \right] \psi}} + \left| \left[ \partial _\mu - i g_X q_X t_\epsilon \left( c_W A_\mu - s_W Z_\mu \right) \right] X \right|^2 \end{equation} where $t_\epsilon = \tan \epsilon$, $c_W = \cos \theta_W$ and $s_W = \sin \theta_W$, with $\theta_W$ being the Weinberg angle. Hence, dark sector fields charged under $U(1)_X$ can be regarded as mini-charged particles under electromagnetism after the kinetic mixing term is removed by the field redefinition, Eq.~(2.8). Meanwhile, we can assign lepton number and $U(1)_X$ charge to the RH neutrinos and dark fields as shown in Table~\ref{tab:charges}. \begin{table}[htdp] \begin{center} \begin{tabular}{|c||c|c|c|} \hline field & $N$ & $\psi$ & $X$ \\ \hline \hline $q_L$ & 1 & 1 & 0 \\ \hline $q_X$ & 0 & 1 & 1 \\ \hline \end{tabular} \end{center} \caption{Lepton number and $U(1)_X$ charge assignment} \label{tab:charges} \end{table} Then, the global lepton number is explicitly broken by the Majorana mass terms for the RH neutrinos.
If $Y_\nu$ and $\lambda_i$ carry $CP$-violating phases, the decay of RH neutrinos can develop lepton number asymmetries in both the visible and dark sectors. Since $U(1)_X$ is unbroken, the asymmetries in the dark sector satisfy the relation \begin{equation} \label{dpsiPlusdx} Y_{\Delta \psi} + Y_{\Delta X} = 0 \end{equation} where $Y_{\Delta i} \equiv (n_i - n_{\bar i})/s$ is the asymmetry between $i$ and $\bar{i}$, with $n_i$ and $s$ being the number density of $i$ and the entropy density. There are various physics issues involved in our model, as listed below: \begin{itemize} \item Small and large scale structure \item Vacuum stability of Higgs potential \item CDM relic density and direct/indirect DM searches \item Dark radiation \item Leptogenesis \item Higgs inflation in case of large non-minimal gravitational couplings \end{itemize} In other words, the model is highly constrained, yet it turns out that it can address all of these issues within its narrow allowed parameter space without conflicting with any phenomenological, astrophysical or cosmological observations. It is highly nontrivial that our model can accommodate all these constraints in a certain parameter region, given that it is based on the local gauge principle for dark matter stability and the assumption of singlet portals to the dark sector, and introduces only three new fields, $X$, $\psi$ and $\hat{B}'_\mu$. \section{Constraints} Including the portal interactions, the presence of an unbroken local $U(1)_X$ in the dark sector with kinetic mixing with the SM sector is subject to various phenomenological and cosmological constraints. In this section, we examine these constraints one by one.
\subsection{Structure formation} \label{structure-form} The dark matter self-interaction caused by the nonzero $U(1)_X$ charge could significantly affect the kinematics, shape and density profile of dark matter halos, so it is constrained by, for example, galactic dynamics \cite{Ackerman:2008gi}, the ellipticity of dark matter halos \cite{MiraldaEscude:2000qt} and the Bullet Cluster \cite{Randall:2007ph} (see also \cite{Ostriker:1999ee,Buckley:2009in,Loeb:2010gj,Vogelsberger:2012ku}). For a velocity-dependent self-interaction, the transfer cross section of the dark matter self-interaction, defined as $\sigma_T = \int d \Omega \left( 1 - \cos \theta \right) \frac{d \sigma}{d \Omega}$, is bounded from above as \cite{Vogelsberger:2012ku} \begin{equation} \label{sigmaT-obs} \left. \frac{\sigma_T^{\rm obs}}{m_{\rm dm}} \right|_{v=10 {\rm km/s}} \lesssim 35 \ {\rm cm^2/g}. \end{equation} Interestingly, it was shown that, if $\sigma_T^{\rm obs}$ is close to this bound, it can solve the core/cusp problem \cite{Oh:2010ea} and the ``too big to fail'' problem \cite{BoylanKolchin:2011dk} of the standard collisionless CDM scenario \cite{Vogelsberger:2012ku}. In our model, for both $\psi$ and $X$ the self-interaction cross section with a massless dark photon is given by \cite{Feng:2009mn} \begin{equation} \sigma_T \simeq \frac{16 \pi \alpha_X^2}{m_{X(\psi)}^2 v^4} \ln \left[ \frac{m_{X(\psi)}^2 v^3}{(4 \pi \rho_X\alpha_X^3)^{1/2}} \right] \end{equation} where $v$ and $\rho_X$ are the velocity and density of the dark matter in the region of interest \footnote{There are also other $t$-channel scatterings of $X$-$X^\dag$ (Higgs and $Z$-boson mediation) and the contact interaction $\lambda_X$, but these are not enhanced at small velocity.}. We take $v = 10 {\rm km/sec}$ and $\rho_X = 3 \mathinner{\mathrm{GeV}} /{\rm cm}^3$.
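Plugging in numbers provides a quick check (a sketch; we take the reference values $\alpha_X = 5\times 10^{-5}$ and $m_X = 300 \mathinner{\mathrm{GeV}}$, and approximate the logarithm by $41$ as in the constraint derived next):

```python
import math

GEV2_TO_CM2 = 3.894e-28   # (1 GeV^-2) expressed in cm^2
GEV_TO_G    = 1.783e-24   # 1 GeV/c^2 expressed in grams

alpha_X = 5e-5            # reference dark fine-structure constant
m_X     = 300.0           # dark matter mass in GeV
v       = 10.0 / 2.998e5  # 10 km/s in units of c
log_fac = 41.0            # approximate value of the logarithm in sigma_T

# transfer cross section in GeV^-2, then sigma_T / m in cm^2/g
sigma_T = 16 * math.pi * alpha_X**2 / (m_X**2 * v**4) * log_fac
ratio   = sigma_T * GEV2_TO_CM2 / (m_X * GEV_TO_G)
# ratio comes out around 34 cm^2/g, essentially saturating the ~35 cm^2/g bound
```

This confirms that these reference parameter values sit right at the observational limit from halo structure.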
Then, compared to \eq{sigmaT-obs}, the dark interaction is constrained as \begin{equation} \label{DMstructure-const} \alpha_X \lesssim 5 \times 10^{-5} \left( \frac{m_{X(\psi)}}{300 \mathinner{\mathrm{GeV}}} \right)^{3/2} \end{equation} where we approximated the log factor by $41$. The lighter of $X$ and $\psi$ poses the stronger constraint on $\alpha_X$. Note that $\psi$ couples only to the dark photon at low energy, and the thermally-averaged annihilation cross section of $\psi$ is found to be \begin{equation} \label{sv-psi} \langle \sigma v \rangle_{\rm ann}^\psi \approx \frac{\pi \alpha_X^2}{2 m_\psi^2} . \end{equation} The abundance of $\psi$ at freeze-out is \begin{equation} \label{mYpsi} \left. \frac{m_\psi n_\psi}{s} \right|_{T_{{\rm f},\psi}} = 3.79 \left( \frac{g_*(T_{{\rm f}, \psi})^{1/2}}{g_{*S}} \right) \left( \frac{m_\psi}{T_{{\rm f},\psi}} \right) \frac{1}{\langle \sigma v \rangle_{\rm ann}^\psi M_{\rm P}} \simeq \frac{\langle \sigma v \rangle_{\rm ann}^{\rm th}}{\langle \sigma v \rangle_{\rm ann}^\psi} \left( \frac{m_{\rm dm} n_{\rm dm}}{s} \right)_{\rm obs} \end{equation} where \begin{equation} \label{sv-th} \langle \sigma v \rangle_{\rm ann}^{\rm th} \simeq 6 \times 10^{-26} {\rm cm}^3 / {\rm sec} \end{equation} is the thermally-averaged annihilation cross section which gives the right amount of present dark matter relic density \footnote{ Since dark matter charged under an unbroken symmetry can annihilate only with its antiparticle, which constitutes half of the total CDM relic density, $\langle \sigma v \rangle_{\rm ann}^{\rm th}$ is larger than that for a charge-neutral dark matter by a factor of 2. } corresponding to \begin{equation} \left( \frac{m_{\rm dm} n_{\rm dm}}{s} \right)_{\rm obs} \simeq 2 \times 10^{-10} \mathinner{\mathrm{GeV}}.
\end{equation} On the far right-hand side of \eq{mYpsi}, we used the fact that, even if $\langle \sigma v \rangle _ {\rm ann}^\psi$ varies by several orders of magnitude, $m_{\rm dm} / T_{\rm f}$ changes only by a factor of $\mathcal{O}(1)$. The constraint \eq{DMstructure-const} implies that \begin{equation} \label{psi-ann} \frac{\langle \sigma v \rangle_{\rm ann}^{\rm th}}{\langle \sigma v \rangle_{\rm ann}^\psi} \gtrsim 3.5 \times 10^4 \times \left\{ \begin{array}{lcc} \left( \frac{1 \mathinner{\mathrm{TeV}}}{m_X} \right)^3 \left( \frac{m_\psi}{1 \mathinner{\mathrm{TeV}}} \right)^2 & {\rm for} & m_X < m_\psi \\ \left( \frac{1 \mathinner{\mathrm{TeV}}}{m_\psi} \right) & {\rm for} & m_\psi < m_X. \end{array} \right. \end{equation} Hence, if $\psi$ were stable, it would be overabundant at present. In order to avoid overclosure of the universe by $\psi$, we assume \begin{equation} \label{CDM-const} m_\psi > m_X \end{equation} so that $\psi$ can decay through virtual RH neutrinos. The decay rate of $\psi$ is given by \begin{equation} \Gamma_\psi \simeq \Gamma_{\psi \to \nu X} + \Gamma_{\psi \to \nu X h} \end{equation} where \begin{eqnarray} \Gamma_{\psi \to \nu X} &\simeq& \frac{\lambda_1^2}{16 \pi} \frac{\tilde{m}_\nu}{M_1} m_\psi \left( 1 - \frac{m_X^2}{m_\psi^2} \right)^2 , \\ \Gamma_{\psi \to \nu X h} &\simeq& \frac{1}{48 \pi^2} \left( \frac{m_\psi^2}{v_H^2} \right) \Gamma_{\psi \to \nu X} \end{eqnarray} with $\tilde{m}_\nu \equiv Y_\nu^2 v_H^2 / M_1$ and $v_H=174 \mathinner{\mathrm{GeV}}$ being, respectively, a contribution to the neutrino mass matrix and the vev of the Higgs field. The present CDM relic density poses the strongest constraint on $\Gamma_\psi$, as we will see in a moment. \eq{dpsiPlusdx} implies that, even if an asymmetry between $\psi$ and $\bar{\psi}$ arises in the decay of the RH neutrinos, once $\psi$ decays, the dark matter composed of $X$ and $X^\dag$ becomes totally symmetric irrespective of its origin.
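The coefficient in \eq{psi-ann} can be checked numerically for the case $m_\psi < m_X$ (a sketch; the conversion $1 \mathinner{\mathrm{GeV}}^{-2} \simeq 1.17\times 10^{-17}\,{\rm cm^3/s}$ and saturation of the structure-formation bound on $\alpha_X$ are assumed):

```python
import math

GEV2_TO_CM3S = 1.17e-17   # (1 GeV^-2) * c expressed in cm^3/s

m_psi   = 1000.0                               # GeV, reference point
alpha_X = 5e-5 * (m_psi / 300.0)**1.5          # saturating the structure bound
sv_psi  = math.pi * alpha_X**2 / (2 * m_psi**2)  # <sigma v>_psi in GeV^-2

# ratio of the thermal-relic cross section to the allowed psi annihilation
ratio = 6e-26 / (sv_psi * GEV2_TO_CM3S)
# ratio ~ 3.5e4 at m_psi = 1 TeV, reproducing the overabundance factor above
```

The large ratio makes explicit why a stable $\psi$ would overclose the universe, motivating the assumption $m_\psi > m_X$.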
If $\psi$ decays before the thermal component of $X$ freezes out, the $X$'s coming from the decay of $\psi$ thermalize, which makes the number density $n_X$ return to its thermal equilibrium value. The present relic density in this case is determined by the thermal relic, so the annihilation cross section should be the one in \eq{sv-th} (``symmetric thermal'' case). On the other hand, if $\psi$ decays after the thermal freeze-out of $X$, the annihilation cross section should be larger than the thermal-relic value of \eq{sv-th}, so that the non-thermal freeze-out provides the right amount of relic density (``symmetric non-thermal'' case). In this case, the background temperature required when $\psi$ decays is determined by the annihilation cross section of $X$. In our model, the pair annihilation of $X$-$X^\dag$ can be controlled by the Higgs portal interaction $\lambda_{HX}$, which leads to $s$-wave annihilation. It freezes out at a temperature $T_{\rm f} \sim m_X / 20$. However, dark matter can still be in kinetic equilibrium with the thermal background at lower temperatures due to scatterings off SM particles. The scattering is mediated by the photon and the Higgs, thanks to the kinetic mixing and the Higgs portal interaction. The transfer cross section for photon mediation scales as $\sigma_T \propto \epsilon^2 / T^2$. Although the associated scattering could be quite efficient at low temperature, $\epsilon \ll 1$ makes it inefficient. We found that, for $\epsilon \sim \mathcal{O}(10^{-9})$, which will be of interest as described in Sec.~3.3, the momentum transfer rate via the photon is too small to maintain kinetic equilibrium after freeze-out. In the case of Higgs mediation, kinetic equilibrium can be maintained by scattering mainly off charm quarks down to a temperature of order the charm quark mass \cite{Hofmann:2001bi}. At lower temperatures, the scattering rate is too small.
Hence, for $\lambda_{HX} \lesssim 1$ and electroweak scale $m_X$, kinetic decoupling takes place at a temperature $T_{\rm kd} \sim 1 \mathinner{\mathrm{GeV}}$, before the QCD phase transition \footnote{ As long as $X$ is decoupled before the QCD phase transition, the effect of the dark photon on the SM radiation at the time of BBN is negligible, even though the dark photon decouples from $X$ at a temperature $T \simeq 16 \mathinner{\mathrm{MeV}} \left( \frac{5 \times 10^{-5}}{\alpha_X} \right) \left( \frac{m_X}{300 \mathinner{\mathrm{GeV}}} \right)^{3/2}$ in our scenario \cite{Feng:2009mn}. }. If $\psi$ decays to $X$ abundantly, $X$ and $X^\dag$ can re-annihilate even after the freeze-out of the thermal annihilation, until their number densities are reduced enough to stop the re-annihilation. The abundance of $X$ and $X^\dag$ at that moment should be responsible for the present relic density of dark matter. Hence, when $\psi$ decays at a temperature $T_{\rm d}$, the annihilation of $X$ should be frozen, with a rate \begin{equation} \Gamma_{\rm ann}(\tau_\psi) = n_X \langle \sigma v \rangle_{\rm ann,d}^X \end{equation} with \begin{equation} \left. \frac{n_X}{s} \right|_{T_{\rm d}} \simeq 2 \times 10^{-12} \left( \frac{100 \mathinner{\mathrm{GeV}}}{m_X} \right) \end{equation} being the present number-density-to-entropy-density ratio of $X$ that matches observation. $\langle \sigma v \rangle_{\rm ann,d}^X$ is the annihilation cross section of $X$ when $\psi$ decays.
Equating the annihilation rate to the expansion rate when $\psi$ decays, we find the decay temperature of $\psi$ that gives the right amount of non-thermal relic density, \begin{equation} T_{\rm d} \equiv \left( \frac{\pi^2}{90} g_*(T_{\rm d}) \right)^{-1/4} \sqrt{\Gamma_\psi M_{\rm P}} \simeq 9.2 \mathinner{\mathrm{GeV}} \left( \frac{m_X}{300 \mathinner{\mathrm{GeV}}} \right) \left( \frac{\langle \sigma v \rangle_{\rm ann}^{\rm th}}{\langle \sigma v \rangle_{\rm ann, d}^X} \right) \end{equation} where we used $g_*(T_{\rm d})=g_{*S}(T_{\rm d})=100$ and $M_{\rm P} = 2.4 \times 10^{18} \mathinner{\mathrm{GeV}}$ on the right-hand side of the above equation. This implies that \begin{eqnarray} \label{lambda1-CDM-const} \lambda_1^2 &\simeq& 58.5 \left( \frac{0.1 \mathinner{\mathrm{eV}}}{\tilde{m}_\nu} \right) \left( \frac{M_1}{10^9 \mathinner{\mathrm{GeV}}} \right) \left( \frac{1 \mathinner{\mathrm{TeV}}}{m_\psi} \right) \left( 1 - \frac{m_X^2}{m_\psi^2} \right)^{-2} \left[ 1 + \frac{1}{48 \pi^2} \left( \frac{m_\psi}{v_H} \right)^2 \right]^{-1} \nonumber \\ && \phantom{58.5} \left( \frac{m_X}{300 \mathinner{\mathrm{GeV}}} \right)^2 \left( \frac{\langle \sigma v \rangle_{\rm ann}^{\rm th}}{\langle \sigma v \rangle_{\rm ann,d}^X} \right)^2 . \end{eqnarray} Note that, if $\langle \sigma v \rangle_{\rm ann,d}^X = \langle \sigma v \rangle_{\rm ann}^{\rm th}$, any $T_{\rm d}$ equal to or larger than $T_{\rm f}$, and the corresponding $\lambda_1$, are acceptable. Note also that $T_{\rm d} > T_{\rm kd}$ unless $\langle \sigma v \rangle_{\rm ann,d}^X$ is larger than $\langle \sigma v \rangle_{\rm ann}^{\rm th}$ by at least two orders of magnitude. However, as described in Sec.~\ref{dd}, only $\langle \sigma v \rangle_{\rm ann}^X / \langle \sigma v \rangle_{\rm ann} ^{\rm th} \lesssim 5$ is allowed, so we can take $\langle \sigma v \rangle_{\rm ann,d}^X = \langle \sigma v \rangle_{\rm ann}^X$.
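The quoted coefficient $T_{\rm d} \simeq 9.2 \mathinner{\mathrm{GeV}}$ can be cross-checked by equating the residual annihilation rate $n_X \langle \sigma v \rangle$ to the Hubble rate (a sketch using the yield and cross-section values quoted above, with $g_* = g_{*S} = 100$):

```python
import math

GEV2_TO_CM3S = 1.17e-17   # (1 GeV^-2) * c expressed in cm^3/s

m_X, g_star, M_P = 300.0, 100.0, 2.4e18   # GeV; relativistic dof; reduced Planck mass
sv_d      = 6e-26 / GEV2_TO_CM3S          # <sigma v>_th converted to GeV^-2
nX_over_s = 2e-12 * (100.0 / m_X)         # observed yield of X

# Freeze condition n_X <sigma v> = H, with H = sqrt(pi^2 g_*/90) T^2 / M_P
# and s = (2 pi^2/45) g_* T^3; solving for T gives:
T_d = math.sqrt(math.pi**2 * g_star / 90) / (
    nX_over_s * (2 * math.pi**2 / 45) * g_star * sv_d * M_P)
# T_d comes out near 9.2 GeV for <sigma v>_d = <sigma v>_th, as quoted
```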
Fig.~\ref{fig:non-th-fz} shows contours giving the right amount of relic density as a function of $\lambda_1$ and $m_\psi$. In the figure, the solid blue lines from right to left are for $\langle \sigma v \rangle_{\rm ann}^X / \langle \sigma v \rangle_{\rm ann} ^{\rm th} = 1$ and $5$ with $\tilde{m}_\nu = 0.1 \mathinner{\mathrm{eV}}$, $M_1 =1.63 \times 10^{10} \mathinner{\mathrm{GeV}}$ and $m_X = 300 \mathinner{\mathrm{GeV}}$. \begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{figs/non-thermal-fz.eps} \caption{ Parameter space for the right amount of dark matter relic density. Contours correspond to the present dark matter relic density for $\langle \sigma v \rangle_{\rm ann}^X / \langle \sigma v \rangle_{\rm ann} ^{\rm th} = 1, 5$ (solid blue lines from right to left) with $m_X = 300 \mathinner{\mathrm{GeV}}$, $\tilde{m}_\nu = 0.1 \mathinner{\mathrm{eV}}$ and $M_1 = 1.63 \times 10^{10} \mathinner{\mathrm{GeV}}$. The gray region is excluded by the XENON100 direct dark matter search, as described in Sec.~\ref{dd}. } \label{fig:non-th-fz} \end{figure} In short, the existence of the massless dark photon constrains our model parameters to satisfy \eqs{DMstructure-const}{lambda1-CDM-const}, which come from small/large scale structure formation and the present dark matter relic density, respectively. \subsection{Vacuum stability} \label{vac-stab} In the standard model, the Higgs potential becomes unstable at an intermediate scale because of top loop contributions to the Higgs quartic coupling, though the instability scale depends on some of the standard model parameters, for example the top pole mass and the strong coupling \cite{Alekhin:2012py}. Such an instability can be cured if the Higgs field couples to other scalar field(s) \cite{Lebedev:2012zw,EliasMiro:2012ay,Baek:2012uj}. Depending on whether the Higgs mixes with the additional scalar(s), tree-level and/or loop effects can remove the instability.
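The stabilizing loop effect of the Higgs portal coupling can be illustrated with a toy integration of the one-loop $\beta$-functions written out below, freezing the gauge and top Yukawa couplings at their electroweak values (a crude simplification made only for illustration; the analysis in the text uses full 2-loop SM running, so the numbers here are indicative, not quantitative):

```python
import math

def run_lambda_H(lam_HX, lam_X, steps=4000):
    """Toy one-loop running of the Higgs quartic from m_t up to M_P."""
    lt, g2, g1 = 0.94, 0.65, 0.36   # top Yukawa and gauge couplings, frozen (crude)
    lH = 0.26                       # lambda_H at the EW scale for m_h ~ 125 GeV
    t, t_end = math.log(173.0), math.log(2.4e18)
    dt = (t_end - t) / steps
    loop = 1.0 / (16 * math.pi**2)
    for _ in range(steps):          # simple Euler integration in t = ln(mu)
        bH = loop * (24*lH**2 + 12*lH*lt**2 - 6*lt**4
                     - 3*lH*(3*g2**2 + g1**2)
                     + 0.375*(2*g2**4 + (g2**2 + g1**2)**2)
                     + 0.125*lam_HX**2)
        bHX = loop * lam_HX * (2*(6*lH + 3*lam_X + lam_HX)
                               - (1.5*lH*(3*g2**2 + g1**2) - 6*lt**2))
        bX = loop * (0.5*lam_HX**2 + 18*lam_X**2)
        lH, lam_HX, lam_X = lH + bH*dt, lam_HX + bHX*dt, lam_X + bX*dt
    return lH

# A nonzero lambda_HX adds +lambda_HX^2/8 to beta_{lambda_H}, lifting
# lambda_H at high scales relative to the lambda_HX = 0 case.
```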
In our model, $X$ does not develop a non-zero VEV, and the SM Higgs does not mix with $X$. In this case, the loop effect should be large enough to remove the vacuum instability of the SM Higgs potential. Note that the Dirac neutrino mass terms also contribute to the RG running of the Higgs quartic coupling. However, this contribution is negative, reflecting the fermionic nature of the right-handed neutrinos \cite{Rodejohann:2012px}. Hence, in order not to worsen the vacuum instability up to the Planck scale, we take \begin{equation} Y_{\nu}^{ij} \lesssim 0.1, \end{equation} and ignore its contribution to the RG equation of the Higgs quartic coupling. Then, the relevant one-loop RG equations are \begin{equation} \beta_{\lambda_i} \equiv \frac{d \lambda_i}{d \ln \mu} \end{equation} where $i = H, HX, X$ and \begin{eqnarray} \label{lambdat-1loop-beta} \label{lambdaH-1loop-beta} \beta_{\lambda_H}&=& \frac{1}{16 \pi^2} \left[ 24 \lambda_H^2 + 12 \lambda_H \lambda_t^2 - 6 \lambda_t^4 - 3 \lambda_H \left( 3 g_2^2 + g_1^2 \right) + \frac{3}{8} \left( 2 g_2^4 + \left( g_2^2 + g_1^2 \right)^2 \right) + \frac{1}{8} \lambda_{HX}^2 \right] ,~~~~~~~ \\ \label{lambdaHS-1loop-beta} \beta_{\lambda_{HX}} &=& \frac{\lambda_{HX}}{16 \pi^2} \left[ 2 \left( 6 \lambda_H + 3 \lambda_X + \lambda_{HX} \right) - \left( \frac{3}{2} \lambda_H \left( 3 g_2^2 + g_1^2 \right) - 6 \lambda_t^2 \right) \right] , \\ \label{lambdaS-1loop-beta} \beta_{\lambda_X} &=& \frac{1}{16 \pi^2} \left[ \frac{1}{2} \lambda_{HX}^2 + 18 \lambda_X^2 \right] \end{eqnarray} in addition to those for the other SM couplings. We solved the 2-loop RGEs for the SM couplings and the 1-loop RGEs for the non-SM couplings numerically, and found that the vacuum stability of the Higgs potential and the perturbativity of the couplings require \begin{equation} 0.2 \lesssim \lambda_{HX} \lesssim 0.6, \quad \lambda_X \lesssim 0.2.
\end{equation} \subsection{Direct detection} \label{dd} In our model, dark matter couples to the SM particles via the neutral SM gauge bosons (see \eq{DM-SM-int}) and the Higgs portal; both types of interactions therefore provide channels for direct dark matter searches. In the case of gauge boson exchange, the spin-independent (SI) dark matter-nucleon scattering cross section via photon exchange provides a strong constraint on the kinetic mixing. As can be seen from \eq{DM-SM-int}, our dark matter has a mini electric charge, \begin{equation} \epsilon_e = - \frac{g_X}{e} q_X c_W \tan \epsilon. \end{equation} For scattering off a target atom with atomic number $Z$, the differential cross section for the Rutherford scattering of our dark matter is given by \begin{equation} \label{dsigma-dOmega} \frac{d \sigma_A}{d \Omega} = \frac{\epsilon_e^2 \alpha_{\rm em}^2 Z^2 \mu_A^2}{4 m_X^4 v_{\rm cm}^4 \sin^4 (\theta_{\rm cm}/2)} \mathcal{F}_A^2(q r_A) \end{equation} where $\mu_A \equiv m_X m_A / \left(m_X + m_A \right)$ is the reduced mass, with $m_A$ the mass of the atom, $v_{\rm cm}$ is the dark matter velocity in the center-of-mass frame, and $\mathcal{F}_A(q r_A)$ is the form factor of the target atom, with $q$ and $r_A$ being respectively the momentum transfer and the effective nuclear radius. The CM-frame scattering angle $\theta_{\rm cm}$ is related to the nuclear recoil energy of the atom, $E_{\rm r}$, as \begin{equation} E_{\rm r} = \frac{\mu_A^2}{m_A} v^2 \left( 1 - \cos \theta_{\rm cm} \right) \end{equation} where $v$ is the lab velocity. So \eq{dsigma-dOmega} can be expressed as \begin{equation} \label{dsdE-th} \left. \frac{d \sigma_A}{d E_{\rm r}} \right|_{\rm th} = \frac{2 \pi \epsilon_e^2 \alpha_{\rm em}^2 Z^2}{m_A E_{\rm r}^2 v^2} \mathcal{F}_A^2(E_r). \end{equation} Experimentally, for SI dark matter-nucleus scattering, the differential cross section with respect to the nuclear recoil energy is parameterized as \begin{equation} \label{dsdE-exp} \left.
\frac{d \sigma_A}{d E_{\rm r}} \right|_{\rm exp} = \frac{2 m_A Z^2}{\mu_p^2 v^2} \left( \sigma_p^{\rm SI} \right)_{\rm exp} \mathcal{F}_A^2(E_{\rm r}) \end{equation} where $\mu_p = m_X m_p / \left( m_X + m_p \right)$ is the reduced mass of the dark matter-proton system, and $\left( \sigma_p^{\rm SI} \right)_{\rm exp}$ is the dark matter-proton scattering cross section constrained by experiments. Note that the velocity dependence of \eq{dsdE-th} is the same as that of \eq{dsdE-exp}, and $\mathcal{F}^2_A(E_{\rm r}) / E_{\rm r}^2$ is a monotonically decreasing function over the range of $E_{\rm r}$ relevant to the various direct detection experiments \cite{Lewin:1995rx}. Hence, the kinetic mixing is bounded from above as \begin{equation} \label{t-epsilon-bnd} t_\epsilon < \left[ \frac{1}{\pi q_X^2 c_W^2 \alpha_X \alpha_{\rm em}} \right]^{1/2} \left( \frac{m_A}{\mu_p} \right) E_{\rm r}^{\rm T} \left( \sigma_p^{\rm SI} \right)^{1/2} \end{equation} where $E_{\rm r}^{\rm T}$ is the threshold recoil energy of the target atom at a given experiment. In the case of the Higgs portal interaction, the scattering cross section is \begin{equation} \sigma_{\mathcal {N}, H}^{\rm SI} = \frac{1}{\pi} m_{\rm r}^2 f_{\mathcal {N}, H}^2 \end{equation} where \begin{equation} f_{\mathcal {N}, H} = \frac{1}{8} \lambda_{HX} \frac{m_{\mathcal N}}{m_X m_H^2} f_{q,H} \end{equation} with \begin{equation} f_{q,H} = \left[ \sum_{q=u,d,s} f_{Tq}^{\mathcal N} + \frac{2}{27} \sum_{q=t,b,c} f_{TG}^{\mathcal N} \right] \end{equation} and $f_{Tq}^{\mathcal N}$ and $f_{TG}^{\mathcal N}$ being the hadronic matrix elements. Based on lattice studies \cite{Young:2009zb}, we take $f_{q,H} = 0.326$ here. Currently, the strongest bound on $\sigma_p^{\rm SI}$ comes from the XENON100 direct search experiment \cite{Aprile:2012nq}, which has $E_{\rm r}^{\rm T} = 6.6 \mathinner{\mathrm{keV}}$.
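For orientation, \eq{t-epsilon-bnd} can be evaluated for $m_X$ of a few hundred GeV (a sketch; the value $\sigma_p^{\rm SI} \approx 2\times 10^{-45}\,{\rm cm^2}$ used here is an assumed representative XENON100-scale limit, and $m_A$ is the xenon nuclear mass):

```python
import math

CM2_TO_GEV2 = 1 / 3.894e-28   # conversion: cm^2 -> GeV^-2

alpha_X, alpha_em = 5e-5, 1/137.0   # dark and EM fine-structure constants
qX, cW2 = 1.0, 0.7769               # dark charge; cos^2 of the Weinberg angle
m_A, mu_p = 122.0, 0.938            # xenon nucleus mass; mu_p ~ m_p for heavy DM (GeV)
E_rT = 6.6e-6                       # 6.6 keV recoil threshold in GeV
sigma_p = 2e-45 * CM2_TO_GEV2       # assumed representative SI limit (illustrative)

t_eps = (math.sqrt(1 / (math.pi * qX**2 * cW2 * alpha_X * alpha_em))
         * (m_A / mu_p) * E_rT * math.sqrt(sigma_p))
# t_eps comes out of order 1e-9, consistent with the epsilon range quoted next
```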
Fig.~\ref{fig:lHX-Xenon-bound} shows how the kinetic mixing (left panel) and the Higgs portal coupling (right panel) are constrained by the experiment (gray region). \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{figs/epsilon-Xenon-structure-bound.eps} \includegraphics[width=0.45\textwidth]{figs/lambdaHX-Xenon-bound-wide.eps} \caption{ Left: XENON100 (2012) bound on the kinetic mixing parameter $\epsilon$ as a function of $m_X$ for $\alpha_X$ given by the bound value of \eq{DMstructure-const}. Right: XENON100 (2012) bound and contours of $\langle \sigma v \rangle_{\rm ann}^X / \langle \sigma v \rangle_{\rm ann}^{\rm th}$ in the ($\lambda_{HX}$, $m_X$) plane. $\langle \sigma v \rangle_{\rm ann}^{\rm th} \simeq 2 \times 10^{-36} {\rm cm}^2$ is the thermally averaged annihilation cross section giving the correct amount of dark matter relic density from thermal freeze-out. The gray region is excluded by the recent result from XENON100 \cite{Aprile:2012nq}. The lower and upper dark gray regions are excluded by the vacuum stability of the Higgs potential and the perturbativity of couplings, respectively. The colored lines correspond to $\langle \sigma v \rangle_{\rm ann}^X / \langle \sigma v \rangle_{\rm ann}^{\rm th} = 1, 2, 5, 10$ from bottom to top.} \label{fig:lHX-Xenon-bound} \end{figure} Also depicted are the thermally averaged annihilation cross sections (colored lines) and the bounds from vacuum stability and perturbativity (dark gray regions). In the left panel of Fig.~\ref{fig:lHX-Xenon-bound}, we notice that, if the small scale anomalies of structure formation are to be explained by the dark matter self-interaction, the XENON100 direct search experiment strongly constrains the kinetic mixing as \begin{equation} \epsilon \lesssim 10^{-9} - 10^{-4} \quad {\rm for} \quad 6 \mathinner{\mathrm{GeV}} \lesssim m_X \lesssim 1 \mathinner{\mathrm{TeV}}.
\end{equation} From the right panel of Fig.~\ref{fig:lHX-Xenon-bound}, we also notice that direct search experiments have already excluded $m_X \lesssim 80 \mathinner{\mathrm{GeV}}$ except for the narrow resonance band around $m_X \simeq m_h/2$. In addition, for $m_X = \mathcal{O}(10^{2-3}) \mathinner{\mathrm{GeV}}$, $\langle \sigma v \rangle_{\rm ann}^X$ can be larger than $\langle \sigma v \rangle_{\rm ann}^{\rm th}$ (the one for the right amount of thermal relic) by about an order of magnitude at most. For $m_h = 125 \mathinner{\mathrm{GeV}}$, if the top pole mass and the strong coupling are $m_t = 173.2 \mathinner{\mathrm{GeV}}$ and $\alpha_s = 0.1184$, respectively, vacuum stability and perturbativity allow $m_X$ only in the range \begin{equation} \label{CDM-mass-window} 200 \mathinner{\mathrm{GeV}} \lesssim m_X \lesssim 600 \mathinner{\mathrm{GeV}} \end{equation} and an annihilation cross section satisfying \begin{equation} 1 \leq \frac{\langle \sigma v \rangle_{\rm ann}^X}{\langle \sigma v \rangle_{\rm ann}^{\rm th}} \lesssim 5. \end{equation} This implies that the thermal relic can be reduced to about $20$ \% of the present relic density at most, and asymmetrically produced non-thermal dark matter can saturate the present relic density. Note that the recent report on the $E_\gamma \sim 130 \mathinner{\mathrm{GeV}}$ line spectrum in the Fermi-LAT $\gamma$-ray data cannot be accommodated in our model, since the branching fraction of the dark matter annihilation to photon(s) is of $\mathcal{O}(10^{-4}-10^{-3})$. \subsection{Indirect Signatures} \label{sec:indirect-dec} The dark interaction and kinetic mixing in our model should be highly suppressed, as described in the previous sections. In addition, since $\alpha_X \lesssim 10^{-4}$ for $m_X \lesssim \mathcal{O}(1) \mathinner{\mathrm{TeV}}$ (see \eq{DMstructure-const}), the Sommerfeld enhancement factor, which is given by \begin{equation} S = \frac{\pi \alpha_X / v}{1 - e^{-\pi \alpha_X /v}}, \end{equation} is $\mathcal{O}(1)$.
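A quick numerical check of this statement (assuming a typical galactic dark matter velocity $v \sim 10^{-3}$ in units of $c$, our illustrative choice):

```python
import math

def sommerfeld(alpha_X, v):
    """Sommerfeld factor S = (pi*alpha_X/v) / (1 - exp(-pi*alpha_X/v))."""
    x = math.pi * alpha_X / v
    return x / (1 - math.exp(-x))

v = 1e-3                  # assumed typical galactic DM velocity (in units of c)
S4 = sommerfeld(1e-4, v)  # at the upper end of the alpha_X bound
S5 = sommerfeld(1e-5, v)
print(f"S(alpha_X=1e-4) = {S4:.3f}, S(alpha_X=1e-5) = {S5:.3f}")
```

Both values stay within a few tens of percent of unity, so no large Sommerfeld enhancement is available.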
Hence it is difficult to expect detectable indirect signatures from the annihilation channels via the dark interaction or the kinetic mixing. The possible indirect detection signatures come from the Higgs portal interactions, \[ X X^\dagger \rightarrow H^* \rightarrow f \bar{f}, V V, \ \ {\rm or} \ \ X X^\dagger \rightarrow H H , \] where $f$ and $V$ are the SM fermions and the weak gauge bosons, respectively. These processes can produce a sizable continuum spectrum of photons, since the annihilation cross section can be larger than the value for thermal dark matter. However, the recent data from the Fermi-LAT $\gamma$-ray search provide upper bounds on various annihilation channels \cite{Huang:2012yf}. In our model, the $W^+ W^-$ channel is dominant. Taking into account the fact that an annihilation is possible only for $X$-$X^\dag$ pairs, the annihilation cross section is expected to be constrained at least as \cite{Huang:2012yf} \begin{equation} \langle \sigma v \rangle_{XX^\dag \to W^+ W^-}^{\rm obs} \lesssim 2 \times 7.4 \times 10^{-26} {\rm cm}^3 / {\rm sec} \end{equation} for the NFW dark matter profile. Hence the total annihilation cross section is bounded from above as \begin{equation} \langle \sigma v \rangle_{\rm ann}^X \lesssim {\rm Br}(XX^\dag \to W^+ W^-)^{-1} \times 2 \times 7.4 \times 10^{-26} {\rm cm}^3 / {\rm sec}. \end{equation} In the allowed region of parameter space, that is, for $m_X = \mathcal{O}(10^{2-3}) \mathinner{\mathrm{GeV}}$, we find ${\rm Br}(XX^\dag \to W^+ W^-) \sim 0.5$, and the allowed ratio of the annihilation cross section to the value for the thermal relic is bounded as \begin{equation} 1 \leq \frac{\langle \sigma v \rangle_{\rm ann}^X}{\langle \sigma v \rangle_{\rm ann}^{\rm th}} \lesssim 5 . \end{equation} This constraint is similar to the one coming from the perturbativity bound shown in Fig.~\ref{fig:lHX-Xenon-bound}. \section{Dark Radiation} \label{sec:dark-rad} The dark photon can contribute to the radiation density of the present universe.
Its contribution is parameterized in terms of the extra relativistic neutrino species as \begin{equation} \Delta N_{\rm eff} = \frac{\rho_{\gamma'}}{\rho_\nu} = \frac{g_{\gamma'}}{(7/8) g_\nu} \left( \frac{T_{\gamma,0}}{T_{\nu,0}} \right)^4 \left( \frac{T_{\gamma',{\rm dec}}}{T_{\gamma,{\rm dec}}} \right)^4 \left( \frac{g_{*S}(T_{\gamma, 0})}{g_{*S}(T_{\gamma, \rm dec})} \right)^{4/3} \end{equation} where $\rho_{\gamma'}$ and $\rho_\nu$ are the present energy densities of the dark photon and a neutrino species, respectively, $g_i$, $T_{i,0}$ and $T_{i, \rm dec}$ are the number of degrees of freedom and the temperatures at present and at decoupling of species $i$, respectively, and $g_{*S}$ is the effective number of SM relativistic degrees of freedom in entropy. Because of the energy injection into photons at the epoch of electron-positron pair annihilation, which took place after neutrino decoupling, photons are slightly hotter than neutrinos at present, resulting in the temperature ratio $T_{\nu,0}/T_{\gamma,0} = \left( 4/11 \right)^{1/3}$. In addition, dark matter is decoupled from the SM thermal bath at a temperature $T \sim 1 \mathinner{\mathrm{GeV}}$, before the QCD phase transition, while still in contact with the dark photon. Hence dark matter and the dark photon are decoupled from the SM thermal bath simultaneously. At decoupling, the temperature of the dark photon is the same as that of the photon. Therefore, we find \begin{equation} \Delta N_{\rm eff} = \frac{g_{\gamma'}}{(7/8) g_\nu} \left( \frac{T_{\gamma,0}}{T_{\nu,0}} \right)^4 \left( \frac{g_{*S}(T_{\gamma, 0})}{ g_{*S}(T_{\gamma, \rm dec})} \right)^{4/3} \simeq 0.08 \end{equation} where we used $g_{\gamma'} = g_\nu = 2$, $g_{*S}(T_{\gamma, 0}) = 3.9$ and $g_{*S}(T_{\gamma, \rm dec})=75.75$. The best fit value from observations is \cite{Hinshaw:2012fq} \begin{equation} \label{Neff-obs} N_{\rm eff}^{\rm obs} = 3.84 \pm 0.40 \ {\rm at} \ 68 \% \ {\rm CL} \end{equation} with the SM expectation $N_{\rm eff}^{\rm SM} = 3.046$.
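The quoted value $\simeq 0.08$ follows directly from the stated degrees of freedom; a minimal numerical check (using $T_{\gamma,0}/T_{\nu,0} = (11/4)^{1/3}$):

```python
# Delta N_eff = g_dp/((7/8) g_nu) * (11/4)^(4/3) * (g_*S(T_0)/g_*S(T_dec))^(4/3)
# with the degrees of freedom given in the text.
g_dp, g_nu = 2.0, 2.0        # dark photon and single-neutrino dof
gS_now, gS_dec = 3.9, 75.75  # entropy dof today and at dark-photon decoupling
dNeff = (g_dp / (0.875 * g_nu)) * (11 / 4)**(4 / 3) * (gS_now / gS_dec)**(4 / 3)

N_SM, N_obs, err = 3.046, 3.84, 0.40  # SM prediction and observed best fit
print(f"Delta N_eff = {dNeff:.3f}; N_eff = {N_SM + dNeff:.2f}")
```

The result is $\Delta N_{\rm eff} \simeq 0.084$, and $N_{\rm eff}^{\rm SM} + \Delta N_{\rm eff} \simeq 3.13$ indeed lies within the 2-$\sigma$ range of \eq{Neff-obs}.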
Therefore, in our model the contribution of the dark photon to the radiation density at present is consistent with observation within about the 2-$\sigma$ error, slightly improving the SM prediction in the right direction. \section{Leptogenesis} Our model allows production of lepton number asymmetries in both the visible and dark sectors via decays of heavy RH neutrinos. If the mass of dark matter $X$ is much larger than the proton mass and asymmetric generation of dark matter is responsible for the present relic density, the asymmetry of $\psi$ should be much smaller than that of the leptons $\ell_i$. However, the contribution to $X$ and $X^\dag$ from the decay of the thermal symmetric component of $\psi$-$\bar{\psi}$ is dominant, as described in section~\ref{structure-form}. The present relic density is then determined by thermal or non-thermal freeze-out of the $X$-$X^\dag$ annihilation, depending on the temperature at which $\psi$ decays. Hence asymmetric generation of dark matter plays no significant role in this circumstance. However, we still have to check whether the right amount of lepton number asymmetry can be achieved in the visible sector. The lepton number and $U(1)_X$ charges are assigned to the relevant fields as shown in Table~\ref{tab:charges}. The global lepton number is then explicitly broken by the Majorana mass terms of the RH neutrinos. The lightest RH Majorana neutrino $N_1$ can decay into both the SM fields and the DM fields: \[ N_1 \rightarrow l_{Li} H^\dagger , \ \ \ \psi X^\dagger .
\] With nonzero complex phases in $Y_\nu$ and $\lambda_i$, the decay can generate the asymmetries $\Delta L$, $\Delta \psi$ and $\Delta X$ as \footnote{For simplicity, we do not consider the case where the initial abundance of $N_1$ is negligible or zero.} \begin{equation} \label{Yasym} Y_{\Delta i} \equiv \frac{n_{\Delta_i}}{s} = \epsilon_i \eta_i Y_1^{\rm eq}(0) \end{equation} where $n_{\Delta_i}$ is the number density of the charge asymmetry associated with the field $i$, $s$ is the entropy density, $\epsilon_i$ and $\eta_i$ are the CP asymmetry and the wash-out factor of the field $i$ from the decay of $N_1$, respectively, and $Y_1^{\rm eq}(0)=135 \zeta(3) / \left( 4 \pi^4 g_* \right)$ with $g_*(T \gg M_1) \sim 100$ being the number of relativistic degrees of freedom at a temperature well above the mass scale of the lightest RH neutrino ($M_1$). For a hierarchical mass spectrum, $M_1 \ll M_{2,3}$, the asymmetries are given by \cite{Falkowski:2011xh} \begin{eqnarray} \epsilon_L &\simeq& \frac{M_1}{8 \pi} \frac{{\rm Im} \left[ \left( 3 Y_\nu^* Y_\nu^T + \lambda^* \lambda \right) \mathbb{M}^{-1} Y_\nu Y_\nu^\dag \right]_{11}}{\left[ 2 Y_\nu Y_\nu^\dag + \lambda \lambda^* \right]_{11}} , \\ \epsilon_\psi &\simeq& \frac{M_1}{8 \pi} \frac{{\rm Im} \left[ \left( Y_\nu^* Y_\nu^T + \lambda^* \lambda \right) \mathbb{M}^{-1} \lambda \lambda^* \right]_{11}}{\left[ 2 Y_\nu Y_\nu^\dag + \lambda \lambda^* \right]_{11}} \end{eqnarray} where $\mathbb{M} = {\rm diag} \left(M_1, M_2, M_3 \right)$, and they are bounded from above as \cite{Falkowski:2011xh,Davidson:2002qv} \begin{equation} \label{gen-DI-bound} \epsilon_L \leq \frac{3 M_1 m_\nu^{\rm max}}{16 \pi v_H^2} \times \left\{ \begin{array}{lcc} 1 & {\rm for} & {\rm Br}_L \gg {\rm Br}_\psi \\ \sqrt{ \lambda_2^2 M_1 / \lambda_1^2 M_2} & {\rm for} & {\rm Br}_L \ll {\rm Br}_\psi \end{array} \right. \end{equation} with $m_\nu^{\rm max}$ being the mass of the heaviest left-handed neutrino.
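For orientation, the size of the bound \eq{gen-DI-bound} in the ${\rm Br}_L \gg {\rm Br}_\psi$ limit can be evaluated numerically (a sketch: the values $m_\nu^{\rm max} \simeq 0.05 \mathinner{\mathrm{eV}}$ and $v_H \simeq 174 \mathinner{\mathrm{GeV}}$ below are our assumed benchmarks):

```python
import math

M1 = 1e9            # lightest RH neutrino mass [GeV]
m_nu_max = 0.05e-9  # assumed heaviest light-neutrino mass, 0.05 eV in GeV
v_H = 174.0         # assumed Higgs vev convention [GeV]

# Davidson-Ibarra-type bound, Br_L >> Br_psi branch of eq. (gen-DI-bound)
eps_L_max = 3 * M1 * m_nu_max / (16 * math.pi * v_H**2)
print(f"eps_L <= {eps_L_max:.1e}")
```

This gives $\epsilon_L \lesssim 10^{-7}$ for $M_1 = 10^9 \mathinner{\mathrm{GeV}}$, the benchmark size used for $\epsilon_L$ later in this section.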
The visible sector lepton number asymmetry, $\Delta L$, is converted into the visible sector baryon number asymmetry via anomalous electroweak processes \cite{Klinkhamer:1984di,Kuzmin:1985mm}. For simplicity, if we assume the visible sector lepton number asymmetry is dominated by a single flavor, the late-time baryon number asymmetry is related to the lepton number asymmetry as \cite{lepto_review} \begin{equation} Y_{\Delta B} = \frac{12}{37} Y_{\Delta L}. \end{equation} Then, the present observations of the baryon number asymmetry can be matched if \begin{eqnarray} \label{YL} Y_{\Delta L} &\simeq& 2.6 \times 10^{-10}. \end{eqnarray} The eventual outcome of leptogenesis via the decay of the heavy RHN can be obtained by solving the Boltzmann equations, which involve the effects of wash-out and of the transfer of asymmetries between the visible and dark sectors. However, if the narrow-width approximation is valid, a much simpler picture emerges. The narrow-width approximation is valid if \begin{equation} \frac{\Gamma_1^2}{M_1 H_1} \ll 1 \end{equation} and $2 \to 2$ scattering between the visible and dark sectors via the heavy neutrino is ineffective, hence the asymmetries in both sectors evolve independently. In this circumstance, the wash-out of the asymmetry comes mainly from the inverse decay. If the wash-out effect is weak, i.e., \begin{equation} {\rm Br}_i \frac{\Gamma_1}{H_1} \ll 1 , \end{equation} the final asymmetry is directly related to the asymmetry from the decay of the RHN. Otherwise, there can be a large reduction of the asymmetry. The decay rate of the RHN is \begin{equation} \Gamma_1 = \frac{1}{16 \pi} \left( Y_{\nu 1}^2 + \lambda_1^2 \right) M_1 , \end{equation} and the branching fractions to the SM and dark sectors are \begin{equation} {\rm Br}_L = \frac{Y_{\nu 1}^2}{Y_{\nu 1}^2 + \lambda_1^2}, \quad {\rm Br}_\psi = \frac{\lambda_1^2}{Y_{\nu 1}^2 + \lambda_1^2}.
\end{equation} Hence \begin{equation} \label{washout-cond} {\rm Br}_i \frac{\Gamma_1}{H_1} = \frac{M_{\rm P}}{16 \pi} \times \left\{ \begin{array}{lcc} \tilde{m}_\nu / v_H^2 & {\rm for} & L \\ \lambda_1^2 / M_1 & {\rm for} & \psi. \end{array} \right. \end{equation} For simplicity, we use the narrow-width approximation from now on. This implies \begin{equation} \label{narrow-wid-approx} Y_{\nu_1}^2+ \lambda_1^2 \ll 16 \pi \left( \frac{M_1}{M_{\rm P}} \right)^{1/2} \simeq 10^{-3} \left( \frac{M_1}{10^9 \mathinner{\mathrm{GeV}}} \right)^{1/2}. \end{equation} Note that \begin{equation} \label{Ynu-mnu} Y_{\nu_1}^2 = \frac{\tilde{m}_\nu M_1}{v_H^2} \simeq 3 \times 10^{-6} \left( \frac{\tilde{m}_\nu}{0.1 \mathinner{\mathrm{eV}}} \right) \left( \frac{M_1}{10^9 \mathinner{\mathrm{GeV}}} \right). \end{equation} So, $Y_{\nu 1}$ cannot saturate the bound of \eq{narrow-wid-approx} for $M_1 \ll 10^{14} \mathinner{\mathrm{GeV}}$, which we assumed in order not to worsen the vacuum instability of the SM Higgs potential. Hence, for $\lambda_1$ saturating the bound, we always have ${\rm Br}_L \ll {\rm Br}_\psi$. Combined with the constraint \eq{lambda1-CDM-const}, the narrow-width approximation can be achieved if \begin{equation} \label{mpsiLbnd} m_\psi \gtrsim 94.3 \mathinner{\mathrm{TeV}} \left[ \left( \frac{0.1 \mathinner{\mathrm{eV}}}{\tilde{m}_\nu} \right) \left( \frac{M_1}{10^9 \mathinner{\mathrm{GeV}}} \right)^{1/2} \left( \frac{\langle \sigma v \rangle_{\rm ann}^{\rm th}}{\langle \sigma v \rangle_{\rm ann}^X} \right)^2 \right]^{1/3} \left( \frac{m_X}{300 \mathinner{\mathrm{GeV}}} \right). \end{equation} Depending on the sizes of $Y_{\nu 1}$ and $\lambda_1$, there are various regimes of wash-out, as analyzed in Ref.~\cite{Falkowski:2011xh}. A full analysis of leptogenesis is beyond the purpose of this paper, so we simply present a working example below.
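The two numerical estimates in \eq{narrow-wid-approx} and \eq{Ynu-mnu} can be reproduced directly (a sketch assuming the reduced Planck mass $M_{\rm P} \simeq 2.4 \times 10^{18} \mathinner{\mathrm{GeV}}$ and the vev convention $v_H \simeq 174 \mathinner{\mathrm{GeV}}$):

```python
import math

M1 = 1e9             # lightest RHN mass [GeV]
M_P = 2.435e18       # assumed reduced Planck mass [GeV]
v_H = 174.0          # assumed Higgs vev convention [GeV]
m_nu_tilde = 0.1e-9  # effective neutrino mass, 0.1 eV in GeV

nwa_bound = 16 * math.pi * math.sqrt(M1 / M_P)  # RHS of eq. (narrow-wid-approx)
Ynu1_sq = m_nu_tilde * M1 / v_H**2              # eq. (Ynu-mnu)
print(f"bound ~ {nwa_bound:.1e}, Y_nu1^2 ~ {Ynu1_sq:.1e}")
```

One finds the bound $\simeq 10^{-3}$ and $Y_{\nu 1}^2 \simeq 3 \times 10^{-6}$, confirming that $Y_{\nu 1}$ alone cannot saturate \eq{narrow-wid-approx} for this $M_1$.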
If $\tilde{m}_\nu \sim 0.1 \mathinner{\mathrm{eV}}$ and $\lambda_1 > Y_{\nu 1}$, both the visible and hidden sectors are in the strong wash-out regime. The wash-out factors are given by \cite{Falkowski:2011xh,lepto_review} \begin{equation} \label{eta-strong} \eta_L \simeq \frac{H_1}{\Gamma_1 {\rm Br}_L}, \quad \eta_\psi \simeq \frac{H_1}{\Gamma_1 {\rm Br}_\psi} \end{equation} with the ratio between the asymmetries, \begin{equation} \label{dLdpsi} \frac{Y_{\Delta L}}{Y_{\Delta \psi}} \simeq \frac{\epsilon_L {\rm Br}_L}{\epsilon_\psi {\rm Br}_\psi} \simeq \frac{\lambda_1 Y_{\nu 2}}{\lambda_2 Y_{\nu 1}}. \end{equation} Since $Y_{\Delta \psi}$ can be smaller or larger than $Y_{\Delta L}$ even though $m_X$ is much larger than the proton mass, we can assume \begin{equation} \label{YLeqYpsi} \frac{\lambda_1 Y_{\nu 2}}{\lambda_2 Y_{\nu 1}} = 1. \end{equation} From \eqss{Yasym}{gen-DI-bound}{eta-strong}, the maximally expected late-time lepton number asymmetry is \begin{equation} \label{YLmax} Y_{\Delta L}^{\rm max} = 1.6 \times 10^{-11} \left(\frac{M_1}{10^9 \mathinner{\mathrm{GeV}}} \right) \left( \frac{\lambda_2^2 M_1}{\lambda_1^2 M_2} \right)^{1/2}. \end{equation} Hence the present baryon number asymmetry corresponding to \eq{YL} can be obtained if \begin{equation} \label{BAU-const} \left(\frac{M_1}{10^9 \mathinner{\mathrm{GeV}}} \right) \left( \frac{Y_{\nu 2}^2 M_1}{Y_{\nu 1}^2 M_2} \right)^{1/2} \simeq 16.3 \end{equation} where we used \eq{YLeqYpsi} on the left-hand side of the above equation. Fig.~\ref{fig:para-space-narrow-wid-approx} shows the parameter space considered in our analysis. In the figure, the dark gray region is excluded by the XENON100 direct dark matter search. The narrow-width approximation is valid in the white region well below the light gray region (e.g., below the dashed gray line). Below the green line, $\lambda_1 < Y_{\nu 1}$.
Although a wider parameter space may be allowed, our analysis of leptogenesis in this section is limited to the white region bounded by the dashed gray and green lines. In this region, the right amounts of baryon number asymmetry and dark matter relic density can be obtained as long as \eq{BAU-const} is satisfied. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{figs/parameter-space-CDM-BAU.eps} \caption{Parameter space for the right amounts of baryon number asymmetry and dark matter relic density at present. We used $m_X = 300 \mathinner{\mathrm{GeV}}$, $\tilde{m}_\nu = 0.1 \mathinner{\mathrm{eV}}$ and $\sqrt{Y_{\nu 2}^2 M_1 / Y_{\nu 1}^2 M_2} = 1$, corresponding to $M_1 = 1.63 \times 10^{10} \mathinner{\mathrm{GeV}}$. The dark gray region is excluded by the XENON100 dark matter direct search experiment. In the light gray region, the narrow-width approximation is not valid. The border of the light gray region and the gray dashed line correspond to $\lambda_1/ \sqrt{16 \pi \sqrt{M_1/M_{\rm P}}} = 1, 1/3$, respectively. Below the green line, $Y_{\nu 1} > \lambda_1$, for which our analysis is not valid. The blue lines correspond to $\langle \sigma v \rangle_{\rm ann}^X / \langle \sigma v \rangle_{\rm ann}^{\rm th} = 1, 5$ from right to left. } \label{fig:para-space-narrow-wid-approx} \end{figure} So far, we have considered only the lepton number asymmetry in the visible sector that comes from the decay of the RH neutrinos. However, there is an additional contribution from the late-time decay of $\psi$, which also carries lepton number. Since the decays of $\psi$ and $\bar{\psi}$ involve a virtual internal line of a Majorana RH neutrino which eventually decays to a SM lepton and Higgs pair, both decays produce an equal amount of same-sign lepton number asymmetry in the visible sector. In addition, there is no dilution of the produced visible sector asymmetry due to inverse decay or transfer to the dark sector, since such processes are kinematically forbidden.
Hence, the contribution from those decays is \begin{equation} \Delta (Y_{\Delta L}) = 2 \epsilon_L Y_\psi(T_{\rm fz}^\psi) \end{equation} where $T_{\rm fz}^\psi$ is the freeze-out temperature of the $\psi$-$\bar{\psi}$ pair annihilation, and we used $Y_\psi(T_{\rm d}) = Y_\psi(T_{\rm fz}^\psi)$ on the right-hand side of the above equation. The freeze-out abundance of $\psi$ is given by \cite{Kolb-Turner} \begin{equation} Y_\psi(T_{\rm fz}^\psi) = \frac{3.79 \left( \sqrt{8 \pi} \right)^{-1} g_*^{1/2} /g_{*S} x_{\rm fz}^\psi}{m_\psi M_{\rm P} \langle \sigma v \rangle_{\rm ann}^\psi} \simeq 0.05 \frac{x_{\rm fz}^\psi}{\alpha_X^2} \frac{m_\psi}{M_{\rm P} } \end{equation} where $x_{\rm fz}^\psi \equiv m_\psi / T_{\rm fz}^\psi$, and we used $g_* = g_{*S} = 100$ and \eq{sv-psi} in the last expression. Combining with \eqs{gen-DI-bound}{YL}, we find \begin{equation} \frac{\Delta (Y_{\Delta L})}{Y_{\Delta L}} \simeq 2 \times 10^7 \frac{x_{\rm fz}^\psi}{\alpha_X^2} \frac{m_\psi}{M_{\rm P} } \frac{M_1 m_\nu^{\rm max}}{v_H^2} \times \left\{ \begin{array}{lcc} 1 & {\rm for} & {\rm Br}_L \gg {\rm Br}_\psi \\ \sqrt{ \lambda_2^2 M_1 / \lambda_1^2 M_2} & {\rm for} & {\rm Br}_L \ll {\rm Br}_\psi. \end{array} \right. \end{equation} As an example, we may take $\epsilon_L \sim 10^{-7}$. Then, for $\alpha_X = 10^{-5}$ and $m_\psi = 10^3 \mathinner{\mathrm{TeV}}$, we find $x_{\rm fz}^\psi \simeq 2.2$, resulting in $\Delta (Y_{\Delta L}) / Y_{\Delta L} \simeq 0.3$. Therefore, depending on $\alpha_X$ and $m_\psi$, the decays of $\psi$ and $\bar{\psi}$ can be the origin of the baryon number asymmetry in the present universe even though the asymmetry between $\psi$ and $\bar{\psi}$ is absent. \section{Higgs Inflation} In order for the leptogenesis described in the previous section to work, the temperature of the early universe should be high enough that the lightest RHN can be in thermal equilibrium before it decouples.
This condition can be achieved if the reheating temperature of the primordial inflation is high enough. An intriguing possibility is the so-called Higgs inflation \cite{Bezrukov:2007ep,Bezrukov:2010jz}, which uses the SM Higgs as the inflaton equipped with a large non-minimal gravitational coupling. As a variant, the Higgs-singlet scalar system has also been considered in the literature \cite{Lerner:2009xg} (see also \cite{Clark:2009dc,Lebedev:2011aq,GarciaBellido:2011de}). Modulo the subtle issues of the unitarity problem \cite{Burgess:2009ea,Barbon:2009ya,Burgess:2010zq,Hertzberg:2010dc,Lerner:2011it}, our model indeed allows inflation along the Higgs direction, since the Higgs potential is stabilized with the help of a coupling to the singlet scalar $X$. The model parameters relevant to inflation are $\lambda_{HX}$, $\lambda_X$ and the Higgs quartic coupling, in addition to the large non-minimal couplings (say $\xi_i$). As free parameters, we can adjust the $\xi_i$s for a given set of quartic couplings while satisfying the requirements on the inflationary observables, under the assumption of the positivity of the quartic couplings (see \cite{Lebedev:2012zw} for example). Hence the physics involved in inflation does not pose any new constraint other than those described in the previous sections if inflation takes place along the Higgs direction, and Higgs inflation along with a singlet scalar can be realized. It turns out that the reheating temperature after Higgs inflation is around $\mathcal{O}(10^{13-14}) \mathinner{\mathrm{GeV}}$ \cite{Bezrukov:2008ut}. It is high enough to populate the lightest RHN in the thermal bath. Therefore, Higgs inflation sets the initial condition for the leptogenesis. \section{Higgs and DM phenomenology at colliders} The Higgs boson in our model could decay into a pair of scalar dark matter particles through the $\lambda_{HX}$ term if kinematically allowed.
However, as shown in Fig.~\ref{fig:lHX-Xenon-bound}, the dark matter direct search allows only $m_X \sim m_h /2$ with $\lambda_{HX} \lesssim 10^{-1}$, even if the vacuum instability problem of the SM Higgs is set aside. When kinematically allowed, the decay rate of the Higgs to dark matter is \begin{equation} \Gamma_{h \to XX^\dag} = \frac{\lambda_{HX}^2}{128 \pi} \frac{v_H^2}{m_h} \left( 1 - \frac{4 m_X^2}{m_h^2} \right)^{1/2} , \end{equation} and the signal strength ($\mu$) of the SM Higgs searches at collider experiments is given by \begin{equation} \mu = 1 - \frac{\Gamma_{h \to X X^\dag}}{\Gamma_h^{\rm tot}} \end{equation} where $\Gamma_h^{\rm tot}$ is the total decay rate of the SM Higgs. Recent results from the ATLAS and CMS collaborations are \cite{ATLAS,CMS} \begin{eqnarray} \mu_{\rm ATLAS} &=& 1.43 \pm 0.21 \quad {\rm for} \ m_h = 125.5 \mathinner{\mathrm{GeV}} \ , \\ \mu_{\rm CMS} &=& 0.8 \pm 0.14 \quad \hspace{0.5em} {\rm for} \ m_h = 125.7 \mathinner{\mathrm{GeV}} \ . \end{eqnarray} Hence the invisible decay of the Higgs to dark matter can be consistent with the CMS data only if $\lambda_{HX} \ll 0.1$ or $m_X$ is very close to $m_h/2$. On the other hand, if vacuum stability is imposed, such a small $\lambda_{HX}$ is excluded and only $m_X = \mathcal{O}(10^{2-3}) \mathinner{\mathrm{GeV}}$ is allowed. In this case, the production and decay rates of the Higgs boson in our model are exactly the same as those of the SM Higgs boson, since $H\rightarrow X X^\dagger$ is kinematically forbidden. Therefore it is difficult to discriminate our model from the SM in such a case. In other words, if collider experiments show any non-SM signature, our model is excluded. It may be possible to search for dark matter pair production at the ILC through $ e^+ e^- \rightarrow Z h^* \rightarrow Z ( X X^\dagger )$, or at the LHC through $WW$ fusion, \[ q \bar{q} \rightarrow q \bar{q} h^* \rightarrow q \bar{q} X X^\dagger \] with extra emissions of a gluon or $\gamma$ from the initial or final quark jets.
The detailed study of this channel is beyond the scope of this paper and will be addressed elsewhere. \section{Variations of the model} Instead of the model analyzed in this paper, one can consider a simpler dark sector which contains either $X$ or $\psi$ only, in addition to $\hat{B}'_\mu$. In these cases, renormalizable RH-neutrino portal interactions are not possible, and leptogenesis from the seesaw sector has nothing to do with the dark matter. If $\psi$ is absent and $X, X^\dagger$ are the dark matter, the only change relative to our present model is that the current dark matter relic density should come from the thermal freeze-out of the $X$-$X^\dag$ annihilation via the $\lambda_{HX}$ interaction. Hence the annihilation cross section is fixed to \eq{sv-th} as usual. If $X$ is absent and $\psi$ is the dark matter, one has to introduce a real SM-singlet scalar (say $S$) connecting the dark sector to the SM sector, as in the model discussed in Ref.~\cite{SFDM1}, so that the thermal freeze-out of the $\psi$-$\bar{\psi}$ annihilation via the newly introduced interactions of $S$ provides the right amount of dark matter at present. Otherwise $\psi$ and $\bar{\psi}$ would be overproduced due to the smallness of $\alpha_X$. The physics of this model is nearly the same as that discussed in Refs.~\cite{SFDM1,Baek:2012uj}, modulo the effects of the dark interaction on structure formation and direct DM searches, as well as the dark radiation from the massless hidden photon. The spin-independent cross section of the $\psi$ (or $\bar{\psi}$)-nucleon scattering via photon exchange is the same as the one in the case of $X$-$X^\dag$ dark matter, so the constraint on the kinetic mixing ($\epsilon$) shown in the left panel of Fig.~\ref{fig:lHX-Xenon-bound} is equally applicable to this case. Higgs inflation is still possible in these variations, as discussed in Ref.~\cite{Lebedev:2012zw}, since there are extra scalar fields, $X$ or $S$, in either case.
Finally, one could consider the case where the $U(1)_X$ dark symmetry is spontaneously broken by $\langle \phi \rangle \neq 0$. In this case, there is a singlet scalar from $\phi$ after $U(1)_X$ breaking, which will mix with the SM Higgs boson. Therefore there are two Higgs-like neutral scalar bosons after all, and both of them have signal strengths universally suppressed from the SM value ``1''. If $\psi$ is the CDM, one needs a singlet scalar $S$ as a messenger, and this will mix with the SM Higgs boson (and the remnant from $\phi$). In Table~\ref{default}, we summarize the dark field contents, messengers, the particle identity of the dark matter (DM), the amount of dark radiation (DR) and the signal strengths of the Higgs-like neutral scalar bosons (including the number of them) in various scenarios. In all cases, there are additional scalar bosons (either $X$ or $\phi$ or both) which make Higgs inflation still viable for $m_H = 125$ GeV. The Higgs signal strength is smaller than ``1'' except when the scalar is the CDM with unbroken $U(1)_X$ dark symmetry. In particular, $\mu_{i=1,2,(3)} < 1$ for fermion CDM, whether $U(1)_X$ is broken or not. Our conclusions on the Higgs signal strength are based on the assumption that there is only one Higgs doublet in the model. If we include additional Higgs doublets or a triplet Higgs, the Higgs portal would have a richer structure, and the signal strength will change completely, varying with the Higgs decay channels. It should also be possible to have a signal strength for the $H\rightarrow \gamma\gamma$ channel greater than ``1'' without difficulty.
\begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Dark sector fields & $U(1)_X$ & Messenger & DM & Extra DR & $\mu_i$ \\ \hline $\hat{B}'_\mu , X , \psi$ & Unbroken & $H^\dagger H , \hat{B}'_{\mu\nu} \hat{B}^{\mu\nu} , N_R$ & $X$ & $\sim 0.08$ & $1~(i=1)$ \\ $\hat{B}'_\mu , X$ & Unbroken & $H^\dagger H , \hat{B}'_{\mu\nu} \hat{B}^{\mu\nu}$ & $X$ & $\sim 0.08$ & $1 ~(i=1)$ \\ $\hat{B}'_\mu , \psi$ & Unbroken & $H^\dagger H , \hat{B}'_{\mu\nu} \hat{B}^{\mu\nu} , S$ & $\psi$ & $\sim 0.08$ & $< 1~(i=1,2)$ \\ \hline $\hat{B}'_\mu , X , \psi , \phi$ & Broken & $H^\dagger H , \hat{B}'_{\mu\nu} \hat{B}^{\mu\nu} , N_R$ & $X$ or $\psi$ & $\sim 0$ & $< 1~(i=1,2)$ \\ $\hat{B}'_\mu , X , \phi$ & Broken & $H^\dagger H , \hat{B}'_{\mu\nu} \hat{B}^{\mu\nu}$ & $X$ & $\sim 0$ & $< 1 ~(i=1,2)$ \\ $\hat{B}'_\mu , \psi$ & Broken & $H^\dagger H , \hat{B}'_{\mu\nu} \hat{B}^{\mu\nu} , S$ & $\psi$ & $\sim 0$ & $~~< 1~(i=1,2,3)$ \\ \hline \end{tabular} \end{center} \caption{Dark fields in the hidden sector, messengers, dark matter (DM), the amount of dark radiation (DR), and the signal strengths of the Higgs-like neutral scalar bosons ($\mu_i$) for the unbroken or spontaneously broken (by $\langle \phi \rangle \neq 0$) $U(1)_X$ models considered in this work. The number of Higgs-like neutral scalar bosons could be 1, 2 or 3, depending on the scenario. } \label{default} \end{table}% \section{Discussions of some miscellaneous issues} \subsection{Comparison with other models} Leptogenesis in our model is very similar to that in Ref.~\cite{Falkowski:2011xh}, in that the RH neutrino decay is the origin of the BAU and CDM observed today. However, there are a few important differences between our model and that of Ref.~\cite{Falkowski:2011xh}: \begin{itemize} \item Our lagrangian is based on the local gauge symmetry $G_X = U(1)_X$, which guarantees that the CDM is absolutely stable.
Assuming all the SM singlet fields are portals to the hidden sector DM, we are naturally led to the present model without any other {\it ad hoc} assumptions. \item If stable, either $X$ or $\psi$ could be the dark matter, but the smallness of the dark interaction required by observations of large scale structure does not allow the fermion dark matter $\psi$ in our model. Hence $\psi$ should be able to decay. The interaction with the SM via the RHN portal allows the decay of $\psi$ if $X$ is lighter than $\psi$. Due to this process, dark matter is composed of $X$ and $X^\dag$, and eventually becomes symmetric irrespective of its origin (symmetric thermal or asymmetric non-thermal). \item Because of the smallness of the dark interaction, the freeze-out abundance of $\psi$ before its decay is quite large and becomes the main contribution to the $X$-$X^\dag$ dark matter abundance. In other words, asymmetric production of dark matter does not play any significant role in our scenario, and the eventual relic density of dark matter is determined by thermal or non-thermal freeze-out of the $X$-$X^\dag$ pair annihilation through the Higgs portal $\lambda_{HX}$ term. \item The decays of $\psi$ and $\bar{\psi}$ via the RH neutrino portal contribute to the visible sector lepton number asymmetry even if there is no asymmetry between $\psi$ and $\bar{\psi}$. This contribution can be the origin of the present baryon number asymmetry if the coupling of the dark $U(1)_X$ is small ($\alpha_X \lesssim 10^{-5}$) and $\psi$ is heavy enough ($m_\psi \gtrsim 10^3 \mathinner{\mathrm{TeV}}$). \item Higgs inflation can be realized thanks to the existence of the portal interaction, which is necessary for efficient pair annihilation of $X$ and $X^\dagger$ and for vacuum stability. The sufficiently high reheating temperature after inflation can set a proper initial condition for leptogenesis to work. \item Since the dark symmetry is an unbroken gauge symmetry in our model, there is always dark radiation from the massless hidden photon.
This conclusion can be evaded if the dark symmetry is an unbroken but confining symmetry like the color gauge symmetry in QCD \cite{ko_hidden_qcd1,ko_hidden_qcd2,ko_hidden_qcd_proceeding1, ko_hidden_qcd_proceeding2}. In that case, the CDM would be a composite hadron made of hidden sector quarks (similar to baryons or mesons in ordinary QCD without the electroweak interaction) and would be absolutely stable. \item The dark matter self-interaction caused by the massless dark photon can explain the small scale structure problems appearing in the usual collisionless CDM scenario. \end{itemize} \subsection{Effects of nonrenormalizable operators} Since our model has no Landau pole or vacuum instability up to the Planck scale ($M_{\rm Pl}$), this model could be an ultimate theory up to the Planck scale if we ignore the fine-tuning problems related to the Higgs mass$^2$ or the cosmological constant. Still, there may be higher dimensional nonrenormalizable operators suppressed by some positive powers of $1/M_{\rm P}$ originating from quantum gravity effects. Since the dark symmetry $U(1)_X$ is not broken, dark matter would be absolutely stable even in the presence of these higher dimensional operators \footnote{In case the local dark symmetry is spontaneously broken, dark matter candidates may decay via nonrenormalizable operators. This will be discussed in detail in a separate publication~\cite{progress_broken}.}. In this section, let us list some dim-5 or dim-6 operators that are suppressed by one or two powers of $1/M_{\rm P}$ and contain either the RH neutrino or the dark fields \footnote{One can refer to Refs.~\cite{Buchmuller:1985jz,Grzadkowski:2010es} for dim-5 and dim-6 operators in the SM case.}, and discuss their effects on the results obtained in the previous sections. We still impose invariance of the nonrenormalizable operators under the local gauge symmetry $SU(3)_C \times SU(2)_L \times U(1)_Y \times U(1)_X$.
Dim-5 and dim-6 operators with $\overline{\psi} \psi$ will contribute to the thermalization of $\psi$ and $\bar{\psi}$: \begin{eqnarray} {\rm dim-5}: & & \frac{1}{M_{\rm P}} \overline{\psi} \psi H^\dagger H \ , \ \ \frac{1}{M_{\rm P}} \overline{\psi} \psi X^\dagger X \ , \ \ \frac{1}{M_{\rm P}}\overline{\psi} \sigma^{\mu\nu} \psi B_{\mu\nu} \ , \ \ \frac{1}{M_{\rm P}}\overline{\psi} \sigma^{\mu\nu} \psi B'_{\mu\nu} \\ {\rm dim-6}: & & \frac{1}{M_{\rm P}^2} \overline{\psi} \gamma_\mu \psi \overline{f} \gamma^\mu f \ , \ \ \frac{i}{M_{\rm P}^2} \overline{\psi} \gamma_\mu \psi \left[ H^\dagger D^\mu H - ( D^\mu H^\dagger ) H \right] \ , \ \ etc. \end{eqnarray} where $f$ denotes a SM chiral fermion field. The first two dim-5 operators above contribute to $\psi \overline{\psi} \rightarrow H H^\dagger , X X^\dagger$, whose cross section is estimated as \[ \sigma \sim \frac{1}{ 4\pi M_{\rm P}^2} \] in the limit $m_\psi \gg m_H, m_X$, which is legitimate in our model. This is far less than the cross section into a pair of massless dark photons, Eq.~(3.4), derived in Sec.~3.1, even if the dark gauge coupling is very small. The contributions of the dim-6 and dim-7 operators will be even smaller. Dim-6 operators including $X^\dagger X$ will contribute to the thermalization of $X$ and $X^\dagger$ \footnote{There are no gauge invariant dim-5 operators involving $X^\dagger X$. }: \begin{eqnarray} {\rm dim-6}: & & \frac{i}{M_{\rm P}^2} \left[ X^\dagger ( D_\mu X ) - (D_\mu X^\dagger ) X \right] \overline{f} \gamma^\mu f \ , \ \ \frac{1}{M_{\rm P}^2} X^\dagger X {\cal O}_{\rm SM}^{(4)} \ , \\ && \frac{1}{M_{\rm P}^2} \left[ X^\dagger ( D_\mu X ) - (D_\mu X^\dagger ) X \right] \left[ H^\dagger D^\mu H - ( D^\mu H^\dagger ) H \right] \ , \ \ etc. \end{eqnarray} where ${\cal O}_{\rm SM}^{(4)} $ represents the dim-4 gauge invariant operators appearing in the SM Lagrangian.
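To get a quantitative feel for this suppression, one can compare the dim-5 rate with the pair annihilation into dark photons. The numbers below are illustrative benchmarks ($M_{\rm P}=1.22\times 10^{19}$ GeV, $\alpha_X=10^{-5}$, $m_\psi=10^3$ TeV), and the dark-photon cross section is taken to scale as $\pi\alpha_X^2/m_\psi^2$, a generic estimate rather than the precise form of Eq.~(3.4):

```python
import math

# Illustrative benchmark values (assumptions, not outputs of the model):
M_P = 1.22e19       # GeV, Planck mass
alpha_X = 1e-5      # dark fine structure constant
m_psi = 1.0e6       # GeV, i.e. 10^3 TeV

sigma_dim5 = 1.0 / (4 * math.pi * M_P ** 2)       # psi psibar -> H H^dag, X X^dag
sigma_dark = math.pi * alpha_X ** 2 / m_psi ** 2  # psi psibar -> dark photons (assumed scaling)
print(sigma_dim5 / sigma_dark)
```

Even for a dark coupling as small as $\alpha_X\sim 10^{-5}$ and a very heavy $\psi$, the Planck-suppressed channel is weaker by some seventeen orders of magnitude.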
The cross section for annihilation of $X$ and $X^\dagger$ from the dim-6 operators is estimated as \[ \sigma (X X^\dagger \rightarrow {\rm SM ~particles}) \sim \frac{m_X^2}{4\pi M_{\rm P}^4} \] which is totally negligible compared with the annihilation through the renormalizable Higgs portal interaction involving $\lambda_{HX}$. We do not show the $1/M_{\rm P}^2$-suppressed dim-6 operators for $\psi$ decay into $X l_{Li} H$, since their effect is far subdominant to that of the dim-6 operators generated by the virtual RH neutrinos which we already studied in Sec.~5. \section{Conclusion} In this paper, we showed that, if the dark matter stability is guaranteed by an unbroken local dark symmetry with nonzero dark charge, renormalizable portal interactions of the RH neutrinos and the SM Higgs ($H^\dagger H$) fix the minimal field contents of the dark sector (a scalar $X$ and a fermion $\psi$ as well as a massless dark photon $\hat{B}'_\mu$) and allow very rich physics without conflicting with the various phenomenological, astrophysical and cosmological constraints coming from the existence of a massless dark photon. The unbroken local dark symmetry is very strongly constrained by small and large scale structure formation, requiring the dark fine structure constant to be \begin{equation} \alpha_X \lesssim 10^{-5} - 10^{-4} \end{equation} for $\mathcal{O}(10^{2-3}) \mathinner{\mathrm{GeV}}$ scale mass of dark matter. On the other hand, the dark interaction can be the solution to the core/cusp and ``too big to fail'' problems of small scale structure in collisionless CDM scenarios. The smallness of the dark interaction could cause a danger of dark matter over-abundance. In our model, this potential danger is removed by the RH neutrino portal, which allows $\psi$ to decay to $X$, and the Higgs portal, which allows efficient dilution of the stable dark matter $X$ to obtain the right amount of dark matter relic density.
All these nice features are consequences of local dark gauge symmetry and the assumption that all the SM singlet operators are portals to the dark sector. The RH neutrino portal also allows production of a dark sector asymmetry, as leptogenesis in the type-I seesaw model does in the visible sector, but the dark sector asymmetry eventually disappears as $\psi$ decays and does not play any significant role. However, one should note that in our scenario the eventual relic density of dark matter can be determined by thermal or non-thermal freeze-out, depending on the temperature at which $\psi$ decays to $X$. This allows a wider range of dark matter annihilation cross sections. Additionally, depending on $\alpha_X$ and the mass of $\psi$, the decays of $\psi$ and $\bar{\psi}$ can be the origin of the present baryon number asymmetry irrespective of the possible asymmetry between $\psi$ and $\bar{\psi}$. Fig.~\ref{fig:history} is a sketch of the brief thermal history of our leptogenesis, including the production of dark matter relics. The Higgs portal interaction to the dark scalar $X$ cures the instability problem of the SM Higgs potential through loop effects. So, by introducing large non-minimal gravitational couplings to scalar fields, it becomes possible to realize Higgs inflation, whose high enough reheating temperature sets the initial condition for leptogenesis in our model. The portal interactions also make the dark sector accessible to direct and/or indirect searches, which are consistent with the current bounds from various terrestrial experiments and observations in the sky. It turned out that the contribution of the dark photon to the radiation density at present is about 8\% of the energy density of a massless neutrino. It is rather small, and still consistent with the present observation within the $2$-$\sigma$ error. The smallness originates from the fact that the dark photon couples only to the dark sector fields and dark matter is decoupled from SM particles before the QCD phase transition.
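A figure of this size can be reproduced with a standard entropy-conservation estimate. The sketch below is an illustrative reconstruction, not the computation actually performed here: the dark photon is assumed to decouple when the SM entropy degrees of freedom are $g_{*s}\simeq 75$ (a benchmark value for temperatures above the QCD phase transition), after which the SM plasma heats relative to the dark photon:

```python
# Illustrative reconstruction of the dark-photon Delta N_eff (assumed inputs).
g_dec = 75.0      # assumed g_*s at dark photon decoupling (hypothetical benchmark)
g_nu = 10.75      # g_*s at neutrino decoupling
# entropy conservation: T_darkphoton / T_nu = (g_nu / g_dec)^(1/3);
# two photon polarizations vs (7/8)*2 fermionic dof per neutrino species:
dN_eff = (8.0 / 7.0) * (g_nu / g_dec) ** (4.0 / 3.0)
N_eff = 3.046 + dN_eff  # SM value plus the dark photon contribution
print(dN_eff, N_eff)
```

With this benchmark one finds $\Delta N_{\rm eff}\approx 0.086$, i.e. $N_{\rm eff}\approx 3.13$, consistent with the prediction quoted in the Note Added.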
Our model is a sort of minimal model which has inflation, leptogenesis, absolutely stable dark matter and dark radiation, as well as the seesaw mechanism for neutrino masses and mixings, modulo some variations described in Section~8. The basic principles for the model building were that local gauge symmetry works in the dark sector too, and the assumption that the SM singlet operators are portals to the dark sector. From these simple principles, we could derive very rich physics results that are fully consistent with all the observations made so far. It is interesting to note that the Higgs property measurements will strongly constrain our model, since we predict that the Higgs signal strength should be equal to or less than ``1'' for all the decay channels of the Higgs boson. Our model could be extended in various directions. First of all, one can consider the case where the dark gauge symmetry is spontaneously broken. Depending on whether the local dark symmetry is broken fully or partially (in the case of a non-Abelian dark gauge symmetry), one would have interesting phenomenological consequences in both the dark matter and Higgs sectors. Secondly, it would be straightforward to extend our model to a supersymmetric one. The qualitative features of SUSY models will be similar to those described here. However, there would be two types of CDM: one in the MSSM sector, and the other in the hidden sector. One could therefore consider the case where $R$-parity in the MSSM sector is broken (via either lepton number or baryon number violation). Then the $G_X$-charged hidden sector dark matter will make up the dark matter in the universe. These issues will be addressed in a future publication.
\begin{figure}[h] \centering \includegraphics[width=16cm]{figs/thermal-history-2.eps} \caption{Thermal history of the universe in our model} \label{fig:history} \end{figure} \section*{Note Added} After we submitted this paper to the arXiv, the Planck Collaboration released a new result on the effective number of neutrino species, $N_{\rm eff} = 3.30 \pm 0.27$ at 68\% CL \cite{planck2013}, which is significantly lower than other previous results obtained by WMAP-9, SPT and ACT. It is amusing to notice that the new Planck data is in perfect agreement with our prediction $(N_{\rm eff} = 3.130)$ in the model with unbroken $U(1)_X$, as well as with the other cases summarized in Table~2 in Sec.~8. \section*{Acknowledgement} We are grateful to Takeo Inami, Kenji Kadota, Hyunmin Lee and J.C. Park for useful discussions and comments. This work is supported in part by NRF Research Grant 2012R1A2A1A01006053 (PK and SB), and by SRC program of NRF Grant No. 2009-0083526 funded by the Korea government (MSIP) (PK).
\section{Introduction} Due to their direct relevance to the quantum Hall effect \cite{shapere}, their possible connection with high temperature superconductivity \cite{wilczek}, as well as their intellectual challenges, anyon systems have attracted enormous attention in the past several years. Almost exclusively, the formulation of anyons is based on the idea of the Chern-Simons construction. In this formulation anyons are locally so-called charge-flux tube composites. Physically, the essence involved is nothing but the famous Aharonov-Bohm effect \cite{aharonov}. It is the interference of the gauge field (which would be a pure gauge when totally isolated) associated with each particle that modifies the statistics of the particle such that it interpolates between the boson and fermion limits. While it is conceptually more satisfactory, the involvement of the Chern-Simons term, because of its technical complexity, makes the understanding of anyon properties rather difficult, especially in many-anyon systems. In this paper we study a system which in many ways captures the main features of anyons, while avoiding the inclusion of the Chern-Simons term. In a topologically simple configuration space it would be necessary to include the Chern-Simons term to implement the fractional statistics. However, when the configuration space becomes topologically non-trivial, there is no need to add terms other than the minimal coupling between the gauge and the matter fields in order to achieve the desired interference. Thus, with the slight complication of introducing a topologically non-trivial configuration space, the procedure of setting up a system which displays anyon-like behavior is greatly simplified, and one can study typical effects such as the periodic but non-analytic dependence of thermodynamical observables on the statistics parameter.
It is this technical simplification which allowed us to carry out our study of the thermodynamics of such a system rather explicitly, using conventional methods. We hope that our results will shed some light on the thermodynamics of Chern-Simons anyon systems. The specific model we consider is the Gross-Neveu model \cite{gross} with an external constant gauge potential defined by \begin{equation} {\cal L}=\bar{\psi}[i\partial_\mu-a_\mu(\theta)]\gamma^\mu\psi +{g^2\over 2N}(\bar{\psi}\psi)^2 \, , \label{model} \end{equation} on a ring (1+1 dimensions) and on a cylinder (2+1 dimensions) with the $x$-direction compactified (of length $L$), to the leading order in $1/N$. The $\psi$-field is a two-component Dirac spinor and implicitly has $N$ components in the internal space. The external gauge field $a_\mu(\theta)$ is generated by a thin solenoid of magnetic flux $2\pi\theta$ which coincides with the axis of the ring or cylinder, that is, $a_1(\theta)=2\pi\theta/L$ and all other components vanish. Since the Lagrangian is fermionic when $\theta=0$, the anti-periodic boundary condition in the $x$-direction, $\psi(x+L)=-\psi(x)$, is naturally imposed. Without loss of generality we only need to consider $\theta\in [0,1)$, for the integral part of $\theta$ can be safely gauge-transformed away. The periodic dependence of physical quantities on $\theta$ is obvious. The geometric setting here is such that the particles move in a multi-connected configuration space. Due to the presence of the magnetic flux, we anticipate that the Aharonov-Bohm effect will have a profound impact on the statistical mechanics of the system. It is possible to formally remove the gauge field in Eq.(\ref{model}) via a gauge transformation $\psi\rightarrow \exp(i2\pi\theta x/L)\psi\equiv\psi'$. However, the expense of eliminating the gauge field is that the new field $\psi'$ becomes multi-valued (when $\theta\neq 0$ or $1/2$).
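That only the fractional part of $\theta$ is physical can be illustrated at the level of the free single-particle momenta: since $\tilde{k}_n = \pi(2n+1)/L + 2\pi\theta/L$, an integer shift of $\theta$ merely relabels $n\to n+1$, while $\theta\to-\theta$ reflects the spectrum, $\tilde{k}_n\to-\tilde{k}_{-n-1}$. A quick numerical check (illustrative values $L=1$ and $\theta=0.3$, with a finite window of modes):

```python
import numpy as np

L, theta = 1.0, 0.3
n = np.arange(-200, 200)

def k_tilde(th):
    # twisted momenta on the ring: k_n + 2*pi*th/L with k_n = pi(2n+1)/L
    return np.pi * (2 * n + 1) / L + 2 * np.pi * th / L

# an integer shift of theta only relabels the modes ...
s0, s1 = np.sort(k_tilde(theta)), np.sort(k_tilde(theta + 1.0))
print(np.allclose(s1[:-1], s0[1:]))
# ... and theta -> -theta leaves the set of |k| (hence the free energies) unchanged
print(np.allclose(np.sort(np.abs(k_tilde(-theta))), np.sort(np.abs(k_tilde(theta)))))
```

Both checks hold exactly, up to the two boundary modes of the finite window, which the slicing above accounts for.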
The magnetic flux can be regarded as an external means of varying the boundary condition. It should be emphasized that, as in the original Aharonov-Bohm experiment, the magnetic field on the ring or cylinder is zero. Therefore, the effect of $\theta\neq 0$ is purely quantum mechanical in nature, taking advantage of $L$ being finite. In the limit $L\rightarrow\infty$ all the $\theta$-dependence of physical observables should disappear, since then the effect of the boundary condition becomes irrelevant. It is also easy to see that single particle states have a smooth $\theta$-dependence and that the non-analyticity of physical observables arises only from many-body effects. Before we delve into the details, let us comment on the choice of the specific microscopic Lagrangian, Eq.(\ref{model}). In our opinion, the precise underlying dynamics is not too crucial, as long as the qualitative features, such as the existence of a phase transition, are present. Therefore, the choice is simply dictated by convenience. However, it should be pointed out that Gross-Neveu type Lagrangians do have some relevance to certain condensed matter systems or models. For example, the Gross-Neveu model in 1+1 dimensions is closely related to the continuum limit of the one dimensional Hubbard model at half-filling and the quantum spin-one-half antiferromagnetic Heisenberg chain \cite{hubbard}, while in 2+1 dimensions it models the continuum limit of the so-called chiral spin liquid \cite{wen}. Although we do not directly address questions on how to make observations in some specifically designed experiments, one should always keep these issues in mind. The Lagrangian in Eq.(\ref{model}) possesses discrete symmetries.
In the 1+1 dimensional case, the symmetry is the chiral symmetry \begin{equation} \psi(x,t)\rightarrow e^{i\gamma_5\pi/2}\psi(x,t) \, , \label{chiral} \end{equation} whereas in the 2+1 dimensional case, the relevant symmetry is parity \begin{equation} \psi(x,y,t)\rightarrow i\gamma_2\psi(x,-y,t) \, . \label{parity} \end{equation} Since the operator $\bar{\psi}\psi$ changes sign under either Eq.(\ref{chiral}) or Eq.(\ref{parity}), the order parameters are the ensemble averages $\langle\bar{\psi}\psi\rangle$. In other words, when a mass term is generated, the discrete symmetry, the chiral symmetry in 1+1 dimensions or parity in 2+1 dimensions, is dynamically broken. Because of the discrete nature of the symmetry involved, there will be no Goldstone mode associated with the dynamical symmetry breaking in our model. For convenience, we introduce a scalar auxiliary field $\sigma$, which interpolates the composite operator $-g^2\bar{\psi}\psi$. Then the Lagrangian in Eq.(\ref{model}) is equivalent to \begin{equation} {\cal L}=\bar{\psi}[i\partial_\mu-a_\mu(\theta)]\gamma^\mu\psi -{N\over 2g^2}\sigma^2-\sigma\bar{\psi}\psi \, . \label{model1} \end{equation} Note that the $\sigma$-field obeys periodic boundary conditions for any value of $\theta$, though it is odd under the transformations Eq.(\ref{chiral}) or Eq.(\ref{parity}). The rest of the paper is organized as follows. In the next section we present the result in 1+1 dimensions, along with its relevance to L\"{u}scher's small-volume expansion in asymptotically free field theories with phase transitions. In section III we present the result in 2+1 dimensions, in a manner suited for possible verification in condensed matter experiments. To further reveal the effect of $\theta$ on the phase transition, we compute in section IV all the critical exponents and verify that the phase transitions remain in the universality class of mean-field type to leading order in $1/N$. 
The anyonic character of the quasi-particle is demonstrated in section V via an explicit calculation of the relevant propagators and commutators. Finally, we summarize and point out possible generalizations in section VI. \section{Results in 1+1 dimensions} To leading order in $1/N$ the effective potential for the $\sigma$-field is given by the standard one-loop formula, in dimensional regularization, \begin{equation} V_{\rm eff}[\sigma,\theta]=N\biggl[ {\mu^{2\epsilon}\sigma^2\over 2g_B^2}- {1\over L}\sum_{n=-\infty}^\infty \mu^{2\epsilon}\int {d^{1-2\epsilon}\omega \over (2\pi)^{1-2\epsilon}} \ln(\omega^2+\tilde{k}_n^2+\sigma^2)\biggr] \, , \label{veff2} \end{equation} where $\tilde{k}_n=k_n+2\pi\theta/L$ and $k_n=\pi(2n+1)/L$. Since an additive constant in $V_{\rm eff}[\sigma,\theta]$ is irrelevant, we can first calculate \begin{equation} {\partial V_{\rm eff}[\sigma,\theta]\over\partial\sigma} =N\sigma\biggl[{\mu^{2\epsilon}\over g_B^2} -{1\over L}\sum_{n=-\infty}^\infty \mu^{2\epsilon}\int {d^{1-2\epsilon}\omega \over (2\pi)^{1-2\epsilon}} {1\over\omega^2+\tilde{k}_n^2+\sigma^2} \biggr] \, . \end{equation} After some straightforward manipulation we obtain explicitly \begin{equation} {\partial V_{\rm eff}[\sigma,\theta]\over\partial\sigma} ={N\sigma\over\pi}\biggl[{\pi\mu^{2\epsilon}\over g_B^2} -{1\over 2}\Bigl({1\over\epsilon}+\gamma_E-\ln{4\pi} +\ln(\mu^2L^2)\Bigr)-C_0(\theta)-\sum_{\nu=1}^\infty C_{2\nu}(\theta) (\sigma L)^{2\nu}\biggr] \, , \end{equation} where \begin{equation} C_0(\theta)={1\over 2}\sum_{\nu=1}^\infty \zeta(1+\nu) \Bigl[ (1/2+\theta)^\nu+(1/2-\theta)^\nu\Bigr] ={1\over 2}\sum_{n=1}^\infty\Bigl[ {1/2-\theta\over n(n-1/2+\theta)} +{1/2+\theta\over n(n-1/2-\theta)}\Bigr]\, , \label{c0} \end{equation} and \begin{equation} C_{2\nu}(\theta)={(-1)^\nu\over 2} \left(\begin{array}{c} 2\nu \\ \nu \end{array}\right) \Bigl({1\over 4\pi}\Bigr)^{2\nu} \Bigl[\zeta(2\nu+1,1/2+\theta)+\zeta(2\nu+1,1/2-\theta)\Bigr]\, . 
\end{equation} In the above equation $\gamma_E$ is the Euler-Mascheroni constant and $\zeta(\alpha,z)=\sum_{n=0}^\infty (n+z)^{-\alpha}$ is the generalized Riemann Zeta function. By further choosing the modified minimal subtraction scheme, the final result for the renormalized effective potential is then given by \begin{equation} {\partial V_{\rm eff}[\sigma,\theta]\over\partial\sigma} ={N\sigma\over\pi}\biggl[-\ln(ML)-\gamma_E+\ln{4\pi} -C_0(\theta)-\sum_{\nu=1}^\infty C_{2\nu}(\theta) (\sigma L)^{2\nu}\biggr] \, , \label{dveff2} \end{equation} where $M$ is a shorthand notation for the standard $\Lambda_{\overline{\rm MS}}$. Since the convergence radius of the series in $\sigma L$ in the above equation is $2\pi$, it is sometimes necessary to use the integral representation of the series \begin{equation} C_0(\theta)+\sum_{\nu=1}^\infty C_{2\nu}(\theta) z^{2\nu}= \ln{4\pi}-\gamma_E-\ln{z}+\sum_{n=1}^\infty (-1)^n \cos(2\pi n\theta) \int_0^\infty{dt\over t}\exp\bigl(-z^2t-{n^2\over 4t}\bigr)\, , \end{equation} in Eq.(\ref{dveff2}). The mass gap is solved from $\partial V_{\rm eff}(\sigma=m)/\partial\sigma=0$, yielding \begin{equation} m(L,\theta)=\left\{\begin{array}{ll} M\exp\Bigl[\sum_{n=1}^\infty (-1)^n \cos(2\pi n\theta) \int_0^\infty {dt\over t} \exp(-z^2 t-n^2/4t)\Bigr] \, , &\text{when $L>L_c(\theta)$,}\\ 0\, , &\text{when $L<L_c(\theta)$,}\end{array} \right. \label{gap2} \end{equation} where $z\equiv m(L,\theta)L$ and $L_c(\theta)$ is the critical length of the ring defined by \begin{equation} L_c(\theta)={\pi\over M} \exp\Bigl[-\gamma_E+\ln{4}-C_0(\theta)\Bigr] \, . \label{lc2} \end{equation} As defined in Eq.(\ref{c0}) $C_0(\theta)$ is a monotonically increasing function of $\theta\in[0,1/2)$ starting at $C_0(0)=\ln{4}$ and diverging at $\theta=1/2$. Thus the longest critical length results when $\theta=0$, $L_c(0)=\pi\exp(-\gamma_E)/M\approx 1.7639/M$, and the shortest critical length is zero, $L_c(1/2)=0$. 
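The quoted value of $L_c(0)$ can be checked numerically: at $\theta=0$ the sum in Eq.~(\ref{c0}) reduces to $\sum_{n\ge 1} 1/[n(2n-1)] = \ln 4$, so that $L_c(0)=\pi e^{-\gamma_E}/M$. A short verification sketch (units $M=1$, truncating the sum and adding its leading tail correction):

```python
import math

# Check C_0(0) = sum_{n>=1} 1/(n(2n-1)) = ln 4, then L_c(0) from Eq. (lc2).
N = 200000
C0 = sum(1.0 / (n * (2 * n - 1)) for n in range(1, N + 1))
C0 += 0.5 / N  # leading tail correction: sum_{n>N} 1/(n(2n-1)) ~ 1/(2N)

gammaE = 0.5772156649015329  # Euler-Mascheroni constant
Lc0 = math.pi * math.exp(-gammaE + math.log(4.0) - C0)  # L_c(0) in units 1/M
print(C0, Lc0)
```

The output reproduces $C_0(0)=\ln 4\approx 1.3863$ and $L_c(0)\approx 1.7639/M$.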
For other values of $\theta$ the critical length interpolates continuously between these two extrema. In Fig.\ref{fig1} we plot $m(L,\theta)$ as a function of $1/L$ for several values of $\theta$. The non-trivial dependence of the mass gap on $\theta$ and $L$ has an interesting implication with respect to the small-volume expansion, proposed by L\"{u}scher \cite{luescher,luescher1}, for asymptotically free theories with phase transitions. L\"{u}scher's approach is based on the observation that when the volume is small, the running coupling becomes small due to asymptotic freedom, and perturbative calculations can then be applied. The physical result is finally obtained by doing a sequence of calculations with increasing volumes and extrapolating to the infinite volume limit. However, there are some obstacles in doing the infinite volume extrapolation. One is due to the existence of instantons, mostly addressed so far in the literature \cite{baal}. The other is related to the existence of phase transitions, such as the deconfinement transition in QCD at finite temperature. When a phase transition is present the infinite volume extrapolation can hardly be smooth. The 1+1 dimensional Gross-Neveu model in the large-$N$ limit provides a perfect example for illustrating the problem. Since $M=\Lambda_{\overline{\rm MS}}$ is the only dimensionful parameter, the transition point $L_c$ is of order one in units of $1/M$ when $\theta=0$. This transition is of course the same as the finite temperature phase transition. From Fig.\ref{fig1} we see that the entire regime of finite mass gap is located in the non-perturbative region, where the perturbative running coupling is very large or divergent, as exemplified by the first order (dotted line) and second order (dashed line) perturbative results, respectively. When the size of the box $L$ is smaller than $L_c$, chiral symmetry forbids the mass gap generation. 
Hence the small-volume expansion in its most naive form does not work at all. However, as we have shown, we can delay the phase transition to an arbitrary point by varying the relevant boundary condition. This widens the window in which the mass gap curve overlaps with the perturbative regime. More concretely, let us examine the mass gap behavior near the bosonic limit. To order $(\theta-1/2)^2$ the mass gap equation is \begin{equation} \ln\Bigl({1\over ML}\Bigr)+\ln{4\pi}-\gamma_E= \Bigl(4(\theta-1/2)^2+{m(L,\theta)^2L^2\over\pi^2}\Bigr)^{-1/2}\, , \end{equation} which yields, in the region of $|\theta-1/2|<g^2(L)/2\pi$, \begin{equation} m(L,\theta)={g^2(L)\over L} \sqrt{1-{4\pi^2(\theta-1/2)^2\over g^4(L)}}\, , \end{equation} where $g^2(L)=\pi/\ln(4\pi e^{-\gamma_E}/ML)$ is the perturbative running coupling constant. The critical region is then shifted to $g^2(L)\sim|\theta-1/2|$. Thus, close to the bosonic limit $\theta=1/2$, the mass gap curve possesses a sizable region in which a perturbative calculation can be carried out. At the very value of $\theta=1/2$ the gap equation Eq.(\ref{dveff2}) becomes identical to that of the non-linear $\sigma$-model in the large-$N$ limit \cite{luescher}. In practice, some intermediate values of $\theta$ may be optimal to make the curve flat enough for a better extrapolation, though the choice of $\theta$ would only be known {\it a posteriori}. The use of more general boundary conditions, such as twisted boundary conditions \cite{thooft}, to improve the convergence in the Yang-Mills gauge theory has been pursued \cite{twisted}. Strictly speaking, the analogy with finite temperature phase transitions is no longer valid when more than one spatial direction is compactified. However, even in these cases, rapid cross-overs are still expected, though there would be no true singularities, which are associated only with phase transitions.
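The perturbative window near the bosonic limit can be checked directly against the exact gap equation. Using $\int_0^\infty (dt/t)\,e^{-z^2 t - n^2/4t} = 2K_0(nz)$, Eq.~(\ref{gap2}) at $\theta=1/2$ becomes $\ln(m/M) = 2\sum_{n\ge 1} K_0(nmL)$, which can be solved by bisection and compared with the leading perturbative value $m \simeq g^2(L)/L$. A sketch (units $M=1$; the trapezoid evaluation of $K_0$ and the truncation of the mode sum are numerical shortcuts, not part of the model):

```python
import numpy as np

M = 1.0
gammaE = 0.5772156649015329
T = np.linspace(0.0, 12.0, 4001)   # grid for K_0(x) = int_0^inf exp(-x cosh t) dt
CT, DT = np.cosh(T), T[1] - T[0]

def K0(x):
    """Modified Bessel K_0 for an array of arguments, trapezoid rule."""
    v = np.exp(-np.outer(x, CT))
    return DT * (v.sum(axis=1) - 0.5 * (v[:, 0] + v[:, -1]))

def residual(m, L, theta, nmax=400):
    """ln(m/M) - sum_n (-1)^n cos(2 pi n theta) 2 K_0(n m L); a root solves Eq. (gap2)."""
    n = np.arange(1, nmax + 1)
    return np.log(m / M) - np.sum((-1.0) ** n * np.cos(2 * np.pi * n * theta)
                                  * 2.0 * K0(n * m * L))

def solve_gap(L, theta, lo, hi, iters=60):
    for _ in range(iters):  # bisection; the residual is monotonic in m
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if residual(mid, L, theta) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

L = 0.001 / M                       # ML << 1: deep in the perturbative regime
m_exact = solve_gap(L, 0.5, 100.0, 2000.0)
g2 = np.pi / np.log(4 * np.pi * np.exp(-gammaE) / (M * L))  # running coupling g^2(L)
m_pert = g2 / L
print(m_exact, m_pert)
```

For this example the exact and perturbative gaps agree to well within a few per cent.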
We would like to mention in passing that the phase transition in the 1+1 dimensional Gross-Neveu model with a finite $L$ is no longer possible when $N$ is finite, due to the existence of the $\sigma$-kinks, as shown in Ref. \cite{dashen}. For this reason we will not dwell further on the details of the thermodynamical observables in the 1+1 dimensional case. However, since this absence of the phase transition is invisible in the large-$N$ expansion to any finite order, the results we found above are entirely legitimate within the domain of the $1/N$ expansion. As we will show in the next section, qualitatively similar results are obtained in the 2+1 dimensional Gross-Neveu model, where a phase transition can exist for an infinitely long cylinder (but with finite circumference $L$) at zero temperature. \section{Results in 2+1 dimensions} The effective potential for the $\sigma$-field in 2+1 dimensions to leading order in $1/N$ has a form similar to Eq.(\ref{veff2}): \begin{equation} V_{\rm eff}[\sigma,\theta]=N\biggl[ {\mu^{2\epsilon}\sigma^2\over 2g_B^2}- {1\over L}\sum_{n=-\infty}^\infty \mu^{2\epsilon}\int {d^{2-2\epsilon}\omega \over (2\pi)^{2-2\epsilon}} \ln(\omega^2+\tilde{k}_n^2+\sigma^2)\biggr] \, . \label{veff3} \end{equation} Again we first calculate the derivative of $V_{\rm eff}$ with respect to $\sigma$. After some standard manipulation we obtain \begin{equation} {\partial V_{\rm eff}[\sigma,\theta]\over\partial\sigma} =N\sigma\biggl[{\mu^{2\epsilon}\over g_B^2} +{1\over 2\pi L}\ln(4\cos^2{\pi\theta})+{1\over 2\pi L} \sum_{\nu=1}^\infty C_{2\nu}(\theta) (\sigma L)^{2\nu}\biggr]\, , \label{dveff3} \end{equation} where \begin{equation} C_{2\nu}(\theta)={(-1)^{\nu+1}\over\nu} \Bigl({1 \over 2\pi}\Bigr)^{2\nu} \Bigl[\zeta(2\nu,1/2+\theta)+\zeta(2\nu,1/2-\theta)\Bigr]\, . \end{equation} Due to the use of dimensional regularization we do not encounter infinities here.
However, in order for the theory to possess spontaneous parity breaking in the limit $L\rightarrow\infty$ we demand, as in Ref. \cite{review}, \begin{equation} {\mu^{2\epsilon}\over g_B^2}=-{M\over 2\pi} \, . \end{equation} $M$ will turn out to be the dynamical mass of the particle in the limit $L\rightarrow\infty$. We want to remind the reader that in 2+1 dimensions $g_B^2$ has the dimension of inverse mass. If we further use the infinite product representation of \begin{equation} \cosh{z}-\cos{2\pi\alpha}= 2\sin^2{\pi\alpha}\Bigl(1+{z^2\over 4\pi^2\alpha^2}\Bigr) \prod_{k=1}^\infty\Bigl(1+{z^2\over 4\pi^2(k+\alpha)^2}\Bigr) \Bigl(1+{z^2\over 4\pi^2(k-\alpha)^2}\Bigr) \, , \end{equation} the series in Eq.(\ref{dveff3}) can be summed into a closed form \begin{equation} {\partial V_{\rm eff}[\sigma,\theta]\over\partial\sigma} ={N\sigma\over 2\pi}\biggl[-M+{1\over L} \ln\bigl(2\cosh{\sigma L}+2\cos{2\pi\theta}\bigr)\biggr]\, . \end{equation} The mass gap is immediately solved from $\partial V_{\rm eff}(\sigma=m)/\partial\sigma=0$, \begin{equation} m(L,\theta)=\left\{\begin{array}{ll} {1\over L}\cosh^{-1} \Bigl[1+{1\over 2}(e^{ML}-e^{ML_c(\theta)})\Bigr]\, , &\text{when $L>L_c(\theta)$,}\\ 0\, , &\text{when $L<L_c(\theta)$,}\end{array} \right. \label{gap3} \end{equation} where the critical length $L_c(\theta)$ is defined by \begin{equation} L_c(\theta)=\left\{\begin{array}{ll} 0\, , &\text{when $\theta\in(1/3,2/3)$,} \\ {1\over M}\ln(4\cos^2{\pi\theta})\, , &\text{otherwise.}\end{array}\right. \label{lc3} \end{equation} The free energy, equal to the effective potential evaluated at $\sigma=m(L,\theta)$, has the form \begin{equation} F[L,\theta]={N\over 2\pi}\biggl[-{M\over 2}m^2(L,\theta) +{1\over L}\int_0^{m(L,\theta)}d\sigma\,\sigma \ln\bigl(2\cosh{\sigma L}+2\cos{2\pi\theta}\bigr)\biggr]\, . 
\label{fe3} \end{equation} For convenience from the experimental point of view we would like to regard $\theta$, rather than $1/L$, as the independent variable, because one could use the same sample and only change the magnetic field strength in the solenoid. In Fig.\ref{fig2} we display the mass gap as a function of $\theta$ for three values of $1/L$. The distinct feature in the figure is the non-analyticity of $m(L,\theta)$ as a function of $\theta$ when $L\le L_c(0)$, reflecting the fact that the Aharonov-Bohm effect pushes the critical length of the phase transition to a smaller value or effectively changes the mass scale of the theory to a larger value. In Fig.\ref{fig3} we plot the phase diagram in the $(1/L,\theta)$-plane determined from Eq.(\ref{gap3}). Not only does the critical length change as $\theta$ is varied, but we also find, surprisingly, that there is a region $\theta\in(1/3,2/3)$ where parity is always broken, no matter how small $L$ is. In other words, even when the renormalized mass scale $M=0$, the system would still prefer to stay in the parity breaking phase for $\theta\in(1/3,2/3)$. To understand why there should be such a region of $\theta$ where parity is always broken, let us focus on the $\sigma$-propagator. We know from the literature \cite{review} that the mechanism of parity breaking at zero temperature (corresponding to $L\rightarrow\infty$ and $\theta=0$ in our case) is due to the formation of a quasi-bound state, which shows up as a ``pole'' at the fermion-antifermion threshold, $\omega^2=4M^2$. It is this quasi-bound state, analogous to the Cooper-pair in an ordinary superconductor, that condenses and renders the expectation value $\langle\bar{\psi}\psi\rangle$ non-vanishing. By an explicit calculation of the $\sigma$-propagator, we can show that this quasi-bound state persists whenever the mass gap $m(L,\theta)$ is finite.
Therefore the mechanism for parity breaking is the same when $\theta\in(1/3,2/3)$ as when $L\rightarrow\infty$ and $\theta=0$. Since the threshold becomes zero in the symmetric phase due to the vanishing mass gap, all states have to lie within the continuum and hence there cannot be any bound state. One can easily verify that $\omega^2=0$ is no longer a pole for the $\sigma$-propagator in the symmetric phase. Qualitatively, the symmetric phase is a kinetic energy dominated phase, while the symmetry breaking phase is the interaction dominated phase. When the size of the cylinder $L$ becomes smaller, the kinetic energy becomes more and more important, if none of the discrete momentum levels vanish. In our case the effective momentum is $\tilde{k}_n=\pi(2n+1+2\theta)/L$, which interpolates between the fermionic and bosonic cases. When $\theta$ is finite, one of the $\tilde{k}_n$ is closer to zero and thus the role of the kinetic energy is reduced. This in turn implies that, effectively, the interaction becomes stronger when $\theta$ increases from the fermionic value 0 to the bosonic value 1/2. We will show, in section V, that the presence of $\theta$ indeed modifies the phase acquired when interchanging two particles, from the original fermionic value, $\pi$, to $\pi+2\pi\theta$. This extra phase could be interpreted as an additional contact interaction. One such example in the context of non-relativistic quantum mechanics was noticed some time ago by Leinaas and Myrheim \cite{leinaas}. Another physical observable is the induced (or persistent) surface current density circling around the cylinder, defined by \begin{equation} J[L,\theta]\equiv -{\partial F[L,\theta]\over\partial a_1}= -{L\over 2\pi}{\partial F[L,\theta]\over \partial\theta} ={N\sin{2\pi\theta}\over 2\pi L^2}\int_0^{m(L,\theta)L} dx {x\over\cosh{x}+\cos{2\pi\theta}} \, .
\end{equation} One can easily recognize that the overall factor $\sin{2\pi\theta}$ in the above equation has the same origin as a similar factor in the Aharonov-Bohm scattering amplitude \cite{aharonov}. This overall factor implies that the induced current vanishes at the fermion and boson limits, i.e., when $\theta$ is an integer or a half-integer, in analogy to the vanishing of the Aharonov-Bohm scattering cross section in the same limits. A similar induced current on a ring was derived in terms of a single particle picture in Ref. \cite{gerry} more than a decade ago. What is novel in our case is that $m(L,\theta)$ is a dynamically determined quantity, due to many-body effects, rather than an external parameter. In Fig.\ref{fig4} we plot $J[L,\theta]$ as a function of $\theta$ for several values of $1/L$. Again we observe the non-analyticity, inherited from the non-analyticity of $m(L,\theta)$. Expanding the free energy in Eq.(\ref{fe3}) as a power series in $m(L,\theta)L$, we obtain \begin{equation} F[L,\theta]={N m(L,\theta)^2\over 4\pi} \biggl[-M+{1\over L}\ln(4\cos^2{\pi\theta})+{1\over L} \sum_{\nu=1}^\infty \overline{C}_{2\nu}(\theta) \Bigl(m(L,\theta)L\Bigr)^{2\nu}\biggr] \, , \end{equation} with \begin{equation} \overline{C}_{2\nu}(\theta)= {(-1)^{\nu+1}\over \nu(\nu+1)}\Bigl({1 \over 2\pi}\Bigr)^{2\nu} \Bigl[\zeta(2\nu,1/2+\theta)+\zeta(2\nu,1/2-\theta)\Bigr]\, . \end{equation} This expression for $F[L,\theta]$ can be interpreted as a virial expansion, in analogy to the expansion of pressure as a power series in the particle density. The corresponding virial coefficients are $\overline{C}_{2\nu}(\theta)$, which become singular when $\theta=1/2$, since $\zeta(\alpha,z)\sim z^{-\alpha}$ when $z$ is small enough. We will show in section V that $\theta$ plays the role of the statistics parameter, with $\theta=0$ being the fermionic limit and $\theta=1/2$ being the bosonic limit.
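For $\nu=1$ the Hurwitz zeta functions sum to a closed form, $\zeta(2,1/2+\theta)+\zeta(2,1/2-\theta)=\pi^2/\cos^2{\pi\theta}$, so that $\overline{C}_2(\theta)=1/8\cos^2{\pi\theta}$. A short numerical check, which also exhibits the rapid growth of the coefficient as $\theta\to 1/2$:

```python
import math

def zeta2(a, N=100000):
    """Hurwitz zeta(2, a) = sum_{n>=0} (n+a)^-2, with an Euler-Maclaurin tail."""
    s = sum(1.0 / (n + a) ** 2 for n in range(N))
    return s + 1.0 / (N + a) + 0.5 / (N + a) ** 2

def C2bar(theta):
    # the nu = 1 virial coefficient of the free-energy series
    return 0.5 * (0.5 / math.pi) ** 2 * (zeta2(0.5 + theta) + zeta2(0.5 - theta))

for theta in (0.0, 0.25, 0.4):
    print(theta, C2bar(theta), 1.0 / (8 * math.cos(math.pi * theta) ** 2))
print(C2bar(0.49))  # roughly three orders of magnitude larger than C2bar(0)
```

The direct sum matches the closed form for each sampled $\theta$, and the coefficient indeed blows up as the bosonic point is approached.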
The divergence of the virial coefficients at $\theta=1/2$ apparently reflects the fact that it is not possible to use a single series to represent the free energy with all the coefficients smoothly interpolating between the fermionic and bosonic regimes, even though both regimes are well behaved by themselves. Put differently, the fermionic and bosonic regimes are separated by non-perturbative physics (in terms of the statistics parameter). A much milder singularity (a cusp) was found for the second virial coefficient in the non-relativistic anyon system considered by Arovas, Schrieffer, Wilczek and Zee \cite{arovas}. One may speculate that the stronger singularity we find is rooted in the deep connection between statistics and spin in fully relativistic theories. It is worth emphasizing again that the non-analyticity is a manifestation of the Aharonov-Bohm effect in an infinite many-particle system. Without a phase transition, the $\theta$ dependence of physical observables would never be singular. \section{Universality} In the last two sections we found that many thermodynamical observables are strongly influenced by the Aharonov-Bohm effect. In order to reveal the precise influence of the Aharonov-Bohm effect on the phase transition, we calculate the critical exponents in this section. It turns out that if we fix $\theta$ and vary $1/L$, in analogy to the usual variation of temperature, the universality classes of chiral symmetry restoration in the 1+1 dimensional case and of parity restoration in the 2+1 dimensional case are not altered by the presence of $\theta$ to leading order in $1/N$, as long as the phase transition is not totally destroyed. Since the calculation is insensitive to the space-time dimension involved, we will only present it in the case of 2+1 dimensions. For notational simplicity let us define $\tau\equiv [L-L_c(\theta)]/L_c(\theta)$, with $L_c(\theta)$ given by Eq.(\ref{lc3}). 
For detailed definitions of the critical exponents we refer to the book \cite{itzykson}. Because the phase transition exists when $\theta\in [0,1/3)$ and $\theta\in (2/3,1]$, the critical exponents are defined only in these regions in 2+1 dimensions. From Eq.(\ref{gap3}) we can immediately solve for the mass gap $m(L,\theta)$, for small $\tau>0$, \begin{equation} m(L,\theta)\approx {M\over\sqrt{\ln(4\cos^2{\pi\theta})} } \, \tau^{1/2}, \end{equation} which in turn yields $\beta=1/2$. The free energy clearly vanishes in the symmetric phase when $L<L_c(\theta)$, since $m(L,\theta)=0$ there. In the symmetry breaking phase the free energy is given, as can be seen directly from Eq.(\ref{fe3}), by \begin{equation} F[L,\theta]\approx{N m(L,\theta)^2\over 4\pi} \biggl[-M+{L_c(\theta)\over L}M+ \overline{C}_2(\theta)\Bigl(m(L,\theta)L\Bigr)^2\biggr] =-{N M^3\over 4\pi} {1-\overline{C}_2(\theta)\over\ln(4\cos^2{\pi\theta})}\, \tau^2\, , \end{equation} with $\overline{C}_2(\theta)=1/8\cos^2\pi\theta$. The specific heat $c_v$ is proportional to the second derivative of $F[L,\theta]$ with respect to $\tau$. Thus we see that $c_v$ is finite in both phases and has a discontinuity at the critical point, which implies that $\alpha=\alpha'=0$. To obtain the exponents $\gamma$, $\nu$ and $\eta$ we need to calculate the two-point correlation function for the $\sigma$-field in momentum space in the ``static'' limit, $D_\sigma(\omega=0,{\bf p})$. Here $\omega$ should be understood as the momentum in the $x$-direction and ${\bf p}$ represents the momenta along the Euclidean time and $y$-direction, respectively. To leading order in $1/N$ this correlation function is given by simple iteration of the bubble graph. 
After some lengthy but straightforward calculation we obtain, to order ${\bf p}^2$, \begin{equation} D_\sigma(\omega=0,{\bf p})={\pi\over N} \Bigl[{\overline{C}_2(\theta)L_c(\theta)\over 2}{\bf p}^2 +aM\tau\Bigr]^{-1}\, , \label{sprop} \end{equation} where $a=1$ when $\tau>0$ and $a=-1/2$ when $\tau<0$. At criticality, i.e. $\tau=0$, $D_\sigma(\omega=0,{\bf p})\propto {\bf p}^{-2}$, which implies $\eta=0$. The susceptibility $\chi$ is proportional to $D_\sigma(\omega=0,{\bf p}=0)\propto \tau^{-1}$ in both phases. Thus, we have $\gamma=\gamma'=1$. Eq.(\ref{sprop}) also implies that the correlation length $\xi$ has the form \begin{equation} \xi=\sqrt{\overline{C}_2(\theta)L_c(\theta)\over 2a M}\, |\tau|^{-1/2}\, , \end{equation} or equivalently $\nu=\nu'=1/2$. In order to obtain the last exponent $\delta$ we need to introduce a uniform external field $h$ coupled linearly to $\bar{\psi}\psi$ (or a mass term). In the presence of $h$ the gap equation to leading order in $1/N$ is still given by Eq.(\ref{dveff3}) except for the replacement $\sigma\rightarrow\sigma+h$ in the one-loop part. As a consequence of this replacement the gap equation now becomes \begin{equation} m\biggl\{\tau+{1\over ML_c(\theta)} \sum_{\nu=1}^\infty C_{2\nu}(\theta) \Bigl(m(L,\theta)L\Bigr)^{2\nu}\biggr\}=h\, . \end{equation} At the critical point we have $m^3\propto h$, or in other words, $\delta=3$. This completes the calculation of the exponents in 2+1 dimensions. The same result was obtained in Ref. \cite{rosenstein} for $\theta=0$. One obtains identical exponents by a similar calculation in 1+1 dimensions. We summarize the final result in Table \ref{tab1}. It hardly escapes notice that the exponents in Table \ref{tab1} are of the mean-field type. This does not constitute a surprise since we only work up to the leading order in $1/N$. However, one should be aware that the large-$N$ limit alone does not guarantee a mean-field result. 
For example, the non-linear $\sigma$-model in lower than 4 dimensions violates mean-field universality even in leading order in $1/N$ \cite{itzykson}. The real reason in our case is that no $\sigma$-loop contribution is involved to leading order in $1/N$. Of course, we anticipate that the degeneracy of the critical exponents in 1+1 and 2+1 dimensions is lifted when we include higher order corrections. It should perhaps be mentioned explicitly that, though the critical exponents are independent of $\theta$, the amplitudes do depend on $\theta$. Finally, we would like to point out that, if we regard $\theta$ as the independent variable and fix $1/L$, we find that the functional dependence on $\theta$ of some physical observables, such as the mass gap and the induced current, near the critical region depends on the value of $1/L$, as explicitly indicated in Fig.\ref{fig2} and Fig.\ref{fig4}, with $L=L_c(0)$ as a special point. \section{The Nature of the quasi-particle} Since the interactions between quasi-particles are of order $1/N$, which is vanishingly small in the large $N$ limit, we are allowed to discard the interaction altogether when we attempt to elucidate the nature of the quasi-particle. The effective Lagrangian is simply given by \begin{equation} {\cal L}_{\rm eff}= \bar{\psi}[i\partial_\mu-a_\mu(\theta)]\gamma^\mu\psi -m\bar{\psi}\psi \, , \end{equation} with $m=m(L,\theta)$ and $\psi(x+L)=-\psi(x)$. The constant gauge field can be eliminated by a gauge transformation, as mentioned in the Introduction, at the expense of the $\psi$-field becoming multi-valued when $\theta\neq 0$ or $1/2$. This multi-valuedness signals that the quasi-particle is anyon-like. To be more specific, let us calculate the propagator for the $\psi$-field, which can easily be written down in momentum space as \begin{equation} \tilde{S}(k_n,\omega;\theta)=i {\omega\gamma_0-\tilde{k}_n\gamma_1+m \over \omega^2-\tilde{k}_n^2-m^2+i\epsilon} \, . 
\end{equation} The propagator in coordinate space is then obtained by a Fourier transform \begin{equation} S(x,t;\theta)={1\over L}\sum_{n=-\infty}^\infty \int_{-\infty}^\infty {d\omega\over 2\pi} \, e^{i(\omega t-k_n x)}\tilde{S}(k_n,\omega;\theta) =\bigl[(i\partial_\mu-a_\mu(\theta))\gamma^\mu+m\bigr] S_0(x,t;\theta) \, , \label{prop} \end{equation} where $S_0(x,t;\theta)$ is the ``bosonic'' propagator \begin{eqnarray} S_0(x,t;\theta)&=&{1\over L}\sum_{n=-\infty}^\infty e^{-i k_n x} \int_{-\infty}^\infty {d\omega\over 2\pi} e^{i\omega t} {i\over \omega^2-\tilde{k}_n^2-m^2+i\epsilon} \, , \\ &=&{1\over L}\sum_{n=-\infty}^\infty \, e^{-i k_n x}{1\over 2E_n}\left[\theta(t) e^{-i E_n t} +\theta(-t) e^{i E_n t} \right] \, , \end{eqnarray} and $E_n=\sqrt{\tilde{k}_n^2+m^2}$. To carry out the discrete momentum sum we use the Poisson summation formula \begin{equation} \sum_{n=-\infty}^\infty\, f(n)=\sum_{l=-\infty}^\infty \int_{-\infty}^\infty d\phi \, e^{-i2\pi l\phi} f(\phi) \, . \end{equation} It is then straightforward, following the procedure in \cite{bogoliubov}, to obtain \begin{equation} S_0(x,t;\theta)=\sum_{l=-\infty}^\infty e^{i\pi l}\, e^{i2\pi(l+x/L)\theta} S_B(x+lL,t) \, , \label{prop0} \end{equation} where $S_B(x,t)$ is given by \begin{equation} S_B(x,t)={1\over 2\pi}\theta(x^2-t^2)K_0(m\sqrt{x^2-t^2}) -i{1\over 4}\theta(t^2-x^2)H_0^{(2)}(m\sqrt{t^2-x^2}) \, . \label{propb} \end{equation} In the above equation $t$ should be understood as $t(1-i\epsilon)\equiv t-i\epsilon\text{sign}(t)$ or equivalently $t^2\rightarrow t^2-i\epsilon$, to avoid divergences on the light-cone. The physical meaning of each term in $S_0(x,t;\theta)$ can be easily recognized. The ``$l$'' sum is obviously a sum over winding numbers. The first factor is due to the anti-periodic boundary condition. The second factor accounts for the total flux embraced by a quasi-particle with a path which winds around the origin ``$l$'' times. 
The third factor is nothing but the relativistic boson propagator in flat 1+1 dimensional Minkowski space from point $x_i=(0,0)$ to point $x_f=(x+lL,t)$. Therefore, the above equation is a decomposition of $S_0(x,t;\theta)$ into a sum over paths of distinct homotopy class in the universal covering space with $\theta$ taking the role of the statistics parameter. In the limit $L\gg t$ all terms except $l=0$ are exponentially (or algebraically when $m=0$) suppressed and $S(x,t;\theta)$ approaches the usual boson propagator. The interpretation of $\theta$ as the statistics parameter can also be understood from the point of view of interchanging a pair of quasi-particles. Since these quasi-particles cannot penetrate through each other due to the hard core (inherent in the fermionic Lagrangian when $\theta=0$), the only way to interchange the positions of quasi-particle number 1 located at $x_1=0$ and quasi-particle number 2 at $x_2=x$ is to let quasi-particle 1 travel through the interval $(0,x)$, while quasi-particle 2 travels through the interval $(x,L)$ and then $(L\equiv 0,x)$. The combined world line exactly circumscribes the ring once and therefore accumulates a phase factor \begin{equation} \exp\biggl\{i\oint dx^\mu a_\mu(\theta)\biggr\}=e^{i2\pi\theta} \, . \end{equation} In possession of the explicit expression for $S(x,t;\theta)$ we can use the Bjorken-Johnson-Low formula \cite{bjl} to calculate the equal-time anti-commutation relation \begin{equation} \{\psi_\alpha(x,0),\psi_\beta^\dagger(0,0)\}= \Bigl(\lim_{t\rightarrow 0^+} S(x,t;\theta)\gamma_0- \lim_{t\rightarrow 0^-} S(x,t;\theta)\gamma_0\Bigr)_{\alpha\beta} \end{equation} and see whether the usual anti-commutator is affected by the presence of the magnetic flux. 
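The Poisson summation formula invoked in the winding-number decomposition above can be sanity-checked numerically on a Gaussian test function, for which both sides of the identity converge rapidly (a standalone sketch, unrelated to the specific propagator of this model):

```python
import math

# Poisson summation: sum_n f(n) = sum_l \int dphi e^{-2 pi i l phi} f(phi)
# for f(phi) = exp(-a*phi^2), the Fourier integral at winding l is
# sqrt(pi/a) * exp(-pi^2 l^2 / a)
a = 0.7  # arbitrary width parameter for the test Gaussian
lhs = sum(math.exp(-a * n * n) for n in range(-50, 51))
rhs = math.sqrt(math.pi / a) * sum(
    math.exp(-math.pi ** 2 * l * l / a) for l in range(-50, 51)
)
assert abs(lhs - rhs) < 1e-12
```

The direct sum over modes $n$ converges slowly for a narrow Gaussian, while the dual sum over windings $l$ converges fast; this trade-off is exactly what makes the winding expansion of $S_0(x,t;\theta)$ useful at large $L$.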
It is readily verified that \begin{equation} \lim_{\epsilon\rightarrow 0}\lim_{t\rightarrow 0} S(x,t(1-i\epsilon);\theta)={1\over 2}{\rm sign}(t) \sum_{l=-\infty}^\infty (-1)^l \delta(x+lL)\gamma_0 + \,\, \text{terms independent of $t$} \, , \end{equation} which in turn yields \begin{equation} \{\psi_\alpha(x,0),\psi_\beta^\dagger(0,0)\}= \delta_{\alpha\beta}\sum_{l=-\infty}^\infty (-1)^l \delta(x+lL) \, . \end{equation} While the sum over ``$l$'' is again due to the geometry of the ring, we have verified that $\theta$ does not enter the canonical equal-time anti-commutator in our model, in contrast to the graded commutator obeyed by anyons induced via the Chern-Simons action. This result should not be surprising in light of the local character of the canonical commutation relation and the fact that the magnetic flux is global and never attached to each particle in our model. On the other hand, the propagator contains global information and should have a non-trivial $\theta$-dependence, which in turn yields the non-trivial dependence on $\theta$ of thermodynamical quantities. Thus, the nature of the quasi-particle now becomes clear. Locally, the quasi-particle is a fermion, but globally, the quasi-particle behaves like an anyon. Similarly, the propagator in 2+1 dimensions can be calculated and is again given by Eq.(\ref{prop}) except that $S_0(x,t;\theta)$ in Eq.(\ref{prop0}) should be replaced by the 2+1 dimensional version \begin{equation} S_0(x,y,t;\theta)=\sum_{l=-\infty}^\infty e^{i\pi l}\, e^{i2\pi(l+x/L)\theta} S_B(x+lL,y,t) \, , \end{equation} with \begin{equation} S_B(x,y,t)= \theta(r^2-t^2){e^{-m\sqrt{r^2-t^2}}\over 4\pi\sqrt{r^2-t^2}} -i\theta(t^2-r^2){e^{-im\sqrt{t^2-r^2}}\over 4\pi\sqrt{t^2-r^2}} \, , \end{equation} and $r=\sqrt{x^2+y^2}$. 
Using the identity \begin{equation} \lim_{\epsilon\rightarrow 0}{\epsilon\over(r^2+\epsilon^2)^{3/2}} =2\pi\delta(x)\delta(y) \, , \end{equation} the anti-commutator in 2+1 dimensions is then given by \begin{equation} \{\psi_\alpha(x,y,t),\psi_\beta^\dagger(0,0,t)\}=\delta_{\alpha\beta} \sum_{l=-\infty}^\infty (-1)^l \delta(x+lL)\delta(y) \, . \end{equation} It is not difficult to see that the quasi-particle in the 2+1 dimensional case is anyon-like only in the $x$ direction. In other words, only those world lines which circumscribe the axis of the cylinder non-trivially can acquire phases of integer multiples of $2\pi\theta$. \section{Summary and outlook} We considered the statistical mechanics of the Gross-Neveu model on a ring and on a cylinder with a magnetic solenoid coinciding with the axis. Among the interesting results we obtained are 1) the periodic but non-analytic dependence of thermodynamical observables on the magnetic flux ($\theta$) and 2) the existence of an interval $\theta\in(1/3,2/3)$ (modulo integers) where parity is always spontaneously broken. All these phenomena are explicit manifestations of the Aharonov-Bohm effect in thermodynamics or in many-body systems. We further showed that the mean-field nature of the existing phase transitions is preserved to leading order in $1/N$, by verifying the $\theta$-independence of all the critical exponents. The precise nature of the quasi-particle, locally fermion-like and globally anyon-like, was illuminated through the calculation of the equal-time anti-commutator and the decomposition of the propagator into a sum over paths of distinct homotopy class or winding number. There are several directions in which our present work can be extended within the same model. The most interesting one is to include an external space-time dependent electromagnetic field and study the response to it. Setting up junctions and investigating tunneling and interference could also be interesting. 
Although we only worked to leading order in $1/N$, we do expect that our results, especially in the 2+1 dimensional case, are qualitatively stable against higher order $1/N$-corrections, based on the next to leading order study of the same model in Ref. \cite{gat} in the absence of $\theta$. Nevertheless, an explicit calculation of the higher order corrections should be carried out. One may also try to introduce a chemical potential and study the situation away from the half-filling point. As we have been emphasizing all along, the periodic but non-analytic $\theta$-dependence of thermodynamical observables is a manifestation of the Aharonov-Bohm effect in many-body systems with phase transitions, not pertinent only to the Gross-Neveu model itself. For this reason we anticipate that similar phenomena can be found in other models. The detailed microscopic interactions are likely to play a minor role, as long as phase transitions exist. Thus, in order to make direct contact with experiments, it would be very interesting to consider more realistic models.
\section{Introduction}
\label{sec1}In 1984 Jones \cite{J1, J2} discovered a polynomial invariant of oriented knots. Following this lead, the two-variable HOMFLY-PT polynomial \cite{HOMFLY, PT} of oriented links and the one-variable Kauffman polynomial \cite{Kau1} of unoriented links were discovered in 1985 and 1987, respectively. The HOMFLY-PT polynomial generalizes both the Alexander and the Jones polynomials. In 1990, the two-variable Kauffman polynomial \cite{Kau2} was introduced; it generalizes only the Jones polynomial. By using the Chern-Simons path integral method, Witten \cite{W} gave a quantum field theory interpretation of the Jones polynomial. Witten \cite{W} also predicted the existence of quantum invariants of 3-manifolds. Following this track, Reshetikhin and Turaev \cite{RT1, RT2} gave a construction of 3-manifold invariants by using the quantum universal enveloping algebra (quantum group) $U_{q}(sl_{2})$ at roots of unity, which leads to the colored versions of the classical HOMFLY-PT and Kauffman polynomial invariants. These works give a unified understanding of the quantum group invariants of links. Here color means a representation of the quantum group. It turns out that the colored HOMFLY-PT polynomial is the quantum group invariant of special linear type, i.e. of the quantum group of type $A_{n}$, while the colored Kauffman polynomial is the quantum group invariant of types $B_{n}$, $C_{n}$ and $D_{n}$. In a series of papers in the late 1990s and early 2000s, Labastida, Mari\~{n}o, Ooguri and Vafa \cite{LaM, LMV, OV} proposed a conjectural description of a deep structure of the reformulated invariants of the colored HOMFLY-PT polynomials of links. This conjecture was proved by K. Liu and P. Peng in \cite{LP1, LP2}. In some sense the Labastida-Mari\~{n}o-Ooguri-Vafa (LMOV) conjecture can be expressed in purely mathematical language, i.e. in terms of irreducible representations of quantum groups. 
The physics background of this conjecture dates back to 't Hooft's seminal work on the large $N$ expansion of $U(N)$ gauge field theories in 1974. Gopakumar and Vafa \cite{GV} described the exact duality: closed topological string theory on the resolved conifold is dual to the $U(N)$ Chern-Simons theory on $S^{3}$. The Gromov-Witten theory of the resolved conifold corresponds to the Chern-Simons theory of the unknot. The LMOV conjecture considers the more general case of a nontrivial link or knot and the corresponding Wilson loop expectation values, i.e. the colored HOMFLY-PT polynomials of the link. So the LMOV conjecture can be viewed as a counterpart of the Gopakumar-Vafa conjecture. Previously it was known that the HOMFLY-PT polynomial can be expressed as a series in $q-q^{-1}$ and $t$ with integer coefficients, while the colored HOMFLY-PT polynomial is merely a polynomial in $q^{\pm 1}$ and $t^{\pm 1}$ with rational coefficients. The LMOV conjecture predicts an intrinsic symmetry in $q-q^{-1}$ of the reformulated invariants of the colored HOMFLY-PT polynomial and a hidden integrality encoded in the colored HOMFLY-PT polynomial. In the late 2000s, the orthogonal LMOV conjecture was formulated by Lin Chen \& Qingtao Chen \cite{Che1, CC, Che2} and by Marcos Mari\~{n}o \cite{M}. Chen-Chen's formulation puts its emphasis solely on the colored Kauffman polynomial, while Mari\~{n}o's emphasizes the relation between the colored HOMFLY-PT polynomials and the colored Kauffman polynomials. More recently, Kefeng Liu and Pan Peng \cite{LP3} obtained a new structure of the colored HOMFLY-PT polynomial: the Chern-Simons partition function appearing in the LMOV conjecture can be expressed as an infinite product, which indicates a potential modularity of the Chern-Simons partition function \cite{LP3}. 
In this paper, an infinite product expression for the orthogonal Chern-Simons partition function appearing in the orthogonal LMOV type conjecture is established, and the case of the unknot is presented as an explicit formula. This paper is organized as follows. In Section 2, we introduce the basic setup for the quantum group invariants of links. In Section 3, we describe the original and the orthogonal LMOV conjectures and fix notation. In Section 4, we derive the orthogonal Chern-Simons partition function as an infinite product and illustrate the example of the unknot. In Section 5, we discuss the symmetry properties associated to this infinite product structure. \section{Quantum Invariants of Links} \label{sec2}Let $\mathfrak{g}$ be a finite dimensional complex semi-simple Lie algebra of rank $N$ with Cartan matrix $(C_{ij})$. Let $U_{q}(\mathfrak{g})$ be the quantum enveloping algebra of $\mathfrak{g}$. Let $V$ be a vector space over a field $k$. A linear automorphism $c$ of $V\otimes V$ is said to be an $R$-matrix if it is a solution of the following Yang-Baxter equation
\begin{equation}
(c\otimes id_{V})(id_{V}\otimes c)(c\otimes id_{V})=(id_{V}\otimes c)(c\otimes id_{V})(id_{V}\otimes c)
\end{equation}
that holds in the automorphism group of $V\otimes V\otimes V$. It is well known that a solution of the Yang-Baxter equation provides a representation of the braid group. The solution we use is the following so-called universal $R$-matrix. 
\begin{equation}
R=q^{\underset{i,j}{\sum }C_{ij}^{-1}H_{i}\otimes H_{j}}\underset{\beta }{\prod }\exp_{q}[(1-q^{-2})X_{\beta }^{+}\otimes X_{\beta }^{-}],
\end{equation}
where $\beta $ runs over the positive roots of $sl(N,\mathbf{C})$, $(C_{ij})$ is the Cartan matrix, and the $q$-exponential is given by
\begin{equation}
\exp_{q}[x]=\underset{k=0}{\overset{\infty }{\sum }}q^{\frac{1}{2}k(k+1)}\frac{x^{k}}{[k]_{q}!},
\end{equation}
where $[k]_{q}!=[k]_{q}\cdot \lbrack k-1]_{q}\cdots \lbrack 1]_{q}$, $[k]_{q}=\dfrac{[k]}{[1]}$ and $[n]=q^{n}-q^{-n}$. Given a link $\mathcal{L}$ with $L$ components, it is well known that $\mathcal{L}$ can be represented by an element of some braid group $B_{m}$ with $m$ strands. To each component we associate an irreducible representation $A^{\alpha }$ of the quantized universal enveloping algebra $U_{q}(sl(N,\mathbb{C}))$; $A^{\alpha }$ is labeled by its highest weight $\Lambda _{\alpha }$. As usual, we associate Young diagrams to them. Without loss of generality, one can assume the first $m_{1}$ strands correspond to the first component, the next $m_{2}$ strands correspond to the second component, and so on. Let
\begin{equation}
\widehat{V}=\overset{L}{\underset{\alpha =1}{\bigotimes }}V_{\Lambda _{\alpha }}^{m_{\alpha }},
\end{equation}
and write the braiding $\check{R}=P_{12}R:\, V\otimes W\rightarrow W\otimes V$, where $P_{12}(s\otimes t)=t\otimes s$. For a generator $\sigma _{i}$ of the braid group $B_{m}$, define
\begin{equation}
\pi (\sigma _{i}^{\pm 1})=Id_{V_{1}}\otimes Id_{V_{2}}\otimes \cdots \otimes Id_{V_{i-1}}\otimes \check{R}^{\pm 1}\otimes \cdots \otimes Id_{V_{m}}. 
\end{equation}
The quantum group invariant of the link $\mathcal{L}$ is defined as follows:
\begin{equation}
W_{A^{1},\cdots ,A^{L}}^{\mathfrak{g}}(\mathcal{L})=q^{d(\mathcal{L})}Tr_{\widehat{V}}(\mu ^{m}\cdot \pi (\mathcal{L})),
\end{equation}
where $\mu =q^{\rho ^{\ast }}$, $\rho ^{\ast }$ is the element in $\mathfrak{h}\subset U_{q}(\mathfrak{h})$ corresponding to the Weyl vector (i.e. the sum of the fundamental weights) under the natural isomorphism $\mathfrak{h}\simeq \mathfrak{h}^{\ast }$, and $d(\mathcal{L})$ is given by the following formula:
\begin{equation}
d(\mathcal{L})=-\underset{\alpha =1}{\overset{L}{\sum }}\omega (\mathcal{K}_{\alpha })(\Lambda _{\alpha },\Lambda _{\alpha }+2\rho )+\frac{2}{N}\underset{\alpha <\beta }{\sum }lk(\mathcal{K}_{\alpha },\mathcal{K}_{\beta })l_{\alpha }l_{\beta }.
\end{equation}
Special case 1: for the unknot $\bigcirc $, $W_{A}(\bigcirc )$ is the quantum dimension $\dim _{q}(V_{A})$ of the corresponding representation space $V_{A}$.
Special case 2: if $\mathfrak{g}=sl_{N}$ and $A^{1}=A^{2}=\cdots =A^{L}=(1)$, the quantum group invariant of the link equals the HOMFLY polynomial at $t=q^{N}$, up to a universal factor $\dfrac{t-t^{-1}}{q-q^{-1}}$.
Special case 3: if $\mathfrak{g}=so_{2N+1}$ and $A^{1}=A^{2}=\cdots =A^{L}=(1)$, the quantum group invariant of the link equals the Kauffman polynomial at $t=q^{2N}$, up to a universal factor $1+\dfrac{t-t^{-1}}{q-q^{-1}}$ and a power of $t$ determined by the linking numbers.
Thus the quantum group invariants associated to $\mathfrak{g}=sl_{N}$ and $\mathfrak{g}=so_{2N+1}$ are called the colored HOMFLY and the colored Kauffman polynomials, respectively. The irreducible representations of the quantum groups of special linear and orthogonal type can be labeled by Young tableaux. We now introduce some basic notation for partitions and the corresponding Young tableaux. 
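To make the braid group representation above concrete, the defining relations can be checked numerically in the simplest case. The $4\times 4$ matrix below is the standard braiding $\check{R}$ for the fundamental representation of $U_q(sl_2)$ on $\mathbb{C}^2\otimes \mathbb{C}^2$ (a generic sketch with a numerical value of $q$; this matrix is textbook material, not taken from this paper):

```python
import numpy as np

q = 1.3  # generic numerical value of the deformation parameter
# standard braiding R-check = P * R on C^2 (x) C^2, basis ordered (11, 12, 21, 22)
Rc = np.array([[q, 0, 0, 0],
               [0, 0, 1, 0],
               [0, 1, q - 1 / q, 0],
               [0, 0, 0, q]])
I2 = np.eye(2)
A = np.kron(Rc, I2)  # pi(sigma_1) acting on the triple tensor product
B = np.kron(I2, Rc)  # pi(sigma_2) acting on the triple tensor product

# braid relation sigma_1 sigma_2 sigma_1 = sigma_2 sigma_1 sigma_2
assert np.allclose(A @ B @ A, B @ A @ B)
# Hecke (skein) relation: (R-check - q)(R-check + 1/q) = 0
assert np.allclose((Rc - q * np.eye(4)) @ (Rc + (1 / q) * np.eye(4)), 0)
```

The Hecke relation is the matrix form of the HOMFLY skein relation; the eigenvalues $q$ and $-q^{-1}$ of $\check{R}$ correspond to the symmetric and antisymmetric parts of the tensor square.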
A partition of $n$ is a tuple of positive integers $\mu =(\mu _{1},\mu _{2},\ldots ,\mu _{k})$ such that $|\mu |\triangleq \overset{k}{\underset{i=1}{\sum }}\mu _{i}=n$ and $\mu _{1}\geq \mu _{2}\geq \cdots \geq \mu _{k}>0$, where $|\mu |$ is called the degree of $\mu $ and $k$ is called the length of $\mu $, denoted by $\ell (\mu )$. A partition can be represented by a Young diagram; for example, the partition $(5,4,2,1)$ is identified with the Young diagram whose rows consist of $5,4,2,1$ boxes. Denote by $\mathcal{P}$ the set of all Young diagrams. Let $\chi _{A}$ be the character of the irreducible representation of the symmetric group labeled by the partition $A$. Given a partition $\mu $, define $m_{j}=\#\{k\geq 1:\mu _{k}=j\}$. The order of the centralizer of a permutation of cycle type $\mu $ is given by:
\begin{equation}
\mathfrak{z}_{\mu }=\prod_{j\geq 1}j^{m_{j}}m_{j}!.
\end{equation}
The theory of symmetric functions has a close relationship with the representations of the symmetric group. The power sum symmetric functions of a given set of variables $x=\{x_{j}\}_{j\geq 1}$ are defined as the direct limit of the Newton polynomials:
\begin{equation}
p_{n}(x)=\overset{\infty }{\underset{j=1}{\sum}}x_{j}^{n},\qquad p_{\mu }(x)=\overset{\ell (\mu )}{\underset{i=1}{\prod}}p_{\mu _{i}}(x).
\end{equation}
We will consistently denote by $\mathcal{L}$ a link and by $L$ the number of components of $\mathcal{L}$. The irreducible $U_{q}(\mathfrak{g})$ modules associated to $\mathcal{L}$ will be labeled by their highest weights, and thus by Young diagrams. We usually denote them in vector form $\vec{A}=(A^{1},\ldots ,A^{L})$. Let $\vec{x}=(x^{1},\ldots ,x^{L})$ be $L$ sets of variables, each of which is associated to a component of $\mathcal{L}$, and let $\vec{\mu}=(\mu ^{1},\ldots ,\mu ^{L})\in \mathcal{P}^{L}$ be a tuple of $L$ partitions. 
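The combinatorial meaning of $\mathfrak{z}_\mu$ can be illustrated directly: the conjugacy class of cycle type $\mu$ in $S_n$ has $n!/\mathfrak{z}_\mu$ elements, so these class sizes must sum to $|S_n|=n!$. A standalone sketch (not the paper's code) checking this:

```python
from math import factorial

def partitions(n, maxpart=None):
    # generate all partitions of n as weakly decreasing tuples
    if maxpart is None:
        maxpart = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def z_mu(mu):
    # z_mu = prod_j j^{m_j} * m_j!, with m_j the multiplicity of the part j;
    # this is the order of the centralizer of a permutation of cycle type mu
    z = 1
    for j in set(mu):
        m = mu.count(j)
        z *= j ** m * factorial(m)
    return z

# conjugacy class sizes n!/z_mu must sum to |S_n| = n!
for n in range(1, 8):
    assert sum(factorial(n) // z_mu(mu) for mu in partitions(n)) == factorial(n)
```

The same quantity reappears below in the multi-component form $\mathfrak{z}_{\vec\mu}$ and in the definition of the orthogonal Chern-Simons partition function.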
Define:
\begin{align*}
& [\mu ]=\prod_{i=1}^{\ell (\mu )}[\mu _{i}], & & [\vec{\mu}]=\prod_{\alpha =1}^{L}[\mu ^{\alpha }], & & \mathfrak{z}_{\vec{\mu}}=\prod_{\alpha =1}^{L}\mathfrak{z}_{\mu ^{\alpha }}, \\
& \chi _{\vec{A}}(C_{\vec{\mu}})=\prod_{\alpha =1}^{L}\chi _{A^{\alpha }}(C_{\mu ^{\alpha }}), & & s_{\vec{A}}(\vec{x})=\prod_{\alpha =1}^{L}s_{A^{\alpha }}(x^{\alpha }), & & p_{\vec{\mu}}(\vec{x})=\prod_{\alpha =1}^{L}p_{\mu ^{\alpha }}(x^{\alpha }).
\end{align*}
When we consider the orthogonal quantum group invariants, we need to study the Brauer algebra $Br_{n}$, which contains the group algebra $\mathbb{C}[S_{n}]$ as a direct summand. Thus all the irreducible representations of $S_{n}$ are also irreducible representations of $Br_{n}$, labeled by partitions of the integer $n$. Indeed, the set of irreducible representations of $Br_{n}$ is in bijection with the set of partitions of the integers $n-2k$, where $k=0,1,\cdots ,[\frac{n}{2}]$ \cite{Ram2, Wen}. Thus the semi-simple algebra $Br_{n}$ can be decomposed into a direct sum of simple algebras:
\begin{equation}
Br_{n}\cong \bigoplus_{k=0}^{[\frac{n}{2}]}\bigoplus_{\lambda \vdash n-2k}M_{d_{\lambda }\times d_{\lambda }}(\mathbb{C}).
\end{equation}
The work of Beliakova and Blanchet \cite{BB} constructed an explicit basis for the above decomposition. An up and down tableau $\Lambda =(\lambda _{1},\lambda _{2},\cdots ,\lambda _{n})$ is a tuple of $n$ Young diagrams such that $\lambda _{1}=(1)$ and each $\lambda _{i}$ is obtained by adding or removing one box from $\lambda _{i-1}$. Let $\lambda $ be a partition of $n-2k$. We write $|\Lambda |=\lambda $ if $\lambda _{n}=\lambda $, and we say that the up and down tableau $\Lambda $ is of shape $\lambda $. There is a minimal path idempotent $p_{\Lambda }\in Br_{n}$ associated to each $\Lambda $. 
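The decomposition of $Br_n$ can be checked numerically: counting up-and-down tableaux of shape $\lambda$ (walks in the Young lattice starting at $(1)$) gives the block sizes $d_\lambda$, and $\sum_\lambda d_\lambda^2$ must equal $\dim Br_n=(2n-1)!!$, the number of Brauer diagrams on $n$ strands. A standalone sketch (not the paper's code):

```python
def add_remove(p):
    # all partitions obtained from the tuple p by adding or removing one box
    res = set()
    for i in range(len(p) + 1):          # add a box in row i (or start a new row)
        new = list(p)
        if i < len(p):
            new[i] += 1
        else:
            new.append(1)
        if all(new[j] >= new[j + 1] for j in range(len(new) - 1)):
            res.add(tuple(new))
    for i in range(len(p)):              # remove a box from row i
        new = list(p)
        new[i] -= 1
        if new[i] == 0:
            new.pop(i)
        if all(new[j] >= new[j + 1] for j in range(len(new) - 1)):
            res.add(tuple(new))
    return res

def updown_counts(n):
    # d[lam] = number of up-and-down tableaux (lam_1 = (1), n diagrams) ending at lam
    d = {(1,): 1}
    for _ in range(n - 1):
        nxt = {}
        for lam, c in d.items():
            for mu in add_remove(lam):
                nxt[mu] = nxt.get(mu, 0) + c
        d = nxt
    return d

def double_factorial(m):
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out

# sum of squared block sizes equals dim Br_n = (2n-1)!!
for n in (2, 3, 4, 5):
    d = updown_counts(n)
    assert sum(c * c for c in d.values()) == double_factorial(2 * n - 1)
```

For $n=3$ this reproduces $d_{(3)}=1$, $d_{(2,1)}=2$, $d_{(1,1,1)}=1$, $d_{(1)}=3$, with $1+4+1+9=15=5!!$.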
Then the minimal central idempotent $\pi _{\lambda }$ of $Br_{n}$ corresponding to the irreducible representation labeled by $\lambda $ is given by
\begin{equation}
\pi _{\lambda }=\sum_{|\Lambda |=\lambda }p_{\Lambda }.
\end{equation}
In particular, the dimension $d_{\lambda }$ of the irreducible representation is the number of up and down tableaux of shape $\lambda $. More details can be found in \cite{BB, Wen}. The tables of characters and the orthogonality relations can be found in \cite{Ram1, Ram2, Ram3}. The values of a character of $Br_{n}$ are completely determined by its values on the set of elements $e^{k}\otimes \gamma _{\lambda }$, where $e$ is the conjugacy class of $e_{1},\cdots ,e_{n-1}$ and $\gamma _{\lambda }$ is the conjugacy class in $S_{n-2k}$ labeled by the partition $\lambda $ of $n-2k$. The notation $e^{k}\otimes \gamma _{\lambda }$ stands for the tangle in the following diagram:
\begin{align*}
& \ e_{0}\ \ \ \ \ e_{2}\ \ \ \ \cdots \ \ \ \ \ e_{2k}\ \ \ \ \ \ \gamma _{\lambda }\ \ \ \\
& {\includegraphics[height=1.0153in, width=2.1318in]{Brauerconjugacyclass.eps}}
\end{align*}
where $\Gamma _{\lambda }$ is a diagram in the conjugacy class of $S_{n-2k}$ labeled by a partition $\lambda $ of $n-2k$. Denote by $\chi _{A}$ the character of the irreducible representation of $Br_{n}$ labeled by a partition $A\vdash n-2k$ for some $k$, and denote by $\chi _{B}^{S_{n}}$ the character of the irreducible representation of $S_{n}$ labeled by a partition $B\vdash n$. It is known that when $A$ is a partition of $n$, then $\chi _{A}(e^{m}\otimes \gamma _{\lambda })=0$ for all $m>0$ and partitions $\lambda \vdash n-2m$, while the values $\chi _{A}(\gamma _{\mu })$ for partitions $\mu \vdash n$ coincide with the characters $\chi _{A}^{S_{n}}(\gamma _{\mu })$ of the symmetric group $S_{n}$ \cite{Ram2}. \section{Labastida-Mari\~{n}o-Ooguri-Vafa type Conjecture for the colored Kauffman polynomial} \label{sec3}Let us first quickly review the original LMOV conjecture. 
For each link $\mathcal{L}$, the type-$A$ Chern-Simons partition function of $\mathcal{L}$ is defined by
\begin{equation}
Z_{CS}^{SL}(\mathcal{L};q,t;\overrightarrow{x})=\sum_{\overrightarrow{A}\in \mathcal{P}^{L}}W_{\overrightarrow{A}}^{SL}(\mathcal{L};q,t)s_{\overrightarrow{A}}(\overrightarrow{x})=1+\sum_{\overrightarrow{\mu }\neq \overrightarrow{0}}Z_{\overrightarrow{\mu }}^{SL}p_{\overrightarrow{\mu }}(\overrightarrow{x}),
\end{equation}
where $s_{\overrightarrow{A}}(\overrightarrow{x})$ are the Schur polynomials. The original LMOV conjecture describes a very subtle structure of $Z_{CS}^{SL}(\mathcal{L};q,t;\overrightarrow{x})$, and was proved by Kefeng Liu and Pan Peng \cite{LP1, LP2} based on the cabling technique and a careful degree analysis of the cut-and-join equation. As an application, the LMOV conjecture gives highly non-trivial relations between colored HOMFLY polynomials. The first such relation is the classical Lickorish-Millett theorem \cite{LiM}. The study of the colored Kauffman polynomials is more difficult. For instance, the definition of the Chern-Simons partition function for the orthogonal quantum groups involves the representations of the Brauer centralizer algebras, which admit more complicated orthogonality relations \cite{Ram1, Ram2, Ram3}. In a joint work with Lin Chen, we \cite{CC} rigorously formulated the orthogonal quantum group version of the LMOV conjecture by using the representations of the Brauer centralizer algebra. Now we set $\mathfrak{g}=so_{2N+1}$. Let $\mathcal{L}$ be a link with $L$ components and define
\begin{equation}
pb_{n}(z)=\overset{\infty }{\underset{j=-\infty }{\sum }}z_{j}^{n},\qquad pb_{\mu }(z)=\overset{\ell (\mu )}{\underset{i=1}{\prod}}pb_{\mu _{i}}(z),\qquad pb_{\overrightarrow{\mu }}(\overrightarrow{z})=\overset{L}{\underset{\alpha =1}{\prod}}pb_{\mu ^{\alpha }}(z^{\alpha }). 
\end{equation} Let $Z_{CS}^{SO}(\mathcal{L},q,t)$ be the orthogonal Chern-Simons partition function defined by \begin{equation} Z_{CS}^{SO}(\mathcal{L};q,t;\overrightarrow{z})=\sum_{\overrightarrow{\mu }\in \mathcal{P}^{L}}\frac{pb_{\overrightarrow{\mu }}(\overrightarrow{z})}{\mathfrak{z}_{\vec{\mu}}}\underset{\overrightarrow{A}\in \widehat{Br}_{|\overrightarrow{\mu }|}}{\sum }\chi _{\overrightarrow{A}}(\gamma _{\overrightarrow{\mu }})W_{\overrightarrow{A}}^{SO}(\mathcal{L};q,t), \end{equation} where $\mathfrak{z}_{\vec{\mu}}=\frac{||\overrightarrow{\mu }||!}{|C_{\overrightarrow{\mu }}|}$, $|\overrightarrow{\mu }|=(d^{1},...,d^{L})$, $\widehat{Br}_{|\overrightarrow{\mu }|}$ denotes the set $\widehat{Br}_{d^{1}}\times \cdots \times \widehat{Br}_{d^{L}}$ (every element is a representation of the Brauer algebra), $\overrightarrow{\mu }=(\mu ^{1},...,\mu ^{L})$ for partitions $\mu ^{i}$ of $d^{i}\in \mathbb{Z}$, and $\chi _{\overrightarrow{A}}(\gamma _{\overrightarrow{\mu }})=\overset{L}{\underset{i=1}{\prod }}\chi _{A^{i}}(\gamma _{\mu ^{i}})$ for the characters $\chi _{A^{i}}$ of $Br_{d^{i}}$ labeled by $A^{i}$. Expand the free energy \begin{equation} F_{CS}^{SO}(\mathcal{L};q,t;\overrightarrow{z})=\log Z_{CS}^{SO}(\mathcal{L};q,t;\overrightarrow{z})=\sum_{\overrightarrow{\mu }\neq \overrightarrow{0}}F_{\overrightarrow{\mu }}^{SO}(\mathcal{L};q,t)pb_{\overrightarrow{\mu }}(\overrightarrow{z}). \end{equation} Then the reformulated invariants are defined by \begin{equation} g_{\overrightarrow{\mu }}(\mathcal{L};q,t)=\sum_{k|\overrightarrow{\mu }}\frac{\mu (k)}{k}F_{\overrightarrow{\mu }/k}^{SO}(\mathcal{L};q^{k},t^{k}). \end{equation} The orthogonal LMOV conjecture was formulated by L. Chen and Q.
Chen \cite{CC} as follows. \begin{conjecture}[Orthogonal LMOV, Chen-Chen \protect\cite{CC}] \label{Main} \begin{equation*} \frac{\mathfrak{z}_{\vec{\mu}}[1]^{2}\cdot \lbrack g_{\overrightarrow{\mu }}(\mathcal{L};q,t)-g_{\overrightarrow{\mu }}(\mathcal{L};q,-t)]}{2[\overrightarrow{\mu }]}=\overset{\infty }{\sum_{g=0}}\sum_{\beta \in \mathbb{Z}}N_{\overrightarrow{\mu },g,\beta }(q-q^{-1})^{g}t^{\beta }, \end{equation*} where $N_{\overrightarrow{\mu },g,\beta }$ are integer coefficients which vanish for sufficiently large $g$ and $|\beta |$. \end{conjecture} This conjecture is a rigorous mathematical formulation of the LMOV type conjecture for the colored Kauffman polynomial, while the conjecture in \cite{BFM, M} emphasizes the relationship between the colored HOMFLY-PT and the colored Kauffman polynomials. The integer coefficients $N_{\overrightarrow{\mu },g,\beta }$ are closely related to the BPS numbers. \section{Infinite product formula for orthogonal Chern-Simons partition functions} \label{sec4}To derive an infinite product formula, we state the result for a knot first, since the notation in the computation for a knot is relatively simpler. \subsection{The case of a knot} Given $z=\{z_{i}\}_{-\infty <i<\infty }$, $x=\{x_{j}\}_{j\geq 1}$, define \begin{equation*} x\ast y=\{x_{i}\cdot y_{j}\}_{-\infty <i<\infty ,j\geq 1}. \end{equation*} We also define $z^{d}=\{z_{i}^{d}\}_{-\infty <i<\infty }$. The $d$-th Adams operation of a type-B Schur function is given by $sb_{A}(z^{d})$. Introduce the variables $q^{\rho }=\{-q^{2j-1}\}_{j\geq 1}$. We have \begin{equation} p_{n}(q^{\rho })=\frac{1}{[n]}, \end{equation} where we assume $|q|<1$. Set $w=z\ast q^{\rho }$; then we have \begin{equation} pb_{n}(w)=\frac{pb_{n}(z)}{[n]} \end{equation} and \begin{equation} pb_{\mu }(w)=\frac{pb_{\mu }(z)}{[\mu ]}.
\end{equation} Considering the free energy weighted by $pb_{\mu }(w)$, the orthogonal LMOV conjecture implies the following reformulation of the free energy: \begin{equation} F^{SO}(\mathcal{K};q,t;w)=\overset{\infty }{\underset{d=1}{\sum }}\underset{\mu \neq 0}{\sum }\frac{1}{d}g_{\mu }(\mathcal{K};q^{d},t^{d})pb_{\mu }(w^{d}) \end{equation} and \begin{equation} g_{\mu }(\mathcal{K};q,t)-g_{\mu }(\mathcal{K};q,-t)=\frac{2[\mu ]}{\mathfrak{z}_{\mu }[1]^{2}}\overset{\infty }{\sum_{g=0}}\sum_{\beta \in \mathbb{Z}}N_{\mu ,g,\beta }(q-q^{-1})^{g}t^{\beta }. \end{equation} There exist integers $n_{B,g,\beta }$ such that \begin{equation} \overset{\infty }{\sum_{g=0}}N_{B,g,\beta }(q-q^{-1})^{g}=\overset{\infty }{\sum_{g=0}}n_{B,g,\beta }\underset{k=0}{\overset{g}{\sum }}q^{g-2k}. \end{equation} By the orthogonal LMOV conjecture, the $N_{B,g,\beta }$ vanish for sufficiently large $g$ and $|\beta |$, thus the $n_{B,g,\beta }$ vanish for sufficiently large $g$ and $|\beta |$. We have \begin{eqnarray} &&F^{SO}(\mathcal{K};q,t;w)-F^{SO}(\mathcal{K};q,-t;w) \notag \\ &=&\overset{\infty }{\underset{d=1}{\sum }}\underset{\mu \neq 0}{\sum }\frac{1}{d}g_{\mu }(\mathcal{K};q^{d},t^{d})pb_{\mu }(w^{d})-\overset{\infty }{\underset{d=1}{\sum }}\underset{\mu \neq 0}{\sum }\frac{1}{d}g_{\mu }(\mathcal{K};q^{d},(-1)^{d}t^{d})pb_{\mu }(w^{d}) \notag \\ &=&\underset{d\in O\mathbb{Z}_{+}}{\sum }\underset{\mu \neq 0}{\sum }\frac{1}{d}(g_{\mu }(\mathcal{K};q^{d},t^{d})-g_{\mu }(\mathcal{K};q^{d},-t^{d}))pb_{d\mu }(w) \notag \\ &=&\underset{d\in O\mathbb{Z}_{+}}{\sum }\underset{\mu \neq 0}{\sum }\frac{1}{d}\frac{2[d\mu ]}{\mathfrak{z}_{\mu }[d]^{2}}\overset{\infty }{\sum_{g=0}}\underset{\beta \in \mathbb{Z}}{\sum }N_{\mu ,g,\beta }(q^{d}-q^{-d})^{g}t^{d\beta }pb_{d\mu }(w) \notag \\ &=&\underset{d\in O\mathbb{Z}_{+}}{\sum }\underset{\mu \neq 0}{\sum }\frac{1}{d}\frac{2}{\mathfrak{z}_{\mu }[d]^{2}}\overset{\infty }{\sum_{g=0}}\underset{\beta \in \mathbb{Z}}{\sum }N_{\mu ,g,\beta
}(q^{d}-q^{-d})^{g}t^{d\beta }pb_{d\mu }(z) \label{integer N} \\ &=&\underset{d\in O\mathbb{Z}_{+}}{\sum }\underset{\mu \neq 0}{\sum }\frac{1}{d}\frac{2}{\mathfrak{z}_{\mu }[d]^{2}}\overset{\infty }{\sum_{g=0}}\underset{\beta \in \mathbb{Z}}{\sum }\underset{k=0}{\overset{g}{\sum }}n_{\mu ,g,\beta }q^{(g-2k)d}t^{d\beta }pb_{d\mu }(z) \notag \\ &=&\underset{d\in O\mathbb{Z}_{+}}{\sum }\underset{\mu \neq 0}{\sum }\frac{1}{d}\frac{2}{\mathfrak{z}_{\mu }}\underset{m=1}{\overset{\infty }{\sum }}mq^{2md}\overset{\infty }{\sum_{g=0}}\underset{\beta \in \mathbb{Z}}{\sum }\underset{k=0}{\overset{g}{\sum }}n_{\mu ,g,\beta }q^{(g-2k)d}t^{d\beta }pb_{d\mu }(z) \notag \\ &=&\underset{d\in O\mathbb{Z}_{+}}{\sum }\underset{\mu \neq 0}{\sum }\frac{1}{d}\frac{2}{\mathfrak{z}_{\mu }}\overset{\infty }{\sum_{g=0}}\underset{\beta \in \mathbb{Z}}{\sum }\underset{m=1}{\overset{\infty }{\sum }}\underset{k=0}{\overset{g}{\sum }}mn_{\mu ,g,\beta }q^{(g-2k+2m)d}t^{d\beta }pb_{d\mu }(z) \notag \\ &=&\underset{\mu \neq 0}{\sum }\overset{\infty }{\sum_{g=0}}\underset{\beta \in \mathbb{Z}}{\sum }\underset{m=1}{\overset{\infty }{\sum }}\underset{k=0}{\overset{g}{\sum }}\frac{mn_{\mu ,g,\beta }}{\mathfrak{z}_{\mu }}\underset{d\in O\mathbb{Z}_{+}}{\sum }\frac{2}{d}q^{(g-2k+2m)d}t^{d\beta }pb_{d\mu }(z), \end{eqnarray} where $O\mathbb{Z}_{+}=\{1,3,5,...\}$ denotes the set of all positive odd integers.
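The Taylor expansion used in the middle of this chain is worth spelling out. Assuming the convention $[d]=q^{d}-q^{-d}$ (which is consistent with the unknot computation below, where dividing $\frac{t^{k}-t^{-k}}{q^{k}-q^{-k}}$ by $[k]$ produces $[k]^{2}$ in the denominator), one has for $|q|<1$

```latex
\begin{equation*}
\frac{1}{[d]^{2}}
=\frac{1}{(q^{d}-q^{-d})^{2}}
=\frac{q^{2d}}{(1-q^{2d})^{2}}
=\underset{m=1}{\overset{\infty }{\sum }}mq^{2md},
\end{equation*}
```

using the geometric-series derivative $\sum_{m\geq 1}mx^{m}=\frac{x}{(1-x)^{2}}$ with $x=q^{2d}$; this is exactly the expansion inserted in the third equality above.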
Now we analyze the following computation in detail: \begin{align*} & \underset{d\in O\mathbb{Z}_{+}}{\sum }\frac{2}{d}q^{(g-2k+2m)d}t^{d\beta }pb_{d\mu }(z) \\ & =\underset{d\in O\mathbb{Z}_{+}}{\sum }\frac{2}{d}q^{(g-2k+2m)d}t^{d\beta }\underset{j=1}{\overset{\ell (\mu )}{\prod }}\left( \underset{i=-\infty }{\overset{\infty }{\sum }}(z_{i})^{d\mu _{j}}\right) \\ & =\underset{d\in O\mathbb{Z}_{+}}{\sum }\frac{2}{d}q^{(g-2k+2m)d}t^{d\beta }\underset{i_{1},...,i_{\ell (\mu )}}{\sum }(z_{i_{1}}^{\mu _{1}}z_{i_{2}}^{\mu _{2}}\cdots z_{i_{\ell (\mu )}}^{\mu _{\ell (\mu )}})^{d} \\ & =\underset{i_{1},...,i_{\ell (\mu )}}{\sum }\underset{d\in O\mathbb{Z}_{+}}{\sum }\frac{2}{d}(q^{g-2k+2m}t^{\beta }z_{i_{1}}^{\mu _{1}}z_{i_{2}}^{\mu _{2}}\cdots z_{i_{\ell (\mu )}}^{\mu _{\ell (\mu )}})^{d}. \end{align*} Now we compute the series $\underset{d\in O\mathbb{Z}_{+}}{\sum }\frac{2}{d}x^{d}$ as follows: \begin{align*} \underset{d\in O\mathbb{Z}_{+}}{\sum }\frac{2}{d}x^{d}& =2\left( \overset{\infty }{\underset{d=1}{\sum }}\frac{1}{d}x^{d}-\underset{d\in 2\mathbb{Z}_{+}}{\sum }\frac{1}{d}x^{d}\right) \\ & =2\left( \overset{\infty }{\underset{d=1}{\sum }}\frac{1}{d}x^{d}-\overset{\infty }{\underset{d=1}{\sum }}\frac{1}{2d}x^{2d}\right) \\ & =2\left( -\log (1-x)+\frac{1}{2}\log (1-x^{2})\right) \\ & =\log \frac{1+x}{1-x}, \end{align*} where $2\mathbb{Z}_{+}=\{2,4,6,...\}$ denotes the set of all positive even integers.
Thus we obtain \begin{align*} & F^{SO}(\mathcal{K};q,t;w)-F^{SO}(\mathcal{K};q,-t;w) \\ & =\underset{\mu \neq 0}{\sum }\overset{\infty }{\sum_{g=0}}\underset{\beta \in \mathbb{Z}}{\sum }\underset{m=1}{\overset{\infty }{\sum }}\underset{k=0}{\overset{g}{\sum }}\frac{mn_{\mu ,g,\beta }}{\mathfrak{z}_{\mu }}\underset{d\in O\mathbb{Z}_{+}}{\sum }\frac{2}{d}q^{(g-2k+2m)d}t^{d\beta }pb_{d\mu }(z) \\ & =\underset{\mu \neq 0}{\sum }\overset{\infty }{\sum_{g=0}}\underset{\beta \in \mathbb{Z}}{\sum }\underset{m=1}{\overset{\infty }{\sum }}\underset{k=0}{\overset{g}{\sum }}\frac{mn_{\mu ,g,\beta }}{\mathfrak{z}_{\mu }}\underset{i_{1},...,i_{\ell (\mu )}}{\sum }\underset{d\in O\mathbb{Z}_{+}}{\sum }\frac{2}{d}(q^{g-2k+2m}t^{\beta }z_{i_{1}}^{\mu _{1}}z_{i_{2}}^{\mu _{2}}\cdots z_{i_{\ell (\mu )}}^{\mu _{\ell (\mu )}})^{d} \\ & =\underset{\mu \neq 0}{\sum }\overset{\infty }{\sum_{g=0}}\underset{\beta \in \mathbb{Z}}{\sum }\underset{m=1}{\overset{\infty }{\sum }}\underset{k=0}{\overset{g}{\sum }}\frac{mn_{\mu ,g,\beta }}{\mathfrak{z}_{\mu }}\underset{i_{1},...,i_{\ell (\mu )}}{\sum }\log \frac{1+q^{g-2k+2m}t^{\beta }z_{i_{1}}^{\mu _{1}}z_{i_{2}}^{\mu _{2}}\cdots z_{i_{\ell (\mu )}}^{\mu _{\ell (\mu )}}}{1-q^{g-2k+2m}t^{\beta }z_{i_{1}}^{\mu _{1}}z_{i_{2}}^{\mu _{2}}\cdots z_{i_{\ell (\mu )}}^{\mu _{\ell (\mu )}}} \\ & =\underset{\mu \neq 0}{\sum }\overset{\infty }{\sum_{g=0}}\underset{\beta \in \mathbb{Z}}{\sum }\underset{m=1}{\overset{\infty }{\sum }}\underset{k=0}{\overset{g}{\sum }}\frac{mn_{\mu ,g,\beta }}{\mathfrak{z}_{\mu }}\log \underset{i_{1},...,i_{\ell (\mu )}}{\prod }\frac{1+q^{g-2k+2m}t^{\beta }z_{i_{1}}^{\mu _{1}}z_{i_{2}}^{\mu _{2}}\cdots z_{i_{\ell (\mu )}}^{\mu _{\ell (\mu )}}}{1-q^{g-2k+2m}t^{\beta }z_{i_{1}}^{\mu _{1}}z_{i_{2}}^{\mu _{2}}\cdots z_{i_{\ell (\mu )}}^{\mu _{\ell (\mu )}}}. \end{align*} Define the symmetric product as shown in the following formula: \begin{equation*} \big\langle 1\pm \omega z^{\mu
}\big\rangle=\prod_{i_{1},\ldots ,i_{\ell (\mu )}}\Big(1\pm \omega z_{i_{1}}^{\mu _{1}}\cdots z_{i_{\ell (\mu )}}^{\mu _{\ell (\mu )}}\Big). \end{equation*} Thus we have \begin{align*} & F^{SO}(\mathcal{K};q,t;w)-F^{SO}(\mathcal{K};q,-t;w) \\ & =\underset{\mu \neq 0}{\sum }\overset{\infty }{\sum_{g=0}}\underset{\beta \in \mathbb{Z}}{\sum }\underset{m=1}{\overset{\infty }{\sum }}\underset{k=0}{\overset{g}{\sum }}\frac{mn_{\mu ,g,\beta }}{\mathfrak{z}_{\mu }}\log \frac{\big\langle 1+q^{g-2k+2m}t^{\beta }z^{\mu }\big\rangle}{\big\langle 1-q^{g-2k+2m}t^{\beta }z^{\mu }\big\rangle} \\ & =\log \underset{\mu \neq 0}{\prod }\overset{\infty }{\underset{g=0}{\prod }}\underset{\beta \in \mathbb{Z}}{\prod }\underset{m=1}{\overset{\infty }{\prod }}\underset{k=0}{\overset{g}{\prod }}\left( \frac{\big\langle 1+q^{g-2k+2m}t^{\beta }z^{\mu }\big\rangle}{\big\langle 1-q^{g-2k+2m}t^{\beta }z^{\mu }\big\rangle}\right) ^{\frac{mn_{\mu ,g,\beta }}{\mathfrak{z}_{\mu }}}. \end{align*} Now we obtain the infinite product formula for the orthogonal Chern-Simons partition function of knots. \begin{theorem}[Orthogonal Infinite Product Formula for Knots] The Chern-Simons partition function for orthogonal quantum group invariants can be expressed as the following infinite product formula: \begin{equation} \frac{Z_{CS}^{SO}(\mathcal{K};q,t;w)}{Z_{CS}^{SO}(\mathcal{K};q,-t;w)}=\underset{\mu \neq 0}{\prod }\overset{\infty }{\underset{g=0}{\prod }}\underset{\beta \in \mathbb{Z}}{\prod }\underset{m=1}{\overset{\infty }{\prod }}\underset{k=0}{\overset{g}{\prod }}\left( \frac{\big\langle 1+q^{g-2k+2m}t^{\beta }z^{\mu }\big\rangle}{\big\langle 1-q^{g-2k+2m}t^{\beta }z^{\mu }\big\rangle}\right) ^{\frac{mn_{\mu ,g,\beta }}{\mathfrak{z}_{\mu }}}. \end{equation} \end{theorem} \subsection{The case of a link} Now we consider the case of a link.
Given a link $\mathcal{L}$ of $L$ components, let $\overrightarrow{w}=(w^{1},...,w^{L})$ and $\overrightarrow{z}=(z^{1},...,z^{L})$ satisfy $w^{i}=z^{i}\ast q^{\rho }$ for $i=1,...,L$. We generalize the symmetric product to the case of links as follows: \begin{equation*} \big\langle 1\pm \omega (z^{1})^{\mu ^{1}}\cdots (z^{L})^{\mu ^{L}}\big\rangle=\prod_{i_{1,1},\ldots ,i_{1,\ell (\mu ^{1})},...,i_{L,1},\ldots ,i_{L,\ell (\mu ^{L})}}\Big(1\pm \omega \overset{L}{\prod_{\alpha =1}}\left( (z_{i_{\alpha ,1}}^{\alpha })^{\mu _{1}^{\alpha }}\cdots (z_{i_{\alpha ,\ell (\mu ^{\alpha })}}^{\alpha })^{\mu _{\ell (\mu ^{\alpha })}^{\alpha }}\right) \Big). \end{equation*} There exist integers $n_{\overrightarrow{B},g,\beta }$ such that \begin{equation*} \overset{\infty }{\sum_{g=0}}N_{\overrightarrow{B},g,\beta }(q-q^{-1})^{g}=\overset{\infty }{\sum_{g=0}}n_{\overrightarrow{B},g,\beta }\underset{k=0}{\overset{g}{\sum }}q^{g-2k}\text{.} \end{equation*} In a similar way, the infinite product formula for the orthogonal Chern-Simons partition function of links can be obtained as follows. \begin{theorem}[Orthogonal Infinite Product Formula for Links] The Chern-Simons partition function for orthogonal quantum group invariants can be expressed as the following infinite product formula: \begin{equation} \frac{Z_{CS}^{SO}(\mathcal{L};q,t;\overrightarrow{w})}{Z_{CS}^{SO}(\mathcal{L};q,-t;\overrightarrow{w})}=\underset{\overrightarrow{\mu }\neq \overrightarrow{0}}{\prod }\overset{\infty }{\underset{g=0}{\prod }}\underset{\beta \in \mathbb{Z}}{\prod }\underset{m=1}{\overset{\infty }{\prod }}\underset{k=0}{\overset{g}{\prod }}\left( \frac{\big\langle 1+q^{g-2k+2m}t^{\beta }(z^{1})^{\mu ^{1}}\cdots (z^{L})^{\mu ^{L}}\big\rangle}{\big\langle 1-q^{g-2k+2m}t^{\beta }(z^{1})^{\mu ^{1}}\cdots (z^{L})^{\mu ^{L}}\big\rangle}\right) ^{\frac{mn_{\overrightarrow{\mu },g,\beta }}{\mathfrak{z}_{\overrightarrow{\mu }}}}. \end{equation} \end{theorem} \subsection{The case of the
unknot} In Proposition 10.2 of \cite{CC}, we computed the free energy associated to the orthogonal Chern-Simons partition function of the unknot: \begin{equation*} F^{SO}(\bigcirc ;q,t;z)=\underset{k=1}{\overset{\infty }{\sum }}\frac{1}{k}\left( 1+\frac{t^{k}-t^{-k}}{q^{k}-q^{-k}}\right) pb_{k}(z)\text{.} \end{equation*} Thus we have \begin{align*} & F^{SO}(\bigcirc ;q,t;w)-F^{SO}(\bigcirc ;q,-t;w) \\ & =\underset{k\in O\mathbb{Z}_{+}}{\sum }\frac{2}{k}\frac{t^{k}-t^{-k}}{q^{k}-q^{-k}}pb_{k}(w) \\ & =\underset{k\in O\mathbb{Z}_{+}}{\sum }\frac{2}{k}\frac{t^{k}-t^{-k}}{[k]^{2}}pb_{k}(z). \end{align*} Compared with (\ref{integer N}), we obtain \begin{equation} N_{(1),0,1}=-N_{(1),0,-1}=1. \end{equation} All other coefficients $N_{B,g,\beta }$ are zero. Thus we have \begin{equation} n_{\mu ,g,\beta }=\delta _{\mu ,(1)}\delta _{g,0}(\delta _{\beta ,1}-\delta _{\beta ,-1}) \end{equation} and \begin{equation} \frac{Z_{CS}^{SO}(\bigcirc ;q,t;w)}{Z_{CS}^{SO}(\bigcirc ;q,-t;w)}=\underset{m=1}{\overset{\infty }{\prod }}\underset{i=-\infty }{\overset{\infty }{\prod }}\left( \frac{(1+q^{2m}tz_{i})(1-q^{2m}t^{-1}z_{i})}{(1-q^{2m}tz_{i})(1+q^{2m}t^{-1}z_{i})}\right) ^{m}. \end{equation} \subsection{Symmetry Property of $q\rightarrow q^{-1}$ in the Infinite Product Structure} In this subsection, we discuss a basic symmetry property of the infinite product structure obtained from the orthogonal LMOV partition function. Here we focus on the knot case only; the case of links follows from exactly the same analysis. In the derivation of the infinite product formula, we assumed $|q|<1$ for the Taylor expansion of $\frac{1}{[d]^{2}}$.
In the case of $|q|>1$, the Taylor expansion is given by \begin{equation} \frac{1}{[d]^{2}}=\underset{m=1}{\overset{\infty }{\sum }}mq^{-2md}. \end{equation} Therefore, the infinite product formula reads \begin{eqnarray} &&\frac{Z_{CS}^{SO}(\mathcal{K};q,t;w)}{Z_{CS}^{SO}(\mathcal{K};q,-t;w)} \notag \\ &=&\underset{\mu \neq 0}{\prod }\overset{\infty }{\underset{g=0}{\prod }}\underset{\beta \in \mathbb{Z}}{\prod }\underset{m=1}{\overset{\infty }{\prod }}\underset{k=0}{\overset{g}{\prod }}\left( \frac{\big\langle 1+q^{g-2k-2m}t^{\beta }z^{\mu }\big\rangle}{\big\langle 1-q^{g-2k-2m}t^{\beta }z^{\mu }\big\rangle}\right) ^{\frac{mn_{\mu ,g,\beta }}{\mathfrak{z}_{\mu }}} \notag \\ &=&\underset{\mu \neq 0}{\prod }\overset{\infty }{\underset{g=0}{\prod }}\underset{\beta \in \mathbb{Z}}{\prod }\underset{m=1}{\overset{\infty }{\prod }}\underset{k=0}{\overset{g}{\prod }}\left( \frac{\big\langle 1+q^{-g+2(g-k)-2m}t^{\beta }z^{\mu }\big\rangle}{\big\langle 1-q^{-g+2(g-k)-2m}t^{\beta }z^{\mu }\big\rangle}\right) ^{\frac{mn_{\mu ,g,\beta }}{\mathfrak{z}_{\mu }}} \notag \\ &=&\underset{\mu \neq 0}{\prod }\overset{\infty }{\underset{g=0}{\prod }}\underset{\beta \in \mathbb{Z}}{\prod }\underset{m=1}{\overset{\infty }{\prod }}\underset{k=0}{\overset{g}{\prod }}\left( \frac{\big\langle 1+q^{-g+2k-2m}t^{\beta }z^{\mu }\big\rangle}{\big\langle 1-q^{-g+2k-2m}t^{\beta }z^{\mu }\big\rangle}\right) ^{\frac{mn_{\mu ,g,\beta }}{\mathfrak{z}_{\mu }}} \notag \\ &=&\underset{\mu \neq 0}{\prod }\overset{\infty }{\underset{g=0}{\prod }}\underset{\beta \in \mathbb{Z}}{\prod }\underset{m=1}{\overset{\infty }{\prod }}\underset{k=0}{\overset{g}{\prod }}\left( \frac{\big\langle 1+(q^{-1})^{g-2k+2m}t^{\beta }z^{\mu }\big\rangle}{\big\langle 1-(q^{-1})^{g-2k+2m}t^{\beta }z^{\mu }\big\rangle}\right) ^{\frac{mn_{\mu ,g,\beta }}{\mathfrak{z}_{\mu }}}, \end{eqnarray} where in the third equality we relabeled $k\rightarrow g-k$. This is the symmetry of $q\rightarrow q^{-1}$ for the infinite product formula.
\section{Acknowledgments} We thank Pan Peng for many valuable discussions and Shengmao Zhu for giving many helpful suggestions and for proofreading the paper.
\section{Introduction} String theory has several remarkable features. Most interesting are those that are not present for point particles, but are rather linked to the extended nature of the string, like the appearance of stringy symmetries. These are often discovered when compactifying a ten-dimensional superstring theory down to lower dimensions. One prominent example of a stringy symmetry, which becomes manifest during the compactification process, is T-duality. It relies on the existence of string winding modes. By interchanging winding and momentum excitations, T-duality renders very small and very large compact dimensions completely indistinguishable. Moreover, T-duality allows for the existence of new `geometries' as consistent string backgrounds. These are certain generalizations of standard Riemannian spaces and are often called non-geometric string backgrounds \cite{Dabholkar:2005ve}. The dynamics of a string in such a non-geometric background is governed by the interplay between winding and momentum modes. This gives rise to many new phenomena which are not present in a geometric background with momentum modes only. One prominent example of such new effects is a new kind of spatial non-commutativity and non-associativity of the form $[ X^I(\tau, \sigma), X^J(\tau, \sigma)]\simeq P^K$ resp. $[ [X^I(\tau, \sigma), X^J(\tau, \sigma),X^K(\tau, \sigma)]]\neq 0$ of the closed string coordinates in the presence of non-geometric $Q$- and $R$-fluxes, as has been argued in \cite{Blumenhagen:2010hj,Lust:2010iy,Blumenhagen:2011ph,Condeescu:2012sp,Andriot:2012vb,Mylonas:2012pg,Bakas:2013jwa,Blumenhagen:2013zpa,Mylonas:2013jha}. In analogy to Heisenberg's well-known uncertainty relation between position and momentum, these relations describe a stringy, limited resolution of the string's position, which can be interpreted as a fuzzy non-commutative and non-associative space.
These effects arise at the interface between large and small compact dimensions, which are very different for a string compared to a point particle. Furthermore, non-geometric backgrounds extend the landscape of string theory considerably and perhaps will one day help to find a string compactification which reproduces the phenomenology of our universe. Thus it is important to understand the properties of such backgrounds in more detail. In this paper we want to discuss the construction of non-geometric backgrounds and analyze their spectrum in type IIA/IIB superstring theory. We focus on the NS/NS sector, which consists of three different massless string excitations: the symmetric metric $g_{ij}$, the antisymmetric $B$-field $B_{ij}$ and a scalar $\phi$ called the dilaton. Their complete dynamics is governed by string field theory in $D$ dimensions. But in general, string field theory is much too involved to be evaluated explicitly. Hence an effective field theory is used in the low energy limit. It is defined by the following action \begin{equation}\label{eqn:nsnsaction} S_\mathrm{NS} = \int\mathrm{d}^{D}x\, \sqrt{-g} e^{-2\phi} \left(\mathcal R + 4 \partial_\mu \phi \partial^\mu \phi - \frac{1}{12} H_{\mu\nu\rho} H^{\mu\nu\rho} \right)\,, \end{equation} which describes the NS/NS sector of an $\mathcal{N}=2$ supergravity. Due to its construction, this effective theory only considers strings with momentum modes. In order not to violate the low energy limit, the compact dimensions described by \eqref{eqn:nsnsaction} have to be large. Due to this limitation the stringy symmetries, in particular T-duality, are not implemented in this action. Because non-geometric backgrounds depend on the interplay between winding and momentum modes, this action is only of limited use when studying the properties of non-geometric backgrounds. Thus the fields $g_{ij}$, $B_{ij}$ and $\phi$ are in general ill defined (either globally or even locally) for a non-geometric background.
For non-geometric backgrounds which are T-dual to geometric ones, a field redefinition can be performed to obtain a well defined geometric description \cite{Andriot:2011uh,Andriot:2012wx,Andriot:2012an,Blumenhagen:2012nk,Blumenhagen:2012nt,Blumenhagen:2013aia,Andriot:2013xca}. But for all other non-geometric backgrounds, which are in the following called truly non-geometric, this is not possible. Double Field Theory (DFT) \cite{Siegel:1993th,Hull:2009mi,Hull:2009zb,Hohm:2010jy,Hohm:2010pp} is a promising approach to overcome these problems. In particular, DFT allows us to make T-duality a manifest symmetry of the effective theory. Hence, we will investigate consistent Scherk-Schwarz like dimensional reductions \cite{Scherk:1978ta,Scherk:1979zr} of the $2 D$-dimensional DFT \cite{Aldazabal:2011nj,Grana:2012rr}. Recently, such reductions were also discussed in the context of generalized geometry \cite{Lee:2014mla}. They give rise to a gauged supergravity in the remaining $d$ dimensions, exhibiting non-Abelian gauge symmetries together with a scalar potential on their moduli space (the parameters which describe the shape of the background in the internal directions). This potential can be used to stabilize some of the moduli and thus removes a lot of arbitrariness when choosing the explicit shape of a background. Furthermore, the scalar potential possesses phenomenologically interesting properties, like a non-vanishing cosmological constant \cite{Samtleben:2008pe}. Similar effects arise in massive type II theories, which were discussed in DFT \cite{Hohm:2011cp}, too. We find solutions of the field equations of the $d$-dimensional gauged supergravity and lift them up to solutions of the full DFT field equations $\mathcal{R}_{MN}=0$. Here $\mathcal{R}_{MN}$ is the generalized Ricci tensor of the double geometry in $2 D$ dimensions. This uplift is possible when the Scherk-Schwarz ansatz exhibits $2 (D-d)$ Killing vectors \cite{Scherk:1978ta,Scherk:1979zr,Pons:2003ka}.
Among all the different gauged supergravities that can arise from a Scherk-Schwarz ansatz, we focus on the ones with a Minkowski vacuum. Such theories exhibit a minimum of the scalar potential at which the potential vanishes. This restriction puts additional constraints on the covariant fluxes $\mathcal{F}_{ABC}$ \cite{Aldazabal:2011nj,Geissbuhler:2013uka}, which specify the explicit form of the Scherk-Schwarz ansatz. As we will show, these fluxes are directly connected to the fluxes $H_{ijk}$, $f^i_{jk}$, $Q^{ij}_k$ and $R^{ijk}$ which are widely used to characterize non-geometric backgrounds. Similar calculations were discussed e.g. in \cite{Geissbuhler:2013uka,Blumenhagen:2013hva}. For gauged supergravities with a Minkowski vacuum, we discuss small fluctuations around the vacuum. These give rise to $(D-d)^2$ scalar fields and $2(D-d)$ vector gauge bosons. We calculate the masses of the scalars and the gauge group of the vectors. Because DFT is constructed as a background independent low energy description of string theory, the spectrum obtained in this way should be identical to CFT calculations, but we leave an explicit verification to future work. In order to provide explicit examples for non-geometric spaces, we restrict ourselves to $(D-d)=3$ internal dimensions. Here we provide all supergravities with a Minkowski vacuum and a consistent uplift. There are only two of them, which we call the single elliptic and the double elliptic case. The double geometries in the internal directions of both cases correspond to fibrations, where the doubled fiber is a four-dimensional torus $\mathrm{T}^4$ over a doubled circle as base. The double elliptic case is not T-dual to a geometric description, and its generalized geometric description within DFT has been discussed in \cite{ReviewDFT:2013hlb}. It exhibits $H$-, $f$- and $Q$-flux at the same time. Nevertheless it is compatible with the strong constraint of DFT. Thus it is a truly non-geometric space.
It cannot be written in terms of a globally well defined metric, $B$-field and dilaton. Nevertheless, as discussed recently, this notion of non-geometric backgrounds can be properly defined in DFT \cite{Andriot:2012an}. In particular, generalized coordinate transformations can be used as the so-called patching conditions for non-geometric spaces. This is of particular importance for truly non-geometric spaces that are not T-dual to any geometric spaces. In fact, without the use of the DFT formalism, the dimensional reduction on these non-geometric backgrounds could not have been discussed so far, a fact which clearly demonstrates the necessity to go beyond the standard effective string action when one wants to explore the full landscape \cite{Susskind:2003kw,Ashok:2003gk} of consistent string compactifications. We give explicit expressions for the Killing vectors, the twist of the Scherk-Schwarz ansatz, the masses of the scalar bosons and the structure coefficients of the gauge bosons' gauge group. All these results are in accordance with the CFT calculation for an asymmetric orbifold presented in \cite{Condeescu:2013yma}. Thus we conjecture that the double elliptic case is the low energy description of superstring theory in this background. This shows that non-geometric backgrounds are not a mere theoretical construct, but lead to effective theories which are beyond the case of SUGRA. The paper is organized as follows: In section~\ref{sec:DFTreview}, we review some important features and notions of DFT needed throughout the paper. Section~\ref{sec:DFTbackgrounds} defines the Scherk-Schwarz ansatz in terms of the twist $U\indices{^M_N}$ and connects this twist to the covariant fluxes $\mathcal{F}\indices{_A_B_C}$. It discusses several constraints that the covariant fluxes have to fulfill and finally presents the action of the gauged supergravity obtained by the Scherk-Schwarz ansatz.
Gauged supergravities with a Minkowski vacuum are discussed in section~\ref{sec:minkowski}. Here further constraints on the covariant fluxes are defined. The masses of the scalar bosons, which arise through fluctuations around the vacuum, are calculated. For $(D-d)=3$ all flux constraints are solved explicitly. Finally, section~\ref{sec:constrtwists} presents the explicit construction of the twist $U\indices{^M_N}$ and the Killing vectors $K\indices{_I^J}$. It also discusses how different values for the $B$-field, the $\beta$-field and the metric arise in the elliptic and double elliptic cases through field redefinitions. A conclusion about the results of the paper is drawn in section~\ref{sec:conclusions}. \section{Double field theory}\label{sec:DFTreview} In this section we review some important properties of DFT which will be relevant for the calculations in this paper. We start by introducing the DFT action and its various symmetries. Afterwards we present the equations of motion which arise from the variation of this action. Finally we discuss how fluxes arise in DFT. \subsection{Action and its symmetries}\label{sec:dftandsym} DFT is an effective description of closed string theory that takes into account both momentum and winding modes in compact space time. Hence, in addition to the $D$ space time coordinates $x$ (conjugate to the momentum modes), it introduces $D$ new coordinates $\tilde x$ (conjugate to the winding modes of the string). In total there are now $2 D$ coordinates which are combined into the $2D$-dimensional vector $X^M=\begin{pmatrix}\tilde x_i & x^i\end{pmatrix}$. To lower and raise the index $M$ of this vector, the O$(D,D)$ invariant metric \begin{equation}\label{eqn:etaMN} \eta_{MN}=\begin{pmatrix} 0 & \delta^i_j \\ \delta_i^j & 0 \end{pmatrix} \quad \text{and its inverse} \quad \eta^{MN}=\begin{pmatrix} 0 & \delta_i^j \\ \delta^i_j & 0 \end{pmatrix} \end{equation} are used.
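As a small illustration (our own remark, using only the definitions above), lowering the index of $X^M$ with \eqref{eqn:etaMN} simply exchanges the winding-type and momentum-type components, so that the O$(D,D)$ inner product pairs momenta with windings:

```latex
\begin{equation*}
X_M=\eta_{MN}X^N=\begin{pmatrix} x^i & \tilde x_i \end{pmatrix}\,,
\qquad
Y^M X_M=\tilde y_i\,x^i+y^i\,\tilde x_i\,.
\end{equation*}
```

This off-diagonal pairing is the algebraic origin of the O$(D,D)$ structure that T-duality acts on.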
Furthermore, one defines the partial derivative according to $\partial^M=\begin{pmatrix}\partial_i & \tilde{\partial}^i\end{pmatrix}$. Now the DFT action can be expressed in the generalized metric formulation \cite{Hohm:2010pp} as \begin{equation}\label{eqn:dftaction} S_\mathrm{DFT} = \int \mathrm{d}^{2D} X\,e^{-2\phi'} \mathcal{R} \end{equation} where \begin{align}\label{eqn:genricciscalar} \mathcal{R} = 4 \mathcal{H}^{MN} \partial_M \partial_N \phi' - \partial_M \partial_N \mathcal{H}^{MN} &- 4\mathcal{H}^{MN} \partial_M \phi' \partial_N \phi' + 4 \partial_M \mathcal{H}^{MN} \partial_N \phi' \nonumber \\ + \frac{1}{8} \mathcal{H}^{MN} \partial_M \mathcal{H}^{KL} \partial_N \mathcal{H}_{KL} &- \frac{1}{2} \mathcal{H}^{MN}\partial_N \mathcal{H}^{KL}\partial_L\mathcal{H}_{MK} \end{align} is called the generalized Ricci or curvature scalar and \begin{equation}\label{eqn:genmetricBg} \mathcal{H}^{MN}=\begin{pmatrix} g_{ij} - B_{ik}g^{kl}B_{lj} & -B_{ik}g^{kj} \\ g^{ik} B_{kj} & g^{ij} \end{pmatrix} \end{equation} is the generalized metric. It combines the metric $g_{ij}$ and the $B$-field $B_{ij}$ into an O$(D,D)$ valued, symmetric tensor with the properties \begin{equation} \mathcal{H}^{MN} \eta_{ML} \mathcal{H}^{LK} = \eta^{NK} \quad \text{and} \quad \mathcal{H}^{MN} = \mathcal{H}^{NM} \,. \end{equation} The dilaton $\phi$ is encoded in the O$(D,D)$ singlet \begin{equation} \phi' = \phi - \frac{1}{2} \log \sqrt{-g}\,. \end{equation} Because it only consists of covariant quantities, the action \eqref{eqn:dftaction} possesses a manifest, global O$(D,D)$ symmetry. The symmetry is global only, but the DFT action \eqref{eqn:dftaction} has further symmetries which are local. In order to display one of them, we express the generalized metric in terms of the generalized vielbein $E\indices{^A_M}$, employing a vielbein formalism, as originally introduced by Siegel in \cite{Siegel:1993th} and applied to DFT in \cite{Hohm:2010xe}.
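The first of the two properties of $\mathcal{H}$ can be verified directly; the following short block-matrix computation is our own consistency check, not part of the original derivation. Writing \eqref{eqn:genmetricBg} schematically as blocks built from $g$ and $B$ and using $\eta=\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$,

```latex
\begin{equation*}
\mathcal{H}\,\eta\,\mathcal{H}
=\begin{pmatrix} -Bg^{-1} & g-Bg^{-1}B \\ g^{-1} & g^{-1}B \end{pmatrix}
 \begin{pmatrix} g-Bg^{-1}B & -Bg^{-1} \\ g^{-1}B & g^{-1} \end{pmatrix}
=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
=\eta\,,
\end{equation*}
```

where e.g. the upper-left block is $-Bg^{-1}(g-Bg^{-1}B)+(g-Bg^{-1}B)g^{-1}B=-B+Bg^{-1}Bg^{-1}B+B-Bg^{-1}Bg^{-1}B=0$, and the off-diagonal blocks reduce to the identity in the same way. This confirms that $\mathcal{H}$ is indeed an O$(D,D)$ element for any $g$ and $B$.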
We thus express the generalized metric in terms of frame fields via \begin{equation}\label{eqn:genmetricformvielbein} \mathcal{H}^{MN} = E\indices{^A_M} \delta_{AB} E\indices{^B_N}\, . \end{equation} In the following it is convenient to slightly adapt the frame formalism of \cite{Siegel:1993th,Hohm:2010xe} in such a way that the frame field can be viewed as a proper group element, as has been done in \cite{Geissbuhler:2011mx}. The flat generalized metric is then given by \begin{equation} \delta_{AB} = \begin{pmatrix} \eta^{a b} & 0 \\ 0 & \eta_{a b} \end{pmatrix}\,, \end{equation} where $\eta_{ab}$ and its inverse $\eta^{ab}$ are the usual $D$-dimensional Minkowski metric. From now on we distinguish between the indices $A, B, \dots$ and $M, N, \dots\;$. The former are called flat and the latter curved. As already mentioned, the generalized metric $\mathcal{H}^{MN}$ is an O$(D,D)$ valued tensor, and here the generalized vielbein is O$(D,D)$ valued, too: \begin{equation}\label{eqn:vielbeinodd} E\indices{^A_M} \eta^{MN} E\indices{^B_N} = \eta^{AB} \quad \text{with} \quad \eta^{AB}=\begin{pmatrix} 0 & \delta_a^b \\ \delta^a_b & 0 \end{pmatrix}\,. \end{equation} Here $\eta^{AB}$ in flat indices does not differ from $\eta^{MN}$ in curved ones. Let us now inspect the local Lorentz group in some detail. Consider the local double Lorentz transformation of the generalized vielbein \begin{equation}\label{eqn:localodxodsym} \tilde E\indices{^A_M} = T\indices{^A_B} E\indices{^B_M}\,. \end{equation} Requiring that this leaves the generalized metric invariant, the transformation has to fulfill \begin{equation}\label{eqn:trafoo2d} T\indices{^A_C} \delta^{CD} T\indices{^B_D} = \delta^{AB}\,. \end{equation} In addition, the transformed generalized vielbein $\tilde E\indices{^A_M}$ still has to satisfy \eqref{eqn:vielbeinodd}, which gives rise to the further constraint \begin{equation}\label{eqn:trafoodd} T\indices{^A_C} \eta^{CD} T\indices{^B_D} = \eta^{AB}\,.
\end{equation} Transformations that simultaneously solve \eqref{eqn:trafoo2d} and \eqref{eqn:trafoodd} belong to the local subgroup O$({D-1},1)_\mathrm{R}\times$O$(1,{D-1})_\mathrm{L}$. In order to examine their explicit form, we transform $\eta^{AB}$ into the diagonal form \begin{gather}\label{eqn:trafobaredind} R\indices{^{\bar A}_C} \eta^{CD} R\indices{^{\bar B}_D} = \eta^{\bar A\bar B} = \begin{pmatrix} - \eta_{\bar a \bar b} & 0 \\ 0 & \eta^{\bar a \bar b} \end{pmatrix} \\ \text{with} \quad R\indices{^{\bar A}_B} = \frac{1}{\sqrt{2}} \begin{pmatrix} \delta_{\bar a}^b & -\eta_{\bar a b} \\ \eta^{\bar a b} & \delta^{\bar a}_b \end{pmatrix} \quad \text{and} \quad R\indices{_{\bar A}^B} = \frac{1}{\sqrt{2}} \begin{pmatrix} \delta^{\bar a}_b & -\eta^{\bar a b} \\ \eta_{\bar a b} & \delta_{\bar a}^b \end{pmatrix}\,. \end{gather} Here, bared indices are used in order to distinguish between the different representations of the invariant metric\footnote{It is important to distinguish this notation from the one introduced in \cite{Hohm:2010xe}. There, a tensor $T\indices{_{\bar a}^{\bar b}}$ is related to $T\indices{^{\bar a}_{\bar b}}$ by raising and lowering the bared indices with the Minkowski metric $\eta^{ab}$ and $\eta_{ab}$, respectively. In our notation, by contrast, $T\indices{_{\bar a}^{\bar b}}$ and $T\indices{^{\bar a}_{\bar b}}$ are totally unrelated objects.}. In the same fashion, the bared version \begin{equation} R\indices{^{\bar A}_C} \delta^{CD} R\indices{^{\bar B}_D} = \delta^{\bar A\bar B} = \begin{pmatrix} \eta_{\bar a \bar b} & 0 \\ 0 & \eta^{\bar a \bar b} \end{pmatrix} \end{equation} of the flat generalized metric is calculated.
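The action of $R\indices{^{\bar A}_B}$ can again be illustrated numerically (an added sketch with matrix products standing in for index contractions; the Minkowski matrix $\eta_m$ is numerically its own inverse, so raised and lowered versions coincide as matrices):

```python
import numpy as np

D = 4
eta_m = np.diag([-1.0] + [1.0] * (D - 1))       # D-dim Minkowski metric (own inverse)

# O(D,D) metric in curved/flat indices and the flat generalized metric delta_AB.
eta = np.block([[np.zeros((D, D)), np.eye(D)],
                [np.eye(D),        np.zeros((D, D))]])
delta = np.block([[eta_m, np.zeros((D, D))],
                  [np.zeros((D, D)), eta_m]])

# Change of basis R of eq. (trafobaredind) as a plain matrix.
R = np.block([[np.eye(D), -eta_m],
              [eta_m,      np.eye(D)]]) / np.sqrt(2)

# R maps eta to the diagonal form diag(-eta_m, eta_m), while delta keeps its
# block-diagonal form; R is numerically orthogonal.
eta_bar = np.block([[-eta_m, np.zeros((D, D))],
                    [np.zeros((D, D)), eta_m]])
print(np.allclose(R @ eta @ R.T, eta_bar))
print(np.allclose(R @ delta @ R.T, delta))
```

Both statements print `True`, reproducing the diagonalization of $\eta^{AB}$ and the invariance of the flat generalized metric under this change of basis.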
The deeper meaning of the coordinate transformation mediated by $R\indices{^{\bar A}_B}$ becomes clear when one applies it to the doubled coordinates $X^M$ and obtains \begin{equation} R\indices{^{\bar M}_N} X^N = \frac{1}{\sqrt{2}} \begin{pmatrix} \tilde x_{\bar i} - x_{\bar i} & \tilde x^{\bar i} + x^{\bar i} \end{pmatrix} = \begin{pmatrix} {x_{\scriptscriptstyle\mathrm R}}_{\bar i} & {x_{\scriptscriptstyle\mathrm L}}^{\bar i} \end{pmatrix}\,. \end{equation} Here $x_{\scriptscriptstyle\mathrm R}$ and $x_{\scriptscriptstyle\mathrm L}$ are the positions conjugate to the momenta of the right- and left-moving parts of the closed string. Expressing \eqref{eqn:trafoo2d} and \eqref{eqn:trafoodd} in bared indices gives rise to \begin{equation} \begin{pmatrix} T\indices{_{\bar a}^{\bar c}} & T_{\bar a\bar c} \\ T^{\bar a\bar c} & T\indices{^{\bar a}_{\bar c}} \end{pmatrix} \begin{pmatrix} \pm \eta_{\bar c\bar d} & 0 \\ 0 & \eta^{\bar c\bar d} \\ \end{pmatrix} \begin{pmatrix} T\indices{_{\bar b}^{\bar d}} & T^{\bar b\bar d} \\ T_{\bar b\bar d} & T\indices{^{\bar b}_{\bar d}} \end{pmatrix} = \begin{pmatrix} \pm \eta_{\bar a\bar b} & 0 \\ 0 & \eta^{\bar a\bar b} \\ \end{pmatrix} \end{equation} which is solved by $T_{\bar a\bar b}=T^{\bar a\bar b}=0$ and two different O$(1,D-1)$ transformations \begin{equation} u\indices{_{\bar a}^{\bar c}} \eta_{\bar c\bar d} u\indices{_{\bar b}^{\bar d}} = \eta_{\bar a\bar b} \quad \text{and} \quad v\indices{^{\bar a}_{\bar c}} \eta^{\bar c\bar d} v\indices{^{\bar b}_{\bar d}} = \eta^{\bar a\bar b}\,. \end{equation} They are identified with the remaining components $T\indices{^{\bar A}_{\bar B}}$ as $T\indices{_{\bar a}^{\bar b}}=u\indices{_{\bar a}^{\bar b}}$ and $T\indices{^{\bar a}_{\bar b}}=v\indices{^{\bar a}_{\bar b}}$.
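This factorization can be cross-checked numerically: conjugating a block-diagonal pair of independent Lorentz transformations $u$ and $v$ back with $R$ produces a matrix satisfying both \eqref{eqn:trafoo2d} and \eqref{eqn:trafoodd}. The sketch below (added for illustration; a concrete boost $u$ and spatial rotation $v$ stand in for generic group elements) verifies this:

```python
import numpy as np

D = 4
eta_m = np.diag([-1.0] + [1.0] * (D - 1))      # Minkowski metric, own inverse

eta = np.block([[np.zeros((D, D)), np.eye(D)],
                [np.eye(D),        np.zeros((D, D))]])
delta = np.block([[eta_m, np.zeros((D, D))],
                  [np.zeros((D, D)), eta_m]])

# Independent Lorentz transformations: a boost u in the 0-1 plane and a
# spatial rotation v in the 1-2 plane.
a, b = 0.7, 0.4
u = np.eye(D); u[0, 0] = u[1, 1] = np.cosh(a); u[0, 1] = u[1, 0] = np.sinh(a)
v = np.eye(D); v[1, 1] = v[2, 2] = np.cos(b); v[1, 2] = -np.sin(b); v[2, 1] = np.sin(b)

# Basis change R (numerically orthogonal) and the combined transformation T.
R = np.block([[np.eye(D), -eta_m], [eta_m, np.eye(D)]]) / np.sqrt(2)
T = R.T @ np.block([[u, np.zeros((D, D))],
                    [np.zeros((D, D)), v]]) @ R

# T preserves both invariant metrics, i.e. eqs. (trafoo2d) and (trafoodd).
print(np.allclose(T @ delta @ T.T, delta))
print(np.allclose(T @ eta @ T.T, eta))
```

With $u = v = \mathds{1}$ the construction returns the identity, fixing the overall normalization of the combined transformation.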
In unbared indices this transformation reads \begin{equation}\label{eqn:trafoonxon} T\indices{^A_B} = R\indices{_{\bar C}^A} T\indices{^{\bar C}_{\bar D}} R\indices{^{\bar D}_B} = \frac{1}{2} \begin{pmatrix} u\indices{_a^b} + v\indices{_a^b} & u\indices{_a_b} - v\indices{_a_b} \\ u\indices{^a^b} - v\indices{^a^b} & u\indices{^a_b} + v\indices{^a_b} \end{pmatrix}\,. \end{equation} Hence the generalized metric and therewith the DFT action \eqref{eqn:dftaction} are invariant under local double Lorentz transformations of the form \eqref{eqn:localodxodsym}. Except for the dilaton, the generalized vielbein combines all fields of the theory. As an element of O$(D,D)$ it has $D(2D-1)$ independent degrees of freedom. After gauge fixing the local double Lorentz symmetry, only $D^2$ of them remain. A possible parameterization of the generalized vielbein is given by \begin{equation}\label{eqn:EAMfixed} E\indices{^A_M} = \begin{pmatrix} e\indices{_a^i} & e\indices{_a^l} B_{li} \\ 0 & e\indices{^a_i} \end{pmatrix} \end{equation} in terms of the metric's vielbein $e\indices{^a_i}$ with $e\indices{^a_i} \eta_{ab} e\indices{^b_j} = g_{ij}$ and the antisymmetric $B$-field $B_{ij}$. If $e\indices{^a_i}$ is restricted to be an upper triangular matrix, this parameterization fixes the double Lorentz symmetry completely. An O$(D,D)$ vielbein without any gauge fixing is \begin{equation}\label{eqn:EAMgeneral} E\indices{^A_M} = \begin{pmatrix} e\indices{_a^i} & e\indices{_a^l} B_{li} \\ e\indices{^a_l} \beta^{li} & e\indices{^a_i} + e\indices{^a_l} \beta^{lk} B_{ki} \end{pmatrix} \end{equation} where $e\indices{^a_i}$ is an unrestricted vielbein of $g_{ij}$ and $\beta^{ij}$ is an antisymmetric bi-vector. Finally, the DFT action is also invariant under generalized diffeomorphisms. These transform $X^M$ into $\tilde{X}^M=X^M - \xi^M$, where $\xi^M$ is infinitesimal.
The corresponding changes of the generalized vielbein and the dilaton are given by the generalized Lie derivatives \begin{align}\label{eqn:genliederiv} &\delta_\xi E\indices{^A_M} = \mathcal{L}_\xi E\indices{^A_M} = \xi^P \partial_P E\indices{^A_M} + (\partial_M \xi^P - \partial^P \xi_M) E\indices{^A_P} \quad \text{and}\\ &\delta_\xi \phi' = \mathcal{L}_\xi \phi' = \xi^M \partial_M \phi' - \frac{1}{2} \partial_M \xi^M \,. \end{align} These infinitesimal transformations form the algebra \begin{equation} [\delta_{\xi_1}, \delta_{\xi_2}] = \delta_{\xi_1} \delta_{\xi_2} - \delta_{\xi_2} \delta_{\xi_1} = - \mathcal{L}_{[\xi_1, \xi_2]_\mathrm{C}} \end{equation} which is governed by the C-bracket \begin{equation}\label{eqn:Cbracket} \left[ \xi_1, \xi_2 \right]_\mathrm{C}^M = \xi_1^N \partial_N \xi_2^M - \frac{1}{2} \xi_{1 N} \partial^M \xi_2^N - \left( \xi_1 \leftrightarrow \xi_2 \right)\,, \end{equation} provided we impose the strong constraint \begin{equation}\label{eqn:strongconstraint} \partial_N \partial^N \cdot = 0 \end{equation} where $\cdot$ is a placeholder for fields, gauge parameters and arbitrary products of them. This is a stronger form of the level-matching constraint $L_0 - \bar L_0 = 0$ of closed string theory. In general this algebra does not satisfy the Jacobi identity and so the generalized diffeomorphisms do not form a Lie group. However, its failure to satisfy the Jacobi identity is of a trivial form that does not generate a gauge transformation on fields satisfying the strong constraint. Thus, it is consistent with the Jacobi identity for symmetry variations on physical fields, which always holds. A trivial way to solve \eqref{eqn:strongconstraint} is to set $\tilde \partial^i = 0$. In this case, the DFT action \eqref{eqn:dftaction} leads to the NS/NS action \eqref{eqn:nsnsaction} discussed in the introduction.
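One consequence of setting $\tilde\partial^i = 0$ can be made explicit with a small symbolic computation (an added sketch at $D=2$, with hypothetical component functions; it is not part of the original argument): on the vector half of a doubled gauge parameter, the C-bracket then reduces to the ordinary Lie bracket, as expected from the reduction to the NS/NS sector.

```python
import sympy as sp

# Toy setup: D = 2, strong constraint solved by dtilde = 0, so all
# components depend on x = (x1, x2) only.
x1, x2 = sp.symbols('x1 x2')
x = [x1, x2]

def doubled_vector(tag):
    # xi^M = (xi-tilde_i, xi^i): form part followed by vector part.
    f = [sp.Function(f'{tag}t{i}')(x1, x2) for i in (1, 2)]
    v = [sp.Function(f'{tag}v{i}')(x1, x2) for i in (1, 2)]
    return f + v

xi1, xi2 = doubled_vector('a'), doubled_vector('b')

# O(D,D) metric pairing the two halves of the doubled index.
eta = sp.Matrix([[0, 0, 1, 0], [0, 0, 0, 1],
                 [1, 0, 0, 0], [0, 1, 0, 0]])

# With dtilde = 0: d_M = (0, 0, d1, d2) and d^M = eta^{MN} d_N = (d1, d2, 0, 0).
def dM(expr):  return [0, 0, sp.diff(expr, x1), sp.diff(expr, x2)]
def dUp(expr): return [sp.diff(expr, x1), sp.diff(expr, x2), 0, 0]

def c_bracket(a, b):
    lower = lambda v: [sum(eta[N, P] * v[P] for P in range(4)) for N in range(4)]
    al, bl = lower(a), lower(b)
    out = []
    for M in range(4):
        t = sum(a[N] * dM(b[M])[N] for N in range(4))
        t -= sp.Rational(1, 2) * sum(al[N] * dUp(b[N])[M] for N in range(4))
        t -= sum(b[N] * dM(a[M])[N] for N in range(4))
        t += sp.Rational(1, 2) * sum(bl[N] * dUp(a[N])[M] for N in range(4))
        out.append(sp.simplify(t))
    return out

cb = c_bracket(xi1, xi2)
# Vector part (slots 2, 3) equals the ordinary Lie bracket of the vector parts.
lie = [sum(xi1[2 + j] * sp.diff(xi2[2 + i], x[j]) -
           xi2[2 + j] * sp.diff(xi1[2 + i], x[j]) for j in range(2)) for i in range(2)]
print(all(sp.simplify(cb[2 + i] - lie[i]) == 0 for i in range(2)))
```

The one-form part of the bracket contains an additional exact contribution, which is why only the vector half is compared here.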
\subsection{Equations of motion for the generalized metric}\label{sec:dfteom} Consistent background solutions of DFT are obtained by varying the DFT action. The variation w.r.t. the generalized metric yields \begin{equation} \frac{\delta S_\mathrm{DFT}}{\delta \mathcal{H}^{MN}} = \mathcal{K}_{MN}\,. \end{equation} This does not lead to the equations of motion for the generalized metric directly, because $\mathcal{H}^{MN}$ is a constrained field. To determine the proper projection that encodes the equations of motion, we use that the generalized metric is O$(D,D)$ valued and must fulfill \begin{equation} \mathcal{H}^{LM} \eta_{MN} \mathcal{H}^{KN} = \eta^{KL}\,. \end{equation} The variation of this constraint leads to \begin{equation}\label{eqn:constrvariation} \delta \mathcal{H}^{LM} \mathcal{H}\indices{^K_M} + \mathcal{H}\indices{^L_M} \delta \mathcal{H}^{KM} = 0 \end{equation} and after some relabeling of indices and using $\mathcal{H}^{ML} \mathcal{H}_{LN} = \delta^M_N$ one obtains \begin{equation} \delta\mathcal{H}^{MN} = -\mathcal{H}^{MK} \delta\mathcal{H}_{KL} \mathcal{H}^{LN}\,. \end{equation} As described in \cite{Hohm:2010pp,Hohm:2011si}, the most general variation $\delta\mathcal{H}^{MN}$ satisfying \eqref{eqn:constrvariation} can be written as \begin{gather} \delta\mathcal{H}^{MN} = \bar{P}^{MK} \delta\mathcal{M}_{KL} P^{LN} + P^{MK} \delta\mathcal{M}_{KL} \bar{P}^{LN} \\ \quad \text{with} \quad \bar{P}^{MN} = \frac{1}{2}\left(\eta^{MN} + \mathcal{H}^{MN}\right) \quad \text{and} \quad P^{MN} = \frac{1}{2}\left(\eta^{MN} - \mathcal{H}^{MN}\right)\,, \end{gather} where $\delta \mathcal{M}_{MN}$ is now an arbitrary, unconstrained symmetric variation.
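The objects $P$ and $\bar P$ just introduced satisfy the algebra of complementary projectors. The following added sketch checks this numerically (Euclidean $g$ for simplicity; indices are contracted with $\eta$, whose matrix is its own inverse):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 3

# Generalized metric from random (Euclidean, for simplicity) g and B.
A = rng.normal(size=(D, D))
g = A @ A.T + D * np.eye(D)
B = rng.normal(size=(D, D)); B = B - B.T
gi = np.linalg.inv(g)
H = np.block([[g - B @ gi @ B, -B @ gi], [gi @ B, gi]])
eta = np.block([[np.zeros((D, D)), np.eye(D)], [np.eye(D), np.zeros((D, D))]])

Pbar = 0.5 * (eta + H)
P = 0.5 * (eta - H)

# With the contraction P^{MK} eta_{KL} P^{LN}, P and Pbar are orthogonal
# projectors that sum to eta; this relies only on H eta H = eta.
print(np.allclose(P @ eta @ P, P))
print(np.allclose(Pbar @ eta @ Pbar, Pbar))
print(np.allclose(P @ eta @ Pbar, np.zeros((2 * D, 2 * D))))
```

Since $P + \bar P = \eta$, an arbitrary symmetric $\delta\mathcal{M}_{MN}$ indeed produces exactly the variations compatible with the O$(D,D)$ constraint.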
Because this new variation is not subject to any constraints, it leads to \begin{equation} \delta S_\mathrm{DFT} = \int \mathrm{d}^{2D} X \mathcal{K}^{MN} \delta \mathcal{H}_{MN} = \int \mathrm{d}^{2D} X \mathcal{R}_{MN} \delta \mathcal{M}^{MN}\,, \end{equation} where \begin{equation}\label{eqn:genriccitensor} \mathcal{R}_{MN} = P_{MK} \mathcal{K}^{KL} \bar{P}_{LN} + \bar{P}_{MK} \mathcal{K}^{KL} P_{LN} \end{equation} is called the generalized Ricci tensor. Then the equation \begin{equation} \mathcal{R}_{MN}=0 \end{equation} is the equation of motion for the generalized metric. Because the generalized metric $\mathcal{H}^{MN}$ is symmetric, $\mathcal{K}_{MN}$ and $\mathcal{R}_{MN}$ are symmetric, too. For completeness, we finally give the explicit expression for $\mathcal{K}_{MN}$ which arises from the variation of the DFT action with respect to the generalized metric\footnote{Within this paper we use the abbreviations \begin{equation*} T_{[a_1 \dots a_n]} = \frac{1}{n!} \sum\limits_{\sigma\in P} \sign(\sigma) T_{a_{\sigma(1)} \dots a_{\sigma(n)}} \quad \text{and} \quad T_{(a_1 \dots a_n)} = \frac{1}{n!} \sum\limits_{\sigma\in P} T_{a_{\sigma(1)} \dots a_{\sigma(n)}}\,, \end{equation*} where $P$ is the set of all permutations of the indices $a_1,\dots,a_n$, for the (anti)symmetrization of rank $n$ tensors.}: \begin{gather} \mathcal{K}_{MN} = \frac{1}{8} \partial_M \mathcal{H}^{KL} \partial_N \mathcal{H}_{KL} - \frac{1}{4}\left( \partial_L - 2(\partial_L \phi')\right) \left( \mathcal{H}^{KL}\partial_K \mathcal{H}_{MN}\right) + 2\partial_M\partial_N \phi' \nonumber \\ - \frac{1}{2} \partial_{(M} \mathcal{H}^{KL} \partial_L \mathcal{H}_{N)K} + \frac{1}{2} \left( \partial_L - 2( \partial_L \phi') \right) \left( \mathcal{H}^{KL}\partial_{(M} \mathcal{H}_{N)K} + \mathcal{H}\indices{^K_{(M}} \partial_K \mathcal{H}\indices{^L_{N)}} \right) \,.
\end{gather} \subsection{Covariant formulation of fluxes}\label{sec:covariantfluxes} Before we discuss how to obtain solutions of the DFT equations of motion, let us connect the DFT background fields to geometric as well as non-geometric fluxes. It will be useful to have an O$(D,D)$ covariant characterization of the fluxes, which combines the geometric and non-geometric fluxes into a single O$(D,D)$ tensor. Without doubling of coordinates, such a description has already been given by Ellwood in \cite{Ellwood:2006ya}. There is a straightforward extension of this prescription to DFT, most conveniently in the language of a frame formalism \cite{Siegel:1993th,Hohm:2010xe}. This has been worked out in the recent papers \cite{Aldazabal:2011nj,Aldazabal:2013sca}, giving a slight reformulation of the frame formulation of \cite{Siegel:1993th,Hohm:2010xe} that is somewhat better adapted to the usual description of fluxes. In this formulation the covariant fluxes are defined by means of the C-bracket and the O$(D,D)$ inner product as \begin{equation}\label{eqn:fluxesandparameters} \mathcal{F}_{ABC} = \left[ E\indices{_A} , E\indices{_B} \right]_\mathrm{C}^L E_{CL}\,. \end{equation} Using the definition of the C-bracket \eqref{eqn:Cbracket}, \eqref{eqn:fluxesandparameters} expands to \begin{align}\label{eqn:omega_ABCexpanded} \mathcal{F}_{ABC} &= E\indices{_A^N} \partial_N E\indices{_B^L} E_{CL} - \frac{1}{2} E_{AN} \partial^L E\indices{_B^N} E_{CL} - ( A \leftrightarrow B ) \nonumber \\ &= \Omega_{ABC} + \frac{1}{2} \Omega_{CAB} - \Omega_{BAC} - \frac{1}{2} \Omega_{CBA}\,, \end{align} after introducing the coefficients of anholonomy \begin{equation}\label{eqn:coeffanholo} \Omega_{ABC}=E\indices{_A^N}\partial_N E\indices{_B^M} E_{CM}\,.
\end{equation} They are antisymmetric with respect to their last two indices $B$ and $C$, as a consequence of \begin{equation} E\indices{_A^N} \partial_N \left( E\indices{_B^M} \eta_{ML} E\indices{_C^L} \right) = E\indices{_A^N} \partial_N \eta_{BC} = 0\,. \end{equation} We thus obtain \begin{equation}\label{eqn:fluxescoeffanholo} \mathcal{F}_{ABC} = \Omega_{ABC} + \Omega_{CAB} + \Omega_{BCA}\,. \end{equation} Using this antisymmetry once more, it is evident that the covariant fluxes are totally antisymmetric, \begin{equation} \mathcal{F}_{ABC} = 3 \Omega_{[ABC]}\,. \end{equation} They have three flat indices and thus are subject to double Lorentz transformations. For completeness, in the following we explicitly calculate the various components of $\mathcal{F}_{ABC}$ by starting with a generalized vielbein that is `over-parametrized' in the sense that it encodes a two-form $B_{ij}$ and a bi-vector $\beta^{ij}$, as opposed to the physical fields only (i.e., either the two-form or the bi-vector). Put differently, we have not yet gauge fixed to the physical diagonal subgroup of the double Lorentz group O$({D-1},1)_\mathrm{R}\times$O$(1,{D-1})_\mathrm{L}$, so that there are pure gauge modes left. In a given physical situation one may then gauge fix further to a frame containing only a two-form, only a bi-vector, or some intermediate frame. For a gauge without independent $B$-field the covariant fluxes reduce to those identified in \cite{Andriot:2012wx,Andriot:2012an}. Here we give the vielbein with the flat index lowered and the curved one raised: \begin{equation} \label{eqn:paramUAM} E\indices{_A^M} = \eta_{AB} E\indices{^B_N} \eta^{NM} = \begin{pmatrix} e\indices{^a_i} + e\indices{^a_j} \beta^{jk} B_{ki} & e\indices{^a_j} \beta^{ji} \\ e\indices{_a^j} B_{ji} & e\indices{_a^i} \end{pmatrix}\,.
\end{equation} Because the covariant fluxes form a totally antisymmetric tensor, only 4 of the 8 $D\times D\times D$ blocks of ${\mathcal F}_{ABC}$ are independent of each other. Each of these independent blocks, namely ${\mathcal F}_{abc}$, ${\mathcal F}\indices{^a_b_c}$, ${\mathcal F}\indices{^a^b_c}$ and ${\mathcal F}^{abc}$, will now be evaluated. By this calculation, we are able to connect the covariant fluxes with the fluxes $H_{abc}$, $f\indices{^a_b_c}$ (geometric flux), $Q\indices{^a^b_c}$ ($Q$-flux) and $R^{abc}$ ($R$-flux) in flat indices. The latter three fluxes, which have not been discussed so far, are common in the description of non-geometric backgrounds. A good overview of their structure and properties is given, for example, in \cite{Shelton:2005cf, Andriot:2012an}. We start with ${\mathcal F}_{abc}$, which is given by \begin{equation}\label{eqn:Fabcfromcoeffanholo} {\mathcal F}_{abc} = {\Omega}_{abc} + {\Omega}_{cab} + {\Omega}_{bca} = 3{\Omega}_{[abc]}\,. \end{equation} Putting \eqref{eqn:paramUAM} into \eqref{eqn:coeffanholo}, the relevant coefficients of anholonomy evaluate to \begin{equation} {\Omega}_{abc}= e\indices{_a^i} e\indices{_b^j} e\indices{_c^k} \left( \partial_i B_{jk} + B_{il} \tilde{\partial}^l B_{jk} \right)\,. \end{equation} Combining this result with the antisymmetrization of $\Omega_{abc}$ in \eqref{eqn:Fabcfromcoeffanholo} gives rise to \begin{equation} {\mathcal F}_{abc} = 3 e\indices{_a^i} e\indices{_b^j} e\indices{_c^k} \left( \partial_{[i} B_{jk]} - B_{l[i} \tilde{\partial}^l B_{jk]} \right) = H_{abc}\,. \end{equation} When the strong constraint is solved by $\tilde \partial^i = 0$, this expression reduces to the usual $H$-flux in flat indices. In the next step, we calculate the three components $\Omega\indices{^a_b_c}$, $\Omega\indices{_a^b_c}$ and $\Omega\indices{_a_b^c}$. These are all combinations with two lowered and one raised index.
They are given by the following expressions \begin{align}\label{eqn:Omega^a_b_c} \Omega\indices{^a_b_c} &= e\indices{^a_i} e\indices{_b^j} e\indices{_c^k} \left( \tilde{\partial}^i B_{jk} + \beta^{il} \Omega_{ljk} \right)\;, \\ \Omega\indices{_a^b_c} &= e\indices{_a^i} \partial_i e\indices{^b_j} e\indices{_c^j} + e\indices{_a^i} B_{ij} \tilde{\partial}^j e\indices{^b_k} e\indices{_c^k} + e\indices{_a^i} e\indices{^b_j} e\indices{_c^k} \beta^{jl} \Omega_{ilk}\;, \\ \Omega\indices{_a_b^c} &= -\Omega\indices{_a^c_b} \,. \end{align} With these three components, the covariant fluxes ${\mathcal F}\indices{^a_b_c}$ read \begin{align} {\mathcal F}\indices{^a_b_c} &= \Omega\indices{^a_{[b}_{c]}} + \Omega\indices{_{[c}^a_{b]}} + \Omega\indices{_{[b}_{c]}^a} = \Omega\indices{^a_{[b}_{c]}} + 2 \Omega\indices{_{[c}^a_{b]}} \nonumber \\ &= 2 \left( e\indices{_{[b}^i} \partial_i e\indices{^a_j} e\indices{_{c]}^j} + e\indices{_{[b}^i} B_{ij} \tilde{\partial}^j e\indices{^a_k} e\indices{_{c]}^k} \right) + e\indices{^a_i} e\indices{_b^j} e\indices{_c^k} \left( \tilde{\partial}^i B_{jk} + \beta^{il} H_{ljk} \right)= f^a_{bc}\,. \end{align} They are equivalent to the geometric fluxes $f^a{}_{bc}$ in flat indices. This equivalence becomes manifest if a frame is chosen in which $\tilde \partial^i = 0$ and $\beta^{ij}=0$ hold. Then $\mathcal{F}\indices{^a_b_c}$ becomes \begin{equation} {\mathcal F}\indices{^a_b_c} = 2 e\indices{_{[b}^i} \partial_i e\indices{^a_j} e\indices{_{c]}^j} = f^a_{bc}\,, \end{equation} which is exactly the form given, e.g., in \cite{Blumenhagen:2013hva}.
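The flux components derived so far can be cross-checked with a small symbolic computation (an added toy example at $D=3$ with flat vielbein $e=\mathds{1}$, vanishing $\beta$, $\tilde\partial^i = 0$ and a linear $B$-field, so that the only non-vanishing flux is a constant $H$-flux):

```python
import sympy as sp
import itertools

h = sp.symbols('h')
xs = sp.symbols('x1 x2 x3')
D = 3

# Linear B-field: B_12 = h*x3 gives the constant H-flux H_123 = h.
B = sp.zeros(D, D)
B[0, 1], B[1, 0] = h * xs[2], -h * xs[2]

I = sp.eye(D); Z = sp.zeros(D, D)
Eup = (I.row_join(Z)).col_join(B.row_join(I))     # E_A^M for e = 1, beta = 0
eta = (Z.row_join(I)).col_join(I.row_join(Z))
Elow = Eup * eta                                   # E_{AM}

def d(expr, n):
    # strong constraint: dtilde^i = 0 on the first D slots of the curved index
    return 0 if n < D else sp.diff(expr, xs[n - D])

def Om(A, Bi, C):
    return sum(Eup[A, n] * d(Eup[Bi, m], n) * Elow[C, m]
               for n in range(2 * D) for m in range(2 * D))

# F_ABC = Omega_ABC + Omega_CAB + Omega_BCA, eq. (fluxescoeffanholo)
F = {idx: sp.expand(Om(*idx) + Om(idx[2], idx[0], idx[1]) + Om(idx[1], idx[2], idx[0]))
     for idx in itertools.product(range(2 * D), repeat=3)}

anti = all(F[A, Bi, C] == -F[Bi, A, C] and F[A, Bi, C] == -F[A, C, Bi]
           for A, Bi, C in itertools.product(range(2 * D), repeat=3))
print(anti)                     # total antisymmetry of the covariant fluxes
print(F[D + 0, D + 1, D + 2])   # the H-flux component H_123
```

The block-index conventions used here follow the parameterization \eqref{eqn:paramUAM}; the check confirms total antisymmetry and that the only independent non-vanishing component is the expected constant $H$-flux.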
In order to calculate ${\mathcal F}\indices{^a^b_c}$ one needs the components of the anholonomy coefficients \begin{align} \Omega\indices{^a^b_c} &= e\indices{^a_i}\tilde\partial^i e\indices{^b_j} e\indices{_c^j} + e\indices{^a_i} e\indices{^b_j} e\indices{_c^k} \beta^{il} \Omega\indices{_l^j_k} \\ \Omega\indices{_a^b^c} &= e\indices{_a^i} e\indices{_j^b} e\indices{_k^c} \left( \partial_i \beta^{jk} + B_{il}\tilde\partial^l \beta^{jk} + \beta^{jl} \beta^{km} \Omega_{ilm} \right) \quad \text{and} \\ \Omega\indices{^a_b^c} &= -\Omega\indices{^a^c_b}\,. \end{align} They are combined into \begin{align} \mathcal F\indices{^a^b_c} &= \Omega\indices{^{[a}^{b]}_c} + \Omega\indices{_c^{[a}^{b]}} + \Omega\indices{^{[b}_c^{a]}} = 2 \Omega\indices{^{[a}^{b]}_c} + \Omega\indices{_c^{[a}^{b]}} \nonumber\\ & = 2 e\indices{^{[a}_i} \tilde\partial^i e\indices{^{b]}_j} e\indices{_c^j} + e\indices{_i^{[a}} e\indices{_j^{b]}} e\indices{_c^k} \left( \partial_k \beta^{ij} + B_{kl}\tilde\partial^l \beta^{ij} - \beta^{li} \left[ 2 \Omega\indices{_l^{j}_k} + \beta^{jn} \Omega_{kln} \right] \right) = Q_c^{ab} \end{align} which is equivalent to the $Q$-flux in flat indices. In the frame where $\tilde \partial^i = 0$ and $B_{ij}=0$, this expression reduces to \begin{equation} \mathcal F\indices{^a^b_c} = e\indices{_i^a} e\indices{_j^b} e\indices{_c^k} \left( \partial_k \beta^{ij} - \beta^{l[i} f^{j]}_{kl} \right) =Q^{ab}_c \end{equation} and thus is equivalent to the $Q$-flux defined, e.g., in \cite{Andriot:2013xca}.
Finally, we have \begin{align} \Omega^{abc} &= e\indices{^a_i} e\indices{^b_j} e\indices{^c_k} \left(\tilde\partial^i \beta^{jk} + \beta^{il} \Omega\indices{_l^j^k}\right)\,, \end{align} which gives rise to \begin{equation}\label{eqn:Rflux} {\mathcal F}^{abc} = 3 \Omega^{[abc]} = e\indices{^a_i} e\indices{^b_j} e\indices{^c_k} 3 \left(\tilde\partial^{[i} \beta^{jk]} + \beta^{[il} \partial_l \beta^{jk]} + \beta^{il} B_{ln} \tilde\partial^n \beta^{jk} + \beta^{il} \beta^{jm} \beta^{kn} \mathcal{F}\indices{_l_m_n}\right) \end{equation} and is equivalent to the $R$-flux in flat indices. To see this, we use the frame $\tilde \partial^i=0$ and $B_{ij}=0$, in which \eqref{eqn:Rflux} reads \begin{equation} {\mathcal F}^{abc} = e\indices{^a_i} e\indices{^b_j} e\indices{^c_k} 3 \beta^{[il} \partial_l \beta^{jk]} = R^{abc}\,. \end{equation} This expression is equivalent to the $R$-flux defined, e.g., in \cite{Andriot:2012an}. All these results agree with the ones presented in \cite{Geissbuhler:2013uka,Blumenhagen:2013hva} and show that the covariant fluxes are indeed a generalization of the fluxes known from the SUGRA effective action \eqref{eqn:nsnsaction}. \section{Twisted backgrounds in DFT}\label{sec:DFTbackgrounds} When constructing backgrounds for string theory, a major challenge is to find non-trivial solutions of the background field equations. As shown in section~\ref{sec:dfteom}, these equations are derived by varying the DFT action \eqref{eqn:dftaction} with respect to the physical degrees of freedom of the generalized metric. They are very involved, and in general it is impossible to solve them directly. One way to overcome this problem is to start with known SUGRA solutions, such as NS5-branes or orthogonal intersections thereof, and to apply various T-duality transformations to them \cite{Hassler:2013wsa}. Here we use another technique, namely a \textit{consistent} generalized Scherk-Schwarz compactification.
It gives rise to a lower-dimensional effective action which is easier to handle than the full DFT action. This action describes a gauged (super)gravity and is equipped with a scalar potential which considerably restricts the vacua of the effective theory. Because we use a consistent compactification, solutions of the field equations of the effective gauged (super)gravity can be uplifted to solutions of the DFT background field equations. In fact, the uplift can always be performed provided the background possesses enough isometries. This was discussed e.g. in \cite{Scherk:1978ta,Scherk:1979zr,Pons:2003ka} for standard dimensional reductions of higher-dimensional supergravity theories on $(D-d)$-dimensional spaces with $D-d$ isometries. If the generalized Scherk-Schwarz ansatz possesses the doubled number of isometries, i.e.\ $2(D-d)$ isometries with respect to the coordinates as well as the dual coordinates, we will argue that the same reasoning still holds for the consistent uplift of the reduced DFT.
Thus the steps we are performing are summarized by the following diagram: \begin{center}\label{fig:consistentcomp} \begin{tikzpicture}[>=stealth',node distance=4em] \node (SDFT) {$S_\mathrm{DFT}$}; \node (Seff) [right of=SDFT, xshift=20em] {$S_\mathrm{eff}$}; \node (eomeff) [below of=Seff] {field equations}; \node (soleff) [below of=eomeff] {solution\,.}; \node (eom) [below of=SDFT] {background field equations}; \node (sol) [below of=eom] {background}; \draw[->] (SDFT) -- (Seff) node[midway, above] {consistent compactification ansatz}; \draw[->] (Seff) -- (eomeff) node[midway, above, anchor=west] {$\delta S_\mathrm{eff} = 0$}; \draw[->] (eomeff) -- (soleff) node[midway, above, anchor=west] {solve (easy)}; \draw[->] (soleff) -- (sol) node[midway, below] {uplift} ; \draw[->,dashed] (SDFT) -- (eom) node[midway, below, anchor=east] {$\delta S_\mathrm{DFT} = 0$}; \draw[->,dashed] (eom) -- (sol) node[midway, below, anchor=east] {solve (involved)}; \end{tikzpicture} \end{center} We will now follow the path marked by the solid black lines to find a valid background. The following subsections describe the way from $S_\mathrm{DFT}$ to the solution of the effective field theory's equations of motion. Section~\ref{sec:constrtwists} discusses the explicit uplift by considering so-called twisted backgrounds, which possess enough isometries for a consistent uplift. \subsection{Generalized Kaluza-Klein ansatz}\label{sec:KKansatz} In every compactification one distinguishes between internal and external, i.e.\ uncompactified, directions. Here we assume that we have $d$ external and $D-d$ internal dimensions.
To make this situation manifest, we split the $2D$ components of the vector $X^M=\begin{pmatrix} \tilde x_i & x^i \end{pmatrix}$ into \begin{equation} X^{\hat M}= \begin{pmatrix} \tilde x_\mu & x^\mu & Y^M \end{pmatrix} = \begin{pmatrix} \mathds{X} & \mathds{Y} \end{pmatrix}\,, \quad\text{where}\quad \mu=0,\dots, d-1 \end{equation} counts the external directions and $Y^M$ is a covariant vector in the internal double space. In these conventions the O$(D,D)$ invariant metric \eqref{eqn:etaMN} reads \begin{equation} \eta_{\hat M\hat N}=\begin{pmatrix} 0 & \delta^\mu_\nu & 0 \\ \delta_\mu^\nu & 0 & 0 \\ 0 & 0 & \eta_{MN} \end{pmatrix} \quad \text{and its inverse} \quad \eta^{\hat M \hat N}=\begin{pmatrix} 0 & \delta_\mu^\nu & 0\\ \delta^\mu_\nu & 0 & 0\\ 0 & 0 & \eta^{MN} \end{pmatrix}\,. \end{equation} In this subsection we will, as a warm-up, review compactifications of DFT for which the internal $2(D-d)$-dimensional space does not depend on the internal coordinates. Hence we are basically dealing with compactifications on a doubled torus $T^{2(D-d)}$. Specifically, we demand that the internal space is invariant under $2(D-d)$ independent isometries. An isometry is a shift of the coordinates $X^{\hat J} \rightarrow X^{\hat J} - K^{\hat J}$ which does not change the generalized metric. Using the generalized Lie derivative, which generates such coordinate shifts, an isometry is defined by \begin{equation}\label{eqn:killinggenmetric} {\mathcal L}_{K^{\hat J}} {\mathcal H}_{\hat M\hat N} = 0\;, \end{equation} where $K^{\hat J}$ is the Killing vector. This is the generalized Killing equation in the generalized geometry of DFT. In total we need $2(D-d)$ independent isometries to construct a consistent compactification ansatz. They are denoted by $K\indices{_I^{\hat J}}$ with $I=1,\dots,2(D-d)$ labeling the different Killing vectors.
Condition \eqref{eqn:killinggenmetric} is fulfilled in particular when \begin{equation} {\mathcal L}_{K\indices{_I^{\hat J}}} E\indices{^{\hat A}_{\hat M}} = 0 \quad \rightarrow \quad {\mathcal L}_{K\indices{_I^{\hat J}}} {\mathcal H}_{\hat M\hat N} = \left( {\mathcal L}_K E\indices{^{\hat A}_{\hat M}} \right) \delta_{\hat A\hat B} E\indices{^{\hat B}_{\hat N}} + E\indices{^{\hat A}_{\hat M}} \delta_{\hat A\hat B} \left( {\mathcal L}_{K} E\indices{^{\hat B}_{\hat N}} \right) = 0\,, \end{equation} although in general one may impose the weaker condition that the Killing vectors leave the frame field invariant only up to a local Lorentz transformation. This equation allows us to use the generalized vielbein $E\indices{^{\hat A}_{\hat M}}$ to look for Killing vectors of the internal space. As a warm-up, we begin with the simplest set of Killing vectors, namely \begin{equation}\label{eqn:KillingvecKK} K\indices{_I^{\hat J}} = \begin{pmatrix} 0 & 0 & \delta_I^J \end{pmatrix}\,. \end{equation} The corresponding Killing equation then implies that the generalized vielbein $E\indices{^{\hat A}_{\hat M}}$ has to be independent of the internal coordinates $\mathds{Y}$. This condition leads to the constrained vielbein $\widehat{E}\indices{^{\hat A}_{\hat M}}(\mathds{X})$ that depends only on $\mathds{X}$. This implies that the kinetic part of the energy in the $\mathds{Y}$ directions vanishes and the Kaluza-Klein tower of states is consistently truncated to massless states only. Generalized Lie derivatives on $\widehat{E}\indices{^{\hat A}_{\hat M}}$ should not violate our ansatz by introducing a $\mathds{Y}$ dependence. Thus, we restrict the gauge parameters $\xi$ to depend on $\mathds{X}$ only. In the following, $\mathds{Y}$ independent quantities are always marked by a hat. After these restrictions, one is able to decompose the generalized vielbein into several fields which do not mix under generalized diffeomorphisms and the other symmetry transformations in section~\ref{sec:dftandsym}.
These fields are \begin{itemize} \item the $d$-dimensional vielbein $e\indices{^\alpha_\mu}$, \item the corresponding $B$-field $B_{\mu\nu}$, \item the $d$ covariant vectors $\widehat A_{M\mu}$ of the internal double space, each with $2(D-d)$ components, and \item the O$(D-d,D-d)$ valued vielbein $\widehat E\indices{^A_M}$. \end{itemize} They constitute the field content of the effective theory which arises after the compactification. Altogether, they completely parameterize the $D^2$ degrees of freedom of the totally gauge fixed generalized vielbein in \eqref{eqn:EAMfixed} and lead to the Kaluza-Klein ansatz \begin{equation}\label{eqn:vielbeinreparam} \widehat E\indices{^{\hat A}_{\hat M}}(\mathds{X}) = \begin{pmatrix} e\indices{_\alpha^\mu} & - e\indices{_\alpha^\rho} C_{\mu\rho} & - e\indices{_\alpha^\rho} \widehat A_{M\rho} \\ 0 & e\indices{^\alpha_\mu} & 0 \\ 0 & \widehat E\indices{^A_L} \widehat A\indices{^L_\mu} & \widehat E\indices{^A_M} \end{pmatrix} \quad \text{with} \quad C_{\mu\nu} = B_{\mu\nu} + \frac{1}{2} \widehat{A}\indices{^L_\mu} \widehat{A}_{L\nu} \,. \end{equation} This coincides with the ansatz given in \cite{Hohm:2013nja} once the dependence on internal coordinates is dropped. Of course $\widehat E\indices{^{\hat A}_{\hat M}}$ still has to be O$(D,D)$ valued and hence must satisfy \eqref{eqn:vielbeinodd}. This is the case if and only if \begin{equation} e\indices{_\alpha^\mu} \eta^{\alpha\beta} e\indices{_\beta^\nu} = \eta^{\mu\nu} \quad \text{and} \quad \widehat{E}\indices{^A_M} \eta_{AB} \widehat{E}\indices{^B_N} = \eta_{MN}\;, \end{equation} i.e., if $\widehat{E}$ is O$(D-d,D-d)$ valued. In the $d$ uncompactified spacetime directions, there are no winding modes. Thus in these directions, the strong constraint \eqref{eqn:strongconstraint} is trivially solved by $\tilde \partial^\mu = 0$ and the partial derivative in doubled coordinates reduces to $\partial^{\hat M} = \begin{pmatrix} \partial_\mu & 0 & \partial^M \end{pmatrix}$.
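That the Kaluza-Klein ansatz \eqref{eqn:vielbeinreparam} is automatically O$(D,D)$ valued can be verified numerically. The following added sketch uses a toy split $d=2$, $D-d=2$ with random data (Euclidean internal metric for simplicity; matrix products replace index contractions):

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 2, 2                         # external dims d, internal dims D - d = k

# External vielbein e^alpha_mu (any invertible matrix works for this check;
# here an O(1,1) boost) and its inverse transpose e_alpha^mu.
a = 0.3
e_ext = np.array([[np.cosh(a), np.sinh(a)], [np.sinh(a), np.cosh(a)]])
ei_ext = np.linalg.inv(e_ext.T)

# Internal O(k,k) vielbein built as in eq. (EAMfixed) from random g, B.
eta_k = np.block([[np.zeros((k, k)), np.eye(k)], [np.eye(k), np.zeros((k, k))]])
M = rng.normal(size=(k, k)); g_int = M @ M.T + k * np.eye(k)
e_int = np.linalg.cholesky(g_int).T          # e^a_i with e^T e = g
b = rng.normal(); B_int = np.array([[0.0, b], [-b, 0.0]])
ei_int = np.linalg.inv(e_int.T)
E_int = np.block([[ei_int, ei_int @ B_int], [np.zeros((k, k)), e_int]])

# External B-field, KK vectors A_{M mu} and C_{mu nu} of eq. (vielbeinreparam).
B_ext = rng.normal(size=(d, d)); B_ext = B_ext - B_ext.T
A = rng.normal(size=(2 * k, d))
C = B_ext + 0.5 * A.T @ eta_k @ A

Zdk = np.zeros((d, 2 * k)); Zd = np.zeros((d, d))
E_hat = np.block([[ei_ext, -ei_ext @ C, -ei_ext @ A.T],
                  [Zd,      e_ext,       Zdk],
                  [Zdk.T,   E_int @ eta_k @ A, E_int]])

eta_hat = np.block([[Zd, np.eye(d), Zdk],
                    [np.eye(d), Zd, Zdk],
                    [Zdk.T, Zdk.T, eta_k]])

# Full O(D,D) condition, eq. (vielbeinodd), for the Kaluza-Klein ansatz.
print(np.allclose(E_hat @ eta_hat @ E_hat.T, eta_hat))
```

The check succeeds for arbitrary $B_{\mu\nu}$ and $\widehat A_{M\mu}$; the $\frac{1}{2}\widehat A\widehat A$ term in $C_{\mu\nu}$ is exactly what cancels the vector contributions in the off-diagonal blocks.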
We now compute the action of the generalized diffeomorphisms on the generalized vielbein \eqref{eqn:vielbeinreparam}. They are defined by the generalized Lie derivative \eqref{eqn:genliederiv} with the parameter $\widehat \xi^{\hat M}$. As already mentioned, $\widehat \xi^{\hat M}$ depends only on the coordinates $\mathds{X}$. Its components are \begin{equation}\label{eqn:xiMkaluzaklein} \widehat \xi^{\hat M}(\mathds{X}) = \begin{pmatrix} \tilde \xi_\mu & \xi^\mu & \widehat \Lambda^M \end{pmatrix}\,. \end{equation} After some algebra, one gets the infinitesimal generalized diffeomorphisms \begin{align} \label{eqn:genLieKKealphamu} {\mathcal L}_{\widehat \xi} e\indices{^\alpha_\mu} &= L_\xi e\indices{^\alpha_\mu}\,, \\ \label{eqn:genLieKKBmunu} {\mathcal L}_{\widehat \xi} B_{\mu\nu} &= L_\xi B_{\mu\nu} + \left( \partial_\mu \tilde \xi_\nu - \partial_\nu \tilde \xi_\mu \right) + \partial_{[\mu} \widehat{\Lambda}_M \widehat{A}\indices{^M_{\nu]}}\,, \\ \label{eqn:genLieKKAMmu} {\mathcal L}_{\widehat \xi} \widehat A_{M\mu} &= L_\xi \widehat A_{M\mu} - \partial_\mu \widehat \Lambda_M \quad \text{and}\\ \label{eqn:genLieKKEAM} {\mathcal L}_{\widehat \xi} \widehat E\indices{^A_M} &= L_\xi \widehat E\indices{^A_M} \end{align} for the various fields of the effective theory, which can also be read off directly from \cite{Hohm:2013nja}. Here, $L_\xi$ is the ordinary Lie derivative in the $d$-dimensional external spacetime. As required, these transformations do not mix different fields. In addition, they show that the fields $\widehat A_{M\mu}$, $M=1,\dots,2 (D - d)$, transform like vectors and the generalized vielbein $\widehat E\indices{^A_M}$ transforms like $(D-d)^2$ scalars in the effective theory. Furthermore, the vectors possess an abelian U$(1)^{2(D-d)}$ gauge symmetry. This symmetry is generated by the parameters $\widehat{\Lambda}_M$ in \eqref{eqn:genLieKKAMmu}.
With the expressions \eqref{eqn:genLieKKealphamu}--\eqref{eqn:genLieKKEAM} for the generalized Lie derivatives of the various fields, it is immediately clear that the vectors in \eqref{eqn:KillingvecKK} are indeed Killing vectors and thus fulfill \begin{equation} \mathcal{L}_{K\indices{_I^{\hat J}}} e\indices{^\alpha_\mu} = \mathcal{L}_{K\indices{_I^{\hat J}}} B_{\mu\nu} = \mathcal{L}_{K\indices{_I^{\hat J}}} \widehat{A}_{M\mu} = \mathcal{L}_{K\indices{_I^{\hat J}}} \widehat{E}\indices{^A_M} = 0\,. \end{equation} \subsection{Generalized Scherk-Schwarz ansatz}\label{sec:scherkschwarz} Now we want to deform the Kaluza-Klein ansatz from the previous section. This leads to non-abelian gauge symmetries and massive scalars in the effective theory. Nevertheless, the $2(D-d)$ isometries along the compact internal directions $\mathds{Y}$ shall be kept. In order to achieve this, we replace the $N=1,\dots,2(D-d)$ holonomic basis 1-forms $d Y^N$ of the Kaluza-Klein ansatz with the right-invariant 1-forms \cite{Kaloper:1999yr} \begin{equation} \eta^M = U\indices{^M_N}(\mathds{Y}) d Y^N \end{equation} of a Lie group $G$. This is done by the so-called twist $U\indices{^N_M}(\mathds{Y})$, which breaks the isometries $G_\mathrm{L} \times G_\mathrm{R}$ of a bi-invariant metric, like the one used in the last section, down to $G_\mathrm{R}$. While $G_\mathrm{R}$ still consists of enough isometries to perform a consistent truncation, $G_\mathrm{L}$ is now used to implement the gauge group of the effective theory. In order to connect these new basis 1-forms with the generalized metric, we have to adapt the scalars $E\indices{^A_M}$ and the vectors $A_{M\mu}$ as \begin{equation}\label{eqn:twistofgenvielbein} E\indices{^A_M}(\mathds{X},\mathds{Y}) = \widehat E\indices{^A_N}(\mathds{X}) U\indices{^N_M}(\mathds{Y}) \quad \text{and} \quad A_{M\mu}(\mathds{X},\mathds{Y}) = \widehat A_{N\mu}(\mathds{X}) U\indices{^N_M}(\mathds{Y})\,.
\end{equation} Of course, one can also write this ansatz in terms of the generalized vielbein \begin{equation}\label{eqn:twistscherkschw} E\indices{^{\hat A}_{\hat M}}(\mathds{X},\mathds{Y}) = \widehat E\indices{^{\hat A}_{\hat N}}(\mathds{X}) U\indices{^{\hat N}_{\hat M}}(\mathds{Y}) \quad \text{with} \quad U\indices{^{\hat N}_{\hat M}} = \begin{pmatrix} \delta^\mu_\nu & 0 & 0 \\ 0 & \delta_\mu^\nu & 0 \\ 0 & 0 & U\indices{^N_M} \end{pmatrix}\,, \end{equation} too. As previously emphasized, the generalized vielbein $E\indices{^{\hat A}_{\hat M}}$ has to be O$(D,D)$ valued. The untwisted generalized vielbein $\hat E\indices{^{\hat A}_{\hat M}}$ has this property. Hence the twist $U\indices{^{\hat N}_{\hat M}}$ also has to be O$(D,D)$ valued, which is the case if and only if $U\indices{^N_M}$ is O$(D-d,D-d)$ valued. Dual to the right-invariant 1-forms $\eta^M$ are vectors of the form \begin{equation} \label{eqn:ssansatzxi} \xi^{\hat M} = \widehat \xi^{\hat N} U\indices{_{\hat N}^{\hat M}} = \begin{pmatrix} \tilde \xi_\mu & \xi^\mu & \Lambda^M \end{pmatrix}\,. \end{equation} They generate left-translations acting on $G_\mathrm{L}$. This group, as already explained, was chosen to implement the gauge symmetry of the effective theory. Thus, transformations $\xi^{\hat M}$ with an arbitrary $\mathds{X}$-dependent $\widehat{\xi}^{\hat N}$ represent gauge transformations of the effective theory.
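The O$(D-d,D-d)$ condition on the twist is easy to check numerically. The following sketch is a toy construction of our own (not the specific group twist discussed in the text): it composes a GL$(n)$ frame block with an antisymmetric shift, both of which preserve the O$(n,n)$ metric $\eta$ in its off-diagonal form, and verifies $U^T \eta U = \eta$:

```python
import numpy as np

n = 3                                    # toy internal dimension D - d
eta = np.block([[np.zeros((n, n)), np.eye(n)],
                [np.eye(n), np.zeros((n, n))]])   # O(n,n) metric

rng = np.random.default_rng(1)
a = rng.normal(size=(n, n))              # invertible GL(n) frame block
B = rng.normal(size=(n, n))
B = B - B.T                              # antisymmetric shift matrix

U_frame = np.block([[a, np.zeros((n, n))],
                    [np.zeros((n, n)), np.linalg.inv(a).T]])
U_shift = np.block([[np.eye(n), np.zeros((n, n))],
                    [B, np.eye(n)]])
U = U_frame @ U_shift                    # candidate twist U^N_M

assert np.allclose(U.T @ eta @ U, eta)   # U is O(n,n) valued
```

Any candidate twist, e.g. one built from the right-invariant frame of a group manifold, can be checked the same way; a product of $\eta$-preserving factors is automatically O$(n,n)$ valued.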
To check this, we calculate the generalized Lie derivative of the vector $V_{\hat M} = \widehat{V}_{\hat N} U\indices{^{\hat N}_{\hat M}}$ (which corresponds to a right-invariant 1-form) with the gauge parameter $\xi^{\hat L}$: \begin{align} \mathcal{L}_\xi V_{\hat M} & = \xi^{\hat P} \partial_{\hat P} V_{\hat M} + \left( \partial_{\hat M} \xi^{\hat P} - \partial^{\hat P} \xi_{\hat M} \right) V_{\hat P} \nonumber \\ &= \mathcal{L}_{\widehat \xi} \widehat{V}_{\hat I} U\indices{^{\hat I}_{\hat M}} + \widehat{\xi}^{\hat L} \widehat{V}_{\hat N} \left( U\indices{_{\hat L}^{\hat P}} \partial_{\hat P} U\indices{^{\hat N}_{\hat M}} + \partial_{\hat M} U\indices{_{\hat L}^{\hat P}} U\indices{^{\hat N}_{\hat P}} - U\indices{^{\hat N}_{\hat P}} \partial^{\hat P} U_{\hat L\hat M} \right) \nonumber \\ &= \left( \mathcal{L}_{\widehat \xi} \widehat{V}_{\hat I} + \widehat{\xi}^{\hat L} \widehat{V}^{\hat N} \left[ \Omega_{\hat L\hat N \hat I} + \Omega_{\hat I\hat L\hat N} - \Omega_{\hat N\hat L\hat I} \right] \right) U\indices{^{\hat I}_{\hat M}} \nonumber \\ \label{eqn:twistedgenLie} &= \left( \mathcal{L}_{\widehat \xi} \widehat V_{\hat I} + \mathcal{F}_{\hat I\hat N\hat L} \widehat \xi^{\hat N} \widehat V^{\hat L} \right) U\indices{^{\hat I}_{\hat M}}\,. \end{align} Here the covariant tensor $\mathcal{F}_{\hat M\hat N\hat L}$ arises through the twist $U\indices{^{\hat M}_{\hat N}}$. A similar deformation of gauge transformations is also part of the DFT formulation of heterotic strings \cite{Hohm:2011ex}. Due to the structure of the twist, the covariant tensor vanishes in all external directions $\mathds{X}$. Its non-vanishing components are linked to the covariant fluxes introduced in \eqref{eqn:fluxescoeffanholo} in section~\ref{sec:covariantfluxes} by \begin{equation} \mathcal{F}_{ABC} = \widehat{E}\indices{_A^I} \widehat{E}\indices{_B^J} \widehat{E}\indices{_C^K} \mathcal{F}_{IJK}\,. \end{equation} Hence, in the following, we will also call $\mathcal{F}_{IJK}$ covariant fluxes.
They are the structure constants of the Lie algebra $\mathfrak{g}_\mathrm{L}$ associated to the Lie group $G_\mathrm{L}$ which we choose as gauge group. Actually, $G_\mathrm{L}$ is only a group if its associated Lie algebra $\mathfrak{g}_\mathrm{L}$ is consistent, i.e., satisfies the Jacobi identity. Explicit calculations using \eqref{eqn:twistedgenLie} and $\xi^\mu = \tilde \xi_\mu = 0$ show that this condition reads \begin{equation}\label{eqn:gaugealgclosed} \left( \mathcal{F}_{MNL} \mathcal{F}\indices{^L_I_K} - \mathcal{F}_{MIL} \mathcal{F}\indices{^L_N_K} \right) \widehat{\Lambda}_1^N \widehat{\Lambda}_2^I \widehat V^K = \mathcal{F}_{MNK} \widehat{\Lambda}_{12}^N \widehat V^K\,. \end{equation} Thus, the covariant fluxes need to fulfill the Jacobi identity \begin{equation}\label{eqn:quadraticc} \mathcal{F}_{LMN}\mathcal{F}\indices{^L_I_K} + \mathcal{F}_{LIM}\mathcal{F}\indices{^L_N_K} + \mathcal{F}_{LNI}\mathcal{F}\indices{^L_M_K} = 0 \quad \text{or} \quad \mathcal{F}_{L[MN} \mathcal{F}\indices{^L_{I]}_K} = 0\,, \end{equation} taking the total antisymmetry $\mathcal{F}_{MNL} = \mathcal{F}_{[MNL]}$ into account. When \eqref{eqn:quadraticc} holds, we find an effective parameter $\widehat{\Lambda}_{12}^N$ that satisfies \eqref{eqn:gaugealgclosed}, namely \begin{equation}\label{eqn:structurecoeff} \widehat{\Lambda}_{12}^K = \mathcal{F}\indices{^K_I_J} \widehat{\Lambda}_1^I \widehat{\Lambda}_2^J\,. \end{equation} Remembering the fact that the hatted quantities depend only on the extended directions $\mathds{X}$, it becomes clear that the covariant fluxes $\mathcal{F}\indices{^K_I_J}$ can depend only on these directions, if at all. Otherwise the gauge algebra would not be closed. But as one sees from \eqref{eqn:coeffanholo}, $\mathcal{F}\indices{^K_I_J}$ depends on the compact directions $\mathds{Y}$ only. So, for the gauge algebra to close, the covariant fluxes have to be constant: \begin{equation}\label{eqn:constfluxes} \mathcal{F}_{NML}\,:\quad \text{constant}\,.
\end{equation} The closure condition \eqref{eqn:gaugealgclosed} is known to hold if the strong constraint \eqref{eqn:strongconstraint} is imposed. Within the Scherk-Schwarz ansatz, the strong constraint is satisfied if and only if the twist $U\indices{^M_N}$ fulfills it. But the mapping between covariant fluxes and twists, i.e. the inverse of \eqref{eqn:coeffanholo}, is not trivial. Hence it is not obvious how to impose the strong constraint on the level of the covariant fluxes $\mathcal{F}_{IJK}$ directly. In this context the constraints \eqref{eqn:quadraticc} and \eqref{eqn:constfluxes} are very useful: if one of them is violated, the strong constraint is violated as well. Another check of whether the strong constraint is violated can be performed as follows: provided $\partial_M U\indices{_N^M} = 0$, which we will assume, as is usual in Scherk-Schwarz compactifications, a consequence of the strong constraint is \begin{equation}\label{eqn:strongconstfluxes} \mathcal{F}_{MNL} \mathcal{F}^{MNL} = 0\,. \end{equation} In order to confirm this we compute \begin{align} \mathcal{F}_{MNL} \mathcal{F}^{MNL} &= 3 \Omega_{MNL}\Omega^{MNL} + 6 \Omega_{MNL}\Omega^{LMN} \nonumber \\ &= 3 \partial_M U\indices{_N^L} \partial^M U\indices{^N_L} - 6 \partial_M U\indices{_N^L} \partial_L U^{NM} = 3 \partial_M U\indices{_N^L} \partial^M U\indices{^N_L} = 0 \end{align} by using \eqref{eqn:fluxescoeffanholo} and the strong constraint \eqref{eqn:strongconstraint} in the last step. To see that the second term in the second line vanishes, we used \begin{equation} \partial_M \partial_L \left( U\indices{_N^L} U^{NM} \right) = 0 = \partial_M U\indices{_N^L} \partial_L U^{NM} \quad \text{with} \quad \partial_M U\indices{_N^M} = 0\,. \end{equation} The last expression can also be written as \begin{equation}\label{eqn:FLLN=0} U\indices{_L^M} \partial_M U\indices{_N^K} U\indices{^L_K} = \Omega\indices{_L_N^L} = -\Omega\indices{^L_L_N} = 0 \quad \leftrightarrow \quad \mathcal{F}\indices{^L_L_N} = 0\,.
\end{equation} A similar condition will be given below for the Killing vectors. It guarantees that the generalized Lie derivative $\mathcal{L}_{U\indices{_N^M}} \cdot$ leaves densities invariant. Summarizing this discussion, there is the following hierarchy of constraints: \begin{center} \begin{tikzpicture}[>=stealth',node distance=4em] \node (sc) [draw, rectangle] {strong constraint $\partial_M\partial^M \cdot = 0$ and compactification ansatz}; \node (sum) [below left of=sc, anchor=east, yshift=-2em, draw, rectangle] {$\mathcal{F}_{MNL}\mathcal{F}^{MNL}=0$}; \node (close1) [below right of=sc, anchor=west, yshift=-1em, draw, rectangle] {$\begin{aligned} \mathcal{F}_{MNL} &=\text{constant} \\ \mathcal{F}_{L[MN} \mathcal{F}\indices{^L_{I]}_K} &= 0 \end{aligned}$}; \node (close2) [below of=close1, anchor=north, yshift=0.5em, draw, rectangle] {closure of C-bracket\,.}; \draw[->] (sc) to node[anchor=east,xshift=-1em] {and $\mathcal{F}\indices{^L_L_N}=0$} (sum); \draw[->] (sc) -- (close1); \draw[<->] (close1) -- (close2); \end{tikzpicture} \end{center} Combining \eqref{eqn:twistedgenLie} with \eqref{eqn:genLieKKAMmu} and \eqref{eqn:genLieKKEAM} respectively, one gets the generalized Lie derivatives \begin{align} \label{eqn:gendiffeomorphAMmu} {\mathcal L}_\xi A_{M\mu} &= L_\xi A_{M\mu} -\partial_\mu \widehat{\Lambda}_M + \mathcal{F}_{MNL} \widehat{\Lambda}^N A\indices{^L_\mu} \quad \text{and} \\ \label{eqn:gendiffeomorphEAM} {\mathcal L}_\xi E\indices{^A_M} &= L_\xi E\indices{^A_M} + \mathcal{F}_{MNL} \widehat{\Lambda}^N E^{AL} \end{align} for the twisted fields. It is obvious that both $A_{M\mu}$ and $E\indices{^A_M}$ transform under generalized diffeomorphisms with non-vanishing $\Lambda^M$ as non-abelian vector fields. With the twist introduced by the Scherk-Schwarz ansatz, we have transformed the abelian gauge symmetry of the Kaluza-Klein ansatz into a non-abelian one.
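The flux constraints collected above, the Jacobi identity \eqref{eqn:quadraticc}, the closure relation \eqref{eqn:gaugealgclosed} with the effective parameter \eqref{eqn:structurecoeff}, the contraction \eqref{eqn:strongconstfluxes} and the trace condition \eqref{eqn:FLLN=0}, are finite algebraic statements that can be checked numerically for any candidate flux tensor. A minimal sketch, using an $\mathfrak{su}(2)$-type geometric flux $\mathcal{F}\indices{_a_b^c} = \epsilon_{abc}$ for $D-d=3$ as an illustrative choice of our own:

```python
import numpy as np

n = 3                                    # toy choice D - d = 3
dim = 2 * n
eta = np.block([[np.zeros((n, n)), np.eye(n)],
                [np.eye(n), np.zeros((n, n))]])   # O(n,n) metric

eps = np.zeros((n, n, n))                # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

# constant, totally antisymmetric covariant fluxes with F_ab^c = eps_abc
F = np.zeros((dim, dim, dim))
for a in range(n):
    for b in range(n):
        for c in range(n):
            F[a, b, n + c] = eps[a, b, c]
            F[a, n + c, b] = -eps[a, b, c]
            F[n + c, a, b] = eps[a, b, c]

# Jacobi identity: the cyclic sum of F_{LMN} F^L_{IK} has to vanish
Fup = np.einsum('lp,pik->lik', eta, F)   # F^L_{IK}
J = np.einsum('lmn,lik->mnik', F, Fup)
jacobi = J + np.transpose(J, (1, 2, 0, 3)) + np.transpose(J, (2, 0, 1, 3))
assert np.allclose(jacobi, 0)

# closure of two gauge transformations acting on a vector V_M
def delta(lam, v):                       # delta_Lam V_M = F_{MNL} Lam^N V^L
    return np.einsum('mnl,n,l->m', F, lam, eta @ v)

rng = np.random.default_rng(0)
lam1, lam2, vec = rng.normal(size=(3, dim))
lam12 = np.einsum('kp,pij,i,j->k', eta, F, lam1, lam2)  # F^K_{IJ} L1^I L2^J
comm = delta(lam1, delta(lam2, vec)) - delta(lam2, delta(lam1, vec))
assert np.allclose(comm, delta(lam12, vec))

# strong-constraint corollaries: F_{MNL} F^{MNL} = 0 and F^L_{LN} = 0
Fuuu = np.einsum('ma,nb,lc,abc->mnl', eta, eta, eta, F)
assert np.isclose(np.einsum('mnl,mnl->', F, Fuuu), 0.0)
assert np.allclose(np.einsum('lp,pln->n', eta, F), 0.0)
```

Any other constant, totally antisymmetric flux tensor can be substituted for this toy choice; a failure of one of the last two assertions would show that no strong-constraint-respecting twist can generate it.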
The $2(D-d)$ required Killing vectors $K\indices{_I^{\hat J}}$ have to generate right-translations which leave the generalized vielbein $E\indices{^{\hat A}_{\hat M}}$ and also the gauge transformation generated by $\xi^{\hat M}$ invariant. This is the case when \begin{equation} \mathcal{L}_{K\indices{_I^J}} U\indices{_N^M} = 0 \end{equation} and the $K\indices{_I^{\hat J}}$ in the external directions vanish. In this case the generators of $G_\mathrm{L}$ and $G_\mathrm{R}$ commute \begin{equation} \mathcal{L}_{K\indices{_I^{\hat J}}} \mathcal{L}_{\xi^{\hat M}} \mathcal{H}^{\hat M\hat N} = \mathcal{L}_{\xi^{\hat M}} \mathcal{L}_{K\indices{_{\hat I}^{\hat J}}} \mathcal{H}^{\hat M\hat N}\,, \end{equation} and one obtains the direct product $G_\mathrm{L}\times G_\mathrm{R}$ from which we started. Of course there are also structure coefficients for the group of isometries associated with the Killing vectors. They are calculated in the same way as the covariant fluxes in \eqref{eqn:twistedgenLie}. This gives rise to \begin{equation} \mathcal{L}_{K\indices{_I^{\hat M}}} K\indices{_J^{\hat N}} = \tilde{\mathcal F}\indices{_I_J^K} K\indices{_K^{\hat N}}\;, \end{equation} with \begin{equation}\label{eqn:structurekilling} \tilde{\mathcal F}\indices{_I_J^K} = K\indices{_I^N} \partial_N K\indices{_J^M} K\indices{^K_M} + K^{KN} \partial_N K\indices{_I^M} K_{JM} + K\indices{_J^N} \partial_N K\indices{^K_M} K\indices{_I^M}\,. \end{equation} Here $K\indices{^I_J}$ again denotes the inverse transpose of $K\indices{_I^J}$ and $K\indices{_I^L} K\indices{^J_L} = \delta^J_I$. Nevertheless, in general, $K\indices{_I^J}$ is not an O$(D-d,D-d)$ matrix. Hence, its first index cannot be raised or lowered with $\eta_{MN}$ or $\eta^{MN}$, respectively. Furthermore, the transformations generated by $K\indices{_I^J}$ have to leave densities, like $e^{-2 \phi'}$, invariant.
For the Kaluza-Klein ansatz from the last section, this constraint is fulfilled trivially, but here we have to check that \begin{equation}\label{eqn:invdensity} \mathcal{L}_{K\indices{_I^J}} \phi' = K\indices{_I^J} \partial_J \phi' - \frac{1}{2} \partial_J K\indices{_I^J} = - \frac{1}{2} \partial_J K\indices{_I^J} = 0 \quad \rightarrow \quad \partial_J K\indices{_I^J} = 0\,. \end{equation} As for the rest of the paper, in the first step we assumed $\phi'=\,$constant. In analogy with \eqref{eqn:FLLN=0}, this condition can also be expressed in terms of the structure constants $\tilde{\mathcal F}_{IJK}$, namely \begin{equation} \tilde{\mathcal F}\indices{^I_I_J} = 0\,. \end{equation} Let us note that the condition \eqref{eqn:invdensity} can be used to prove that the Lagrangian density no longer depends on the internal coordinates. To see this, consider the action of a Killing vector $K_I$ on the Lagrangian defining DFT which, being a scalar density, transforms as \begin{equation}\label{eqn:genlieLdft2} \delta_{K_I} L_\mathrm{DFT} = \partial_J ( K\indices{_I^J} L_\mathrm{DFT}) = \partial_J K\indices{_I^J} L_\mathrm{DFT} + K\indices{_I^J} \partial_J L_\mathrm{DFT} = K\indices{_I^J} \partial_J L_\mathrm{DFT} = 0\;, \end{equation} where we used \eqref{eqn:invdensity} to drop the term with the partial derivative acting on the Killing vectors $K\indices{_I^J}$. Because $K\indices{_I^J}$ consists of $2(D-d)$ linearly independent vector fields, from this equation we can immediately conclude \begin{equation} \partial_J L_\mathrm{DFT} = 0\,. \end{equation} This shows that $L_\mathrm{DFT}$ does not depend on the internal coordinates $\mathds{Y}$ when there are $2(D-d)$ linearly independent Killing vectors. Hence, according to our notation, the Lagrange density $L_\mathrm{DFT}$ can be written as $\widehat{L}_\mathrm{DFT}$.
\medskip In the following we want to argue that the Scherk-Schwarz compactification is consistent in the strong Kaluza-Klein sense that each solution of the lower-dimensional theory can be lifted to a solution of the original, higher-dimensional theory. We first note that, by definition, the Killing vectors leave the generalized Ricci tensor invariant, \begin{align} \label{eqn:genliegenricci} \delta_{K_I} \mathcal{R}_{\hat M\hat N} &= 0\,. \end{align} It is now easy to see that this equation is solved by \begin{equation}\label{eqn:ansatzgenricci} \mathcal{R}_{\hat M\hat N} = U\indices{^{\hat I}_{\hat M}} \widehat{\mathcal R}_{\hat I\hat J} U\indices{^{\hat J}_{\hat N}}\,, \end{equation} using \begin{equation} \mathcal{L}_{K_I} U\indices{^{\hat L}_{\hat M}} = 0 \quad \text{and} \quad \mathcal{L}_{K_I} \widehat{\mathcal R}_{\hat L \hat K} = 0\,. \end{equation} Now, acting with $U\indices{_{\hat I}^{\hat M}}$, the inverse transpose of $U\indices{^{\hat I}_{\hat M}}$, we can conclude \begin{equation} \mathcal{R}_{\hat M\hat N} = 0 \quad \leftrightarrow \quad \widehat{\mathcal R}_{\hat M\hat N} = 0\,. \end{equation} Hence, once the $\mathds{Y}$-independent part of the equations of motion is solved, we can immediately construct the higher-dimensional Ricci tensor (satisfying the original DFT equations) via \eqref{eqn:ansatzgenricci}, thus showing the consistency of the Scherk-Schwarz reduction. Put differently, the dashed and the solid path in the diagram on page \pageref{fig:consistentcomp} commute.
For our analysis in subsequent chapters we need the explicit definition of the generalized Ricci tensor in the lower-dimensional theory, which is computed from \begin{equation} \widehat{\mathcal K}_{\hat M\hat N} = \frac{\delta \widehat{S}_\mathrm{eff}}{\delta \widehat{\mathcal H}^{\hat M\hat N}} \quad \text{with} \quad S_\mathrm{eff}=\int d^{2D} X \,\widehat{L}_\mathrm{DFT} \end{equation} using the projection \begin{equation} \widehat{\mathcal R}_{\hat M\hat N} = \widehat{P}_{\hat M\hat K} \widehat{\mathcal K}^{\hat K\hat L} \bar{\widehat{P}}_{\hat L\hat N} + \bar{\widehat{P}}_{\hat M\hat K} \widehat{\mathcal K}^{\hat K\hat L} \widehat{P}_{\hat L\hat N}\;. \end{equation} (See section~\ref{sec:dfteom} for details on the projection). Finally, we want to mention that the generalized fluxes presented in this section are closely related to the embedding tensor $\Theta\indices{_I^\alpha}$ of gauged supergravities. In this context they describe a subset of the global O$({D-d},{D-d})$ symmetry transformations of the compact directions, which is promoted to a gauge symmetry in the effective theory. Comparing the formalism reviewed in \cite{Samtleben:2008pe} and the one shown here, one finds the connection \begin{equation}\label{eqn:embeddingtensor} \mathcal{F}\indices{_I_J^K} = \Theta\indices{_I^\alpha} (t_\alpha)\indices{_J^K} = \left( X_I \right)\indices{_J^K} \,, \end{equation} where $t_\alpha$ are $(D-d)\left[2(D-d) - 1\right]$ different O$({D-d},{D-d})$ generators and $(t_\alpha)\indices{_J^K}$ is the corresponding representation with respect to $2(D-d)$-dimensional vectors. One imposes two consistency constraints on the embedding tensor, namely the linear and the quadratic constraint. An explicit discussion of these constraints for $D-d=2,\,3$ and the connection to DFT is given in \cite{Dibitetto:2012rk}.
\subsection{Gauged (super)gravity and its vacua}\label{sec:gaugedgravity} In section~\ref{sec:scherkschwarz}, we proved that a consistent Scherk-Schwarz ansatz leads to a $\mathds{Y}$-independent effective action $S_\mathrm{eff}$. The effective action is most conveniently obtained by starting from the formulation in \cite{Hohm:2013nja}, which reduces to the previous results in \cite{Aldazabal:2011nj,Geissbuhler:2011mx} for a Scherk-Schwarz ansatz. Following \cite{Hohm:2013nja}, let us first define a derivative \begin{equation}\label{eqn:covderiv} D_\mu = \partial_\mu - \mathcal{L}_{A\indices{^M_\mu}} \end{equation} which transforms covariantly under gauge transformations \eqref{eqn:ssansatzxi}. Applied to the generalized metric $\mathcal{H}_{MN}$, it gives rise to \begin{align} D_\mu \mathcal{H}_{MN} &= U\indices{^I_M} \widehat{D}_\mu \widehat{\mathcal H}_{IJ} U\indices{^J_N} \quad \text{with} \nonumber \\ \widehat{D}_\mu \widehat{\mathcal H}_{MN} &= \partial_\mu \widehat{\mathcal H}_{MN} + \mathcal{F}\indices{_M_J^I} \widehat{A}\indices{^J_\mu} \widehat{\mathcal H}_{IN} + \mathcal{F}\indices{_N_J^I} \widehat{A}\indices{^J_\mu} \widehat{\mathcal H}_{MI}\,. \end{align} The field strength of the gauge field $A\indices{_\mu^M}$ is defined in analogy with Yang-Mills theory by setting \begin{align}\label{eqn:F_mu_nu^M} F\indices{^M_\mu_\nu} &= 2 \partial_{[\mu} A\indices{^M_{\nu]}} - [ A_\mu, A_\nu ]^M_\mathrm{C} = \widehat{F}\indices{^N_\mu_\nu} U\indices{_N^M} \quad \text{with} \nonumber\\ \widehat{F}\indices{^M_\mu_\nu} &= 2 \partial_{[\mu} \widehat{A}\indices{^M_{\nu]}} - \mathcal{F}\indices{^M_N_L} \widehat{A}\indices{^N_\mu} \widehat{A}\indices{^L_\nu}\,. \end{align} It describes how two covariant derivatives commute \begin{equation} [D_\mu, D_\nu] = - \mathcal{L}_{F\indices{^M_\mu_\nu}}\,.
\end{equation} As shown in \cite{Hohm:2013nja}, $F\indices{^M_\mu_\nu}$ in general does not transform covariantly under gauge transformations, \begin{equation} \Delta_\xi F\indices{_\mu_\nu^M} = \delta_\xi F\indices{_\mu_\nu^M} - \mathcal{L}_\xi F\indices{_\mu_\nu^M} = \partial^M ( \partial_{[\mu} \xi^N A_{\nu]N} )\,. \end{equation} This problem is fixed by adding the partial derivative of a 2-form gauge potential to the field strength defined in \eqref{eqn:F_mu_nu^M}, which compensates for the wrong transformation behavior. But due to the special properties of the Scherk-Schwarz ansatz for the fields \eqref{eqn:twistofgenvielbein} and the gauge parameter \eqref{eqn:ssansatzxi}, the failure of covariance vanishes because the expression in the bracket depends on the external directions only. Hence, for a Scherk-Schwarz compactification, $F\indices{_\mu_\nu^M}$ is already a covariant field strength. A short calculation, which uses the result (2.32) from \cite{Hohm:2013nja}, shows that the Bianchi identity \begin{equation} D_{[\mu} F\indices{^M_\nu_{\rho]}} = 0 \end{equation} is also fulfilled for $F\indices{^M_\mu_\nu}$. Let us next discuss the field strength for the $B$-field, which is extended by a Chern-Simons term in order to be invariant under gauge transformations. This gives rise to the field strength \begin{equation} \widehat{G}_{\mu\nu\rho} = 3\partial_{[\mu} B_{\nu\rho]} + 3\partial_{[\mu} \widehat{A}\indices{^M_\nu} \widehat{A}_{M \rho ]} - \mathcal{F}_{MNL} \widehat{A}\indices{^M_\mu} \widehat{A}\indices{^N_\nu} \widehat{A}\indices{^L_\rho}\,. \end{equation} It transforms covariantly and fulfills the Bianchi identity \begin{equation} \partial_{[\mu} G_{\nu\rho\lambda]} = 0\,.
\end{equation} With these quantities at hand, the Kaluza-Klein action in \cite{Hohm:2013nja} reads \begin{gather} S_\mathrm{eff} = \int \mathrm{d}^d x \sqrt{-g} e^{-2\phi} \Bigl( \mathcal{R} + 4 \partial_\mu\phi \partial^\mu\phi -\frac{1}{12} \widehat{G}_{\mu\nu\rho} \widehat{G}^{\mu\nu\rho} \nonumber \\ \label{eqn:ddimeffaction} \qquad -\frac{1}{4} \widehat{\mathcal H}_{MN} \widehat{F}^{M\mu\nu} \widehat{F}\indices{^N_\mu_\nu} +\frac{1}{8} \widehat{D}_\mu \widehat{\mathcal H}_{MN} \widehat{D}^\mu \widehat{\mathcal H}^{MN} -\widehat{V}\Bigr)\,. \end{gather} Here $\mathcal{R}$ denotes the scalar curvature in the external directions. In the internal directions, the Lagrange density $L_\mathrm{DFT}$ is constant. Thus the integrals over these directions can be carried out and give rise to an overall factor, which is neglected in \eqref{eqn:ddimeffaction}. This result is equivalent to the one presented in \cite{Aldazabal:2011nj}. Finally, one has to calculate the scalar potential \begin{equation} \widehat{V} = - \widehat{R}(\phi', \widehat{\mathcal H}^{MN})\,. \end{equation} Due to the properties of the Scherk-Schwarz ansatz, it is constant with respect to the internal directions $\mathds{Y}$. Hence it is sufficient to calculate it at one special point, say $Y^N=0$. Using the definition \eqref{eqn:genricciscalar}, $\phi'=\text{const.}$, \begin{equation}\label{eqn:diUY=0} \left. \partial_I U\indices{^J_K} \right|_{Y^N=0} = \Omega\indices{_I^J_K} \quad \text{and} \quad \left. \partial_I \partial_J U\indices{^L_K} \right|_{Y^N=0} = \Omega\indices{_{(I}^L_M} \Omega\indices{_{J)}^M_K}\,, \end{equation} one obtains after some algebra \begin{equation}\label{eqn:scalarpotential} \widehat V = -\frac{1}{4} \mathcal{F}\indices{_I^K^L} \mathcal{F}\indices{_J_K_L} \widehat{\mathcal H}^{IJ} + \frac{1}{12} \mathcal{F}_{IKM} \mathcal{F}_{JLN} \widehat{\mathcal H}^{IJ} \widehat{\mathcal H}^{KL} \widehat{\mathcal H}^{MN}\,.
\end{equation} Again, this result is consistent with \cite{Aldazabal:2011nj,Geissbuhler:2013uka}. In the remaining part of this section and in section~\ref{sec:minkowski}, all quantities belong to the effective theory and thus only depend on the $d$ external coordinates $\mathds{X}$. To avoid overloading the notation there, we drop the hat we introduced to emphasize that quantities depend on $\mathds{X}$ only. In section~\ref{sec:constrtwists}, we start to use the hat to distinguish between $\mathds{X}$ and $\mathds{Y}$ dependent quantities again. Since we have performed a consistent compactification, each solution of the effective action is also a solution of the DFT we started with. So in order to find consistent backgrounds, we have to solve the field equations of the effective action. These equations are obtained by varying the effective action $S_\mathrm{eff}$, which gives rise to \begin{align} \label{eqn:eomeffmetric} 0 &= \frac{\delta S_\mathrm{NS}}{\delta g^{ij}} - \frac{1}{2} \mathcal{H}_{MN} F\indices{^M_\mu^\rho} F\indices{^N_\nu_\rho} + \frac{1}{8} D_\mu \mathcal{H}_{MN} D_\nu \mathcal{H}^{MN} \\ 0 &= \frac{\delta S_\mathrm{NS}}{\delta \phi} - \frac{1}{4} \mathcal{H}_{MN} F^{M\mu\nu} F\indices{^N_\mu_\nu} + \frac{1}{8} D_\mu \mathcal{H}_{MN} D^\mu \mathcal{H}^{MN} - V \\ 0 &= 2 D_\nu \left( \mathcal{H}_{MN} F^{N\mu\nu} \right) - 4\partial_\nu \phi \mathcal{H}_{MN} F^{N\mu\nu} + F\indices{^M_\nu_\rho} G^{\mu\nu\rho} + \mathcal{F}\indices{_M_N^L} \mathcal{H}_{LK} D^\mu \mathcal{H}^{NK} \quad \text{and} \\ \label{eqn:eomgenRicci} 0 &= P_{MK} \mathcal{K}^{KL} \bar{P}_{LN} + \bar{P}_{MK} \mathcal{K}^{KL} P_{LN} \end{align} with \begin{equation} \label{eqn:eomkappaMN} \mathcal{K}^{MN} = F^{M\mu\nu} F\indices{^N_\mu_\nu} + D_\mu D^\mu \mathcal{H}^{MN} - 2\partial_\mu \phi D^\mu \mathcal{H}^{MN} + 4 \frac{\delta V}{\delta \mathcal{H}_{MN}} \end{equation} and, additionally, the well-known equations of motion of the string's NS/NS sector \begin{align}
\frac{\delta S_\mathrm{NS}}{\delta g^{ij}} &= \mathcal{R}_{\mu\nu} + 2\nabla_\mu\partial_\nu \phi - \frac{1}{4} G_{\mu\rho\lambda} G\indices{_\nu^\rho^\lambda} \\ \frac{\delta S_\mathrm{NS}}{\delta \phi} &= \mathcal{R} + 4 \left( \nabla_\mu \nabla^\mu \phi - \partial_\mu \phi \partial^\mu \phi \right) - \frac{1}{12} G_{\mu\nu\rho} G^{\mu\nu\rho} \\ 0 &= \nabla^\mu G_{\mu\nu\rho} - 2 \partial^\mu \phi G_{\mu\nu\rho} \end{align} in the low energy approximation. In \eqref{eqn:eomgenRicci} and \eqref{eqn:eomkappaMN}, we have applied the projectors discussed in section~\ref{sec:dfteom}. They account for the fact that not all components of ${\mathcal H}^{MN}$ are physical degrees of freedom. \section{Minkowski vacua}\label{sec:minkowski} There are various ways to solve the equations of motion \eqref{eqn:eomeffmetric}--\eqref{eqn:eomkappaMN} of the effective theory. The most straightforward one is to assume that we have a $d$-dimensional Minkowski space. In this case the metric is $g_{\mu\nu}=\eta_{\mu\nu}$, while the dilaton $\phi$ and the generalized metric $\mathcal{H}^{MN}$ of the internal space are constant. Furthermore, the $B$-field $B_{\mu\nu}$ and the vectors $A_{M\mu}$ vanish. Now the field equations discussed in the last section simplify dramatically to \begin{equation}\label{eqn:minkowskivacuum} \mathcal{R}_{\mu\nu} = 0\,, \quad V = 0 \quad \text{and} \quad \mathcal{K}^{MN} = \frac{\delta V}{\delta \mathcal{H}_{MN}}\,. \end{equation} The vacua obtained from these equations fulfill the following requirements: \begin{itemize} \item They correspond to minima of the effective gauged supergravity potential that must have vanishing cosmological constant. Hence the uncompactified dimensions are described by flat Minkowski space-time. At this point it is worth noting that the generalized curvature $\cal R$ of DFT in the internal directions $\mathds{Y}$ precisely corresponds to the vacuum energy in the effective theory.
Hence the vanishing of the generalized Ricci tensor ${\cal R}_{MN}$ ensures that we are dealing with vacua with vanishing cosmological constant. \item The fluctuations around the Minkowski vacua are stable, i.e. the scalar mass matrix is at least positive semi-definite, as we show in section~\ref{sec:spectrum}. Hence, the scalar potential in general leads to the stabilization of some moduli. \end{itemize} In order to solve the equations \eqref{eqn:minkowskivacuum}, let us first have a closer look at the variation of the scalar potential \eqref{eqn:scalarpotential} with respect to the generalized metric, \begin{equation}\label{eqn:kappamnfull} \mathcal{K}^{MN} = \frac{\delta V}{\delta \mathcal{H}_{MN}} = \frac{1}{4}\left( -\mathcal{F}^{MKL} \mathcal{F}\indices{^N_K_L} + \mathcal{F}\indices{^M_I_K} \mathcal{F}\indices{^N_J_L} \mathcal{H}^{IJ} \mathcal{H}^{KL} \right)\,. \end{equation} It has to be evaluated at the value $\bar{\mathcal H}^{MN}$ which $\mathcal{H}^{MN}$ acquires in the vacuum. We express this value in terms of the vacuum's generalized vielbein \begin{equation}\label{eqn:vacuumgenvielbein} \bar{\mathcal H}^{MN} = \bar{E}\indices{_A^M} \delta^{AB} \bar{E}\indices{_B^N}\,. \end{equation} In the following, flat and curved indices will be related by means of this \textit{background} frame field, which in particular has the consequence that objects with flat indices, which usually are constant, become $\mathds{X}$-dependent. By applying this prescription to the indices of \eqref{eqn:kappamnfull}, one obtains \begin{equation}\label{eqn:confluxesintern} \mathcal{K}^{MN} = \frac{1}{4}\left( \mathcal{F}\indices{^M_A_B} \eta^{BC} \mathcal{F}\indices{^N_C_D} \eta^{DA} - \mathcal{F}\indices{^M_A_B} \delta^{BC} \mathcal{F}\indices{^N_C_D} \delta^{DA} \right)\,. \end{equation} A further simplification is achieved when barred indices are used (see \eqref{eqn:trafobaredind} in section~\ref{sec:dftandsym}).
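Both the potential \eqref{eqn:scalarpotential} and its variation \eqref{eqn:kappamnfull} are finite sums of flux contractions, so they can be evaluated directly for a candidate background. A numerical sketch of our own: an illustrative $\mathfrak{su}(2)$-type geometric flux for $D-d=3$, evaluated at the trivial generalized metric $\bar{\mathcal H}^{MN} = \delta^{MN}$ (which is a valid O$(3,3)$ element):

```python
import numpy as np

n = 3
dim = 2 * n
eta = np.block([[np.zeros((n, n)), np.eye(n)],
                [np.eye(n), np.zeros((n, n))]])

eps = np.zeros((n, n, n))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

F = np.zeros((dim, dim, dim))            # toy flux with F_ab^c = eps_abc
for a in range(n):
    for b in range(n):
        for c in range(n):
            F[a, b, n + c] = eps[a, b, c]
            F[a, n + c, b] = -eps[a, b, c]
            F[n + c, a, b] = eps[a, b, c]

H = np.eye(dim)                          # candidate vacuum: H^{MN} = delta^{MN}

# V = -1/4 F_I^{KL} F_{JKL} H^{IJ} + 1/12 F_{IKM} F_{JLN} H^{IJ} H^{KL} H^{MN}
V = (-0.25 * np.einsum('iab,ka,lb,jkl,ij->', F, eta, eta, F, H)
     + np.einsum('ikm,jln,ij,kl,mn->', F, F, H, H, H) / 12.0)

# K^{MN} = 1/4 (-F^{MKL} F^N_{KL} + F^M_{IK} F^N_{JL} H^{IJ} H^{KL})
K1 = np.einsum('ma,kb,lc,abc,nd,dkl->mn', eta, eta, eta, F, eta, F)
K2 = np.einsum('ma,aik,nb,bjl,ij,kl->mn', eta, F, eta, F, H, H)
K = 0.25 * (K2 - K1)

assert np.isclose(V, -1.5)               # nonzero: not a Minkowski point
assert np.allclose(K, K.T)               # symmetric, as needed below
```

Since $V\neq0$ here, this particular point does not satisfy \eqref{eqn:minkowskivacuum}; scanning flux choices and metric moduli for points with $V=0$ is exactly the search problem posed in this section.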
In this case the invariant metric $\eta_{\bar A\bar B}$ and the flat generalized metric $\delta_{\bar A\bar B}$ have non-vanishing entries for $\bar A=\bar B$ only. Using this simplification one is able to explicitly evaluate the two terms in \eqref{eqn:confluxesintern} ($\sigma=-1$ gives rise to the first term, while $\sigma=+1$ reproduces the second one) as \begin{equation} \begin{split} \mathcal{F} \indices{^{\bar M}^{\bar a}^{\bar b}} \mathcal{F} \indices{^{\bar N}^{\bar c}^{\bar d}} \eta_{\bar b\bar c} \eta_{\bar a\bar d} + 2\sigma \mathcal{F}\indices{^{\bar M}^{\bar a}_{\bar b}} \mathcal{F}\indices{^{\bar N}_{\bar c}^{\bar d}} \eta^{\bar b\bar c} \eta_{\bar a\bar d} \,+\,& \mathcal{F}\indices{^{\bar M}_{\bar a}_{\bar b}} \mathcal{F}\indices{^{\bar N}_{\bar c}_{\bar d}} \eta^{\bar b\bar c} \eta^{\bar a\bar d} \\ &= \begin{cases} \mathcal{F}\indices{^{\bar M}_{\bar A}_{\bar B}} \eta^{\bar B\bar C} \mathcal{F}\indices{^{\bar N}_{\bar C}_{\bar D}} \eta^{\bar D\bar A} & \text{for } \sigma=-1 \\ \mathcal{F}\indices{^{\bar M}_{\bar A}_{\bar B}} \delta^{\bar B\bar C} \mathcal{F}\indices{^{\bar N}_{\bar C}_{\bar D}} \delta^{\bar D\bar A} & \text{for } \sigma=+1 \end{cases}\,, \end{split} \end{equation} where we have used the parameterization \begin{equation} \mathcal{F}_{\bar M\bar A\bar B} = \begin{pmatrix} \mathcal{F}\indices{_{\bar M}^{\bar a}^{\bar b}} & \mathcal{F}\indices{_{\bar M}^{\bar a}_{\bar b}} \\ \mathcal{F}\indices{_{\bar M}_{\bar a}^{\bar b}} & \mathcal{F}\indices{_{\bar M}_{\bar a}_{\bar b}} \end{pmatrix} \end{equation} for the covariant fluxes. With this result it is straightforward to compute \begin{equation} \mathcal{K}^{\bar M\bar N} = \mathcal{F}\indices{^{\bar M}_{\bar b}^{\bar a}} \mathcal{F}\indices{^{\bar N}_{\bar c}^{\bar d}} \eta^{\bar b\bar c} \eta_{\bar a\bar d}\,.
\end{equation} Furthermore, the projectors $P_{MK}$ and $\bar{P}_{LN}$, needed to calculate the generalized Ricci tensor \eqref{eqn:genriccitensor}, take the simple form \begin{equation} \bar{P}_{\bar A\bar B} = \frac{1}{2}\left( \eta_{\bar A\bar B} + \delta_{\bar A\bar B} \right) = \begin{pmatrix} 0 & 0 \\ 0 & \eta_{\bar a\bar b} \end{pmatrix} \quad \text{and} \quad P_{\bar A\bar B} = \frac{1}{2}\left( \eta_{\bar A\bar B} - \delta_{\bar A\bar B} \right) = \begin{pmatrix} \eta_{\bar a\bar b} & 0 \\ 0 & 0 \end{pmatrix} \;, \end{equation} in barred, flat indices. Hence the generalized Ricci tensor reads \begin{equation} \mathcal{R}_{\bar A\bar B} = - \begin{pmatrix} 0 & \mathcal{K}\indices{^{\bar a}_{\bar b}} \\ \mathcal{K}\indices{_{\bar a}^{\bar b}} & 0 \end{pmatrix}\,. \end{equation} This tensor is symmetric and thus the equation of motion $\mathcal{R}_{MN}=0$ reduces to \begin{equation}\label{eqn:fluxeseom} \mathcal{K}\indices{^{\bar a}_{\bar b}} = \mathcal{F}\indices{^{\bar a}_{\bar d}^{\bar c}} \mathcal{F}\indices{_{\bar b}_{\bar e}^{\bar f}} \eta^{\bar d\bar e} \eta_{\bar c\bar f} = 0\,. \end{equation} Only backgrounds that satisfy this equation are consistent. Thus, in addition to \eqref{eqn:quadraticc} and \eqref{eqn:constfluxes}, we have to impose the further constraint \eqref{eqn:fluxeseom} on the generalized fluxes. Like the Jacobi identity \eqref{eqn:quadraticc}, it is quadratic in the fluxes. In summary, a valid background (without warp factor) is the direct product of a $d$-dimen\-sional Minkowski space and a twisted torus in the compact $(D-d)$-dimensional space. The twist of the torus is described in terms of the covariant fluxes $\mathcal{F}_{ABC}$. They are not arbitrary, but severely constrained. \subsection{Spectrum of the effective theory}\label{sec:spectrum} In the last section we discussed vacua for the effective field theory in $d$ dimensions. Now the focus is on small perturbations around these vacua.
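The projection onto the physical part of the equation of motion can also be carried out numerically. A sketch with assumptions of our own: an illustrative $\mathfrak{su}(2)$-type flux and the trivial background $\mathcal{H}=\delta$, for which the lower-index projectors are simply $P = (\eta - \delta)/2$ and $\bar P = (\eta + \delta)/2$ in curved indices:

```python
import numpy as np

n = 3
dim = 2 * n
eta = np.block([[np.zeros((n, n)), np.eye(n)],
                [np.eye(n), np.zeros((n, n))]])

eps = np.zeros((n, n, n))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

F = np.zeros((dim, dim, dim))            # toy flux with F_ab^c = eps_abc
for a in range(n):
    for b in range(n):
        for c in range(n):
            F[a, b, n + c] = eps[a, b, c]
            F[a, n + c, b] = -eps[a, b, c]
            F[n + c, a, b] = eps[a, b, c]

# lower-index projectors for the background H = delta
P = 0.5 * (eta - np.eye(dim))
Pbar = 0.5 * (eta + np.eye(dim))
assert np.allclose(P @ eta @ P, P)       # P_{MK} eta^{KL} P_{LN} = P_{MN}
assert np.allclose(Pbar @ eta @ Pbar, Pbar)
assert np.allclose(P @ eta @ Pbar, 0)    # orthogonal projectors

# K^{MN} from the variation of the potential, evaluated at H = delta
K1 = np.einsum('ma,kb,lc,abc,nd,dkl->mn', eta, eta, eta, F, eta, F)
K2 = np.einsum('ma,aik,nb,bik->mn', eta, F, eta, F)
K = 0.25 * (K2 - K1)

# projected equation of motion: R_{MN} = P K Pbar + Pbar K P
R = P @ K @ Pbar + Pbar @ K @ P
assert np.allclose(R, R.T)               # symmetric, as stated in the text
assert not np.allclose(R, 0)             # this toy point is not a vacuum
```

A candidate background solves the equation of motion precisely when `np.allclose(R, 0)` holds; the toy point above fails this test, consistent with its nonvanishing scalar potential.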
They play an important r\^ole in the process of moduli stabilization, which fixes some or even all of the scalar fields $\mathcal{H}_{MN}$. This process is governed by mass terms in the effective field theory's Lagrangian. Due to these terms some scalars obtain masses and are not excited in the ground state. The mass term arises from the second order variation of the scalar potential, \begin{equation}\label{eqn:secondvarV} \delta^2 V = \sum\limits_{\alpha\,,\,\beta}\left( \frac{\delta^2 V}{\delta \mathcal{H}_{IJ} \delta \mathcal{H}_{KL}} \frac{\delta \mathcal{H}^{IJ}}{\delta \phi_\alpha} \frac{\delta \mathcal{H}^{KL}}{\delta \phi_\beta} + \frac{\delta V}{\delta \mathcal{H}_{KL}} \frac{\delta^2 \mathcal{H}^{KL}}{ \delta \phi_\alpha \, \delta \phi_\beta} \right) \delta \phi_\alpha \delta \phi_\beta \,. \end{equation} Here we have taken into account that $\mathcal{H}^{MN}$ has to be O$({D-d},{D-d})$ valued and thus not all of its entries correspond to physical degrees of freedom. So we express the generalized metric $\mathcal{H}^{MN}$ in terms of scalar fields $\phi_\alpha$, $\alpha = 1,\dots,(D-d)^2$, which correspond to unconstrained, physical degrees of freedom. Furthermore, we define \begin{equation} \mathcal{M}_{IJKL} = \frac{\delta^2 V}{\delta \mathcal{H}_{IJ} \delta \mathcal{H}_{KL}} = \frac{1}{2} {\mathcal F}_{IKM} {\mathcal F}_{JLN} {\mathcal H}^{MN} \end{equation} in analogy with \eqref{eqn:kappamnfull} and use the abbreviation \begin{equation} \left( h_\alpha \right){}^{IJ} = \frac{\delta {\mathcal H}^{IJ}}{\delta \phi_\alpha} \,. \end{equation} Now, \eqref{eqn:secondvarV} takes the form \begin{equation}\label{eqn:secondvarV2} \delta^2 V = \sum\limits_{\alpha\,,\,\beta} \left[ \mathcal{M}_{IJKL} \left( h_\alpha \right){}^{IJ} \left( h_\beta \right){}^{KL} + \mathcal{K}_{KL} \frac{\delta }{\delta \phi_\alpha} \left( h_\beta \right){}^{KL} \right] \delta \phi_\alpha \delta \phi_\beta \,.
\end{equation} One can regard $\left( h_\alpha \right){}^{IJ}$ as an infinitesimal generator of a field variation of ${\cal H}^{IJ}$. Thus it has to be compatible with the constraint \eqref{eqn:constrvariation}. It is convenient to work in flat indices like in \eqref{eqn:confluxesintern}. We again use the generalized vielbein $\bar{E}\indices{^A_M}$ of the vacuum to transform curved indices into flat ones. Then the constraint \eqref{eqn:constrvariation} on the variation reads \begin{equation}\label{eqn:constvariation2} \left( h_\alpha \right){}^{AC} \eta_{CD} \delta^{DB} + \delta^{AC} \eta_{CD} \left( h_\alpha \right){}^{DB} = 0\,. \end{equation} In order to construct all generators which fulfill this equation, we switch to barred indices and define \begin{equation} \left( h_{\bar A\bar B} \right)^{\bar C\bar D} = \sqrt{2} \delta^{\bar C}_{[\bar A} \delta_{\bar B]\bar E} \eta^{\bar E\bar D} \quad \text{with} \quad \alpha = \begin{pmatrix} \bar A & \bar B \end{pmatrix}\,. \end{equation} For $\bar A < \bar B$ this leads to $2(D-d)(D-d-1)$ independent generators. Only $(D-d)^2$ are symmetric, the others are antisymmetric. We drop the antisymmetric ones, because the generalized metric is symmetric and so are its variations. Finally we switch back to unbarred indices. With these generators at hand, the generalized metric can be expressed by the exponential map \begin{equation}\label{eqn:genmetricfluctuation} \mathcal{H}^{AB} = \exp\left[ \sum\limits_\alpha \left( h_\alpha \right)^{AB} \phi_\alpha \right] = \delta^{AB} + \sum\limits_\alpha (h_\alpha)^{AB} \phi_\alpha + \frac{1}{2} \sum\limits_{\alpha,\,\beta} (h_\alpha)^{AC} \delta_{CD} (h_\beta)^{DB} \phi_\alpha \phi_\beta + \dots\;. \end{equation} We recall that we have used the vacuum vielbein to flatten curved indices. In the vacuum, all $\phi_\alpha$ vanish and according to \eqref{eqn:genmetricfluctuation}, the generalized metric equals $\mathcal{H}^{AB} = \delta^{AB}$.
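As a quick numerical cross-check of this parameterization, one can verify that a symmetric generator obeying \eqref{eqn:constvariation2} exponentiates to an O$(D-d,D-d)$-valued generalized metric. A minimal Python sketch on a toy two-dimensional doubled space (the generator is an illustrative choice, not a flux of a specific background):

```python
import numpy as np

# Toy doubled space with one dimension: eta is the O(1,1) pairing.
eta = np.array([[0.0, 1.0],
                [1.0, 0.0]])

# A symmetric generator h obeying h.eta + eta.h = 0, the linearized
# version of the O(1,1) constraint on the generalized metric.
h = np.array([[1.0, 0.0],
              [0.0, -1.0]])
assert np.allclose(h @ eta + eta @ h, 0.0)

phi = 0.7  # an arbitrary modulus value
# h is diagonal here, so exp(h*phi) can be taken entrywise.
H = np.diag(np.exp(np.diag(h) * phi))

# The exponential preserves the constraint: H.eta.H = eta.
assert np.allclose(H @ eta @ H, eta)
```

The same check goes through for any symmetric generator anticommuting with $\eta$; the diagonal choice merely keeps the matrix exponential elementary.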
Back in curved indices this gives rise to the vacuum generalized metric $\bar{\mathcal H}^{MN} = \mathcal{H}^{MN}(\phi_\alpha=0)$. With the parameterization of the generalized metric in \eqref{eqn:genmetricfluctuation}, one obtains \begin{equation} \left. \frac{\delta^2}{\delta \phi_\alpha \, \delta \phi_\beta} \mathcal{H}^{AB} \right|_{\phi_\gamma = 0} = \begin{cases} \left( h_\alpha \right)^{AC} \delta_{CD} \left( h_\beta \right)^{DB} & \text{for } \alpha \le \beta \\ \left( h_\beta \right)^{AC} \delta_{CD} \left( h_\alpha \right)^{DB} & \text{otherwise} \end{cases}\,. \end{equation} Using this result and \begin{equation} \mathcal{H}^{MN} = \bar{E}\indices{_A^M} \mathcal{H}^{AB} \bar{E}\indices{_B^N}\,, \end{equation} one is able to evaluate the variation \eqref{eqn:secondvarV} explicitly. Finally, \eqref{eqn:secondvarV2} gives rise to \begin{equation} \delta^2 V = \sum\limits_{\alpha\,,\,\beta} M_{\alpha\beta} \delta\phi_\alpha \delta\phi_\beta \end{equation} with the symmetric mass matrix \begin{equation}\label{eqn:massmatrix} M_{\alpha\beta} = \left( {\mathcal M}_{ABCD} + {\mathcal K}_{AD} \delta_{BC} \right) \left( h_\alpha \right){}^{AB} \left( h_\beta \right){}^{CD}\,. \end{equation} In order to identify massive scalar excitations, this matrix has to be diagonalized. Because $M_{\alpha\beta}$ is symmetric, this is always possible and leads to $(D-d)^2$ eigenvalues $\lambda_\alpha$ and the corresponding, orthonormal eigenvectors $v_\alpha$ with the components $\left(v_\alpha\right)_\beta$. In order to diagonalize, we rotate the generators $\left( h_\alpha \right){}^{AB}$ by defining \begin{equation}\label{eqn:rotatedgenh} ({\bar h}_\alpha ){}^{AB} := \sum\limits_{\beta} \left(v_\alpha\right)_\beta \left( h_\beta \right){}^{AB}\,. \end{equation} The generalized metric $\mathcal{H}^{AB}$ in \eqref{eqn:genmetricfluctuation} has to be invariant under this rotation.
Thus one also has to rotate the scalar fields \begin{equation} {\bar\phi}_\alpha := \sum\limits_{\beta} \left(v_\alpha\right)_\beta \phi_\beta\,. \end{equation} By plugging the rotated generators from \eqref{eqn:rotatedgenh} into the expression for the mass matrix \eqref{eqn:massmatrix}, one finally obtains the requested diagonal form \begin{equation} \bar M_{\alpha\beta} := \diag ( \lambda_\alpha )\,. \end{equation} The first order variation of the scalar potential and its vev vanish due to the effective theory's field equation \begin{equation} \frac{\delta V}{\delta \phi_\alpha} = 0\,. \end{equation} Here a projection like in \eqref{eqn:eomgenRicci} is not necessary, because the $\phi_\alpha$'s already describe only the physical degrees of freedom. Thus $V$ is only governed by second order perturbations, which lead to \begin{equation}\label{eqn:expansionV} V = 2 \sum\limits_\alpha \lambda_\alpha \phi_\alpha^2 + \mathcal{O}(\phi^3)\,. \end{equation} When inserting the expression for the generalized metric \eqref{eqn:genmetricfluctuation} into the kinetic term for the generalized metric in \eqref{eqn:ddimeffaction}, one obtains \begin{equation}\label{eqn:expansionKin} D_\mu \mathcal{H}_{MN} D^\mu \mathcal{H}^{MN} = \sum\limits_\alpha 4 \partial_\mu \phi_\alpha \partial^\mu \phi_\alpha + \text{interaction terms}\,. \end{equation} The interaction terms describe self-couplings among the scalars $\phi_\alpha$ and couplings between scalars and gauge bosons $a_{M\mu}$, which are fluctuations around the vev of $A_{M\mu}$. The quadratic part of the Lagrangian for the scalars $\phi_\alpha$ is obtained by plugging \eqref{eqn:expansionV} and \eqref{eqn:expansionKin} into the action \eqref{eqn:ddimeffaction} and reads \begin{equation} \mathcal{L}_\phi = \frac{1}{2} \sum\limits_\alpha \left( \partial_\mu \phi_\alpha \partial^\mu \phi_\alpha - 4 \lambda_\alpha \phi_\alpha^2 \right)\,. \end{equation} It identifies $2 \sqrt{\lambda_\alpha} = m_\alpha$ as the mass of the scalar field $\phi_\alpha$.
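The diagonalization procedure is plain linear algebra; a small numerical sketch with toy matrix entries (not the fluxes of any specific background):

```python
import numpy as np

# Toy symmetric mass matrix M_{alpha beta} for two scalars.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns ascending eigenvalues and orthonormal eigenvectors;
# the columns of v implement the rotation that diagonalizes M.
lam, v = np.linalg.eigh(M)
assert np.allclose(v.T @ M @ v, np.diag(lam))

# Masses of the rotated scalars: m_alpha = 2 sqrt(lambda_alpha);
# non-negative eigenvalues are required to avoid tachyons.
masses = 2.0 * np.sqrt(lam)
assert np.allclose(masses, [2.0, 2.0 * np.sqrt(3.0)])
```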
Thus the eigenvalues $\lambda_\alpha$ have to be positive or zero in order to avoid tachyons. So we see that the string theory which belongs to this background should give rise to $(D-d)^2$ scalars $\phi_\alpha$ with the masses $m_\alpha$. Furthermore there should be $2(D-d)$ vector bosons $a_{M\mu}$ which arise from the internal symmetry of the scalars. \subsection{Solution of flux constraints in $(D-d)=3$ dimensions}\label{sec:solutionsconstr} In sections~\ref{sec:DFTbackgrounds} and \ref{sec:minkowski}, we have discussed various constraints on the covariant fluxes. Only when all these constraints hold is one able to construct a consistent background. Now we want to look systematically for their solutions. We restrict our search to $(D-d)=3$-dimensional compact spaces. In this case the number of compact dimensions is large enough to find interesting, non-trivial solutions. On the other hand, it is still small enough to keep the search manageable. As shown in \eqref{eqn:embeddingtensor}, there is a direct link between the covariant flux $\mathcal{F}\indices{_I_J^K}$ and the embedding tensor of gauged supergravities. For $(D-d)=3$, the $X_I$ in \eqref{eqn:embeddingtensor} describe the O$(3,3)$ generators labelled by $I=1,\dots,6$. Group-theoretically, $(X_I)\indices{_J^K}$ lives in the tensor product \begin{equation}\label{eqn:decomptensorprod} 6 \otimes 15 = 6 \oplus \overline{10} \oplus 10 \oplus 64 \,. \end{equation} The first factor in this product is the vector representation of SO$(3,3)$ and the second is the adjoint representation of SO$(3,3)$. There is one linear constraint, namely that the covariant fluxes are totally antisymmetric ($\mathcal{F}_{IJK} = \mathcal{F}_{[IJK]}$). This implies that the irreps $6$ and $64$ of the general tensor product decomposition \eqref{eqn:decomptensorprod} are absent.
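The representation-theoretic counting can be confirmed by elementary arithmetic:

```python
from math import comb

# 6 x 15: vector times adjoint of SO(3,3).
assert 6 * 15 == 6 + 10 + 10 + 64

# A totally antisymmetric F_{IJK} in 2(D-d) = 6 dimensions has
# C(6,3) = 20 independent components, matching 10-bar + 10.
assert comb(6, 3) == 20 == 10 + 10
```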
The remaining irreps $\overline{10} \oplus 10$ match the number of independent components of $\mathcal{F}_{IJK}$, which is $6\cdot5\cdot4/ 3! = 20$ in $2(D-d)=6$ dimensions. Following the reasoning in \cite{Dibitetto:2012rk}, one can express $\left(X_I\right)\indices{_J^K}$ also as irreps of SL$(4)$, which is isomorphic to SO$(3,3)$. In this case \eqref{eqn:decomptensorprod} does not change. To distinguish between the two different groups, one introduces fundamental SL$(4)$ indices $p,q,r = 1,\dots,4$. The generators $\left(X_I\right)\indices{_J^K}$ can also be written in terms of SL$(4)$ indices \begin{equation}\label{eqn:decompfluxes} \left(X_{mn}\right)\indices{_p^q} = \frac{1}{2} \delta^q_{[m} M_{n]p} - \frac{1}{4} \varepsilon_{mnpr}\tilde M^{rq}\,, \end{equation} where $M_{np}$ and $\tilde M^{rq}$ are symmetric $4\times 4$ matrices and $\varepsilon$ denotes the Levi-Civita symbol. The matrices $M_{np}$ and $\tilde M^{rq}$ have $4\cdot 5/2=10$ independent components each and hence match exactly the remaining irreps $\overline{10}$ and $10$ in \eqref{eqn:decomptensorprod}. A double index, like $mn$ in $\left(X_{mn}\right)\indices{_p^q}$, labels the $6$ independent components of the SL$(4)$ irrep $6$. These $6=4\cdot 3/2$ different components are the entries of an antisymmetric $4\times 4$ matrix. They are lowered by \begin{equation} X_{mn} = \frac{1}{2} \varepsilon_{mnpq} X^{pq}\,. \end{equation} At this point, it is important to keep in mind that the indices $n$, $p$ of $M_{np}$ and $r$, $q$ of $\tilde M^{rq}$ are still fundamental SL$(4)$ indices and not doubled ones. Finally we transform the fundamental SL$(4)$ indices $p$ and $q$ in $\left(X_{mn}\right)\indices{_p^q}$ to double indices $pq$ and $rs$ respectively by using the identity \begin{equation}\label{eqn:doubleindices} \left(X_{mn}\right)\indices{_{pq}^{rs}} \ = \ 2 \left(X_{mn}\right)\indices{_{[p}^{[r}} \delta^{s]}_{q]}\,.
\end{equation} The covariant fluxes in this representation using $6$ of SL$(4)$ indices, are linked to one with $6$ of SO$(3,3)$ indices, used throughout the paper, by the 't Hooft symbols $(G_I)^{mn}$. For $(D-d)=3$, they are defined as \begin{align} \left( G^1 \right)^{mn} &= \begin{pmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} & \left( G^2 \right)^{mn} &= \begin{pmatrix} 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} & \left( G^3 \right)^{mn} &= \begin{pmatrix} 0 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix} \nonumber \\ \left( G_1 \right)^{mn} &= \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \end{pmatrix} & \left( G_2 \right)^{mn} &= \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix} & \left( G_3 \right)^{mn} &= \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \end{align} and fulfill the identities \begin{align} \left( G_I \right)_{mn} \left( G_J \right)^{mn} &= 2\eta_{IJ}\,, \\ \left( G_I \right)_{mp} \left( G_J \right)^{pn} + \left( G_J \right)_{mp} \left( G_I \right)^{pn} &= -\delta^n_m \eta_{IJ}\,, \\ \left( G_I \right)_{mp} \left( G_J \right)^{pq} \left( G_K \right)_{qr} \left( G_L \right)^{rs} \left( G_M \right)_{st} \left( G_N \right)^{tn} &= \delta^n_m \varepsilon_{IJKLMN}\,. \end{align} Finally, we can express the covariant fluxes as \begin{equation}\label{eqn:SL(4)toSO(3,3)} \mathcal{F}_{IJK} = \left( X_{mn} \right)\indices{_{pq}^{rs}} \left( G_I \right)^{mn} \left( G_J \right)^{pq} \left( G_K \right)_{rs} \,. 
\end{equation} To evaluate the condition \eqref{eqn:fluxeseom}, which arises from the effective theory's equations of motion, one also needs the covariant fluxes in flat indices \begin{equation}\label{eqn:fluxesflatvacuum} \mathcal{F}_{ABC} = \bar{E}\indices{_A^I} \bar{E}\indices{_B^J} \bar{E}\indices{_C^K} \mathcal{F}_{IJK}\,. \end{equation} This equation is invariant under O$({D-d},{D-d})$ transformations of the vacuum's generalized vielbein and the covariant fluxes, like \begin{equation}\label{eqn:O(D-d,D-d)trafofluxes} \bar E\indices{_A^I} \rightarrow \bar E\indices{_A^J} O\indices{_J^I} \quad\text{and}\quad \mathcal{F}_{IJK} \rightarrow \mathcal{F}_{LMN} O\indices{^L_I} O\indices{^M_J} O\indices{^N_K} \quad\text{with}\quad O\indices{^M_N}\eta_{ML}O\indices{^L_K}=\eta_{NK} \,. \end{equation} Furthermore \eqref{eqn:fluxeseom} is invariant under double Lorentz transformations \begin{equation} \bar E\indices{_A^I}\rightarrow T\indices{_A^B}\bar E\indices{_B^I} \quad \text{with} \quad T\indices{_A^C} \delta_{CD} T\indices{_B^D} = \delta_{AB} \quad \text{and} \quad T\indices{_A^C} \eta_{CD} T\indices{_B^D} = \eta_{AB}\,. \end{equation} Combining these two transformations, one is able to choose an arbitrary vacuum vielbein $\bar E\indices{_A^I}$. In the following, we use \begin{equation}\label{eqn:defvacuumvielbein} \bar{E}\indices{_A^I} := \delta_A^I\;, \end{equation} which allows us to identify the components of the covariant fluxes in flat and curved indices. Other choices would be possible too, but they would make explicit calculations more complicated. This shows nicely that all relevant information about the vacuum can be embedded in the covariant fluxes. Next, we state and solve the constraints on the fluxes in terms of \eqref{eqn:SL(4)toSO(3,3)}.
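That the flat fluxes \eqref{eqn:fluxesflatvacuum} are unchanged under the combined transformation \eqref{eqn:O(D-d,D-d)trafofluxes} can also be verified numerically. A sketch for $D-d=3$; the chosen O$(3,3)$ element, a rescaling of one momentum/winding pair, is just a convenient example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6  # doubled internal dimension 2(D-d) for D-d = 3
eta = np.block([[np.zeros((3, 3)), np.eye(3)],
                [np.eye(3), np.zeros((3, 3))]])

# A random totally antisymmetric flux F_{IJK}.
F = rng.standard_normal((n, n, n))
F = (F - F.transpose(1, 0, 2) + F.transpose(1, 2, 0)
     - F.transpose(2, 1, 0) + F.transpose(2, 0, 1) - F.transpose(0, 2, 1)) / 6

# An O(3,3) element: rescaling of one momentum/winding pair.
O = np.diag([2.0, 1.0, 1.0, 0.5, 1.0, 1.0])
assert np.allclose(O.T @ eta @ O, eta)

E = np.eye(n)  # vacuum vielbein, as chosen in the text

def flat(E, F):
    # F_ABC = E_A^I E_B^J E_C^K F_IJK
    return np.einsum('ai,bj,ck,ijk->abc', E, E, E, F)

# The vielbein transforms with its O(3,3) indices lowered/raised by
# eta, i.e. with eta.O.eta, while the fluxes transform with O itself.
E2 = E @ (eta @ O @ eta)
F2 = np.einsum('li,mj,nk,lmn->ijk', O, O, O, F)
assert np.allclose(flat(E2, F2), flat(E, F))
```

The cancellation rests on $\eta O \eta O^T = \mathds{1}$, which follows directly from $O^T \eta O = \eta$.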
First, using the decomposition \eqref{eqn:decompfluxes}, the Jacobi-type constraint \eqref{eqn:quadraticc} on the fluxes reads \begin{equation}\label{eqn:quadraticcMMtilde} M_{mp} \tilde M^{pn} = \frac{1}{4} \delta_m^n M_{qp} \tilde M^{pq}\,. \end{equation} Because $M_{np}$ is symmetric, it can always be diagonalized by an SO$(4)$ transformation. The group SO$(4)$ is the maximal compact subgroup of SL$(4)$ and it is, up to a discrete $\mathds{Z}_2$, isomorphic to SO$(3)\times$SO$(3)$, the maximal compact subgroup of SO$(3,3)$. Hence it is always possible to diagonalize $M_{np}$ by an O$(3)\times$O$(3)$ double Lorentz transformation applied to the covariant fluxes. Such transformations leave all constraints on the covariant fluxes invariant. When $M_{np}$ is diagonal, $\tilde M^{rq}$ has to be diagonal too, since otherwise the constraint \eqref{eqn:quadraticcMMtilde} is violated. In this case one can identify the components \begin{equation} M_{mn} = \diag \begin{pmatrix} H_{123} & Q^{23}_1 & Q^{31}_2 & Q^{12}_3 \end{pmatrix} \quad \text{and} \quad \tilde{M}^{mn} = \diag \begin{pmatrix} R^{123} & f^1_{23} & f^2_{31} & f^3_{12} \end{pmatrix} \end{equation} by applying \eqref{eqn:decompfluxes}, \eqref{eqn:doubleindices}, \eqref{eqn:SL(4)toSO(3,3)} and the mapping between the covariant fluxes $\mathcal{F}_{ABC}$ in flat indices and the $H$-, $f$-, $Q$- and $R$-flux derived in section~\ref{sec:covariantfluxes} successively. These remaining fluxes automatically fulfill \begin{equation} \mathcal{F}\indices{^M_M_N} = 0 \quad \leftrightarrow \quad f^i_{ij} = 0 \quad \text{and} \quad Q^{ij}_i = 0\,, \end{equation} as required by \eqref{eqn:FLLN=0}.
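For the diagonal matrices of the solution \eqref{eqn:solutionfluxes} found below, both the quadratic constraint \eqref{eqn:quadraticcMMtilde} and the vanishing of the trace can be checked in a few lines (with $H$ and $f$ as arbitrary nonzero values):

```python
import numpy as np

H, f = 1.3, 0.7  # arbitrary nonzero flux values
M  = np.diag([H, H, 0.0, 0.0])   # (H_123, Q^23_1, Q^31_2, Q^12_3)
Mt = np.diag([0.0, 0.0, f, f])   # (R^123, f^1_23, f^2_31, f^3_12)

# Quadratic (Jacobi-type) constraint: M.Mt = (1/4) tr(M.Mt) * id.
lhs = M @ Mt
assert np.allclose(lhs, np.trace(lhs) / 4 * np.eye(4))

# The trace vanishes, so every diagonal product M_ii * Mt_ii does too.
assert np.isclose(np.trace(lhs), 0.0)
```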
\begin{table}[b] \centering \begin{tabular}{|c||c|c|c|c|} \hline $\alpha$ & $m_\alpha $ & $( \bar h_\alpha )_{ij}$ & $( \bar h_\alpha )\indices{^k_j}$ & $\bar\phi_\alpha$ \\ \hline 1 & $2 \left|f\right|$ & $\begin{pmatrix} 0 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ & $0$ & $\tau_{\scriptscriptstyle\mathrm I}$ \\ \hline 2 & $2 \left|f\right|$ & $\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$ & $0$ & $\tau_{\scriptscriptstyle\mathrm R}$ \\ \hline 3 & $2 \left|H\right|$ & $\begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ & $0$ & $\rho_{\scriptscriptstyle\mathrm I}$ \\ \hline 4 & $2 \left|H\right|$ & $0$ & $\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}$ & $\rho_{\scriptscriptstyle\mathrm R}$ \\ \hline \end{tabular} \caption{The massive scalar fields which arise from the fluxes in \eqref{eqn:solutionfluxes}.}\label{tab:massivescalars} \end{table} Hence, according to \eqref{eqn:strongconstfluxes}, the strong constraint restricts the fluxes by \begin{equation} H_{123} R^{123} + Q^{23}_1 f^1_{23} + Q^{31}_2 f^2_{31} + Q^{12}_3 f^3_{12} = M_{qp} {\tilde M}^{pq} = 0\,. \end{equation} In conjunction with the quadratic constraint \eqref{eqn:quadraticcMMtilde} this gives rise to \begin{equation} H_{123} R^{123} = 0\,, \qquad Q^{23}_1 f^1_{23} = 0\,, \qquad Q^{31}_2 f^2_{31} = 0\,, \qquad Q^{12}_3 f^3_{12} = 0\,. \end{equation} Finally, the constraint from the field equations \eqref{eqn:fluxeseom} leads to \begin{align} \left( H_{123} - Q^{23}_1 \right)^2 - \left( Q^{31}_2 - Q^{12}_3 \right)^2 &= \left( R^{123} - f^1_{23} \right)^2 - \left( f^2_{31} - f^3_{12} \right)^2 \\ \left( H_{123} - Q^{31}_2 \right)^2 - \left( Q^{12}_3 - Q^{23}_1 \right)^2 &= \left( R^{123} - f^2_{31} \right)^2 - \left( f^3_{12} - f^1_{23} \right)^2 \\ \left( H_{123} - Q^{12}_3 \right)^2 - \left( Q^{23}_1 - Q^{31}_2 \right)^2 &= \left( R^{123} - f^3_{12} \right)^2 - \left( f^1_{23} - f^2_{31} \right)^2 \,.
\end{align} The only non-trivial solution for these three equations, which is not excluded by the strong constraint, is \begin{equation}\label{eqn:solutionfluxes} H_{123} = Q^{23}_1 = H\,, \quad Q^{31}_2 = Q^{12}_3 = 0\,, \quad R^{123} = f^1_{23} = 0 \quad \text{and} \quad f^2_{31} = f^3_{12} = f\,. \end{equation} In $D-d=3$ dimensions, only these fluxes are allowed for backgrounds without a warp factor. This shows how restrictive the conditions on the covariant fluxes are. The covariant fluxes in \eqref{eqn:solutionfluxes} are given in flat indices. Thus they are invariant under $O({D-d},{D-d})$ transformations \eqref{eqn:O(D-d,D-d)trafofluxes} but depend on the fixing of the double Lorentz symmetry. In total, we obtain three different kinds of solutions which will be discussed in section~\ref{sec:twistforD-d=3} in detail: \begin{itemize} \item $f\neq0$, $H=0$: this is a geometric background, called {\it single elliptic} $f$-flux space. \item $f=0$, $H\neq0$: this is a non-geometric background, because by \eqref{eqn:solutionfluxes} it has non-vanishing $H$ \textit{and} $Q$ flux. It is called {\it single elliptic} $H,Q$-flux space. It is, however, T-dual to the previous, geometric background. \item $f\neq0$, $H\neq0$: this is a non-geometric background, called {\it double elliptic} $f,H,Q$-flux space. It is not T-dual to any geometric space. \end{itemize} Following the reasoning in section~\ref{sec:spectrum} one is able to express the fluctuations of the generalized metric around its vev as \begin{equation} \delta \mathcal{H}^{MN} = \sum\limits_\alpha \bar{E}\indices{_A^M} \bar{E}\indices{_B^N} (\bar h_\alpha)^{AB} \phi_\alpha \,.
\end{equation} By using $\bar{E}\indices{_A^M}=\delta_A^M$, cf.~\eqref{eqn:defvacuumvielbein}, it is straightforward to identify such fluctuations of the generalized metric \eqref{eqn:genmetricBg} with \begin{equation} \delta g_{ij} = \sum\limits_\alpha ( \bar h_\alpha )_{ij} \phi_\alpha \quad \text{and} \quad \delta B_{ij} = \sum\limits_\alpha \delta_{ik} ( \bar h_\alpha )\indices{^k_j} \phi_\alpha \,. \end{equation} For the double elliptic background, there are in total four massive and five massless scalar fields. The massive ones are listed in table~\ref{tab:massivescalars}. In the directions $y^2$ and $y^3$ the shape of the double torus specified by $\bar{\mathcal H}^{MN}$ is completely fixed by the massive scalars. A double torus in these directions is parameterized by four real scalars which correspond to the metric components $g_{22}$, $g_{33}$, $g_{23}$ and the $B$-field component $B_{23}$. They can also be expressed in terms of the complex structure $\tau=\tau_{\scriptscriptstyle\mathrm R}+i\tau_{\scriptscriptstyle\mathrm I}$ and the K\"ahler parameter $\rho=\rho_{\scriptscriptstyle\mathrm R} + i\rho_{\scriptscriptstyle\mathrm I}$ as \begin{equation}\label{eqn:metricbrhotau} \begin{pmatrix} g_{22} & g_{23} \\ g_{23} & g_{33} \end{pmatrix} = \frac{\rho_{\scriptscriptstyle\mathrm I}}{\tau_{\scriptscriptstyle\mathrm I}} \begin{pmatrix} 1 & \tau_{\scriptscriptstyle\mathrm R} \\ \tau_{\scriptscriptstyle\mathrm R} & \left| \tau \right|^2 \end{pmatrix} \quad \text{and} \quad - B_{23} = B_{32} = \rho_{\scriptscriptstyle\mathrm R}\,. \end{equation} For $\bar{\mathcal H}^{MN}=\delta^{MN}$, one gets $\bar \tau_{\scriptscriptstyle\mathrm I} = \bar \rho_{\scriptscriptstyle\mathrm I} = 1$ and $\bar \tau_{\scriptscriptstyle\mathrm R} = \bar \rho_{\scriptscriptstyle\mathrm R} = 0$.
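This is the standard complex-structure/K\"ahler description of a two-torus; a short sketch (sign conventions for $B_{23}$ may differ between references):

```python
import numpy as np

def torus_metric(tau, rho):
    """Two-torus metric from complex structure tau and Kaehler
    parameter rho; the B-field component is Re(rho)."""
    g = (rho.imag / tau.imag) * np.array(
        [[1.0, tau.real],
         [tau.real, abs(tau) ** 2]])
    return g, rho.real

# At the vacuum tau = rho = i the metric is the identity, B vanishes.
g, b23 = torus_metric(1j, 1j)
assert np.allclose(g, np.eye(2))
assert np.isclose(b23, 0.0)

# det g = (Im rho)^2: the torus volume is controlled by rho alone.
g2, _ = torus_metric(0.5 + 2.0j, 1.0 + 1.5j)
assert np.isclose(np.linalg.det(g2), 1.5 ** 2)
```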
Here the bar on $\tau$, $\rho$ and its components $\tau_{\scriptscriptstyle\mathrm R}$, $\tau_{\scriptscriptstyle\mathrm I}$, $\rho_{\scriptscriptstyle\mathrm R}$ and $\rho_{\scriptscriptstyle\mathrm I}$ does not indicate complex conjugation, but that these quantities belong to the vacuum vielbein $\bar E\indices{_A^M}$. The variation of the metric and the $B$-field in \eqref{eqn:metricbrhotau} with respect to $\tau_{\scriptscriptstyle\mathrm R}$, $\rho_{\scriptscriptstyle\mathrm R}$, $\tau_{\scriptscriptstyle\mathrm I}$ and $\rho_{\scriptscriptstyle\mathrm I}$ leads to the same results as given in table~\ref{tab:massivescalars}. Hence it is straightforward to identify the scalar moduli $\phi_\alpha$ in this table with the real and imaginary parts of $\tau$ and $\rho$. The full scalar potential in these moduli reads \begin{equation} \label{eqn:explicitspotential} V = \frac{f^2 \left(1 + 2(\tau_{\scriptscriptstyle\mathrm R}^2 - \tau_{\scriptscriptstyle\mathrm I}^2) + |\tau|^4\right)}{2 \tau_{\scriptscriptstyle\mathrm I}^2} + \frac{H^2 \left(1 + 2(\rho_{\scriptscriptstyle\mathrm R}^2 - \rho_{\scriptscriptstyle\mathrm I}^2) + |\rho|^4\right)}{2 \rho_{\scriptscriptstyle\mathrm I}^2} \,. \end{equation} A minimum of this potential has to fulfill \begin{equation} \left.\frac{\partial V}{\partial \tau_{\scriptscriptstyle\mathrm R}}\right|_{\tau=\bar \tau} = \frac{2 f^2 \bar \tau_{\scriptscriptstyle\mathrm R}(1+|\bar \tau|^2)}{\bar \tau_{\scriptscriptstyle\mathrm I}^2} = 0 \quad\text{and}\quad \left.\frac{\partial V}{\partial\tau_{\scriptscriptstyle\mathrm I}}\right|_{\tau=\bar \tau} = \frac{f^2 \left[ 2 \bar \tau_{\scriptscriptstyle\mathrm R}^2 (\bar \tau_{\scriptscriptstyle\mathrm I}^2 - 1) + 2\bar \tau_{\scriptscriptstyle\mathrm I}^4 - |\bar \tau|^4 - 1\right]}{\bar \tau_{\scriptscriptstyle\mathrm I}^3}=0\,. \end{equation} From the first equation it follows that $\bar \tau_{\scriptscriptstyle\mathrm R}=0$.
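The stationarity of \eqref{eqn:explicitspotential} at $\tau = \rho = i$ and the resulting mass matrix can be confirmed numerically, e.g. by finite differences (flux values are arbitrary):

```python
import numpy as np

f, H = 2.0, 3.0  # arbitrary flux values

def V(tr, ti, rr, ri):
    # Scalar potential in the moduli (tau_R, tau_I, rho_R, rho_I).
    t2, r2 = tr**2 + ti**2, rr**2 + ri**2
    return (f**2 * (1 + 2*(tr**2 - ti**2) + t2**2) / (2 * ti**2)
          + H**2 * (1 + 2*(rr**2 - ri**2) + r2**2) / (2 * ri**2))

x0 = np.array([0.0, 1.0, 0.0, 1.0])   # the vacuum tau = rho = i
assert np.isclose(V(*x0), 0.0)

# Central-difference Hessian around the vacuum.
h, n = 1e-4, 4
hess = np.zeros((n, n))
for a in range(n):
    for b in range(n):
        for sa, sb, w in [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]:
            x = x0.copy()
            x[a] += sa * h
            x[b] += sb * h
            hess[a, b] += w * V(*x) / (4 * h**2)

# Diagonal mass matrix 4 diag(f^2, f^2, H^2, H^2) at the vacuum.
assert np.allclose(hess, np.diag([4*f**2, 4*f**2, 4*H**2, 4*H**2]),
                   atol=1e-3)
```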
In this case, the second one simplifies to $\bar \tau_{\scriptscriptstyle\mathrm I}^4=1$ and thus, since $\bar \tau_{\scriptscriptstyle\mathrm I} > 0$, gives rise to $\bar \tau_{\scriptscriptstyle\mathrm I}=1$. These are exactly the values we expected. The same argument holds for $\rho$. Plugging the vevs $\bar \tau$ and $\bar \rho$ into \eqref{eqn:explicitspotential}, we see that the scalar potential $V(\bar \tau,\bar \rho)=0$ vanishes for the vacuum. This result is in accordance with \eqref{eqn:minkowskivacuum}. After a short calculation, one obtains the Hessian matrix \begin{equation} \left.\frac{\partial^2 V}{\partial \phi_\alpha \partial \phi_\beta}\right|_{\bar \phi} = 4 \begin{pmatrix} f^2 & 0 & 0 & 0 \\ 0 & f^2 & 0 & 0 \\ 0 & 0 & H^2 & 0 \\ 0 & 0 & 0 & H^2 \end{pmatrix} \quad\text{with}\quad \phi = \{\tau_{\scriptscriptstyle\mathrm R}, \tau_{\scriptscriptstyle\mathrm I}, \rho_{\scriptscriptstyle\mathrm R}, \rho_{\scriptscriptstyle\mathrm I}\} \end{equation} for the vacuum. It is diagonal and so proves that $\tau$ and $\rho$ are indeed the right moduli to describe the massive scalar fields which arise in the effective theory. \section{Twists, Killing vectors and background fields}\label{sec:constrtwists} Until now, we have only considered the constant values of the covariant fluxes $\mathcal{F}_{IJK}$. But in order to construct the metric and $B$-field or $\beta$-field of a doubled geometry, one needs to know the twist $U\indices{^M_N}$ and its action on the scalar fields $\widehat{\cal H}^{MN}$. Here we give twists that reproduce the given covariant fluxes. We focus on covariant fluxes that describe fibered backgrounds. For them, we are able to provide an explicit expression for the twist and also for the Killing vectors which are associated to it. The background described by the fluxes in \eqref{eqn:solutionfluxes} is such a fibration. Hence we can apply these results to study its properties in more detail.
Finally we show how the remaining double Lorentz symmetry of the covariant fluxes is fixed, for which there are different possibilities related to each other via field redefinitions. \subsection{Fibered backgrounds} To construct explicit expressions for the twist $U\indices{^M_N}$ and its Killing vectors, we focus on fibered geometries ${\cal M}^{2(D-d)}$ of the kind \begin{equation} T^{2 d_\mathrm{f}}\,\hookrightarrow\, {\cal M}^{2(D-d)} \,\rightarrow\,T^{2 d_\mathrm{b}}\,. \end{equation} Here $T^{2 d_\mathrm{f}}$ is a $2 d_\mathrm{f}$-dimensional double torus in the fiber, which is twisted by the covariant fluxes, while the $2d_\mathrm{b}$-dimensional, rectangular base torus $T^{2d_\mathrm{b}}$ is not affected by this twist. At first glance this sounds like a strong limitation, which excludes many potential backgrounds. Nevertheless, the consistent backgrounds from section~\ref{sec:solutionsconstr}, which satisfy the various constraints discussed in this paper, are exactly of this form. In order to make the structure of the fibration manifest, we split the $2(D-d)$ internal, compact coordinates $Y^{\mathcal M}=\begin{pmatrix} \tilde{y}_i & y^i \end{pmatrix}$ into \begin{equation} Y^{\hat M} = \begin{pmatrix} Y^{\tilde M} & Y^M \end{pmatrix}\,. \end{equation} Indices with a tilde label the base coordinates and indices without a tilde are assigned to the directions of the fiber. For these conventions, the invariant metric is given by \begin{equation} \eta^{\hat M \hat N} = \begin{pmatrix} \eta^{\tilde M\tilde N} & 0 \\ 0 & \eta^{MN} \end{pmatrix}\,. \end{equation} Analogous expressions hold for the generalized vielbein, the twist and the parameter of generalized diffeomorphisms. Using this splitting, the twist $U\indices{_{\hat N}^{\hat M}}$ can be expressed by the matrix exponential \begin{equation}\label{eqn:twistfromflux} U\indices{_{\hat N}^{\hat M}}(Y^{\tilde I}) = \exp\left( \mathcal{F} \indices{_{\hat N}^{\hat M}_{\tilde I}} Y^{\tilde I} \right) \,.
\end{equation} The only non-vanishing covariant fluxes are $\mathcal{F}_{NM\tilde I}$, while the remaining flux components \begin{equation}\label{eqn:compfluxes=0} \mathcal{F}_{\hat N\hat M I}=0 \quad \text{and} \quad \mathcal{F}_{\tilde N \tilde M \hat I}=0 \end{equation} vanish in order to be compatible with the fibration discussed above. Furthermore, we consider only matrices in the exponent of \eqref{eqn:twistfromflux}, which commute for arbitrary values of $\tilde I$ and $\tilde J$. Thus the additional constraint \begin{equation}\label{eqn:commutatorconstr} \mathcal{F}\indices{_{\tilde I}^M_L} \mathcal{F}\indices{_{\tilde J}^L_N} - \mathcal{F}\indices{_{\tilde J}^M_L} \mathcal{F}\indices{_{\tilde I}^L_N} = 0 \quad \text{or} \quad \mathcal{F}_{LM[\tilde I} \mathcal{F}\indices{^L_{\tilde J ]}_N} = 0 \end{equation} has to hold. Without it and \eqref{eqn:compfluxes=0}, we are not able to derive the following properties of the twist: \begin{equation}\label{eqn:propUfibration} U\indices{_{\hat N}^{\tilde M}} = \delta_{\hat N}^{\tilde M} \,, \quad U\indices{_{\tilde N}^{\hat M}} = \delta_{\tilde N}^{\hat M} \quad \text{and} \quad \partial_{\hat L} U\indices{_{\hat N}^{\hat M}} = \begin{cases} \mathcal{F}\indices{_N^P_{\tilde L}} U\indices{_P^M} & \text{for } \hat L = \tilde L \\ 0 & \text{otherwise.} \end{cases} \end{equation} With them, it is then straightforward to calculate the non-vanishing coefficients of anholonomy \begin{equation}\label{eqn:FfromU} \Omega_{\tilde I J K} = \partial_{\tilde I} U\indices{_J^M} U_{KM} = \mathcal{F}\indices{_J^N_{\tilde I}} U\indices{_N^M} U_{KM} = \mathcal{F}\indices{_J^N_{\tilde I}} \eta_{NK} = \mathcal{F}_{\tilde I J K}. \end{equation} The remaining components \begin{equation} \Omega_{I \tilde J K} = -\Omega_{I K \tilde J} = 0 \end{equation} vanish.
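The derivative property in \eqref{eqn:propUfibration} holds for any constant generator of a matrix exponential; a numerical sketch with an arbitrary antisymmetric matrix standing in for $\mathcal{F}\indices{_N^M_{\tilde I}}$:

```python
import numpy as np

def expm(A, terms=30):
    """Matrix exponential via its (quickly converging) power series."""
    out, P = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        P = P @ A / k
        out = out + P
    return out

# A constant generator F (stand-in for the flux matrix along one
# base direction) and the twist U(y) = exp(F y).
F = np.array([[0.0, 0.5], [-0.5, 0.0]])
U = lambda y: expm(F * y)

# Finite-difference check of dU/dy = F U(y).
y, h = 0.8, 1e-6
dU = (U(y + h) - U(y - h)) / (2 * h)
assert np.allclose(dU, F @ U(y), atol=1e-8)
```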
Hence, the non-vanishing components of the covariant fluxes for the twist \eqref{eqn:twistfromflux} are \begin{equation} \mathcal{F}_{\tilde I J K} = \Omega_{\tilde I J K} + \Omega_{K \tilde I J} + \Omega_{J K \tilde I} = \Omega_{\tilde I J K}\;, \end{equation} as expected. Furthermore we have to find the $2(D-d)$ Killing vectors $K\indices{_{\hat I}^{\hat J}}$ connected to the twist $U\indices{_{\hat N}^{\hat M}}$. For the fibration discussed in this section, they are given by \begin{equation} K\indices{_{\hat I}^{\hat J}} = \exp\left( -\frac{1}{2} \mathcal F\indices{_{\tilde I}^{\hat J}_{\hat L}} Y^{\hat L} \right) \,. \end{equation} Here the $\tilde I$ in $\mathcal{F}\indices{_{\tilde I}^{\hat J}_{\hat L}} Y^{\hat L}$ denotes that the matrix given by this expression has only non-vanishing entries in columns which are associated with base coordinates. Again, we find the following properties: \begin{equation}\label{eqn:Killingfromfluxes} K\indices{_{\hat I}^{\tilde J}}=\delta_{\hat I}^{\tilde J}\,, \quad K\indices{_I^{\hat J}} = \delta_I^{\hat J} \quad \text{and} \quad \partial_{\hat L} K\indices{_{\hat I}^{\hat J}} = \begin{cases} - \frac{1}{2} \mathcal{F}\indices{_{\tilde I}^J_L} & \text{for } \hat I = \tilde I \\ 0 & \text{otherwise.} \end{cases} \end{equation} With these identities, it is straightforward to show that \begin{align} \mathcal{L}_{K\indices{_{\hat I}^{\hat J}}} U\indices{_{\hat N}^{\hat M}} &= K\indices{_{\hat I}^{\tilde P}} \partial_{\tilde P} U\indices{_N^M} + \partial^M K\indices{_{\tilde I}^{P}} U_{\hat N P} - U\indices{_{\hat N}^P} \partial_P K\indices{_{\tilde I}^M} \nonumber \\ &= \mathcal{F}\indices{_N^P_{\tilde I}} U\indices{_P^M} - \frac{1}{2}\mathcal{F}\indices{_{\tilde I}^P^M} U_{NP} + \frac{1}{2} U\indices{_N^P} \mathcal{F}\indices{_{\tilde I}^M_P} \nonumber \\ &= \mathcal{F}\indices{_{\tilde I}_N^P} U\indices{_P^M} - U\indices{_N^P} \mathcal{F}\indices{_{\tilde I}_P^M} = [\left(\mathcal{F}_{\tilde I}\right), U] = 0 \,.
\end{align} In the last step we have used that according to \eqref{eqn:commutatorconstr} the matrices $\left(\mathcal{F}_{\tilde I}\right)\indices{_N^M}$ have to commute for all possible values of $\tilde I$. We can check that the condition \begin{equation} \partial_{\hat J} K\indices{_{\hat I}^{\hat J}} = -\frac{1}{2} \mathcal{F}\indices{_{\tilde I}^J_J} = 0 \quad \leftrightarrow \quad \mathcal{F}\indices{^{\hat L}_{\hat L}_{\hat N}} = 0 \end{equation} holds. According to \eqref{eqn:invdensity} it has to be fulfilled in order to leave densities invariant when they are shifted along the Killing vectors. For the fibrations discussed here, this condition is equivalent to \eqref{eqn:FLLN=0}. Finally, we calculate the structure coefficients associated to the algebra generated by the Killing vectors. According to \eqref{eqn:structurekilling}, they read \begin{equation}\label{eqn:structurecoeffkillingvecvstwist} \tilde{\mathcal F}\indices{_{\hat I}_{\hat J}_{\hat K}} = -\frac{1}{2} \mathcal{F}\indices{_{\hat I}_{\hat J}_{\hat K}}\,. \end{equation} Despite having the same structure coefficients up to a factor $-1/2$, the Killing vectors have very different properties in comparison to the twist. In general, $K\indices{_{\hat I}^{\hat J}}$ is not an O$({D-d},{D-d})$ valued matrix. Furthermore, if $U\indices{_{\hat N}^{\hat M}}$ fulfills the strong constraint, it is not guaranteed that the Killing vectors also do so. Nevertheless, the construction in this section guarantees that their algebra is closed. The value of the twist after going completely around the base circle in the direction $\tilde I$ is called the monodromy. It is given by the expression \begin{equation} M\indices{_{\tilde I}_N^M} = \exp \left( 2\pi \mathcal{F}\indices{_{\tilde I}_N^M} \right) \end{equation} and has to be O$({D-d},{D-d},\mathds{Z})$ valued. When only considering pure DFT, an O$({D-d},{D-d})$ valued monodromy would be sufficient.
In this case the two different tori at $Y^{\tilde I}=0$ and $Y^{\tilde I} = 2\pi$ can be identified by a generalized diffeomorphism. But in string theory tori are only identified by the subgroup O$({D-d},{D-d},\mathds{Z})$ whose elements parameterize T-duality transformations. As we will show in the following section, this restriction allows only for discrete values of the covariant fluxes. \subsection{Configurations with Minkowski vacuum}\label{sec:twistforD-d=3} Section~\ref{sec:solutionsconstr} has already presented covariant fluxes which fulfill the various constraints imposed in section~\ref{sec:DFTbackgrounds} and lead to a Minkowski vacuum in the external directions. Additionally, these fluxes satisfy \eqref{eqn:commutatorconstr} and give rise to a fibered background with $d_\mathrm{f}=2$ and $d_\mathrm{b}=1$. Thus we are able to construct the associated twist $U\indices{^M_N}$ and the Killing vectors $K\indices{_I^J}$. For $d_\mathrm{f}=2$, the twist of the fiber is an element of O$(2,2)$. Such an element can be decomposed into SO$(2,2) \times \mathrm{Z}_2$. The $\mathrm{Z}_2$ part consists of two elements, the identity and an O$(2,2)$ element $T$ with $\det T=-1$ and $T^2 = 1$. Here we choose $T$ as a T-duality transformation along the second direction of the fiber, which amounts to \begin{equation} T=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}=T^{-1}=T^T\,. \end{equation} The SO$(2,2)$ part decomposes into $\mathrm{SL}(2)_\tau\times \mathrm{SL}(2)_\rho$. Thus, in order to express an SO$(2,2)$ element, one needs two SL$(2)$ matrices, which we call $M_\tau$ and $M_\rho$. They are mapped to the corresponding SO$(2,2)$ element $M$ by \begin{equation}\label{eqn:decompSL2xSL2} M = \begin{pmatrix} M_\tau & 0 \\ 0 & M_\tau^{-T} \end{pmatrix} T \begin{pmatrix} M_\rho & 0 \\ 0 & M_\rho^{-T} \end{pmatrix} T^{-1}\,.
\end{equation} We interpret $\tau$ as the complex structure and $\rho$ as the K\"ahler parameter of a torus in the fiber. SL$(2)$ transformations act on these two parameters as \begin{equation}\label{eqn:actiontau&rho} \tau' = \frac{a \tau + b}{c \tau + d} \quad \leftrightarrow \quad M_\tau = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \quad \text{and} \quad \rho' = \frac{a' \rho + b'}{c' \rho + d'} \quad \leftrightarrow \quad M_\rho = \begin{pmatrix} a' & b' \\ c' & d' \end{pmatrix}\;, \end{equation} respectively. The T-duality transformation $T$ acts as an exchange of $\tau$ and $\rho$. More precisely, the isomorphism reads \begin{equation}\label{eqn:decompO(2,2)} \mathrm{O}(2,2) \cong \mathrm{SL}_\tau(2) \times \mathrm{SL}_\rho(2) \times \mathrm{Z}_2^{\tau\leftrightarrow\rho}\;. \end{equation} A convenient way to characterize SL$(2)$ group elements is given by their conjugacy classes. In total there are three different classes, which are distinguished by the traces \begin{equation} \left| \Tr M \right| < 2 \quad \text{elliptic} \qquad \left| \Tr M \right| = 2 \quad \text{parabolic} \qquad \text{and} \qquad \left| \Tr M \right| > 2 \quad \text{hyperbolic} \end{equation} of the corresponding SL$(2)$ element $M$.
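The decomposition \eqref{eqn:decompSL2xSL2} and the trace classification can be illustrated with a short numerical sketch. The following pure-Python snippet (all helper names are ours, not taken from the literature) builds $M$ from two SL$(2)$ matrices and checks that it preserves the O$(2,2)$ metric $\eta$, here assumed in the off-diagonal basis ordering $(x^2, x^3, \tilde x_2, \tilde x_3)$.

```python
import math

def matmul(A, B):
    # plain nested-list matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def embed(M):
    # diag(M, M^{-T}) as a 4x4 block matrix; M is assumed to be in SL(2)
    a, b = M[0]
    c, d = M[1]
    MiT = [[d, -c], [-b, a]]  # (M^{-1})^T for det M = 1
    out = [[0.0] * 4 for _ in range(4)]
    for i in range(2):
        for j in range(2):
            out[i][j] = M[i][j]
            out[i + 2][j + 2] = MiT[i][j]
    return out

# T-duality along the second fiber direction (T = T^{-1} = T^T)
T = [[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]
# off-diagonal O(2,2) metric eta (an assumed convention)
eta = [[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]]

def compose(M_tau, M_rho):
    # M = diag(M_tau, M_tau^{-T}) T diag(M_rho, M_rho^{-T}) T^{-1}
    return matmul(matmul(matmul(embed(M_tau), T), embed(M_rho)), T)

def conjugacy_class(M):
    # classify an SL(2) element by the absolute value of its trace
    t = abs(M[0][0] + M[1][1])
    return 'elliptic' if t < 2 else ('parabolic' if t == 2 else 'hyperbolic')

def rotation(phi):
    # elliptic SL(2) element: a rotation by the angle phi
    c, s = math.cos(phi), math.sin(phi)
    return [[c, s], [-s, c]]
```

Since each factor preserves $\eta$, any $M$ obtained from `compose` satisfies $M^T \eta M = \eta$, which can be confirmed numerically for sample rotations.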
By explicitly evaluating \eqref{eqn:twistfromflux} with the covariant fluxes obtained in \eqref{eqn:solutionfluxes}, we obtain the twist \begin{equation}\label{eqn:twistHf} U\indices{^{\hat M}_{\hat N}}(x^1) = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & \cos f x^1 \cos H x^1 & \sin f x^1 \cos H x^1 & - \sin f x^1 \sin H x^1 & \cos f x^1 \sin H x^1 \\ 0 & 0 & - \sin f x^1 \cos H x^1 & \cos f x^1 \cos H x^1 & - \cos f x^1 \sin H x^1 & - \sin f x^1 \sin H x^1 \\ 0 & 0 & - \sin f x^1 \sin H x^1 & \cos f x^1 \sin H x^1 & \cos f x^1 \cos H x^1 & \sin f x^1 \cos H x^1 \\ 0 & 0 & - \cos f x^1 \sin H x^1 & -\sin f x^1 \sin H x^1 & - \sin f x^1 \cos H x^1 & \cos f x^1 \cos H x^1 \end{pmatrix} \end{equation} and with \eqref{eqn:decompSL2xSL2} we are able to decompose this result into \begin{equation}\label{eqn:Utau&rho} U_\tau(x^1) = \begin{pmatrix} \cos f x^1 & \sin f x^1 \\ -\sin f x^1 & \cos f x^1 \end{pmatrix} \quad \text{and} \quad U_\rho(x^1) = \begin{pmatrix} \cos H x^1 & \sin H x^1 \\ -\sin H x^1 & \cos H x^1 \end{pmatrix}\,. \end{equation} These twists $U_\tau$ and $U_\rho$ are both elliptic. Each of them is an element of SO$(2)$, the maximal compact subgroup of SL$(2)$. As already stated, the possible values of $H$ and $f$ are not continuous because the monodromy \begin{equation} M\indices{_1^M_N} = U\indices{^M_N}(2\pi) \end{equation} has to be an element of O$(2,2,\mathds{Z})$. This subgroup of O$(2,2)$ decomposes along the lines of \eqref{eqn:decompO(2,2)} into \begin{equation} \mathrm{O}(2,2,\mathds{Z})\cong\mathrm{SL}(2,\mathds{Z})_\tau\times \mathrm{SL}(2,\mathds{Z})_\rho\times \mathrm{Z}_2^{\tau\leftrightarrow \rho}\,. \end{equation} The discrete transformation is not realized by the monodromy.
But the remaining two SL$(2,\mathds{Z})$ transformations are not trivial and lead to \begin{equation} M_\tau = \begin{pmatrix} \cos 2\pi f & \sin 2\pi f \\ -\sin 2\pi f & \cos 2\pi f \end{pmatrix} \quad \text{and} \quad M_\rho = \begin{pmatrix} \cos 2\pi H & \sin 2\pi H \\ -\sin 2\pi H & \cos 2\pi H \end{pmatrix}\,. \end{equation} \begin{table}[b] \centering \begin{tabular}{|c||c|c|} \hline $f \bmod 1$ & $\Tr M_\tau$ & $\bar \tau$ \\ \hline $0$ & $2$ & $i$ \\ $1/6$ & $1$ & $(-1+\sqrt{3} i)/2$ \\ $1/4$ & $0$ & $i$ \\ $1/3$ & $-1$ & $(-1+\sqrt{3} i)/2$ \\ $1/2$ & $-2$ & $i$ \\ \hline \end{tabular}\qquad\qquad\qquad \begin{tabular}{|c||c|c|} \hline $H \bmod 1$ & $\Tr M_\rho$ & $\bar \rho$ \\ \hline $0$ & $2$ & $i$ \\ $1/6$ & $1$ & $(-1+\sqrt{3} i)/2$ \\ $1/4$ & $0$ & $i$ \\ $1/3$ & $-1$ & $(-1+\sqrt{3} i)/2$ \\ $1/2$ & $-2$ & $i$ \\ \hline \end{tabular} \caption{Quantized values for the fluxes $f$ and $H$ and the corresponding vevs for $\tau$ and $\rho$.}\label{tab:quantizedfluxes} \end{table}% Each of these two matrices has to be an element of SL$(2,\mathds{Z})$, which is obviously the case if $f \bmod 1$ and $H \bmod 1$ are elements of the set $\{0, 1/4, 1/2\}$. But this is not an exhaustive list of all allowed fluxes. We can still apply an O$(2,2)$ transformation \eqref{eqn:O(D-d,D-d)trafofluxes} to make the monodromies $M_\rho$ and $M_\tau$ elements of SL$(2,\mathds{Z})$. This is possible when both of them have integer traces. Table~\ref{tab:quantizedfluxes} lists all different values for the fluxes which fulfill this constraint. According to \eqref{eqn:O(D-d,D-d)trafofluxes} the vacuum vielbein $\bar E\indices{_A^M}$ gets modified by such transformations, too. Thus, the table also lists the new vevs for $\tau$ and $\rho$, respectively.
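The quantization in table~\ref{tab:quantizedfluxes} can be reproduced numerically: for the elliptic monodromies above, $\Tr M_\tau = 2\cos 2\pi f$ (and analogously for $H$), and the monodromy can be brought into SL$(2,\mathds{Z})$ only if this trace is an integer. The following short Python scan (helper names are ours) over rationals with small denominator recovers exactly the representatives listed in the table:

```python
import math
from fractions import Fraction

def trace_monodromy(f):
    # Tr M_tau = 2 cos(2 pi f) for the elliptic monodromy generated by the flux f
    return 2.0 * math.cos(2.0 * math.pi * f)

def is_quantized(f, tol=1e-9):
    # the monodromy can be conjugated into SL(2,Z) only if its trace is an integer
    t = trace_monodromy(f)
    return abs(t - round(t)) < tol

# scan rationals p/q in [0, 1/2] with denominators up to 12
allowed = sorted({Fraction(p, q)
                  for q in range(1, 13)
                  for p in range(0, q // 2 + 1)
                  if is_quantized(p / q)})
print(allowed)  # [Fraction(0, 1), Fraction(1, 6), Fraction(1, 4), Fraction(1, 3), Fraction(1, 2)]
```

The corresponding traces $2, 1, 0, -1, -2$ match the second column of the table.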
The covariant fluxes in flat indices $\mathcal{F}_{ABC}$ are not affected by \eqref{eqn:O(D-d,D-d)trafofluxes} and their curved counterparts $\mathcal{F}_{IJK}$ are calculated from them with the vacuum vielbein $\bar E\indices{_A^M}(\bar\tau,\bar\rho)$ according to \eqref{eqn:fluxesflatvacuum}. Finally, a transformation into barred indices gives some additional insights into the structure of the monodromy \begin{align} M\indices{^{\bar M}_{\bar N}} &= R\indices{^{\bar M}_L} M\indices{^L_K} R\indices{^K_{\bar N}} \nonumber\\ &= \begin{pmatrix} \cos\left[ 2\pi(f - H) \right] & \sin\left[ 2\pi(f - H) \right] & 0 & 0 \\ -\sin\left[ 2\pi(f - H) \right] & \cos\left[ 2\pi(f - H) \right] & 0 & 0 \\ 0 & 0 & \cos\left[ 2\pi(f + H) \right] & \sin\left[ 2\pi(f + H) \right] \\ 0 & 0 & -\sin\left[ 2\pi(f + H) \right] & \cos\left[ 2\pi(f + H) \right] \end{pmatrix}\,. \end{align} Remembering that the first two rows describe the string's right-moving part and the remaining ones the left-moving part, it is obvious that this background is totally symmetric for $H=0, f\ne 0$ and totally asymmetric for $H\ne0, f=0$. According to \eqref{eqn:Killingfromfluxes}, the Killing vectors read \begin{equation} K\indices{_{\hat I}^{\hat J}} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & -\frac{1}{2}( H x^3 + f \tilde x^3) & \frac{1}{2}( H x^2 + f \tilde x^2) & -\frac{1}{2}( f x^3 + H \tilde x^3) & \frac{1}{2}( f x^2 + H \tilde x^2 ) \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ \end{pmatrix} \,. \end{equation} They cannot be combined into an O$(3,3)$ valued matrix and for $H\ne 0$, $f\ne 0$, $K\indices{_2^{\hat J}}$ violates the strong constraint. Nevertheless, the algebra of infinitesimal transformations along the Killing vectors closes.
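The failure of the Killing vectors to form an O$(3,3)$ valued matrix can be made explicit numerically. The sketch below encodes the non-trivial row $K\indices{_2^{\hat J}}$ and evaluates the O$(3,3)$ pairing $\eta(K_2,K_2)$, which would have to vanish for a row of an O$(3,3)$ matrix since $\eta$ has vanishing diagonal; the coordinate ordering $(x^1, \tilde x_1, x^2, x^3, \tilde x_2, \tilde x_3)$ and the pairing convention are our assumptions, chosen to match the index conventions used above.

```python
def K2_row(H, f, x2, x3, xt2, xt3):
    # the non-trivial row K_2^J of the Killing matrix, in the assumed ordering
    # (x^1, x~_1, x^2, x^3, x~_2, x~_3)
    return [0.0, 1.0,
            -0.5 * (H * x3 + f * xt3), 0.5 * (H * x2 + f * xt2),
            -0.5 * (f * x3 + H * xt3), 0.5 * (f * x2 + H * xt2)]

def eta_pairing(u, v):
    # O(3,3) pairing x^i <-> x~_i in the ordering above
    return sum(u[i] * v[j] + u[j] * v[i] for i, j in [(0, 1), (2, 4), (3, 5)])

# non-vanishing fluxes and a generic point on the doubled fiber
H, f = 0.25, 0.25
k2 = K2_row(H, f, 0.3, 0.7, 0.2, 0.9)

# a row of an O(3,3) matrix would require eta(K_2, K_2) = 0;
# the pairing is non-zero, so the Killing vectors are not O(3,3) valued
print(eta_pairing(k2, k2))
```

The trivial rows (unit vectors) do satisfy the O$(3,3)$ condition; only the $K_2$ row fails it.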
The only non-trivial Killing vector $K\indices{_2^{\hat J}}$ can be decomposed into \begin{align} K\indices{_2^{\hat J}} &= K'^{\hat J} + K''^{\hat J} \quad \text{with} \\ K'^{\hat J} &= \begin{pmatrix} 0 & \frac{1}{2} & -\frac{1}{2} H x^3 & \frac{1}{2} H x^2 & -\frac{1}{2} f x^3 & \frac{1}{2} f x^2 \end{pmatrix} \quad \text{and} \\ K''^{\hat J} &= \begin{pmatrix} 0 & \frac{1}{2} & -\frac{1}{2} f \tilde x^3 & \frac{1}{2} f \tilde x^2 & -\frac{1}{2} H \tilde x^3 & \frac{1}{2} H \tilde x^2 \end{pmatrix}\,. \end{align} $K''^{\hat J}$ is equivalent to $K'^{\hat J}$ after a T-duality along all fiber directions. $K'^{\hat J}$ describes a coordinate transformation and a $B$-field gauge transformation, while its T-dual $K''^{\hat J}$ describes a coordinate transformation and a $\beta$-field gauge transformation. Thus, for $H\ne 0$ and $f\ne 0$, two coordinate patches of the background are always connected to each other by all possible kinds of generalized diffeomorphisms at the same time: coordinate transformations as well as $B$- and $\beta$-field gauge transformations. This clearly shows that the double elliptic case cannot be discussed in SUGRA or even in Generalized Geometry, because in these theories only two different kinds of generalized diffeomorphisms are allowed at the same time. We close this section by discussing a chain of T-dualities for the background specified by the twist \eqref{eqn:twistHf}. Such chains are well known from the torus with constant $H$-flux \begin{equation} H_{ijk} \stackrel{T_{i}}{\longrightarrow} f^{i}_{jk} \stackrel{T_{j}}{\longrightarrow} Q_{k}^{ij} \stackrel{T_{k}}{\longrightarrow} R^{ijk}\,.
\end{equation} A T-duality transformation along the $i$-th direction is given in terms of the O$({D-d},{D-d})$ element \begin{equation} O\indices{^M_N} = \begin{pmatrix} \mathds{1} - m_i & m_i \\ m_i & \mathds{1} - m_i \end{pmatrix}\,, \end{equation} where $m_i$ is a diagonal matrix with a one in the direction $i$ on which T-duality is performed, and zeros in the other directions. In contrast to \eqref{eqn:O(D-d,D-d)trafofluxes}, T-duality acts on the covariant fluxes only. It does not change the vacuum vielbein $\widehat{\bar E}\indices{^A_M}$. Hence, the covariant fluxes $\mathcal{F}_{IJK}$ transform like any other covariant object under T-duality, namely as \begin{equation} {\mathcal F}'_{IJK} = \mathcal{F}_{LMN} O\indices{^L_I} O\indices{^M_J} O\indices{^N_K}\,. \end{equation} When we start with the fluxes in \eqref{eqn:solutionfluxes} and successively perform T-duality transformations along $x^2$, $x^3$ (isometric directions) and finally also along $x^1$, we obtain the T-dual configurations listed in table~\ref{tab:tdualitychain}. \begin{table}[t] \centering \begin{tabular}{|c|c|c||c|ccc|ccc|c|} \hline $x^1$ & $x^2$ & $x^3$ & $H_{123}$ & $f^1_{23}$ & $f^2_{31}$ & $f^3_{12}$ & $Q_1^{23}$ & $Q_2^{31}$ & $Q_3^{12}$ & $R^{123}$ \\ \hline & & & $H$ & 0 & $f$ & $f$ & $H$ & 0 & 0 & 0 \\ & $\bullet$ & & $f$ & 0 & $H$ & $H$ & $f$ & 0 & 0 & 0 \\ & $\bullet$ & $\bullet$ & $H$ & 0 & $f$ & $f$ & $H$ & 0 & 0 & 0 \\ $\bullet$ & $\bullet$ & $\bullet$ & 0 & $H$ & 0 & 0 & 0 & $f$ & $f$ & $H$ \\ \hline \end{tabular} \caption{T-duality chain for the double elliptic background. Directions on which T-duality was applied are marked by a dot.}\label{tab:tdualitychain} \end{table}% Here, let us distinguish between the three different cases: \begin{itemize} \item {\it Single elliptic space} with $f\neq0$, $H=0$: It is a geometric space with geometric $f$-flux. When one performs T-duality transformations on this space along the directions $x^2$ and $x^3$, it is mapped to itself.
T-duality along $x^{2}$ transforms it into the next case: \item {\it Single elliptic spaces} with $f=0$, $H\neq0$: Here the first and the third line in table~\ref{tab:tdualitychain} correspond to the same non-geometric space with $H$- and $Q$-flux. The second line is the geometric background with $f$-flux only, whereas the fourth line corresponds to a non-geometric space with $f$- and $R$-flux. \item {\it Double elliptic spaces} with $f\neq0$, $H\neq0$: Now all configurations in this table have a geometric and a non-geometric flux turned on at the same time. Here there is no T-dual configuration with geometric fluxes only. Hence the double elliptic spaces cannot be handled with standard supergravity; they always need a full DFT description. \end{itemize} The most interesting background is the double elliptic space, because it cannot be described by SUGRA. Nevertheless, it is known from CFT \cite{Dabholkar:2002sy,Condeescu:2012sp,Condeescu:2013yma} and was discussed recently in \cite{ReviewDFT:2013hlb} in the context of large generalized diffeomorphisms in DFT. \subsection{Background fields and field redefinitions} In this final section of the paper we want to derive explicit expressions for the background fields, namely the metric, the $B$-field and the $\beta$-field, as functions of the doubled coordinates $Y^N$. We will focus on the double elliptic background, discussed in the last section. The fields of this background depend only on a single coordinate direction, $y^1$ (or, in a T-dual frame, $\tilde y^1$). As usual, the expressions for the background fields are subject to possible field redefinitions, as used in \cite{Andriot:2011uh, Andriot:2012wx, Andriot:2012an}. These field redefinitions exchange, for example, the $B$-field with the $\beta$-field or vice versa. In this context it is a crucial question whether there is a certain field redefinition after which the background is a geometric space.
As we will discuss, this is impossible for the double elliptic background, which is not T-dual to a geometric space. As explained in section~\ref{sec:dftandsym}, the generalized vielbein $E\indices{^A_M}$ of the fiber is subject to a local double Lorentz symmetry, connecting \begin{equation} \tilde{E}\indices{^A_M} = T\indices{^A_B} \widehat{E}\indices{^B_N} U\indices{^N_M} \quad \text{and} \quad E\indices{^A_M} = \widehat{E}\indices{^A_N} U\indices{^N_M}\;. \end{equation} Here $T\indices{^A_B}$ is a double Lorentz transformation of the fiber, parameterized by $d_\mathrm{f}(d_\mathrm{f}-1)$ independent variables. All frames related via such transformations are physically equivalent. The twist \eqref{eqn:twistHf}, which was obtained in the last section, is an element of the double Lorentz group, too. For the vacuum, where $\widehat{E}\indices{^A_M}=\widehat{\bar E}\indices{^A_M}=\delta^A_M$, we are able to choose $T\indices{^A_B}$ as the inverse of the twist. In this case the generalized vielbein describes locally a flat space without fluxes. At first glance this result seems strange, because we explicitly started with non-vanishing covariant fluxes in order to obtain a non-abelian gauge symmetry in the effective theory. This ambiguity is resolved when remembering that the background has a global monodromy, which cannot be removed by local transformations on a single patch. A background which exhibits exactly this monodromy is the orbifold \begin{equation} T^{4} / \mathrm{Z}_R \times \mathrm{Z}_L \quad\text{with}\quad R = \frac{1}{(f-H)\bmod 1} \quad\text{and}\quad L = \frac{1}{(f+H)\bmod 1}\,, \end{equation} where $H$ and $f$ are the fluxes we started with. The first discrete group acts on the right movers and the second one on the left movers. A setup with vanishing $f$ component is a completely asymmetric orbifold, while a vanishing $H$ component leads to a symmetric orbifold. Locally, we are not able to distinguish it from a flat torus.
Both are Ricci flat and satisfy the field equations. Nevertheless, globally they are very different. This observation emphasizes that the fluxes we started with play a significant r\^ole and are not merely an unphysical gauge choice. Before reading off the fields $\beta^{\mu\nu}$, $B_{\mu\nu}$ and the metric $g_{\mu\nu}$ from the generalized vielbein $E\indices{^A_M}$ in its most general parameterization \eqref{eqn:EAMgeneral}, we will fix the local double Lorentz symmetry. In general, there are two different possibilities to do so. The first and simplest one is the trivial choice $T\indices{^A_B}=\delta^A_B$. In this case one gets \begin{gather} B_{23} = -B_{32} = - \tan H x^1 \,, \quad \beta^{23} = - \beta^{32} = \frac{1}{2} \sin 2 H x^1 \quad \text{and} \quad \nonumber \\ e\indices{^a_i} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \frac{\cos f x^1}{\cos H x^1} & \frac{\sin f x^1}{\cos H x^1} \\ 0 & -\frac{\sin f x^1}{\cos H x^1} & \frac{\cos f x^1}{\cos H x^1} \end{pmatrix}\,. \end{gather} For more sophisticated double Lorentz gauge fixings, we have to choose a different $T\indices{^A_B}$ at each point of the base. This choice should be done in such a way that it leaves certain functions $f_i(\tilde E\indices{^A_M})$ of the components of the generalized vielbein constant over the whole base. Technically speaking, this means that $\partial_{\tilde L} f_i (\tilde E\indices{^A_M})$ has to vanish for all directions $\tilde L$ along the base. To evaluate such conditions, we start by calculating \begin{equation}\label{eqn:localvarEAM} \partial_{\tilde L} \tilde{E}\indices{^A_M} = \partial_{\tilde L} T\indices{^A_B} \widehat{E}\indices{^B_N} U\indices{^N_M} + T\indices{^A_B} \widehat{E}\indices{^B_N} \partial_{\tilde L} U\indices{^N_M}\,. \end{equation} Furthermore, we parameterize $T\indices{^A_B}$ in a similar way as we have done for $U\indices{^M_N}$ in \eqref{eqn:twistfromflux}.
This gives rise to \begin{equation} T\indices{^A_B} = \exp\left[ \mathcal{G}\indices{^A_B} (X^{\tilde L}) \right] \end{equation} where the arbitrary functions $\mathcal{G}\indices{^A_B}(X^{\tilde L})$ in barred indices have to fulfill \begin{equation} \mathcal{G}\indices{^{\bar a}^{\bar b}} = \mathcal{G}\indices{_{\bar a}_{\bar b}} = 0 \end{equation} in order to restrict $T\indices{^A_B}$ to the double Lorentz subgroup of the full O$(d_\mathrm{f}, d_\mathrm{f})$. The most significant difference between this definition and \eqref{eqn:twistfromflux} is that the exponent here is not restricted to a linear dependence on the base coordinates $X^{\tilde L}$. With these definitions at hand, \eqref{eqn:localvarEAM} becomes \begin{equation}\label{eqn:DGLfieldredef} \partial_{\tilde L} \tilde{E}\indices{^A_M} = T\indices{^A_B} \left( \partial_{\tilde L} \mathcal{G}\indices{^B_C} \widehat{E}\indices{^C_N} + \widehat{E}\indices{^B_K} \mathcal{F}\indices{^K_N_{\tilde L}} \right) U\indices{^N_M} \,. \end{equation} Let us now define the constituents of the generalized vielbein in \eqref{eqn:EAMgeneral} as \begin{equation} e\indices{^a_i} = \begin{pmatrix} e\indices{^1_1} & e\indices{^1_2} \\ 0 & e\indices{^2_2} \end{pmatrix}\,, \quad B_{ij} = \begin{pmatrix} 0 & B \\ -B & 0 \end{pmatrix} \quad \text{and} \quad \beta^{ij} = \begin{pmatrix} 0 & \beta \\ -\beta & 0 \end{pmatrix} \end{equation} for our $d_\mathrm{f}=2$ example from the last section. This gives rise to \begin{equation} e\indices{^1_1} = \frac{1}{E\indices{_1^1}}\,, \quad e\indices{^1_2} = - \frac{E\indices{_2^1}}{E\indices{_1^1} E\indices{_2^2}}\,, \quad e\indices{^2_2} = \frac{1}{E\indices{_2^2}}\,, \quad B = \frac{E_{12}}{E\indices{_1^1}} \quad \text{and} \quad \beta = E^{12} E\indices{_1^1}\,.
\end{equation} In the following, we use three different derivatives: \begin{align}\label{eqn:varB} \partial_1 B &= \frac{1}{E\indices{_1^1}} \left( \partial_1 E_{12} - \frac{E_{12}}{E\indices{_1^1}} \partial_1 E\indices{_1^1}\right) \,, \\ \label{eqn:varbeta} \partial_1 \beta &= E\indices{_1^1} \partial_1 E^{12} + E^{12} \partial_1 E\indices{_1^1} \quad \text{and} \\ \label{eqn:varVolume} \partial_1 \det ( e\indices{^a_i} ) &= \partial_1 \frac{1}{E\indices{_1^1}E\indices{_2^2}} = -\frac{1}{\left( E\indices{_1^1} E\indices{_2^2} \right)^2} \left( E\indices{_2^2} \partial_1 E\indices{_1^1} + E\indices{_1^1} \partial_1 E\indices{_2^2} \right)\,. \end{align} Setting one of them to zero and using the derivative of the generalized vielbein \eqref{eqn:localvarEAM} gives rise to a differential equation for $\mathcal{G}\indices{^A_B}(y^1)$, parameterized by \begin{equation} \mathcal{G}\indices{^A_B} = \frac{1}{2} \begin{pmatrix} 0 & \xi(y^1) + \phi(y^1) & 0 & -\xi(y^1) + \phi(y^1) \\ -\xi(y^1) -\phi(y^1) & 0 & \xi(y^1) - \phi(y^1) & 0 \\ 0 & -\xi(y^1) + \phi(y^1) & 0 & \xi(y^1) + \phi(y^1) \\ \xi(y^1) - \phi(y^1) & 0 & -\xi(y^1) + \phi(y^1) & 0 \end{pmatrix}\,. \end{equation} To obtain both parameters of the double Lorentz transformation, $\xi(y^1)$ and $\phi(y^1)$, one differential equation is not enough. Hence, we additionally impose \begin{equation} \partial_1 E\indices{^2_1} = 0 \,. \end{equation} This restricts the vielbein $e\indices{^a_i}$ to an upper triangular matrix and leads to a complete set of two coupled ordinary differential equations for $\xi$ and $\phi$.
They can be solved numerically and, depending on which of the derivatives \eqref{eqn:varB}--\eqref{eqn:varVolume} is set to zero, one obtains a totally double Lorentz fixed generalized vielbein $\tilde E\indices{^A_M}$ \begin{itemize} \item with constant $B$ (which we choose as $B=0$)\,, \item with constant $\beta$ (which we choose as $\beta=0$) or \item with constant volume $V=\det ( e\indices{^a_i} )$ of the fiber. \end{itemize} These three choices are connected to each other via field redefinitions. For all $\widehat{E}\indices{^A_M} \ne \widehat{\bar E}\indices{^A_M}$, the first two cases lead to a metric with a discontinuity after one complete cycle around the base. Thus the field configurations obtained in this way do not permit a geometric description and therefore are called non-geometric. Nevertheless, the question arises whether there exists a field redefinition leading to a geometric description. This question naturally arises because recent works like \cite{Andriot:2011uh,Andriot:2012wx} showed that certain backgrounds are non-geometric for the $\beta=0$ choice, but become geometric for $B=0$. In order to find a field redefinition which leads to a geometric setup, one first has to formulate a criterion to distinguish between geometric and non-geometric configurations: For a geometric configuration, the monodromy of the vielbein $e\indices{^a_i}$ has to be an element of the group of large diffeomorphisms on the torus. For $d_\mathrm{f}=2$, this group is SL$(2,\mathds{Z})$ and one obtains the condition \begin{equation} M\indices{^i_j} = e\indices{_a^i}(y^1) e\indices{^a_j}(y^1 + 2\pi) \in \mathrm{SL}(2,\mathds{Z})\,. \end{equation} It can only hold if \begin{equation}\label{eqn:detofmonodromy} \det ( M\indices{^i_j} ) = \frac{V(2 \pi + y^1)}{V(y^1)} = 1 \quad \leftrightarrow \quad V(2 \pi + y^1) = V(y^1) \end{equation} is fulfilled. But for $B=0$ or $\beta=0$ this condition is violated. Thus the metric becomes discontinuous and prohibits a geometric description.
This observation justifies the third case, constant $V$, for which \eqref{eqn:detofmonodromy} is trivially fulfilled. With this fixing, which is implemented by setting $e\indices{^2_2}=V / e\indices{^1_1}$, the monodromy $M\indices{^i_j}$ reads \begin{equation}\label{eqn:monodromyV=const} M\indices{^i_j} = \begin{pmatrix} \frac{e\indices{^1_1}(2\pi + y^1)}{e\indices{^1_1}(y^1)} & \frac{e\indices{^1_2}(y^1 + 2\pi)}{e\indices{^1_1}(y^1)} - \frac{e\indices{^1_2}(y^1)}{e\indices{^1_1}(y^1 + 2\pi)} \\ 0 & \frac{e\indices{^1_1}(y^1)}{e\indices{^1_1}(2\pi + y^1)} \end{pmatrix}\,. \end{equation} The differential equation discussed above is a straightforward approach to fixing the double Lorentz symmetry, but it is not well suited for more general calculations. Thus we want to discuss another technique, which leads to the same results. It is based on the complex structure $\tau=\tau_{\scriptscriptstyle\mathrm R} + i \tau_{\scriptscriptstyle\mathrm I}$ and the K\"ahler parameter $\rho=\rho_{\scriptscriptstyle\mathrm R} + i \rho_{\scriptscriptstyle\mathrm I}$ of the fiber torus. By using the decomposition \eqref{eqn:decompSL2xSL2} we find \begin{equation} E_\rho = \pm \frac{1}{\sqrt{V}}\begin{pmatrix} 1 & B \\ - V \beta & 1 - B \beta \end{pmatrix} \quad \text{and} \quad E_\tau = \pm \frac{1}{\sqrt{V}}\begin{pmatrix} \frac{V}{e\indices{^1_1}} & 0 \\ e\indices{^1_2} & e\indices{^1_1} \end{pmatrix}\,.
\end{equation} With \eqref{eqn:actiontau&rho}, we can assign \begin{align} \rho &= \frac{1}{\sqrt{V}} \cdot \frac{ \pm i + B }{ \mp V \beta i + 1 - B \beta} &\text{and}& & \tau &= \frac{1}{\sqrt{V}} \cdot \frac{ \frac{V}{ e\indices{^1_1}} i }{\pm e\indices{^1_2} i + e\indices{^1_1}}\,. \intertext{Solving these two equations for $B$, $\beta$, $e\indices{^1_1}$ and $e\indices{^1_2}$ gives rise to} B &= \pm \frac{\sqrt{\pm \rho_{\scriptscriptstyle\mathrm I} \left( \rho_{\scriptscriptstyle\mathrm I} \mp V \left|\rho\right|^2 \right)}}{\rho_{\scriptscriptstyle\mathrm I}} \,, && & \beta &= \pm \frac{- \rho_{\scriptscriptstyle\mathrm R} + \sqrt{\pm \rho_{\scriptscriptstyle\mathrm I} \left( \rho_{\scriptscriptstyle\mathrm I} \mp V \left|\rho\right|^2 \right)}}{ V \left| \rho \right|^2} \\ e\indices{^1_1} &= \pm \sqrt{\frac{\pm V \tau_{\scriptscriptstyle\mathrm I}}{\left| \tau \right|^2}} &\text{and}& & e\indices{^1_2} &= \mp \sqrt{\frac{V}{\pm \tau_{\scriptscriptstyle\mathrm I} \left| \tau \right|^2}} \tau_{\scriptscriptstyle\mathrm R} \,. \end{align} The vielbein components $e\indices{^1_1}$ and $e\indices{^1_2}$ are defined for all $\tau \in \mathds{C}$. For $B$ and $\beta$, this is not the case. They are only defined in the complex region \begin{equation} \mathds{C} \, \setminus \, \left\{ \rho \in \mathds{C} \, | \, \left |\rho - \frac{i}{2 V} \right| < \frac{1}{2 V} \lor \left |\rho + \frac{i}{2 V} \right| < \frac{1}{2 V} \right\}\,. \end{equation} In order to show the implications of this constraint, we consider $\rho(0)=\exp(i \theta)$ with $0 \le \theta \le \frac{\pi}{2}$. From \eqref{eqn:Utau&rho} it follows that the complex function $\rho(y^1)$ is given by \begin{equation} \rho(y^1) = \frac{ \rho(0) \cos(H y^1) + \sin(H y^1) }{ - \rho(0) \sin(H y^1) + \cos(H y^1)}\,.
\end{equation} In the complex plane, all possible values of this function lie on a circle around the point \begin{equation} z = i z_{\scriptscriptstyle\mathrm I} = \frac{i \rho_{\scriptscriptstyle\mathrm I}(0)}{1 - \rho_{\scriptscriptstyle\mathrm R}^2(0)} = \frac{i}{\sin \theta} \quad \text{which has the radius} \quad R = \frac{ \rho_{\scriptscriptstyle\mathrm I}(0) \, \rho_{\scriptscriptstyle\mathrm R}(0)}{1 - \rho_{\scriptscriptstyle\mathrm R}^2(0)} = \cot \theta \,. \end{equation} Because $z_{\scriptscriptstyle\mathrm I} > R$, we only need to consider the upper half of the complex plane. The circle with center $z$ and radius $R$ must not intersect the region where $B$ and $\beta$ are not defined. Thus one has to constrain the volume $V$ of the fiber to \begin{equation} V \le z_{\scriptscriptstyle\mathrm I} - R = \frac{1 - \rho_{\scriptscriptstyle\mathrm R}(0)}{\rho_{\scriptscriptstyle\mathrm I}(0)}=\frac{1-\cos\theta}{\sin\theta}\,. \end{equation} This fact is important because it shows that when fixing the volume $V$ of the fiber to a finite value, there are always some field configurations which are not well defined in terms of $B$ and $\beta$. Finally, we discuss the monodromy $M\indices{^i_j}$ for $f=1/4$. In this case, the twist gives rise to \begin{equation} \tau(2\pi + y^1) = - \frac{1}{\tau(y^1)}\,. \end{equation} Taking into account that \begin{equation} e\indices{^a_i} = \sqrt{\frac{V}{\tau_{\scriptscriptstyle\mathrm I}}} \begin{pmatrix} \tau_{\scriptscriptstyle\mathrm I} & \tau_{\scriptscriptstyle\mathrm R} \\ 0 & 1 \end{pmatrix}\,, \end{equation} one obtains \begin{equation} e\indices{^1_1}(2\pi) = \frac{1}{|\tau(0)|} e\indices{^1_1}(0) \quad \text{and} \quad e\indices{^1_2}(2\pi) = - \frac{1}{|\tau(0)|} e\indices{^1_2}(0)\,.
\end{equation} Plugging this result into \eqref{eqn:monodromyV=const} gives rise to \begin{equation} M\indices{^i_j} = \frac{1}{\left| \tau(0) \right|} \begin{pmatrix} 1 & - \frac{\tau_{\scriptscriptstyle\mathrm R}(0)}{\tau_{\scriptscriptstyle\mathrm I}(0)}\left(|\tau(0)|^2 + 1 \right) \\ 0 & \left| \tau(0) \right|^2 \end{pmatrix}\,. \end{equation} Now there are two possibilities: $M\indices{^i_j}$ itself is an SL$(2,\mathds{Z})$ matrix, or it can be transformed by a GL$(d)$ transformation $t\indices{^i_j}$ into such a matrix. GL$(d)$ transformations act as \begin{equation} \tilde{M}\indices{^i_j} = t\indices{^i_k} M\indices{^k_l} t\indices{_j^l} \end{equation} on the monodromy. In accordance with the notation used so far, $t\indices{_i^j}$ is the inverse transpose of $t\indices{^i_j}$. Such a transformation only exists when the trace of $M$ is an integer, namely \begin{equation} \Tr M\indices{^i_j} = \left( | \tau(0) | + \frac{1}{| \tau(0) |} \right) \in \mathds{Z}\,. \end{equation} There are some special points for which this constraint holds, but in general it is violated and one ends up with a non-geometric background, as expected. \section{Conclusions and discussion}\label{sec:conclusions} In this paper we have applied a consistent Scherk-Schwarz ansatz to Double Field Theory in order to construct a reduced effective theory. This effective theory is used \begin{enumerate} \item to find non-trivial vacuum solutions of DFT's equations of motion and \item to describe fluctuations around these vacua. \end{enumerate} To do this, we use a generalization of group manifolds, which are well understood for ordinary geometry, but have to be adapted to DFT. These manifolds need to have as many isometries as coordinates. In DFT, isometries are defined by the vanishing generalized Lie derivatives, \begin{equation} \mathcal{L}_{K\indices{_I^J}} \mathcal{H}^{MN} = 0 \quad \text{and} \quad \mathcal{L}_{K\indices{_I^J}} \phi' = 0\,.
\end{equation} They give rise to homogeneous, doubled spaces which exhibit a constant generalized Ricci scalar (which is equivalent to the scalar potential in the effective theory). From the effective theory's point of view, these spaces are completely specified by the structure coefficients of the group they are linked to. The structure coefficients can be expressed in terms of the covariant fluxes $\mathcal{F}_{ABC}$. They are not arbitrary, but have to fulfill several constraints. In general, these constraints can be divided into three different categories: The first kind of constraints is needed to create a group structure. It requires that the covariant fluxes are constant and the Jacobi identity (or, more generally, the quadratic constraint) is fulfilled. Additionally, the second kind of constraints requires that the group manifold is compatible with the strong constraint. Such constraints are challenging, because the strong constraint has to be checked on the level of the generalized metric. But the map between covariant fluxes and generalized metric is involved, so in general one can only find conditions for the fluxes which lead to a violation of the strong constraint. Nevertheless, they help to restrict the number of covariant fluxes which survive the constraints of the first kind. Finally, the field equations of the effective theory limit the allowed covariant fluxes. In this paper we looked for a vacuum solution which gives rise to a Minkowski space in the external directions. Thus the scalar potential $V$ has to have a minimum with $V=0$. This again puts severe restrictions on the covariant fluxes. For $D-d=3$, the only covariant fluxes which fulfill all constraints discussed above are \begin{equation} H_{123} = Q^{23}_1 = H \,, \quad Q^{31}_2 = Q^{12}_3 = 0\,, \quad R^{123} = f^1_{23} = 0 \quad \text{and} \quad f^2_{31} = f^3_{12} = f \,. \end{equation} For them, we construct the twist $U\indices{^M_N}$ and the Killing vectors $K\indices{_I^J}$.
The Killing vectors in particular are essential for a consistent dimensional reduction; they have not been discussed in the literature before. For $H\ne 0$ and $f\ne 0$, the background which corresponds to the fluxes above is not T-dual to a background with geometric fluxes only. In this case, the Killing vectors depend on the coordinates and the dual coordinates. They violate the strong constraint, but nevertheless the algebra generated by them is closed. These Killing vectors describe all three possible kinds of generalized diffeomorphisms (coordinate transformations, $B$- and $\beta$-field gauge transformations) at the same time. Thus it is impossible to describe such backgrounds in SUGRA or generalized geometry. We also showed that it is impossible to find a field redefinition which makes the background and fluctuations around it well defined. Thus we come to the conclusion that these backgrounds are beyond the scope of SUGRA and generalized geometry. We also considered fluctuations around these backgrounds which have the same isometries (Killing vectors) as the background itself. In terms of the effective action, such fluctuations can be expressed as $(D-d)^2$ scalar and $2(D-d)$ vector bosons. For these bosons we calculated the mass spectrum and the gauge group. So we use DFT in a {\it twofold way}. First, we use it to calculate the background; afterwards, it is used to study fluctuations around this background. This is possible because DFT is a background-independent theory. So it not only makes predictions about valid backgrounds, but also about fluctuations around these backgrounds. The gaugings we found are compatible with the CFT description of asymmetric orbifolds discussed in \cite{Condeescu:2013yma}. Furthermore, the way the twist $U\indices{^M_N}$ acts on the generalized vielbein suggests that the double elliptic background has a realization as an asymmetric orbifold in string theory.
Explicit CFT computations on this kind of string background could also confirm the mass spectrum we have calculated. This would be an important check that DFT indeed covers such string backgrounds. \acknowledgments We gratefully acknowledge that Olaf Hohm was involved in the initial stages of this project. We would like to thank him for many important discussions during the preparation of this paper. We would also like to thank D. Andriot, A. Betz, R. Blumenhagen, S. Massai, F. Montiel, S. Nibbelink, P. Patalong and M. Schmidt Sommerfeld for helpful discussions. This work was partially supported by the ERC Advanced Grant ``Strings and Gravity'' (Grant No. 32004) and by the DFG cluster of excellence ``Origin and Structure of the Universe''.
\section{Introduction} Shock-ignition (SI) \cite{Betti_2008,Perkins2009} is a direct-drive \cite{Craxton2015} inertial confinement fusion (ICF) scheme where the fuel assembly and ignition stages of the implosion are decoupled using a characteristic laser pulse shape. In the first stage a low-intensity ($10^{14}-10^{15}$ \si{W/\centi\metre^2}) pulse is used to ablate the outer layer of the target, creating a long density scale-length coronal plasma and compressing the fuel. In the second stage, high-intensity ($10^{15}-10^{16}$ \si{W/\centi\metre^2}) beams are used to launch a strong shock into the target, which in turn leads to a non-isobaric pressure profile. In a successful experiment, when the pressure peaks at the centre, ignition occurs. In the ignition stage of the pulse, the average laser intensity on target is above the threshold for laser-plasma instabilities (LPI) \cite{Theobald2012}, such as stimulated Raman scattering (SRS)\cite{Liu1974}, stimulated Brillouin scattering (SBS)\cite{Liu1974}, and two-plasmon decay (TPD)\cite{Liu1976}. One laser-plasma instability of major concern\cite{Rosenberg2018} to shock-ignition is stimulated Raman scattering, a three-wave parametric instability that transfers energy from the laser to an electron plasma wave (EPW) and a scattered light wave \cite{Liu1974}. SRS is only possible in plasma regions where the following matching conditions can be satisfied: $\omega_0 = \omega_\mathrm{EPW} + \omega_\mathrm{s}$, $\mathbf{k}_0 = \mathbf{k}_\mathrm{EPW} + \mathbf{k}_\mathrm{s}$, where the subscripts refer to the incident laser, electron plasma wave, and scattered light wave, respectively. The production of scattered light by SRS is deleterious to the SI scheme as it diverts laser energy away from the target and reduces laser-illumination uniformity. The effect of SRS-generated electron plasma waves on shock-ignition is less well understood, as it depends on the specific waves which are amplified. 
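As an illustrative aside (our own sketch, not part of any simulation code referenced in this paper), the backscatter SRS matching conditions can be solved numerically by combining them with the electromagnetic dispersion relation for the two light waves and the Bohm-Gross relation for the EPW, working in units with $c = \omega_0 = 1$:

```python
# Sketch: solve the backscatter SRS matching conditions
#   omega_0 = omega_EPW + omega_s,  k_0 = k_EPW + k_s  (k_s antiparallel to k_0)
# in units c = omega_0 = 1, using omega^2 = omega_pe^2 + c^2 k^2 for the light
# waves and the Bohm-Gross relation for the EPW.
import math

def srs_backscatter(ne_over_ncr, Te_keV):
    """Return (k_EPW, omega_EPW, |k_s|) for backscattered SRS."""
    wpe2 = ne_over_ncr                 # (omega_pe / omega_0)^2
    vth2 = Te_keV / 511.0              # (v_th / c)^2 with v_th^2 = T_e / m_e
    k0 = math.sqrt(1.0 - wpe2)         # laser wave number
    def residual(ks):                  # frequency mismatch at scattered |k_s|
        w_s = math.sqrt(wpe2 + ks**2)
        k_epw = k0 + ks                # backscatter: k_EPW = k_0 + |k_s|
        w_epw = math.sqrt(wpe2 + 3.0 * vth2 * k_epw**2)
        return 1.0 - w_s - w_epw
    lo, hi = 1e-6, k0                  # residual changes sign on (0, k_0)
    for _ in range(60):                # simple bisection
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    ks = 0.5 * (lo + hi)
    k_epw = k0 + ks
    return k_epw, math.sqrt(wpe2 + 3.0 * vth2 * k_epw**2), ks

k_epw, w_epw, ks = srs_backscatter(0.15, 4.5)
lambda_D = math.sqrt(4.5 / 511.0) / math.sqrt(0.15)  # in units of c / omega_0
print(k_epw * lambda_D)
```

For $n_e = 0.15n_\mathrm{cr}$ and $T_e = 4.5\,$keV this gives $k_\mathrm{EPW}\lambda_\mathrm{D}\approx 0.32$, i.e. well into the strongly kinetic regime.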
As the SRS EPWs Landau damp, they transfer energy to electrons with velocities $v \sim v_\mathrm{ph} = \omega/k > v_\mathrm{th}$. Of these suprathermal electrons, those with energies less than $100\,\mathrm{keV}$ are predicted to stop behind the converging shock, and could act to augment the ignition shock \cite{Ribeyre_2009}. Electrons with energies greater than $100\,\mathrm{keV}$ are likely to deposit their energy ahead of the shock and pre-heat the target, making it more difficult to compress \cite{Batani_2014}. For this reason, it is important to understand the energy distribution of suprathermal electrons produced by SRS in shock-ignition scenarios. Previous simulation studies of stimulated Raman scattering in shock-ignition have identified dominant SRS growth at $n_\mathrm{cr}/4$ and $n_\mathrm{cr}/16$, where it grows as an absolute instability\cite{Klimo2010,Klimo2011}. SRS has also been observed at densities such that it grows as a weakly kinetic convective instability, characterised by the condition\cite{Cristoforetti2017} $0.15 < k_\mathrm{EPW}\lambda_\mathrm{D} < 0.25$. Furthermore, some studies have shown the presence of SRS in the strongly kinetic limit ($k_\mathrm{EPW}\lambda_\mathrm{D} > 0.25$), where it occurs by kinetic inflation \cite{Riconda2011,Batani_2014}, as explained below. SRS which occurs via kinetic inflation (from here on referred to as inflationary SRS, or iSRS) has been studied extensively in the low-density homogeneous plasmas relevant to indirect-drive ICF on the NIF \cite{Vu2002,Yin2006,Vu2007,Strozzi2007,Yin2008,Yin2012,Ellis2012}. In attempting to explain experimental measurements of large SRS reflectivities at high values of $k_\mathrm{EPW}\lambda_\mathrm{D}$ \cite{Fernandez2000,Montgomery2002}, it was suggested that some mechanism caused a reduction in the Landau damping rate by four to five times, compared to the damping for a Maxwellian plasma \cite{Montgomery2002}.
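The link between EPW phase velocity and suprathermal electron energy invoked above can be illustrated with a short relativistic estimate. The EPW frequency and wave number used below are assumed illustrative values, typical of backscatter SRS near $0.15n_\mathrm{cr}$ at $T_e = 4.5\,$keV, not results taken from the cited references:

```python
# Rough estimate (our illustration): an electron trapped near the EPW phase
# velocity acquires kinetic energy E = (gamma - 1) m_e c^2.
import math

ME_C2_KEV = 511.0  # electron rest energy in keV

def trapped_electron_energy_keV(omega_over_w0, k_in_w0_over_c):
    v_ph = omega_over_w0 / k_in_w0_over_c        # phase velocity in units of c
    gamma = 1.0 / math.sqrt(1.0 - v_ph**2)
    return (gamma - 1.0) * ME_C2_KEV

# Assumed EPW: omega ~ 0.44 omega_0, k ~ 1.3 omega_0/c (backscatter, ~0.15 n_cr)
E_keV = trapped_electron_energy_keV(0.44, 1.3)
print(round(E_keV, 1))
```

The result is a few tens of keV, i.e. below the $100\,$keV boundary separating shock-augmenting from pre-heating electrons.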
Anomalously large SRS-reflectivities were recreated in simulations \cite{Vu2001,Vu2002}, and were explained by reference to O'Neil's 1965 model of reduced EPW damping caused by electron-trapping\cite{ONeil1965}. In homogeneous plasmas, iSRS occurs when an SRS EPW grows to a point where it can trap electrons for one complete bounce period or longer, without them becoming de-trapped due to velocity-space diffusion or side-loss\cite{Vu2002}. This trapped electron population leads to modification of the distribution function, in the form of a locally flattened region around the EPW phase velocity. This translates to a modification of the dielectric properties of the plasma, resulting in a reduction in the EPW's associated Landau damping rate \cite{ONeil1965,Vu2002}, and increased SRS growth. Key results of previous studies of inflationary SRS in homogeneous plasmas include: a theory for the saturation of iSRS in terms of EPW bowing and the trapped-particle modulation instability \cite{Yin2008}; the derivation of an inflation threshold intensity in terms of competition between trapping in the EPW and diffusion in velocity space \cite{Vu2007}; and the description of iSRS in terms of a transition from convective to absolute growth \cite{Wang2018}. Inflationary SRS has also been identified as an important mechanism in simulations with ensembles of laser speckles \cite{Yin2012,Winjum2019}. In the large-scale inhomogeneous plasmas associated with shock-ignition ($L_n =n_e/(dn_e/dx) \simeq 300-1000 \si{\micro\metre}$) the mechanism and effects of inflationary SRS have, so far, received little attention. The few papers which do refer to iSRS in SI inhomogeneous plasmas assume that the explanation of iSRS in a homogeneous plasma in terms of reduced Landau damping also applies to iSRS in an inhomogeneous plasma. However, iSRS in an inhomogeneous plasma actually happens by a different mechanism. 
In the case of a homogeneous plasma, reduced Landau damping due to electron-trapping in the EPW leads to an increase of the SRS growth rate which, if sufficiently large, can cause a transition from convective to absolute growth \cite{Wang2018}. For an inhomogeneous plasma, where the growth of SRS is always convective, the reduction in Landau damping associated with trapping in the EPW has no net effect on the convective gain. While the local SRS growth rate may depend on the EPW damping rate in an inhomogeneous plasma, the region of SRS convective growth is also extended, leading to a net Rosenbluth gain\cite{Rosenbluth1972} which is independent of Landau damping \cite{Williams1991,Liu1994}. We therefore look to another non-linear effect caused by the trapped electrons in the SRS EPW: the non-linear frequency shift\cite{Morales1972}. In an inhomogeneous plasma, the frequency shift resulting from electron-trapping can compensate for the wave-number mismatch on propagating up the density gradient, thereby allowing growth over a larger region, an auto-resonance \cite{Chapman2010,Chapman2012}. Chapman {\it et al.} (2012)\cite{Chapman2012} proposed this theory for iSRS in an inhomogeneous $(L_n \lesssim 100 \si{\micro\metre})$ plasma close to the hohlraum wall in indirect-drive ICF. They demonstrated the auto-resonant interaction\cite{Chapman2010} between the non-linear frequency shift associated with electron-trapping in the EPW and the wave-number mismatch caused by plasma inhomogeneity, which allows larger SRS gain\cite{Chapman2012}. Inflationary SRS has been suggested as the cause of SRS from low densities in simulations of LPI in shock ignition \cite{Klimo2014}. Sub-scale shock-ignition experiments have detected SRS scattered light from densities $0.09 - 0.16 n_\mathrm{cr}$, where the inflationary mechanism should be important \cite{Cristoforetti2017}.
Recent full-scale ($L_n > 500 \si{\micro\metre}$, $T_e = 5\si{\kilo \electronvolt}$) directly-driven experiments have detected significant SRS-reflected light from densities $0.15 - 0.21 n_{\mathrm{cr}}$ \cite{Rosenberg2020}. Another full-scale ($L_n = 450 \si{\micro\metre}$, $T_e = 4.5\si{\kilo \electronvolt}$) SI experiment measured SRS-reflected light from densities between $0.05$ and $0.15 n_{\mathrm{cr}}$. Under the conditions of that experiment, $k_{\mathrm{EPW}}\lambda_D$ ranges from $0.3$ to $0.6$ and the measured SRS is assumed to be inflationary in origin \cite{Baton2020}. For a single laser speckle in an inhomogeneous plasma with density scale-length $L_n\simeq 70 \si{\micro\metre}$, Riconda {\it et al.} (2011) \cite{Riconda2011} demonstrated that iSRS was associated with electron-trapping in the EPW. By varying $a_0=eE_0/(m_e c \omega_0)$ from 0.03 to 0.06, i.e. an increase in laser intensity from $1.0\times10^{16} \si{W/\centi\metre^2}$ to $4\times10^{16} \si{W/\centi\metre^2}$, they showed a transition to iSRS. The primary aim of this study is to identify shock-ignition plasma parameters where iSRS may occur. This information will guide future studies examining the longer-term consequences of iSRS growth, such as its saturation mechanisms or interaction with other instabilities, which will require more detailed and computationally expensive modelling. Here we show that inflationary SRS can occur in inhomogeneous plasmas with density scale-lengths $L_n\simeq 300-1000 \si{\micro\metre}$, such as expected for shock-ignition coronal plasmas, and at shock-ignition laser intensities and plasma temperatures. We demonstrate that iSRS in these simulations is characterised by electron-trapping, frequency shift of the EPW and the appearance of beam-acoustic modes (BAM). Through a set of parameter studies we estimate the transition threshold for iSRS at different density scale-lengths and densities.
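For orientation, the normalised amplitude $a_0$ maps to laser intensity as follows. This is a generic conversion from $I = \tfrac{1}{2}\epsilon_0 c E_0^2$; the $351\,$nm wavelength inserted below is our assumption for illustration:

```python
# Sketch: a_0 = e E_0 / (m_e c omega_0) from the laser intensity,
# with I = (1/2) eps_0 c E_0^2 (peak field of a linearly polarised wave).
import math

E_CHARGE = 1.602176634e-19   # C
M_E      = 9.1093837015e-31  # kg
C        = 2.99792458e8      # m/s
EPS0     = 8.8541878128e-12  # F/m

def a0_from_intensity(I_W_per_cm2, lambda_m):
    I = I_W_per_cm2 * 1e4                    # convert to W/m^2
    E0 = math.sqrt(2.0 * I / (EPS0 * C))     # peak electric field
    omega0 = 2.0 * math.pi * C / lambda_m
    return E_CHARGE * E0 / (M_E * C * omega0)

print(a0_from_intensity(1e16, 351e-9))   # ~0.03
print(a0_from_intensity(4e16, 351e-9))   # ~0.06
```

For a $351\,$nm laser this reproduces the $a_0 = 0.03$ to $0.06$ range quoted above for intensities of $1\times10^{16}$ to $4\times10^{16}\,$W/cm$^2$.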
These simulations are all restricted to 1D to determine where iSRS can occur in isolation from other effects. We also comment on the possible risk to SI from inflationary SRS and approaches to maximise the benefit of the hot-electron production. The outline of this paper is as follows: Section \ref{sec:code&IC} describes the code used and the choice of initial conditions. In Section \ref{sec:signatures} we demonstrate how one may detect the presence of iSRS in PIC simulations of shock-ignition plasmas. Section \ref{sec:paramScan} describes how the iSRS behaviour changes with the density scale-length of the coronal plasma. We also characterise the hot-electron populations, and comment on their potential impact on the ignition shock. Section \ref{sec:conclusion} summarises our results. \section{Code and initial conditions}\label{sec:code&IC} Simulations are performed using the particle-in-cell (PIC) code EPOCH \cite{Arber2015}, which solves Maxwell's equations on a fixed grid and self-consistently moves particles under the Lorentz force law. The initial plasma conditions span a range of SI-relevant parameters. The simulation parameters are chosen to achieve our primary aim of identifying plasma parameters where iSRS may occur, which does not require large simulations of the entire LPI system. The simulations all used a domain size of $L_x = 100\si{\micro\metre} $ and ran to $T_\mathrm{end} = 2\si{\pico\second}$ with 2048 particles per cell (PPC) for the electron species. We treat the ions as a neutralising background population since we simulate only a two-picosecond interval of SRS development, during which ion dynamics will not become important \cite{Rousseaux2006}. For the plasma parameters laid out in this section, electron-ion collisions occur on a characteristic timescale of approximately $7 \si{\pico\second}$ at the highest density probed, $0.22n_\mathrm{cr}$.
Since the inflationary Raman process we are investigating takes place on a sub-picosecond timescale, we do not include collisions in our simulations. The plasma density profiles are given by the expression $n(x) = n_\mathrm{min}\exp(x/L_n)$ and can be seen in Table \ref{tab:densities}. We simulate a frequency-tripled Nd:glass laser with vacuum wavelength $\lambda_0 = 351\si{\nano\metre}$, polarised in the $y$-direction. The laser intensity was varied in 20 logarithmically evenly-spaced increments between $10^{14}$\si{W/\centi\metre^2} and $10^{16}$\si{W/\centi\metre^2}, with a half-Gaussian temporal profile followed by a flat top, and a rise-time of 50 laser periods. We use absorbing boundaries for the fields and thermal boundaries for the particles; the latter replace any particle leaving the simulation with an incoming particle with velocity consistent with a Maxwellian plasma at the initial temperature of $4.5$\si{keV}. \begin{table}[ht] \caption{\label{tab:densities} Summary of density profiles and $k_\mathrm{EPW}\lambda_\mathrm{D}$ values in each simulation. $L_n=n_e/(dn_e/dx)$ is evaluated at $n_\mathrm{mid}$. For all but the case centred at $0.2n_\mathrm{cr}$, $k_\mathrm{EPW}\lambda_\mathrm{D} > 0.28$ and we are in the strongly kinetic regime. The total range of $k_\mathrm{EPW}\lambda_\mathrm{D}$ probed is 0.21-0.41. } \begin{ruledtabular} \begin{tabular}{cccc} $L_n/\si{\micro \metre}$ & $n_\mathrm{mid}/n_\mathrm{cr}$ & $(n_\mathrm{min},n_\mathrm{max})/n_\mathrm{cr}$ &$(k\lambda_\mathrm{D_{min}},k\lambda_\mathrm{D_{max}})$\\ \hline 300& 0.15 & $(0.13,0.18)$ & $(0.28,0.37)$\\ 500 & 0.12 &$(0.11,0.13)$ & $(0.37,0.41)$\\ 500 & 0.15 & $(0.14,0.17)$& $(0.29,0.35)$ \\ 500 & 0.20 & $(0.18,0.22)$& $(0.21,0.27)$\\ 1000 & 0.15 & $(0.14,0.16)$ & $(0.31,0.32)$ \\ \end{tabular} \end{ruledtabular} \end{table} EPOCH uses a pseudorandom number generator (PRNG) to generate the initial particle distribution.
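The end-point densities in Table \ref{tab:densities} follow directly from the exponential profile; a short check (our own sketch, assuming the profile is centred on $n_\mathrm{mid}$ at the middle of the $100\si{\micro\metre}$ domain):

```python
# With n(x) = n_min exp(x / L_n) on a 100 micron domain centred on n_mid,
# the end points are n_mid * exp(-50/L_n) and n_mid * exp(+50/L_n).
import math

def density_endpoints(n_mid, L_n_um, domain_um=100.0):
    half = 0.5 * domain_um
    return (n_mid * math.exp(-half / L_n_um),
            n_mid * math.exp(+half / L_n_um))

for L_n, n_mid in [(300, 0.15), (500, 0.12), (500, 0.15), (500, 0.20), (1000, 0.15)]:
    n_min, n_max = density_endpoints(n_mid, L_n)
    print(L_n, n_mid, round(n_min, 2), round(n_max, 2))
```

Rounded to two decimal places, these reproduce the $(n_\mathrm{min}, n_\mathrm{max})$ column of the table.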
Each simulation was repeated 10 times with a different PRNG seed, allowing us to determine the sensitivity of SRS to plasma fluctuations. This allowed us to calculate both the mean and standard deviation of the intensity of the light scattered through SRS. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{fig1.eps} \caption{(Colour) Time-averaged intensity of SRS scattered light for a homogeneous simulation ($n_e=0.15n_\mathrm{cr}$, $T_e = 4.5$\si{\kilo \electronvolt}) with different numbers of particles per cell. Relative errors are given by one standard deviation of the SRS scattered light intensity as calculated from ten simulations.} \label{fig:convergence} \end{figure} SRS amplifies fluctuations in the plasma, and so is sensitive to the number of particles per cell used in the simulation, as can be seen in Figure \ref{fig:convergence}. In this figure, the intensity of SRS back-scattered light (denoted in this paper by $\langle I_{\mathrm{SRS}}\rangle$) is plotted against the incoming laser intensity $I_0$ ranging between $0.4 - 4.0 \times 10^{15}$\si{W/\centi\metre^2} for a homogeneous plasma with $n_e=0.15n_\mathrm{cr}, T_e = 4.5$\si{\kilo \electronvolt}, for different numbers of particles per cell. At low incident intensity, $\langle I_{\mathrm{SRS}}\rangle$ is inversely proportional to the number of PPC used. This is as we would expect for simple convective amplification \cite{Rosenbluth1972} of the product of two quantities ($E_y$, $B_z$) whose background PIC-noise amplitudes are each proportional to $1/\sqrt{\mathrm{PPC}}$. The upper saturated level of $\langle I_{\mathrm{SRS}}\rangle$ is robust to the number of particles per cell for $\mathrm{PPC} > 100$. The transition between these two levels represents the change from standard convective amplification of SRS to enhanced growth of SRS due to trapping (inflationary SRS), hence we call this the inflation threshold.
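The $1/\sqrt{\mathrm{PPC}}$ noise scaling invoked above can be demonstrated with a toy calculation that has nothing to do with EPOCH internals; it simply samples a spatial Fourier mode of $N$ randomly placed particles:

```python
# Illustration: the relative amplitude of a density mode built from N random
# particle positions scales as 1/sqrt(N), so a noise *intensity* formed from a
# product of two such field quantities scales as 1/N.
import cmath, math, random

def mode_rms(n_particles, n_trials=400, mode=3, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        s = sum(cmath.exp(1j * mode * 2.0 * math.pi * rng.random())
                for _ in range(n_particles))
        total += abs(s / n_particles) ** 2
    return math.sqrt(total / n_trials)   # rms relative mode amplitude

r1 = mode_rms(100)
r2 = mode_rms(400)
print(r1 / r2)   # ~2: quadrupling N halves the noise amplitude
```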
The existence of an inflation threshold is also robust to the number of particles per cell for $\mathrm{PPC} > 100$. In the region containing the inflation threshold, the error associated with the intensity of SRS scattered light is largest. This suggests that inflationary SRS is very sensitive to the initial distribution of particles in the simulation domain, and that a statistical analysis of the mean and standard deviation of intensity across different random seeds will be important if we are to determine the iSRS threshold intensity accurately. \section{Signatures of iSRS in inhomogeneous plasmas}\label{sec:signatures} Three signatures of inflationary SRS observed in the literature for homogeneous plasmas are: a threshold intensity past which scattering of laser light is enhanced above the level predicted by fluid theory \cite{Vu2007}; electron-trapping in the SRS EPWs leading to local flattening of the distribution function at the EPW phase velocity \cite{Vu2002}; and the growth of down-shifted SRS EPWs and a continuum of beam-acoustic modes (BAMs) \cite{Yin2006}. In what follows we show that all of these signatures are also present for iSRS in an inhomogeneous plasma, despite the instability arising through an auto-resonance rather than a transition from convective to absolute growth. We consider first the existence of a threshold intensity past which SRS growth is enhanced, by several orders of magnitude, above the predictions of fluid theory; this has been seen in experiments \cite{Kline2006} and simulations \cite{Vu2002,Yin2006,Vu2007,Riconda2011}. In order to identify kinetic inflation of the SRS scattered light intensity in our PIC simulations, we use a simple fluid model to calculate and compare the intensity of SRS scattered light in the absence of kinetic effects. 
According to fluid theory, the growth of a parametrically unstable mode in an inhomogeneous plasma is limited by the loss of resonance between the waves as they propagate through the plasma and experience wave-number shift \cite{Rosenbluth1972}. We can formulate this inhomogeneous growth in terms of the Rosenbluth gain exponent\cite{Rosenbluth1972} \begin{equation}\label{eqn:GRos} G_\mathrm{Ros} = 2\pi\gamma_0^2/|v_{g,1}v_{g,2}\kappa'|, \end{equation} where $\gamma_0$ is the growth rate of the equivalent mode in a homogeneous plasma, $v_{g,1}$ and $v_{g,2}$ are the group velocities of the scattered EM wave and the EPW, and $\kappa'$ is the $x$-derivative of the wave-number mismatch $\kappa(x) = k_0(x) -k_\mathrm{s}(x) -k_\mathrm{EPW}(x)$. The maximum intensity reached by a parametrically unstable wave which has grown from noise at point $x$ is then given by the expression $I_\mathrm{noise}\exp(G_\mathrm{Ros}(x))$. In order to calculate the intensity of scattered light due to SRS, we substitute for $k_0,k_\mathrm{s},k_\mathrm{EPW}$ using the electromagnetic and Bohm-Gross dispersion relations in one dimension, to get: \begin{equation}\label{eqn:kappaPrime} \frac{d\kappa}{dx}= -\frac{1}{2}\frac{q_e^2}{m_e\epsilon_0} \left(\frac{1}{c^2k_0}-\frac{1}{3v_\mathrm{th}^2k_\mathrm{EPW}}-\frac{1}{c^2k_\mathrm{s}}\right)\frac{dn_e}{dx}. \end{equation} Substituting this back into $G_\mathrm{Ros}$, along with the growth rate for backward SRS in a homogeneous plasma\cite{kruer2003}, \begin{equation}\label{eqn:gamma0} \gamma_0 = \frac{k_\mathrm{EPW}v_{os}}{4}\left[\frac{\omega_{\mathrm{pe}}^2}{\omega_\mathrm{EPW}(\omega_0-\omega_\mathrm{EPW})}\right]^{1/2}, \end{equation} gives an appropriate Rosenbluth gain exponent for calculating convective amplification of back-scattered SRS light in our simulations. We make several simplifying assumptions that allow us to estimate the maximum scattered light intensity at a point in our simulation domain.
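The Rosenbluth gain exponent defined above can be evaluated directly. The sketch below is our own simplified implementation: it solves the backscatter matching conditions in units $c=\omega_0=1$, takes $v_{os}=a_0c$ with a $351\,$nm laser assumed for the intensity-to-$a_0$ conversion, and uses the fact that an exponential density profile gives $\mathrm{d}(\omega_\mathrm{pe}^2)/\mathrm{d}x = \omega_\mathrm{pe}^2/L_n$:

```python
# Sketch: Rosenbluth gain exponent G_Ros = 2*pi*gamma_0^2 / |v_g1 v_g2 kappa'|
# for backscatter SRS, units c = omega_0 = 1.  Simplifications: Bohm-Gross
# dispersion (inaccurate at the largest k*lambda_D), damping neglected.
import math

def rosenbluth_gain(ne_over_ncr, Te_keV, L_n_um, I_W_per_cm2, lambda_nm=351.0):
    wpe2, vth2 = ne_over_ncr, Te_keV / 511.0
    k0 = math.sqrt(1.0 - wpe2)
    # --- matching conditions for backscatter (bisection on |k_s|) ---
    def res(ks):
        return 1.0 - math.sqrt(wpe2 + ks**2) \
                   - math.sqrt(wpe2 + 3.0 * vth2 * (k0 + ks)**2)
    lo, hi = 1e-6, k0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if res(lo) * res(mid) <= 0.0 else (mid, hi)
    ks = 0.5 * (lo + hi)
    k_epw = k0 + ks
    w_epw = math.sqrt(wpe2 + 3.0 * vth2 * k_epw**2)
    w_s = 1.0 - w_epw
    # --- homogeneous growth rate with v_os = a_0 c (standard a_0 formula) ---
    a0 = 0.855 * math.sqrt(I_W_per_cm2 / 1e18 * (lambda_nm / 1000.0)**2)
    gamma0 = 0.25 * k_epw * a0 * math.sqrt(wpe2 / (w_epw * (1.0 - w_epw)))
    # --- kappa' for an exponential profile, and group velocities ---
    L_n = L_n_um * 1e-6 / (lambda_nm * 1e-9 / (2.0 * math.pi))  # in c/omega_0
    kappa_p = -(wpe2 / (2.0 * L_n)) * (1.0/k0 - 1.0/(3.0*vth2*k_epw) - 1.0/ks)
    vg_epw = 3.0 * vth2 * k_epw / w_epw
    vg_s = ks / w_s
    return 2.0 * math.pi * gamma0**2 / abs(vg_epw * vg_s * kappa_p)

print(rosenbluth_gain(0.15, 4.5, 500, 6.16e15))
```

For $L_n=500\si{\micro\metre}$, $n_e=0.15n_\mathrm{cr}$, $T_e=4.5\,$keV and $I_0=6.16\times10^{15}\,$W/cm$^2$ this gives $G_\mathrm{Ros}$ of order ten, increasing with density at fixed scale-length.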
Firstly, we neglect the dependence of the scattered light velocity on space, and consider it to be fixed at $c$. This means that we slightly over-estimate the amount of scattered light which is able to reach the point $x$ in time $t$. We also assume that the laser achieves its maximum intensity starting at $t=0$ rather than ramping up, as it does in the simulations. We neglect collisional damping of the scattered EM waves and assume that the noise source $I_\mathrm{noise}$ is homogeneous in the domain. Finally, we assume that the amplification described by the Rosenbluth gain exponent occurs locally and instantaneously at the point of perfect matching $(\kappa=0)$, rather than across the resonance region defined by $\ell \sim 1/\sqrt{\kappa'}$. For all the simulations presented in this paper $\ell < 6\si{\micro\metre}$. The scattered light intensity is then given by \begin{equation}\label{eqn:fluid_model} I(x) = \frac{1}{L_x}\int_x^{L_x}I_\mathrm{noise}\exp(G_\mathrm{Ros}(s))\, ds. \end{equation} The prefactor $1/L_x$ ensures that if $G_\mathrm{Ros}=0$, such that there is no growth, then the back-scattered signal remains at the noise intensity. The steady-state intensity of SRS scattered light measured at the laser-entry boundary is then given by $\langle I_{\mathrm{SRS}} \rangle = I(0)$. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{fig2.eps} \caption{ (Colour) Blue triangular markers show the intensity of SRS scattered light calculated using the fully-kinetic EPOCH code for parameters: $L_n = 500 \si{\micro\metre} $ and $n_{\mathrm{mid}} = 0.15n_\mathrm{cr}$. Red circular markers show the intensity of SRS scattered light calculated, for the same plasma parameters, from the fluid model presented above. The initial noise level in the fluid model was calculated from a PIC simulation without the laser driver: $I_\mathrm{noise}=\langle E_yB_z\rangle_{x,t} / \mu_0 = 8\times 10^{10} \si{W/\centi\metre^2}$.
} \label{fig:kineticVsfluid} \end{figure} The red circular markers in Figure \ref{fig:kineticVsfluid} show the results of applying this method to the case of a $500\si{\micro\metre} $ density scale-length plasma, with the density profile centred at $0.15n_\mathrm{cr}$, for incident laser intensities ranging from $10^{14} - 10^{16}$\si{W/\centi\metre^2}. The blue triangular markers show the intensity of SRS scattered light calculated from the equivalent kinetic EPOCH simulations. The relationship between the kinetic and fluid results changes as the incident laser intensity increases. At low intensities the fluid and kinetic models are well matched, but not identical, suggesting that there is always some kinetic element to the SRS behaviour in these simulations. Continuing this analysis to intensities $<10^{13}$\si{W/\centi\metre^2} (well below those relevant to shock-ignition) shows that the two methods converge for low intensities, where the behaviour is purely fluid. Once the incident laser intensity exceeds $I_\mathrm{threshold} \sim 1.4\times10^{15}$\si{W/\centi\metre^2}, the intensity of SRS scattered light measured in the kinetic simulations exceeds the fluid prediction by between one and three orders of magnitude, until the intensity reaches $I_0 = 10^{16}$\si{W/\centi\metre^2}, where it appears to saturate. In the fluid model, $\langle I_{\mathrm{SRS}}\rangle$ is a smooth function of incident laser intensity and we cannot define such a threshold intensity. This implies that kinetic effects in our simulations are responsible for the increase in $\langle I_{\mathrm{SRS}} \rangle$ and that we have observed iSRS. The fluid estimate shows no sign of saturating at high intensities, since the Rosenbluth gain formula used is based on unbounded linear SRS growth over the resonance region $\ell$ and the model does not include pump depletion.
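The behaviour of the fluid estimate can be checked numerically. The sketch below implements the integral expression for $I(x)$ above with an arbitrary illustrative gain profile (not one taken from our simulations) and confirms the stated property of the $1/L_x$ prefactor:

```python
# Numerical sketch of I(0) = (1/L_x) * integral_0^{L_x} I_noise exp(G(s)) ds.
# With G identically zero the back-scattered signal stays at the noise level.
import math

def backscatter_intensity(gain, I_noise=8e10, L_x=100.0, n_steps=10_000):
    dx = L_x / n_steps
    return sum(I_noise * math.exp(gain(i * dx)) * dx
               for i in range(n_steps)) / L_x

flat = backscatter_intensity(lambda s: 0.0)
print(flat)          # equals I_noise: no growth, signal stays at noise level
peaked = backscatter_intensity(lambda s: 10.0 * math.exp(-((s - 50.0) / 5.0)**2))
print(peaked / flat) # large amplification from a localised resonance region
```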
By constructing plots such as these, which show the fully kinetic PIC results alongside results from our simple fluid model, we are able to identify the iSRS threshold as the point past which the kinetic and fluid models differ by at least one order of magnitude. \begin{figure}[!ht] \centering \includegraphics[width=\columnwidth]{fig3_3a_3b_3c_3d.pdf} \caption{(Colour) Time-resolved comparison of SRS-reflectivity (a,c) and electron distribution functions (b,d) for two simulations with parameters: $L_n = 500 \si{\micro\metre} $; centred at $0.15n_\mathrm{cr}$; and $T_e = 4.5$\si{\kilo\electronvolt}. The distribution function of electron momentum is averaged over the entire spatial domain at four times, normalised to the initial thermal momentum. Panels (a,b) have an initial laser intensity below the threshold for inflationary SRS; $I_0 = 1.13\times10^{15}$\si{W/\centi\metre^2}. Panels (c,d) have an initial laser intensity above the iSRS threshold; $I_0 = 4.83\times10^{15}$\si{W/\centi\metre^2}.} \label{fig:reflAndDist} \end{figure} A second signature of iSRS, as reported in the literature for homogeneous plasmas, is electron-trapping in the SRS electron plasma waves, leading to a non-linear frequency shift and enhanced SRS-reflectivities at large $k\lambda_\mathrm{D}$ \cite{Vu2002}. A typical manifestation of this, for our inhomogeneous simulations, is shown in Figure \ref{fig:reflAndDist}. Figure \ref{fig:reflAndDist} shows the instantaneous SRS-reflectivity measured at the left boundary of the simulation domain (a,c), alongside the box-averaged electron distribution function at four times (b,d), for two simulations with laser intensities above and below the iSRS threshold. Sub-figures \ref{fig:reflAndDist} (a,b) show that, when driven below threshold, the distribution of electron momenta is Maxwellian throughout the simulation, and that the maximum instantaneous power in SRS-reflected light is consequently very low ($P \sim 10^{-3}P_0$).
In sub-figures (c,d), where the incident laser intensity is well above the iSRS threshold, we see that the power in SRS-reflected light is correlated with the growth of a non-Maxwellian tail in the distribution function, corresponding to an electron population trapped in the SRS electron plasma waves. There is a general trend of increasing SRS-reflected light that correlates with the increasing trapped electron population. Throughout the simulation presented in Figure \ref{fig:reflAndDist}, the SRS-reflectivity exhibits a `bursty' behaviour on the sub-picosecond timescale, even though the distribution function appears to vary smoothly as a function of time. In Figure \ref{fig:reflAndDist}, the distribution functions have been averaged over the entire simulation domain, but upon inspection of Figure \ref{fig:downshift} (c) we can see that the growth of iSRS EPWs has spatial dependence. Averaging the distribution over the whole domain potentially masks the localised effects responsible for the bursty behaviour, the period of which has been shown in Winjum {\it et al.} (2010) \cite{Winjum2010} to depend on the local trapping-induced non-linear frequency shift. \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{fig4_4a_4b_4c_4d.eps} \caption{(Colour) Top panels show the spectra of electrostatic (a) and electromagnetic (b) waves over the period $0-0.7\si{\pico\second}$. The white (orange) dashed lines represent the linear predictions for the spectra of backward (forward) SRS. The bottom panels show the same spectra calculated over the period $1.5-1.9\si{\pico\second}$. The $E_x$ ($B_z$) spectrum is significantly down-shifted (up-shifted), demonstrating a trapped population of electrons in the EPW \cite{Yin2006}.
The orange solid line represents the plasma frequency $\omega_{\mathrm{pe}}$ for the simulation parameters: $L_n = 500 \si{\micro\metre} $ centred at $0.15n_\mathrm{cr}$ and $I_0 = 4.83\times10^{15}$\si{W/\centi\metre^2}.} \label{fig:downshift} \end{figure} Electron-trapping in the SRS-driven EPW causes a time-dependent non-linear frequency shift of the EPWs \cite{Morales1972,Kline2006}, and the growth of a sequence of beam-acoustic modes \cite{Yin2006}; this is the third signature of iSRS. Figure \ref{fig:downshift} shows the spatially resolved frequency spectra of EPWs (a,c) and EMWs (b,d) over $0-0.7\si{\pico\second}$ (a,b) and $1.5-1.9\si{\pico\second}$ (c,d). In panels (a,b), the signal maxima sit very close to the white dashed line, which represents the frequencies predicted by the SRS matching-conditions for the original Maxwellian plasma. This means that, at early times, the SRS EPWs and their associated back-scattered light waves are excited at frequencies close to those of the linear theory without trapping. They are slightly down-shifted from the analytical prediction, which suggests that trapping becomes important almost immediately in our simulations. At later times, Figure \ref{fig:downshift} (c) shows that the EPW spectrum is down-shifted in frequency at every location in the simulation domain, including to frequencies below the plasma frequency for the original Maxwellian plasma (orange solid line). This is evidence of a large trapped particle population removing energy from the wave, causing the frequency of the wave to decrease such that energy is conserved \cite{Morales1972}. We also note that in Figure \ref{fig:downshift} (d) the back-scattered light spectrum is up-shifted in frequency space, so as to maintain frequency matching. As well as the obvious up-shift of the electromagnetic spectrum, we can also see more general broadening as we move from Figure \ref{fig:downshift} (b) to (d).
This could be caused by waves from a higher density propagating to smaller $x$, so that at a particular location the spectrum covers waves from a range of densities. Further evidence for a large trapped particle population can be seen in the growth of a beam acoustic mode in the electrostatic $(\omega,k)$ spectrum. Figure \ref{fig:BAM} shows the electrostatic dispersion relation from a simulation; it is calculated by taking a 2D Fourier transform of the $E_x$ field over the entire spatial domain, and over two distinct time intervals. At early times, shown in Figure \ref{fig:BAM} (a), electron plasma waves are excited, from background noise, between the two white dashed curves. These represent the Bohm-Gross dispersion relations $\omega_\mathrm{EPW}^2 = \omega_{\mathrm{pe}}^2 + 3v_\mathrm{th}^2k_\mathrm{EPW}^2$ for the highest density in the domain (top line) and the lowest density (bottom line). According to fluid theory, SRS will grow where the Stokes branch, defined by $(\omega-\omega_0)^2 = \omega_{\mathrm{pe}}^2 + c^2(k-k_0)^2$, intersects with this dispersion curve. This fluid-SRS signal can be seen in Figure \ref{fig:BAM} (a). \begin{figure}[ht] \centering \includegraphics[width=1.02\columnwidth]{fig5_5a_5b.pdf} \caption{(Colour) (a) 2D FFT of $E_x$ over the period $ 0 - 0.8 \si{\pico\second}$. (b) 2D FFT of $E_x$ over the period $1.2 - 1.9 \si{\pico\second}$. The white dashed lines represent the analytical dispersion relations corresponding to the minimum (bottom line) and maximum (top line) plasma densities, assuming a Maxwellian electron distribution. The pink dashed line shows the Stokes line for down-shifted EM waves. Simulation parameters: $L_n = 500 \si{\micro\metre} $ centred at $0.15n_\mathrm{cr}$ and $I_0 = 4.83\times10^{15} \si{W/\centi\metre^2}$. } \label{fig:BAM} \end{figure} The right-hand panel of Figure \ref{fig:BAM} shows the EPW dispersion relation calculated from the simulation between $1.2 - 1.9 \si{\pico\second}$.
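The positions of the intersections of the Stokes branch with the Bohm-Gross curve can be estimated directly from the two dispersion relations. The sketch below (our own, evaluated for the same plasma parameters as Figure \ref{fig:BAM}, and for a Maxwellian plasma, i.e. without the trapping-induced frequency shift) finds both roots:

```python
# Sketch: intersections of the Stokes branch
#   (omega - omega_0)^2 = omega_pe^2 + c^2 (k - k_0)^2
# with the Bohm-Gross curve omega^2 = omega_pe^2 + 3 v_th^2 k^2, in units
# c = omega_0 = 1, for n_e = 0.15 n_cr and T_e = 4.5 keV.
import math

wpe2 = 0.15
vth2 = 4.5 / 511.0
k0 = math.sqrt(1.0 - wpe2)

def mismatch(k):
    w_bg = math.sqrt(wpe2 + 3.0 * vth2 * k**2)
    w_stokes = 1.0 - math.sqrt(wpe2 + (k - k0)**2)  # scattered EM wave branch
    return w_stokes - w_bg

def bisect(lo, hi, n=60):
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if mismatch(lo) * mismatch(mid) <= 0.0 else (mid, hi)
    return 0.5 * (lo + hi)

k_fwd = bisect(0.1, 0.9)   # forward-scatter EPW
k_bwd = bisect(0.9, 1.7)   # backscatter EPW
print(k_fwd / k0, k_bwd / k0)
```

The roots sit near $0.5k_0$ and $1.4k_0$, consistent with the forward-scatter signal and the (pre-shift) backscatter feature discussed for Figure \ref{fig:BAM}.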
Inspection of the distribution function in Figure \ref{fig:reflAndDist} shows that, at these times, the distribution function is modified from the initial Maxwellian and has a large flattened region, which acts as an effective beam population \cite{Yin2006}. According to linear theory, this change in the distribution function $f$ changes the kinetic dispersion relation for the electrostatic waves in the system, defined by: $\epsilon(\omega,k) = 1 - \frac{q_e^2}{\epsilon_0m_ek^2}\int\frac{\partial{f}/\partial{v}}{v-\omega/k} dv = 0$. This change in the dielectric properties of the plasma is realised in the $(\omega,k)$ spectrum as a continuum of beam acoustic modes \cite{Yin2006}; this is the large spectral feature in the right-hand panel of Figure \ref{fig:BAM} that sits strictly below the Bohm-Gross dispersion curves. These beam acoustic modes are frequency down-shifted, recovering the result of Morales and O'Neil's non-linear analysis\cite{Morales1972}. The maximum of the BAM signal at $k \sim 1.5k_0$ is the intersection of the BAM with the Stokes branch, which is the new location of SRS growth. We can also see in Figure \ref{fig:BAM} (b) a signal at $k\sim 0.5k_0$ which sits on the intersection of the Stokes branch with the range of EPWs satisfying the Bohm-Gross dispersion relations. This represents forward-scattered SRS EPWs, which have not undergone a significant frequency shift. For all the simulations presented in this paper, when driven above threshold, the power in forward SRS scattered light is of the order $P \sim 10^{-3} P_0$ or lower, and is therefore energetically unimportant. \section{Intensity threshold and hot electron scaling}\label{sec:paramScan} Using the method developed in Section \ref{sec:signatures} for locating the inflation threshold, and the analysis of electron trapping and down-shifted EPWs to ensure that the SRS observed is inflationary in origin, we investigate how iSRS depends on various plasma parameters relevant to shock-ignition.
Using the PIC simulation set-up as in Section \ref{sec:code&IC} (with the same simulation domains, plasma densities, and temperatures), we varied the plasma density scale-length across the range of values predicted for shock-ignition $(300\si{\micro\metre} - 1000\si{\micro\metre})$ \cite{Ribeyre_2009}. As well as varying the density scale-length, we also centred the density profiles at different values of density. Figure \ref{fig:paramScan} shows the result of this parameter scan. From Figure \ref{fig:paramScan} (a) we see that as the density scale-length of the SI coronal plasma decreases, the intensity threshold for iSRS increases. Vu \textit{et al.} (2007) \cite{Vu2007} derived a condition for the kinetic inflation threshold of SRS in a homogeneous plasma. They showed that the magnitude of the trapped electron potential energy in the EPW must be greater than or equal to the energy gained by a particle in one complete trapped orbit due to velocity diffusion in the background plasma fluctuations. This ensures that trapping remains for at least one bounce period. No such analytic threshold has been derived for an inhomogeneous plasma. However, when $L_n$ is smaller, the inhomogeneous gain is smaller and the amplitude reached by convective amplification of the SRS EPW is lower for the same intensity. Hence SRS in a shorter density scale-length plasma is less likely to generate EPWs with sufficient amplitude for electron trapping effects to trigger the transition to iSRS. \begin{figure}[!ht] \centering \includegraphics[width=0.99\columnwidth]{fig6_6a_6b.eps} \caption{ (Colour) (a) Relationship between incident laser intensity and the intensity of SRS scattered light for three different density scale-lengths, with plasma density profiles centred at $0.15n_\mathrm{cr}$. (b) Relationship between incident laser intensity and the intensity of SRS scattered light for three simulations with $L_n=500\si{\micro\metre}$ centred at three different densities.
Each coloured dashed line represents the prediction of the fluid model presented in Section \ref{sec:signatures} for the same parameters as the solid line of the same colour. } \label{fig:paramScan} \end{figure} Figure \ref{fig:paramScan} (b) shows the measured intensity of SRS scattered light in three sets of simulations with density profiles centred at $0.12,0.15,0.20 n_\mathrm{cr}$, chosen so that the density ranges do not overlap (see Table \ref{tab:densities}). As the central density decreases, the intensity threshold for iSRS increases. As with the dependence of the threshold on scale-length, this can be explained in terms of the Rosenbluth gain\cite{Rosenbluth1972}. For a fixed density scale-length, as the density decreases, the Rosenbluth gain exponent also decreases. This means that the fluid gain through convective SRS is reduced. Hence SRS at a low density is less likely than that at higher density to generate EPWs with sufficient amplitude for electron trapping effects to trigger the transition to iSRS. For the parameters of Figure \ref{fig:hotelectrons} (b) with $I_0 = 6.16\times 10^{15} \si{W / \centi \metre^2}$, the Rosenbluth gain exponent increases from $\sim1$ to $\sim25$ as the densities increase. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{fig7_7a_7b.pdf} \caption{(Colour) (a) Hot electron flux through the right boundary in three simulations with parameters: $I_0 = 2.98\times 10^{15} \si{W / \centi \metre^2}$; $n_\mathrm{mid}=0.15 n_\mathrm{cr}$; $L_n=300,500,1000\si{\micro\metre}$. (b) Hot electron flux through the right boundary in three simulations with parameters: $I_0 = 6.16\times 10^{15} \si{W / \centi \metre^2}$; $L_n=500\si{\micro\metre}$; $n_\mathrm{mid}=0.12,0.15,0.20 n_\mathrm{cr}$ respectively. Each distribution is normalised to its maximum value.
The smooth black line corresponds to the equivalent flux for a Maxwellian distribution with $T_e=4.5\si{\kilo \electronvolt}$, for comparison with the bulk plasma.} \label{fig:hotelectrons} \end{figure} As well as understanding how the density scale-length of the plasma and the density at which iSRS is driven affect the iSRS threshold, we would like to understand how these factors affect the hot electron population. We consider three simulations from Figure \ref{fig:paramScan} (a) with $I_0 = 2.98\times 10^{15} \si{W / \centi \metre^2}$, and three from Figure \ref{fig:paramScan} (b), with $I_0 = 6.16\times 10^{15} \si{W / \centi \metre^2}$. Figures \ref{fig:hotelectrons} (a,b) show the hot electron population in these simulations, in the form of histograms for the electron flux through the right boundary. Figure \ref{fig:hotelectrons} (a) shows the electron distribution function resulting from iSRS for three density scale-lengths. These are for a laser intensity above the onset threshold for iSRS but below an intensity which would lead to saturation. All results are for the same central density. The most prominent difference is that the peak electron energy increases with decreasing density scale-length. This results from the fact that the shorter density scale-length simulations access a higher peak density, since the simulation domain size is the same for all three cases. The SRS matching conditions for these higher densities result in a higher phase speed of driven EPWs. Solving the SRS matching conditions for these densities, we find that the hot-electron energies calculated from the phase velocities are between $35 - 50 \si{\kilo \electronvolt}$ for all three cases. Figure \ref{fig:hotelectrons} (b), however, shows a clear dependence of the hot electrons from iSRS on density. As the density increases, the maximum hot-electron kinetic energy also increases in line with the increase in SRS EPW phase velocities.
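The hot-electron energy scale quoted above can be cross-checked by solving the backscatter SRS matching conditions numerically. The sketch below is a simplification (fluid Bohm-Gross EPW, exact backscatter geometry, and illustrative parameters $T_e = 4.5$~keV at $n = 0.15\,n_\mathrm{cr}$, not the full set of peak densities from the simulations):

```python
import numpy as np

# Backscatter SRS matching in normalised units (omega in omega_0, k in omega_0/c):
#   omega_0 = omega_EPW + omega_s,  k_0 = k_EPW + k_s  (k_s < 0 for backscatter)
#   EM waves: omega^2 = omega_pe^2 + c^2 k^2
#   EPW:      omega^2 = omega_pe^2 + 3 v_th^2 k^2   (Bohm-Gross)
n_over_ncr = 0.15                       # illustrative density
te_kev, mec2_kev = 4.5, 511.0
wpe = np.sqrt(n_over_ncr)               # omega_pe / omega_0
k0 = np.sqrt(1.0 - wpe**2)              # laser wavenumber
vth2 = te_kev / mec2_kev                # (v_th / c)^2

def mismatch(w):
    """Residual of the EPW dispersion once the scattered EM wave is matched."""
    ks = np.sqrt((1.0 - w)**2 - wpe**2)  # |k_s| of the scattered light wave
    kepw = k0 + ks                       # backscatter: k_EPW = k_0 + |k_s|
    return w**2 - wpe**2 - 3.0 * vth2 * kepw**2

# Bisect for the EPW frequency between omega_pe and omega_0 - omega_pe
lo, hi = 1.001 * wpe, 1.0 - 1.001 * wpe
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if mismatch(lo) * mismatch(mid) <= 0:
        hi = mid
    else:
        lo = mid
w_epw = 0.5 * (lo + hi)
k_epw = k0 + np.sqrt((1.0 - w_epw)**2 - wpe**2)

vph = w_epw / k_epw                      # EPW phase velocity in units of c
ke_kev = (1.0 / np.sqrt(1.0 - vph**2) - 1.0) * mec2_kev
```

At $0.15\,n_\mathrm{cr}$ this gives $v_\phi \approx 0.34c$, i.e. a few tens of keV; repeating the calculation at the higher peak densities reached in the short scale-length runs pushes the energy upward, consistent with the $35$--$50$~keV range quoted above.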
Over the 2~ps of the simulations, the fractions of incident laser energy converted into hot-electrons with $\mathrm{energy} > 100\si{\kilo \electronvolt}$ are 0, $0.002$ and $0.15$ for the $0.12,0.15,0.20 n_\mathrm{cr}$ densities, respectively. For the density scale-lengths $L_n=300,500,1000\si{\micro\metre}$ the fractions of incident laser energy converted into $> 100\si{\kilo \electronvolt}$ hot-electrons are $0.005$, $0.001$ and $0$, respectively. \section{Conclusion}\label{sec:conclusion} Inflationary SRS has been detected in PIC simulations of inhomogeneous plasmas with parameters relevant to the shock-ignition model of ICF. This study demonstrates an iSRS threshold $I_\mathrm{threshold} \lesssim 5\times 10^{15} \si{W/\centi\metre^2}$ across the whole range of parameters tested, and that the location of this threshold depends on the density scale-length $L_n$. For the case with $L_n=500 \si{\micro\metre}$ and $I_0 =4.83\times10^{15} \si{W/\centi\metre^2}$, significant iSRS would occur at $0.15 n_{\mathrm{cr}}$, generating hot-electrons with mostly $< 100 \si{\kilo \electronvolt}$ energies and depleting the laser drive available at higher densities. This is potentially beneficial to shock-ignition in that these electrons are likely to enhance the ignitor shock and prevent significant SRS at higher densities, where SRS is potentially absolute at $0.25 n_\mathrm{cr}$. SRS at higher densities is likely to generate electron distributions with a higher percentage of $> 100 \si{\kilo \electronvolt}$ electrons than that from iSRS at lower densities. These results suggest that a potential route to use iSRS to the advantage of shock-ignition, assuming all SRS cannot be removed by other means, would be for the shock ignitor pulse to have the largest possible amplitude. This would ensure significant iSRS at lower densities and generate only hot-electrons with energies below $100 \si{\kilo \electronvolt}$.
This in turn would pump-deplete the laser, reducing SRS at higher densities which could generate hot-electrons with energy above $100 \si{\kilo \electronvolt}$. These conclusions are, however, only valid for the restricted 1D, collisionless simulations presented in this paper, and more detailed simulations, as outlined below, would be needed to fully assess the hot-electron distribution and its impact on SI schemes. The simulations presented in this paper highlight the importance of a thorough investigation of iSRS for any shock-ignition plans. These results are, however, a first study of the plasma parameters where iSRS may occur. A full theoretical investigation of the potential impact of iSRS on shock-ignition will require significantly larger-scale simulations. Of particular importance are multi-dimensional effects and laser speckle profiles. These would allow the competition between SRS and TPD as sources of hot-electrons to be assessed in two and three dimensions. The transverse non-uniformity associated with multi-dimensional effects is likely to affect iSRS through trapped electron side losses and a broader spectrum of EPWs resulting from side-scatter and TPD. Furthermore, the auto-resonance responsible for iSRS in these simulations may not be possible when a full speckle profile is included, since the extension of the resonance region may take the resonant waves outside of an individual speckle. The use of broadband laser systems to mitigate LPI will also need to be assessed in the kinetic regime of iSRS. All of these refinements to iSRS simulations will require considerably more computing resources but are nonetheless needed for a comprehensive treatment of LPI relevant to shock-ignition. \begin{acknowledgments} We are grateful to the EPOCH developer team (K. Bennett, C. S. Brady and H. Ratcliffe) for their adaptations to the code in preparation for this simulation campaign.
The EPOCH code used in this work was in part funded by the UK EPSRC grants EP/G054950/1, EP/G056803/1, EP/G055165/1, EP/M022463/1 and EP/P02212X/1. We also acknowledge the use of Athena at HPC Midlands+, which was funded by the EPSRC on grant EP/P020232/1, in this research, as part of the HPC Midlands+ consortium. This work has been carried out within the framework of the EUROfusion Enabling Research Project: ENR-IFE19.CEA-01 ``Study of Direct Drive and Shock Ignition for IFE: Theory, Simulations, Experiments, Diagnostics development'' and has received funding from the Euratom research and training programme. The views and opinions expressed herein do not necessarily reflect those of the European Commission. \end{acknowledgments} \section*{Data availability} The data that support the findings of this study were generated using the EPOCH PIC code and are available from the corresponding author upon reasonable request. \bibliographystyle{apsrev4-2}
\section{\label{Intro} Introduction} The discoveries of high-$T_{\rm c}$ superconductivity in iron pnictides and chalcogenides motivated many efforts to identify the mechanism\cite{Rotter2008a, Chen2008, Sasmal2008, Wu2008, Jeevan2008, Sefat2008, Torikachvili2008, Ishida2009, Alireza2009, Johnston2010, Canfield2010, Mandrus2010} and the relationships of these materials to the high-$T_{\rm c}$ cuprates.\cite{Johnston2010, Canfield2010, Mandrus2010, Johnston1997, Damascelli2003, Lee2006} In the latter compounds the copper has a Cu$^{+2}$ $3d^9$ electronic configuration and carries a local magnetic moment with spin $S = 1/2$ which is retained even in the superconducting regime of the phase diagram. Here we are concerned with the so-called 122-type subclass of the iron arsenide superconductors with the body-centered tetragonal (bct) ${\rm ThCr_2Si_2}$ structure with space group $I4/mmm$. If a 122-type Cu-arsenide compound having localized $S=1/2$ Cu$^{+2}$ moments could be synthesized, such a compound would bridge the gap between the iron arsenide and cuprate families of high-$T_c$ superconductors. 
With this in mind, we previously reported the physical properties of ${\rm SrCu_2As_2}$ and ${\rm SrCu_2Sb_2}$ that instead turned out to be nonmagnetic $sp$-band metals,\cite{Anand2012} consistent with the theoretical prediction for SrCu$_2$As$_2$ by Singh.\cite{Singh2009} His electronic structure calculations for ${\rm BaCu_2As_2}$ and ${\rm SrCu_2As_2}$ indicated that the Cu 3$d$ bands in these compounds are narrow and lie about 3~eV below the Fermi energy $E_{\rm F}$ and therefore the Cu~3$d$ orbitals give very small contributions to the density of states at $E_{\rm F}$.\cite{Singh2009} From a systematic study of the interlayer $X$--$X$ distance $d_{X-X}$ ($A$ = Ca, Sr, Ba; $X$ = P, As) and the $c/a$ ratio for $AM_2X_2$ $3d$ transition metal $M$ compounds with the ${\rm ThCr_2Si_2}$ structure we concluded that ${\rm SrCu_2As_2}$ crystallizes in a collapsed tetragonal (cT) structure.\cite{Anand2012} Recent activity in the Fe-based superconductor field has focused on the remarkable physical properties of the class of $A_{1-x}$Fe$_{2-y}$Se$_2$ ($A$ = alkali metal) compounds that are similar to the layered ${\rm ThCr_2Si_2}$-type materials except with substantial numbers of vacancies on the $A$ and Fe sublattices that can become spatially ordered.\cite{Guo2010,Reviews} Depending on the Fe vacancy concentration, superconductivity at temperatures up to $\sim 30$~K and/or large-moment antiferromagnetism with very high N\'eel temperatures up to $\sim 600$~K can be stabilized. Thus, the influence of transition metal site vacancies on the physical properties of such 122-type compounds is of great current interest. 
Here we report powder x-ray diffraction, magnetic susceptibility $\chi$, isothermal magnetization $M$, specific heat $C_{\rm p}$, and $ab$-plane electrical resistivity $\rho$ measurements as a function of temperature $T$ and magnetic field $H$ on single crystals of ${\rm CaCu_{1.7}As_2}$ which was previously reported by Pilchowski and Mewis to form in the bct ${\rm ThCr_2Si_2}$ structure with a large ($\sim 15$\%) concentration of vacancies on the Cu site.\cite{Pilchowski1990} To our knowledge, there are no previous studies of the physical properties of this compound including possible temperature-induced Cu vacancy ordering transitions. Like ${\rm SrCu_2As_2}$, we find that ${\rm CaCu_{1.7}As_2}$ exhibits an anisotropic $T$-independent diamagnetic behavior indicating that the Cu atoms have a nonmagnetic $3d^{10}$ Cu$^{+1}$ electronic configuration. However, the $\rho(T)$ data for ${\rm CaCu_{1.7}As_2}$ exhibit a phase transition of unknown origin at a transition temperature $T_{\rm t} = 54$--56~K, depending on the crystal, as revealed by a sharp well-defined decrease on cooling below $T_{\rm t}$. In view of the large ($\sim 15$\%) disordered vacancy concentration on the Cu sites found by Pilchowski and Mewis at room temperature and confirmed by us, this phase transition may reflect the occurrence of Cu vacancy ordering at $T_{\rm t}$. This possibility could be checked via \mbox{low-$T$} x-ray and/or neutron diffraction measurements. \section{\label{ExpDetails} Experimental Details} Single crystals of ${\rm CaCu_{1.7}As_2}$ were grown using prereacted CuAs self-flux starting with the high purity elements Ca (99.98\%), Cu (99.999\%) and As (99.99999\%) from Alfa Aesar. Ca and CuAs were taken in a 1:4 molar ratio and placed in an alumina crucible which was then sealed inside an evacuated quartz tube. 
The sample was heated to 1100~$^\circ$C at a rate of 60~$^\circ$C/h, held there for 12~h and then cooled to 800~$^\circ$C at a rate of 2.5~$^\circ$C/h at which time the flux was decanted using a centrifuge. Shiny plate-like crystals of typical size $2.5 \times 2 \times 0.3$~mm$^3$ were obtained using this procedure. All crystals for which measurements are reported here were obtained from the same growth batch. The structure of the crystals was determined by powder x-ray diffraction (XRD) using Cu K$_\alpha$ radiation on a Rigaku Geigerflex x-ray diffractometer. The chemical composition was determined by wavelength dispersive x-ray spectroscopy (WDS) analysis using a JEOL JXA-8200 electron probe microanalyzer. The magnetization $M$ measurements were carried out using a Quantum Design, Inc.\ superconducting quantum interference device-based magnetic properties measurement system in applied magnetic fields $H$ up to 5.5~T\@. A crystal was mounted on a 0.5~mm diameter horizontal rotatable high-purity quartz rod that was inserted in holes in a clear plastic straw attached to the sample hang-down rod. The sample was attached to the quartz rod using a small amount of GE~7031 varnish. The contribution to the measured magnetization due to the quartz rod and varnish was measured separately and corrected for. The $C_{\rm p}(T)$ and $\rho(T,H)$ measurements were carried out using a Quantum Design, Inc.\ physical properties measurement system using the heat capacity and ac transport options, respectively, at fields up to 8~T\@. The subscript ``p'' in $C_{\rm p}$ refers to measurements at constant pressure. The $C_{\rm p}$ was measured using a relaxation method. The $\rho$ was measured in the $ab$-plane using a standard four-probe ac technique with 25~$\mu$m diameter Pt leads attached to the sample with EPO-TEK P1011 silver epoxy that was cured in air at 110~$^\circ$C for 1~h. 
We did not cut the as-grown rectangular-shaped crystals \#1 and \#2 that we used for the resistivity measurements because such cutting can potentially introduce microcracks in the crystals and/or exfoliation of the Cu$_{1.7}$As layers. The accuracy of the measurements due to uncertainties in the geometric factor is estimated to be $\sim 10$\%. \section{\label{CaCu2As2} Results and Discussion} \subsection{Crystallography} \begin{figure} \includegraphics[width=3in]{Fig1.eps} \caption{(Color online) Powder x-ray diffraction pattern of ${\rm CaCu_{1.7}As_2}$ recorded at room temperature. The solid line through the experimental points is the Rietveld refinement profile calculated for the body-centered tetragonal ThCr$_2$Si$_2$-type structure (space group $I4/mmm$). The short vertical bars mark the fitted Bragg peak positions. The lowermost curve represents the difference between the experimental and calculated intensities. The unindexed peaks marked with stars correspond to peaks from small amounts of flux that could not be removed from the crystals before crushing them for the XRD measurements.} \label{fig:CaCu2As2_XRD} \end{figure} \begin{table} \caption{\label{tab:XRD1} Crystallographic and Rietveld refinement parameters obtained from powder XRD data of crushed ${\rm CaCu_{1.7}As_2}$ crystals. 
Also included are data from Ref.~\onlinecite{Pilchowski1990}.} \begin{ruledtabular} \begin{tabular}{lll} Structure & ${\rm ThCr_2Si_2}$-type\\ Space group & $I4/mmm$ \\ Formula units/unit cell & $Z = 2$\\ \underline{Lattice parameters (RT)} & & \underline{Ref.~\onlinecite{Pilchowski1990}}\\ \hspace{0.8 cm}$a$ (\AA) & 4.1148(2) & 4.129(1)\\ \hspace{0.8 cm}$c$ (\AA) & 10.1914(4) & 10.251(1)\\ \hspace{0.8 cm}$c/a$ & 2.4768(2) & 2.482(2)\\ \hspace{0.8 cm}$V_{\rm cell}$ (\AA$^3$) & 172.55(1) & 174.8(1)\\ \underline{Refinement quality} \\ \hspace{0.8 cm} $\chi^2$ & 5.63 \\ \hspace{0.8 cm} $R_{\rm p}$ (\%) & 2.98 \\ \hspace{0.8 cm} $R_{\rm wp}$ (\%) & 4.44 \\ \end{tabular} \end{ruledtabular} \end{table} \begin{table} \caption{\label{tab:XRD2} Atomic coordinates obtained from the Rietveld refinements of powder XRD data of crushed ${\rm CaCu_{1.7}As_2}$ crystals. Also included are data from Ref.~\onlinecite{Pilchowski1990}.} \begin{ruledtabular} \begin{tabular}{ccccccc} Atom & Wyckoff & $x$ & $y$ & $z$ & Fractional & Ref.\\ & position & & & & occupancy\\ & & & & & (\%)\\ \hline Ca & 2a & 0 & 0 & 0 & 100 & This work\\ Ca & 2a & 0 & 0 & 0 & 100 & \onlinecite{Pilchowski1990}\\ Cu & 4d & 0 & 1/2 & 1/4 & 82.5(5) & This work\\ Cu & 4d & 0 & 1/2 & 1/4 & 87.5(8) & \onlinecite{Pilchowski1990}\\ As & 4e & 0 & 0 & 0.3779(2) & 100 & This work\\ As & 4e & 0 & 0 & 0.3799(2) & 100 & \onlinecite{Pilchowski1990} \end{tabular} \end{ruledtabular} \end{table} Powder XRD data were collected on crushed ${\rm CaCu_{1.7}As_2}$ single crystals at room temperature (RT) and analyzed by Rietveld refinement using the {\tt FullProf} software. \cite{Rodriguez1993} Figure~\ref{fig:CaCu2As2_XRD} shows the XRD data and the \mbox{Rietveld} fit profile. 
The refinement confirmed the single phase nature of the crystals and the ${\rm ThCr_2Si_2}$-type body-centered tetragonal structure with space group $I4/mmm$ found previously by Pilchowski and Mewis from a single-crystal structure refinement.\cite{Pilchowski1990} Our crystallographic and refinement parameters are listed in Tables~\ref{tab:XRD1} and \ref{tab:XRD2} and compared with those of Pilchowski and Mewis.\cite{Pilchowski1990} The lattice parameters $a$ and $c$ and the $z$-coordinate of the As atoms $z_{\rm As}$ that we obtained are in good agreement with the literature values.\cite{Pilchowski1990} While refining the XRD powder pattern for ${\rm CaCu_{1.7}As_2}$ we noticed that the lattice parameters and the As $c$-axis position parameter $z_{\rm As}$ were insensitive to changes in the thermal parameters $B$ within the error bars, so we kept $B$ fixed to $B \equiv 0$. On the other hand, the refinement quality and the calculated line intensities were very sensitive to the fractional occupancy of the 4d sites by Cu. Therefore we allowed the occupancy of this site by Cu to vary during the refinement, while the occupancies of the Ca and As positions were kept fixed at the stoichiometric values of unity. As shown in Table~\ref{tab:XRD2}, the refined values of the Cu site occupancy obtained both by us and by Pilchowski and Mewis show a large vacancy concentration on the Cu sites of $\approx 15$\%, corresponding to an approximate composition of ${\rm CaCu_{1.7}As_2}$. However, there are no indications of superstructure reflections in the XRD patterns that would indicate ordering of the Cu vacancies at room temperature and Pilchowski and Mewis also did not report such reflections. Thus we assume that the Cu vacancies are randomly distributed on the Cu sites at room temperature. 
Our WDS analysis at about ten points on the $ab$ plane of a ${\rm CaCu_{1.7}As_2}$ crystal gave the average atomic ratios as Ca\,:\,Cu\,:\,As = 21.4(2)\,:\,36.5(3)\,:\,42.1(2), corresponding to the stoichiometry ${\rm Ca_{1.02(2)}Cu_{1.73(2)}As_2}$ assuming the occupancy of the As site to be 100\%. This result confirms complete occupancy of the Ca site and again indicates a large vacancy concentration on the Cu sites. For the molar heat capacity, magnetization and magnetic susceptibility data presented below, a ``mole'' is defined as a mole of ${\rm CaCu_{1.7}As_2}$ formula units (f.u.). From the crystal data in Tables~\ref{tab:XRD1} and \ref{tab:XRD2} we obtain the interlayer As--As distance as $d_{\rm As-As} = (1-2z_{\rm As})c = 2.49$~\AA. This value of $d_{\rm As-As}$ for ${\rm CaCu_{1.7}As_2}$ is close to the covalent (single) bond distance 2.38~\AA\ for As.\cite{Cordero2008} Furthermore, the values of $c/a$ and $d_{\rm As-As}$ fall in the respective ranges for the collapsed tetragonal structure compounds shown in Fig.~22 of Ref.~\onlinecite{Anand2012}. This shows that like ${\rm SrCu_2As_2}$, ${\rm CaCu_{1.7}As_2}$ also has a cT structure. A consequence of the formation of the cT structure is an unusual oxidation state of As$^{-2} \equiv $ [As--As]$^{-4}$, which together with Ca in the Ca$^{+2}$ oxidation state, indicates that the Cu in ${\rm CaCu_{1.7}As_2}$ has an oxidation state of $\approx +1.2$. However, the nonmagnetic nature of this compound deduced from our magnetic measurements presented below in Sec.~\ref{Sec:CaCu2As2_ChiMH} suggests instead a filled Cu $3d$ shell, a Cu electronic configuration $3d^{10}$, and formal oxidation states Cu$^{+1}$ and As$^{-1.85}$. The latter value suggests the presence of hole conduction on the As sublattice.
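The structural and valence arithmetic in this section is straightforward to verify. The following sketch hard-codes the refined parameters from Tables~\ref{tab:XRD1} and \ref{tab:XRD2} and reproduces the quoted numbers:

```python
# Refined lattice parameters (Angstrom) and As z-coordinate from Tables I/II
a, c, z_as = 4.1148, 10.1914, 0.3779

v_cell = a**2 * c                   # tetragonal cell volume, ~172.55 A^3
d_as_as = (1 - 2 * z_as) * c        # interlayer As-As distance, ~2.49 A

# Formal charge balance in the collapsed tetragonal structure:
# Ca^{+2} + 1.7 Cu^{x} + 2 As^{-2} = 0  ->  x ~ +1.2
x_cu = (2 * 2 - 2) / 1.7

# Alternatively, with nonmagnetic Cu^{+1} (3d^10), the As oxidation state is
as_ox = -(2 + 1.7 * 1) / 2          # -1.85
```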
\subsection{\label{Sec:CaCu2As2_HC} Heat Capacity} \begin{figure} \includegraphics[width=3in]{Fig2.eps} \caption{(Color online) Heat capacity $C_{\rm p}$ of a ${\rm CaCu_{1.7}As_2}$ single crystal as a function of temperature $T$ measured in zero magnetic field. The red solid curve is the fitted sum of the contributions from the Debye lattice heat capacity $C_{\rm V\,Debye}(T)$ and predetermined electronic heat capacity $\gamma T$ according to Eq.~(\ref{eq:Debye_HC-fit}). Inset: $C_{\rm p}/T$ versus $T^2$ for $T \leq 4.5$~K\@. The straight red line is a fit of the data by $C_{\rm p}/T = \gamma + \beta T^2$ for $1.8~{\rm K} \leq T \leq 3.5$~K\@.} \label{fig:HC_CaCu2As2} \end{figure} Figure~\ref{fig:HC_CaCu2As2} shows $C_{\rm p}$ versus $T$ of a ${\rm CaCu_{1.7}As_2}$ crystal from 1.8 to 300~K\@. No obvious anomaly that might be associated with the occurrence of a phase transition is observed over this $T$ range. The value of $C_{\rm p}(300$~K) is $\approx 116$~J/mol\,K, which is close to the classical Dulong-Petit prediction of the lattice heat capacity $C_{\rm V} = 3nR = 14.1R$ = 117.2~J/mol\,K at constant volume, where $n$ is the number of atoms per formula unit ($n=4.7$ here) and $R$ is the molar gas constant. \cite{Kittel2005, Gopal1966} A conventional linear fit of $C_{\rm p}(T)/T = \gamma + \beta T^2$ in the temperature range $1.8~{\rm K} \leq T\leq 3.5$~K is shown in the inset of Fig.~\ref{fig:HC_CaCu2As2} and gives the electronic Sommerfeld specific heat coefficient $\gamma=2.0(2)$~mJ/mol\,K$^2$ and the lattice heat capacity coefficient $\beta= 0.33(2)$~mJ/mol\,K$^4$. The density of states at the Fermi energy ${\cal D}(E_{\rm F})$ for both spin directions is estimated from $\gamma$ using the single-band relation \cite{Kittel2005} $\gamma = (\pi^2 k_{\rm B}^2/3) {\cal D}(E_{\rm F})$, from which we obtain ${\cal D}(E_{\rm F}) = 0.85(9)$~states/eV\,f.u.\ for both spin directions.
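These extractions are easy to reproduce. The sketch below recomputes the Dulong-Petit limit, the density of states from $\gamma$, and the Debye temperature from $\beta$ via the standard free-electron and Debye relations, using CODATA constants:

```python
import numpy as np

kB = 1.380649e-23                # Boltzmann constant, J/K
NA = 6.02214076e23               # Avogadro number, 1/mol
eV = 1.602176634e-19             # J per eV
R = kB * NA                      # molar gas constant
n_atoms = 4.7                    # atoms per CaCu_{1.7}As_2 formula unit

# Classical Dulong-Petit lattice limit, J/mol K
c_dp = 3 * n_atoms * R           # ~117.2

# Density of states (both spins) from gamma = (pi^2/3) kB^2 D(E_F) per f.u.
gamma = 2.0e-3                   # J/mol K^2, fitted Sommerfeld coefficient
dos = 3 * gamma / (np.pi**2 * kB**2 * NA) * eV   # states/(eV f.u.), ~0.85

# Debye temperature from the T^3 lattice coefficient beta
beta = 0.33e-3                   # J/mol K^4
theta_d = (12 * np.pi**4 * n_atoms * R / (5 * beta))**(1 / 3)   # ~303 K
```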
The Debye temperature $\Theta_{\rm D}$ is estimated from $\beta$ using the relation \cite{Kittel2005} $\Theta_{\rm D} = (12 \pi^{4} n R/5 \beta)^{1/3}$, giving $\Theta_{\rm D}= 303(6)$~K\@. The entire $C_{\rm p}(T)$ data set from 1.8 to 300~K was fitted by \begin{equation} C_{\rm p}(T) = \gamma T + n C_{\rm{V\,Debye}}(T), \label{eq:Debye_HC-fit} \end{equation} where $\gamma$ was fixed to the value $\gamma = 2.0~{\rm mJ/mol\,K^2}$ obtained above and the Debye heat capacity $C_{\rm{V\,Debye}}(T)$ describes the heat capacity due to acoustic phonons at constant volume~V and is given by \cite{Gopal1966} \begin{equation} C_{\rm{V\,Debye}}(T) = 9 R \left( \frac{T}{\Theta_{\rm{D}}} \right)^3 {\int_0^{\Theta_{\rm{D}}/T} \frac{x^4 e^x}{(e^x-1)^2}\,dx}. \label{eq:Debye_HC} \end{equation} The fit is shown as the solid red curve in Fig.~\ref{fig:HC_CaCu2As2}. In the fit, we used our analytic Pad\'e approximant function given in Ref.~\onlinecite{Ryan2012} that accurately represents $C_{\rm{V\,Debye}}(T)$ and obtained $\Theta_{\rm D} = 265(1)$~K, which is somewhat smaller than the value $\Theta_{\rm D} = 303(6)$~K obtained from fitting the \mbox{low-$T$} $C_{\rm p}(T)$ data above. The difference between these two values reflects the $T$ dependence of $\Theta_{\rm D}$.\cite{Gopal1966, Ryan2012} \subsection{\label{Sec:CaCu2As2_ChiMH} Magnetization and Magnetic Susceptibility} \begin{figure} \includegraphics[width=3in]{Fig3.eps} \caption{(Color online) Zero-field-cooled magnetic susceptibility $\chi$ of a ${\rm CaCu_{1.7}As_2}$ single crystal versus temperature $T$ in the temperature range 1.8--350~K measured in a magnetic field $H = 3.0$~T applied along the $c$-axis ($\chi_c,\ H \parallel {\bf c}$) and in the $ab$-plane ($\chi_{ab},\ H \perp {\bf c}$) (solid symbols). The open symbols represent the intrinsic susceptibility of ${\rm CaCu_{1.7}As_2}$ after correcting for the ferromagnetic and paramagnetic impurity contributions as described in the Appendix. 
The filled blue stars represent the intrinsic $\chi$ obtained from fitting $M(H)$ isotherm data by Eq.~(\ref{eq:MH_fit}) and are more reliable.} \label{fig:MT_CaCu2As2} \end{figure} Figure~\ref{fig:MT_CaCu2As2} shows the zero-field-cooled magnetic susceptibility $\chi \equiv M/H$ of a ${\rm CaCu_{1.7}As_2}$ crystal as a function of $T$ from 1.8 to 350~K for $H = 3.0$~T applied along the $c$-axis ($\chi_c,\ H \parallel c$) and in the $ab$-plane ($\chi_{ab},\ H \perp c$). The $\chi(T)$ data for both directions of $H$ are diamagnetic and nearly independent of $T$\@. The $\chi_{c}$ is significantly more negative than $\chi_{ab}$. The same type of $\chi$ anisotropy was previously observed for ${\rm SrCu_2As_2}$,\cite{Anand2012} BaFe$_2$As$_2$,\cite{Wang2009} SrFe$_2$As$_2$,\cite{Yan2008} and other doped and undoped FeAs-based compounds.\cite{Johnston2010} Curie-like upturns occur in $\chi(T)$ at low~$T$ in Fig.~\ref{fig:MT_CaCu2As2} that are likely due at least in part to the presence of small amounts of saturable paramagnetic (PM) impurities in the ${\rm CaCu_{1.7}As_2}$ crystal. Our analysis of $M(H)$ isotherms in the Appendix allows us to approximately correct for such contributions. The intrinsic susceptibilities after corrections for paramagnetic and ferromagnetic (FM) impurity contributions are shown as open symbols in Fig.~\ref{fig:MT_CaCu2As2}. The corrected susceptibilities still show small residual upturns at low~$T$ which, in view of the intrinsic $T$-independent susceptibilities obtained at eight temperatures from analysis of the $M(H)$ isotherms (shown in Fig.~\ref{fig:MT_CaCu2As2} as filled blue stars), are due to inaccuracies in correcting for the impurity contributions to the $M/H$ versus $T$ data. The intrinsic $\chi$ consists of different contributions given by \begin{equation} \chi=\chi_{\rm {core}}+\chi_{\rm {VV}}+\chi_{\rm {L}} + \chi_{\rm {P}}. \label{eq:chi} \end{equation} The first three terms are orbital susceptibilities. 
$\chi_{\rm {core}}$ is the diamagnetic core susceptibility, $\chi_{\rm {VV}}$ is the paramagnetic Van Vleck susceptibility and $\chi_{\rm {L}}$ the diamagnetic Landau susceptibility of the conduction electrons. The last term $\chi_{\rm {P}}$ is the paramagnetic Pauli spin susceptibility. The $\chi_{\rm {core}}$ = $-1.53 \times 10^{-4}$~cm$^3$/mol is estimated using the atomic diamagnetic susceptibilities. \cite{Mendelsohn1970} $\chi_{\rm {P}}$ is estimated from $\chi_{\rm {P}} = \mu_{\rm B}^2 {\cal D}(E_{\rm F})$ (Ref.~\onlinecite{Ashcroft1976}), giving $\chi_{\rm {P}} = 2.7(3) \times 10^{-5}$~cm$^3$/mol using ${\cal D}(E_{\rm F}) = 0.85(9)$~states/eV\,f.u.\ for both spin directions obtained above in Sec.~\ref{Sec:CaCu2As2_HC}. The $\chi_{\rm {L}}$ is obtained from $\chi_{\rm {L}} = - \frac{1}{3} \left( \frac {m_{\rm e}}{m^*} \right)^2 \chi_{\rm {P}}$,\cite{Ashcroft1976, Elliott1998} which gives $\chi_{\rm {L}} = -0.9 \times 10^{-5}$~cm$^3$/mol assuming that the effective mass $m^*$ equals the free electron mass $m_{\rm e}$. The angle and temperature average of the anisotropic $\chi$ in Fig.~\ref{fig:MT_CaCu2As2} over the $T$ range 30 to 350~K is $\langle\chi\rangle = [2 \langle\chi_{ab}\rangle + \langle\chi_{c}\rangle]/3 = -5.3\times 10^{-5}$~cm$^3$/mol. We can now estimate $\langle\chi_{\rm {VV}}\rangle$ using the above estimated values of $\chi_{\rm {core}}$, $\chi_{\rm {P}}$ and $\chi_{\rm {L}}$ yielding the powder-averaged $\langle\chi_{\rm {VV}}\rangle = 8.2 \times 10^{-5}$~cm$^3$/mol from Eq.~(\ref{eq:chi}), which is a physically realistic value. The $T$-independent anisotropic Van Vleck contributions are $\chi_{\rm {VV}}^c = 6.3 \times 10^{-5}$~cm$^3$/mol and $\chi_{\rm {VV}}^{ab} = 9.1 \times 10^{-5}$~cm$^3$/mol for $\ H \parallel c$ and $\ H \perp c$, respectively. 
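The decomposition of Eq.~(\ref{eq:chi}) can be checked numerically. The sketch below works in CGS units (cm$^3$/mol) with the values quoted above; $\chi_{\rm VV}$ is obtained as the remainder after subtracting the independently estimated terms from the measured average:

```python
# chi = chi_core + chi_VV + chi_L + chi_P, all in cm^3/mol (CGS)
mu_b = 9.2740e-21              # Bohr magneton, erg/G
erg_per_ev = 1.6022e-12        # erg per eV
n_a = 6.0221e23                # Avogadro number

dos = 0.85                     # states/(eV f.u.), from the Sommerfeld gamma
chi_p = mu_b**2 * (dos / erg_per_ev) * n_a   # Pauli term, ~2.7e-5
chi_l = -chi_p / 3.0           # Landau term, assuming m* = m_e
chi_core = -1.53e-4            # atomic core diamagnetism
chi_avg = -5.3e-5              # powder- and T-averaged measured susceptibility

chi_vv = chi_avg - chi_core - chi_p - chi_l  # Van Vleck remainder, ~8.2e-5
```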
\subsection{\label{Sec:CaCu2As2ChiMH} Electrical Resistivity} \begin{figure} \includegraphics[width=3in]{Fig4a.eps}\vspace{0.1in} \includegraphics[width=3in]{Fig4b.eps} \caption{(Color online) (a) In-plane electrical resistivity $\rho$ of ${\rm CaCu_{1.7}As_2}$ single crystal \#1 as a function of temperature $T$ measured in zero magnetic field (open circles) showing a transition at $T_{\rm t} = 54$~K\@. A fit by the Bloch-Gr\"uneisen model from 57 to 300~K is shown by the solid red curve, and an extrapolation of the fit to $T=0$ is shown as the dashed red curve. (b) Expanded plot of $\rho$ versus $T$ for heating and cooling cycles through $T_{\rm t}$ at $T<100$~K\@.} \label{fig:rho_CaCu2As2} \end{figure} \begin{figure} \includegraphics[width=3in]{Fig5.eps} \caption{ In-plane electrical resistivity $\rho$ of ${\rm CaCu_{1.7}As_2}$ single crystal \#2 versus temperature $T$ in zero magnetic field showing the same shape of transition at $T_{\rm t} = 56$~K as seen for crystal~\#1 at 54~K in Fig.~\ref{fig:rho_CaCu2As2}. } \label{fig:rho_CaCu2As2No2} \end{figure} Figure~\ref{fig:rho_CaCu2As2} shows $\rho(T)$ of ${\rm CaCu_{1.7}As_2}$ crystal~\#1 from 1.8 to 300~K in $H=0$. The residual resistivity at $T= 1.8$~K is $\rho_0 = 26.7(1)\,\mu\Omega\,{\rm cm}$ and the residual resistivity ratio is ${\rm RRR} \equiv \rho(300\,{\rm K})/\rho_0 \approx 2.5$. The $\rho(T)$ data exhibit metallic behavior with an almost linear $T$ dependence of $\rho$ above 55~K\@. A sharp decrease is observed in $\rho(T)$ on cooling below a transition temperature $T_{\rm t} = 54$~K\@. This behavior is reproduced without hysteresis upon heating and cooling through $T_{\rm t}$ as shown in Fig.~\ref{fig:rho_CaCu2As2}(b), suggesting a second-order transition. The transition anomaly at 54~K was reproduced in a $\rho(T)$ measurement on another crystal~\#2 for which we found $T_{\rm t} = 56$~K, as shown in Fig.~\ref{fig:rho_CaCu2As2No2}. 
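The Bloch-Gr\"uneisen fit referred to in Fig.~\ref{fig:rho_CaCu2As2} can be sketched as follows. One common convention for the formula is used, with illustrative parameters rather than the authors' fitted values; the point of the check is that for $T \gg \Theta_{\rm R}$ the phonon term becomes linear in $T$, matching the almost linear $\rho(T)$ seen above 55~K.

```python
import numpy as np

def rho_bg(T, rho0, A, theta_r, npts=4000):
    """Bloch-Grueneisen resistivity by simple numerical quadrature
    (a common convention; parameters here are illustrative only)."""
    x = np.linspace(1e-6, theta_r / T, npts)
    integrand = x**5 / ((np.exp(x) - 1.0) * (1.0 - np.exp(-x)))
    return rho0 + A * (T / theta_r)**5 * np.sum(integrand) * (x[1] - x[0])

rho0, A, theta_r = 26.7, 300.0, 265.0   # micro-ohm cm, micro-ohm cm, K (assumed)
r300 = rho_bg(300.0, rho0, A, theta_r)
r600 = rho_bg(600.0, rho0, A, theta_r)
ratio = (r600 - rho0) / (r300 - rho0)   # ~2: near-linear rho(T) for T >> theta_R
```

At $T \ll \Theta_{\rm R}$ the phonon term collapses toward $\rho_0$, which is why the extrapolated dashed curve in Fig.~\ref{fig:rho_CaCu2As2}(a) flattens at low temperature.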
As noted in previous sections, no evidence for the transition was observed in our $\chi(T)$ measurements, suggesting that the transition is not magnetic in nature. The transition may be associated with spatial ordering of the Cu vacancies discussed above. \begin{figure} \includegraphics[width=3in]{Fig6a.eps}\vspace{0.1in} \includegraphics[width=3in]{Fig6b.eps} \caption{(Color online) (a) In-plane electrical resistivity $\rho$ of a ${\rm CaCu_{1.7}As_2}$ single crystal versus temperature $T$ measured in the indicated magnetic fields $H$. The $\rho(T)$ data in this figure were measured after remounting the leads, resulting in a slight change in the absolute values of $\rho$ between this figure and Fig.~\ref{fig:rho_CaCu2As2}. (b) Magnetoresistance $[\rho(H)-\rho(0)]/\rho(0)$ versus $H$ at $T=1.8$ and 10~K\@.} \label{fig:rho_CaCu2As2_H} \end{figure} We also measured $\rho(T)$ at $T < 100$~K in high magnetic fields $H\leq 8.0$~T as shown in Fig.~\ref{fig:rho_CaCu2As2_H}(a). The transition temperature $T_{\rm t}$ is found to be independent of $H$ over this field range. However, a magnetoresistance (MR) $\Delta \rho/\rho(0) = [\rho(H)-\rho(0)]/\rho(0)$ is observed at temperatures $T<T_{\rm t}$ and is plotted versus $H$ for $T = 1.8$ and 10~K in Fig.~\ref{fig:rho_CaCu2As2_H}(b). The MR is positive up to 8.0~T, reaching MR~$\approx 8.7$\% at 1.8~K and $H = 8.0$~T\@. The reason that measurable MR values are only observed at $T<T_{\rm t}$ is not clear. Our detailed $\rho(T,H)$ measurements on crystal~\#1 indicate that the transition at $T_{\rm t} = 54$~K is not due to microcracks in the sample because the data on heating and cooling the same sample in Figs.~\ref{fig:rho_CaCu2As2} and \ref{fig:rho_CaCu2As2_H}(a) reproducibly show the same sharp transition temperature with the same shape of the anomaly, and the transition occurs reproducibly between samples as seen in Fig.~\ref{fig:rho_CaCu2As2No2}. 
Furthermore, we can see no mechanism by which the magnetoresistance that is observed only below $T_{\rm t}$ in Fig.~\ref{fig:rho_CaCu2As2_H} could arise from microcracks. The transition in ${\rm CaCu_{1.7}As_2}$ seen in $\rho(T)$ at 54--56~K in Figs.~\ref{fig:rho_CaCu2As2} and~\ref{fig:rho_CaCu2As2No2} might be due to an extrinsic phase transition in residual Cu-As flux attached to the crystals. Three phases exist in the Cu-As binary phase diagram with compositions of approximately Cu$_3$As, ${\rm Cu_5As_2}$ and Cu$_2$As. Previous $\rho,\ \chi$ and nuclear quadrupole resonance measurements versus $T$ on these phases\cite{Pauwels1973, Begaev2002} showed no phase transitions at any temperature near our phase transition temperature \mbox{$T_{\rm t} = 54$--56 K\@.} Moreover, the ${\rm SrCu_2As_2}$ crystals whose properties we reported in Ref.~\onlinecite{Anand2012} were grown using the same Cu-As self-flux, and these crystals showed no evidence for a transition at $T \approx 55$~K in our $\rho(T)$ measurements on them. Thus we rule out this extrinsic cause of the transition we see in ${\rm CaCu_{1.7}As_2}$ at $T_{\rm t}$. The lack of an obvious heat capacity anomaly at $\sim 55$~K in Fig.~\ref{fig:HC_CaCu2As2} might be construed as evidence that the second-order phase transition at $T_{\rm t} = 54$--56~K indicated by the resistivity data in Figs.~\ref{fig:rho_CaCu2As2}--\ref{fig:rho_CaCu2As2_H} is not a bulk effect. However, an observable change in the heat capacity is only expected if the temperature derivative $dS/dT$ of the entropy $S$ changes sufficiently strongly at $T_{\rm t}$. In particular, the heat capacity of a material is $C_{\rm p} = dQ/dT = T\,dS/dT$, where $dQ$ is the increment of heat absorbed and $dS = dQ/T$ is the incremental change in the total entropy of the system. Bulk phase transitions can occur that involve very little change in the slope of the entropy versus temperature. 
For example, the bulk second-order antiferromagnetic phase transition at the N\'eel temperature $T_{\rm N} \sim 300$~K of the layered cuprate ${\rm La_2CuO_4}$ has not been observed by heat capacity measurements because the change in $dS/dT$ at $T_{\rm N}$ is too small.\cite{Sun1991} Thus the lack of an observable heat capacity anomaly at $T_{\rm t}$ in Fig.~\ref{fig:HC_CaCu2As2} does not rule out a bulk phase transition at that temperature. In the Bloch-Gr\"{u}neisen (BG) model, the resistivity arises from scattering of conduction electrons by longitudinal acoustic lattice vibrations, given by\cite{Blatt1968} \begin{equation} \rho_{\rm {BG}}(T)= 4 \mathcal{R}(\Theta _{\rm R}) \left( \frac{T}{\Theta _{\rm{R}}}\right)^5 \int_0^{\Theta_{\rm{R}}/T}{\frac{x^5}{(e^x-1)(1-e^{-x})}dx}, \label{eq:BG} \end{equation} where $\Theta_{\rm{R}}$ is the Debye temperature obtained from fitting resistivity data. For polyatomic systems the prefactor $\mathcal{R}(\Theta _{\rm R})$ is given by \cite{Anand2012, Ryan2012} \begin{equation} \mathcal{R}(\Theta _{\rm R})=\frac{\hbar}{e^2} \left[ \frac{\pi^3 (3 \pi^2)^{1/3} \hbar^2}{4 n_{\rm{cell}}^{2/3} a^\ast k_{\rm{B}} \Theta_{\rm{R}}} \left(\frac{1}{M}\right)_{\rm ave} \right] \label{eq:BG-R} \end{equation} where $\hbar$ is Planck's constant divided by $2\pi$, $k_{\rm B}$ is Boltzmann's constant, $e$ is the fundamental electric charge, $n_{\rm cell}$ is the number of conduction (valence) electrons per atom, $(1/M)_{\rm ave}$ is the average inverse mass of the atoms in the compound, and $a^\ast =[V_{\rm cell}/nZ]^{1/3}$ is the equivalent lattice parameter of a primitive cubic unit cell containing one atom, $Z$ being the number of formula units per unit cell and $n$ the number of atoms per f.u. As discussed in Refs.~\onlinecite{Ryan2012} and~\onlinecite{Blatt1968}, it is usually not possible to obtain an accurate fit to $\rho(T)$ data by the BG model with the single adjustable parameter $\Theta_{\rm R}$. 
Therefore we allowed the prefactor in Eq.~(\ref{eq:BG}) to vary independently and fitted our in-plane $\rho(T>T_{\rm t})$ data for ${\rm CaCu_{1.7}As_2}$ in Fig.~\ref{fig:rho_CaCu2As2}(a) by \begin{equation} \rho(T) = \rho_0' + \rho(\Theta_{\rm R}) f(T/\Theta_{\rm R}), \label{eq:BG_fit} \end{equation} where $\rho_0'$ is the residual resistivity extrapolated from $T>T_{\rm t}$ and from Eq.~(\ref{eq:BG}) one obtains\cite{Anand2012, Ryan2012} \begin{equation} \begin{split} f(y) & = \frac{\rho_{\rm BG}(T)}{\rho_{\rm BG}(T=\Theta_{\rm R})} \\ & = 4.226\,259 \,y^5 \int_0^{1/y}\frac{x^5}{(e^x - 1)(1-e^{-x})}\,dx, \label{eq:BG_fn} \end{split} \end{equation} where $y=T/\Theta_{\rm R}$ and \begin{equation} \rho_{\rm BG}(T=\Theta_{\rm R})=0.946\,463\,5\,{\cal R}(\Theta _{\rm R}). \label{eq:BG_R} \end{equation} A fit of $\rho(T)$ data by Eqs.~(\ref{eq:BG_fit}) and (\ref{eq:BG_fn}) thus has three independent adjustable parameters $\rho_0'$, $\rho(\Theta_{\rm R})$ and $\Theta_{\rm R}$. A good fit of the $\rho(T)$ data in Fig.~\ref{fig:rho_CaCu2As2}(a) for 57~K~$\leq T \leq$~300~K was obtained, as shown by the solid red curve in Fig.~\ref{fig:rho_CaCu2As2}(a), where we used an accurate analytic Pad\'e approximant function of $y$ in place of Eq.~(\ref{eq:BG_fn}) as given in Ref.~\onlinecite{Ryan2012}. The parameters obtained from the fit are $\rho_0' =34.2(1)\,\mu\Omega$\,cm, $\rho(\Theta_{\rm{R}}) = 33.7(6)\,\mu\Omega$\,cm and $\Theta_{\rm{R}} = 320(6)$~K\@. The $\mathcal{R}(\Theta _{\rm R})$ calculated from the value of $\rho(\Theta_{\rm{R}})$ using Eq.~(\ref{eq:BG_R}) is $\mathcal{R}(\Theta _{\rm R}) = 35.6\,\mu\Omega$\,cm. In order to compare the resistivity at, e.g., room temperature with the value predicted by the BG theory, one needs an estimate of the conduction carrier concentration $n_{\rm cell}$ in Eq.~(\ref{eq:BG-R}). Such an estimate is not currently available. 
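The normalization constants in Eqs.~(\ref{eq:BG_fn}) and (\ref{eq:BG_R}) follow from evaluating the integral in Eq.~(\ref{eq:BG}) at $T=\Theta_{\rm R}$, since $f(1)=1$ by construction. The sketch below checks both constants numerically and evaluates the fitted curve at 300~K; it is illustrative only, using a plain Simpson-rule integrator as a stand-in for the Pad\'e approximant of Ref.~\onlinecite{Ryan2012}, with the fit parameters quoted above.

```python
import math

def bg_integral(upper, n=4000):
    """Composite-Simpson evaluation of
    Integral_0^upper x^5 / [(e^x - 1)(1 - e^{-x})] dx.
    The integrand behaves like x^3 near x = 0, so it is set to 0 there."""
    def g(x):
        if x == 0.0:
            return 0.0
        return x**5 / (math.expm1(x) * (-math.expm1(-x)))
    h = upper / n
    s = g(0.0) + g(upper)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(i * h)
    return s * h / 3.0

def f(y):
    """Normalized BG resistivity f(y) = rho_BG(T)/rho_BG(Theta_R), y = T/Theta_R."""
    return 4.226259 * y**5 * bg_integral(1.0 / y)

# f(1) must equal 1, and 4 * I(1) reproduces the prefactor 0.9464635:
print(f(1.0))                 # close to 1
print(4.0 * bg_integral(1.0)) # close to 0.9464635

# Fitted in-plane resistivity at 300 K from the quoted parameters:
rho0p, rho_theta, theta_R = 34.2, 33.7, 320.0   # muOhm cm, muOhm cm, K
rho_300 = rho0p + rho_theta * f(300.0 / theta_R)
print(f"rho(300 K) ~ {rho_300:.1f} muOhm cm")   # roughly 65 muOhm cm
```

The extrapolated fit value at 300~K is consistent with the measured resistivity scale set by $\rho_0$ and RRR.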
\section{\label{Conclusion} Conclusions} \begin{table} \caption{\label{Tab:Parameters} Values of parameters obtained from analyses of heat capacity, magnetic susceptibility and electrical resistivity measurements of ${\rm CaCu_{1.7}As_2}$. The notation $\langle\cdots\rangle$ denotes a temperature and/or angular average of the enclosed quantity.} \begin{ruledtabular} \begin{tabular}{ll} Property & Value \\ \hline \underline{Heat Capacity}\\ $\gamma$ & 2.0(2)~mJ/mol\,K$^2$ \\ $\beta$ & 0.33(2)~mJ/mol\,K$^4$ \\ ${\cal D}(E_{\rm F})$ & 0.85(9)~states/eV\,f.u.\ (both spin directions) \\ $\Theta_{\rm D} $ & 303(6)~K (from low-$T$)\\ $\Theta_{\rm D} $ & 265(1)~K (from all $T$) \\ \\ \underline{Susceptibility} \\ $\langle \chi \rangle$ & $-5.3\times 10^{-5}$~cm$^3$/mol\\ $\chi_{\rm core}$ & $-1.53 \times 10^{-4}$~cm$^3$/mol \\ $\chi_{\rm {P}}$ & $2.7(3) \times 10^{-5}$~cm$^3$/mol \\ $\chi_{\rm L}$ & $-0.9 \times 10^{-5}$~cm$^3$/mol \\ $\langle\chi_{\rm VV}\rangle$ & $8.2 \times 10^{-5}$~cm$^3$/mol \\ $\chi_{\rm {VV}}^{ab}$ & $9.1 \times 10^{-5}$~cm$^3$/mol\\ $\chi_{\rm {VV}}^{c}$ & $6.3 \times 10^{-5}$~cm$^3$/mol\\ \\ \underline{Resistivity}\\ $\rho_0$ & 26.7(1)~$\mu \Omega\,{\rm cm}$ \\ RRR & $\approx 2.5$ \\ $\rho_0'$ & 34.2(1)~$\mu \Omega$\,cm \\ $\rho(\Theta_{\rm{R}})$ & 33.7(6)~$\mu \Omega$\,cm \\ $\mathcal{R}(\Theta _{\rm R}) $ & 35.6~$\mu \Omega$\,cm\\ $\Theta_{\rm{R}}$ & 320(6)~K \\ \end{tabular} \end{ruledtabular} \end{table} We have successfully grown single crystals of ${\rm CaCu_{1.7}As_2}$ and investigated their crystallographic, magnetic, thermal, and electronic transport properties. \mbox{Rietveld} refinements of powder XRD data for crushed crystals and WDS chemical analyses of single crystal surfaces indicate the presence of $\approx 15$\% vacancies on the Cu sites, consistent with literature data. No superconductivity was observed above 1.8~K\@. 
Our crystallographic and refinement parameters are listed in Tables~\ref{tab:XRD1} and \ref{tab:XRD2} and a summary of the parameters obtained from our various physical property measurements on ${\rm CaCu_{1.7}As_2}$ is given in Table~\ref{Tab:Parameters}. The $\chi(T)$ data reveal a nearly $T$-independent anisotropic diamagnetic behavior, indicating that the Cu atoms in ${\rm CaCu_{1.7}As_2}$ are in the Cu$^{+1}$ oxidation state with a nonmagnetic $3d^{10}$ electronic configuration, as expected from the collapsed-tetragonal crystal structure of this compound. The formal oxidation state of the As, which participates in As--As interlayer bonding, is then As$^{-1.85}$, which suggests hole conduction on the As sublattice. The $C_{\rm p}(T)$ and $\rho (T)$ data reveal metallic behavior. A small density of states at the Fermi level is found, consistent with ${\rm CaCu_{1.7}As_2}$ being an $sp$-band metal. The overall $C_{\rm p}(T)$ and $\rho(T > T_{\rm t})$ behaviors are well-described by the Debye model and the Bloch-Gr\"{u}neisen model, respectively. However, the $\rho(T)$ of ${\rm CaCu_{1.7}As_2}$ exhibits a transition of unknown origin at $T_{\rm t} = 54$--56~K without any thermal hysteresis, suggesting that the transition is second order. A significant positive magnetoresistance develops below this transition temperature. This transition may arise from spatial ordering of the Cu vacancies on cooling below $T_{\rm t}$. High-resolution x-ray and/or neutron diffraction measurements at low~$T$ could test this hypothesis. \noindent\emph{Note Added ---} After submission of this paper, \mbox{Cheng~{\it et al.}}\cite{Cheng2012} reported in this journal the observation of a transition in ``${\rm CaCu_2As_2}$'' single crystals at 50~K from $\rho(T)$ measurements. As in the present paper, their $\chi(T)$ data for this compound showed no evidence of the transition. 
These authors did not mention the large concentration of vacancies on the Cu sites previously reported in Ref.~\onlinecite{Pilchowski1990} and confirmed by us. Cheng~{\it et al.} noted that the shape of the transition in $\rho(T)$ is similar to those observed for CaFe$_2$(As$_{1-x}$P$_x)_2$ (Ref.~\onlinecite{Kasahara2011}) and Ca$_{1-x}R_x$Fe$_2$As$_2$ ($R$ = lanthanide, Ref.~\onlinecite{Saha2012}) arising from transitions from tetragonal to collapsed-tetragonal (cT) structures on cooling below room temperature. However, as we have discussed herein and previously,\cite{Anand2012} ${\rm CaCu_{1.7}As_2}$ as well as ${\rm SrCu_2As_2}$ and ${\rm BaCu_2As_2}$ are already in the cT phase at room temperature. \acknowledgments This research was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering. Ames Laboratory is operated for the U.S. Department of Energy by Iowa State University under Contract No.~DE-AC02-07CH11358. \clearpage
\section{Introduction} The celebrated theorem of M. Ratner in 1992 classifies all {\it finite} invariant measures for unipotent flows on the quotient space of a connected Lie group by its discrete subgroup \cite{Ra}. The problem of classifying invariant {\it locally finite} Borel measures (i.e., Radon measures) is far from being understood in general. Most known classification results are restricted to the class of horospherical invariant measures on a quotient of a simple Lie group of rank one (\cite{Bu,Roblin, Wi}, \cite{Bab,Led,LedSa,Sar}). In this article, we obtain a classification of Radon measures invariant under a unipotent flow in one of the most basic examples of the quotient of a higher rank semisimple Lie group by a discrete subgroup of infinite co-volume. \medskip Let $G= \gf$ where $\mathbb{F}$ is either $\mathbb{R}$ or $\c$. Let $\Gamma_1$ and $\Gamma_2$ be finitely generated Zariski dense, discrete subgroups of $G$. Set $$Z:=(\Gamma_1\times \Gamma_2) \backslash (G\times G)=X_1\times X_2$$ where $X_i=\Gamma_i\backslash G$ for $i=1,2$. For $S\subset G$, $\Delta(S):=\{(s,s):s\in S\}$ denotes the diagonal embedding of $S$ into $G\times G$. \begin{thm}[\cite{BQ}, Benoist-Quint] Assume that $\Gamma_1<G$ is co-compact. Then any ergodic $\Delta(G)$-invariant Radon measure $\mu$ on $Z$ is, up to a constant multiple, one of the following: \begin{itemize} \item $\mu$ is the product $m^{\operatorname{Haar}}\times m^{\operatorname{Haar}}$ of Haar measures; \item $\mu$ is the graph of the Haar measure, in the sense that for some $g_0\in G$ with $[\Gamma_2: g_0^{-1}\Gamma_1 g_0\cap \Gamma_2]<\infty$, $\mu=\iota_*m^{\operatorname{Haar}}_{(g_0^{-1}\Gamma_1 g_0\cap \Gamma_2)}$, i.e., the push-forward of the $\operatorname{Haar}$-measure on $(g_0^{-1}\Gamma_1 g_0\cap \Gamma_2)\backslash G$ to the closed orbit $[(g_0, e)]\Delta (G)$ via the isomorphism $\iota$ given by $[g]\mapsto [(g_0g, g)]$. 
\end{itemize} \end{thm} Indeed, it is proved in \cite{BQ} that any ergodic $\Gamma_2$-invariant Borel probability measure on $X_1$ is either a Haar measure or supported on a finite orbit of $\Gamma_2$. This result is equivalent to the above theorem, in view of the homeomorphism $\nu\mapsto \tilde \nu$ between the space of all $\Gamma_2$-invariant measures on $X_1$ and the space of all $\Delta(G)$-invariant measures on $Z$, given by $$\tilde \nu (f)=\int_{X_2}\int_{X_1} f(\Gamma_1 hg, \Gamma_2 g) \operatorname{d}\!\nu(h)\operatorname{d}\!m^{\operatorname{Haar}}(g). $$ Since the Haar measure $m^{\operatorname{Haar}}$ on $X_1$ is ergodic for any element of $G$ which generates an unbounded subgroup, it follows that $m^{\operatorname{Haar}}$ is $\Gamma_2$-ergodic and hence the product $m^{\operatorname{Haar}}\times m^{\operatorname{Haar}}$ of the Haar measures in $X_1\times X_2$ is $\Delta(G)$-ergodic. We now consider the action of $\Delta(N)$ on $Z$ where $N$ is a horospherical subgroup of $G$, i.e., $N$ is conjugate to the subgroup $$\left\{\begin{pmatrix} 1 & 0\\ t & 1\end{pmatrix}: t\in \mathbb{F}\right\}.$$ A $\dN$-invariant Radon measure on $Z$ is said to be {\it conservative} if for any subset $S$ of positive measure in $Z$, the measure of $\{n\in N: xn\in S\}$, with respect to the Haar measure of $N$, is infinite for almost all $x \in S$. \medskip The aim of this paper is to classify all $\dN$-invariant ergodic conservative Radon measures on $Z$ assuming $\Gamma_2$ is geometrically finite. Since Ratner~\cite{Ra} classified all such finite measures, our focus lies on {\it infinite} Radon measures. Note that if $\mu$ is a $\dN$-invariant measure, then the translate $w_* \mu$ is also $\dN$-invariant for any $w$ in the centralizer of $\dN$. The centralizer of $\dN$ in $G\times G$ is equal to $N\times N$. Hence it suffices to classify $\dN$-invariant measures, up to a translation by an element of $N\times N$. 
Let $m^{\operatorname{BR}}_{\Gamma_2}$ denote the $N$-invariant Burger-Roblin measure on $X_2$. It is known that $m^{\operatorname{BR}}_{\Gamma_2}$ is the unique $N$-invariant ergodic conservative measure on $X_2$ which is not supported on a closed $N$-orbit (\cite{Bu}, \cite{Roblin}, \cite{Wi}). When $\Gamma_2$ is of infinite co-volume, $m^{\BR}_{\Gamma_2}$ is an infinite measure. In the following two theorems, which are the main results of this paper, we assume that $\Gamma_1<G$ is cocompact and $\Gamma_2$ is a Zariski dense, geometrically finite subgroup of $G$ with infinite co-volume. \begin{thm}\label{ergg} The product measure $m^{\operatorname{Haar}}\times m^{\operatorname{BR}}_{\Gamma_2}$ on $Z$ is a $\dN$-ergodic conservative infinite Radon measure. \end{thm} \begin{thm}\label{main} Any $\dN$-invariant, ergodic, conservative, infinite Radon measure $\mu$ on $Z$ is one of the following, up to a translation by an element of $N\times N$: \begin{enumerate} \item $\mu$ is the product measure $m^{\operatorname{Haar}}\times m^{\operatorname{BR}}_{\Gamma_2};$ \item $\mu$ is the graph of the $\operatorname{BR}$-measure, in the sense that for some $g_0\in \gf$ with $[\Gamma_2: g_0^{-1}\Gamma_1 g_0\cap \Gamma_2]<\infty$, \[ \mu=\phi_*m^{\operatorname{BR}}_{(g_0^{-1}\Gamma_1 g_0\cap \Gamma_2)}, \] i.e., the push-forward of the $\operatorname{BR}$-measure on $(g_0^{-1}\Gamma_1 g_0\cap \Gamma_2)\backslash G$ to the closed orbit $[(g_0, e)]\Delta (G)$ via the isomorphism $\phi$ given by $[g]\mapsto [(g_0g, g)]$. \item $\mathbb{F}=\c$ and there exists a closed orbit $x_2N$ in $X_2$ homeomorphic to $\mathbb{R} \times \mathbb S^1$ such that $\mu$ is supported on $X_1\times x_2N$. To describe $\mu$ more precisely, let $U< N$ denote the one-dimensional subgroup containing $\operatorname{Stab}_{N}(x_2)$ and $\operatorname{d}\!n$ the $N$-invariant measure on $x_2N$ in $X_2$. 
We then have one of the two possibilities: \begin{enumerate} \item $$\mu= m^{\operatorname{Haar}}\times \operatorname{d}\!n;$$ \item there exist a connected subgroup $L\simeq{\rm{SL}}_2(\mathbb{R})$ with $L\cap N=U$, a compact $L$-orbit $Y$ in $X_1$ and an element $n\in N$ such that $$\mu=\int_{x_2N} \mu_{x}\operatorname{d}\!x$$ where $\mu_{x_2n_0}$ is given by $\mu_{x_2n_0}(\psi )=\int_{Y} \psi (yn n_0, x_2n_0)\operatorname{d}\!y $ with $\operatorname{d}\!y$ being the $L$-invariant probability measure on $Y$. \end{enumerate}\end{enumerate} \end{thm} We deduce Theorem \ref{ergg} as a consequence of Theorem \ref{main} (see subsection \ref{dedd}). Two main ingredients of the proof of Theorem \ref{main} are Ratner's classification of probability measures on $X_1$ which are invariant and ergodic under a one-parameter unipotent subgroup of $G$, and the classification of $N$-equivariant (set-valued) Borel maps $X_2\to X_1$, established in our earlier work \cite{MO}. \section{Recurrence and algebraic actions on measure spaces}\label{rec} In this section, let $G={\rm PSL}_2(\c)$ and let $\Gamma<G$ be a Zariski dense geometrically finite discrete subgroup. Set $X=\Gamma\backslash G$. Let $N$ be the horospherical subgroup \[ \left\{n_t:=\begin{pmatrix} 1 & 0\\ t & 1\end{pmatrix}: t\in \c\right\} \] and let $m^{\operatorname{BR}}$ denote the Burger-Roblin measure on $X$ invariant under $N$. Recall that $m^{\operatorname{BR}}$ is the unique ergodic $N$-invariant Radon measure on $X$ which is not supported on a closed $N$-orbit. Let $U<N$ be a non-trivial connected subgroup of $N$. We denote by ${\mathcal P}(U\backslash N)$ the space of probability measures on $U\backslash N$. The natural action of $N$ on $U\backslash N$ induces an action of $N$ on ${\mathcal P}(U\backslash N)$. 
The aim of this section is to prove the following technical result: \begin{prop}\label{lem:Borel-map-var} If there exists an essentially $N$-equivariant Borel map \[ f:(X, m^{\operatorname{BR}})\to {\mathcal P}(U\backslash N), \] then $U=N$ and hence $f$ is essentially constant. \end{prop} For the proof, we will first observe that the $N$ action on ${\mathcal P}(U\backslash N)$ is smooth~\cite[Def.\ 2.1.9]{Zi}. By the fact that $m^{\operatorname{BR}}$ is $N$-ergodic, it then follows that $f$ is essentially concentrated on a single $N$-orbit in ${\mathcal P}(U\backslash N)$. We will use a recurrence property of $m^{\operatorname{BR}}$, which is stronger than conservativity, to prove $U=N$. We begin with the following lemma. The space $\mathcal P(\mathbb{R})$ is equipped with the weak topology: i.e., $\nu_n \to \nu$ if and only if $\nu_n(\psi)\to \nu(\psi)$ for all $\psi\in C_c(\mathbb{R})$. \begin{lem}\label{lem:prob-R} If $\{t_n:n=1,2, \cdots \}$ is a sequence in $\mathbb{R}$ such that $t_{n*}\sigma\to\sigma'$ for some $\sigma,\sigma'\in{\mathcal P}(\mathbb{R})$, then $\{t_{n}\}$ is bounded. \end{lem} \begin{proof} Assume the contrary, and after passing to a subsequence suppose $t_n\to\infty.$ Since $\sigma$ and $\sigma'$ are probability measures on $\mathbb{R},$ there is some $M>1$ such that \[ \sigma([-M,M])>0.9 \quad\text{and}\quad \sigma'([-M,M])>0.9. \] Let $\psi \in C_c(\mathbb{R})$ be a continuous function so that $0\leq \psi \leq 1,$ $\psi |_{[-M,M]}=1$ and $\psi |_{(-\infty,-M-1)\cup(M+1,\infty)}=0.$ Since $t_n\to\infty$ we have \[ \left( [-M-1,M+1]-t_n\right) \cap[-M-1,M+1]=\emptyset \text{ for all large $n.$} \] Therefore, $t_{n*}\sigma (\psi )<0.1$ but $\sigma'(\psi )>0.9,$ which contradicts the assumption that $t_{n*}\sigma\to\sigma'.$ \end{proof} As was mentioned above, we will need certain recurrence properties of the action of $N$ on $(X,m^{\operatorname{BR}})$. 
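As an aside, the mechanism behind Lemma~\ref{lem:prob-R} can be illustrated numerically (this is an illustration only, not part of the proof): translating a fixed probability measure on $\mathbb{R}$ by an unbounded sequence pushes all of its mass out of any fixed window $[-M,M]$, so the translates cannot converge weakly to a probability measure. The sketch below uses a standard Gaussian for $\sigma$.

```python
import math

def normal_mass(a, b):
    """sigma([a, b]) for sigma the standard Gaussian N(0, 1)."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return Phi(b) - Phi(a)

M = 3.0
shifts = (0.0, 5.0, 10.0, 20.0)
# The translate (t)_* sigma gives the window [-M, M] the mass
# sigma([-M - t, M - t]), which decays to 0 as t grows:
masses = [normal_mass(-M - t, M - t) for t in shifts]
print(masses)
```

The printed masses decrease monotonically toward zero, matching the step in the proof where $t_{n*}\sigma(\psi) < 0.1$ for all large $n$.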
This will be deduced from recurrence properties of the Bowen-Margulis-Sullivan measure $m^{\operatorname{BMS}}$ on $X$ with respect to the diagonal flow $a_t={\rm diag}(e^{t/2},e^{-t/2})$. We normalize so that $m^{\operatorname{BMS}}$ is a probability measure. These two measures $m^{\operatorname{BMS}}$ and $m^{\operatorname{BR}}$ are quasi-product measures, and on weak-stable manifolds (i.e., locally transversal to $N$-orbits) they are absolutely continuous with respect to each other. Set $M=\{{\rm{diag}}(z,z^{-1}): |z|=1\}$. Then $G/M$ can be identified with the unit tangent bundle of the hyperbolic $3$-space $\mathbb H^3$. Hence for every $g\in G$, we can associate a point $g^-$ in the boundary of $\mathbb H^3$ which is the backward end point of the geodesic determined by the tangent vector $gM$. Now the set $X_{\rm rad}:=\{\Gamma g\in X: g^- \text{ is a radial limit point of $\Gamma$}\}$ has full $\operatorname{BMS}$-measure as well as full $\operatorname{BR}$-measure. For $x\in X_{\rm rad}$, $n \mapsto xn$ is a bijection $N\to xN$, and $\mu_x^{\operatorname{PS}}$ denotes the leafwise measure of $m^{\operatorname{BMS}}$, considered as a measure on $N$ (see ~\cite[\S2]{MO}). We recall the following: \begin{thm} [\cite{Rudol}, Theorem 17] \label{thm:Rudol} For any Borel set $B$ of $X$ and any $\eta>0$, the set $$ \left\{x\in X_{\rm rad}:\liminf_T \tfrac{1}{\mu_x^{\operatorname{PS}}(B_N(T))}\int_{B_N(T)} \chi_{B} (x{n_t})\,d\mu_x^{\operatorname{PS}}(n_t)\geq (1-\eta)m^{\operatorname{BMS}}( B )\right\} $$ has full $\operatorname{BMS}$-measure. \end{thm} \begin{lem}\label{lem:recurrence} Let $U$ be a one-dimensional connected subgroup of $N$. 
Then for every subset $B\subset X$ of positive $\operatorname{BMS}$-measure, the set \[ \{{n}\in U\backslash N: x{n}\in B\} \] is unbounded for $m^{\operatorname{BMS}}$-a.e.\ $x\in X.$ \end{lem} \begin{proof} We denote by $\rm{Nbd}_R(U)$ the $R$-neighborhood of $U$, i.e., $${\rm{Nbd}}_R(U)=\{n_t\in N: |t-s|<R\text{ for some $n_s\in U$}\}.$$ We set $B_N(R):= {\rm{Nbd}}_R(\{e\})$ which is the $R$-neighborhood of $e$. Let $B\subset X$ be any Borel set of positive BMS-measure. Then by Theorem~\ref{thm:Rudol}, there is a $\operatorname{BMS}$ full measure set $X'$ of $X_{\rm rad}$ with the following property: for all $x\in X'$, there is $T_x>0$ such that if $T>T_x$, then \begin{equation}\label{eq:max-erg} \mu_x^{\operatorname{PS}}\{{n_t}\in B_N(T): xn_t \in B\}\geq 0.9\, \mu_x^{\operatorname{PS}}(B_N(T))m^{\operatorname{BMS}}(B). \end{equation} Let $x\in X'$. Since $x$ is a radial limit point for $\Gamma$, there exists a sequence $T_i\to\infty$ so that $xa_{-\log T_i}$ converges to some $y\in{\rm supp}(m^{\operatorname{BMS}})$. Therefore, we have \begin{equation}\label{eq:PS-cont} \mu_{xa_{-\log T_i}}^{\operatorname{PS}}\to\mu_y^{\operatorname{PS}}, \end{equation} in the space of regular Borel measures on $N$ endowed with the weak-topology (see~\cite[Lemma 2.1]{MO}). 
Moreover, by ~\cite[Lemma 4.3]{MO}, for every $\epsilon>0,$ there exists $\rho_0>0$ such that for every $0<\rho\leq\rho_0$ we have \begin{equation}\label{eq:var-null} \mu_y^{\operatorname{PS}}(B_N(1)\cap {\rm{Nbd}}_{\rho}(U))\leq \epsilon\cdot \mu_y^{\operatorname{PS}}(B_N(1)). \end{equation} Since \[ \tfrac{\mu_x^{\operatorname{PS}}(B_N(T_i)\cap {\rm{Nbd}}_{R}(U)) }{\mu_x^{\operatorname{PS}}(B_N(T_i))} =\tfrac{\mu_{xa_{-\log T_i}}^{\operatorname{PS}}(B_N(1)\cap {\rm{Nbd}}_{R/T_i}(U)) }{\mu_{xa_{-\log T_i}}^{\operatorname{PS}}(B_N(1))}, \] it follows from \eqref{eq:PS-cont} and \eqref{eq:var-null} that for every $\epsilon>0$ and for all sufficiently large $i$ such that $R/T_i<\rho_0$, \begin{equation}\label{eq:nbhd-U} \mu_x^{\operatorname{PS}}(B_N(T_i)\cap {\rm{Nbd}}_{R}(U))\leq \epsilon \cdot \mu_x^{\operatorname{PS}}(B_N(T_i)). \end{equation} Put $\epsilon=\tfrac{1}{10}\, m^{\operatorname{BMS}}(B).$ Given any $j$, there exists $i_j>\max\{j, T_x\}$ such that $$ \mu_x^{\operatorname{PS}}(B_N(T_j))\leq\epsilon\cdot \mu_x^{\operatorname{PS}}(B_N(T_{i_j})) .$$ Then for all sufficiently large $i>i_j$, we have $$ \mu_x^{\operatorname{PS}} (B_N(T_j) ) +\mu_x^{\operatorname{PS}} (B_N(T_i)\cap {\rm{Nbd}}_{R}(U)) \le 2 \epsilon\, \mu_x^{\operatorname{PS}} (B_N(T_i)) .$$ Therefore it follows from ~\eqref{eq:max-erg} that for any $j$ and for all $i>i_j$, \begin{multline*} \mu_x^{\operatorname{PS}}\{{n_t}\in B_N(T_i)\setminus (B_N(T_j)\cup {\rm{Nbd}}_{R}(U) ): x{n_t} \in B\}\geq \\ 0.5\,\mu_x^{\operatorname{PS}}(B_N(T_i))\,m^{\operatorname{BMS}}(B)>0. \end{multline*} This implies that the set of $n_t\in N$ with $xn_t\in B$ cannot be contained in any bounded neighborhood of $U$, proving the claim. \end{proof} \medskip \noindent {\it Proof of Proposition ~\ref{lem:Borel-map-var}.} If $U=N$, the claim is clear. Hence we suppose $U$ is a one-dimensional connected subgroup of $N$. First by modifying $f$ on a $\operatorname{BR}$-null set, we may assume that for all $x\in X$ and for all $n\in N$, \[ f(x{n})=n_* f(x) . 
\] Fix a compact subset $Q\subset X_{\rm{rad}}$ such that $f$ is continuous on $Q$ and $m^{\operatorname{BR}}(Q)>0$. This is possible by Lusin's theorem. We claim that for some $y\in Q$, the set \[ \{n\in N: yn\in Q\} \] is unbounded in the quotient space $U\backslash N$. First note that there exists $\rho_0>0$ such that $QB_N({\rho_0})$ has positive $\operatorname{BMS}$-measure. By Lemma~\ref{lem:recurrence} there is a $\operatorname{BMS}$-full measure set $X'$ so that for all $x\in X'$, \[ \{{n}\in N : x{n} \in QB_N({\rho_0})\} \text{ is unbounded in $U\backslash N.$ }\] Since $N$ is abelian, the above implies that \begin{equation}\label{unb} \{{n}\in N : y{n} \in Q\}\text{ is unbounded in $U\backslash N$ for all $y\in X'N$.} \end{equation} The set $X'N$ is a $\operatorname{BR}$-conull set and $m^{\operatorname{BR}}(Q)>0$. Therefore, there is some $y\in Q$ which satisfies \eqref{unb}, proving the claim. Now, there is a sequence $\{n_{t_i}\in N\}$ such that $n_{t_i}\to \infty$ in $U\backslash N$, $yn_{t_i}\in Q$ and $yn_{t_i}\to z\in Q.$ The function $f$ is continuous on $Q$. Therefore we get \[ (n_{t_i})_*f(y)\to f(z). \] Since $f(y)$ and $ f(z)$ are probability measures on $U\backslash N\simeq \mathbb{R}$, and $n_{t_i}\to \infty$ in $U\backslash N\simeq \mathbb{R}$, this contradicts Lemma~\ref{lem:prob-R}. This yields that $U=N$ is the only possibility and finishes the proof. \qed \section{Proof of Theorems \ref{ergg} and \ref{main}} We continue with the notation set up in the introduction. Let $\mathbb{F}=\mathbb{R}$ or $\c$ and $G=\gf$. Let $\Gamma_1<G$ be a cocompact lattice and $\Gamma_2<G$ be a geometrically finite and Zariski dense subgroup. Set $X_i=\Gamma_i\backslash G$ for $i=1,2$. Let $Z=X_1\times X_2$. Let $N<G$ be a horospherical subgroup. 
Without loss of generality, we may assume $$N:=\left\{n_t:=\begin{pmatrix} 1 & 0\\ t & 1\end{pmatrix}: t\in \mathbb{F}\right\}.$$ We denote by $m^{\BR}_{\Gamma_2}$ the $N$-invariant Burger-Roblin measure on $X_2$; this is unique up to a constant multiple. Let $\mu$ be a $\Delta (N)$-invariant, ergodic, conservative {\it infinite} Radon measure on $Z$. Let $$ \pi: Z\to X_2 $$ be the canonical projection. Since $X_1$ is compact, the push-forward $\pi_*\mu$ defines an $N$-invariant ergodic conservative {\it infinite} Radon measure on $X_2$. \begin{thm} Up to a constant multiple, $$\pi_*\mu= m^{\BR}_{\Gamma_2}\quad\text{or }\quad \pi_*\mu=\operatorname{d}\!n$$ for the $N$-invariant measure $\operatorname{d}\!n$ on a closed orbit $x_2N$ homeomorphic to $\mathbb{R}\times \mathbb S^1$. The latter happens only when $\mathbb{F}=\c$ and $\Gamma_2$ has a parabolic limit point of rank one. \end{thm} \begin{proof} Since $\Gamma_2$ is assumed to be geometrically finite and Zariski dense, up to proportionality, the measure $\pi_*\mu$ is either $m^{\BR}_{\Gamma_2}$ or the $N$-invariant measure supported on a closed $N$-orbit $x_2N$ in $X_2$ (\cite{Roblin} and~\cite{Wi}). In the latter case, $x_2N$ is homeomorphic to one of the following: $\mathbb S^1\times \mathbb S^1$, $\mathbb{R}\times \mathbb{R}$, and $\mathbb{R}\times \mathbb S^1$. The first possibility cannot happen, as it would mean that $\mu$ is a finite measure. The second possibility would contradict the assumption that $\mu$ is $N$-conservative. Hence $x_2N$ must be homeomorphic to $\mathbb{R}\times \mathbb S^1$. \end{proof} The following is one of the main ingredients of our proof of Theorem \ref{main}, established in \cite{MO}. 
\begin{thm} \label{pro} One of the following holds, up to a constant multiple: \begin{enumerate} \item $\pi_*\mu= m^{\BR}_{\Gamma_2}$ and $\mu$ is invariant under $U\times \{e\}$ for a non-trivial connected subgroup $U$ of $N$; \item $\pi_*\mu=m^{\BR}_{\Gamma_2}$ and the fibers of the map $\pi$ are finite with the same cardinality almost surely. Moreover, in this case, $\mu$ is the graph of the $\operatorname{BR}$-measure in the sense of Theorem \ref{main}(2); \item $\mathbb{F}=\c$ and $\pi_*\mu=\operatorname{d}\!n$ for the $N$-invariant measure $\operatorname{d}\!n$ on a closed orbit $x_2N$ homeomorphic to $\mathbb{R}\times \mathbb S^1$. \end{enumerate} \end{thm} \begin{proof} For the case when $\pi_*\mu=m^{\BR}_{\Gamma_2}$, it follows from~\cite[Thm.~7.12 and Thm.~7.17]{MO} either that the fibers of the map $\pi$ are finite with the same cardinality almost surely or that $\mu$ is invariant under a non-trivial connected subgroup of $N$, yielding the cases (1) and (2). Indeed~\cite[Thm.\ 7.12]{MO} states this under the assumption that $\mu$ is an $N$-joining, but all that is used in the proof is the fact that the projection of the measure onto one of the factors is the $\operatorname{BR}$ measure. \end{proof} \subsection{Proof of Theorem \ref{main}} \subsubsection{The case of $G={\rm PSL}_2(\mathbb{R})$.}\label{sec:sl2r} In this case, $m^{\BR}_{\Gamma_2}$ is the unique infinite conservative $N$-invariant measure on $X_2$. Therefore we may assume, after the normalization of $m^{\BR}_{\Gamma_2} $ if necessary, that $\pi_*\mu=m^{\BR}_{\Gamma_2}$. By the standard disintegration theorem, we have \[ \mu=\int_{X_2} \mu_x \operatorname{d}\!m^{\BR}_{\Gamma_2} (x) \] where $\mu_x$ is a probability measure on $X_1$ for $m^{\BR}_{\Gamma_2}$-a.e.\ $x$. Suppose that Theorem \ref{pro}(1) holds, i.e., $\mu$ is invariant under $N\times \{e\}$. 
Then, since every element in the $\sigma$-algebra \[ \{X_1\times B: B\subset X_2\text{ is a Borel set}\} \] is invariant under $N\times \{e\},$ we get that $\mu_x$ is an $N$-invariant probability measure on $X_1$ for $m^{\BR}_{\Gamma_2}$-a.e.\ $x$. By the unique ergodicity of $N$ on the compact space $X_1$ \cite{Fu}, we have \begin{equation}\label{fin-sl2} \mu_x=m^{\operatorname{Haar}}\quad \text{ for $m^{\BR}_{\Gamma_2}$-a.e.\ $x$;}\end{equation} hence $\mu=m^{\operatorname{Haar}}\times m^{\BR}_{\Gamma_2}$. If Theorem \ref{pro}(2) holds, we obtain that $\mu$ is the graph of the $\operatorname{BR}$-measure as desired in Theorem \ref{main}. \subsubsection{The case of $G={\rm PSL}_2(\c)$} In analyzing the three cases in Theorem \ref{pro}, we use the following special case of Ratner's measure classification theorem \cite{Ra}: \begin{thm}\label{ratner} Let $\Gamma_1<G={\rm PSL}_2(\c)$ be a cocompact lattice. Let $U$ be a one-parameter unipotent subgroup of $G$. Let $L\simeq {\rm PSL}_2(\mathbb{R})$ be the connected subgroup generated by $U$ and its transpose $U^t$. Then any ergodic $U$-invariant probability measure on $\Gamma_1\backslash G$ is either the Haar measure or a $v^{-1} Lv$-invariant measure supported on a compact orbit $\Gamma_1\backslash \Gamma_1 g Lv$ for some $g\in G$ and $v\in N$. \end{thm} Indeed, the same conclusion holds for any ergodic $u$-invariant probability measure on $\Gamma_1\backslash G$ for any non-trivial element $u\in U$, as was obtained in \cite{Shah}. Also note that in the second case of Theorem~\ref{ratner} the support of the measure is contained in $yLN$ for some compact orbit $yL$. We now investigate each case of Theorem \ref{pro} as follows: \begin{thm} \label{prod} For $k=1,2,3$, Theorem~\ref{pro}(k) implies Theorem~\ref{main}(k). \end{thm} \begin{proof} Observe first that the case of $k=2$ follows directly from Theorem~\ref{pro}. 
Consider the case $k=1$: suppose that $\mu$ is invariant under a subgroup $U\times \{e\}$ for a non-trivial connected subgroup $U$ of $N$. We normalize $m^{\BR}_{\Gamma_2}$ so that $\pi_*\mu=m^{\BR}_{\Gamma_2}$. It follows from the standard disintegration theorem that \begin{equation}\label{eq:mu-disint} \mu=\int_{X_2} \mu_x \operatorname{d}\!m^{\BR}_{\Gamma_2} (x). \end{equation} {Arguing as in \S\ref{sec:sl2r}, since $\mu$ is invariant under $U\times \{e\},$ we get that $\mu_x$ is a $U$-invariant probability measure on $X_1$ for $m^{\BR}_{\Gamma_2}$-a.e.\ $x$.} We claim that \begin{equation}\label{fin} \mu_x=m^{\operatorname{Haar}}\quad \text{ for $m^{\BR}_{\Gamma_2}$-a.e.\ $x$;} \end{equation} this implies $\mu=m^{\operatorname{Haar}}\times m^{\BR}_{\Gamma_2}$ and finishes the proof in this case. We apply Theorem \ref{ratner} to $U$. Let $L\simeq {\rm PSL}_2(\mathbb{R})$ be defined as in Theorem \ref{ratner}. Compactness of $\Gamma_1\backslash \Gamma_1 g L$ implies that $g^{-1}\Gamma_1 g\cap L$ is a cocompact lattice in $L$. In particular, $g^{-1}\Gamma_1 g\cap L$ is finitely generated and Zariski dense in $L$. This implies that there are only countably many compact $L$-orbits in $X_1$. Let $\{y_iL : i=0, 1, 2, \ldots \}$ be the collection of all compact $L$-orbits in $X_1.$ Then for $m^{\BR}_{\Gamma_2}$-a.e.\ $x\in X_2$, we have \begin{equation}\label{eq:erg-dec-ratner} \mu_x=c_x m^{\operatorname{Haar}}+ \sum_i \mu_{x, i} \end{equation} where $c_x \ge 0$ and $\mu_{x, i}$ is a $U$-invariant finite measure supported in $y_i L N$. The set $\{(x_1,x_2): c_{x_2}>0\}$ is a $\Delta(N)$-invariant Borel measurable set. Therefore, if this set has positive measure, it is $\mu$-conull by the $\Delta(N)$-ergodicity of $\mu$, and~\eqref{fin} follows. In view of this, we assume from now on that $c_x=0$ for $m^{\BR}_{\Gamma_2}$-a.e.\ $x$. Then the support of $\mu$ is contained in the countable union \[ \bigcup_i (y_i L N\times X_2). \] Hence for some $i$, \begin{equation} \label{un} \mu(y_i LN\times X_2)>0 . 
\end{equation} Without loss of generality, we may assume $i=0$. Since $y_0 LN\times X_2$ is $\Delta(N)$-invariant and $\mu$ is $\Delta(N)$-ergodic,~\eqref{un} implies that $y_0 LN\times X_2$ is $\mu$-conull. Therefore, $\mu_x$ is supported on $y_0LN$ for $m^{\BR}_{\Gamma_2}$-a.e.\ $x\in X_2$. For each $n\in N$, let $\eta_n$ be the probability measure supported on $y_0Ln$, invariant under $n^{-1}Ln$. Noting that $y_0Ln=y_0Ln'$ if $n\in Un'$, the map $n\mapsto \eta_n$ factors through $U\backslash N$. We also have \begin{equation}\label{eta0} n_0\eta_n=\eta_{nn_0}\quad\text{for any $n, n_0\in N$}.\end{equation} By Theorem \ref{ratner}, the collection $\{\eta_n: n\in U\backslash N\}$ provides all $U$-invariant ergodic probability measures on $X_1$ whose supports are contained in $y_0LN$. Hence the $U$-ergodic decomposition of $\mu_x$ gives that for a.e.\ $x\in X_2$, there is a probability measure $\sigma_x$ on $U\backslash N$ such that \[ \mu_x=\int_{U\backslash N}\eta_{n} \operatorname{d}\!\sigma_x(n). \] Since $\mu$ is $\Delta(N)$-invariant, we have \begin{equation}\label{eq:N-mu-x} \mu_{xn_0}=n_0\mu_x\quad\text{ for $m^{\BR}_{\Gamma_2}$-a.e. $x\in X_2$ and all $n_0\in N.$ } \end{equation} Observe that \begin{equation} \label{u1} \mu_{xn_0}=\int_{U\backslash N }\eta_{n} \operatorname{d}\!\sigma_{xn_0}(n), \end{equation} and that \begin{align*} n_0\mu_x&=\int_{U\backslash N} n_0 \eta_n\operatorname{d}\!\sigma_x(n) =\int_{U\backslash N} \eta_{nn_0}\operatorname{d}\!\sigma_x(n) =\int_{U\backslash N} \eta_{n}\operatorname{d}(n_0\sigma_x)(n). \end{align*} Therefore \eqref{eq:N-mu-x} implies that for $m^{\BR}_{\Gamma_2}$-a.e. $x\in X_2$ and for a.e. $n_0\in N$, \begin{equation}\label{ns} n_0\sigma_x=\sigma_{xn_0}. \end{equation} It follows that the Borel map $f:(X_2,m^{\BR}_{\Gamma_2})\to \mathcal P(U\backslash N)$ defined by \[ f(x):=\sigma_x \] is essentially $N$-equivariant for the natural action of $N$ on $\mathcal P(U\backslash N)$. 
As $U$ is one-dimensional, this yields a contradiction to Proposition \ref{lem:Borel-map-var} and hence completes the proof of case $k=1$. We now turn to the proof of the case $k=3$. The argument is similar to the above case. Let $x_2N$ be a closed orbit as in the statement of Theorem \ref{pro}(3). We disintegrate $\mu$ as follows: \begin{equation}\label{eq:dis-int} \mu=\int_{x_2N} \mu_x \operatorname{d}\!n \end{equation} where $\mu_x$ is a probability measure on $X_1$ for a.e.\ $x\in x_2N$. As $x_2N$ is homeomorphic to $\mathbb{R} \times \mathbb S^1$, the stabilizer of $x_2$ in $N$ is generated by a unipotent element, say, $u$. Note that $u$ acts trivially on $x_2N$ and $\Delta(u)$ leaves $\mu$ invariant. Hence again we have \begin{equation}\label{eq:mux-inv} \mbox{$\mu_x$ is $u$-invariant almost surely. } \end{equation} We apply \eqref{eq:erg-dec-ratner} to the $u$-invariant measures $\mu_x$. Let $L\simeq {\rm PSL}_2(\mathbb{R})$ denote the connected closed subgroup containing $u$ and $u^t$, and let $\{y_iL:i=0,1,\ldots \}$ be the collection of all compact $L$-orbits. Then for almost every $x \in x_2N$ we write \[ \mu_x=c_x m^{\operatorname{Haar}}+ \sum_i \mu_{x, i}, \] where $\mu_{x, i}$ is a $u$-invariant finite measure supported in $y_i L N$. As before, if $c_x>0$ on a positive measure subset of $x_2N$, then $c_x=1$ almost surely by the $\dN$-ergodicity of $\mu.$ Then $\mu=m^{\operatorname{Haar}}\times \operatorname{d}\!n$; note that this measure is $\dN$-ergodic since $m^{\operatorname{Haar}}$ is $u$-ergodic. This is the case of Theorem~\ref{main}(3)(a). Lastly we consider the case when $c_x=0$ almost surely. As before, \[ \mu(y_i LN\times x_2N)>0 \] for some $i$, and hence almost every $\mu_x$ is supported on a single $y_iLN$ by the ergodicity of $\mu$. We assume $i=0$ without loss of generality. Set $U=L\cap N$. 
Then $\{\eta_n: n\in U\backslash N\}$ (with $\eta_n$ defined as in the previous case) is the set of all $u$-ergodic probability measures on $X_1$ whose supports are contained in $y_0LN$, by Theorem \ref{ratner} and the remark following it. Therefore, we get a probability measure $\sigma_x\in \mathcal P(U\backslash N)$ such that \[ \mu_x=\int_{n\in U\backslash N} \eta_{n} \operatorname{d}\!\sigma_x(n). \] Moreover, $n_*\sigma_x=\sigma_{xn}$ for a.e.\ $x$ and all $n\in N$. Put $\sigma:=\sigma_x$ for some fixed $x$. Without loss of generality, we assume $x=x_2$. Then for $\psi\in C_c(Z)$, $$\mu(\psi) =\int_{n\in U\backslash N} \int_{x_2n_0 \in x_2N} \int_Y \psi(yn_0n, x_2n_0) \operatorname{d}\!y \operatorname{d}\!n_0\operatorname{d}\!\sigma(n).$$ However, for each $n\in U\backslash N$, $\psi \mapsto\int_{x_2n_0 \in x_2N} \int_Y \psi(yn_0n, x_2n_0) \operatorname{d}\!y \operatorname{d}\!n_0$ defines a $\Delta(N)$-invariant measure, and hence by the $\Delta(N)$-ergodicity assumption on $\mu$, $\sigma$ must be a delta measure at a point, say $n\in U\backslash N$. Therefore we arrive at Theorem \ref{main}(3)(b). \end{proof} \subsection{Proof of Theorem \ref{ergg}}\label{dedd} Suppose that the product measure \[ \mu:=m^{\operatorname{Haar}}\times m^{\BR}_{\Gamma_2} \] is not ergodic for the action of $\dN$. Let $\Omega$ be the support of $\mu$. We consider the decomposition $\Omega=\Omega_d\cup \Omega_c$ where $\Omega_d$ and $\Omega_c$ are the maximal $\dN$-invariant dissipative and conservative subsets, respectively. That is, for any positive measure set $S\subset \Omega_d$ (resp.\ $S\subset \Omega_c$), the Haar measure of $\{n\in N: xn\in S\}$ is finite (resp.\ infinite) for almost all $x\in S$ (see~\cite{Kr}). Consider the ergodic decomposition of $\mu$. By Theorem \ref{main}, any ergodic conservative component in the ergodic decomposition of $\mu$ must be one of the measures described in Theorem \ref{main}(2) and \ref{main}(3). 
Now $\mu$ gives measure zero to sets of the form \[ (x_1,x_2)\Delta(G)(N\times\{e\}) \] where $(x_1,x_2)\Delta(G)$ is a closed orbit. Moreover, there are only countably many closed $\Delta(G)$-orbits in $Z$. Also, any closed $N$-orbit $x_2N$ gives rise to the family $x_2NA$ of closed $N$-orbits, where $A$ is the diagonal subgroup. There are only finitely many such $AN$-orbits in $X_2$, as $\Gamma_2$ is geometrically finite and hence there are only finitely many $\Gamma_2$-orbits of parabolic limit points. Therefore $m^{\BR}_{\Gamma_2}$ gives zero measure to the set of all closed $N$-orbits in $X_2$. It follows that $\Omega_c$ is trivial and hence the product measure $m^{\operatorname{Haar}}\times m^{\BR}_{\Gamma_2}$ is completely dissipative. This is a contradiction since $X_1$ is compact and $m^{\BR}_{\Gamma_2}$ is $N$-conservative. This proves Theorem \ref{ergg}. \qed
\section{Introduction} Drones, especially quadrotors, have been transformed by enthusiasts into spectacular racing platforms. After years of development, drone racing has become a major e-sport, in which racers fly their drones through a preset course at high speed. It has been reported that an experienced first person view (FPV) racer can achieve speeds of up to $190 km/h$ when sufficient space is available. The quadrotor itself uses an inertial measurement unit (IMU) to determine its attitude and rotation rates, allowing it to execute the human's steering commands. The human mostly looks at the images and provides the appropriate steering commands to fly through the track as fast as possible. Advances in research areas such as computer vision, artificial intelligence and control raise the question: would drones not be able to fly faster than human pilots if they flew completely by themselves? Until now, this has remained an open question. In 2016, the world's first autonomous drone race was held at IROS 2016 \cite{moon2017iros}, and it has become an annual event trying to answer this question (Figure \ref{fig:tracks}). We focus on developing computationally efficient algorithms and extremely lightweight autonomous racing drones that have the same or even better performance than currently existing larger drones. We believe that these drones may be able to fly faster, as the gates will be relatively larger for them. Moreover, a cheap, lightweight solution to drone racing would allow many people to use autonomous drones for training their racing skills. When autonomous racing drones become small enough, people may even practice with such drones in their own homes. \begin{figure} [hbt!] 
\centering \subfigure[IROS 2016 drone race track]{ \includegraphics[width=.3\textwidth]{introduction/IROS2016track.jpg} } \subfigure[IROS 2017 drone race track]{ \includegraphics[width=.3\textwidth]{introduction/IROS2017track.jpg} } \subfigure[IROS 2018 drone race track]{ \includegraphics[width=.3\textwidth]{introduction/IROS2018_track.pdf} } \caption{The IROS autonomous drone race track over the years 2016 - 2018 (a-c). The rules have always been the same. Flight is to be fully autonomous, so there can be no human intervention. The drone that passes through most subsequent gates in the track wins the race. When the number of passed gates is the same, or the track is fully completed, the fastest drone wins the race.} \label{fig:tracks} \end{figure} Autonomous drone racing is indebted to earlier work on agile flight. Initially, quadrotors made agile maneuvers with the help of external motion capture systems \cite{mellinger2011minimum,mellinger2012trajectory}. The most impressive feats involved passing at high speed through gaps and circles. More recently, various researchers have focused on bringing the necessary state estimation for these maneuvers onboard. Loianno et al. plan an optimal trajectory through a narrow gap with difficult angles while using Visual Inertial Odometry (VIO) for navigation \cite{loianno2017estimation}. Their drone achieves an average maximum speed of $4.5m/s$. However, the position of the gap is known accurately a priori, so no gap detection module is included in their research. Falanga et al. study flying a drone aggressively through a gap that is detected with fully onboard resources \cite{falanga2017aggressive}. They fuse the pose estimation from the detected gap and onboard sensors to estimate the state. In their experiment, the platform with a forward-facing fish-eye camera can fly through the gap at $3 m/s$. Sanket et al. 
develop a solution for a drone to fly through arbitrarily shaped gaps without building an explicit 3D model of a scene, using only a monocular camera \cite{sanket2018gapflyt}. Drone racing represents a larger, even more challenging problem than performing short agile flight maneuvers, because: (1) all sensing and computing has to happen on board; (2) passing one gate is not enough, since drone races can contain complex trajectories through many gates, requiring good estimation and (optimal) control also on the longer term; and (3) depending on the race, gate positions can change, other obstacles than gates can be present, and the environment is much less controlled than an indoor motion tracking arena. One category of strategies for autonomous drone racing is to have an accurate map of the track, where the gates have to be in the same place. One of the participants of the IROS 2017 autonomous drone race, the Robotics and Perception Group, reached gate $8$ in $35s$. In their approach, waypoints were set using the pre-defined map and VIO was used for navigation. A depth sensor was used for aligning the track reference system with the odometry reference system. Researchers at NASA's JPL report that their drone can finish their race track in a similar amount of time as a professional pilot. In their research, a visual-inertial localization and mapping system is used for navigation, and an aggressive trajectory connecting waypoints is generated to finish the track \cite{morrell2018differential}. Gao et al. propose a teach-and-repeat solution for drone racing \cite{gao2019optimal}. In the teaching phase, the surrounding environment is reconstructed and a flight corridor is found. The trajectory can then be optimized within the corridor and tracked during the repeating phase. In their research, VIO is employed for pose estimation and the speed can reach $3m/s$. However, this approach is sensitive to changing environments. 
When the position of a gate is changed, the drone has to learn the environment again. The other category of strategies for autonomous drone racing employs coarser maps and is more oriented towards gate detection. This category is more robust to displacements of gates. The winner of the IROS 2016 autonomous drone race, the Unmanned Systems Research Group, uses a stereo camera for detecting the gates \cite{jung2018direct}. When the gate is detected, a waypoint is placed in the center of the gate and a velocity command is generated to steer the drone to be aligned with the gate. The winner of the IROS 2017 autonomous drone race, the INAOE team, uses metric monocular SLAM for navigation. In their approach, relative waypoints are set and the detection of the gates is used to correct the drift of the drone \cite{moon2019challenges}. Li et al. combine gate detection with onboard IMU readings and a simplified drag model for navigation \cite{li2018autonomous}. With their approach, a Parrot Bebop 1 ($420g$) can use its native onboard camera and processor to fly through $15$ gates at $1.5m/s$ along a narrow track in a basement full of exhibits. Kaufmann et al. use a trained Convolutional Neural Network (CNN) to map the input images to the desired waypoint and the desired speed to approach it \cite{kaufmann2018deep}. With the generated waypoint, a trajectory through the gate can be determined and executed while VIO is used for navigation. The winner of the IROS 2018 autonomous drone race, the Robotics and Perception Group, finished the track at $2 m/s$ \cite{kaufmann2018beauty}. During the flight, the relative position of the gates and a corresponding uncertainty measure are predicted by a CNN. With the estimated position of the gate, waypoints are generated, and a model predictive controller (MPC) is used to control the drone to fly through the waypoints while VIO is used for navigation. 
From the research mentioned above, it can be seen that many strategies for autonomous drone racing are based on generic, but computationally relatively expensive navigation methods such as VIO or SLAM. These methods require heavier and more expensive processors and sensors, which leads to heavier and more expensive drone platforms. Forgoing these methods could lead to a considerable gain in computational effort, but raises the challenge of still obtaining fast and robust flight. In this paper, we present a solution to this challenge. In particular, we propose a Visual Model-predictive Localization (VML) approach to autonomous drone racing. The approach does not use generic vision methods such as VIO and SLAM and is still robust to gate changes, while reaching speeds competitive with the currently fastest autonomous racing drones. The main idea is to rely as much as possible on a predictive model of the drone dynamics, while correcting the model and localizing the drone visually based on the detected gates and their supposed positions in the global map. To demonstrate the efficiency of our approach, we implement the proposed algorithms on a cheap, commercially available smart camera called ``Jevois'' and mount it on the ``Trashcan'' racing drone. The modified Trashcan weighs only $72g$ and is able to fly the race track at high speed (up to $2.6m/s$). The vision-based navigation and high-level controller run on the Jevois camera, while the low-level controller provided by the open source Paparazzi autopilot \cite{gati2013open,hattenberger2014using} runs on the Trashcan. To the best of our knowledge, the presented drone is the smallest and one of the fastest autonomous racing drones in the world. Figure \ref{fig:weight and speed comparision} shows the weight and the speed of our drone in comparison to the drones of the winners of the IROS autonomous drone races. \begin{figure} [hbt!] 
\centering \includegraphics[scale = 0.6,trim={0cm 7cm 0cm 8cm},clip]{introduction/Weight_comparison.pdf} \caption{The weight and the speed of the approach proposed in this article and those of the winners of the IROS autonomous drone races. All weights are either directly from the articles or estimated from online specs of the used processors.} \label{fig:weight and speed comparision} \end{figure} \section{Problem Formulation and System Description} \subsection{Problem Formulation} In this work, we develop a hardware and software system that enables a flying platform to fly through a drone race track fully autonomously at high speed using only onboard resources. The race track setup can be changed, and the system should adapt to such changes autonomously. For visual navigation, instead of using SLAM or VIO, we directly use a computationally efficient vision algorithm for the detection of the racing gates to provide position information. However, implementing such a vision algorithm on low-grade vision and processing hardware results in low-frequency, noisy detections with occasional outliers. Thus, a filter should be employed to still provide high-frequency and accurate state estimation. In Section~\ref{lab:MHE}, we first briefly introduce the ``Snake Gate Detection'' method and a pose estimation method used to provide position measurements. Then, we propose and analyze the novel visual model-predictive localization technique that estimates the drone's states within a time window. It fuses the low-frequency onboard gate detections and high-frequency onboard sensor readings to estimate the position and the velocity of the drone. The control strategy to steer the drone through the race track is also discussed. The simulation results in Section~\ref{lab:simulation result} show the comparison between the proposed filter and the Kalman filter in different scenarios with outliers and delay. 
In Section~\ref{sec:Experiment Result}, we present flight experiments in which the drone flies through a race track with displaced gates, gates at different altitudes, and a gate that moves during the flight. In Section \ref{sec:Discussion}, the generalization and the limitations of the proposed method are discussed. Section~\ref{lab:conclusion} concludes the article. \subsection{System Overview} To illustrate the efficiency of our approach, we use a small racing drone called Trashcan (Figure \ref{fig:trashcan_jevois}). This racing drone is designed for FPV racing with the Betaflight flight controller software. In our case, to fly the Trashcan autonomously, we replaced Betaflight with the Paparazzi open source autopilot for its flexibility in adding custom code, stable communication with the ground for testing code, and active maintenance by the research community. In this article, the Paparazzi software only provides the low-level controller. The main loop frequency is $2\,$kHz. We employ a basic complementary filter for attitude estimation, and the attitude control loop is a cascaded control consisting of a rate loop and an attitude loop. For each loop, a P-controller is used. The details of the Trashcan's hardware can be found in Table \ref{tab:specifications of Trashcan}. \begin{figure} [hbt!] \centering \includegraphics[scale=0.06,trim={0cm 0cm 0cm 0cm},clip]{experiment/Trancan_mavlab.jpeg} \caption{The flying platform. The Jevois is mounted on the Trashcan. The Trashcan provides power to the Jevois and they communicate with each other via the MAVLink protocol. 
The weight of the whole platform is only $72g$.} \label{fig:trashcan_jevois} \end{figure} \begin{table}[H] \caption{The specifications of the Trashcan's hardware} \centering \begin{tabular}{|c|c|} \hline Weight & $48g$ (with the original camera) \\ \hline Size & $98mm\times98mm\times36mm$ \\ \hline Motor & TC0803 KV15000 \\ \hline MCU & STM32F4 ($100$MHz) \\ \hline Receiver & FrSky D16 \\ \hline \end{tabular} \label{tab:specifications of Trashcan} \end{table} For the high-level vision, flight planning and control tasks, we use a lightweight smart camera ($17g$) called Jevois, which is equipped with a quad-core ARM Cortex A7 processor and a dual-core Mali-400 GPU. In our experiment, there are two threads running on the Jevois, one for vision detection and the other for filtering and control (Figure \ref{fig:two threads}(a)). In our case, the frequency of detecting gates ranges from $10$Hz to $30$Hz, and the frequency of filtering and control is set to $512$Hz. The gate detection thread processes the images in sequence. When it detects a gate, it sends a signal telling the other thread that a gate has been detected. The control and filtering thread keeps predicting the states and calculating control commands at high frequency. It uses a novel filtering method, explained in Section \ref{lab:MHE}, for estimating the state based on the IMU and the gate detections. \begin{figure} [hbt!] \centering \subfigure[The two-thread structure running on the Jevois. For the gate detection thread, the frequency of gate detection ranges from $10$Hz to $30$Hz, while the frequency of the control and filtering thread is $512$Hz]{\includegraphics[scale=0.45,trim={0cm 0cm 0cm 0cm},clip]{experiment/two_threads.pdf}} \hspace{1cm} \\ \subfigure[The software architecture of the UAV platform. The vision detection, filtering and control all run on the Jevois. 
Paparazzi provides the low-level controller to stabilize the drone]{\includegraphics[scale=0.5,trim={0cm 0cm 0cm 0cm},clip]{experiment/architecture.pdf}} \caption{The architectures of the software on the Jevois and the software of the whole flying platform} \label{fig:two threads} \end{figure} The communication between the Jevois and the Trashcan is based on the MAVLink protocol with a baud rate of $115200$. The Trashcan sends the AHRS estimate at a frequency of $512$Hz, and the Jevois sends the attitude and altitude commands to the Trashcan at a frequency of $200$Hz. The software architecture of the flying platform can be found in Figure \ref{fig:two threads}(b). In Figure \ref{fig:two threads}(b), the gate detection and pose estimation module first detects the gate and estimates the relative position between the drone and the gate. Next, the relative position is sent to the gate assignment module to be transformed into a global position. The proposed VML filter then fuses the global position measurements with the onboard AHRS readings to obtain accurate position and velocity estimates. Then, the flight plan and high-level controller calculate the desired attitude commands to steer the drone through the whole track. These attitude commands are sent to the drone via the MAVLink protocol. On the Trashcan drone, Paparazzi provides the low-level controller to stabilize the drone. \section{Robust Visual Model-predictive Localization (VML) and Control} \label{lab:MHE} State estimation is an essential part of drones' autonomous navigation. For outdoor flight, fusing a GPS signal with onboard inertial sensors is a common way to estimate the pose of the drone \cite{santana2015outdoor}. However, for indoor flight, a GPS signal is no longer available. 
Thus, off-board cameras \cite{lupashin2014platform}, Ultra Wide Band range beacons \cite{mueller2015fusing} or onboard cameras \cite{mcguire2017efficient} can be used to provide position or velocity measurements for the drone. The accuracy and time-delay of these infrastructure setups differ from each other. Hence, the sensing setup affects which type of filtering is best for each situation. The most commonly used state estimation technique in robotics is the Kalman filter and its variants, such as the Extended Kalman filter \cite{weiss2012versatile,santamaria2018autonomous,gross2012flight}. However, the racing scenario has properties that make it challenging for a Kalman filter. Position measurements from gate detections are often subject to outliers, have non-Gaussian noise, and can arrive at a low frequency. This makes the typical Kalman filter approach unsuitable, because it is sensitive to outliers, is optimal only for Gaussian noise, and can converge slowly when few measurements arrive. In this section, we propose a visual model-predictive localization technique which is robust to low-frequency measurements with significant numbers of outliers. Subsequently, we present the control strategy for the autonomous drone race. \subsection{Gate assignment} In this article, we use the ``snake gate detection'' and pose estimation technique of Li et al. \cite{li2018autonomous}. The basic idea of snake gate detection is to search for contiguous pixels with the target color in order to find the four corners of the gate. Subsequently, a perspective $n$-point (PnP) problem is solved, using the positions of the four corners in the image plane, the camera's intrinsic parameters, and the attitude estimate, to obtain the relative position between the drone and the $i^{th}$ gate at time $k$, $\Delta \bar{\mathbf{x}}_k^i = [\Delta \bar{x}_k^i, \Delta \bar{y}_k^i]$. 
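As an illustration of this pose-estimation step, a minimal pinhole-model sketch is given below. The focal length, principal point, and gate width are illustrative assumptions (not the Jevois calibration), and the gate is assumed fronto-parallel for simplicity; the actual method also uses the attitude estimate to handle oblique views.

```python
# Minimal sketch of recovering the drone-gate relative position from four
# detected corners. All numbers are illustrative assumptions, NOT the real
# camera calibration or gate size.
F_PX = 300.0    # assumed focal length [pixels]
CX = 160.0      # assumed principal point column [pixels]
GATE_W = 1.4    # assumed physical gate width [m]

def relative_position(corners_px):
    """corners_px: four (u, v) pixel coordinates of the gate corners.
    Returns (forward, lateral) distances in meters, assuming the gate is
    fronto-parallel to the camera."""
    us = [u for u, _ in corners_px]
    u_center = sum(us) / len(us)                 # gate center column
    w_px = max(us) - min(us)                     # apparent width [pixels]
    forward = GATE_W * F_PX / w_px               # similar triangles: Z = W*f/w
    lateral = forward * (u_center - CX) / F_PX   # back-project pixel offset
    return forward, lateral

# A gate appearing 150 px wide, centered 30 px right of the principal point:
corners = [(115, 50), (265, 50), (265, 200), (115, 200)]
forward, lateral = relative_position(corners)  # about 2.8 m ahead, 0.28 m right
```

This fronto-parallel shortcut illustrates why the four corner positions, the intrinsics, and (in the full method) the attitude suffice to recover the relative position.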
Figure \ref{fig:gate detection} shows this procedure, which is explained in more detail in \cite{li2018autonomous}. In most cases, when the light is even and the camera's auto-exposure works properly, the gate in the image is continuous and the snake gate detection algorithm can detect the gate correctly. However, after an aggressive turn, such as a turn towards a window, the camera cannot adapt to the new light conditions immediately. In this case, snake gate detection usually cannot detect the gate. Another failure case occurs when uneven light conditions or similar colors in the background interfere with the detection. These situations can cause the search to stop in the middle of a bar or at background pixels. Although we have mechanisms to prevent these false positive detections, there is still a small chance that a false positive occurs. The negative effect is that outliers may appear, which poses a challenge for the filter and the controller. \begin{figure} [hbt!] \centering \subfigure[Snake gate detection. From one point on the gate, $P_0$, the snake gate detection method first searches up and down, then left and right, to find all four corners of the gate]{ \includegraphics[scale = 0.3,trim={0cm 0 0cm 0},clip]{MHE/SNG_step1.png}} \hspace{2cm} \subfigure[When the four corners of the gate are found, the relative position between the drone and the gate is calculated from the corners' positions, the camera's intrinsic parameters and the current attitude estimate]{ \includegraphics[width=.40\textwidth,trim={0cm 0 0cm 0},clip]{MHE/4_rays_meet.png} } \caption{The snake gate detection and pose estimation methods \cite{li2018autonomous}} \label{fig:gate detection} \end{figure} Since for any race a coarse map of the gates is given a priori (cf. Figure \ref{fig:tracks}), the position and heading of gate $i$, $\mathbf{x}_g^i=[x_g^i,y_g^i,\psi_g^i]$, are roughly known (Figure \ref{fig:gate mismatch}). 
We use the gates' positions to transform the relative position $\Delta \mathbf{\bar{x}}_k^i$ measured by the camera into a global position $\mathbf{\bar{x}}_k = [\bar{x}_k, \bar{y}_k]$ by Equation~\ref{equ:local2global}, in which $x_g^i$, $y_g^i$ and $\psi_g^i$ are the position and heading of gate $i$, known from the map. \begin{align} \begin{bmatrix} \bar{x}_k \\ \bar{y}_k \end{bmatrix} = \begin{bmatrix} x_g^i \\ y_g^i \end{bmatrix}+\begin{bmatrix} \cos{\psi_g^i} & \sin{\psi_g^i} \\ -\sin{\psi_g^i} & \cos{\psi_g^i} \end{bmatrix} \begin{bmatrix} \Delta \bar{x}_k^i \\ \Delta \bar{y}_k^i \end{bmatrix} \label{equ:local2global} \end{align} Here, we assume that the position of the gate is fixed. Any error experienced in the observations is then assumed to be due to estimation drift on the part of the drone. Namely, without generic VIO, it is difficult to distinguish between drone drift and gate displacements. If the displacements of the gates are moderate, this approach will work: after passing a displaced gate, the drone will see the next gate and correct its position again. We only need a very rough map with the supposed global positions of the gates (Figure \ref{fig:gate mismatch}). Gate displacements only become problematic if, after passing gate $i$, gate $i+1$ would not be visible when following the path from the expected position of gate $i$ to gate $i+1$. \begin{figure} [hbt!] \centering \includegraphics[scale=0.6,trim={0cm 0cm 0cm 0cm},clip]{gate_assignment/global_position_map.pdf} \caption{The gates are displaced. The drone uses the gate's position on the map to navigate. After passing through the first gate, it will use the second gate's position on the map for navigation. After seeing the second gate, the position of the drone will be corrected.} \label{fig:gate mismatch} \end{figure} At the IROS drone race, gates are identical, so for our position to be estimated well, we need to assign a detection to the right gate. 
For this, we rely on our current estimated global position $\hat{\mathbf{x}}_{k} = [\hat{x}_k,\hat{y}_k]$. When a gate is detected, we go through all the gates on the map, using equation \ref{equ:local2global} to calculate the predicted position $\bar{\mathbf{x}}_k^i=[\bar{x}_k^i,\bar{y}_k^i]$. Then, we calculate the distance between the predicted drone position ${\bar{\mathbf{x}}}_k^i$ and its estimated position ${\hat{\mathbf{x}}}_{k}$ at time $t_k$ by \begin{align} \Delta d_k^i = \norm{\bar{\mathbf{x}}_k^i - \hat{\mathbf{x}}_{k}}_2 \end{align} After going through all the gates, the gate with the predicted position closest to the estimated drone position is considered to be the detected gate. At time $t_k$, the measurement position is determined by \begin{align} \begin{split} &j = \operatorname*{argmin}_i\Delta d_k^i \\ &\bar{\mathbf{x}}_k = \bar{\mathbf{x}}_k^j \end{split} \end{align} \begin{figure} [hbt!] \centering \subfigure[The algorithm iterates through all gates, evaluating where the drone would be if it were observing each of them. The position closest to the current global position estimate is chosen as the right observation.]{ \includegraphics[width=.45\textwidth,trim={5cm 0cm 5cm 0},clip]{gate_assignment/gate_assignment_1.pdf}} \hspace{0.5cm} \subfigure[The drone detects another gate instead of the one to be flown through. This still helps state estimation, as the observed gate indeed gives the estimate closest to the current estimated global position. ]{ \includegraphics[width=.45\textwidth,trim={5cm 0 5cm 0},clip]{gate_assignment/gate_assignment_2.pdf} } \caption{In most cases the drone will detect the next gate in the race track. However, the proposed gate assignment strategy also allows exploiting detections of other gates.} \label{fig:gate_assignment} \end{figure} The gate assignment technique helps us obtain as much information on the drone's position as possible when a gate is detected.
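The nearest-gate assignment described above amounts to a small argmin loop; a minimal Python sketch (the interface is ours, for illustration):

```python
import math

def assign_gate(gates, delta, est_pos):
    """Assign a detection to the map gate whose implied drone position
    is closest to the current position estimate.

    gates:   list of (x_g, y_g, psi_g) map poses.
    delta:   (dx, dy) relative measurement from the camera.
    est_pos: (x_hat, y_hat) current estimated global position.
    Returns (gate_index, (x_bar, y_bar)) for the best match.
    """
    x_hat, y_hat = est_pos
    dx, dy = delta
    best = None
    for i, (x_g, y_g, psi) in enumerate(gates):
        # predicted global position if this were the observed gate
        xb = x_g + math.cos(psi) * dx + math.sin(psi) * dy
        yb = y_g - math.sin(psi) * dx + math.cos(psi) * dy
        d = math.hypot(xb - x_hat, yb - y_hat)
        if best is None or d < best[0]:
            best = (d, i, (xb, yb))
    return best[1], best[2]
```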
Namely, it can also use detections of gates other than the next gate, and it allows using multiple gate detections at the same time to improve the estimation. Still, this procedure will always output a global coordinate for any detection. As a consequence, false positive or inaccurate detections can occur and have to be dealt with by the state estimation filter. \subsection{Visual Model-predictive Localization (VML)} The racing drone envisaged in this article has a forward-looking camera and an Inertial Measurement Unit (IMU). As explained in the previous section, the camera is used for localization in the environment, with the help of gate detections. Using a typical, cheap CMOS camera will result in relatively slow position updates from the gate detection, with occasional outliers. The IMU can provide high-frequency and quite accurate attitude estimation by means of an Attitude and Heading Reference System (AHRS). The accelerations can also be used to predict the change in the translational velocities of the drone. In traditional \emph{inertial} approaches, the accelerations would be integrated. However, for smaller drones the accelerometer readings become increasingly noisy, because there are fewer possibilities to dampen the autopilot's vibrations. Integrating accelerometers is `acceleration stable', meaning that a bias in the accelerometers that is not accounted for can lead to unbounded velocity estimates. Another option is to use the accelerometers to measure the drag on the frame, which - assuming no wind - can be easily mapped to the drone's translational velocity (cf. \cite{li2018autonomous}). Such a setup is `velocity stable', meaning that an accelerometer offset or drag model error leads to a proportional velocity offset, which is bounded. On really small vehicles like the one we use in the experiments, the accelerometers are even too noisy for reliably measuring the drag.
Hence, the proposed approach uses a prediction model that only relies on the attitude estimated by the AHRS, which is an indirect way of using the accelerometers. It uses the attitude and a constant altitude assumption to predict the forward acceleration, and subsequently the velocity, of the drone. The model is corrected from time to time by means of the visual localization. Although the IMU is used for estimating the attitude, it is not used as an inertial measurement for updating translational velocities. This leads to the name of the method: Visual Model-predictive Localization (VML), which will be explained in detail in this subsection. \subsubsection{Prediction Error Model} As mentioned above, the attitude estimated by the AHRS is used to predict the drone's velocity and position. However, due to the AHRS bias and model inaccuracy, the prediction will diverge from the ground truth over time. Fortunately, we have visual gate detections to provide position information. This \emph{vision-based localization} does not integrate the error over time, but it has a low frequency. Figure \ref{fig:MHE} is a sketch of what the onboard predictions and the vision measurements look like. The red curve is the prediction, diverging from the ground truth curve because of AHRS biases. The magenta dots are the low frequency detections, which are distributed around the ground truth. The error between the prediction and the measurements can be modeled as a linear function of time, as will be explained later in this section. When the error model is estimated correctly, it can be used to compensate for the divergence of the prediction to obtain an accurate state estimate. \begin{figure} [hbt!] \centering \includegraphics[scale=0.45]{MHE/MHE.pdf} \caption{Illustrative sketch of the time window $t\in [t_{k-q},t_k]$. At the beginning of this time window, the difference between the ground truth and the prediction is $\Delta x_{k-q}$ and $\Delta v_{k-q}$.
The prediction can be done with high frequency AHRS estimates. The vision algorithm outputs low frequency unbiased measurements. The prediction curve deviates more and more from the ground truth curve over time because of the AHRS bias and model inaccuracy.} \label{fig:MHE} \end{figure} Assuming that there is no wind, and knowing the attitude, we can predict the acceleration along the $x$ and $y$ axes. Figure \ref{fig:Free body diagram} shows the forces the drone experiences. $T_*^*$ denotes the acceleration caused by the thrust of the drone. Together with the pitch angle $\theta$, it provides the forward acceleration. $D_*^*$ denotes the acceleration caused by the drag, which is simplified as a linear function of the body velocity \cite{faessler2017differential} \begin{align} \begin{cases} D^B_x &= c_xv^B_x \\ D^B_y &= c_yv^B_y \end{cases} \end{align} where $c_*$ is the drag coefficient. \begin{figure} [hbt!] \centering \includegraphics[scale=0.6,trim={0cm 6cm 0cm 5cm},clip]{MHE/dynamic_model.pdf} \caption{Free body diagram of the drone. $v_*^*(t)$ is the velocity of the drone. The superscript $E$ denotes the north-east-down (NED) earth frame while $B$ denotes the body frame. $T_*^*$ is the acceleration caused by thrust and $D_*^*$ is the acceleration caused by drag, which is a linear function of the body velocity. $g$ is the gravitational acceleration and $c$ is the drag coefficient, which is positive. $\theta(t)$ is the pitch angle of the drone.
It should be noted that since we use the NED frame, $\theta < 0$ when the drone pitches down.} \label{fig:Free body diagram} \end{figure} According to Newton's second law in the $x$-$z$ plane, \begin{align} \begin{bmatrix} a_x(t) \\ a_z(t) \end{bmatrix}= \begin{bmatrix} 0 \\ g \end{bmatrix}+\Re_B^E(\theta) \begin{bmatrix} 0 \\ T_z^B(t) \end{bmatrix}+\Re_B^E(\theta)\mathbf{D}\Re_E^B(\theta) \begin{bmatrix} v^E_x(t) \\ v^E_z(t) \end{bmatrix} \label{equ:Newton second law} \end{align} Expanding equation \ref{equ:Newton second law}, we have \begin{align} \begin{cases} a_x(t) = \sin{\theta(t)}T^B_z(t)-v^E_x(t)c \\ a_z(t) = \cos{\theta(t)}T^B_z(t)+g-v^E_z(t)c \end{cases} \end{align} where $\Re_E^B(\theta)$ is the rotation matrix and $\mathbf{D}=\begin{bmatrix} -c & 0 \\ 0 & -c \end{bmatrix}$ is the drag coefficient matrix. If the altitude is kept constant, as in the IROS drone race, we have \begin{align} \begin{cases} T^B_z(t) = \frac{-g}{\cos\theta(t)} \\ a_x(t) = -g\tan\theta(t) - v^E_x(t)c \end{cases} \end{align} Since the model in the $y$ axis has the same form as in the $x$ axis, the dynamic model of the quadrotor can be simplified to \begin{align} \begin{cases} \dot{x}(t) &= v^E_x(t) \\ \dot{y}(t) &= v^E_y(t) \\ \dot{v}^E_x(t) &= -g\tan{\theta}(t) - v^E_x(t)c \\ \dot{v}^E_y(t) &= g\tan{\phi}(t) - v^E_y(t)c \end{cases} \label{equ:dynamics model} \end{align} where $x(t)$ and $y(t)$ are the position of the drone, and $\phi$ is the roll angle of the drone. In equation \ref{equ:dynamics model}, the movement along the $x$ and $y$ axes is decoupled. Thus we only analyze the movement along the $x$ axis; the results generalize directly to the $y$ axis.
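For illustration, one forward-Euler step of the simplified model in equation \ref{equ:dynamics model} can be written as follows (a minimal sketch; the drag coefficient $c=0.5$ and the step size are assumed values, not identified parameters):

```python
import math

def predict_state(x, y, vx, vy, theta, phi, g=9.81, c=0.5, dt=0.002):
    """One forward-Euler integration step of the simplified dynamics
    model, driven only by the AHRS attitude (theta, phi in radians).

    Implements x_dot = vx, y_dot = vy,
    vx_dot = -g*tan(theta) - c*vx, vy_dot = g*tan(phi) - c*vy.
    """
    ax = -g * math.tan(theta) - c * vx
    ay = g * math.tan(phi) - c * vy
    return x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt
```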
The nominal model of the drone in the $x$ axis can be written as \begin{align} \dot{\mathbf{x}}^n(t) = \mathbf{A}\mathbf{x}^n(t) + \mathbf{B} u^n(t) \label{equ:nominal model} \end{align} where $\mathbf{x}^n(t) = \begin{bmatrix} x^n(t) \\ v_x^n(t) \end{bmatrix}$, $\mathbf{A} = \begin{bmatrix} 0 & 1 \\ 0 & -c \end{bmatrix}$, $\mathbf{B} = \begin{bmatrix}0 \\ -g\end{bmatrix}$ and $u^n = \tan(\theta)$. The superscript $n$ denotes the nominal model. Similarly, under the assumption that the drag coefficient is accurate, the prediction model can be written as \begin{align} \dot{\mathbf{x}}^p(t) = \mathbf{A}\mathbf{x}^p(t) + \mathbf{B} u^p(t) \label{equ:prediction model} \end{align} where $\mathbf{x}^p(t) = \begin{bmatrix} x^p(t) \\ v_x^p(t) \end{bmatrix}$ and $u^p = \tan(\theta + \theta_b)$. $\theta_b$ is the AHRS bias and is assumed to be constant over a short time. Considering a time window $t\in [t_{k-q},t_k]$, the states of the nominal model at time $t_k$ are \begin{align} \mathbf{x}^n_k = (\mathbf{I}+\mathbf{A}T_s)^{q}\mathbf{x}^n_{k-q}+\sum_{i=1}^{q}(\mathbf{I}+\mathbf{A}T_s)^{i-1}\mathbf{B}T_su^n_{k-i} \end{align} where $T_s$ is the sampling time. The predicted states of model \ref{equ:prediction model} are \begin{align} \mathbf{x}^p_k = (\mathbf{I}+\mathbf{A}T_s)^{q}\mathbf{x}^p_{k-q}+\sum_{i=1}^{q}(\mathbf{I}+\mathbf{A}T_s)^{i-1}\mathbf{B}T_su^p_{k-i} \end{align} Thus, the error between the prediction model and the nominal model can be written as \begin{align} \Delta \mathbf{x}^p_k = (\mathbf{I}+\mathbf{A}T_s)^{q}\left[\mathbf{x}^p_{k-q}-\mathbf{x}^n_{k-q} \right]+ \sum_{i=1}^{q}(\mathbf{I}+\mathbf{A}T_s)^{i-1} T_s\mathbf{B}u_b \label{equ:prediction error} \end{align} where $u_b = (u^p_{k-i}-u^n_{k-i})$ is the input bias, which can be considered constant over a short time.
In equation \ref{equ:prediction error}, \begin{align} (\mathbf{I}+\mathbf{A}T_s)^{i}=\begin{bmatrix} 1 & T_s\sum\limits_{j=1}^{i}(1-cT_s)^{j-1} \\ 0 & (1-cT_s)^{i} \end{bmatrix} \end{align} Since the sampling time $T_s$ is small ($T_s=0.002s$ in our case), we can assume \begin{align} (\mathbf{I}+\mathbf{A}T_s)^{i} \approx \begin{bmatrix} 1 & iT_s \\ 0 & 1 \end{bmatrix} \end{align} Hence, equation \ref{equ:prediction error} can be approximated by \begin{align} \Delta \mathbf{x}^p_k &=(\mathbf{I}+\mathbf{A}T_s)^{q}\left[\mathbf{x}^p_{k-q}-\mathbf{x}^n_{k-q} \right]+\sum_{i=1}^{q}\begin{bmatrix} 1 & iT_s \\ 0 & 1 \end{bmatrix}T_s\mathbf{B}u_b \\ &= \begin{bmatrix} 1 & qT_s \\ 0 & 1 \end{bmatrix}\begin{bmatrix} \Delta x_{k-q}^p \\ \Delta v_{k-q}^p \end{bmatrix}+\begin{bmatrix} q & \frac{q(q+1)}{2}T_s \\ 0 & q \end{bmatrix}T_s\mathbf{B}u_b \label{equ:prediction error 2} \end{align} Expanding equation \ref{equ:prediction error 2}, we have \begin{align} \begin{cases} \Delta x^p_{k} = \Delta x^p_{k-q} + {qT_s}\Delta v^p_{k-q} - \frac{q(q+1)}{2}{T_s^2}g{u_b} \\ \Delta v^p_{k} = \Delta v^p_{k-q} - q{T_s}g{u_b} \end{cases} \label{equ:quad model} \end{align} Note that $qT_s = t_k-t_{k-q}$ is the time span of the time window. If we neglect the $T_s^2$ term, we obtain the prediction error at time $t_k$ \begin{align} \Delta x^p_k = \Delta x^p_{k-q} + (t_k-t_{k-q})\Delta v^p_{k-q} \label{equ:linear_regression_model} \end{align} Thus, within a time window, the state estimation problem can be transformed into a linear regression problem with model equation \ref{equ:linear_regression_model}, where $\mathbf{\hat{\beta}}=[\Delta x^p_{k-q},\Delta v^p_{k-q}]^{\rm T}$ are the parameters to be estimated. From equation \ref{equ:linear_regression_model}, we can see that within a short time window, the AHRS bias hardly affects the prediction error. The error is mainly caused by the initial prediction error $\Delta x^p_{k-q}$.
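The closed-form expression for $(\mathbf{I}+\mathbf{A}T_s)^{i}$ and the small-$T_s$ approximation above can be checked numerically; a small sketch (the values of $c$, $T_s$ and $i$ are assumptions for illustration):

```python
import numpy as np

# Verify the closed form of (I + A*Ts)^i for A = [[0, 1], [0, -c]]
# against a direct matrix power, and inspect the small-Ts approximation.
c, Ts, i = 0.5, 0.002, 200
M = np.eye(2) + Ts * np.array([[0.0, 1.0], [0.0, -c]])
Mi = np.linalg.matrix_power(M, i)

# closed-form expression from the derivation
closed = np.array([
    [1.0, Ts * sum((1 - c * Ts) ** (j - 1) for j in range(1, i + 1))],
    [0.0, (1 - c * Ts) ** i],
])
# small-Ts approximation used to obtain the linear error model
approx = np.array([[1.0, i * Ts], [0.0, 1.0]])
```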
Furthermore, the velocity error $\Delta v^p_{k-q}$ causes the prediction error to diverge over time. If the time window is updated frequently, model \ref{equ:linear_regression_model} remains accurate enough. Hence, in this work, we focus on the main contributors to the prediction error and do not estimate the bias term. The next question is how to efficiently and robustly estimate $\Delta x^p_{k-q}$ and $\Delta v^p_{k-q}$. In this simplified linear prediction error model, we use the constant altitude assumption to approximate the thrust $T^B_z$ on the drone, which may make the model less accurate. During the flight, this assumption may be violated by aggressive maneuvers along the $z$ axis. However, if the maneuvers along the $z$ axis are not very aggressive and the time window is small (in our case less than $2s$), the inaccuracy of the prediction error model can be kept within an acceptable range. In the simulation and the real-world experiments shown later, we will show that even though the altitude of the drone changes by $1m$ in $2s$, the proposed filter can still be very accurate under this assumption. Another way to improve the model accuracy would be to estimate the thrust by fusing the accelerometer readings and the rotor speeds, which requires a model of the rotors. It should also be noted that we neglect the $T_s^2$ term in equation \ref{equ:quad model} to obtain a linear model. To increase the model accuracy, the prediction error model could be extended to a quadratic model. In our case, since the time window is small, the linear model is accurate enough. \subsubsection{Parameter Estimation Method} The classic way to solve the linear regression problem based on equation \ref{equ:linear_regression_model} is to use the Least Square Method (LS Method) with all the data within the time window to estimate the parameters $\hat{\mathbf{\beta}}$.
\begin{align} \mathbf{\hat{\beta}} = (\mathbf{X}^{\rm T}\mathbf{X})^{-1}\mathbf{X}^{\rm T}\mathbf{Y} \label{equ:solution of LS} \end{align} where \begin{align*} \mathbf{\hat{\beta}} &= \begin{bmatrix} \Delta x^p_{k-q} & \Delta v^p_{k-q} \end{bmatrix}^{\rm T}, \mathbf{X} = \begin{bmatrix} 1 & t_{k-q}-t_{k-q} \\ 1 & t_{k-q+1} - t_{k-q} \\ \vdots & \vdots \\ 1 & t_k -t_{k-q} \end{bmatrix}, \mathbf{Y} = \begin{bmatrix} \bar{x}_{k-q} - x^p_{k-q} \\ \bar{x}_{k-q+1} - x^p_{k-q+1} \\ \vdots \\ \bar{x}_{k} - x^p_{k} \\ \end{bmatrix} \end{align*} The LS Method in equation \ref{equ:solution of LS} gives an optimal unbiased estimate. However, if there are outliers in the time window $t\in [t_{k-q},t_k]$, they are weighted equally in the estimation process. These outliers can significantly affect the estimation result. Thus, to exclude the outliers, we employ random sample consensus (RANSAC) to increase robustness \cite{fischler1981random}. In a time window $t\in [t_{k-q},t_k]$, we first calculate the prediction errors $\Delta \mathbf{x}^p_{k-q,k}=\{\Delta x^p_{k-q+i}|\Delta x^p_{k-q+i}=\bar{x}_{k-q+i} - x^p_{k-q+i},0\leq i \leq q\}$ and the time differences $\Delta \mathbf{t}_{k-q,k} = \{\Delta t_i| \Delta t_i = t_{k-q+i}-t_{k-q},0\leq i \leq q\}$. For each iteration $i$, subsets of $\Delta \mathbf{x}^p_{k-q,k}$ and $\Delta \mathbf{t}_{k-q,k}$ are randomly selected, denoted by $\Delta \mathbf{x}^s_{k-q,k}$ and $\Delta \mathbf{t}^s_{k-q,k}$. The size of the subsets $n^s$ is calculated by $n^s=q\sigma_s$, where $\sigma_s$ is the sampling ratio. We use the subsets $\Delta \mathbf{x}^s_{k-q,k}$ and $\Delta \mathbf{t}^s_{k-q,k}$ to estimate the parameters $\hat{\mathbf{\beta}}_i$ (Figure~\ref{fig:ransac}). \begin{figure} [hbt!]
\centering \includegraphics[scale=0.6,trim={0cm 6cm 0cm 5cm},clip]{fitting_method/ransac.pdf} \caption{In the $i^{th}$ iteration, the data in the time window $t\in [t_1,t_9]$ are randomly sampled into $\Delta \mathbf{t}^s_{k-q,k}$ and $\Delta \mathbf{x}^s_{k-q,k}$. Then the LS Method (equation \ref{equ:solution of LS}) is used to estimate the parameters $\hat{\mathbf{\beta}}_i$. In this example, $\sigma_s=0.4$, which means that $n^s=9\times0.4\approx4$ samples are drawn. } \label{fig:ransac} \end{figure} Once $\hat{\mathbf{\beta}}_i$ is estimated, it is used to calculate the total prediction error $\varepsilon_i$ of all the data in the time window $t\in [t_{k-q},t_k]$ by \begin{align} \varepsilon_i = \sum^k_{j=k-q}\epsilon_j \label{equ:residual} \end{align} where, with the residual $r_j = \norm{\Delta {v^p_{k-q}}_i(\Delta t_j - \Delta t_{k-q})+\Delta {x^p_{k-q}}_i - \Delta x^p_j}_2$, \begin{align} \epsilon_j=\left\{ \begin{array}{@{}ll@{}} r_j, & \text{if}\ r_j < \sigma_{th} \\ \sigma_{th}, & \text{otherwise} \end{array}\right. \end{align} In equation \ref{equ:residual}, if a residual is larger than the threshold $\sigma_{th}$, the threshold is counted as the error, which caps the influence of outliers. After all iterations, the parameters $\hat{\mathbf{\beta}}_i$ with the least total prediction error are selected as the estimated parameters for the time window $t\in [t_{k-q},t_k]$. The pseudo-code of this Basic RANSAC Fitting (BRF) method can be found in Algorithm $2$. With the Basic RANSAC Fitting (BRF) method, the influence of the outliers is reduced, but there is no mechanism to handle over-fitting. For example, in the time window $t\in [t_{k-q},t_k]$, BRF estimates the parameters $\hat{\beta}$ with the minimal error. However, sometimes it sets $\Delta v^p_{k-q}$ to unrealistically high values. This happens when there are few detections in the time window, which may result in inaccurate parameter estimates.
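A minimal Python sketch of the BRF step described above (iteration count, sampling ratio and threshold here are illustrative values, not the tuned ones from the experiments):

```python
import random

def ls_fit(ts, xs):
    """Plain least squares for the line model dx(t) = dx0 + t * dv0."""
    n = len(ts)
    st, sx = sum(ts), sum(xs)
    stt = sum(t * t for t in ts)
    stx = sum(t * x for t, x in zip(ts, xs))
    dv0 = (n * stx - st * sx) / (n * stt - st * st)
    dx0 = (sx - dv0 * st) / n
    return dx0, dv0

def basic_ransac_fit(ts, xs, iters=50, ratio=0.4, sigma_th=0.5, seed=0):
    """Basic RANSAC Fitting: repeatedly fit the line on a random subset
    and keep the parameters with the smallest capped total residual."""
    rng = random.Random(seed)
    n_s = max(2, round(len(ts) * ratio))
    best, best_err = None, float("inf")
    for _ in range(iters):
        idx = rng.sample(range(len(ts)), n_s)
        dx0, dv0 = ls_fit([ts[i] for i in idx], [xs[i] for i in idx])
        # residuals larger than sigma_th only count as sigma_th
        err = sum(min(abs(dx0 + dv0 * t - x), sigma_th)
                  for t, x in zip(ts, xs))
        if err < best_err:
            best, best_err = (dx0, dv0), err
    return best
```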
In reality, the drone flies at a maximum speed of $3m/s$, so the velocity prediction error at the start of the time window $t_{k-q}$ should not be too large. To avoid over-fitting, we add a penalty factor/prior matrix $\mathbf{P}$ to limit $\Delta v^p_{k-q}$ in the fitting process. The loss function can be written as \begin{align} J(\mathbf{\hat{\beta}})= \norm{\mathbf{X}\mathbf{\hat{\beta}}-\mathbf{Y}}_2^2 + \mathbf{\hat{\beta}}^{\rm T}\mathbf{P}\mathbf{\hat{\beta}} \end{align} where \begin{align} \mathbf{P} = \begin{bmatrix} p_x & 0 \\ 0 & p_v \end{bmatrix} \end{align} is the penalty factor/prior matrix. To minimize the loss function, we take the derivative of $J(\mathbf{\hat{\beta}})$ with respect to $\mathbf{\hat{\beta}}$ and set it to $0$ \begin{align} \frac{\partial J(\mathbf{\hat{\beta}})}{\partial \hat{\beta}}=2\mathbf{X}^{\rm T}\mathbf{X}\mathbf{\hat{\beta}}-2\mathbf{X}^{\rm T}\mathbf{Y}+\mathbf{P}\mathbf{\hat{\beta}} + \mathbf{P}^{\rm T}\mathbf{\hat{\beta}}=0 \label{equ:prior} \end{align} Then the estimated parameters are given by \begin{align} \mathbf{\hat{\beta}}=(\mathbf{X}^{\rm T}\mathbf{X}+\mathbf{P})^{-1}\mathbf{X}^{\rm T}\mathbf{Y} \label{equ:solution of prior LS} \end{align} We call the use of equation \ref{equ:solution of prior LS} inside the RANSAC fitting the Prior RANSAC Fitting (PRF). Compared to equation \ref{equ:solution of LS}, PRF includes the penalty factor/prior matrix $\mathbf{P}$. By tuning the matrix $\mathbf{P}$ we can add prior knowledge about the parameter distribution. For example, in our case $\Delta v^p_{k-q}$ should not be large. Thus, we can increase $p_v$ in $\mathbf{P}$ to limit the value of $\Delta v^p_{k-q}$. To conclude, in this part we propose three methods for estimating the parameters $\mathbf{\hat{\beta}}$. The first one is the LS Method, which weights all the data in a time window equally. The second is the Basic RANSAC Fitting method (BRF), which has a mechanism to exclude outliers.
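Equation \ref{equ:solution of prior LS} is a ridge-type regularized least squares and is straightforward to implement; a small sketch (the interface is ours; $p_v=0.3$ matches the prior matrix used later in the simulations):

```python
import numpy as np

def prior_ls_fit(ts, xs, p_x=0.0, p_v=0.3):
    """Regularized least squares: beta = (X^T X + P)^{-1} X^T Y,
    with P = diag(p_x, p_v) penalising large velocity-error estimates.
    ts: time differences within the window, xs: prediction errors."""
    X = np.column_stack([np.ones(len(ts)), np.asarray(ts)])
    Y = np.asarray(xs)
    P = np.diag([p_x, p_v])
    return np.linalg.solve(X.T @ X + P, X.T @ Y)
```

With $\mathbf{P}=\mathbf{0}$ this reduces to the plain LS solution of equation \ref{equ:solution of LS}.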
The third one is the Prior RANSAC Fitting method (PRF), which not only excludes outliers but also takes prior knowledge into account to avoid over-fitting. In the next section, we will discuss and compare these three methods in simulation to see which one is the most suitable for our drone race scenario. \subsubsection{Prediction Compensation} After the error model (equation \ref{equ:linear_regression_model}) has been estimated in time window $k$, it can be used to compensate the prediction by \begin{align} \begin{bmatrix} \hat{x}_{k+i} \\ \hat{v}_{k+i} \end{bmatrix} =\begin{bmatrix} x^p_{k+i} \\ v^p_{k+i} \end{bmatrix} + \begin{bmatrix} 1 & t_{k+i}-t_{k-q} \\ 0 & 1 \end{bmatrix}\begin{bmatrix}\Delta x^p_{k-q}\\ \Delta v^p_{k-q} \end{bmatrix} \end{align} Also, at each prediction step, the length $\Delta T = t_k-t_{k-q}$ of the time window is checked, since the simplified model \ref{equ:linear_regression_model} is based on the assumption that the time span $\Delta T$ of the time window is small. If $\Delta T$ is larger than the maximum allowed time window size $\Delta T_{max}$, the filter deletes the oldest elements until $\Delta T < \Delta T_{max}$. The pseudo-code of the proposed VML with the LS Method can be found in Algorithm $3$ and Algorithm $4$. \subsubsection{Comparison with Kalman Filter} When it comes to state estimation and filtering techniques, the Kalman filter is the most commonly used method and a natural benchmark. The basic idea of the Kalman filter is that at time $t_{k-1}$, it first predicts the states at time $t_{k}$ together with their error covariance $\mathbf{P}_{k|k-1}$, to obtain prior knowledge of the states at $t_{k}$.
\begin{align} \begin{split} &\hat{\mathbf{X}}_{k|k-1}=\hat{\mathbf{X}}_{k-1}+\mathbf{f}(\hat{\mathbf{X}}_{k-1},\mathbf{u}_{k-1}){\rm T_s} \\ &\mathbf{F}_{k-1}=\frac{\partial}{\partial \mathbf{x}}\mathbf{f}(\mathbf{x}(t),\mathbf{u}(t))|_{{\mathbf{x}(t)=\hat{\mathbf{x}}}_{k-1}} \\ &\Phi_{k|k-1}\approx\mathbf{I}+\mathbf{F}_{k-1}{\rm T_s} \\ &\mathbf{H}_k=\frac{\partial}{\partial \mathbf{x}}\mathbf{h}(\mathbf{x}(t))|_{{\mathbf{x}(t)=\hat{\mathbf{x}}}_{k}} \\ &\mathbf{P}_{k|k-1}=\mathbf{\Phi}_{k|k-1}\mathbf{P}_{k-1}\mathbf{\Phi}_{k|k-1}^{\rm T}+\mathbf{Q}_{k-1} \\ \end{split} \end{align} When an observation arrives, the Kalman filter uses an optimal gain $\mathbf{K}_k$, computed from the prior error covariance $\mathbf{P}_{k|k-1}$ and the observation covariance $\mathbf{R}_k$, to correct the prediction, which results in the minimum error covariance $\mathbf{P}_{k}$. \begin{align} \begin{split} &\delta\hat{\mathbf{X}}_k = \mathbf{K}_k\left \{ \mathbf{Z}_k-\mathbf{h}[\hat{\mathbf{X}}_{k|k-1},k]\right \} \\ &\mathbf{K}_k =\mathbf{P}_{k|k-1}\mathbf{H}_k^{\rm T}[\mathbf{H}_k\mathbf{P}_{k|k-1}\mathbf{H}_k^{\rm T}+\mathbf{R}_k]^{-1} \\ &\hat{\mathbf{X}}_k = \hat{\mathbf{X}}_{k|k-1}+\delta\hat{\mathbf{X}}_k \\ &\mathbf{P}_k=(\mathbf{I}-\mathbf{K}_k\mathbf{H}_k)\mathbf{P}_{k|k-1}(\mathbf{I}-\mathbf{K}_k\mathbf{H}_k)^{\rm T}+\mathbf{K}_k\mathbf{R}_k\mathbf{K}_k^{\rm T} \end{split} \end{align} According to \cite{diderrich1985kalman}, the Kalman filter is a least square estimation made into a recursive process by combining prior data with incoming measurement data. The most obvious difference between the Kalman filter and the proposed VML is that VML is not a recursive method. It does not estimate the states at $t_k$ based only on the states $\mathbf{\hat{x}}_{k-1}$ of the previous step; it estimates the states considering the previous predictions and observations within a time window.
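For reference, a generic EKF predict/update cycle matching the equations above can be sketched as follows (a minimal illustration with a linear measurement model, not the paper's EKF implementation, whose details are in the Appendix):

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, H, Q, R, dt):
    """One generic EKF predict/update cycle.

    f: continuous dynamics x_dot = f(x, u); F: its Jacobian;
    H: (linear) measurement matrix; Q, R: process/measurement covariances.
    """
    # predict: Euler-discretised dynamics and covariance propagation
    x_pred = x + f(x, u) * dt
    Phi = np.eye(len(x)) + F(x, u) * dt
    P_pred = Phi @ P @ Phi.T + Q
    # update with the optimal Kalman gain (Joseph-form covariance)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    I_KH = np.eye(len(x)) - K @ H
    P_new = I_KH @ P_pred @ I_KH.T + K @ R @ K.T
    return x_new, P_new
```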
In the VML approach, we use the least square method within a time window, which looks similar to least square estimation. However, there are two major differences between the two methods. The first is that in the proposed VML, the prediction information is fused into the estimation. Secondly, and most importantly, we estimate the prediction error model $\hat{\beta}$ instead of estimating all the states in the time window, as the least square method would. Thus, VML can handle outliers and delays through its time window mechanism, while it is also computationally more efficient than least square estimation. In Section \ref{lab:simulation result}, we will introduce different variants of the Kalman filter for handling outliers and delays and compare them with VML in terms of estimation accuracy and computational load. \subsection{Flight Plan and High Level Control} With the state estimation method explained above, to fly a racing track we employ a flight plan module, which sets the waypoints that guide the drone through the track, and a two-loop cascade P-controller to execute the reference trajectory (Figure \ref{fig:controller}). \begin{figure} [hbt!] \centering \includegraphics[scale=0.5,trim={0cm 0cm 0cm 0cm},clip]{simulation/controller.pdf} \caption{The flight plan module generates the waypoints for the drone to fly the track. When the distance between the drone and the current waypoint $d < D_{turn}$, the drone starts to turn to the next waypoint while still approaching the current waypoint. When $d < D_{switch\_wp}$, the drone switches the current waypoint to the next one. The cascade P-controller is used for executing the reference trajectory from the flight plan module. The attitude and rate controllers are provided by the Paparazzi autopilot. $k_r$ is a positive constant that adjusts how fast the drone yaws to the setpoint.
In the real world experiments and simulation, we set $k_r=1$.} \label{fig:controller} \end{figure} Usually, the waypoint is just behind the gate. When the distance between the drone and the waypoint is less than a threshold $D_{turn}$, the gate can no longer be detected by our method, and we set the heading of the drone to the next waypoint. This way, the drone starts turning towards the next gate before arriving at the waypoint. When the distance between the drone and the waypoint is within another threshold $D_{switch\_wp}$, the waypoint switches to the next one. With this strategy, the drone does not stop at a waypoint but already accelerates towards the next waypoint, which helps to save time. The workflow of the flight plan module can be found in Algorithm $5$. We employ a two-loop cascade P controller (equation \ref{equ:simulation controller}) to make the drone reach the waypoints and follow the heading reference generated by the flight plan module. The altitude and attitude controllers are provided by the Paparazzi autopilot, and are both two-loop cascade controllers. \begin{align} \mathbf{\Phi}^c(k)=\mathbf{R}_{\psi}\mathbf{K}_v(\mathbf{K}_x(\mathbf{x}^r(k)-\hat{\mathbf{x}}(k))-\hat{\mathbf{v}}(k)) \label{equ:simulation controller} \end{align} where $\mathbf{\Phi}^c(k) = [\phi^c(k), \theta^c(k)]^{\rm T}$, $\mathbf{R}_{\psi}=\begin{bsmallmatrix} \cos(\psi) & -\sin(\psi) \\ \sin(\psi) & \cos(\psi) \end{bsmallmatrix}$, $\mathbf{K}_v=\begin{bsmallmatrix}{k_v}_x & 0 \\ 0 & {k_v}_y \end{bsmallmatrix}$, $\mathbf{K}_x=\begin{bsmallmatrix}k_x & 0 \\ 0 & k_y \end{bsmallmatrix}$, $\mathbf{x}^r(k) = [x^r(k), y^r(k)]^{\rm T}$, $\hat{\mathbf{x}}(k)=[\hat{x}(k),\hat{y}(k) ]^{\rm T}$, $\hat{\mathbf{v}}(k)=[\hat{v}_x(k),\hat{v}_y(k) ]^{\rm T}$.
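Following equation \ref{equ:simulation controller} literally, the two loops can be sketched as follows (an illustrative sketch; the gains here are placeholders, not the tuned values):

```python
import math

def attitude_setpoint(x_r, x_hat, v_hat, psi,
                      kx=(1.0, 1.0), kv=(0.3, 0.3)):
    """Two-loop cascade P controller: the outer position loop produces
    a velocity setpoint, the inner velocity loop an attitude command,
    rotated by the heading psi as in the controller equation."""
    # outer loop: position error -> velocity setpoint; inner loop: P on velocity
    ax = kv[0] * (kx[0] * (x_r[0] - x_hat[0]) - v_hat[0])
    ay = kv[1] * (kx[1] * (x_r[1] - x_hat[1]) - v_hat[1])
    # rotate by R_psi = [[cos, -sin], [sin, cos]]
    phi_c = math.cos(psi) * ax - math.sin(psi) * ay
    theta_c = math.sin(psi) * ax + math.cos(psi) * ay
    return phi_c, theta_c
```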
\section{Simulation Experiments} \label{lab:simulation result} \subsection{Simulation Setup} To verify the performance of VML in the drone race scenario, we first test it in simulation, using an Extended Kalman Filter as a benchmark and comparing both filters to see which one is more suitable at different operating points. We first introduce the drone dynamics model used in the simulation. \begin{align} \begin{split} \begin{bmatrix} \dot{x}\\\dot{y} \\\dot{z} \end{bmatrix} =& \begin{bmatrix} v_x \\ v_y \\v_z \end{bmatrix}\\ \begin{bmatrix} \dot{v}_x \\ \dot{v}_y \\ \dot{v}_z \end{bmatrix} =& \begin{bmatrix} 0 \\ 0 \\ g \end{bmatrix}+ \Re_B^E\begin{bmatrix} 0 \\ 0 \\ T \end{bmatrix}+\Re_B^E\mathbf{K}\Re_E^B\begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix} \\ \begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \\\dot{T} \end{bmatrix} =& \begin{bmatrix} k_{\phi}(\phi^{c}-\phi) \\ k_{\theta}(\theta^{c}-\theta) \\ k_{\psi}(\psi^{c}-\psi) \\ k_{T}(T^{c}-T) \end{bmatrix} \end{split} \label{equ:drone simulation model} \end{align} where $(x, y, z)$ is the position of the drone in the Earth frame, $v_*$ is the velocity of the drone, $g$ is the gravitational acceleration, and $T$ is the acceleration caused by the thrust force. $\phi$, $\theta$, $\psi$ are the three Euler angles of the body frame, and $\Re_B^E$ is the rotation matrix from the body frame to the Earth frame. $\mathbf{K}=\mathrm{diag}([-0.5, -0.5, 0])$ is the simplified first order drag matrix, where the values are based on a linear fit of the drag from real-world data with the Trashcan drone. $\Re_B^E\mathbf{K}\Re_E^B[ v_x\ v_y\ v_z]^{\rm T}$ is the acceleration caused by aerodynamic drag. The last four equations are the simplified first order models of the attitude and thrust controllers, where the proportional feedback factors are $k_{\phi} = 6$, $k_{\theta} = 6$, $k_{\psi} = 5$, $k_{T} = 3$.
Thus, model \ref{equ:drone simulation model} used in the simulation is a nonlinear system with $10$ states $\mathbf{x} = [x, y, z, v_x, v_y, v_z, \phi, \theta, \psi, T]^{\rm T}$ and $4$ inputs $\mathbf{u}=[\phi^{c}, \theta^{c}, \psi^{c}, T^{c}]^{\rm T}$. In this simulation, we use the same flight plan module and high-level controllers discussed in Section \ref{lab:MHE} (Figure \ref{fig:controller}) to generate a ground truth trajectory through a 4-gate square racing track. In this track, the gates have different heights, to test whether altitude changes affect the accuracy of the VML. \begin{table}[H] \caption{The map of the simulated racing track} \centering \begin{tabular}{|c|c|c|c|c|} \hline \centering Gate ID & $x[m]$ & $y[m]$ & $z[m]$ & $\psi[^{\circ}]$ \\ \hline\hline $1$ & $4$ & $0$ & $-1.5$ & $0$ \\ \hline $2$ & $4$ & $4$ & $-2.5$ & $90$ \\ \hline $3$ & $0$ & $4$ & $-1.0$ & $180$ \\ \hline $4$ & $0$ & $0$ & $-1.5$ & $270$ \\ \hline \end{tabular} \end{table} With the ground truth states available, the next step is to generate the sensor readings. In the real world, the AHRS outputs biased attitude estimates because of the accelerometer's bias. To model the AHRS bias, we use a simplified AHRS bias model \begin{align} \begin{bmatrix} \phi_b \\ \theta_b \end{bmatrix} = \begin{bmatrix} \cos{\psi} & \sin{\psi} \\ -\sin{\psi} & \cos{\psi} \end{bmatrix}\begin{bmatrix} B_N \\ B_E \end{bmatrix} \label{equ:AHRS bias model} \end{align} where $\phi_b$ and $\theta_b$ are the AHRS biases on $\phi$ and $\theta$. $B_N$ and $B_E$ are the north and east biases caused by the accelerometer bias, which can be considered constant over a short time. From real-world experiments, they are less than $3^{\circ}$.
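The AHRS bias model of equation \ref{equ:AHRS bias model} is easy to reproduce; a minimal sketch of generating a biased, noisy attitude reading (the interface is ours; the default bias and noise values are the ones used in the simulation):

```python
import math
import random

def simulate_ahrs(phi, theta, psi, B_N=math.radians(-2.0),
                  B_E=math.radians(1.0), sigma=math.radians(0.5),
                  rng=None):
    """Simulated AHRS reading: constant north/east accelerometer biases
    rotated into the body frame by the heading psi, plus Gaussian noise."""
    rng = rng or random.Random(0)
    phi_b = math.cos(psi) * B_N + math.sin(psi) * B_E
    theta_b = -math.sin(psi) * B_N + math.cos(psi) * B_E
    return (phi + phi_b + rng.gauss(0.0, sigma),
            theta + theta_b + rng.gauss(0.0, sigma))
```

As in the figure discussed below, the bias seen on $\phi$ and $\theta$ changes whenever the heading $\psi$ changes.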
Thus, the AHRS reading can be modelled by \begin{align} \begin{bmatrix} \bar{\phi}_k \\ \bar{\theta}_k \end{bmatrix} = \begin{bmatrix} \phi_k \\ \theta_k \end{bmatrix} + \begin{bmatrix} \cos{\psi} & \sin{\psi} \\ -\sin{\psi} & \cos{\psi} \end{bmatrix} \begin{bmatrix} B_N \\ B_E \end{bmatrix} + \begin{bmatrix} \epsilon_{\phi} \\ \epsilon_{\theta} \end{bmatrix} \end{align} where $\epsilon_{*} \sim N(0,\sigma_{*})$ is the AHRS noise; in our simulation we set $\sigma_{*} = 0.5^{\circ}$, $B_N = -2^{\circ}$, $B_E = 1^{\circ}$. To generate vision measurements, we first determine the segment $[u,v]$ of the trajectory where the drone can detect the gate. Then, we calculate the number of detections as $n_v = (t_v-t_u)f_{v}$, where $f_v$ is the detection frequency. Next, we randomly select $n_v$ points between $u$ and $v$ to be vision points. For these points, we generate detection measurements by \begin{align} \begin{bmatrix} \bar{x}_k \\ \bar{y}_k \end{bmatrix} = \begin{bmatrix} x_k \\ y_k \end{bmatrix} + \begin{bmatrix} \epsilon_{x} \\ \epsilon_{y} \end{bmatrix} \label{equ:vision measurement model} \end{align} In equation \ref{equ:vision measurement model}, $\epsilon_{*} \sim N(0,\sigma_{*})$ is the detection noise with $\sigma_*=0.1m$. Among these $n_v$ vision points, we also randomly select a few points as outlier points, which follow the same model as equation \ref{equ:vision measurement model} but with $\sigma_{*}=3m$. In the following simulations, the parameters take the values mentioned in this section unless stated otherwise. The simulated ground truth states and sensor measurements are shown in Figure \ref{fig:simulated gt and sensor}. \begin{figure} [hbt!]
\centering \subfigure[Generated ground truth states and vision measurements in $x-y$ plane]{ \includegraphics[scale=0.4,trim={1cm 6cm 1cm 6cm},clip]{simulation/sensor_3D.pdf}} \subfigure[Generated ground truth position and vision measurements]{ \includegraphics[scale=0.4,trim={1cm 6cm 1cm 6cm},clip]{simulation/sensor_X.pdf} } \caption{In the simulation, the ground truth states are first generated (blue curve). Then, vision measurements and AHRS readings are generated. It can be seen clearly that the bias of the AHRS readings changes with the heading, as on a real drone. Namely, the offset in $\phi$ and $\theta$ changes when $\psi$ changes. This phenomenon is modeled by equation \ref{equ:AHRS bias model}. In this simulation $f_v=30$HZ, $\sigma_x = \sigma_y = \sigma_z=0.1m$.} \label{fig:simulated gt and sensor} \end{figure} \subsection{Simulation result and analysis} \subsubsection{Comparison between EKF, BRF and PRF without outliers} We employ an EKF as a benchmark against which to compare the performance of our proposed filters. The details of the EKF can be found in the Appendix. We first run the simulation at a single operating point, where $f_v=30$HZ, $\sigma_*= 0.1m$ and the probability of outliers $P_{out}=N_{outliers}/N_{detection}=0$. At this operating point, the three filters are run separately. The result is shown in Figure \ref{fig:filter comparision with no outliers}. \begin{figure} [hbt!] \centering \subfigure[Position estimation of EKF, BRF and PRF]{\includegraphics[scale=0.4,trim={2cm 8cm 2cm 8cm},clip]{simulation/revision/position_no_outlier.pdf}} \subfigure[Velocity estimation of EKF, BRF and PRF]{\includegraphics[scale=0.4,trim={2cm 8cm 2cm 8cm},clip]{simulation/revision/velocity_no_outlier.pdf}} \caption{The filtering result of EKF, BRF and PRF. $f_v=50HZ$ and $\sigma_x=\sigma_y = 0.1$. When there are no outliers, the estimates of EKF, BRF and PRF all converge to the ground truth.
In velocity estimation, however, the EKF has a longer startup period than VML, and BRF shows peaks caused by overfitting. To limit this overfitting, in PRF, we add a prior matrix $\mathbf{P} = \begin{bmatrix} 0 & 0 \\ 0 & 0.3 \end{bmatrix}$, and the velocity peaks are significantly smoothed and closer to the ground truth velocity.} \label{fig:filter comparision with no outliers} \end{figure} When there are no outliers, all three filters converge to the ground truth value. However, the EKF has a longer startup period and BRF overfits after turning, leading to unlikely high velocity offsets (the peaks in Figure \ref{fig:filter comparision with no outliers}(b)). This is because, after the turn, the RANSAC buffer is empty. When the first few detections come into the buffer, RANSAC has a larger chance of estimating inaccurate parameters. In PRF, however, we add a prior matrix $\mathbf{P} = \begin{bmatrix} 0 & 0 \\ 0 & 0.3 \end{bmatrix}$ to limit the value of $\Delta v$, and the number of peaks in the velocity estimation is significantly decreased. At the same time, the velocity estimation is closer to the ground truth value. \begin{figure}[hbt!] \centering \subfigure[Estimation error of the filters with different detection frequencies. ]{\includegraphics[scale=0.5,trim={4cm 8cm 4cm 8cm},clip]{simulation/revision/error_bar_no_outiler.pdf}} \subfigure[Calculation time of the filters.]{\includegraphics[scale=0.5,trim={4cm 8cm 4cm 8cm},clip]{simulation/revision/time_no_outiler.pdf}} \caption{The simulation result of the filters. It can be seen that when the detection frequencies are below $20HZ$, the EKF performs better than BRF and PRF. However, when the detection frequencies are higher than $20HZ$, BRF and PRF start performing better than the EKF.
In terms of computation time, the EKF is only slightly affected by the detection frequency, while the computation load of BRF and PRF increases significantly at higher detection frequencies.} \label{fig:performance of the filters without outliers} \end{figure} To evaluate the estimation accuracy of each filter, we first introduce a variable, the average estimation error $\gamma$, as an index of the filter's performance: \begin{align} \gamma = \sqrt{\frac{\sum^N_{i=1}(\hat{x}_i-x_i)^2+(\hat{y}_i-y_i)^2}{N}} \label{equ:filter index} \end{align} where $N$ is the number of sample points on the whole trajectory, $\hat{x}$ and $\hat{y}$ are the states estimated by the filter, and $x$ and $y$ are the ground truth positions generated by the simulation. $\gamma$ captures how much the estimated states deviate from the ground truth states; a smaller $\gamma$ indicates a better filtering result. We use running time to evaluate the computational efficiency of each filter. It should be noted that, since we store all the simulation data for visualization and MATLAB has no mechanism for passing pointers, data access can take considerable computation time. Thus, we only count the running time of the core parts of the filters, which are the prediction and the correction. The results are shown in Figure \ref{fig:performance of the filters without outliers}. In the simulation, the time window in BRF and PRF is set to $1s$ and $5$ iterations are performed in the RANSAC procedure. For each frequency, the filters are run $10$ times separately and their average $\gamma$ and running time are calculated. It can be seen in Figure \ref{fig:performance of the filters without outliers}(a) that when the detection frequency is larger than $30$ HZ, BRF and PRF perform close to the EKF. In terms of calculation time, the EKF is heavier than BRF and PRF when the frequency is lower than $40HZ$.
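As a minimal sketch (the function name is ours), the index $\gamma$ of equation \ref{equ:filter index} is simply the root-mean-square position error over the trajectory:

```python
import numpy as np

def average_estimation_error(x_hat, y_hat, x_true, y_true):
    """Root-mean-square deviation of the estimated (x, y) positions
    from the N ground truth sample points on the trajectory."""
    dx = np.asarray(x_hat) - np.asarray(x_true)
    dy = np.asarray(y_hat) - np.asarray(y_true)
    return np.sqrt(np.mean(dx ** 2 + dy ** 2))

# A constant offset of 3 m in x and 4 m in y gives gamma = 5 m.
gamma = average_estimation_error([3.0, 3.0], [4.0, 4.0],
                                 [0.0, 0.0], [0.0, 0.0])
```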
The EKF is heavier at low frequencies because, during the prediction phase, it not only predicts the states but also calculates the Jacobian matrix and the prior error covariance $\mathbf{P}_{k|k-1}$ at high frequency, while BRF and PRF only perform the state prediction. However, when a detection comes, the EKF performs the correction with a few matrix operations, while BRF and PRF run the RANSAC procedure, which is much heavier. This explains why the EKF's computation load is only slightly affected by the detection frequency, but BRF and PRF's computation load increases significantly with higher detection frequencies. \subsubsection{Comparison between EKF, BRF and PRF with outliers} When outliers appear, the regular EKF can be affected significantly. Thus, outlier rejection strategies are often used within an EKF to increase its robustness. A commonly used method is to use the Mahalanobis distance between the observation and its mean as an index to determine whether an observation is an outlier \cite{chang2014robust,li2016gps}. Thus, in this section, we implement an EKF with outlier rejection (EKF-OR) as a benchmark against which to compare the outlier rejection performance of BRF and PRF. The basic idea of EKF-OR is that the square of the observation's Mahalanobis distance is Chi-square distributed. Hence, when an observation arrives, its Mahalanobis distance is calculated and checked against a threshold $\chi_{\alpha}$. If it exceeds the threshold, the observation is rejected. \begin{figure} [!hbt] \centering \subfigure[When outliers appear, EKF-OR, BRF and PRF can reject them.]{\includegraphics[scale=0.45,trim={2cm 8cm 2cm 8cm},clip]{simulation/revision/position_outliers_normal.pdf}} \subfigure[After a long time of pure prediction, EKF-OR has a large error covariance. Once it meets an outlier, it has a high chance of jumping to it.
As a consequence, the later true positive detections fall beyond the threshold $\chi_{\alpha}$ and EKF-OR treats them as outliers.]{\includegraphics[scale=0.45,trim={2cm 8cm 2cm 8cm},clip]{simulation/revision/position_outliers_diverge.pdf}} \caption{In most cases, EKF-OR, BRF and PRF can reject the outliers. But after a long time of pure prediction, EKF-OR is very vulnerable to outliers, while BRF and PRF still perform well.} \label{fig:EKF-OR normal and diverge} \end{figure} Two examples of the filters rejecting outliers are shown in Figure \ref{fig:EKF-OR normal and diverge}. The first figure shows a common case in which all three filters reject the outliers successfully. However, in some special cases, EKF-OR is vulnerable to outliers. In Figure \ref{fig:EKF-OR normal and diverge}(b), for instance, after a long time of pure prediction, the error covariance $\mathbf{P}_{k|k-1}$ becomes large. Once EKF-OR meets an outlier, it has a high chance of jumping to it. The subsequent true positive detections are then treated as outliers and EKF-OR starts diverging. At the same time, BRF and PRF are more robust to the outliers. The essential reason is that EKF-OR depends on its current state estimate (mean and error covariance) to identify outliers. When the current state estimate is not accurate enough, as after the long period of prediction in our case, EKF-OR loses its ability to identify outliers. In other words, it tends to trust whatever it meets. Worse, after jumping to the outlier, its error covariance becomes smaller, which in turn leads to the rejection of the subsequent true positive detections. For BRF and PRF, however, outliers are determined in a time window that includes history. Thus, after a long period of prediction, when BRF and PRF meet an outlier, they judge it in light of the detections in the past. If there is no other detection in the time window, they will wait for enough detections to make a decision.
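A minimal sketch of the Mahalanobis gating used by EKF-OR follows; the function name and the choice of a 2-D, 95\% threshold are illustrative assumptions of ours:

```python
import numpy as np

def is_outlier(z, z_pred, S, chi2_threshold=5.991):
    """Flag an observation z as an outlier if the square of its
    Mahalanobis distance to the predicted observation z_pred exceeds
    the chi-square threshold; 5.991 is the 95% quantile of the
    chi-square distribution with 2 degrees of freedom
    (a 2-D position measurement)."""
    innov = np.asarray(z, dtype=float) - np.asarray(z_pred, dtype=float)
    d_squared = innov @ np.linalg.solve(S, innov)
    return d_squared > chi2_threshold
```

Note that the test depends on the innovation covariance $S$ derived from the current state estimate: after a long period of pure prediction, $S$ is large, so almost any observation, including an outlier, passes the gate.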
With this time-window mechanism, BRF and PRF become more robust than EKF-OR, especially when EKF-OR's estimate is inaccurate. \begin{figure} [!hbt] \centering \subfigure[Estimation error of the EKF-OR, BRF and PRF with different detection frequencies]{\includegraphics[scale=0.35,trim={4cm 8cm 4cm 8cm},clip]{simulation/revision/error_outliers.pdf}} \subfigure[Partial enlarged drawing of (a)]{\includegraphics[scale=0.35,trim={4cm 8cm 4cm 8cm},clip]{simulation/revision/error_outliers_local.pdf}} \subfigure[ Calculation time of the filters]{\includegraphics[scale=0.35,trim={4cm 8cm 4cm 8cm},clip]{simulation/revision/time_outliers.pdf}} \caption{The estimation error of EKF-OR, BRF and PRF and their calculation time with outliers. EKF-OR has some chance ($15\%$) of diverging, which leads to the high estimation error.} \label{fig:error_and_time_outliers} \end{figure} Figure \ref{fig:error_and_time_outliers} shows the estimation error and the calculation time of the three filters. As stated before, although EKF-OR has a mechanism for dealing with outliers, it can still diverge due to outliers in some special cases. Thus, in Figure \ref{fig:error_and_time_outliers}(a) EKF-OR has a large estimation error at both low and high detection frequencies. In terms of calculation time, there is no significant difference from the non-outlier case. \subsubsection{Filtering result with delayed detection} Image processing and visual algorithms can be very computationally expensive to run onboard a drone, which can lead to significant delay \cite{van2019event,weiss2012versatile}. Many visual navigation approaches ignore this delay and directly fuse the visual measurements with the onboard sensors, which sacrifices the accuracy of the state estimation. A commonly used approach for compensating this vision delay is a modified Kalman filter proposed by Weiss et al. \cite{weiss2012versatile}.
The main idea of this approach, called the EKF delay handler (EKF-DH), is to keep a buffer storing all sensor measurements within a certain time window. At time $t_k$, a vision measurement corresponding to the states at an earlier time $t_s$ arrives and is used to correct the states at time $t_s$. Then, the states are propagated again from $t_s$ to $t_k$ (Figure \ref{fig:sketches delay}(a)). Although updating the covariance matrix is not needed according to \cite{weiss2012versatile}, this approach still requires updating the history of states whenever a measurement arrives, which can be computationally expensive, especially when the delay and the measurement frequency grow. In our case, since we need the error covariance for outlier rejection, it is also necessary to update the history of error covariance matrices, which increases the computation load further. For VML, by contrast, when the measurement arrives, it is first pushed into the buffer. Then, the error model is estimated within the buffer/time window. With the estimated parameter $\hat{\beta}$, the prediction at $t_k$ can be corrected directly, without correcting all the states between $t_s$ and $t_k$ (Figure \ref{fig:sketches delay}(b)). Thus, the computational burden does not increase when delay exists. \begin{figure} [!hbt] \centering \subfigure[The sketch of the EKF-DH proposed in \cite{weiss2012versatile}. When the measurement arrives at $t_k$, EKF-DH first corrects the corresponding states at $t_s$ and then updates the states until $t_k$.]{\includegraphics[scale=0.35,trim={0cm 0cm 0cm 0cm},clip]{simulation/revision/EKF_handling_delay.pdf}} \hspace{1cm} \subfigure[The sketch of VML's mechanism of handling delay. When the measurement arrives, it is pushed to the buffer with the corresponding states. Then, the error model is estimated by the RANSAC approach. Finally, the estimated model is used to compensate the prediction at $t_k$.
There is no need to update all the states between $t_s$ and $t_k$.]{\includegraphics[scale=0.35,trim={0cm 0cm 0cm 0cm},clip]{simulation/revision/VML_handling_delay.pdf}} \caption{The sketches of the delay-handling mechanisms of EKF-DH and VML.} \label{fig:sketches delay} \end{figure} Figure \ref{fig:example delay} shows an example of the simulation result of the three filters when both outliers and delay exist. In this simulation, the visual delay is set to $0.1s$. It can be seen that although there is a lag between the vision measurements and the ground truth, all the filters can estimate accurate states. However, EKF-DH requires much more computational effort. Figure \ref{fig:error_and_time_delay} shows the estimation error and the computation time of the three filters. \begin{figure} [!hbt] \centering \subfigure[Position estimation of the three filters with outliers and delay]{\includegraphics[scale=0.35,trim={1cm 8cm 1cm 8cm},clip]{simulation/revision/pos_delay.pdf}} \hspace{2mm} \subfigure[Velocity estimation of the three filters with outliers and delay]{\includegraphics[scale=0.35,trim={1cm 8cm 1cm 8cm},clip]{simulation/revision/vel_delay.pdf}} \caption{An example of the performance of the three filters when outliers and delay exist.} \label{fig:example delay} \end{figure} In Figure \ref{fig:error_and_time_delay}, we can see that the computation load of EKF-DH increases significantly due to its mechanism of handling delay. EKF-DH also remains sensitive to some outliers, while BRF and PRF can handle them.
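VML's delay handling can be sketched as follows; the linear error model and the function names are illustrative assumptions of ours (in VML the fit is made robust with RANSAC rather than plain least squares):

```python
import numpy as np

def fit_error_model(t_buf, err_buf):
    """Least-squares fit of a linear prediction-error model
    err(t) ~ beta0 + beta1 * t over the buffered time window.
    (VML replaces this plain least-squares fit with RANSAC.)"""
    A = np.column_stack([np.ones_like(t_buf), t_buf])
    beta, *_ = np.linalg.lstsq(A, err_buf, rcond=None)
    return beta

def correct_prediction(x_pred_k, t_k, beta):
    """Correct the current prediction at t_k directly with the fitted
    error model; no states between t_s and t_k are re-propagated."""
    return x_pred_k + beta[0] + beta[1] * t_k
```

Because the delayed measurement only updates the fitted parameters, the cost of a correction is independent of how long ago the corresponding image was taken.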
\begin{figure} [!hbt] \centering \subfigure[Estimation error of the EKF-DH, BRF and PRF with different detection frequencies]{\includegraphics[scale=0.3,trim={2cm 8cm 2cm 8cm},clip]{simulation/revision/error_delay.pdf}} \subfigure[Partial enlarged drawing of (a)]{\includegraphics[scale=0.3,trim={2cm 8cm 2cm 8cm},clip]{simulation/revision/error_delay_local.pdf}} \subfigure[ Calculation time of the filters]{\includegraphics[scale=0.3,trim={2cm 8cm 2cm 8cm},clip]{simulation/revision/time_delay.pdf}} \caption{The estimation error of EKF-DH, BRF and PRF and their calculation time with outliers and delay.} \label{fig:error_and_time_delay} \end{figure} \section{Real-world Experiments} \label{sec:Experiment Result} \subsection{Processing time of each component} Before testing the whole system, we first test on the ground how much time the Snake gate detection, the VML and the controller take when running on a Jevois smart camera. On the ground, we set an orange gate in front of a Jevois camera and measure the time that each component takes. For each image, we start timing when a new image arrives and the Snake gate detection is run, and stop timing when the detection finishes. For VML, in each loop, the timing includes both prediction and correction, regardless of whether there are enough detections for correction. We start counting when the Jevois is powered on. In this test, the vision detection frequency is $15HZ$ and the number of RANSAC iterations in VML is set to $5$. Table \ref{tab:processing time} shows the statistical results of the time each component takes on the Jevois.
\begin{table}[H] \caption{The processing time of each component of the approach running on the Jevois} \centering \begin{tabular}{|c|c|c|} \hline \centering Snake gate detection (each image) & VML (each loop) & Controller (each loop) \\ \hline \hline $17\pm 2.5 ms$ & $0.02\pm 0.15ms$ & $0.01\pm 0.1ms$ \\ \hline \end{tabular} \label{tab:processing time} \end{table} From Table \ref{tab:processing time}, it can be seen that vision takes much more time than the other two parts. Note, though, that the Snake gate detection is already a very efficient computer vision algorithm for gate detection. In fact, it has tunable parameters, i.e., the number of samples taken per image for the detection (3000 in the current setup), which allow the algorithm to run much faster at the cost of accuracy (see \cite{li2018autonomous} for more details). The main gain in time in the approach presented in this article is that we do not employ VIO and SLAM, which would take substantially more processing. However, as the Snake gate detection provides relatively low-frequency and noisy position measurements, the VML needs to run at high frequency and cope with the detection noise to still provide accurate estimates for the controller. \subsection{Flying experiment without gate displacement} \begin{figure} [hbt!] \centering \includegraphics[width=0.6\columnwidth,trim={0cm 0cm 0 0cm},clip]{experiment/3_laps_trajectory.jpg} \caption{The picture of the Trashcan flying the track where the gates are displaced. The average speed is $2m/s$ and the maximum speed is $2.6m/s$.} \label{fig:flying picture} \end{figure} Figure \ref{fig:2_laps_log} shows the flying result of the drone flying the track without gate displacement. The position of the $4$ gates is listed in Table \ref{tab:race track no diaplacement}.
In Table \ref{tab:race track no diaplacement}, $x_g$ and $y_g$ are the positions of the gates in the real world and $\Tilde{x}_g$ and $\Tilde{y}_g$ are their positions on the map. In this situation, they are the same. The aim of this experiment is to test the filter's performance with sufficient detections. Thus, the velocity is set to $1.5m/s$ to give the drone more time to detect the gates. In Figure \ref{fig:2_laps_log}, the blue curve is the ground truth data from the Optitrack motion capture system and the yellow curves are the filtering results. From the flying result, it can be seen that the filtered results are smooth and coincide well with the ground truth position. During the periods when detections are not available, the state prediction is still accurate enough to navigate the drone to the next gate. When the drone detects the next gate, the filter corrects the prediction. In this situation, the divergence of the states is only caused by the prediction drift. It should also be noted that when outliers appear at $84s$, the filter is not affected by them thanks to its RANSAC procedure. The processing times of the visual detection, the filter and the controller are listed in Table \ref{tab:processing time}. It can be seen that the VML proposed in this article is extremely efficient.
\begin{table}[H] \caption{The position of the gates without displacement} \centering \begin{tabular}{|c|c|c|c|c|} \hline \centering gate ID & $x_g[m]$ & $y_g[m]$ & $\Tilde{x}_g[m]$ & $\Tilde{y}_g[m]$ \\ \hline \hline 1 & 5 & 0 & 5 & 0 \\ \hline 2 & 6.5 & 5 & 6.5 & 5 \\ \hline 3 & 1 & 7 & 1 & 7 \\ \hline 4 & 0 & 1 & 0 & 1 \\ \hline \end{tabular} \label{tab:race track no diaplacement} \end{table} \begin{figure} [!hbt] \centering \includegraphics[width=0.7\columnwidth,trim={0cm 7cm 0cm 8cm},clip]{experiment/2_laps_log.pdf} \caption{The flying result of the drone flying the track without the gate displacement.} \label{fig:2_laps_log} \end{figure} \subsection{Flying experiment with gate displacement} In this section, we test our strategy under a difficult condition where the drone flies faster, the gates are displaced and the detection frequency is low. The real gate positions and their positions on the map are listed in Table \ref{tab:race track} and shown in Figure \ref{fig:track_map}(a). Gates are displaced between 0 and 1.5m from their supposed positions. The dashed orange lines in Figure \ref{fig:track_map}(a) denote the gate positions on the map, while the solid orange lines denote the real gate positions, which are displaced from the map. Figure \ref{fig:track_map}(b) shows the flight data of the first lap. The orange solid gates are the ground truth positions of the gates. The yellow curve is the filtered position based on the gates' positions on the map (orange dashed gates). In other words, the yellow curve is where the drone thinks it is based on the knowledge of the map. After passing through one gate, when the drone detects the next gate, the filter starts correcting the error caused by the prediction drift and the gate displacement. The whole flight result is shown in Figure \ref{fig:flying log}.
From the result, it can be seen that the drone can fly the track for $3$ laps with an average speed of $2m/s$ and a maximum speed of $2.6m/s$, while an experienced pilot flies the same drone on the same track with an average speed of $2.7m/s$ after several runs of training. Figure \ref{fig:flying log}(a) is the filtering result of the position. It should be noted that the filtering result does not coincide with the ground truth curve because of the displacement of the gates. The pose estimation is based on the gates' positions on the map. When the gates are displaced, the drone still thinks they are at the positions indicated by the map. After the turn, when the drone sees the next gate, which is displaced, it will attribute the misalignment to the prediction error and correct the prediction by means of new detections. With this strategy, our algorithm is robust to the displacement of the gates. \begin{figure} [hbt!] \centering \subfigure[The map of the race track where the gates in the real world are displaced.]{ \includegraphics[width=0.3\columnwidth,trim={0cm 0cm 0cm 0cm},clip]{experiment/race_track.pdf}} \subfigure[The flying data of the first lap.]{\includegraphics[scale=0.5,trim={3cm 9cm 3cm 8cm},clip]{experiment/Log_3D.pdf}} \caption{The experiment where the gates are displaced. When the drone sees the next gate after passing through one gate, the filter will start correcting the error caused by the prediction drift and the gate's displacement. Thus, there is a jump in the filtering result.} \label{fig:track_map} \end{figure} \begin{figure} [!htb] \centering \subfigure[The position estimation result.
It should be noted that the position estimation curve does not coincide with the ground truth curve coming from our motion capture system because of the gate displacements.]{\includegraphics[width=0.45\columnwidth,trim={2cm 8cm 2cm 8cm},clip]{experiment/log_x.pdf}} \subfigure[The velocity estimation result of VML]{\includegraphics[width=0.45\columnwidth,trim={2cm 8cm 2cm 8cm}]{experiment/log_v.pdf}} % \caption{The result of flying the track with the gate displacement.}% \label{fig:flying log} \end{figure} \begin{table}[!htb] \caption{The position of the gates with displacement} \centering \begin{tabular}{|c|c|c|c|c|} \hline \centering gate ID & $x_g[m]$ & $y_g[m]$ & $\Tilde{x}_g[m]$ & $\Tilde{y}_g[m]$ \\ \hline \hline 1 & 5 & 0 & 4 & 0 \\ \hline 2 & 6.5 & 5 & 5 & 5 \\ \hline 3 & 1 & 7 & 1 & 6 \\ \hline 4 & 0 & 1 & 0 & 1 \\ \hline \end{tabular} \label{tab:race track} \end{table} \subsection{Flying experiment with different altitude and moving gate} We also show a more challenging race track where the height of the gates varies from $0.5m$ to $2.5m$. Also, during the flight, the position of the second gate ($2.5m$) is changed after the drone passes through it. In the next lap, the drone can adapt to the changed position of the gate (Figure \ref{fig:photos_moving_gate}). \begin{figure} [!hbt] \centering \subfigure[After the drone passes through the second gate, the gate is moved.]{\includegraphics[scale = 0.08,trim={2cm 0cm 2cm 0cm},clip]{experiment/revision/change_the_position_of_gate.jpg}} \hspace{1cm} \subfigure[In the next lap, the drone can adapt to the changing position of the gate and fly through it.]{\includegraphics[scale = 0.08,trim={2cm 0cm 2cm 0cm}]{experiment/revision/pass_through_the_gate.jpg}} % \caption{The flying experiment where the heights of the gates vary from $0.5m$ to $2.5m$.
During the flight, the position of the second gate is changed.}% \label{fig:photos_moving_gate} \end{figure} The flight result is shown in Figure \ref{fig:log_chaning_height}. In this flight, the waypoints are not changed and the gates are deployed without any ground truth measurement. Thus, the estimated position does not coincide with the ground-truth position. It should be noted that the height difference between the second gate and the third gate is $2m$. Even with this altitude change, which violates the constant-altitude assumption of the prediction error model, the proposed VML is still accurate enough to navigate the drone through the gate. \begin{figure} [!hbt] \centering \includegraphics[width=0.7\columnwidth,trim={0cm 7cm 0cm 8cm},clip]{experiment/revision/flight_log_changing_position.pdf} \caption{The flying result of the drone flying the track with varying gate heights and a changing gate position.} \label{fig:log_chaning_height} \end{figure} From the real flight results, we can see that the VML performs well and can navigate the drone through the racing track at high speed even though the gates are displaced. Moreover, this strategy does not need computationally expensive methods like generic VIO and SLAM, which allows it to run on a very light-weight flying platform. \section{Discussion} \label{sec:Discussion} In this paper, we proposed a novel state estimation method called Visual Model-predictive Localization, which provides navigation information for a 72 gram autonomous racing drone. The algorithm's properties were thoroughly studied in simulation and the feasibility of real-world implementation was shown in challenging real-world experiments. Although in this paper VML is used for a specific drone race scenario, the method can be directly used for navigation in other, more general scenarios where the sensors have low frequencies, temporary failures, outliers and delays.
For example, our approach can be directly adopted in an outdoor environment where position measurements are provided by a GPS signal that has delays, temporary failures and outliers. Just as in our drone race experiments, the proposed approach should be more reliable than a Kalman filter. For indoor flight, we used a common linear drag model for state prediction, which does not need a lot of effort or precise equipment to identify. Outdoor flight would require adaptations to this model, such as the ones explained in, e.g., \cite{sikkel2016novel}. We implemented our approach by adding a cheap Jevois smart camera to a tiny Trashcan racing drone. Despite its very limited carrying capacity and more complex aerodynamic properties, we demonstrated that this light-weight flying platform is able to complete the drone race task autonomously. Compared to a regular size racing drone, the Trashcan has more complex aerodynamics and is more sensitive to disturbances. On the other hand, it has faster dynamics, which can make maneuvers more agile. More importantly, it is much safer than a regular size racing drone, which may even allow for flying at home. In any case, the present approach represents another direction for autonomous drone racing, one that does not need high-performance, heavy onboard computers. Also, without computationally expensive navigation methods such as SLAM and VIO, the proposed approach is still able to make the drone navigate autonomously at relatively high speed. However, the proposed approach still has its limitations. First of all, in this approach, we do not estimate the thrust. Instead, we use a constant-altitude assumption to approximate the thrust when deriving the prediction error model. The simulation and real-world experiments have shown that estimation can remain accurate even when this assumption is violated.
Still, when the racing track contains more considerable height changes, it will become desirable to estimate the thrust with a model, in order to obtain a more accurate error model and increase the estimation accuracy, especially in more aggressive flight. Secondly, the current detection method is sensitive to lighting conditions. Most failures are caused by non-detection of the gate. This is a major bottleneck for increasing the flight speed. In the future, we will design a deep-learning-based gate detection method to detect the gate in more complex environments. This deep net can then run on the GPU of the Jevois. Also, higher speeds could then be attainable. Thirdly, in this paper, we mainly focus on the navigation part of the drone. The guidance is only a waypoint-based method and the controller is a PID controller. To make the drone fly faster, optimal guidance and control methods are needed. Another direction is to explore joint estimation for navigation. This will become very useful when one assumes that gates are mostly not displaced. Then, over multiple laps, the drone can get a better idea of where the gates are. In the future, with the rapid development of computational capacity, once more reliable gate detection and online optimal control are implemented onboard, the speed of this autonomous racing drone can certainly be increased significantly. Compared to regularly sized drones, this tiny flying platform should then allow faster and more agile flight. At that time, the proposed VML approach will still be suitable for providing stable state estimation for the drone. \section{Conclusion} \label{lab:conclusion} In this paper, we presented an efficient Visual Model-predictive Localization (VML) approach to autonomous drone racing. The approach employs a velocity-stable model that predicts lateral accelerations based on attitude estimates from the AHRS.
Vision is used for detecting gates in the image and, by means of their supposed locations on the map, for localizing the drone in the coarse global map. Simulation and real-world flight experiments show that VML can provide robust estimates with sparse visual measurements and large outliers. This robust and computationally very efficient approach was tested on an extremely lightweight flying platform, i.e., a Trashcan racing drone with a Jevois camera. In the flight experiments, the Trashcan flew a track of $3$ laps with an average speed of $2m/s$ and a maximum speed of $2.6m/s$. To the best of our knowledge, it is the world's smallest autonomous racing drone, with a weight $6$ times lower than that of the currently lightest autonomous racing drone setup, while its velocity is on a par with the fastest autonomously flying racing drones seen at the latest IROS autonomous drone race. \bibliographystyle{apalike}
\part{} \def\spacingset#1{\renewcommand{\baselinestretch}% {#1}\small\normalsize} \spacingset{1} \setcounter{Maxaffil}{0} \renewcommand\Affilfont{\itshape\small} \spacingset{1.42} \maketitle \begin{abstract} We study the stability of posterior predictive inferences to the specification of the likelihood model and perturbations of the data generating process. In modern big data analyses, the decision-maker may elicit useful broad structural judgements but a level of interpolation is required to arrive at a likelihood model. One model, often a computationally convenient canonical form, is chosen, when many alternatives would have been equally consistent with the elicited judgements. Equally, observational datasets often contain unforeseen heterogeneities and recording errors. Acknowledging such imprecisions, a faithful Bayesian analysis should be stable across reasonable equivalence classes for these inputs. We show that traditional Bayesian updating provides stability across a very strict class of likelihood models and \DGP{}s, while a generalised Bayesian alternative using the $\beta$-divergence loss function is shown to be stable across practical and interpretable neighbourhoods. We illustrate this in linear regression, binary classification, and mixture modelling examples, showing that stable updating does not compromise the ability to learn about the \DGP. These stability results provide a compelling justification for using generalised Bayes to facilitate inference under simplified canonical models. 
\end{abstract} \noindent {\it Keywords:} Stability; Generalised Bayes; $\beta$-divergence; Total Variation; Generalised linear models \spacingset{1.45} \section{Introduction}{\label{Sec:Introduction}} Bayesian inferences are driven by the posterior distribution \begin{equation} \pi(\theta|y)= \frac{\pi(\theta)f(y;\theta)}{\int \pi(\theta)f(y;\theta)d\theta},\label{Equ:bayesrule} \end{equation} which provides the means to update the parameter prior $\pi(\theta)$ using observed data $y = (y_1, \ldots, y_n) \in\mathcal{Y}^n$ assumed to have been generated according to the likelihood $f(\cdot;\theta)$. The quality of such posterior inference depends on the specification of the prior, the likelihood, and the collection of the data. In controlled experimental environments where time is available to carefully consider such specifications, a posterior calculated in this way might be credible. However, modern applications often involve high-dimensional observational data and are undertaken by non-experts. In such scenarios, it is natural to question the quality of the specification of $\pi(\theta)$ and $f(\cdot;\theta)$ and the collection of $y$, and therefore to wonder to what extent posterior inference through \eqref{Equ:bayesrule} can be trusted. Much work has previously investigated the stability of \eqref{Equ:bayesrule} to the specification of $\pi(\theta)$; our focus here will therefore be on $f(\cdot;\theta)$ and $y$. The likelihood model captures the decision maker's (\acro{\smaller DM}'s) beliefs regarding the generation of data $y$. However, accurately formulating expert judgements as probability densities is difficult. Even for a well-trained expert, doing so requires many more probability specifications to be made at a much higher precision than is possible within the time constraints of a typical problem \citep{goldstein1990influence}. This is not to say that an elicited model is useless. Often domain experts can reliably elicit important broad judgements.
However, the resulting ``\textit{functional}'' model $f(\cdot;\theta)$ generally involves some form of interpolating approximation of the \acro{\smaller DM}'s ``\textit{true}'' beliefs. Doing so is not unreasonable. However, a consequence of such expediency is that not only does the \acro{\smaller DM} not believe all the judgements made by $f(\cdot;\theta)$, its specific form is likely only one member of an equivalence class of models that also capture the \acro{\smaller DM}'s elicited beliefs and \textit{could} have been used for inference. A typical example of the above is when applied practitioners deploy computationally convenient canonical models, for which there are software and illustrative examples available, to their domain specific problems. While the broad structure of such models may be suitable across domains, it is the practitioner's familiarity with its form, its software implementation or the platform on which it was published that motivates its use for inference, rather than a careful consideration of how it captures beliefs about the new environment. Similarly, the data were not necessarily collected exactly how the \acro{\smaller DM} imagined when specifying $f(\cdot;\theta)$. There may be unforeseen heterogeneities, outliers, or recording errors. Alternatively, the \acro{\smaller DM} may be deploying someone else's carefully elicited model to an analogous but not necessarily exchangeable scenario. We therefore also consider the data generating process (\DGP) that generated the \acro{\smaller DM}'s data $y$ to belong to an equivalence class of \DGP{}s to which the \acro{\smaller DM} \textit{could} have deployed their inference. Given the inevitable lack of specificity in $f$ and $y$, a faithful Bayesian analysis should be able to demonstrate that it is not overly dependent on arbitrary choices across equivalence classes of its inputs.
Such stability would allow \acro{\smaller DMs} to continue using familiar models in the knowledge that their selection is not driving the critical posterior inferences. This paper shows that the requirement for such stability necessitates the consideration of an updating rule different from \eqref{Equ:bayesrule}. Consider, for example, using a Gaussian distribution, $\mathcal{N}(y; \mu,\sigma^2)$, to approximate beliefs about data $y$. While the Gaussian distribution is ubiquitous, the top of Figure \ref{Fig:norm_t_neighbourhood_predictives} shows that a Student's-$t$ likelihood $t_{5}(y; \mu,\sigma^2)$ with 5 degrees of freedom would also have sufficed for this specification. The two likelihoods appear almost indistinguishable for all values of their shared $\mu$ and $\sigma^2$. Therefore, it would be unreasonable to expect that any \acro{\smaller DM} will strongly prefer one or the other of these. However, the bottom left of Figure \ref{Fig:norm_t_neighbourhood_predictives} shows that when updating according to \eqref{Equ:bayesrule} each model can result in very different posterior inferences. Equally, \eqref{Equ:bayesrule} is not stable to perturbations of the data either, as a small proportion of outliers moves the posterior inferences away from the uncontaminated part of the \DGP. We demonstrate that this is a consequence of the fact that \eqref{Equ:bayesrule} implicitly learns about the parameter of the model minimising the Kullback-Leibler divergence (\acro{\smaller KLD}) between the \DGP and the model, and that stability can only be expected here when the \acro{\smaller DM} is sure of the tail specification of their model and the data. See Section \ref{Sub:GaussianStudent} for full details of this example.
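The \textit{a priori} closeness of the two likelihoods can be checked numerically. The following sketch (Python, standard library only; the integration range, step size and the adjusted variance $\sigma^2_{adj}=1.16$ from the figure caption are the only inputs, and the quadrature choices are our own illustrative assumptions) computes the total variation distance $\frac{1}{2}\int|f-h|\,dy$ between the adjusted Gaussian and the standard Student's-$t_5$ density:

```python
import math

def norm_pdf(y, mu=0.0, var=1.16):
    # Gaussian density N(y; mu, var) with the adjusted variance of Figure 1
    return math.exp(-(y - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def t5_pdf(y, nu=5):
    # standard Student's-t density with nu degrees of freedom
    c = math.gamma((nu + 1) / 2) / (math.gamma(nu / 2) * math.sqrt(nu * math.pi))
    return c * (1 + y * y / nu) ** (-(nu + 1) / 2)

def tvd(f, h, lo=-30.0, hi=30.0, steps=60000):
    # TVD(f, h) = 0.5 * integral of |f - h|, via a simple Riemann sum
    step = (hi - lo) / steps
    return 0.5 * step * sum(abs(f(lo + i * step) - h(lo + i * step))
                            for i in range(steps + 1))

val = tvd(norm_pdf, t5_pdf)
print(round(val, 4))  # a small value: the two models assign almost the same
                      # probability to every event
```

The computed distance is of the order of a few hundredths, so every probability statement made by the two models agrees closely on the natural scale, even though, as discussed below, their log-densities diverge in the tails.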
\begin{figure}[!ht] \begin{center} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/t_normal_neighbourhood_tikz-1.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/t_normal_neighbourhood_tikz-2.pdf}\\ \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/eps_cont_KL_norm_t_tikz-1.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/eps_cont_beta_norm_t_tikz-1.pdf} \caption{\textbf{Top:} Probability density function (\acro{\smaller pdf}) and cumulative density function (\acro{\smaller cdf}) of a {\color{black}{Gaussian}} $f_{\sigma_{adj}^2}(y;\theta)=\mathcal{N}\left(y;\mu,\sigma_{adj}^2\sigma^2\right)$ and a {\color{black}{Student's-t}} $h_{\nu}(y;\eta)=t_{\nu}(y;\mu,\sigma^2)$ random variable, with $\mu=0$, $\sigma^2=1$, $\nu=5$ and $\sigma_{adj}^2=1.16$. \textbf{Bottom:} The resulting posterior predictive distributions using {\color{black}{traditional}} and {\color{black}{\acro{$\beta$D}-Bayes}} updating on $n=1000$ observations from an $\epsilon$ contamination model $g(y) = 0.9\times\mathcal{N}\left(y;0,1\right) + 0.1 \times \mathcal{N}\left(y;5,3^2\right)$. } \label{Fig:norm_t_neighbourhood_predictives} \end{center} \end{figure} Under traditional Bayesian updating, it is therefore left up to the \acro{\smaller DM} to perform some kind of \textit{post hoc} sensitivity analysis to examine the impact their chosen model and particular features of the data had on the inference \citep[see][and references within]{box1980sampling,berger1994overview}. However, such analyses are usually unsystematic and limited to the investigation of a small number of alternative models within the equivalence class.
An alternative, motivated by the \textit{M}-open world assumption that the model is misspecified for the \DGP \citep{bernardo2001bayesian}, is to use general Bayes \citep{bissiri2016general} to update beliefs about model parameters minimising a divergence different from the \acro{\smaller KLD} \citep{jewson2018principles}. A particularly convenient alternative is the $\beta$-divergence (\acro{$\beta$D}), which has previously been motivated as providing inference that is robust to outliers \citep{basu1998robust,ghosh2016robust} and desirable from a decision making point of view \citep{jewson2018principles}. In this paper, we extend the motivation for using \acro{$\beta$D}-Bayes further, showing that its posterior predictive inferences are provably stable across an interpretable equivalence class of likelihood models and \DGP{}s. We treat stability to $f$ and $y$ separately, first showing that \acro{$\beta$D}-Bayes inference is stable to the choice of likelihood model for a given \DGP, and then that inferences for a fixed model are stable to small perturbations of the \DGP. Importantly, the stability afforded to \acro{$\beta$D}-Bayes inference does not compromise its ability to extract useful inferences about the \DGP. \acro{$\beta$D}-Bayes has the appealing property that if the model is correctly specified for the \DGP, then the data generating parameter will be learned, and there exists a growing literature that advocates using the \acro{$\beta$D} for applied analyses \citep[e.g.][]{knoblauch2018doubly, knoblauch2022generalized, girardi2020robust, sugasawa2020robust}. This is further demonstrated in our experiments. For example, Figure \ref{Fig:norm_t_neighbourhood_predictives} shows that as well as producing similar inference for the Gaussian and Student's-$t$ likelihood models, the \acro{$\beta$D}-Bayes inferences both capture the modal part of the observed data. Further, inferences must also be stable to the selection of the \acro{$\beta$D} and its hyperparameter.
We discuss methods to select $\beta$ and demonstrate reasonable insensitivity to its selection. Results regarding the stability of \eqref{Equ:bayesrule} have largely focused on the parameter prior. \cite{gustafson1995local} proved that the total variation divergence (\acro{\smaller TVD}) between two posteriors resulting from priors in linear and geometric $\epsilon$-contamination neighbourhoods diverges, as $\epsilon\rightarrow 0$, at a rate exponential in the dimension of the parameter space. However, \cite{smith2012isoseparation} showed that the \acro{\smaller TVD} between two posteriors converges to 0 provided the two priors under consideration are close as measured by the local De Robertis distance. Our first results provide analogies to these for the specification of the likelihood model. \cite{gilboa1989maxmin,whittle1990risk,hansen2001acknowledging,hansen2001robust,watson2016approximate} consider the stability of optimal decision making, studying minimax decisions across neighbourhoods of the posterior. However, they do not consider what perturbations of the inputs of \eqref{Equ:bayesrule} would leave a \acro{\smaller DM} in such a neighbourhood \textit{a posteriori}. Most similar to our work are \cite{miller2018robust}, which considers Bayesian updating conditioned on the data arriving within a \acro{\smaller KLD} ball of the observed data, and results concerning `global bias-robustness' to contaminating observations, for example for the kernel-Stein discrepancy posteriors of \cite{matsubara2021robust}. We consider stability to an interpretable neighbourhood of the data which contains such contaminations as a special case. Bayes linear methods \citep{goldstein1999bayes}, which concern only the sub-collection of probabilities and expectations the \acro{\smaller DM} considers themselves to be able to specify \citep{goldstein2006subjective}, are an alternative to \eqref{Equ:bayesrule} designed to be stable to interpolating approximations.
We prefer, however, to adopt the general Bayesian paradigm in this analysis. Firstly, the general Bayesian paradigm includes traditional Bayesian updating as a special case and produces familiar posterior and predictive distributions. Secondly, linear Bayes requires the elicitation of expectations and variances of unbounded quantities, which are themselves unstable to small perturbations \citep[see discussion on][]{goldstein1994robustness}. Lastly, rather than demanding stability across an equivalence class of models, the \acro{\smaller DM} could let the data guide any decision the \acro{\smaller DM} themselves are not able to make, using methods such as penalised likelihood approaches \citep[e.g.][]{akaike1973information, schwarz1978estimating}, Bayes' factors \citep{kass1995bayes} or Bayesian model averaging \citep{hoeting1999bayesian}. In particular, \cite{williamson2015posterior} propose methods for combining posterior beliefs across an equivalence class of analyses. However, such methods can be computationally burdensome to apply across even a finite class of models \citep[e.g.][]{rossell2021approximate}, and the \acro{\smaller DM} could reasonably only consider a handful of the models that might fit with their beliefs, a subset of the full equivalence class. The rest of the paper is organised as follows: Section \ref{Sec:paradigm} presents our inference paradigm, introducing general Bayesian updating \citep{bissiri2016general}, robustified inference with the \acro{$\beta$D}, and defining how we will investigate posterior predictive stability. Section \ref{Sec:StabilityLikelihood} presents our theoretical contributions surrounding the stability of Bayesian analyses to the choice of the likelihood function and Section \ref{Sec:StabilityDGP} presents our results on the stability of inference to perturbations of the \DGP. Proofs of all of our results are deferred to the supplementary material.
Section \ref{Sec:SettingBeta} discusses methods to set the $\beta$ hyperparameter and Section \ref{Sec:Experiments} illustrates the stability of the \acro{$\beta$D}-Bayes inference in continuous and binary regression examples from biostatistics and a mixture modelling astrophysics example, where stability is shown not to compromise the model's ability to learn about the \DGP. Code to reproduce all of the examples in this paper can be found at \url{https://github.com/jejewson/stabilityGBI}. \section{A paradigm for inference and stability}{\label{Sec:paradigm}} \subsection{General Bayesian Inference} Under the assumption that the model used for inference $f(y; \theta)$ does not exactly capture the \acro{\smaller DM}'s beliefs, we find it appealing to adopt the general Bayesian perspective of inference. \cite{bissiri2016general} showed that the posterior update \begin{align} \pi^{\ell}(\theta|y)&= \frac{\pi(\theta)\exp\left(-w\sum_{i=1}^n \ell(\theta,y_i)\right)}{\int \pi(\theta)\exp\left(-w\sum_{i=1}^n \ell(\theta,y_i)\right)d\theta},\label{Equ:GBI} \end{align} provides a coherent means to update prior beliefs about parameter $\theta^{\ell}_g:= \argmin_{\theta\in\Theta} \int \ell(\theta,z)g(z)dz$ after observing data $y \sim g(\cdot)$ without requiring that $\theta$ index a model for the data generating density $g(\cdot)$. The parameter $w>0$ in \eqref{Equ:GBI} calibrates the loss with the prior to account for the fact that $\exp(-\ell(\theta,y_i))$ is no longer constrained to integrate to 1, as was the likelihood in \eqref{Equ:bayesrule}. \cite{lyddon2018generalized} set $w$ to match the asymptotic information in the general Bayesian posterior to that of a sample from the `loss-likelihood bootstrap', while \cite{giummole2019objective}, building on the work of \cite{ribatet2012bayesian}, directly calibrate the curvature of the posterior to match that of the frequentist loss minimiser.
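The update \eqref{Equ:GBI} can be illustrated with a minimal sketch (Python, standard library only; the grid, the prior, the simulated data and $w=1$ are our own illustrative assumptions). It computes a general Bayesian posterior for a Gaussian location parameter on a grid; with the \logscore loss and $w=1$ it recovers the conjugate posterior mean, confirming that traditional Bayesian updating is a special case:

```python
import math
import random

random.seed(0)
n = 50
y = [random.gauss(1.0, 1.0) for _ in range(n)]  # simulated data

# Grid over the location parameter theta of the model N(.; theta, 1)
grid = [i / 1000 for i in range(-2000, 4001)]   # theta in [-2, 4]

def log_prior(theta):
    # pi(theta) = N(0, 1), up to an additive constant
    return -0.5 * theta ** 2

def loss(theta, yi):
    # log-score loss: -log N(yi; theta, 1)
    return 0.5 * (yi - theta) ** 2 + 0.5 * math.log(2 * math.pi)

w = 1.0
log_post = [log_prior(t) - w * sum(loss(t, yi) for yi in y) for t in grid]
m = max(log_post)                                # stabilise the exponentials
weights = [math.exp(lp - m) for lp in log_post]
post_mean = sum(t * wt for t, wt in zip(grid, weights)) / sum(weights)

# With the log-score and w = 1 this is ordinary conjugate Bayesian updating:
conjugate_mean = sum(y) / (n + 1)   # prior N(0,1), likelihood N(theta,1)
print(round(post_mean, 4), round(conjugate_mean, 4))
```

Replacing `loss` with any other loss function gives the corresponding general Bayesian posterior without further changes; this is how the \acro{$\beta$D} posteriors in our examples can be computed in low dimensions.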
We focus on a subset of loss functions, known as scoring rules, that depend upon the \acro{\smaller DM}'s likelihood model, continuing to allow the \acro{\smaller DM} to use this to encode their beliefs about the \DGP. Under the \logscorecomma $\ell(\theta,y)=-\log f(y;\theta)$, \eqref{Equ:GBI} collapses to \eqref{Equ:bayesrule}. The parameter $\theta^{\ell}_g$ associated with the \logscore is the minimiser of the \acro{\smaller KLD} between the distribution of the sample and the model \citep{berk1966limiting}. We therefore call updating using \eqref{Equ:bayesrule} \acro{\smaller KLD}-Bayes. However, it is well known that minimising the \logscore puts large importance on correctly capturing the tails of the data \citep{bernardo2001bayesian} and can have negative consequences for posterior decision making \citep{jewson2018principles}. This is demonstrated in the bottom left of Figure \ref{Fig:norm_t_neighbourhood_predictives}. \subsection{\acro{$\beta$D}-Bayes} An alternative to the \logscore is the $\beta$-divergence loss \citep{basu1998robust} \begin{equation} \ell_{(\beta)}(y,f(\cdot;\theta))= -\frac{1}{\beta-1}f(y;\theta)^{\beta-1}+\frac{1}{\beta}\int f(z;\theta)^{\beta}dz,\label{Equ:betaDloss} \end{equation} so called as $\argmin_{\theta} \mathbb{E}_{y\sim g}\left[\ell_{(\beta)}(y,f(\cdot;\theta))\right] = \argmin_{\theta} \acro{\smaller$D_{B}^{(\beta)}$}(g || f(\cdot;\theta))$ where $\acro{\smaller$D_{B}^{(\beta)}$}(g || f)$ is the $\beta$-divergence defined in Section \ref{sec:DivergenceDefinitions}. We refer to updating using \eqref{Equ:GBI} and loss \eqref{Equ:betaDloss} as \acro{$\beta$D}-Bayes. This was first used by \cite{ghosh2016robust} to produce a robustified Bayesian posterior and has since been deployed for a variety of examples \citep[e.g.][]{knoblauch2018doubly, knoblauch2022generalized, girardi2020robust, sugasawa2020robust}.
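For a Gaussian model the integral term in \eqref{Equ:betaDloss} is available in closed form, which is part of what makes the \acro{$\beta$D} loss practical without numerical integration. The following sketch (Python, standard library only; the particular values of $\beta$ and the quadrature grid are illustrative assumptions) evaluates the loss and checks the closed-form integral term against direct numerical integration:

```python
import math

def betaD_loss_gauss(y, mu, var, beta):
    """beta-divergence loss for a Gaussian model N(.; mu, var).

    For the Gaussian the integral term has the closed form
        int N(z; mu, var)^beta dz = (2*pi*var)^((1-beta)/2) / sqrt(beta).
    """
    f_y = math.exp(-(y - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    integral = (2 * math.pi * var) ** ((1 - beta) / 2) / math.sqrt(beta)
    return -f_y ** (beta - 1) / (beta - 1) + integral / beta

# sanity check of the closed-form integral term against a Riemann sum
beta, var = 1.5, 1.0
step = 0.001
numeric = step * sum(
    (math.exp(-z * z / (2 * var)) / math.sqrt(2 * math.pi * var)) ** beta
    for z in (i * step for i in range(-20000, 20001))
)
closed = (2 * math.pi * var) ** ((1 - beta) / 2) / math.sqrt(beta)
print(round(numeric, 6), round(closed, 6))  # the two values agree
```

Analogous closed forms exist for other exponential-family models, so evaluating \eqref{Equ:betaDloss} is typically no more expensive than evaluating the log-likelihood.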
The implicit robustness to outliers exhibited by the \acro{$\beta$D}-Bayes is illustrated in the bottom right of Figure \ref{Fig:norm_t_neighbourhood_predictives}, where, unlike the \acro{\smaller KLD}-Bayes, the \acro{$\beta$D}-Bayes continues to capture the distribution of the majority of observations under outlier contamination. \cite{jewson2018principles} argued that updating in a manner that is automatically robust to outliers removes the burden on the \acro{\smaller DM} to specify their beliefs in a way that is robust to outliers. The results of the coming sections provide a formal rationale for adopting this methodology to provide stability to the canonical model choice and departures from the \DGP. While Bayesian inference has been proposed minimising several alternative divergences including the Hellinger divergence, $\alpha$-divergence, and the \acro{\smaller TVD} \citep[e.g.][]{hooker2014bayesian,jewson2018principles,knoblauch2020robust}, such methods require a non-parametric density estimate, prohibiting their use for high-dimensional problems with continuous data. We restrict our attention to local methods not requiring such an estimate and in particular to the \acro{$\beta$D} and \acro{\smaller KLD}. The $\gamma$-divergence \citep{fujisawa2008robust} has also been shown to produce robust inference without requiring a non-parametric density estimate \citep{hung2018robust,knoblauch2022generalized} and in general behaves very similarly; see Section \ref{App:gammaD}. \subsection{Posterior Predictive Stability }\label{Sub:NotionsStability} Our results will investigate the stability of general Bayesian posterior predictive distributions \begin{align} m^D_{f}(y_{new}|y)&=\int f(y_{new};\theta)\pi^D(\theta|y)d\theta,\label{Equ:PredictiveDensityMetric} \end{align} for exchangeable observation $y_{new}\in\mathcal{Y}$ to the specification of the model $f$, and the \DGP $g$.
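The contrast between \acro{\smaller KLD}-Bayes and \acro{$\beta$D}-Bayes can be reproduced in miniature. The following sketch (Python, standard library only; the grid posterior, flat prior, $w=1$, $\beta=1.5$ and the sample size are our own illustrative assumptions, not the calibrations used later in the paper) compares the two posterior means for a Gaussian location model under the $\epsilon$-contaminated \DGP of Figure \ref{Fig:norm_t_neighbourhood_predictives}:

```python
import math
import random

random.seed(1)
n = 500
# eps-contaminated DGP: 0.9 * N(0,1) + 0.1 * N(5, 3^2), as in Figure 1
y = [random.gauss(0, 1) if random.random() < 0.9 else random.gauss(5, 3)
     for _ in range(n)]

def f_pdf(yi, theta):
    # model: N(.; theta, 1), only the location is unknown
    return math.exp(-(yi - theta) ** 2 / 2) / math.sqrt(2 * math.pi)

def kld_loss(theta, yi):
    # log-score: general Bayes with this loss is KLD-Bayes
    return -math.log(f_pdf(yi, theta))

def betaD_loss(theta, yi, beta=1.5):
    # beta-divergence loss; Gaussian integral term in closed form
    integral = (2 * math.pi) ** ((1 - beta) / 2) / math.sqrt(beta)
    return -f_pdf(yi, theta) ** (beta - 1) / (beta - 1) + integral / beta

def grid_posterior_mean(loss):
    # flat prior on the grid, w = 1
    grid = [i / 100 for i in range(-200, 601)]   # theta in [-2, 6]
    lp = [-sum(loss(t, yi) for yi in y) for t in grid]
    m = max(lp)
    wts = [math.exp(v - m) for v in lp]
    return sum(t * wt for t, wt in zip(grid, wts)) / sum(wts)

kld_mean = grid_posterior_mean(kld_loss)
betaD_mean = grid_posterior_mean(betaD_loss)
# The KLD-Bayes mean is dragged towards the outliers,
# while the betaD-Bayes mean stays near the uncontaminated component
print(round(kld_mean, 3), round(betaD_mean, 3))
```

The downweighting is automatic: in the \acro{$\beta$D} loss each observation enters through $f(y_i;\theta)^{\beta-1}$, which is tiny for observations far from the bulk of the model, so no outlier model needs to be specified.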
As a result, we focus on the stability of the posterior distribution for observables $y\in\mathcal{Y}$ to perturbations of the prior for observables, $f$, and generating distributions for these observables $g$. From a decision-making perspective, the posterior predictive is often integrated over to calculate expected utilities, and therefore stable posterior predictive distributions correspond to stable decision making. We consider two metrics for stability: the first is the divergence between posterior predictives, which, if small, indicates that a \acro{\smaller DM} with either distribution would make similar decisions. The second measures the difference between the posterior predictives' divergence to the \DGP. Predictives that are close to the \DGP will make close-to-optimal decisions and, therefore, two predictives that are equally close will make similarly good decisions. Predictive stability is also a more reasonable requirement than, say, posterior stability. The parameter posteriors for two distinct models/\DGPs will generally converge in different places \citep[e.g.][]{smith2007local}. However, divergent parameter posteriors do not necessarily imply divergent posterior predictives, as we show. Further, focusing on observables allows us to consider interesting cases of neighbouring models with nested parameter spaces (see Section \ref{Sub:MixtureModeling}). \section{Stability to the specification of the likelihood function}{\label{Sec:StabilityLikelihood}} In this section we consider two potential likelihood models for the data. These could correspond to the \acro{\smaller DM}'s true and functional beliefs, or two equally preferable candidates for the latter. In both cases, the \acro{\smaller DM} would not wish their posterior inferences to diverge if one candidate was used in place of the other.
\subsection{An interpretable neighbourhood of likelihood models} We first consider the stability of inference to the specification of the \acro{\smaller DM}'s likelihood model. Likelihood models $f$ and $h$ are considered to be in the same equivalence class of likelihood models for $y\in\mathcal{Y}$ if they satisfy Definition \ref{Def:LikelihoodNeighbourhood}. \begin{definition}[\acro{\smaller TVD} neighbourhood of likelihood models] Likelihood models $f(\cdot;\theta)$ and $h(\cdot;\eta)$ for observable $y\in\mathcal{Y}$ are in the neighbourhood $\mathcal{N}_{\epsilon}^{\acro{\smaller TVD}}$ of size $\epsilon$ if \begin{align} &\forall \theta \in \Theta, \exists \eta \in \mathcal{A} \textrm{ s.t. } \acro{\smaller TVD}(f(\cdot;\theta), h(\cdot; \eta)) \leq \epsilon \quad\textrm{and}\quad \forall \eta \in \mathcal{A}, \exists \theta \in \Theta \quad\textrm{s.t.} \quad \acro{\smaller TVD}(f(\cdot;\theta), h(\cdot; \eta)) \leq \epsilon \nonumber \end{align} \label{Def:LikelihoodNeighbourhood} \end{definition} Neighbourhood $\mathcal{N}_{\epsilon}^{\acro{\smaller TVD}}$ demands the existence of functions $I_f: \Theta \mapsto \mathcal{A}$ and $I_h: \mathcal{A}\mapsto \Theta $ such that for all $\theta$, $\acro{\smaller TVD}(f(\cdot; \theta), h(\cdot; I_f(\theta)))$ is small and for all $\eta$, $\acro{\smaller TVD}(h(\cdot; \eta), f(\cdot; I_h(\eta)))$ is also small. The symmetry of Definition \ref{Def:LikelihoodNeighbourhood} allows $\Theta$ and $\mathcal{A}$ to have different dimensions. For two likelihoods to be close in terms of \acro{\smaller TVD} requires that the greatest difference in any of the probability statements made by the two likelihoods be small on the natural scale:
\begin{equation} \acro{\smaller TVD}(f(\cdot;\theta),h(\cdot;\theta)) := \sup_{A\subseteq\mathcal{Y}}\left|\int_A f(y;\theta)dy-\int_A h(y;\theta)dy\right| = \frac{1}{2}\int \left|f(y;\theta)-h(y;\theta)\right|dy \label{Equ:TVD} \end{equation} Additionally, \acro{\smaller TVD} neighbourhoods contain $\epsilon$-contaminations considered in the context of prior stability by \cite{gustafson1995local} and often used as outlier models \citep[e.g.][]{aitkin1980mixture}. As a result, it is reasonable for a \acro{\smaller DM} to be able to elicit their beliefs within a $\mathcal{N}_{\epsilon}^{\acro{\smaller TVD}}$ neighbourhood of their chosen model, and such a neighbourhood contains standard perturbations for sensitivity analysis. The weak conditions required for the results of the following sections are formally stated in Section \ref{Sub:Conditions}. Briefly, Condition \ref{Cond:BoundedDensities} requires the boundedness of the essential supremum of models $f$ and $h$ and the \DGP $g$, and Condition \ref{Cond:StochasticPosteriorConcentration} requires sufficient concentration of posterior $\pi^D_{f}(\theta|y)$ around $\theta^{D}_f$. For clarity of argument, we proceed under the assumption that the priors $\pi^D(\theta)$ and $\pi^D(\eta)$ are fixed.
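That $\epsilon$-contaminations sit inside a \acro{\smaller TVD} neighbourhood of size $\epsilon$ follows from $\acro{\smaller TVD}(g,(1-\epsilon)g+\epsilon c)=\epsilon\,\acro{\smaller TVD}(g,c)\leq\epsilon$. This is easy to check numerically; in the sketch below (Python, standard library only; the particular densities and quadrature grid are illustrative assumptions) the nominal and contaminating densities match the contamination model of Figure \ref{Fig:norm_t_neighbourhood_predictives}:

```python
import math

def gauss(y, mu, sd):
    # Gaussian density N(y; mu, sd^2)
    return math.exp(-(y - mu) ** 2 / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

eps = 0.1
g = lambda y: gauss(y, 0, 1)                      # nominal DGP
c = lambda y: gauss(y, 5, 3)                      # contaminating density
g_eps = lambda y: (1 - eps) * g(y) + eps * c(y)   # eps-contamination of g

# TVD(g, g_eps) = 0.5 * int |g - g_eps| = eps * TVD(g, c) <= eps
step = 0.001
tvd = 0.5 * step * sum(abs(g(i * step) - g_eps(i * step))
                       for i in range(-30000, 30001))
print(round(tvd, 4))  # bounded above by eps = 0.1
```

The bound is strict whenever $g$ and $c$ overlap, so every $\epsilon$-contamination of $g$ lies strictly inside the corresponding \acro{\smaller TVD} ball.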
\begin{theorem}[Stability of the posterior predictive distributions of two models under the \acro{$\beta$D}-Bayes inference] Given $1< \beta\leq 2$ and two likelihood models $\left\lbrace f(\cdot;\theta): \theta\in\Theta\right\rbrace$ and $\left\lbrace h(\cdot; \eta):\eta\in\mathcal{A}\right\rbrace$ such that $f,h\in\mathcal{N}_{\epsilon}^{\acro{\smaller TVD}}$ for $\epsilon>0$. Then provided there exists $M<\infty$ such that Condition \ref{Cond:BoundedDensities} holds, and $y$, $\pi^{(\beta)}(\theta)$ and $\pi^{(\beta)}(\eta)$ satisfy Condition \ref{Cond:StochasticPosteriorConcentration} for $D = $ \acro{\smaller$D_{B}^{(\beta)}$}, \begin{align} \acro{\smaller$D_{B}^{(\beta)}$}(m^{(\beta)}_{f}(\cdot|y)||m^{(\beta)}_{h}(\cdot|y)) &\leq \frac{M^{\beta - 1}(3\beta - 2)}{\beta(\beta - 1)}\epsilon + \frac{1}{c_1} + 2\frac{M^{\beta - 1}}{\beta-1}\int \acro{\smaller TVD}(g, f(\cdot;\theta))\pi^{(\beta)}_{f}(\theta|y)d\theta\nonumber\\ \acro{\smaller$D_{B}^{(\beta)}$}(m^{(\beta)}_{h}(\cdot|y)||m^{(\beta)}_{f}(\cdot|y)) &\leq \frac{M^{\beta - 1}(3\beta - 2)}{\beta(\beta - 1)}\epsilon + \frac{1}{c_2} + 2\frac{M^{\beta - 1}}{\beta-1}\int \acro{\smaller TVD}(g, h(\cdot;\eta))\pi^{(\beta)}_{h}(\eta|y)d\eta,\nonumber \end{align} where $c_1$ and $c_2$ are defined in Condition \ref{Cond:StochasticPosteriorConcentration}.\label{Thm:StabilityPosteriorPredictivebetaDiv}
\end{theorem} Further, Theorem \ref{Thm:StabilityDGPapproxBeta} bounds the absolute difference between the \acro{$\beta$D}s from the \DGP of the posterior predictive distributions produced from two likelihood models within $\mathcal{N}_{\epsilon}^{\acro{\smaller TVD}}$. \begin{theorem}[The stability in the posterior predictive approximation of two models to the \DGP of \acro{$\beta$D}-Bayes inference] Given $1< \beta\leq 2$ and two likelihood models $\left\lbrace f(\cdot;\theta): \theta\in\Theta\right\rbrace$ and $\left\lbrace h(\cdot; \eta):\eta\in\mathcal{A}\right\rbrace$ such that $f,h\in\mathcal{N}_{\epsilon}^{\acro{\smaller TVD}}$ for $\epsilon>0$. Then provided there exists $M<\infty$ such that Condition \ref{Cond:BoundedDensities} holds and $y$, $\pi^{(\beta)}(\theta)$ and $\pi^{(\beta)}(\eta)$ satisfy Condition \ref{Cond:StochasticPosteriorConcentration} for $D = $ \acro{\smaller$D_{B}^{(\beta)}$}, \begin{equation} |\acro{\smaller$D_{B}^{(\beta)}$}(g||m^{(\beta)}_{f}(\cdot|y))- \acro{\smaller$D_{B}^{(\beta)}$}(g||m^{(\beta)}_{h}(\cdot|y))|\leq \frac{M^{\beta - 1}(3\beta - 2)}{\beta(\beta - 1)}\epsilon+ \frac{1}{c} + C^{(\beta)}(f,h,y),\nonumber \end{equation} where $c = \min\{c_1, c_2\}$ as defined in Condition \ref{Cond:StochasticPosteriorConcentration} and \begin{align} C^{(\beta)}(f,h,y):&= \max \left\lbrace\int\acro{\smaller$D_{B}^{(\beta)}$}(g||f(\cdot;\theta))\pi^{(\beta)}_{f}(\theta|y)d\theta-\acro{\smaller$D_{B}^{(\beta)}$}(g||m^{(\beta)}_{f}(\cdot|y)),\right.\nonumber\\ &\qquad\left.\int\acro{\smaller$D_{B}^{(\beta)}$}(g||h(\cdot;\eta))\pi^{(\beta)}_{h}(\eta|y)d\eta-\acro{\smaller$D_{B}^{(\beta)}$}(g||m^{(\beta)}_{h}(\cdot|y)) \right\rbrace.\nonumber \end{align} \label{Thm:StabilityDGPapproxBeta} \end{theorem} The value $M$ present in both Theorems \ref{Thm:StabilityPosteriorPredictivebetaDiv} and \ref{Thm:StabilityDGPapproxBeta} is often easy to bound, for example by selecting a minimum value of the scale of Gaussian or Student's-$t$ likelihood models,
and we expect $c_1, c_2 \rightarrow\infty$ as $n\rightarrow\infty$ (see Section \ref{Sub:Conditions}). The final term in Theorem \ref{Thm:StabilityPosteriorPredictivebetaDiv} involves the \acro{\smaller TVD} between the models under consideration and the unknown \DGP. While it is difficult to say anything formal about this, Lemma \ref{Lem:BoundingBetaDTVD} shows that the \acro{$\beta$D} can be bounded above by the \acro{\smaller TVD}, and therefore any values of parameters $\theta$ and $\eta$ that are close to $g$ in \acro{\smaller TVD} should have high posterior mass under the \acro{$\beta$D} posterior. On the other hand, $C^{(\beta)}(f,h,y)$ in Theorem \ref{Thm:StabilityDGPapproxBeta} is related to the concentration of the posteriors $\pi^{(\beta)}_{f}(\theta|y)$ and $\pi^{(\beta)}_{h}(\eta|y)$, with Jensen's inequality and the convexity of the \acro{$\beta$D} guaranteeing that $C^{(\beta)}(f,h,y)\geq 0$. Under suitable regularity conditions, as $n\rightarrow\infty$ the posterior collapses to a point mass \citep{chernozhukov2003mcmc, lyddon2018generalized} and this term converges to 0. Importantly, Theorem \ref{Thm:StabilityDGPapproxBeta} does not depend on how well specified the two likelihood models are for the \DGP. \subsection{The stability of the \acro{\smaller KLD}-Bayes}{\label{Sub:StabilityLikelihood_KLD}} Figure \ref{Fig:norm_t_neighbourhood_predictives} demonstrates that the stability afforded by the \acro{$\beta$D}-Bayes is not afforded by the \acro{\smaller KLD}-Bayes. The \acro{\smaller KLD} is recovered from the \acro{$\beta$D} as $\beta\rightarrow1$. However, in such a scenario, the bounds proven in the previous sections tend to infinity. Instead, Lemma \ref{Thm:StabilityDGPapproxKLD} provides an analogous stability result for traditional Bayesian updating.
\begin{lemma}[The stability in the posterior predictive approximation of the \DGP of \acro{\smaller KLD}-Bayes inference] For any two likelihood models $\left\lbrace f(\cdot;\theta): \theta\in\Theta\right\rbrace$ and $\left\lbrace h(\cdot; \eta):\eta\in\mathcal{A}\right\rbrace$, and $y$, $\pi^{\acro{\smaller KLD}}(\theta)$ and $\pi^{\acro{\smaller KLD}}(\eta)$ satisfying Condition \ref{Cond:StochasticPosteriorConcentration} for $D = $ \acro{\smaller KLD}, we have that \begin{align} |\acro{\smaller KLD}(g||m^{\acro{\smaller KLD}}_{f}(\cdot|y))- \acro{\smaller KLD}(g||m^{\acro{\smaller KLD}}_{h}(\cdot|y))|&\leq C^{\acro{\smaller KLD}}(f,h,y) + \frac{1}{c} + T(f,h,y), \nonumber \end{align} where $c := \min\{c_1, c_2\}$ as defined in Condition \ref{Cond:StochasticPosteriorConcentration} and \begin{align} T(f,h,y):&= \max \left\lbrace\int \int g(\cdot) \log \frac{f(\cdot;\theta)}{h(\cdot;I_f(\theta))}d\mu\pi^{\acro{\smaller KLD}}_{f}(\theta|y)d\theta,\right.\nonumber\\ &\qquad\left. \int\int g(\cdot) \log \frac{h(\cdot;\eta)}{f(\cdot;I_h(\eta))}d\mu\pi^{\acro{\smaller KLD}}_{h}(\eta|y)d\eta \right\rbrace\label{Equ:StabilityTermKLD} \\ C^{\acro{\smaller KLD}}(f,h,y):&= \max \left\lbrace\int\acro{\smaller KLD}(g||f(\cdot;\theta))\pi^{\acro{\smaller KLD}}_{f}(\theta|y)d\theta-\acro{\smaller KLD}(g||m^{\acro{\smaller KLD}}_{f}(\cdot|y)),\right.\nonumber\\ &\qquad\left.\int\acro{\smaller KLD}(g||h(\cdot;\eta))\pi^{\acro{\smaller KLD}}_{h}(\eta|y)d\eta-\acro{\smaller KLD}(g||m^{\acro{\smaller KLD}}_{h}(\cdot|y)) \right\rbrace.\nonumber \end{align} \label{Thm:StabilityDGPapproxKLD} \end{lemma} We investigate $T(f,h,y)$, the term not analogous to any of those from Theorem \ref{Thm:StabilityDGPapproxBeta}. Without loss of generality assume that the second term in \eqref{Equ:StabilityTermKLD} is the largest.
Then, the reverse Pinsker's inequality \citep{sason2016f,binette2019a} provides \begin{align} \int g(\cdot) \log \frac{h(\cdot;\eta)}{f(\cdot;I_h(\eta))}d\mu = \int \frac{g(\cdot)}{h(\cdot;\eta)}h(\cdot;\eta) \log \frac{h(\cdot;\eta)}{f(\cdot;I_h(\eta))}d\mu &\leq M^{\ast}_h\acro{\smaller KLD}(h(\cdot;\eta)||f(\cdot;I_h(\eta)))\nonumber\\ &\leq M^{\ast}_h K_{h,f}\acro{\smaller TVD}(h(\cdot;\eta),f(\cdot;I_h(\eta)))\nonumber \end{align} where $M^{\ast}_h=\esssup \frac{g}{h(\cdot;\theta_h)}$ and $K_{h,f}=\left(\frac{\log(a)}{a-1}+\frac{\log(b)}{1-b}\right)$ with $a = \essinf \frac{dF}{dH}$ and $b = \esssup \frac{dF}{dH}$. As a result, a \acro{\smaller TVD} ball around the likelihood model is not sufficient for posterior stability when using \bayesrule updating. In fact, posterior stability can only be guaranteed according to Lemma \ref{Thm:StabilityDGPapproxKLD} if \begin{equation} \left|\log(h(\cdot;\eta))-\log(f(\cdot;I_h(\eta)))\right|\label{Equ:log_h_minus_log_f} \end{equation} is small in regions where $g$ has density. Without knowledge of $g$, this requires that \eqref{Equ:log_h_minus_log_f} be small everywhere, requiring the \acro{\smaller DM} to be confident in the accuracy of their probability statements on the log-scale rather than on the natural scale as was the case for $\mathcal{N}^{\acro{\smaller TVD}}_{\epsilon}$. Logarithms act to inflate the magnitude of small numbers and thus ensuring that $\left|\log(h(\cdot;\eta))-\log(f(\cdot;I_h(\eta)))\right|$ is small requires that $f$ and $h$ are increasingly similar as their values decrease. This requires the \acro{\smaller DM} to be more and more confident of the accuracy of their probability specifications as they get further and further into the tails, something that is already known to be very difficult for low-dimensional problems \citep{winkler1968evaluation,o2006uncertain}, and becomes increasingly difficult as the dimension of the observation space increases.
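The contrast between the natural and log scales is easy to exhibit numerically for the Gaussian/Student's-$t$ pair of Figure \ref{Fig:norm_t_neighbourhood_predictives}. In the sketch below (Python, standard library only; the evaluation points are arbitrary choices) the natural-scale gap $|f-h|$ remains negligible in the tails while the log-scale gap $|\log f-\log h|$ grows without bound:

```python
import math

def log_norm(y, var=1.16):
    # log density of N(0, var), the adjusted Gaussian of Figure 1
    return -0.5 * math.log(2 * math.pi * var) - y * y / (2 * var)

def log_t5(y, nu=5):
    # log density of a standard Student's-t with nu degrees of freedom
    c = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
         - 0.5 * math.log(nu * math.pi))
    return c - (nu + 1) / 2 * math.log(1 + y * y / nu)

for y in [0.0, 2.0, 5.0, 10.0]:
    natural_gap = abs(math.exp(log_norm(y)) - math.exp(log_t5(y)))
    log_gap = abs(log_norm(y) - log_t5(y))
    print(y, round(natural_gap, 5), round(log_gap, 2))
```

Far into the tails the two densities differ by many tens of nats even though their absolute difference is essentially zero, which is exactly why a \acro{\smaller TVD} ball controls \acro{$\beta$D}-Bayes but not \acro{\smaller KLD}-Bayes.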
\section{Stability to the \DGP}{\label{Sec:StabilityDGP}} \subsection{A reasonable neighbourhood of \DGP perturbations} Our second series of results concerns the stability of inferences from a single model $\left\{f(\cdot;\theta); \theta \in \Theta\right\}$ to perturbations of the \DGP for $y \in\mathcal{Y}$. We consider updating on datasets $y_1:=(y_1,\ldots, y_{n_1})\sim g_1$ or $y_2:=(y_1,\ldots, y_{n_2})\sim g_2$ with $n_1, n_2 > 0$ and $g_1$ and $g_2$ satisfying Definition \ref{Def:DGPNeighbourhood}. \begin{definition}[\acro{\smaller TVD} Neighbourhood of data generating processes] Data generating processes $g_1$ and $g_2$ for observable $y\in\mathcal{Y}$ are in the neighbourhood $\mathcal{G}_{\epsilon}^{\acro{\smaller TVD}}$ of size $\epsilon$ if $\acro{\smaller TVD}(g_1, g_2) \leq \epsilon$. \label{Def:DGPNeighbourhood} \end{definition} The \acro{\smaller TVD} provides a relevant and reasonable way to describe perturbations of the \DGP. It contains the $\epsilon$-contamination neighbourhoods considered by \cite{matsubara2021robust} in the context of `global bias-robustness' and also in Figure \ref{Fig:norm_t_neighbourhood_predictives}. It demands that the data sets were generated under mechanisms that are absolutely close on the natural scale, rather than on the log scale as in the \acro{\smaller KLD} neighbourhoods of \cite{miller2018robust}. Conceptually, it is convenient to think about datasets such that $n_1 = n_2$, but this is not necessary. The conditions for the results of the next sections are similar to those required in Section \ref{Sec:StabilityLikelihood} and are stated in full in Section \ref{Sub:Conditions}. \subsection{The stability of the \acro{$\beta$D}} Theorem \ref{Thm:StabilityPosteriorPredictivebetaDiv2} bounds the \acro{$\beta$D} between the posterior predictive distributions resulting from model $f$ and data from two \DGP{}s in the $\mathcal{G}_{\epsilon}^{\acro{\smaller TVD}}$ neighbourhood.
\begin{theorem}[The stability of the posterior predictive distribution under two \DGP{}s of the \acro{$\beta$D}-Bayes inference] Given $1< \beta\leq 2$, likelihood model $\left\lbrace f(\cdot;\theta): \theta\in\Theta\right\rbrace$, and two data sets $y_1:=(y_1,\ldots, y_{n_1}) \sim g_1$ and $y_2:=(y_1,\ldots, y_{n_2}) \sim g_2$ for $n_1, n_2 > 0$ with $\{g_1, g_2\}\in \mathcal{G}_{\epsilon}^{\acro{\smaller TVD}}$, provided there exists $M<\infty$ such that Condition \ref{Cond:BoundedDensities2} holds, and Condition \ref{Cond:StochasticPosteriorConcentration2} holds for $D = $ \acro{\smaller$D_{B}^{(\beta)}$}, $y_1$, $y_2$ and $\pi^{(\beta)}(\theta)$, then \begin{align} \acro{\smaller$D_{B}^{(\beta)}$}(m^{(\beta)}_{f}(\cdot|y_1)||m^{(\beta)}_{f}(\cdot|y_2))\leq& 2\frac{M^{\beta - 1}}{\beta-1}\epsilon + \frac{1}{c_{\mathcal{S}^{(1)}}} + 2\frac{M^{\beta - 1}}{\beta-1}\int \acro{\smaller TVD}(g_1, f(\cdot;\theta_1))\pi^{(\beta)}_{f}(\theta_1|y_1)d\theta_1,\nonumber\\ \acro{\smaller$D_{B}^{(\beta)}$}(m^{(\beta)}_{f}(\cdot|y_2)||m^{(\beta)}_{f}(\cdot|y_1)) \leq& 2\frac{M^{\beta - 1}}{\beta-1}\epsilon + \frac{1}{c_{\mathcal{S}^{(2)}}} + 2\frac{M^{\beta - 1}}{\beta-1}\int \acro{\smaller TVD}(g_2, f(\cdot;\theta_2))\pi^{(\beta)}_{f}(\theta_2|y_2)d\theta_2,\nonumber \end{align} where $c_{\mathcal{S}^{(1)}}$ and $c_{\mathcal{S}^{(2)}}$ are defined in Condition \ref{Cond:StochasticPosteriorConcentration2}. \label{Thm:StabilityPosteriorPredictivebetaDiv2} \end{theorem} Further, Theorem \ref{Thm:StabilityDGPapproxBeta2} bounds the difference in the \acro{$\beta$D} from the \DGP of the \acro{$\beta$D}-Bayes posterior predictive distributions resulting from data from the two \DGP{}s.
\begin{theorem}[The stability in the posterior predictive approximation of two \DGP{}s under the same model of \acro{$\beta$D}-Bayes inference] Given $1< \beta\leq 2$, likelihood model $\left\lbrace f(\cdot;\theta): \theta\in\Theta\right\rbrace$, and two data sets $y_1:=(y_1,\ldots, y_{n_1}) \sim g_1$ and $y_2:=(y_1,\ldots, y_{n_2}) \sim g_2$ for $n_1, n_2 > 0$ with $\{g_1, g_2\}\in \mathcal{G}_{\epsilon}^{\acro{\smaller TVD}}$, provided there exists $M<\infty$ such that Condition \ref{Cond:BoundedDensities2} holds, and Condition \ref{Cond:StochasticPosteriorConcentration2} holds for $D = $ \acro{\smaller$D_{B}^{(\beta)}$}, $y_1$, $y_2$ and $\pi^{(\beta)}(\theta)$, then \begin{equation} |\acro{\smaller$D_{B}^{(\beta)}$}(g_1||m^{(\beta)}_{f}(\cdot|y_1))- \acro{\smaller$D_{B}^{(\beta)}$}(g_2||m^{(\beta)}_{f}(\cdot|y_2))|\leq \frac{M^{\beta - 1}(\beta + 2)}{\beta(\beta - 1)}\epsilon + \frac{1}{c} + C^{(\beta)}(f,y_1, y_2),\nonumber \end{equation} where $c:= \min\{c_{\mathcal{S}^{(1)}}, c_{\mathcal{S}^{(2)}}\}$ defined in Condition \ref{Cond:StochasticPosteriorConcentration2} and \begin{align} C^{(\beta)}(f,y_1, y_2):&= \max \left\lbrace\int\acro{\smaller$D_{B}^{(\beta)}$}(g_1||f(\cdot;\theta_1))\pi^{(\beta)}(\theta_1|y_1)d\theta_1-\acro{\smaller$D_{B}^{(\beta)}$}(g_1||m^{(\beta)}_{f}(\cdot|y_1)),\right.\nonumber\\ &\qquad\left.\int\acro{\smaller$D_{B}^{(\beta)}$}(g_2||f(\cdot;\theta_2))\pi^{(\beta)}(\theta_2|y_2)d\theta_2-\acro{\smaller$D_{B}^{(\beta)}$}(g_2||m^{(\beta)}_{f}(\cdot|y_2)) \right\rbrace \nonumber \end{align} \label{Thm:StabilityDGPapproxBeta2} \end{theorem} Theorems \ref{Thm:StabilityPosteriorPredictivebetaDiv2} and \ref{Thm:StabilityDGPapproxBeta2} are the analogues of Theorems \ref{Thm:StabilityPosteriorPredictivebetaDiv} and \ref{Thm:StabilityDGPapproxBeta} respectively. The value $M$ is still easy to bound here, and the concentration terms $\frac{1}{c_{\mathcal{S}^{(j)}}}$ are expected to shrink to 0 as $n\rightarrow\infty$.
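As a concrete instance of the $\mathcal{G}_{\epsilon}^{\acro{\smaller TVD}}$ neighbourhood appearing in these bounds, the sketch below (illustrative code; the contaminating component matches the paper's running $\epsilon$-contamination example, while the grid is our own) checks numerically that an $\epsilon$-contaminated \DGP stays within \acro{\smaller TVD} $\epsilon$ of the uncontaminated one:

```python
import numpy as np
from scipy import stats

# TVD(g1, (1 - eps) * g1 + eps * q) = eps * TVD(g1, q) <= eps, so every
# eps-contamination of g1 lies in the neighbourhood G_eps^TVD around g1.
eps = 0.1
y = np.linspace(-40.0, 60.0, 400001)
dy = y[1] - y[0]
g1 = stats.norm.pdf(y, 0.0, 1.0)      # uncontaminated DGP
q = stats.norm.pdf(y, 5.0, 3.0)       # contaminating component
g2 = (1.0 - eps) * g1 + eps * q       # contaminated DGP

# TVD = (1/2) * integral of |g1 - g2|, approximated by a Riemann sum.
tvd = 0.5 * np.sum(np.abs(g1 - g2)) * dy
print(tvd)
```

The computed value sits just below $\epsilon = 0.1$, since the contaminating component is not fully separated from $g_1$.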
For Theorem \ref{Thm:StabilityPosteriorPredictivebetaDiv2}, we invoke Lemma \ref{Lem:BoundingBetaDTVD} and argue that the \acro{$\beta$D} posterior will place density on parameter values of model $f$ that are close to $g$ in \acro{\smaller TVD}. The bound of Theorem \ref{Thm:StabilityDGPapproxBeta2} depends on $C^{(\beta)}(f,y_1, y_2)$, which under mild regularity conditions goes to 0 as $n\rightarrow\infty$, demonstrating that the \acro{$\beta$D}-Bayes is stable to \acro{\smaller TVD} perturbations of the data, independently of how well the model approximates either of the \DGP{}s. \subsection{The stability of the \acro{\smaller KLD}-Bayes} Figure \ref{Fig:norm_t_neighbourhood_predictives} showed that updating using \eqref{Equ:bayesrule} is not stable to perturbations of the \DGP. The data considered are within a $\mathcal{G}_{0.1}^{\acro{\smaller TVD}}$ neighbourhood of data generated from $\mathcal{N}(0, 1)$ and, unlike for the \acro{$\beta$D}-Bayes, the estimated posterior predictive is vastly different from what would have been estimated under the uncontaminated \DGP. Lemma \ref{Thm:StabilityDGPapproxKLD2} investigates perturbations of the \DGP to which traditional Bayesian inference is stable.
\begin{lemma}[The stability in the posterior predictive approximation of two \DGP{}s under the same model of \acro{\smaller KLD}-Bayes inference] For likelihood model $\left\lbrace f(\cdot;\theta): \theta\in\Theta\right\rbrace$ and data sets $y_1:=(y_1,\ldots, y_{n_1}) \sim g_1$ and $y_2:=(y_1,\ldots, y_{n_2}) \sim g_2$ for $n_1, n_2 > 0$, given Condition \ref{Cond:StochasticPosteriorConcentration2} holds for $D = $ \acro{\smaller KLD}, $y_1$, $y_2$ and $\pi^{\acro{\smaller KLD}}(\theta)$, we have that \begin{align} |\acro{\smaller KLD}(g_1||m^{\acro{\smaller KLD}}_{f}(\cdot|y_1))- \acro{\smaller KLD}(g_2||m^{\acro{\smaller KLD}}_{f}(\cdot|y_2))|&\leq C^{\acro{\smaller KLD}}(f,y_1, y_2) + \frac{1}{c} + T_1(g_1, g_2) + T_2(f,y_1, y_2), \nonumber \end{align} where $c:= \min\{c_{\mathcal{S}^{(1)}}, c_{\mathcal{S}^{(2)}}\}$ as defined in Condition \ref{Cond:StochasticPosteriorConcentration2} and \begin{align} T_1(g_1, g_2):&= \max\left\lbrace \int \left(g_2\log g_2 - g_1 \log g_1\right)d\mu, \int \left(g_1\log g_1 - g_2\log g_2\right)d\mu\right\rbrace\nonumber\\ T_2(f,y_1, y_2):&= \max \left\lbrace\int \int (g_1 - g_2)\log f(\cdot;\theta_1)d\mu\pi^{\acro{\smaller KLD}}(\theta_1|y_1)d\theta_1,\right.\nonumber\\ &\qquad\left.\int \int (g_2 - g_1)\log f(\cdot;\theta_2)d\mu\pi^{\acro{\smaller KLD}}(\theta_2|y_2)d\theta_2 \right\rbrace\nonumber \\ C^{\acro{\smaller KLD}}(f,y_1, y_2):&= \max \left\lbrace\int\acro{\smaller KLD}(g_1||f(\cdot;\theta_1))\pi^{\acro{\smaller KLD}}(\theta_1|y_1)d\theta_1-\acro{\smaller KLD}(g_1||m^{\acro{\smaller KLD}}_{f}(\cdot|y_1)),\right.\nonumber\\ &\qquad\left.\int\acro{\smaller KLD}(g_2||f(\cdot;\theta_2))\pi^{\acro{\smaller KLD}}(\theta_2|y_2)d\theta_2-\acro{\smaller KLD}(g_2||m^{\acro{\smaller KLD}}_{f}(\cdot|y_2)) \right\rbrace \nonumber \end{align} \label{Thm:StabilityDGPapproxKLD2} \end{lemma} Lemma \ref{Thm:StabilityDGPapproxKLD2} shows that stability of the \acro{\smaller KLD} approximation of the \DGP by model $f$ to perturbations of the \DGP requires that $T_1(g_1, g_2)$
and $T_2(f,y_1, y_2)$ are small. Small $T_1(g_1, g_2)$ requires $g_1$ and $g_2$ to have similar entropy, which is not necessarily guaranteed for \DGP{}s in the neighbourhood of Definition \ref{Def:DGPNeighbourhood}. Alternatively, if $|\log f(\cdot; \theta)|$ is bounded then $T_2(f,y_1, y_2)$ can be bounded above by a multiple of $\acro{\smaller TVD}(g_1, g_2)$. However, the log-likelihood is rarely bounded: as $f(y; \theta)\rightarrow 0$, $|\log f(y;\theta)| \rightarrow \infty$. Therefore, $T_2(f,y_1, y_2)$ being small requires $g_1$ and $g_2$ to be increasingly close in the tails of the fitted models, prohibiting, for example, outlier contaminations such as in Figure \ref{Fig:norm_t_neighbourhood_predictives}. \section{Setting $\beta$}{\label{Sec:SettingBeta}} The only additional specification required from the \acro{\smaller DM} when implementing the \acro{$\beta$D}-Bayes compared with the \acro{\smaller KLD}-Bayes is the value of $\beta$. This hyperparameter regulates the trade-off between robustness and efficiency \citep[e.g.][]{basu1998robust}. Minimising the \acro{\smaller KLD} ($\beta=1$) provides the most efficient inference but is very sensitive to outliers. Increasing $\beta$ away from 1 gains robustness to outliers at a cost to efficiency. The bounds of the previous theorems all depend on $\beta$, and we can therefore additionally interpret $\beta$ as a sort of meta-prior for the \acro{\smaller DM}'s confidence in their elicited model or data collection. The less confident they are, the greater $\beta$ will need to be to prevent non-negligible \textit{a posteriori} divergence. Eliciting $\beta$ in this way requires the \acro{\smaller DM} to reflect on the value of $\epsilon$ associated with their beliefs or the quality of the data.
For the neighbourhoods of Definition \ref{Def:LikelihoodNeighbourhood}, this can be obtained by considering, for a given set of parameters, what the largest possible error in any of the probability statements could be, or for Definition \ref{Def:DGPNeighbourhood} by considering the minimal proportion of a population that they believe is consistent with the \DGP. Our results are also informative about when the value of $\beta$ might be too large. The \acro{\smaller DM} should want their \acro{$\beta$D}-Bayes inferences to be stable because $\epsilon$ is small, and not because the terms involving $\beta$ that multiply $\epsilon$ in the theorems of Sections \ref{Sec:StabilityLikelihood} and \ref{Sec:StabilityDGP} are small. Alternatively, there is increasing interest in data-driven methods to learn $\beta$. \cite{warwick2005choosing, ghosh2015robust, basak2021optimal} consider procedures to estimate $\beta$ to minimise the mean squared error (\MSE) of estimated model parameters, \cite{toma2011dual, kang2014minimum} estimate $\beta$ to minimise the maximum perturbation of the parameter estimates resulting from replacing one observation by the population estimated mean, and \cite{jewson2022general, yonekura2021adaptation} estimate $\beta$ to minimise the Fisher divergence to the \DGP. Finally, \acro{$\beta$D}-Bayes inference appears not to be overly sensitive to the exact value of $\beta$. Figure \ref{Fig:norm_t_neighbourhood_predictives_sensitivity} demonstrates that, for the example introduced in Section \ref{Sec:Introduction}, inference for the Gaussian and Student's-$t$ models is almost identical for values of $\beta\geq 1.3$. Section \ref{Sec:Sensitivity} provides further demonstration of this.
\begin{figure} \begin{center} \includegraphics[trim= {0.0cm 0.00cm 0.4cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/Gaussian_t_sensitivity_plot_tikz-7.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.4cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/Gaussian_t_sensitivity_plot_tikz-9.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.4cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/Gaussian_t_sensitivity_plot_tikz-11.pdf}\\ \includegraphics[trim= {0.0cm 0.00cm 0.4cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/Gaussian_t_sensitivity_plot_tikz-15.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.4cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/Gaussian_t_sensitivity_plot_tikz-19.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.4cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/Gaussian_t_sensitivity_plot_tikz-23.pdf} \caption{Posterior predictive distributions using {\color{black}{\acro{$\beta$D}-Bayes}} updating on $n=1000$ observations from an $\epsilon$-contamination model $g(y) = 0.9\times\mathcal{N}\left(y;0,1\right) + 0.1 \times \mathcal{N}\left(y;5,3^2\right)$ for different values of $\beta$.} \label{Fig:norm_t_neighbourhood_predictives_sensitivity} \end{center} \end{figure} \section{Experiments}{\label{Sec:Experiments}} \subsection{Gaussian and Student's-$t$ likelihood}{\label{Sub:GaussianStudent}} We revisit the Gaussian and Student's-$t$ example briefly introduced in Section \ref{Sec:Introduction}. The likelihood models considered here are \begin{align} f_{\sigma^2_{adj}}(y;\theta):= \mathcal{N}\left(y;\mu,\sigma^2\times\sigma^2_{adj}\right)\textrm{ and }h_{\nu}(y;\eta):= \textrm{Student's}-t_{\nu}\left(y;\mu,\sigma^2\right).\label{Equ:GaussianStudent} \end{align} The hyperparameters $\nu=5$ and $\sigma^2_{adj}=1.16$ are fixed to match the quartiles of the two distributions for all $\mu$ and $\sigma^2$.
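The quartile-matching value of $\sigma^2_{adj}$ can be reproduced numerically; the quick sketch below (illustrative code, assuming $\sigma$ enters the Student's-$t$ as a scale parameter, so that matching reduces to the standardised upper quartiles) recovers the value used here:

```python
from scipy import stats

# Matching the quartiles of N(y; mu, sigma^2 * adj) to those of
# Student's-t_5(y; mu, sigma^2) for every mu and sigma^2 reduces to
# matching the standardised upper quartiles of the two distributions.
q_t = stats.t.ppf(0.75, df=5)     # upper quartile of a standard t_5
q_norm = stats.norm.ppf(0.75)     # upper quartile of a standard Gaussian
sigma2_adj = (q_t / q_norm) ** 2  # variance inflation for the Gaussian
print(sigma2_adj)                 # ~1.16, matching the value in the text
```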
These were inspired by \cite{o2012probabilistic}, who argued that for absolutely continuous probability distributions, it is only reasonable to ask an expert to make a judgement about the median and the quartiles of a distribution, along with maybe a few specially selected features. This is deemed adequate because any two distributions with similar percentiles will look very similar, see for example Figure \ref{Fig:norm_t_neighbourhood_predictives}. However, Section \ref{Sub:StabilityLikelihood_KLD} suggests that greater precision is required to ensure the stability of \bayesrule updating. On the other hand, the likelihoods in \eqref{Equ:GaussianStudent} are contained in $\mathcal{N}^{\acro{\smaller TVD}}_{0.043}$. We generated $n=1000$ observations from the $\epsilon$-contamination model $g(y) = 0.9\times\mathcal{N}\left(y;0,1\right) + 0.1 \times \mathcal{N}\left(y;5,3^2\right)$ contained within the $\mathcal{G}^{\acro{\smaller TVD}}_{0.1}$ neighbourhood of $\mathcal{N}\left(y;0,1\right)$. We then conducted Bayesian updating under the Gaussian and Student's-$t$ likelihood using both \bayesrule and the \acro{$\beta$D}-Bayes ($\beta = 1.5$) under shared priors $\pi(\mu,\sigma^2) = \mathcal{N}\left(\mu;\mu_0,v_0\sigma^2\right)\mathcal{IG}(\sigma^2;a_0,b_0)$, with hyperparameters $(a_0=0.01,b_0=0.01,\mu_0=0,v_0=10)$. Figure \ref{Fig:norm_t_neighbourhood_predictives} and Figure \ref{Fig:norm_t_posteriors}, which plots the parameter posterior distributions for both models under both updating mechanisms, clearly demonstrate the stability of the \acro{$\beta$D}-Bayes across these two models and the lack of stability of traditional Bayesian updating. Not only is the \acro{$\beta$D} inference more stable across $\mathcal{N}^{\acro{\smaller TVD}}_{\epsilon}$; the \acro{$\beta$D} predictive also better captures the majority of the \DGP than either predictive does under traditional Bayesian updating.
The capturing of the $\mathcal{N}\left(y;0,1\right)$ mode further illustrates the \acro{$\beta$D}-Bayes' stability across neighbourhoods of the \DGP. Figure \ref{Fig:norm_t_influence_functions} plots influence functions \citep{west1984outlier} for the \acro{\smaller KLD}-Bayes and \acro{$\beta$D}-Bayes under the Gaussian and Student's-$t$ model. Influence functions are the gradient of the loss function evaluated at parameter estimates as a function of the observations, and show the impact each observation had on the analysis. Under the \acro{$\beta$D}-Bayes, the influence functions of the Gaussian and Student's-$t$ likelihoods are closer for almost every $y$, illustrating the stability to the model, and additionally, the influence functions for both models under the \acro{$\beta$D}-Bayes vary less with $y$, illustrating stability to the \DGP. \begin{figure} \begin{center} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/betaD_Influence_Curve_tikz-1.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/betaD_Influence_Curve_tikz-2.pdf} \caption{Influence functions for parameters $\mu$ and $\sigma^2$ of the Gaussian and Student's-$t$ likelihood models under the \acro{\smaller KLD}-Bayes and \acro{$\beta$D}-Bayes with $\beta = 1.5$. } \label{Fig:norm_t_influence_functions} \end{center} \end{figure} \subsubsection{\acro{\smaller DLD} data}{\label{Sub:DLD}} We consider an RNA-sequencing data set from \cite{yuan2016plasma} measuring gene expression for $n = 192$ patients with different types of cancer. \cite{rossell2018tractable} studied the impact of 57 predictors on the expression of \acro{\smaller DLD}, a gene that can perform several functions such as metabolism regulation.
To illustrate our results, we selected the 15 variables with the 5 highest loadings in the first 3 principal components, and fitted regression models using the neighbouring models in \eqref{Equ:GaussianStudent} for the residuals. Section \ref{App:SelectedVariables} lists the selected variables. Figure \ref{Fig:DLDRegressions} demonstrates that \acro{$\beta$D}-Bayes ($\beta = 1.5$) produces more stable estimates of the fitted residuals (top-left), the estimated density of the residuals (top-right), parameter estimates (bottom-left), and posterior predictive density for the observed data (bottom-right) than the traditional Bayesian inference. \cite{rossell2018tractable} found evidence that this data is heavy-tailed, further demonstrated in Figure \ref{Fig:QQNormal}, which caused the \acro{\smaller KLD}-Bayes to estimate very different densities under the Gaussian and Student's-$t$ model, while the \acro{$\beta$D}-Bayes is stable to this feature of the data. Figure \ref{Fig:DLDRegressionsHist} shows the fit of the models to the posterior mean estimates of the standardised residuals, showing that as well as being stable, the \acro{$\beta$D}-Bayes produces good estimation around the mode of the \acro{\smaller DLD} data under both models. Section \ref{App:TGFB} considers a further regression example showing that even when one of the models under consideration is `well-specified' for the data, the \acro{$\beta$D}-Bayes inference continues to perform adequately. 
\begin{figure}[!ht] \begin{center} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/DLD_betaD_norm_vs_t_tikz-8.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/DLD_betaD_norm_vs_t_tikz-9.pdf}\\ \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/DLD_betaD_norm_vs_t_tikz-5.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/DLD_betaD_norm_vs_t_tikz-6.pdf}\\ \caption{Posterior mean estimates of standardised residuals (\textbf{top left}), posterior mean estimated residuals distribution (\textbf{top-right}), absolute difference in posterior mean parameter estimates (\textbf{bottom left}) and difference in posterior predictive densities of the observations (\textbf{bottom right}) under the Gaussian and Student's-$t$ model of \acro{\smaller KLD}-Bayes and \acro{$\beta$D}-Bayes ($\beta = 1.5$) for the \acro{\smaller DLD} data.} \label{Fig:DLDRegressions} \end{center} \end{figure} \subsection{Mixture Modelling}{\label{Sub:MixtureModeling}} An advantage of considering the stability of the distributions for observables rather than parameters is that it allows `neighbouring' models to have parameter spaces of different dimensions. For example, consider initial model $f(\cdot;\theta)$ and then `neighbouring' model \begin{align} h(\cdot;\eta)& = (1 - \omega)\times f(\cdot;\theta) + \omega\times h^{'}(\cdot;\kappa),\nonumber \end{align} for $\eta = \left\lbrace \theta,\kappa, \omega\right\rbrace$. Here, $h(\cdot;\eta)$ is a mixture model combining the likelihood model $f(\cdot;\theta)$, which could itself already be a mixture model, and some other density $h^{'}(\cdot;\kappa)$ with additional parameters $\kappa$.
For all $\theta\in\Theta$ and any $\kappa \in K$ we have that $\acro{\smaller TVD}(f\left(\cdot; \theta\right), h\left(\cdot;\left\lbrace \theta,\kappa, \omega\right\rbrace\right))\leq\omega$, and therefore a \acro{\smaller TVD} neighbourhood can be defined by upper bounding $\omega$. \subsubsection{Shapley Galaxy Dataset} We examine the Shapley galaxy dataset of \cite{drinkwater2004large}, recording the velocities of 4215 galaxies in the Shapley supercluster, a large concentration of gravitationally interacting galaxies; see Figure \ref{Fig:MixtureModels}. The clustering tendency of galaxies continues to be a subject of interest in astronomy. \cite{miller2018robust} investigate this data using Gaussian mixture models and use their coarsened posterior to select the number of mixture components, finding considerable instability in the number of estimated components $K$ under different specifications of the coarsening parameter. See \cite{cai2021finite} for further issues with estimating the number of components in mixture models. We estimate Gaussian mixture models of the form \begin{align} f(y; \theta)= \sum_{k=1}^K \omega_k \mathcal{N}(y; \mu_k, \sigma^2_k), \nonumber \end{align} under the \acro{\smaller KLD}-Bayes and \acro{$\beta$D}-Bayes, considering $K \in \{2, 3, 4, 5, 6\}$ components and using the normal-inverse Wishart priors of \cite{fuquene2019choosing} (full details available in Section \ref{App:FiniteGaussianMixDetails}). \acro{$\beta$D}-Bayes inference for such one-dimensional mixture models is easy to implement using adaptive quadrature to approximate the necessary integral term $\frac{1}{\beta}\int f(z;\theta)^{\beta}dz$. We do not formally place any constraint on the estimation of $\omega_k$; however, any model that estimates a component with small $\omega_k$ can be seen as a neighbour of a model with one fewer component.
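The quadrature step mentioned above can be sketched as follows (illustrative code, not the paper's implementation; the mixture weights, means, scales, and value of $\beta$ below are made up, and the single-Gaussian closed form is used only as a sanity check):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

beta = 1.5
w = np.array([0.3, 0.7])       # mixture weights (hypothetical)
mu = np.array([5.0, 15.0])     # component means (hypothetical)
sd = np.array([1.0, 2.0])      # component standard deviations (hypothetical)

def mix_pdf(z):
    # Density of a two-component Gaussian mixture at scalar z.
    return float(np.sum(w * stats.norm.pdf(z, mu, sd)))

# The integral term (1/beta) * int f(z; theta)^beta dz via adaptive quadrature.
integral, _ = quad(lambda z: mix_pdf(z) ** beta, -20.0, 50.0)
term = integral / beta

# Sanity check against the closed form for a single Gaussian:
# int N(z; m, s^2)^beta dz = (2 * pi * s^2)^((1 - beta) / 2) / sqrt(beta).
single, _ = quad(lambda z: stats.norm.pdf(z, 0.0, 2.0) ** beta, -30.0, 30.0)
closed = (2.0 * np.pi * 4.0) ** ((1.0 - beta) / 2.0) / np.sqrt(beta)
print(term, single, closed)
```

For one-dimensional observables the quadrature agrees with the available closed forms to machine precision, which is what makes the \acro{$\beta$D}-Bayes loss cheap to evaluate inside a sampler.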
Figure \ref{Fig:MixtureModels} demonstrates the posterior mean approximation to the histogram of the data of the Gaussian mixture models under the \acro{\smaller KLD}-Bayes and \acro{$\beta$D}-Bayes, and Table \ref{Tab:MixtureModels} records the \acro{\smaller TVD} between the posterior mean predictive distributions as components are recursively added to the model. The \acro{$\beta$D}-Bayes inference for $\beta = 1.25$ and $1.5$ is more stable to the addition of an extra component. In particular, for $K \geq 3$ the \acro{$\beta$D}-Bayes inference stably estimates the biggest components of the data, centred approximately at $5{,}000$ and $15{,}000$ km/s, while the \acro{\smaller KLD}-Bayes produces very different inference for these modes depending on the number of clusters selected. \begin{figure} \begin{center} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.9\columnwidth]{Figures_new/KL_beta_MixtureBayesnorm_NIW_Shapley_stability_hists_tikz-1.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.9\columnwidth]{Figures_new/KL_beta_MixtureBayesnorm_NIW_Shapley_stability_hists_tikz-3.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.9\columnwidth]{Figures_new/KL_beta_MixtureBayesnorm_NIW_Shapley_stability_hists_tikz-2.pdf} \caption{Shapley Galaxy Data: Histograms of the data, in units of 1,000 km/s, excluding a small amount of data extending in a tail up to 80,000 km/s, with fitted Gaussian mixture models with $K = 2,\ldots,6$ components under the \acro{\smaller KLD}-Bayes (\textbf{top}), \acro{$\beta$D}-Bayes with $\beta = 1.25$ (\textbf{middle}) and \acro{$\beta$D}-Bayes with $\beta = 1.5$ (\textbf{bottom}).} \label{Fig:MixtureModels} \end{center} \end{figure} \begin{table}[ht] \caption{Total variation distances between posterior mean predictive distributions for different numbers of mixture components $K$ under the \acro{\smaller KLD}-Bayes and \acro{$\beta$D} for $\beta = 1.25$ and $1.5$.} \centering \begin{tabular}{rcccc}
\hline Method & $K = 2$ vs $K = 3$ & $K = 3$ vs $K = 4$ & $K = 4$ vs $K = 5$ & $K = 5$ vs $K = 6$ \\ \hline \acro{\smaller KLD} & 0.27 & 0.12 & 0.13 & 0.03 \\ \acro{$\beta$D} ($\beta = 1.25$) & 0.26 & 0.06 & 0.06 & 0.03 \\ \acro{$\beta$D} ($\beta = 1.5$) & 0.23 & 0.05 & 0.08 & 0.02 \\ \hline \end{tabular} \label{Tab:MixtureModels} \end{table} \subsection{Binary Classification}{\label{Sec:Classification}} Binary classification models predict $y \in \{0, 1\}$ from $p$-dimensional regressors $X$. The canonical model in such a setting is logistic regression, where \begin{align} P_{LR}(y = 1| X, \theta) = \frac{1}{1 + \exp\left(- X\theta\right)}, \quad P_{LR}(y = 0 | X, \theta) = 1 - P_{LR}(y = 1| X, \theta),\nonumber \end{align} where $\theta\in\mathbb{R}^p$ are the regression parameters. Alternative, less ubiquitous models include probit regression, which uses an alternative \acro{\smaller GLM} link function based on the standard Gaussian \acro{\smaller CDF} $\Phi(\cdot)$; `heavier-tailed' $t$-logistic regression \citep{ding2010t, ding2013t}; and a mixture-type model that explicitly models the chance of mislabelling of the observed classes: \begin{align} P_{PR}(y = 1| X, \eta) &= \Phi(w_{PR}\times X\theta), \quad P_{tLR}(y = 1| X, \eta) = \exp_t\left(0.5w_{tLR}X\theta-G_t(w_{tLR}X\theta)\right)\nonumber\\ &P_{ML}(y = 1| X, \eta) = (1-\nu_1)P_{LR}(y = 1 | X, \theta) + \nu_0(1-P_{LR}(y = 1 | X, \theta))\nonumber \end{align} where $0 < t < 2$ and $0 < \nu_0, \nu_1 < 1$. The so-called $t$-exponential $\exp_t$ and the normalising function $G_t$, both defined in Section \ref{Sec:tLogistic}, ensure that $P_{tLR}(y = 1| X, \eta)$ is normalised. Setting $t > 1$ results in heavier-tailed probabilities than the logistic model. For the probit and $t$-logistic models, the parameters are scalar multiples, $\theta \mapsto w_{PR}\theta$ and $\theta \mapsto w_{tLR}\theta$, of the logistic regression parameters.
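One way to choose such a scalar for the probit model is to minimise the worst-case discrepancy from the logistic probabilities; the grid search below is a hypothetical reconstruction, not the paper's exact calibration procedure, which is described in Section \ref{Sec:LogisticTransform}. Note that for binary models the \acro{\smaller TVD} at a given $X\theta$ is simply $|P_1(y=1) - P_2(y=1)|$.

```python
import numpy as np
from scipy import stats

# Grid search for the scalar w_PR that minimises the worst-case TVD
# between the logistic and probit success probabilities (illustrative).
x = np.linspace(-8.0, 8.0, 2001)          # grid of linear-predictor values
logistic = 1.0 / (1.0 + np.exp(-x))

def max_tvd(w):
    # For Bernoulli models, TVD at each x is |P_logistic - P_probit|.
    return np.max(np.abs(logistic - stats.norm.cdf(w * x)))

ws = np.linspace(0.4, 0.8, 2001)
gaps = np.array([max_tvd(w) for w in ws])
w_best = ws[np.argmin(gaps)]
print(w_best, gaps.min())
```

The classic logistic--probit matching constant $1/1.702 \approx 0.588$ emerges, with a worst-case probability discrepancy of roughly $0.01$, consistent with the near-overlap of the link functions in Figure \ref{Fig:BinaryClassifiers}.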
These are calculated to minimise the \textit{a priori} \acro{\smaller TVD} between the models and the logistic regression baseline according to $\mathcal{N}_{\epsilon}^{\acro{\smaller TVD}}$ (see Section \ref{Sec:LogisticTransform}). We upper bound $\nu_0$ and $\nu_1$ by 0.05, making $\epsilon = 0.05$ for these models. Figure \ref{Fig:BinaryClassifiers} plots $P(y = 1 | X, \theta)$ as a function of $X\theta$ for all four models (left) and the \acro{\smaller TVD} between each alternative model and the logistic regression (right), demonstrating that all four produce very similar binary probabilities. \begin{figure} \begin{center} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/logisticRegression_vs_TVDs_tikz-1.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/logisticRegression_vs_TVDs_tikz-2.pdf} \caption{\textbf{Left}: $P(y = 1 | X, \theta)$ for the logistic, probit, $t$-logistic and mislabelled models. \textbf{Right}: \acro{\smaller TVD} between the logistic regression canonical model and the probit, $t$-logistic and mislabelled models. The $\theta$ parameters of the probit and $t$-logistic models are scaled so as to minimise the \acro{\smaller TVD} to the logistic regression.} \label{Fig:BinaryClassifiers} \end{center} \end{figure} \subsubsection{Colon Cancer Dataset} To investigate the stability of posterior predictive inferences across the logistic, probit, $t$-logistic, and mislabelled binary regression models, we consider the colon cancer dataset of \cite{alon1999broad}. The dataset contains the expression levels of 2000 genes from 40 tumours and 22 normal tissues, and there is purported evidence that certain tissue samples may have been cross-contaminated \citep{tibshirani2013robust}.
Rather than consider the full 2000 genes, we first run a frequentist LASSO procedure, estimating the hyperparameter via cross-validation, and focus our modelling only on the nine genes selected by this procedure. We understand that such post-model-selection inference biases parameter estimates, but the stability of the predictive inference is our focus here. Figure \ref{Fig:TVDColonCancer} compares the \textit{a posteriori} \acro{\smaller TVD} between the posterior mean estimated distribution for each observation with the \textit{a priori} \acro{\smaller TVD} between each of the models (top), and the difference between the posterior mean regression parameter estimates of the two models (bottom), under the \acro{\smaller KLD}-Bayes and \acro{$\beta$D}-Bayes with $\beta = 1.5$. The stability of the \acro{$\beta$D}-Bayes is once again demonstrated here: for almost every observation and every pair of models, the posterior predictive inference is as stable as it was \textit{a priori}, while the \acro{\smaller KLD}-Bayes inference is more often divergent.
For the $t$-logistic and mislabelled models, the predictive stability of the \acro{$\beta$D}-Bayes also provides greater stability in the posterior mean parameter estimates. \begin{figure}[!ht] \begin{center} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/colon_cancer_KLD_betaD_stabilityl_logit_preds_tikz-1.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/colon_cancer_KLD_betaD_stabilityl_logit_preds_tikz-2.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/colon_cancer_KLD_betaD_stabilityl_logit_preds_tikz-3.pdf}\\ \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/colon_cancer_KLD_betaD_stabilityl_logit_params_tikz-1.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/colon_cancer_KLD_betaD_stabilityl_logit_params_tikz-2.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/colon_cancer_KLD_betaD_stabilityl_logit_params_tikz-3.pdf} \caption{Colon Cancer Data. \textbf{Top}: \acro{\smaller TVD} between the posterior mean estimated probabilities for each observation of the probit (\textbf{left}), $t$-logistic (\textbf{centre}) and mislabelled (\textbf{right}) models and the canonical logistic regression under the \acro{\smaller KLD}-Bayes and \acro{$\beta$D}-Bayes ($\beta = 1.5$). The dotted line represents the \textit{a priori} \acro{\smaller TVD} between the models. \textbf{Bottom}: Absolute differences between posterior mean parameter estimates and those of the logistic regression.} \label{Fig:TVDColonCancer} \end{center} \end{figure} \section{Discussion} This paper investigated the posterior predictive stability of traditional Bayesian updating and a generalised Bayesian alternative minimising the \acro{$\beta$D}.
In practice, the model used for inference is usually a convenient and canonical member of a wider class that captures the broad belief statements made by the \acro{\smaller DM}, and the observed data were not necessarily collected in the manner the \acro{\smaller DM} imagined. We proved that \acro{$\beta$D}-Bayes inference is stable across a class of likelihood models and data generating processes whose probability statements are absolutely close, a \acro{\smaller TVD} neighbourhood, by establishing bounds on how far their predictive inferences can diverge. On the other hand, our results require the \acro{\smaller DM} to be sure about the tail properties of their beliefs and of the \DGP in order to guarantee stability for standard Bayesian inference. The results of this paper simplify the process of belief elicitation for the \acro{$\beta$D}-Bayes, bounding the \textit{a posteriori} consequences of a given level of \textit{a priori} inaccuracy, leaving the \acro{\smaller DM} free to use the best-guess approximation of their beliefs that they are most comfortable with, rather than switch to a less familiar model with better outlier rejection properties \citep{o1979outlier}. Such stability is achieved through a minimal amount of extra work compared with traditional \bayesrule inference, and it provides a similarly recognisable output. We hope such results help to justify the increased use of the \acro{$\beta$D} to make robust inferences in statistics and machine learning applications. A key issue motivating the departure from standard Bayesian methods here is a lack of concordance between the likelihood model and the data. Such an issue can be attributed to either a failure of the modeller to think carefully enough about the \DGP, or errors in data collection. However, we treat these results separately to exemplify two different manifestations of the instability of Bayes' rule.
Future work could explore the applicability of such results in multivariate settings where belief specification and data collection are harder, and further investigate our \acro{\smaller KLD}-Bayes results. While we have argued when the stability of such methods can be guaranteed, identifying the statements for which \acro{\smaller KLD}-Bayes is not stable would provide important and useful results to facilitate more focused belief elicitation. To continue to facilitate the deployment of \acro{$\beta$D}-Bayes methods in practice, more work is required to study and build upon existing methods to select $\beta$, particularly in high dimensions. While it is clear that considerable gains can be made over standard methods in certain scenarios, an adversarial analysis of the \acro{$\beta$D} performance compared with its \acro{\smaller KLD}-Bayes analogue would further motivate its wider applications. \section*{Acknowledgements} The authors would like to thank Danny Williamson, Christian Robert, and Sebastian Vollmer for their insightful discussions on the topics in this paper. JJ was partially funded by the Ayudas Fundación BBVA a Equipos de Investigación Cientifica 2017, the Government of Spain's Plan Nacional PGC2018-101643-B-I00, and a Juan de la Cierva Formación fellowship FJC2020-046348-I. CH was supported by the EPSRC Bayes4Health programme grant and The Alan Turing Institute, UK. \part{} \def\spacingset#1{\renewcommand{\baselinestretch}% {#1}\small\normalsize} \spacingset{1} \setcounter{Maxaffil}{0} \renewcommand\Affilfont{\itshape\small} \spacingset{1.42} \maketitle \begin{abstract} We study the stability of posterior predictive inferences to the specification of the likelihood model and perturbations of the data generating process. In modern big data analyses, the decision-maker may elicit useful broad structural judgements, but a level of interpolation is required to arrive at a likelihood model.
One model, often a computationally convenient canonical form, is chosen, when many alternatives would have been equally consistent with the elicited judgements. Equally, observational datasets often contain unforeseen heterogeneities and recording errors. Acknowledging such imprecisions, a faithful Bayesian analysis should be stable across reasonable equivalence classes for these inputs. We show that traditional Bayesian updating provides stability across a very strict class of likelihood models and \DGP{}s, while a generalised Bayesian alternative using the $\beta$-divergence loss function is shown to be stable across practical and interpretable neighbourhoods. We illustrate this in linear regression, binary classification, and mixture modelling examples, showing that stable updating does not compromise the ability to learn about the \DGP. These stability results provide a compelling justification for using generalised Bayes to facilitate inference under simplified canonical models. \end{abstract} \noindent {\it Keywords:} Stability; Generalised Bayes; $\beta$-divergence; Total Variation; Generalised linear models \spacingset{1.45} \section{Introduction}{\label{Sec:Introduction}} Bayesian inferences are driven by the posterior distribution \begin{equation} \pi(\theta|y)= \frac{\pi(\theta)f(y;\theta)}{\int \pi(\theta)f(y;\theta)d\theta},\label{Equ:bayesrule} \end{equation} which provides the means to update the parameter prior $\pi(\theta)$ using observed data $y = (y_1, \ldots, y_n) \in\mathcal{Y}^n$ assumed to have been generated according to the likelihood $f(\cdot;\theta)$. The quality of such posterior inference depends on the specification of the prior and likelihood, and on the collection of the data. In controlled experimental environments where time is available to carefully consider such specifications, a posterior calculated in this way might be credible. However, modern applications often involve high-dimensional observational data and are undertaken by non-experts.
In such scenarios, it is natural to question the quality of the specification of $\pi(\theta)$ and $f(\cdot;\theta)$ and the collection of $y$, and therefore to wonder to what extent posterior inference through \eqref{Equ:bayesrule} can be trusted. Much work has previously investigated the stability of \eqref{Equ:bayesrule} to the specification of $\pi(\theta)$; our focus here will therefore be on $f(\cdot;\theta)$ and $y$. The likelihood model captures the decision maker's (\acro{\smaller DM}'s) beliefs regarding the generation of data $y$. However, accurately formulating expert judgements as probability densities is difficult. Even for a well-trained expert, so doing requires many more probability specifications to be made at a much higher precision than is possible within the time constraints of a typical problem \citep{goldstein1990influence}. This is not to say that an elicited model is useless. Often domain experts can reliably elicit important broad judgements. However, the resulting ``\textit{functional}'' model $f(\cdot;\theta)$ generally involves some form of interpolating approximation of the \acro{\smaller DM}'s ``\textit{true}'' beliefs. So doing is not unreasonable. However, a consequence of such expediency is that not only does the \acro{\smaller DM} not believe all the judgements made by $f(\cdot;\theta)$, but its specific form is likely only one member of an equivalence class of models that also capture the \acro{\smaller DM}'s elicited beliefs and \textit{could} have been used for inference. A typical example of the above is when applied practitioners deploy computationally convenient canonical models, for which there are software and illustrative examples available, to their domain-specific problems.
While the broad structure of such models may be suitable across domains, it is the practitioner's familiarity with its form, its software implementation or the platform on which it was published that motivates its use for inference, rather than a careful consideration of how it captures beliefs about the new environment. Similarly, the data were not necessarily collected exactly how the \acro{\smaller DM} imagined when specifying $f(\cdot;\theta)$. There may be unforeseen heterogeneities, outliers, or recording errors. Alternatively, the \acro{\smaller DM} may be deploying someone else's carefully elicited model to an analogous but not necessarily exchangeable scenario. We therefore also consider the data generating process (\DGP) that generated the \acro{\smaller DM}'s data $y$ to belong to an equivalence class of \DGP{}s to which the \acro{\smaller DM} \textit{could} have deployed their inference. Given the inevitable lack of specificity in $f$ and $y$, a faithful Bayesian analysis should be able to demonstrate that it is not overly dependent on arbitrary choices across equivalence classes of its inputs. Such stability would allow \acro{\smaller DMs} to continue using familiar models in the knowledge that their selection is not driving the critical posterior inferences. This paper shows that the requirement for such stability necessitates the consideration of an updating rule different from \eqref{Equ:bayesrule}. Consider, for example, using a Gaussian distribution $\mathcal{N}(y; \mu,\sigma^2)$ to approximate beliefs about data $y$. While the Gaussian distribution is ubiquitous, the top of Figure \ref{Fig:norm_t_neighbourhood_predictives} shows that a Student's-$t$ likelihood $t_{5}(y; \mu,\sigma^2)$ with 5 degrees of freedom would also have sufficed for this specification. The two likelihoods appear almost indistinguishable for all values of their shared $\mu$ and $\sigma^2$.
Therefore, it would be unreasonable to expect that any \acro{\smaller DM} would strongly prefer one or the other of these. However, the bottom left of Figure \ref{Fig:norm_t_neighbourhood_predictives} shows that when updating according to \eqref{Equ:bayesrule} the two models can result in very different posterior inferences. Equally, \eqref{Equ:bayesrule} is not stable to perturbations of the data: a small proportion of outliers moves the posterior inferences away from the uncontaminated part of the \DGP. We demonstrate that this is a consequence of the fact that \eqref{Equ:bayesrule} implicitly learns about the parameter of the model minimising the Kullback-Leibler divergence (\acro{\smaller KLD}) between the \DGP and the model, and that stability can only be expected here when the \acro{\smaller DM} is sure of the tail specification of their model and the data. See Section \ref{Sub:GaussianStudent} for full details of this example. \begin{figure}[!ht] \begin{center} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/t_normal_neighbourhood_tikz-1.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/t_normal_neighbourhood_tikz-2.pdf}\\ \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/eps_cont_KL_norm_t_tikz-1.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/eps_cont_beta_norm_t_tikz-1.pdf} \caption{\textbf{Top:} Probability density function (\acro{\smaller pdf}) and cumulative distribution function (\acro{\smaller cdf}) of a {\color{black}{Gaussian}} $f_{\sigma_{adj}^2}(y;\theta)=\mathcal{N}\left(y;\mu,\sigma_{adj}^2\sigma^2\right)$ and a {\color{black}{Student's-t}} $h_{\nu}(y;\eta)=t_{\nu}(y;\mu,\sigma^2)$ random variable, with $\mu=0$, $\sigma^2=1$, $\nu=5$ and $\sigma_{adj}^2=1.16$.
\textbf{Bottom:} The resulting posterior predictive distributions using {\color{black}{traditional}} and {\color{black}{\acro{$\beta$D}-Bayes}} updating on $n=1000$ observations from an $\epsilon$-contamination model $g(y) = 0.9\times\mathcal{N}\left(y;0,1\right) + 0.1 \times \mathcal{N}\left(y;5,3^2\right)$. } \label{Fig:norm_t_neighbourhood_predictives} \end{center} \end{figure} Under traditional Bayesian updating, it is therefore left up to the \acro{\smaller DM} to perform some kind of \textit{post hoc} sensitivity analysis to examine the impact their chosen model and particular features of the data had on the inference \citep[see][and references therein]{box1980sampling,berger1994overview}. However, such analyses are usually unsystematic and limited to the investigation of a small number of alternative models within the equivalence class. An alternative, motivated by the \textit{M}-open world assumption that the model is misspecified for the \DGP \citep{bernardo2001bayesian}, is to use general Bayes \citep{bissiri2016general} to update beliefs about model parameters minimising a divergence different from the \acro{\smaller KLD} \citep{jewson2018principles}. A particularly convenient alternative is the $\beta$-divergence (\acro{$\beta$D}), which has previously been motivated as providing inference that is robust to outliers \citep{basu1998robust,ghosh2016robust} and desirable from a decision-making point of view \citep{jewson2018principles}. In this paper, we extend the motivation for using \acro{$\beta$D}-Bayes further, showing that its posterior predictive inferences are provably stable across an interpretable equivalence class of likelihood models and \DGP{}s. We treat stability to $f$ and $y$ separately, first showing that \acro{$\beta$D}-Bayes inference is stable to the choice of likelihood model for a given \DGP, and then that inferences for a fixed model are stable to small perturbations of the \DGP.
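The closeness of the two likelihoods in Figure \ref{Fig:norm_t_neighbourhood_predictives} can be checked numerically. The following sketch (our own illustration assuming NumPy and SciPy, not part of the paper's reproduction code) approximates $\frac{1}{2}\int|f-h|\,dy$ between $\mathcal{N}(y;0,1.16)$ and $t_5(y;0,1)$ on a fine grid:

```python
import numpy as np
from scipy import stats

# Densities from Figure 1: Gaussian with adjusted variance vs Student's-t.
# f(y) = N(y; 0, sigma_adj^2 * sigma^2) with sigma_adj^2 = 1.16, h(y) = t_5(y; 0, 1).
grid = np.linspace(-15.0, 15.0, 200001)
dy = grid[1] - grid[0]
f = stats.norm.pdf(grid, loc=0.0, scale=np.sqrt(1.16))
h = stats.t.pdf(grid, df=5, loc=0.0, scale=1.0)

# TVD(f, h) = (1/2) * integral of |f - h|, approximated by a Riemann sum.
tvd = 0.5 * np.sum(np.abs(f - h)) * dy
print(f"TVD(N(0, 1.16), t_5(0, 1)) ~= {tvd:.4f}")
```

The computed \acro{\smaller TVD} is of the order of a few hundredths, so every probability statement made by one model differs from the other by only a few percent, which is the sense in which the pair sits in a small \acro{\smaller TVD} neighbourhood.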
Importantly, the stability afforded to \acro{$\beta$D}-Bayes inference does not compromise its ability to extract useful inferences about the \DGP. \acro{$\beta$D}-Bayes has the appealing property that if the model is correctly specified for the \DGP, then the data generating parameter will be learned, and there exists a growing literature that advocates using the \acro{$\beta$D} for applied analyses \citep[e.g.][]{knoblauch2018doubly, knoblauch2022generalized, girardi2020robust, sugasawa2020robust}. This is further demonstrated in our experiments. For example, Figure \ref{Fig:norm_t_neighbourhood_predictives} shows that as well as producing similar inference for the Gaussian and Student's-$t$ likelihood models, the \acro{$\beta$D}-Bayes inferences both capture the modal part of the observed data. Further, inferences must also be stable to the selection of the \acro{$\beta$D} and its hyperparameter. We discuss methods to select $\beta$ and demonstrate reasonable insensitivity to its selection. Results regarding the stability of \eqref{Equ:bayesrule} have largely focused on the parameter prior. \cite{gustafson1995local} proved that the total variation divergence (\acro{\smaller TVD}) between two posteriors resulting from priors in linear and geometric $\epsilon$-contamination neighbourhoods diverges as $\epsilon\rightarrow 0$ at a rate exponential in the dimension of the parameter space. However, \cite{smith2012isoseparation} showed that the \acro{\smaller TVD} between two posteriors converges to 0 provided the two priors under consideration are close as measured by the local De Robertis distance. Our first results provide analogies to these for the specification of the likelihood model. \cite{gilboa1989maxmin,whittle1990risk,hansen2001acknowledging,hansen2001robust,watson2016approximate} consider the stability of optimal decision making, studying minimax decisions across neighbourhoods of the posterior.
However, they do not consider what perturbations of the inputs of \eqref{Equ:bayesrule} would leave a \acro{\smaller DM} in such a neighbourhood \textit{a posteriori}. Most similar to our work are \cite{miller2018robust}, which considers Bayesian updating conditioned on data arriving within a \acro{\smaller KLD} ball of the observed data, and results concerning `global bias-robustness' to contaminating observations, for example those for the kernel-Stein discrepancy posteriors of \cite{matsubara2021robust}. We consider stability to an interpretable neighbourhood of the data which contains such contaminations as a special case. Bayes linear methods \citep{goldstein1999bayes}, which concern only the sub-collection of probabilities and expectations the \acro{\smaller DM} considers themselves to be able to specify \citep{goldstein2006subjective}, are an alternative to \eqref{Equ:bayesrule} designed to be stable to interpolating approximations. We prefer, however, to adopt the general Bayesian paradigm in this analysis. Firstly, the general Bayesian paradigm includes traditional Bayesian updating as a special case and produces familiar posterior and predictive distributions. Secondly, Bayes linear methods require the elicitation of expectations and variances of unbounded quantities, which are themselves unstable to small perturbations \citep[see the discussion in][]{goldstein1994robustness}. Lastly, rather than demanding stability across an equivalence class of models, the \acro{\smaller DM} could let the data guide any decision they themselves are not able to make, using methods such as penalised likelihood approaches \citep[e.g.][]{akaike1973information, schwarz1978estimating}, Bayes' factors \citep{kass1995bayes} or Bayesian model averaging \citep{hoeting1999bayesian}. In particular, \cite{williamson2015posterior} propose methods for combining posterior beliefs across an equivalence class of analyses.
However, such methods can be computationally burdensome even across a finite class of models \citep[e.g.][]{rossell2021approximate}, and the \acro{\smaller DM} could reasonably only consider a handful of the models that might fit with their beliefs, a subset of the full equivalence class. The rest of the paper is organised as follows: Section \ref{Sec:paradigm} presents our inference paradigm, introducing general Bayesian updating \citep{bissiri2016general}, robustified inference with the \acro{$\beta$D}, and defining how we will investigate posterior predictive stability. Section \ref{Sec:StabilityLikelihood} presents our theoretical contributions surrounding the stability of Bayesian analyses to the choice of the likelihood function, and Section \ref{Sec:StabilityDGP} presents our results on the stability of inference to perturbations of the \DGP. Proofs of all of our results are deferred to the supplementary material. Section \ref{Sec:SettingBeta} discusses methods to set the $\beta$ hyperparameter and Section \ref{Sec:Experiments} illustrates the stability of the \acro{$\beta$D}-Bayes inference in continuous and binary regression examples from biostatistics and a mixture modelling astrophysics example, where stability is shown not to compromise the model's ability to learn about the \DGP. Code to reproduce all of the examples in this paper can be found at \url{https://github.com/jejewson/stabilityGBI}. \section{A paradigm for inference and stability}{\label{Sec:paradigm}} \subsection{General Bayesian Inference} Under the assumption that the model used for inference $f(y; \theta)$ does not exactly capture the \acro{\smaller DM}'s beliefs, we find it appealing to adopt the general Bayesian perspective of inference.
\cite{bissiri2016general} showed that the posterior update \begin{align} \pi^{\ell}(\theta|y)&= \frac{\pi(\theta)\exp\left(-w\sum_{i=1}^n \ell(\theta,y_i)\right)}{\int \pi(\theta)\exp\left(-w\sum_{i=1}^n \ell(\theta,y_i)\right)d\theta}\label{Equ:GBI} \end{align} provides a coherent means to update prior beliefs about the parameter $\theta^{\ell}_g:= \argmin_{\theta\in\Theta} \int \ell(\theta,z)g(z)dz$ after observing data $y \sim g(\cdot)$ without requiring that $\theta$ index a model for the data generating density $g(\cdot)$. The parameter $w>0$ in \eqref{Equ:GBI} calibrates the loss with the prior to account for the fact that $\exp(-\ell(\theta,y_i))$ is no longer constrained to integrate to 1, as was the likelihood in \eqref{Equ:bayesrule}. \cite{lyddon2018generalized} set $w$ to match the asymptotic information in the general Bayesian posterior to that of a sample from the `loss-likelihood bootstrap', while \cite{giummole2019objective}, building on the work of \cite{ribatet2012bayesian}, directly calibrate the curvature of the posterior to match that of the frequentist loss minimiser. We focus on a subset of loss functions, known as scoring rules, that depend upon the \acro{\smaller DM}'s likelihood model, continuing to allow the \acro{\smaller DM} to use this to encode their beliefs about the \DGP. Under the \logscorecomma $\ell(\theta,y)=-\log f(y;\theta)$, \eqref{Equ:GBI} collapses to \eqref{Equ:bayesrule}. The parameter $\theta^{\ell}_g$ associated with the \logscore is the minimiser of the \acro{\smaller KLD} between the distribution of the sample and the model \citep{berk1966limiting}. We therefore call updating using \eqref{Equ:bayesrule} \acro{\smaller KLD}-Bayes. However, it is well known that minimising the \logscore puts large importance on correctly capturing the tails of the data \citep{bernardo2001bayesian} and can have negative consequences for posterior decision making \citep{jewson2018principles}.
This is demonstrated in the bottom left of Figure \ref{Fig:norm_t_neighbourhood_predictives}. \subsection{\acro{$\beta$D}-Bayes} An alternative to the \logscore is the $\beta$-divergence loss \citep{basu1998robust} \begin{equation} \ell_{(\beta)}(y,f(\cdot;\theta))= -\frac{1}{\beta-1}f(y;\theta)^{\beta-1}+\frac{1}{\beta}\int f(z;\theta)^{\beta}dz,\label{Equ:betaDloss} \end{equation} so called as $\argmin_{\theta} \mathbb{E}_{y\sim g}\left[\ell_{(\beta)}(y,f(\cdot;\theta))\right] = \argmin_{\theta} \acro{\smaller$D_{B}^{(\beta)}$}(g || f(\cdot;\theta))$ where $\acro{\smaller$D_{B}^{(\beta)}$}(g || f)$ is the $\beta$-divergence defined in Section \ref{sec:DivergenceDefinitions}. We refer to updating using \eqref{Equ:GBI} and loss \eqref{Equ:betaDloss} as \acro{$\beta$D}-Bayes. This was first used by \cite{ghosh2016robust} to produce a robustified Bayesian posterior and has since been deployed for a variety of examples \citep[e.g.][]{knoblauch2018doubly, knoblauch2022generalized, girardi2020robust, sugasawa2020robust}. The implicit robustness to outliers exhibited by the \acro{$\beta$D}-Bayes is illustrated in the bottom right of Figure \ref{Fig:norm_t_neighbourhood_predictives}, where, unlike the \acro{\smaller KLD}-Bayes, the \acro{$\beta$D}-Bayes continues to capture the distribution of the majority of observations under outlier contamination. \cite{jewson2018principles} argued that updating in a manner that is automatically robust to outliers removes the burden on the \acro{\smaller DM} to specify their beliefs in a way that accounts for outliers. The results of the coming sections provide a formal rationale for adopting this methodology to provide stability to the canonical model choice and departures from the \DGP.
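The interaction of \eqref{Equ:GBI} and \eqref{Equ:betaDloss} can be sketched numerically. The example below is our own illustration (a flat prior on a grid, $w=1$, $\beta=1.5$, a Gaussian location model with known scale; not the implementation used for the experiments) contrasting the \acro{\smaller KLD}-Bayes and \acro{$\beta$D}-Bayes posterior means under the $\epsilon$-contaminated \DGP of Figure \ref{Fig:norm_t_neighbourhood_predictives}. For a Gaussian, the integral term in \eqref{Equ:betaDloss} has the closed form $\int \mathcal{N}(z;\mu,\sigma^2)^{\beta}dz = (2\pi\sigma^2)^{(1-\beta)/2}\beta^{-1/2}$.

```python
import numpy as np

rng = np.random.default_rng(0)
# Contaminated DGP as in Figure 1: 90% N(0, 1), 10% N(5, 3^2).
n = 1000
y = np.where(rng.uniform(size=n) < 0.9,
             rng.normal(0.0, 1.0, size=n),
             rng.normal(5.0, 3.0, size=n))

sigma, beta = 1.0, 1.5
mu_grid = np.linspace(-2.0, 3.0, 2001)

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / np.sqrt(2 * np.pi * sigma**2)

# Closed form of the integral term in the beta-divergence loss for a Gaussian:
# int N(z; mu, sigma^2)^beta dz = (2*pi*sigma^2)^((1-beta)/2) * beta^(-1/2).
int_f_beta = (2 * np.pi * sigma**2) ** ((1 - beta) / 2) * beta ** (-0.5)

def total_loss(mu, loss):
    f = gauss_pdf(y, mu, sigma)
    if loss == "log":                     # log-score: recovers standard Bayes
        return -np.sum(np.log(f))
    return np.sum(-f ** (beta - 1) / (beta - 1) + int_f_beta / beta)

def gb_posterior_mean(loss):
    # General Bayes posterior on a grid with a flat prior and w = 1.
    losses = np.array([total_loss(m, loss) for m in mu_grid])
    post = np.exp(-(losses - losses.min()))   # stabilise before exponentiating
    post /= post.sum()
    return float(np.sum(mu_grid * post))

mu_kld = gb_posterior_mean("log")
mu_betaD = gb_posterior_mean("betaD")
print(mu_kld, mu_betaD)
```

With ten percent contamination centred at $5$, the log-score posterior mean is dragged towards the contaminated sample mean of roughly $0.5$, while the \acro{$\beta$D} posterior mean stays close to the uncontaminated centre, mirroring the bottom row of Figure \ref{Fig:norm_t_neighbourhood_predictives}.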
While Bayesian inference has been proposed minimising several alternative divergences including the Hellinger divergence, $\alpha$-divergence, and the \acro{\smaller TVD} \citep[e.g.][]{hooker2014bayesian,jewson2018principles,knoblauch2020robust}, such methods require a non-parametric density estimate, prohibiting their use for high-dimensional problems with continuous data. We restrict our attention to local methods not requiring such an estimate and in particular to the \acro{$\beta$D} and \acro{\smaller KLD}. The $\gamma$-divergence \citep{fujisawa2008robust} has also been shown to produce robust inference without requiring a non-parametric density estimate \citep{hung2018robust,knoblauch2022generalized} and in general behaves very similarly; see Section \ref{App:gammaD}. \subsection{Posterior Predictive Stability}\label{Sub:NotionsStability} Our results will investigate the stability of general Bayesian posterior predictive distributions \begin{align} m^D_{f}(y_{new}|y)&=\int f(y_{new};\theta)\pi^D(\theta|y)d\theta\label{Equ:PredictiveDensityMetric} \end{align} for an exchangeable observation $y_{new}\in\mathcal{Y}$ to the specification of the model $f$ and the \DGP $g$. As a result, we focus on the stability of the posterior distribution for observables $y\in\mathcal{Y}$ to perturbations of the prior for observables, $f$, and of the generating distribution for these observables, $g$. From a decision-making perspective, the posterior predictive is often integrated over to calculate expected utilities, and therefore stable posterior predictive distributions correspond to stable decision making. We consider two metrics for stability. The first is the divergence between posterior predictives, which, if small, indicates that a \acro{\smaller DM} with either distribution would make similar decisions. The second measures the difference between the posterior predictives' divergences from the \DGP.
Predictives that are close to the \DGP will make close-to-optimal decisions and, therefore, two predictives that are equally close will make similarly good decisions. Predictive stability is also a more reasonable requirement than, say, posterior stability. The parameter posteriors for two distinct models/\DGPs will generally converge in different places \citep[e.g.][]{smith2007local}. However, divergent parameter posteriors do not necessarily imply divergent posterior predictives, as we show. Further, focusing on observables allows us to consider interesting cases of neighbouring models with nested parameter spaces (see Section \ref{Sub:MixtureModeling}). \section{Stability to the specification of the likelihood function}{\label{Sec:StabilityLikelihood}} In this section we consider two potential likelihood models for the data. These could correspond to the \acro{\smaller DM}'s true and functional beliefs, or two equally preferable candidates for the latter. In both cases, the \acro{\smaller DM} would not wish their posterior inferences to diverge if one candidate were used in place of the other. \subsection{An interpretable neighbourhood of likelihood models} We first consider the stability of inference to the specification of the \acro{\smaller DM}'s likelihood model. Likelihood models $f$ and $h$ are considered to be in the same equivalence class of likelihood models for $y\in\mathcal{Y}$ if they satisfy Definition \ref{Def:LikelihoodNeighbourhood}. \begin{definition}[\acro{\smaller TVD} neighbourhood of likelihood models] Likelihood models $f(\cdot;\theta)$ and $h(\cdot;\eta)$ for observable $y\in\mathcal{Y}$ are in the neighbourhood $\mathcal{N}_{\epsilon}^{\acro{\smaller TVD}}$ of size $\epsilon$ if \begin{align} &\forall \theta \in \Theta, \exists \eta \in \mathcal{A} \textrm{ s.t.
} \acro{\smaller TVD}(f(\cdot;\theta), h(\cdot; \eta)) \leq \epsilon \quad\textrm{and}\quad \forall \eta \in \mathcal{A}, \exists \theta \in \Theta \quad\textrm{s.t.} \quad \acro{\smaller TVD}(f(\cdot;\theta), h(\cdot; \eta)) \leq \epsilon \nonumber \end{align} \label{Def:LikelihoodNeighbourhood} \end{definition} Neighbourhood $\mathcal{N}_{\epsilon}^{\acro{\smaller TVD}}$ demands the existence of functions $I_f: \Theta \mapsto \mathcal{A}$ and $I_h: \mathcal{A}\mapsto \Theta $ such that for all $\theta$, $\acro{\smaller TVD}(f(\cdot; \theta), h(\cdot; I_f(\theta)))$ is small and for all $\eta$, $\acro{\smaller TVD}(h(\cdot; \eta), f(\cdot; I_h(\eta)))$ is also small. The symmetry of Definition \ref{Def:LikelihoodNeighbourhood} allows $\Theta$ and $\mathcal{A}$ to have different dimensions. For two likelihoods to be close in terms of \acro{\smaller TVD}, the greatest difference in any of the probability statements made by the two likelihoods must be small on the natural scale: \begin{equation} \acro{\smaller TVD}(f(\cdot;\theta),h(\cdot;\theta)) := \sup_{A\subseteq\mathcal{Y}}\left|\int_A f(y;\theta)dy-\int_A h(y;\theta)dy\right| = \frac{1}{2}\int \left|f(y;\theta)-h(y;\theta)\right|dy. \label{Equ:TVD} \end{equation} Additionally, \acro{\smaller TVD} neighbourhoods contain $\epsilon$-contaminations considered in the context of prior stability by \cite{gustafson1995local} and often used as outlier models \citep[e.g.][]{aitkin1980mixture}. As a result, it is reasonable for a \acro{\smaller DM} to be able to elicit their beliefs within a $\mathcal{N}_{\epsilon}^{\acro{\smaller TVD}}$ neighbourhood of their chosen model, and such a neighbourhood contains standard perturbations for sensitivity analysis. The weak conditions required for the results of the following sections are formally stated in Section \ref{Sub:Conditions}.
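The claim that $\mathcal{N}_{\epsilon}^{\acro{\smaller TVD}}$ contains $\epsilon$-contaminations follows from $|f - ((1-\epsilon)f + \epsilon k)| = \epsilon|f - k|$, so that the contaminated density is within $\epsilon\,\acro{\smaller TVD}(f,k) \leq \epsilon$ of $f$. A quick numerical check of \eqref{Equ:TVD} (a sketch assuming SciPy, with an arbitrary contaminating density of our own choosing):

```python
import numpy as np
from scipy import stats

eps = 0.1
grid = np.linspace(-30.0, 30.0, 400001)
dy = grid[1] - grid[0]

f = stats.norm.pdf(grid, 0.0, 1.0)     # nominal model
k = stats.norm.pdf(grid, 5.0, 3.0)     # contaminating density
h = (1 - eps) * f + eps * k            # eps-contamination of f

# TVD(f, h) = (1/2) * integral of |f - h|, which here equals eps * TVD(f, k) <= eps.
tvd_fh = 0.5 * np.sum(np.abs(f - h)) * dy
tvd_fk = 0.5 * np.sum(np.abs(f - k)) * dy
print(tvd_fh, eps * tvd_fk)
```

However heavy the contaminating tail, the contaminated density never leaves the $\acro{\smaller TVD}$ ball of radius $\epsilon$, which is why this neighbourhood captures standard outlier models.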
Briefly, Condition \ref{Cond:BoundedDensities} requires the boundedness of the essential suprema of the models $f$ and $h$ and the \DGP $g$, and Condition \ref{Cond:StochasticPosteriorConcentration} requires sufficient concentration of the posterior $\pi^D_{f}(\theta|y)$ around $\theta^{D}_f$. For clarity of argument, we proceed under the assumption that the priors $\pi^D(\theta)$ and $\pi^D(\eta)$ are fixed. \subsection{The stability of the \acro{$\beta$D}-Bayes} In the first of our main results, Theorem \ref{Thm:StabilityPosteriorPredictivebetaDiv} bounds the \textit{a posteriori} divergence between the predictive distributions resulting from likelihood models $f$ and $h$ as a function of the size of the \textit{a priori} neighbourhood $\mathcal{N}_{\epsilon}^{\acro{\smaller TVD}}$. \begin{theorem}[Stability of the posterior predictive distributions of two models under the \acro{$\beta$D}-Bayes inference] Given $1< \beta\leq 2$ and two likelihood models $\left\lbrace f(\cdot;\theta): \theta\in\Theta\right\rbrace$ and $\left\lbrace h(\cdot; \eta):\eta\in\mathcal{A}\right\rbrace$ such that $f,h\in\mathcal{N}_{\epsilon}^{\acro{\smaller TVD}}$ for $\epsilon>0$.
Then provided there exists $M<\infty$ such that Condition \ref{Cond:BoundedDensities} holds, and $y$, $\pi^{(\beta)}(\theta)$ and $\pi^{(\beta)}(\eta)$ satisfy Condition \ref{Cond:StochasticPosteriorConcentration} for $D = $ \acro{\smaller$D_{B}^{(\beta)}$}, we have \begin{align} \acro{\smaller$D_{B}^{(\beta)}$}(m^{(\beta)}_{f}(\cdot|y)||m^{(\beta)}_{h}(\cdot|y)) &\leq \frac{M^{\beta - 1}(3\beta - 2)}{\beta(\beta - 1)}\epsilon + \frac{1}{c_1} + 2\frac{M^{\beta - 1}}{\beta-1}\int \acro{\smaller TVD}(g, f(\cdot;\theta))\pi^{(\beta)}_{f}(\theta|y)d\theta\nonumber\\ \acro{\smaller$D_{B}^{(\beta)}$}(m^{(\beta)}_{h}(\cdot|y)||m^{(\beta)}_{f}(\cdot|y)) &\leq \frac{M^{\beta - 1}(3\beta - 2)}{\beta(\beta - 1)}\epsilon + \frac{1}{c_2} + 2\frac{M^{\beta - 1}}{\beta-1}\int \acro{\smaller TVD}(g, h(\cdot;\eta))\pi^{(\beta)}_{h}(\eta|y)d\eta,\nonumber \end{align} where $c_1$ and $c_2$ are defined in Condition \ref{Cond:StochasticPosteriorConcentration}. \label{Thm:StabilityPosteriorPredictivebetaDiv} \end{theorem} Further, Theorem \ref{Thm:StabilityDGPapproxBeta} bounds the absolute difference between the \acro{$\beta$D}s from the \DGP of the posterior predictive distributions produced by two likelihood models within $\mathcal{N}_{\epsilon}^{\acro{\smaller TVD}}$. \begin{theorem}[The stability in the posterior predictive approximation of two models to the \DGP of \acro{$\beta$D}-Bayes inference] Given $1< \beta\leq 2$ and two likelihood models $\left\lbrace f(\cdot;\theta): \theta\in\Theta\right\rbrace$ and $\left\lbrace h(\cdot; \eta):\eta\in\mathcal{A}\right\rbrace$ such that $f,h\in\mathcal{N}_{\epsilon}^{\acro{\smaller TVD}}$ for $\epsilon>0$.
Then provided there exists $M<\infty$ such that Condition \ref{Cond:BoundedDensities} holds, and $y$, $\pi^{(\beta)}(\theta)$ and $\pi^{(\beta)}(\eta)$ satisfy Condition \ref{Cond:StochasticPosteriorConcentration} for $D = $ \acro{\smaller$D_{B}^{(\beta)}$}, we have \begin{equation} |\acro{\smaller$D_{B}^{(\beta)}$}(g||m^{(\beta)}_{f}(\cdot|y))- \acro{\smaller$D_{B}^{(\beta)}$}(g||m^{(\beta)}_{h}(\cdot|y))|\leq \frac{M^{\beta - 1}(3\beta - 2)}{\beta(\beta - 1)}\epsilon+ \frac{1}{c} + C^{(\beta)}(f,h,y),\nonumber \end{equation} where $c = \min\{c_1, c_2\}$ as defined in Condition \ref{Cond:StochasticPosteriorConcentration} and \begin{align} C^{(\beta)}(f,h,y):&= \max \left\lbrace\int\acro{\smaller$D_{B}^{(\beta)}$}(g||f(\cdot;\theta))\pi^{(\beta)}_{f}(\theta|y)d\theta-\acro{\smaller$D_{B}^{(\beta)}$}(g||m^{(\beta)}_{f}(\cdot|y)),\right.\nonumber\\ &\qquad\left.\int\acro{\smaller$D_{B}^{(\beta)}$}(g||h(\cdot;\eta))\pi^{(\beta)}_{h}(\eta|y)d\eta-\acro{\smaller$D_{B}^{(\beta)}$}(g||m^{(\beta)}_{h}(\cdot|y)) \right\rbrace.\nonumber \end{align} \label{Thm:StabilityDGPapproxBeta} \end{theorem} The value $M$ present in both Theorems \ref{Thm:StabilityPosteriorPredictivebetaDiv} and \ref{Thm:StabilityDGPapproxBeta} is often easy to bound, for example by selecting a minimum value of the scale of Gaussian or Student's-$t$ likelihood models, and we expect $c_1, c_2 \rightarrow\infty$ as $n\rightarrow\infty$ (see Section \ref{Sub:Conditions}). The final term in Theorem \ref{Thm:StabilityPosteriorPredictivebetaDiv} involves the \acro{\smaller TVD} between the models under consideration and the unknown \DGP. While it is difficult to say anything formal about this, Lemma \ref{Lem:BoundingBetaDTVD} shows that the \acro{$\beta$D} can be bounded above by the \acro{\smaller TVD}, and therefore any values of parameters $\theta$ and $\eta$ that are close to $g$ in \acro{\smaller TVD} should have high posterior mass under the \acro{$\beta$D} posterior.
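As a rough sanity check on the scale of the quantities appearing in these bounds, the \acro{$\beta$D} can be evaluated numerically. The sketch below uses the density-power form of the divergence implied by the loss \eqref{Equ:betaDloss} (our own restatement; the formal definition is in Section \ref{sec:DivergenceDefinitions}), verifying that it vanishes when the two densities coincide and remains small for the \acro{\smaller TVD}-close pair of Figure \ref{Fig:norm_t_neighbourhood_predictives}:

```python
import numpy as np
from scipy import stats

beta = 1.5
grid = np.linspace(-30.0, 30.0, 400001)
dy = grid[1] - grid[0]

def beta_div(g, f):
    # Density-power (beta) divergence implied by the beta-divergence loss:
    # D_B(g||f) = int g^b/(b(b-1)) + f^b/b - g*f^(b-1)/(b-1) dy, with b = beta.
    b = beta
    integrand = g**b / (b * (b - 1)) + f**b / b - g * f ** (b - 1) / (b - 1)
    return float(np.sum(integrand) * dy)

f = stats.norm.pdf(grid, 0.0, np.sqrt(1.16))   # adjusted Gaussian of Figure 1
h = stats.t.pdf(grid, df=5)                    # Student's-t of Figure 1

d_ff = beta_div(f, f)   # zero when the densities coincide
d_fh = beta_div(f, h)   # small for the TVD-close pair
print(d_ff, d_fh)
```

The divergence between the neighbouring pair is orders of magnitude smaller than typical values between well-separated densities, consistent with the small \textit{a posteriori} gaps the theorems guarantee.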
On the other hand, $C^{(\beta)}(f,h,y)$ in Theorem \ref{Thm:StabilityDGPapproxBeta} is related to the concentration of the posteriors $\pi^{(\beta)}_{f}(\theta|y)$ and $\pi^{(\beta)}_{h}(\eta|y)$, with Jensen's inequality and the convexity of the \acro{$\beta$D} guaranteeing that $C^{(\beta)}(f,h,y)\geq 0$. Under suitable regularity conditions, as $n\rightarrow\infty$ the posterior collapses to a point mass \citep{chernozhukov2003mcmc, lyddon2018generalized} and this term converges to 0. Importantly, Theorem \ref{Thm:StabilityDGPapproxBeta} does not depend on how well specified the two likelihood models are for the \DGP. \subsection{The stability of the \acro{\smaller KLD}-Bayes}{\label{Sub:StabilityLikelihood_KLD}} Figure \ref{Fig:norm_t_neighbourhood_predictives} demonstrates that the stability afforded by the \acro{$\beta$D}-Bayes is not afforded by the \acro{\smaller KLD}-Bayes. The \acro{\smaller KLD} is recovered from the \acro{$\beta$D} as $\beta\rightarrow1$. However, in such a scenario, the bounds proven in the previous sections tend to infinity. Instead, Lemma \ref{Thm:StabilityDGPapproxKLD} provides an analogous stability result for traditional Bayesian updating.
\begin{lemma}[The stability in the posterior predictive approximation of the \DGP of \acro{\smaller KLD}-Bayes inference] For any two likelihood models $\left\lbrace f(\cdot;\theta): \theta\in\Theta\right\rbrace$ and $\left\lbrace h(\cdot; \eta):\eta\in\mathcal{A}\right\rbrace$, and $y$, $\pi^{\acro{\smaller KLD}}(\theta)$ and $\pi^{\acro{\smaller KLD}}(\eta)$ satisfying Condition \ref{Cond:StochasticPosteriorConcentration} for $D = $ \acro{\smaller KLD}, we have that \begin{align} |\acro{\smaller KLD}(g||m^{\acro{\smaller KLD}}_{f}(\cdot|y))- \acro{\smaller KLD}(g||m^{\acro{\smaller KLD}}_{h}(\cdot|y))|&\leq C^{\acro{\smaller KLD}}(f,h,y) + \frac{1}{c} + T(f,h,y), \nonumber \end{align} where $c := \min\{c_1, c_2\}$ as defined in Condition \ref{Cond:StochasticPosteriorConcentration} and \begin{align} T(f,h,y):&= \max \left\lbrace\int \int g(\cdot) \log \frac{f(\cdot;\theta)}{h(\cdot;I_f(\theta))}d\mu\pi^{\acro{\smaller KLD}}_{f}(\theta|y)d\theta,\right.\nonumber\\ &\qquad\left. \int\int g(\cdot) \log \frac{h(\cdot;\eta)}{f(\cdot;I_h(\eta))}d\mu\pi^{\acro{\smaller KLD}}_{h}(\eta|y)d\eta \right\rbrace\label{Equ:StabilityTermKLD} \\ C^{\acro{\smaller KLD}}(f,h,y):&= \max \left\lbrace\int\acro{\smaller KLD}(g||f(\cdot;\theta))\pi^{\acro{\smaller KLD}}_{f}(\theta|y)d\theta-\acro{\smaller KLD}(g||m^{\acro{\smaller KLD}}_{f}(\cdot|y)),\right.\nonumber\\ &\qquad\left.\int\acro{\smaller KLD}(g||h(\cdot;\eta))\pi^{\acro{\smaller KLD}}_{h}(\eta|y)d\eta-\acro{\smaller KLD}(g||m^{\acro{\smaller KLD}}_{h}(\cdot|y)) \right\rbrace.\nonumber \end{align} \label{Thm:StabilityDGPapproxKLD} \end{lemma} We investigate $T(f,h,y)$, the term not analogous to any of those from Theorem \ref{Thm:StabilityDGPapproxBeta}. Without loss of generality assume that the second term in \eqref{Equ:StabilityTermKLD} is the largest.
Then, the reverse Pinsker's inequality \citep{sason2016f,binette2019a} provides \begin{align} \int g(\cdot) \log \frac{h(\cdot;\eta)}{f(\cdot;I_h(\eta))}d\mu = \int \frac{g(\cdot)}{h(\cdot;\eta)}h(\cdot;\eta) \log \frac{h(\cdot;\eta)}{f(\cdot;I_h(\eta))}d\mu &\leq M^{\ast}_h\acro{\smaller KLD}(h(\cdot;\eta)||f(\cdot;I_h(\eta)))\nonumber\\ &\leq M^{\ast}_h K_{h,f}\acro{\smaller TVD}(h(\cdot;\eta),f(\cdot;I_h(\eta)))\nonumber \end{align} where $M^{\ast}_h=\esssup \frac{g}{h(\cdot;\theta_h)}$ and $K_{h,f}=\left(\frac{\log(a)}{a-1}+\frac{\log(b)}{1-b}\right)$ with $a = \essinf \frac{dF}{dH}$ and $b = \esssup \frac{dF}{dH}$. As a result, a \acro{\smaller TVD} ball around the likelihood model is not sufficient for posterior stability when using \bayesrule updating. In fact, posterior stability can only be guaranteed according to Lemma \ref{Thm:StabilityDGPapproxKLD} if \begin{equation} \left|\log(h(\cdot;\eta))-\log(f(\cdot;I_h(\eta)))\right|\label{Equ:log_h_minus_log_f} \end{equation} is small in regions where $g$ has density. Without knowledge of $g$, this requires that \eqref{Equ:log_h_minus_log_f} be small everywhere, requiring the \acro{\smaller DM} to be confident in the accuracy of their probability statements on the log-scale rather than on the natural scale as was the case for $\mathcal{N}^{\acro{\smaller TVD}}_{\epsilon}$. Logarithms act to inflate the magnitude of small numbers and thus ensuring that $\left|\log(h(\cdot;\eta))-\log(f(\cdot;I_h(\eta)))\right|$ is small requires that $f$ and $h$ are increasingly similar as their values decrease. This requires the \acro{\smaller DM} to be more and more confident of the accuracy of their probability specifications as they get further and further into the tails, something that is known to already be very difficult for low dimensional problems \citep{winkler1968evaluation,o2006uncertain}, and becomes increasingly difficult as the dimension of the observation space increases. 
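This contrast between the natural and log scales is easy to see numerically. The sketch below uses a quartile-matched Gaussian/Student's-$t$ pair (the hyperparameter values are those of the experiments later in the paper): the densities are close pointwise everywhere, yet the log densities diverge sharply in the tail.

```python
import numpy as np
from scipy import stats

# Quartile-matched pair: Gaussian with inflated variance vs Student's-t, nu = 5.
f = stats.norm(0, np.sqrt(1.16))
h = stats.t(df=5)

ys = np.linspace(-10, 10, 2001)
nat_gap = np.max(np.abs(f.pdf(ys) - h.pdf(ys)))   # natural-scale gap: tiny
log_gap = abs(f.logpdf(8.0) - h.logpdf(8.0))      # log-scale gap in the tail: huge
print(nat_gap, log_gap)
```

The same \acro{\smaller TVD}-close pair that is unproblematic for the \acro{$\beta$D}-Bayes thus violates the log-scale closeness that \bayesrule updating requires.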
\section{Stability to the \DGP}{\label{Sec:StabilityDGP}} \subsection{A reasonable neighbourhood of \DGP perturbations} Our second series of results concerns the stability of inferences from a single model $\left\{f(\cdot;\theta); \theta \in \Theta\right\}$ to perturbations of the \DGP for $y \in\mathcal{Y}$. We consider updating on datasets $y_1:=(y_1,\ldots, y_{n_1})\sim g_1$ or $y_2:=(y_1,\ldots, y_{n_2})\sim g_2$ with $n_1, n_2 > 0$ and $g_1$ and $g_2$ satisfying Definition \ref{Def:DGPNeighbourhood}. \begin{definition}[\acro{\smaller TVD} Neighbourhood of data generating processes] Data generating processes $g_1$ and $g_2$ for observable $y\in\mathcal{Y}$ are in the neighbourhood $\mathcal{G}_{\epsilon}^{\acro{\smaller TVD}}$ of size $\epsilon$ if $\acro{\smaller TVD}(g_1, g_2) \leq \epsilon$. \label{Def:DGPNeighbourhood} \end{definition} The \acro{\smaller TVD} provides a relevant and reasonable way to describe perturbations of the \DGP. It contains $\epsilon$-contamination neighbourhoods as considered by \cite{matsubara2021robust} in the context of `global bias-robustness' and also in Figure \ref{Fig:norm_t_neighbourhood_predictives}. It requires that the data sets were generated under mechanisms that are close on the natural scale, rather than on the log scale considered in the \acro{\smaller KLD} neighbourhoods of \cite{miller2018robust}. Conceptually, it is convenient to think about datasets such that $n_1 = n_2$ but this is not necessary. The conditions for the results of the next sections are similar to those required in Section \ref{Sec:StabilityLikelihood} and are stated in full in Section \ref{Sub:Conditions}. \subsection{The stability of the \acro{$\beta$D}} Theorem \ref{Thm:StabilityPosteriorPredictivebetaDiv2} bounds the \acro{$\beta$D} between the posterior predictive distributions resulting from model $f$ and data from two \DGP{}s in the $\mathcal{G}_{\epsilon}^{\acro{\smaller TVD}}$ neighbourhood.
\begin{theorem}[The stability of the posterior predictive distribution under two \DGP{}s of the \acro{$\beta$D}-Bayes inference] Given $1< \beta\leq 2$ and likelihood model $\left\lbrace f(\cdot;\theta): \theta\in\Theta\right\rbrace$ and two data sets $y_1:=(y_1,\ldots, y_{n_1}) \sim g_1$ and $y_2:=(y_1,\ldots, y_{n_2}) \sim g_2$ for $n_1, n_2 > 0$ with $\{g_1, g_2\}\in \mathcal{G}_{\epsilon}^{\acro{\smaller TVD}}$. Then provided there exists $M<\infty$ such that Condition \ref{Cond:BoundedDensities2} holds, and Condition \ref{Cond:StochasticPosteriorConcentration2} holds for $D = $ \acro{\smaller$D_{B}^{(\beta)}$}, $y_1$, $y_2$ and $\pi^{(\beta)}(\theta)$, then \begin{align} \acro{\smaller$D_{B}^{(\beta)}$}(m^{(\beta)}_{f}(\cdot|y_1)||m^{(\beta)}_{f}(\cdot|y_2))\leq& 2\frac{M^{\beta - 1}}{\beta-1}\epsilon + \frac{1}{c_{\mathcal{S}^{(1)}}} + 2\frac{M^{\beta - 1}}{\beta-1}\int \acro{\smaller TVD}(g_1, f(\cdot;\theta_1))\pi^{(\beta)}_{f}(\theta_1|y_1)d\theta_1.\nonumber\\ \acro{\smaller$D_{B}^{(\beta)}$}(m^{(\beta)}_{f}(\cdot|y_2)||m^{(\beta)}_{f}(\cdot|y_1)) \leq& 2\frac{M^{\beta - 1}}{\beta-1}\epsilon + \frac{1}{c_{\mathcal{S}^{(2)}}} + 2\frac{M^{\beta - 1}}{\beta-1}\int \acro{\smaller TVD}(g_2, f(\cdot;\theta_2))\pi^{(\beta)}_{f}(\theta_2|y_2)d\theta_2.\nonumber \end{align} where $c_{\mathcal{S}^{(1)}}$ and $c_{\mathcal{S}^{(2)}}$ are defined in Condition \ref{Cond:StochasticPosteriorConcentration2}. \label{Thm:StabilityPosteriorPredictivebetaDiv2} \end{theorem} Further, Theorem \ref{Thm:StabilityDGPapproxBeta2} bounds the difference in the \acro{$\beta$D} from the \DGP of the \acro{$\beta$D}-Bayes posterior predictive distributions resulting from data from the two \DGP{}s.
\begin{theorem}[The stability in the posterior predictive approximation of two \DGPs under the same model of \acro{$\beta$D}-Bayes inference] Given $1< \beta\leq 2$ and likelihood model $\left\lbrace f(\cdot;\theta): \theta\in\Theta\right\rbrace$ and two data sets $y_1:=(y_1,\ldots, y_{n_1}) \sim g_1$ and $y_2:=(y_1,\ldots, y_{n_2}) \sim g_2$ for $n_1, n_2 > 0$ with $\{g_1, g_2\}\in \mathcal{G}_{\epsilon}^{\acro{\smaller TVD}}$. Then provided there exists $M<\infty$ such that Condition \ref{Cond:BoundedDensities2} holds, and Condition \ref{Cond:StochasticPosteriorConcentration2} holds for $D = $ \acro{\smaller$D_{B}^{(\beta)}$}, $y_1$, $y_2$ and $\pi^{(\beta)}(\theta)$, then \begin{equation} |\acro{\smaller$D_{B}^{(\beta)}$}(g_1||m^{(\beta)}_{f}(\cdot|y_1))- \acro{\smaller$D_{B}^{(\beta)}$}(g_2||m^{(\beta)}_{f}(\cdot|y_2))|\leq \frac{M^{\beta - 1}(\beta + 2)}{\beta(\beta - 1)}\epsilon + \frac{1}{c} + C^{(\beta)}(f,y_1, y_2),\nonumber \end{equation} where $c:= \min\{c_{\mathcal{S}^{(1)}}, c_{\mathcal{S}^{(2)}}\}$ defined in Condition \ref{Cond:StochasticPosteriorConcentration2} and \begin{align} C^{(\beta)}(f,y_1, y_2):&= \max \left\lbrace\int\acro{\smaller$D_{B}^{(\beta)}$}(g_1||f(\cdot;\theta_1))\pi^{(\beta)}(\theta_1|y_1)d\theta_1-\acro{\smaller$D_{B}^{(\beta)}$}(g_1||m^{(\beta)}_{f}(\cdot|y_1)),\right.\nonumber\\ &\qquad\left.\int\acro{\smaller$D_{B}^{(\beta)}$}(g_2||f(\cdot;\theta_2))\pi^{(\beta)}(\theta_2|y_2)d\theta_2-\acro{\smaller$D_{B}^{(\beta)}$}(g_2||m^{(\beta)}_{f}(\cdot|y_2)) \right\rbrace \nonumber \end{align} \label{Thm:StabilityDGPapproxBeta2} \end{theorem} Theorems \ref{Thm:StabilityPosteriorPredictivebetaDiv2} and \ref{Thm:StabilityDGPapproxBeta2} are analogous to Theorems \ref{Thm:StabilityPosteriorPredictivebetaDiv} and \ref{Thm:StabilityDGPapproxBeta} respectively. The value $M$ is still easy to bound here and the concentration terms $\frac{1}{c_{\mathcal{S}^{(j)}}}$ are expected to shrink to 0 as $n\rightarrow\infty$.
For Theorem \ref{Thm:StabilityPosteriorPredictivebetaDiv2}, we invoke Lemma \ref{Lem:BoundingBetaDTVD} and argue that the \acro{$\beta$D} posterior will place density on parameter values of model $f$ that are close to $g$ in \acro{\smaller TVD}. The bound of Theorem \ref{Thm:StabilityDGPapproxBeta2} depends on $C^{(\beta)}(f,y_1, y_2)$, which under mild regularity conditions goes to 0 as $n\rightarrow\infty$, demonstrating that the \acro{$\beta$D}-Bayes is stable to \acro{\smaller TVD} perturbations of the data, independently of how well the model approximates either of the \DGP{}s. \subsection{The stability of the \acro{\smaller KLD}-Bayes} Figure \ref{Fig:norm_t_neighbourhood_predictives} showed that updating using \eqref{Equ:bayesrule} is not stable to perturbations of the \DGP. The data considered is within a $\mathcal{G}_{0.1}^{\acro{\smaller TVD}}$ neighbourhood of data generated from $\mathcal{N}(0, 1)$, and, unlike for the \acro{$\beta$D}-Bayes, the estimated posterior predictive is vastly different from what would have been estimated under the uncontaminated \DGP. Lemma \ref{Thm:StabilityDGPapproxKLD2} investigates perturbations of the \DGP to which traditional Bayesian inference is stable.
\begin{lemma}[The stability in the posterior predictive approximation of two \DGPs under the same model of \acro{\smaller KLD}-Bayes inference] For likelihood model $\left\lbrace f(\cdot;\theta): \theta\in\Theta\right\rbrace$ and data sets $y_1:=(y_1,\ldots, y_{n_1}) \sim g_1$ and $y_2:=(y_1,\ldots, y_{n_2}) \sim g_2$ for $n_1, n_2 > 0$, given Condition \ref{Cond:StochasticPosteriorConcentration2} holds for $D = $ \acro{\smaller KLD}, $y_1$, $y_2$ and $\pi^{\acro{\smaller KLD}}(\theta)$, we have that \begin{align} |\acro{\smaller KLD}(g_1||m^{\acro{\smaller KLD}}_{f}(\cdot|y_1))- \acro{\smaller KLD}(g_2||m^{\acro{\smaller KLD}}_{f}(\cdot|y_2))|&\leq C^{\acro{\smaller KLD}}(f,y_1, y_2) + \frac{1}{c} + T_1(g_1, g_2) + T_2(f,y_1, y_2), \nonumber \end{align} where $c:= \min\{c_{\mathcal{S}^{(1)}}, c_{\mathcal{S}^{(2)}}\}$ as defined in Condition \ref{Cond:StochasticPosteriorConcentration2} and \begin{align} T_1(g_1, g_2):&= \max\left\lbrace \int g_2\log g_2 - g_1 \log g_1d\mu, \int g_1\log g_1 - g_2\log g_2d\mu\right\rbrace\nonumber\\ T_2(f,y_1, y_2):&= \max \left\lbrace\int \int (g_1 - g_2)\log f(\cdot;\theta_1)d\mu\pi^{\acro{\smaller KLD}}(\theta_1|y_1)d\theta_1,\right.\nonumber\\ &\qquad\left.\int \int (g_2 - g_1)\log f(\cdot;\theta_2)d\mu\pi^{\acro{\smaller KLD}}(\theta_2|y_2)d\theta_2 \right\rbrace\nonumber \\ C^{\acro{\smaller KLD}}(f,y_1, y_2):&= \max \left\lbrace\int\acro{\smaller KLD}(g_1||f(\cdot;\theta_1))\pi^{\acro{\smaller KLD}}(\theta_1|y_1)d\theta_1-\acro{\smaller KLD}(g_1||m^{\acro{\smaller KLD}}_{f}(\cdot|y_1)),\right.\nonumber\\ &\qquad\left.\int\acro{\smaller KLD}(g_2||f(\cdot;\theta_2))\pi^{\acro{\smaller KLD}}(\theta_2|y_2)d\theta_2-\acro{\smaller KLD}(g_2||m^{\acro{\smaller KLD}}_{f}(\cdot|y_2)) \right\rbrace \nonumber \end{align} \label{Thm:StabilityDGPapproxKLD2} \end{lemma} Lemma \ref{Thm:StabilityDGPapproxKLD2} shows that stability of the \acro{\smaller KLD} approximation of the \DGP by model $f$ to perturbations of the \DGP requires that $T_1(g_1, g_2)$
and $T_2(f,y_1, y_2)$ are small. Small $T_1(g_1, g_2)$ requires $g_1$ and $g_2$ to have similar entropies, which is not guaranteed for \DGPs satisfying Definition \ref{Def:DGPNeighbourhood}. Alternatively, if $|\log f(\cdot; \theta)|$ is bounded then $T_2(f,y_1, y_2)$ can be bounded above by $\acro{\smaller TVD}(g_1, g_2)$. However, the log-likelihood is typically unbounded: as $f(y; \theta)\rightarrow 0$, $|\log f(y;\theta)| \rightarrow \infty$. Therefore, $T_2(f,y_1, y_2)$ being small requires $g_1$ and $g_2$ to be increasingly close in the tails of the fitted models, prohibiting, for example, outlier contaminations such as in Figure \ref{Fig:norm_t_neighbourhood_predictives}. \section{Setting $\beta$}{\label{Sec:SettingBeta}} The only additional specification required from the \acro{\smaller DM} when implementing the \acro{$\beta$D}-Bayes compared with the \acro{\smaller KLD}-Bayes is that they select the value of $\beta$. This hyperparameter regulates the trade-off between robustness and efficiency \citep[e.g.][]{basu1998robust}. Minimising the \acro{\smaller KLD} ($\beta=1$) provides the most efficient inference but is very sensitive to outliers. Increasing $\beta$ away from 1 gains robustness to outliers at a cost to efficiency. The bounds of the previous theorems all depend on $\beta$ and we can therefore additionally interpret $\beta$ as a sort of meta prior for the \acro{\smaller DM}'s confidence in their elicited model or data collection. The less confident they are, the greater $\beta$ will need to be to prevent non-negligible \textit{a posteriori} divergence. Eliciting $\beta$ as such requires the \acro{\smaller DM} to reflect on the value of $\epsilon$ associated with their beliefs or the quality of the data.
For the neighbourhoods of Definition \ref{Def:LikelihoodNeighbourhood}, this can be obtained by considering, for a given set of parameters, what the largest possible error in any of the probability statements could be, or for Definition \ref{Def:DGPNeighbourhood} by considering the minimal proportion of a population that they believe is consistent with the \DGP. Our results are also informative about when the value of $\beta$ might be too large. The \acro{\smaller DM} should want their \acro{$\beta$D}-Bayes inferences to be stable because $\epsilon$ is small, and not because the terms involving $\beta$ that multiply $\epsilon$ in the theorems in Sections \ref{Sec:StabilityLikelihood} and \ref{Sec:StabilityDGP} are small. Alternatively, there is increasing interest in data-driven methods to learn $\beta$. \cite{warwick2005choosing, ghosh2015robust, basak2021optimal} consider procedures to estimate $\beta$ to minimise the mean squared error (\MSE) of estimated model parameters; \cite{toma2011dual, kang2014minimum} estimate $\beta$ to minimise the maximum perturbation of the parameter estimates resulting from replacing one observation by the population estimated mean; and \cite{jewson2022general, yonekura2021adaptation} estimate $\beta$ to minimise the Fisher's divergence to the \DGP. Finally, \acro{$\beta$D}-Bayes inference appears not to be overly sensitive to the exact value of $\beta$. Figure \ref{Fig:norm_t_neighbourhood_predictives_sensitivity} demonstrates that for the example introduced in Section \ref{Sec:Introduction}, inference for the Gaussian and Student's-$t$ models is almost identical for values of $\beta\geq 1.3$. Section \ref{Sec:Sensitivity} provides further demonstration of this.
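The trade-off that $\beta$ controls can be seen directly in the per-observation loss that the \acro{$\beta$D}-Bayes minimises. Assuming the standard density-power loss $\ell_\beta(y,\theta) = -f(y;\theta)^{\beta-1}/(\beta-1) + \frac{1}{\beta}\int f(z;\theta)^{\beta}dz$, sketched below for a Gaussian likelihood where the integral is available in closed form, the loss of even an extreme outlier is bounded, unlike the log score recovered at $\beta = 1$.

```python
import numpy as np
from scipy import stats

def betaD_loss(y, mu, sigma, beta):
    # Per-observation density-power loss for a Gaussian likelihood; the
    # integral int N(z; mu, sigma^2)^beta dz has the closed form
    # beta^{-1/2} * (2 * pi * sigma^2)^{(1 - beta) / 2}.
    f_y = stats.norm(mu, sigma).pdf(y)
    integral = beta ** -0.5 * (2 * np.pi * sigma ** 2) ** ((1 - beta) / 2)
    return -f_y ** (beta - 1) / (beta - 1) + integral / beta

beta = 1.5
loss_typical = betaD_loss(0.0, 0.0, 1.0, beta)
loss_outlier = betaD_loss(50.0, 0.0, 1.0, beta)
log_loss_outlier = -stats.norm(0, 1).logpdf(50.0)
print(loss_typical, loss_outlier, log_loss_outlier)
# The betaD loss of the outlier is capped at the finite constant integral/beta,
# whereas the log-score loss of the same observation is enormous.
```

Larger $\beta$ caps the outlier loss more aggressively, which is the robustness side of the trade-off; the efficiency cost is not visible in this single-observation sketch.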
\begin{figure}[!ht] \begin{center} \includegraphics[trim= {0.0cm 0.00cm 0.4cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/Gaussian_t_sensitivity_plot_tikz-7.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.4cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/Gaussian_t_sensitivity_plot_tikz-9.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.4cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/Gaussian_t_sensitivity_plot_tikz-11.pdf}\\ \includegraphics[trim= {0.0cm 0.00cm 0.4cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/Gaussian_t_sensitivity_plot_tikz-15.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.4cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/Gaussian_t_sensitivity_plot_tikz-19.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.4cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/Gaussian_t_sensitivity_plot_tikz-23.pdf} \caption{Posterior predictive distributions using {\color{black}{\acro{$\beta$D}-Bayes}} updating on $n=1000$ observations from an $\epsilon$-contamination model $g(y) = 0.9\times\mathcal{N}\left(y;0,1\right) + 0.1 \times \mathcal{N}\left(y;5,3^2\right)$ for different values of $\beta$.} \label{Fig:norm_t_neighbourhood_predictives_sensitivity} \end{center} \end{figure} \section{Experiments}{\label{Sec:Experiments}} \subsection{Gaussian and Student's-$t$ likelihood}{\label{Sub:GaussianStudent}} We revisit the Gaussian and Student's-$t$ example briefly introduced in Section \ref{Sec:Introduction}. The likelihood models considered here are \begin{align} f_{\sigma^2_{adj}}(y;\theta):= \mathcal{N}\left(y;\mu,\sigma^2\times\sigma^2_{adj}\right)\textrm{ and }h_{\nu}(y;\eta):= \textrm{Student's}-t_{\nu}\left(y;\mu,\sigma^2\right).\label{Equ:GaussianStudent} \end{align} Hyperparameters $\nu=5$ and $\sigma^2_{adj}=1.16$ are fixed to match the quartiles of the two distributions for all $\mu$ and $\sigma^2$.
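This calibration can be verified numerically (a quick sketch): the quartiles of the two standardised densities coincide, and, since the \acro{\smaller TVD} between the pair is invariant to the shared location-scale parameters, a single quadrature gives the radius of a \acro{\smaller TVD} neighbourhood containing both likelihoods.

```python
import numpy as np
from scipy import integrate, stats

nu, sigma2_adj = 5, 1.16
f = stats.norm(0, np.sqrt(sigma2_adj))   # standardised Gaussian likelihood
h = stats.t(df=nu)                       # standardised Student's-t likelihood

# 1) The matched upper quartiles that fix nu and sigma2_adj.
q_f, q_h = f.ppf(0.75), h.ppf(0.75)

# 2) TVD between the pair; invariance to the common (mu, sigma^2) means one
#    computation suffices for the whole family.
tvd, _ = integrate.quad(lambda y: 0.5 * abs(f.pdf(y) - h.pdf(y)), -50, 50)
print(q_f, q_h, tvd)
```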
These were inspired by \cite{o2012probabilistic}, who argued that for absolutely continuous probability distributions, it is only reasonable to ask an expert to make a judgement about the median and the quartiles of a distribution along with maybe a few specially selected features. This is judged adequate because any two distributions with similar percentiles will look very similar; see, for example, Figure \ref{Fig:norm_t_neighbourhood_predictives}. However, Section \ref{Sub:StabilityLikelihood_KLD} suggests that greater precision is required to ensure the stability of \bayesrule updating. On the other hand, the likelihoods in \eqref{Equ:GaussianStudent} are contained in $\mathcal{N}^{\acro{\smaller TVD}}_{0.043}$. We generated $n=1000$ observations from the $\epsilon$-contamination model $g(y) = 0.9\times\mathcal{N}\left(y;0,1\right) + 0.1 \times \mathcal{N}\left(y;5,3^2\right)$ contained within the $\mathcal{G}^{\acro{\smaller TVD}}_{0.1}$ neighbourhood of $\mathcal{N}\left(y;0,1\right)$. We then conducted Bayesian updating under the Gaussian and Student's-$t$ likelihood using both \bayesrule and the \acro{$\beta$D}-Bayes ($\beta = 1.5$) under shared priors $\pi(\mu,\sigma^2) = \mathcal{N}\left(\mu;\mu_0,v_0\sigma^2\right)\mathcal{IG}(\sigma^2;a_0,b_0)$, with hyperparameters $(a_0=0.01,b_0=0.01,\mu_0=0,v_0=10)$. Figure \ref{Fig:norm_t_neighbourhood_predictives} and Figure \ref{Fig:norm_t_posteriors}, which plots the parameter posterior distributions for both models under both updating mechanisms, clearly demonstrate the stability of the \acro{$\beta$D}-Bayes across these two models and the lack of stability of traditional Bayesian updating. Not only is the \acro{$\beta$D} inference more stable across $\mathcal{N}^{\acro{\smaller TVD}}_{\epsilon}$, but the \acro{$\beta$D} predictive also better captures the majority of the \DGP than either predictive does under traditional Bayesian updating.
The capturing of the $\mathcal{N}\left(y;0,1\right)$ mode further illustrates the \acro{$\beta$D}-Bayes' stability across neighbourhoods of the \DGP. Figure \ref{Fig:norm_t_influence_functions} plots influence functions \citep{west1984outlier} for the \acro{\smaller KLD}-Bayes and \acro{$\beta$D}-Bayes under the Gaussian and Student's-$t$ model. Influence functions are the gradient of the loss function, evaluated at the parameter estimates, as a function of the observations; they show the impact each observation had on the analysis. Under the \acro{$\beta$D}-Bayes, the influence functions of the Gaussian and Student's-$t$ likelihoods are closer for almost every $y$, illustrating the stability to the model, and additionally, the influence functions for both models under the \acro{$\beta$D}-Bayes vary less with $y$, illustrating stability to the \DGP. \begin{figure}[!ht] \begin{center} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/betaD_Influence_Curve_tikz-1.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/betaD_Influence_Curve_tikz-2.pdf} \caption{Influence functions for parameters $\mu$ and $\sigma^2$ of the Gaussian and Student's-$t$ likelihood models under the \acro{\smaller KLD}-Bayes and \acro{$\beta$D}-Bayes with $\beta = 1.5$. } \label{Fig:norm_t_influence_functions} \end{center} \end{figure} \subsubsection{\acro{\smaller DLD} data}{\label{Sub:DLD}} We consider an RNA-sequencing data set from \cite{yuan2016plasma} measuring gene expression for $n = 192$ patients with different types of cancer. \cite{rossell2018tractable} studied the impact of 57 predictors on the expression of \acro{\smaller DLD}, a gene that can perform several functions such as metabolism regulation.
To illustrate our results, we selected the 15 variables with the 5 highest loadings in the first 3 principal components, and fitted regression models using the neighbouring models in \eqref{Equ:GaussianStudent} for the residuals. Section \ref{App:SelectedVariables} lists the selected variables. Figure \ref{Fig:DLDRegressions} demonstrates that the \acro{$\beta$D}-Bayes ($\beta = 1.5$) produces more stable estimates of the fitted residuals (top-left), the estimated density of the residuals (top-right), parameter estimates (bottom-left), and posterior predictive density for the observed data (bottom-right) than traditional Bayesian inference. \cite{rossell2018tractable} found evidence that this data is heavy-tailed, further demonstrated in Figure \ref{Fig:QQNormal}, which caused the \acro{\smaller KLD}-Bayes to estimate very different densities under the Gaussian and Student's-$t$ models, while the \acro{$\beta$D}-Bayes is stable to this feature of the data. Figure \ref{Fig:DLDRegressionsHist} shows the fit of the models to the posterior mean estimates of the standardised residuals, showing that as well as being stable, the \acro{$\beta$D}-Bayes produces good estimation around the mode of the \acro{\smaller DLD} data under both models. Section \ref{App:TGFB} considers a further regression example showing that even when one of the models under consideration is `well-specified' for the data, the \acro{$\beta$D}-Bayes inference continues to perform adequately.
\begin{figure}[!ht] \begin{center} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/DLD_betaD_norm_vs_t_tikz-8.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/DLD_betaD_norm_vs_t_tikz-9.pdf}\\ \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/DLD_betaD_norm_vs_t_tikz-5.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/DLD_betaD_norm_vs_t_tikz-6.pdf}\\ \caption{Posterior mean estimates of standardised residuals (\textbf{top left}), posterior mean estimated residuals distribution (\textbf{top-right}), absolute difference in posterior mean parameter estimates (\textbf{bottom left}) and difference in posterior predictive densities of the observations (\textbf{bottom right}) under the Gaussian and Student's-$t$ model of \acro{\smaller KLD}-Bayes and \acro{$\beta$D}-Bayes ($\beta = 1.5$) for the \acro{\smaller DLD} data.} \label{Fig:DLDRegressions} \end{center} \end{figure} \subsection{Mixture Modeling}{\label{Sub:MixtureModeling}} An advantage of considering the stability of the distributions for observables rather than parameters is that it allows `neighbouring' models to have different dimensions to their parameter space. For example, consider initial model $f(\cdot;\theta)$ and then `neighbouring' model \begin{align} h(\cdot;\eta)& = (1 - \omega)\times f(\cdot;\theta) + \omega\times h^{'}(\cdot;\kappa),\nonumber \end{align} for $\eta = \left\lbrace \theta,\kappa, \omega\right\rbrace$. Here, $h(\cdot;\eta)$ is a mixture model combining the likelihood model $f(\cdot;\theta)$, which could itself already be a mixture model, and some other density $h^{'}(\cdot;\kappa)$ with additional parameters $\kappa$.
For all $\theta\in\Theta$ and any $\kappa \in K$ we have that $\acro{\smaller TVD}(f\left(\cdot; \theta\right), h\left(\cdot;\left\lbrace \theta,\kappa, \omega\right\rbrace\right))<\omega$ and therefore a \acro{\smaller TVD} neighbourhood can be defined by upper bounding $\omega$. \subsubsection{Shapley Galaxy Dataset} We examine the Shapley galaxy dataset of \cite{drinkwater2004large}, recording the velocities of 4215 galaxies in the Shapley supercluster, a large concentration of gravitationally-interacting galaxies; see Figure \ref{Fig:MixtureModels}. The clustering tendency of galaxies continues to be a subject of interest in astronomy. \cite{miller2018robust} investigate this data using Gaussian mixture models and use their coarsened posterior to select the number of mixture components, finding considerable instability in the number of estimated components $K$ under different specifications of the coarsening parameter. See \cite{cai2021finite} for further issues with estimating the number of components in mixture models. We estimate Gaussian mixture models of the form \begin{align} f(y; \theta)= \sum_{k=1}^K \omega_k \mathcal{N}(y; \mu_k, \sigma_k), \nonumber \end{align} under the \acro{\smaller KLD}-Bayes and \acro{$\beta$D}-Bayes, considering number of components $K \in \{2, 3, 4, 5, 6\}$ and using the normal-inverse Wishart priors of \cite{fuquene2019choosing} (full details available in Section \ref{App:FiniteGaussianMixDetails}). \acro{$\beta$D}-Bayes inference for such one-dimensional mixture models is easy to implement using adaptive quadrature to approximate the necessary integral term $\frac{1}{\beta}\int h(z;\eta)^{\beta}dz$. We do not formally place any constraint on the estimation of $\omega_k$; however, any model that estimates a component with small $\omega_k$ can be seen as a neighbour of a model with one fewer component.
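Both the quadrature step and the $\omega$ bound above are straightforward to check; the sketch below uses arbitrary illustrative component values rather than the fitted Shapley parameters.

```python
from scipy import integrate, stats

beta, omega = 1.5, 0.05
f = stats.norm(0, 1).pdf                                  # base mixture model
extra = stats.norm(4, 2).pdf                              # candidate extra component
h = lambda y: (1 - omega) * f(y) + omega * extra(y)       # neighbouring mixture

# TVD(f, h) = 0.5 int |f - h| = omega * TVD(f, extra) <= omega.
tvd, _ = integrate.quad(lambda y: 0.5 * abs(f(y) - h(y)), -15, 20)

# The integral term (1/beta) int h(z)^beta dz required by the betaD loss,
# approximated by adaptive quadrature as described in the text.
int_term, _ = integrate.quad(lambda y: h(y) ** beta / beta, -15, 20)
print(tvd, int_term)
```

Upper bounding $\omega$ therefore controls the \acro{\smaller TVD} radius directly, whatever the extra component's parameters.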
Figure \ref{Fig:MixtureModels} shows the posterior mean approximation to the histogram of the data for the Gaussian mixture models under the \acro{\smaller KLD}-Bayes and \acro{$\beta$D}-Bayes, and Table \ref{Tab:MixtureModels} records the \acro{\smaller TVD} between the posterior mean predictive distributions as components are recursively added to the model. The \acro{$\beta$D}-Bayes inference for $\beta = 1.25$ and $1.5$ is more stable to the addition of an extra component. In particular, for $K \geq 3$ the \acro{$\beta$D}-Bayes inference stably estimates the biggest components of the data centered approximately at $5,000$ and $15,000$ $km/s$, while the \acro{\smaller KLD}-Bayes produces very different inference for these modes depending on the number of clusters selected. \begin{figure}[!ht] \begin{center} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.9\columnwidth]{Figures_new/KL_beta_MixtureBayesnorm_NIW_Shapley_stability_hists_tikz-1.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.9\columnwidth]{Figures_new/KL_beta_MixtureBayesnorm_NIW_Shapley_stability_hists_tikz-3.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.9\columnwidth]{Figures_new/KL_beta_MixtureBayesnorm_NIW_Shapley_stability_hists_tikz-2.pdf} \caption{Shapley Galaxy Data: Histograms of the data, in units of 1,000 km/s, excluding a small amount of data extending in a tail up to 80,000 km/s, with fitted Gaussian mixture models with $K = 2-6$ components under the \acro{\smaller KLD}-Bayes (\textbf{top}), \acro{$\beta$D}-Bayes with $\beta = 1.25$ (\textbf{middle}) and \acro{$\beta$D}-Bayes with $\beta = 1.5$ (\textbf{bottom}).} \label{Fig:MixtureModels} \end{center} \end{figure} \begin{table}[ht] \caption{Total variation distances between posterior mean predictive distributions for different numbers of mixture components $K$ under the \acro{\smaller KLD}-Bayes and \acro{$\beta$D} for $\beta = 1.25$ and $1.5$.} \centering \begin{tabular}{rcccc}
\hline Method & $K = 2$ vs $K = 3$ & $K = 3$ vs $K = 4$ & $K = 4$ vs $K = 5$ & $K = 5$ vs $K = 6$ \\ \hline \acro{\smaller KLD} & 0.27 & 0.12 & 0.13 & 0.03 \\ \acro{$\beta$D} ($\beta = 1.25$) & 0.26 & 0.06 & 0.06 & 0.03 \\ \acro{$\beta$D} ($\beta = 1.5$) & 0.23 & 0.05 & 0.08 & 0.02 \\ \hline \end{tabular} \label{Tab:MixtureModels} \end{table} \subsection{Binary Classification}{\label{Sec:Classification}} Binary classification models predict $y \in \{0, 1\}$ from $p$-dimensional regressors $X$. The canonical model in such a setting is logistic regression where \begin{align} P_{LR}(y = 1| X, \theta) = \frac{1}{1 + \exp\left(- X\theta\right)}, \quad P_{LR}(y = 0 | X, \theta) = 1 - P_{LR}(y = 1| X, \theta),\nonumber \end{align} where $\theta\in\mathbb{R}^p$ are the regression parameters. Alternative, less ubiquitous models include probit regression, which uses an alternative \acro{\smaller GLM} link function based on the standard Gaussian \acro{\smaller CDF} $\Phi(\cdot)$; `heavier-tailed' $t$-logistic regression \citep{ding2010t, ding2013t}; and a mixture-type model that explicitly models the chance of mislabelling of the observed classes. \begin{align} P_{PR}(y = 1| X, \eta) &= \Phi(w_{PR}\times X\theta), \quad P_{tLR}(y = 1| X, \eta) = \exp_t((w_{tLR}\times 0.5X\theta-G_t(w_{tLR}\times X\theta)))\nonumber\\ &P_{ML}(y = 1| X, \eta) = (1-\nu_1)P_{LR}(y = 1 | X, \theta) + \nu_0(1-P_{LR}(y = 1 | X, \theta))\nonumber \end{align} where $0 < t < 2$ and $0 < \nu_0, \nu_1 < 1$. The so-called $t$-exponential `$\exp_t$' and $G_t$ ensure that $P_{tLR}(y = 1| X, \eta)$ is normalised; both are defined in Section \ref{Sec:tLogistic}. Setting $t > 1$ results in heavier-tailed probabilities than the logistic model. For the probit and $t$-logistic models, the parameters are scalar multiples of the logistic regression parameters, $\theta \mapsto w\theta$ for $w_{PR}, w_{tLR}\in\mathbb{R}$.
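Since the \acro{\smaller TVD} between two Bernoulli distributions with success probabilities $p$ and $q$ is simply $|p-q|$, one way such a multiplier could be computed is by minimising the worst-case probability gap over the linear predictor. The sketch below is illustrative: the grid and optimiser choices are assumptions, not necessarily the procedure used here.

```python
import numpy as np
from scipy import optimize, stats

s = np.linspace(-8, 8, 4001)            # grid over the linear predictor X theta
logistic = 1 / (1 + np.exp(-s))

def worst_tvd(w):
    # TVD between Bernoulli(sigmoid(s)) and Bernoulli(Phi(w * s)) is the
    # absolute difference in success probabilities; take the worst case over s.
    return np.max(np.abs(stats.norm.cdf(w * s) - logistic))

res = optimize.minimize_scalar(worst_tvd, bounds=(0.3, 1.0), method="bounded")
w_pr, gap = res.x, worst_tvd(res.x)
print(w_pr, gap)   # w_pr lands near 1/1.702, the classic probit-logit scaling
```

The resulting worst-case gap of roughly a percentage point is what makes the probit model an \textit{a priori} \acro{\smaller TVD} neighbour of the logistic baseline.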
These are calculated in order to minimise the \textit{a priori} \acro{\smaller TVD} between the models and the logistic regression baseline according to $\mathcal{N}_{\epsilon}^{\acro{\smaller TVD}}$ (see Section \ref{Sec:LogisticTransform}). We upper bound $\nu_0$ and $\nu_1$ by $0.05$, making $\epsilon = 0.05$ for these models. Figure \ref{Fig:BinaryClassifiers} plots $P(y = 1 | X, \theta)$ as a function of $X\theta$ for all four models (left) and the \acro{\smaller TVD} between each alternative model and the logistic regression (right), demonstrating that all four produce very similar binary probabilities. \begin{figure}[!ht] \begin{center} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/logisticRegression_vs_TVDs_tikz-1.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.49\columnwidth]{Figures_new/logisticRegression_vs_TVDs_tikz-2.pdf} \caption{\textbf{Left}: $P(y = 1 | X, \theta)$ for logistic, probit, $t$-logistic and mislabelled models. \textbf{Right}: \acro{\smaller TVD} between the logistic regression canonical model and the probit, $t$-logistic and mislabelled models. The $\theta$ parameters of the probit and $t$-logistic models are scaled so as to minimise the \acro{\smaller TVD} to the logistic regression.} \label{Fig:BinaryClassifiers} \end{center} \end{figure} \subsubsection{Colon Cancer Dataset} To investigate the stability of posterior predictive inferences across the logistic, probit, $t$-logistic, and mislabelled binary regression models we consider the colon cancer dataset of \cite{alon1999broad}. The dataset contains the expression levels of 2000 genes from 40 tumours and 22 normal tissues and there is purportedly evidence that certain tissue samples may have been cross-contaminated \citep{tibshirani2013robust}.
Rather than consider the full 2000 genes, we first run a frequentist LASSO procedure, estimating the hyperparameter via cross-validation, and focus our modelling only on the nine genes selected by this procedure. We acknowledge that such post-model-selection analysis biases parameter estimates, but the stability of the predictive inference is our focus here. Figure \ref{Fig:TVDColonCancer} compares the \textit{a posteriori} \acro{\smaller TVD} between the posterior mean estimated distribution for each observation with the \textit{a priori} \acro{\smaller TVD} between each of the models (top), and the difference between the posterior mean regression parameter estimates of the two models (bottom), under the \acro{\smaller KLD}-Bayes and \acro{$\beta$D}-Bayes with $\beta = 1.5$. The stability of the \acro{$\beta$D}-Bayes is once again demonstrated here: for almost every observation and every pair of models, the posterior predictive inference is as stable as it was \textit{a priori}, while the \acro{\smaller KLD}-Bayes inference more often diverges.
For the $t$-logistic and mislabelled models the predictive stability of the \acro{$\beta$D}-Bayes also provides greater stability in the posterior mean parameter estimates. \begin{figure}[!ht] \begin{center} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/colon_cancer_KLD_betaD_stabilityl_logit_preds_tikz-1.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/colon_cancer_KLD_betaD_stabilityl_logit_preds_tikz-2.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/colon_cancer_KLD_betaD_stabilityl_logit_preds_tikz-3.pdf}\\ \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/colon_cancer_KLD_betaD_stabilityl_logit_params_tikz-1.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/colon_cancer_KLD_betaD_stabilityl_logit_params_tikz-2.pdf} \includegraphics[trim= {0.0cm 0.00cm 0.0cm 0.0cm}, clip, width=0.32\columnwidth]{Figures_new/colon_cancer_KLD_betaD_stabilityl_logit_params_tikz-3.pdf} \caption{Colon Cancer Data. \textbf{Top}: \acro{\smaller TVD} between the posterior mean estimated probabilities for each observation of the probit (\textbf{left}), $t$-logistic (\textbf{centre}) and mislabelled (\textbf{right}) models and the canonical logistic regression under the \acro{\smaller KLD}-Bayes and \acro{$\beta$D}-Bayes ($\beta = 1.5$). The dotted line represents the \textit{a priori} \acro{\smaller TVD} between the models. \textbf{Bottom}: Absolute differences between posterior mean parameter estimates and those of the logistic regression.} \label{Fig:TVDColonCancer} \end{center} \end{figure} \section{Discussion} This paper investigated the posterior predictive stability of traditional Bayesian updating and a generalised Bayesian alternative minimising the \acro{$\beta$D}.
In practice, the model used for inference is usually a convenient and canonical member of a wider class that captures the broad belief statements made by the \acro{\smaller DM}, and the observed data were not necessarily collected in the manner the \acro{\smaller DM} imagined. We proved that \acro{$\beta$D}-Bayes inference is stable across a class of likelihood models and data generating processes whose probability statements are absolutely close, i.e.\ lie within a \acro{\smaller TVD} neighbourhood, by establishing bounds on how far their predictive inferences can diverge. On the other hand, our results require the \acro{\smaller DM} to be sure about the tail properties of their beliefs and of the \DGP in order to guarantee stability for standard Bayesian inference. The results of this paper simplify the process of belief elicitation for the \acro{$\beta$D}-Bayes, bounding the \textit{a posteriori} consequences of a given level of \textit{a priori} inaccuracy and leaving the \acro{\smaller DM} free to use the best-guess approximation of their beliefs that they are most comfortable with, rather than switch to a less familiar model with better outlier rejection properties \citep{o1979outlier}. Such stability is achieved through a minimal amount of extra work compared with traditional \bayesrule inference, and it provides a similarly recognisable output. We hope such results help to justify the increased use of the \acro{$\beta$D} to make robust inferences in statistics and machine learning applications. A key issue motivating the departure from standard Bayesian methods here is a lack of concordance between the likelihood model and the data. Such an issue can be attributed either to a failure of the modeller to think carefully enough about the \DGP, or to errors in data collection. However, we treat these results separately to exemplify two different manifestations of the instability of Bayes' rule.
Future work could explore the applicability of such results in multivariate settings, where belief specification and data collection are harder, and further investigate our \acro{\smaller KLD}-Bayes results. While we established conditions under which the stability of such methods can be guaranteed, identifying the statements for which the \acro{\smaller KLD}-Bayes is not stable would provide important and useful results to facilitate more focused belief elicitation. To continue to facilitate the deployment of \acro{$\beta$D}-Bayes methods in practice, more work is required to study and build upon existing methods to select $\beta$, particularly in high dimensions. While it is clear that considerable gains can be made over standard methods in certain scenarios, an adversarial analysis of the \acro{$\beta$D} performance compared with its \acro{\smaller KLD}-Bayes analogue would further motivate its wider application. \section*{Acknowledgements} The authors would like to thank Danny Williamson, Christian Robert, and Sebastian Vollmer for their insightful discussions on the topics in this paper. JJ was partially funded by the Ayudas Fundación BBVA a Equipos de Investigación Cientifica 2017, the Government of Spain's Plan Nacional PGC2018-101643-B-I00, and a Juan de la Cierva Formación fellowship FJC2020-046348-I. CH was supported by the EPSRC Bayes4Health programme grant and The Alan Turing Institute, UK.
\section{Introduction} It has long been known that the very basic assumption of the Shakura-Sunyaev disk (SSD, Shakura \& Sunyaev 1973), that is, the geometrical thinness of the disk, $H/R \ll 1$, where $H$ is the half thickness of the disk and $R$ is the radius in cylindrical coordinates, breaks down for the inner region of the disk in some specific situations. For example, when the mass accretion rate $\dot M$ approaches and surpasses its critical value corresponding to the Eddington luminosity, radiation pressure will act to puff up the inner region of the disk in the vertical direction; or when the cooling mechanism is inefficient, so that the temperature in the disk becomes very high, gas pressure will act in a similar way. In either of these two situations, the inner region of the disk will become geometrically thick, i.e., with $H/R \sim 1$ (e.g., Frank et al. 2002, p.98). Based on these understandings, two types of models were proposed more than twenty years ago, namely the optically thick, radiation pressure-supported thick disk (Abramowicz et al. 1978; Paczy\'{n}ski \& Wiita 1980; Madau 1988) and the optically thin, ion pressure-supported thick disk (Rees et al. 1982). To avoid mathematical difficulties, in these models the disk was assumed to be purely rotating, i.e., with no mass accretion. However, the very existence of non-accreting thick disks was thrown into doubt by the discovery of Papaloizou \& Pringle (1984) that such disks are dynamically unstable to global non-axisymmetric modes. Since the work of Blaes (1987), it has been recognized that it is accretion, i.e., radial matter motion and energy advection into the central black hole, that can sufficiently stabilize all modes. Accordingly, the concept of advection dominance was introduced and two new types of models were constructed, namely the optically thick, radiation pressure-supported slim disk (Abramowicz et al.
1988) and the optically thin, ion pressure-supported, advection-dominated accretion flow (ADAF, Narayan \& Yi 1994; Abramowicz et al. 1995). Both types of models are popular nowadays. Slim disks and ADAFs were supposed to be geometrically slim, i.e., with $H/R \lesssim 1$, neither thin nor thick. The reason for this restriction is the following. As argued by Abramowicz et al. (1995), the advection factor $f_{\rm adv} \equiv Q_{\rm adv}/Q_{\rm vis}$, where $Q_{\rm adv}$ is the advective cooling rate per unit area and $Q_{\rm vis}$ is the viscous heating rate per unit area, should satisfy the relation \begin{eqnarray} f_{\rm adv} \gtrsim \left( \frac{H}{R} \right)^2 \ . \end{eqnarray} Obviously, advection can be important only for disks that are not thin. But the disk cannot be thick either, because the value of $f_{\rm adv}$ cannot exceed 1. Recently, Gu \& Lu (2007, hereafter GL07) addressed a problem in the slim disk model of Abramowicz et al. (1988, see also Kato et al. 1998). In this model, the gravitational potential was approximated in the form suggested by H\={o}shi (1977), i.e., \begin{eqnarray} \psi (R,z) \simeq \psi (R,0) + \frac{1}{2} \Omega_{\rm K}^2 z^2 \ , \end{eqnarray} where $\Omega_{\rm K}$ is the Keplerian angular velocity. As shown by GL07, such an approximation is valid only for geometrically thin disks with $H/R \lesssim 0.2$, and for a larger thickness it would greatly magnify the gravitational force in the vertical direction. Accordingly, the widely adopted relationship $H \Omega_{\rm K} /c_s =$ constant can approximately hold only for thin disks as well. Since formula (1) was derived by using this relationship, its validity for thicker disks has not been justified. GL07 noted that, when the vertical gravitational force is correctly calculated with the explicit potential $\psi(R,z)$, ``slim'' disks are much thicker than previously thought. However, the work of GL07 was still within the framework of the slim disk model in some sense.
In particular, those authors did not consider the vertical distribution of velocities, but instead kept the assumption of vertical hydrostatic equilibrium, \begin{eqnarray} \frac{1}{\rho} \frac{\p p}{\p z} + \frac{\p \psi}{\p z} = 0 \ , \end{eqnarray} which is a simplification of the more general vertical momentum equation \begin{eqnarray} \frac{1}{\rho} \frac{\p p}{\p z} + \frac{\p \psi}{\p z} + v_R \frac{\p v_z}{\p R} + v_z \frac{\p v_z}{\p z} = 0 \end{eqnarray} (e.g., Abramowicz et al. 1997), where $\rho$ is the mass density, $p$ is the pressure, and $v_R$ and $v_z$ are the cylindrical radial and vertical velocities, respectively. While the terms containing $v_z$ in equation (4) can be reasonably dropped for thin disks because in this case $v_z$ must be negligibly small, careful consideration is needed as to whether the same can be done for disks that are not thin (Abramowicz et al. 1997; see also \S 2 below). Regarding the two main features of advection-dominated disks, i.e., advection dominance and slimness, an importantly different approach was taken earlier by Narayan \& Yi (1995, hereafter NY95). NY95 considered rotating spherical accretion flows ranging from the equatorial plane to the rotation axis, i.e., with $H/R \to \infty$ and with no free surfaces. They assumed self-similarity in the radial direction and solved differential equations describing the vertical structure of the flow, and showed that, compared to their exact solutions, the solutions obtained previously with the vertical integration approach are very good approximations, provided ``vertical'' means the spherical polar angle $\theta$, rather than the cylindrical height $z$. This seemed to indicate that advection-dominated disks are not necessarily limited to be slim.
However, those authors did not calculate the advection factor $f'_{\rm adv}$ (they defined $f'_{\rm adv} \equiv q_{\rm adv}/q_{\rm vis}$, with $q_{\rm adv}$ and $q_{\rm vis}$ being the advective cooling rate and the viscous heating rate per unit volume, respectively), but rather set it a priori to be a constant. It remains unanswered how their $f'_{\rm adv}$ varies with $\theta$, how $f_{\rm adv}$ per unit area varies with the thickness of the disk, and what is required for advection to be dominant. In this work we aim to complement NY95 and to refine GL07. We consider the vertical structure of accretion flows with free surfaces and show that advection-dominated disks must be geometrically thick rather than slim. Our results may suggest reviving the historical thick disk models mentioned above, with the improvement that they must now include accretion. \section{Equations} We consider a steady-state axisymmetric accretion flow in spherical coordinates ($r$, $\theta$, $\phi$) and use the Newtonian potential $\psi = - GM/r$, since it is convenient for the self-similar formalization adopted below, where $M$ is the black hole mass.
The basic equations of continuity and momenta are \begin{eqnarray} \frac{1}{r^2} \frac{\p}{\p r} (r^2 \rho v_r) + \frac{1}{r \sin \theta} \frac{\p}{\p \theta} (\sin \theta \rho v_{\theta}) = 0 \ , \\ v_r \frac {\p v_r}{\p r} + \frac{v_{\theta}}{r} \left( \frac{\p v_r}{\p \theta} - v_{\theta} \right) -\frac{v_{\phi}^2}{r} = - \frac{GM}{r^2} - \frac{1}{\rho} \frac{\p p}{\p r} \ , \\ v_r \frac{\p v_{\theta}}{\p r} + \frac{v_{\theta}}{r} \left( \frac{\p v_{\theta}}{\p \theta} + v_r \right) - \frac{v_{\phi}^2}{r} \cot \theta = - \frac{1}{\rho r} \frac{\p p}{\p \theta} \ , \\ v_r \frac{\p v_{\phi}}{\p r} + \frac{v_{\theta}}{r} \frac{\p v_{\phi}}{\p \theta} + \frac{v_{\phi}}{r} (v_r + v_{\theta} \cot \theta ) = \frac{1}{\rho r^3} \frac{\p}{\p r} (r^3 t_{r\phi}) \end{eqnarray} (e.g., Xue \& Wang 2005), where $v_r$, $v_{\theta}$, and $v_{\phi}$ are the three velocity components. We assume that only the $r\phi$-component of the viscous stress tensor is important, which is $t_{r\phi} = \nu \rho r \p (v_{\phi}/r) /\p r$, where $\nu = \alpha c_s^2 r / v_{\rm K}$ is the kinematic viscosity coefficient, $\alpha$ is the constant viscosity parameter, $c_s$ is the sound speed defined as $c_s^2 = p/\rho$, and $v_{\rm K} = (GM/r)^{1/2}$ is the Keplerian velocity. We do not simply assume vertical hydrostatic equilibrium (eq. [3]). Equation (7) is the general vertical momentum equation in spherical coordinates, corresponding to equation (4) in cylindrical coordinates. Abramowicz et al. (1997) have given several reasons why spherical coordinates are a much better choice. We only mention one of these reasons that is particularly important for our study here. 
The stationary accretion disks calculated in realistic two-dimensional (2D) and three-dimensional (3D) simulations resemble quasi-spherical flows, i.e., in spherical coordinates the half-opening angle of the flow $\Delta \theta \approx$ constant, or in cylindrical coordinates the relative thickness $H/R \approx$ constant, much more than quasi-horizontal flows, i.e., $H \approx$ constant (e.g., Papaloizou \& Szuszkiewicz 1994; NY95). If no outflow production from the surface of the disk is assumed, then obviously $v_{\theta} = 0$ is a reasonable approximation for disks with any thickness (Xue \& Wang 2005); but $v_z$ cannot be neglected for disks that are not thin because there is a relation $v_z / v_R \sim H/R$ for quasi-spherical flows, making equation (4) difficult to deal with. Similar to NY95, we assume self-similarity in the radial direction \begin{eqnarray} v_r \propto r^{-1/2}; \ v_{\theta} = 0; \ v_{\phi} \propto r^{-1/2}; \nonumber \\ \rho \propto r^{-3/2}; \ c_s \propto r^{-1/2}. \nonumber \end{eqnarray} The above relation automatically satisfies the continuity equation (5). By substituting the relation, the momentum equations (6-8) reduce to \begin{eqnarray} \frac{1}{2} v_r^2 + \frac{5}{2} c_s^2 + v_{\phi}^2 - v_{\rm K}^2 = 0 \ , \\ \frac{c_s^2}{p} \frac {d p}{d \theta} = v_{\phi}^2 \cot \theta \ , \\ v_r = - \frac{3}{2} \frac{\alpha c_s^2}{v_{\rm K}} \ . \end{eqnarray} Four unknown quantities, namely $v_r$, $v_{\phi}$, $c_s$ and $p$, appear in these three equations. This is because we do not write the energy equation, whose general form is $q_{\rm vis} = q_{\rm adv} + q_{\rm rad}$, where $q_{\rm rad}$ is the radiative cooling rate per unit volume. In principle, the general energy equation should be solved, and then $f'_{\rm adv}$ is obtained as a variable, as done, e.g., by Manmoto et al. (1997) for ADAFs and by Abramowicz et al. (1988) and Watarai et al. (2000) for slim disks.
But due to complications in calculating the radiation processes, in NY95 and even in works on global ADAF solutions (e.g., Narayan et al. 1997), $q_{\rm adv} = f'_{\rm adv} q_{\rm vis}$ or $Q_{\rm adv} = f_{\rm adv} Q_{\rm vis}$ was used instead as an energy equation and $f'_{\rm adv}$ or $f_{\rm adv}$ was given as a constant. Since our purpose here is to investigate the variation of $f_{\rm adv}$ with the thickness of the disk, we wish to calculate $Q_{\rm adv}$ and $Q_{\rm vis}$ respectively, and then estimate $f_{\rm adv}$. To do this, we further assume a polytropic relation, $p = K \rho ^{\gamma}$, in the vertical direction, which is often adopted in the vertically integrated models of geometrically slim disks (e.g., Kato et al. 1998, p.241). We admit that the polytropic assumption is a simple way to close the system, but it enables us to calculate the dynamical quantities and evaluate $f_{\rm adv}$ self-consistently. With the polytropic relation and the definition of the sound speed $c_s^2 = p/\rho$, equation (10) becomes \begin{eqnarray} \frac{d c_s^2}{d \theta} = \frac{\gamma -1}{\gamma} v_{\phi}^2 \cot \theta \ , \end{eqnarray} which along with equations (9) and (11) can be solved for $v_r$, $v_{\phi}$, and $c_s$. A boundary condition is required for solving the differential equation (12), which is set to be $c_s = 0$ (accordingly $\rho = 0$ and $p = 0$) at the surface of the disk.
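Once the polytropic relation closes the system, equations (9), (11), and (12) can be integrated numerically from the surface to the equator. The following sketch (our own illustration in self-similar units with $r = v_{\rm K} = 1$ and $K = 1$, not the authors' code) marches $c_s^2$ inward from the surface condition $c_s = 0$ and accumulates the vertically integrated advective and viscous rates, whose self-similar per-unit-volume forms are given in the next paragraph:

```python
import math

def f_adv(gamma, dtheta, alpha=0.1, n=20000):
    """Integrate eqs. (9), (11), (12) from the disk surface
    (theta = pi/2 - dtheta, where c_s = 0) to the equator and
    return f_adv = Q_adv / Q_vis.  Self-similar units: r = v_K = 1,
    polytropic constant K = 1, so p = (c_s^2)^(gamma/(gamma-1))."""
    theta0 = math.pi / 2.0 - dtheta
    h = dtheta / n
    cs2 = 0.0                                   # c_s^2 vanishes at the surface
    q_adv_int = 0.0
    q_vis_int = 0.0
    for i in range(n):
        theta = theta0 + i * h
        vr = -1.5 * alpha * cs2                 # eq. (11)
        vphi2 = 1.0 - 0.5 * vr * vr - 2.5 * cs2 # eq. (9)
        p = cs2 ** (gamma / (gamma - 1.0)) if cs2 > 0.0 else 0.0
        q_adv = (5.0 - 3.0 * gamma) / (2.0 * (gamma - 1.0)) * p * (-vr)
        q_vis = 2.25 * alpha * p * vphi2
        q_adv_int += q_adv * math.sin(theta) * h
        q_vis_int += q_vis * math.sin(theta) * h
        # advance c_s^2 with eq. (12), forward Euler
        cs2 += (gamma - 1.0) / gamma * vphi2 / math.tan(theta) * h
    return q_adv_int / q_vis_int
```

For small $\Delta\theta$ this integration reproduces the $f_{\rm adv} \propto \Delta\theta^2$ scaling of formula (1), and $f_{\rm adv}$ grows monotonically with the half-opening angle.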
The quantities $q_{\rm adv} = p v_r (\p\ln p/\p r - \gamma \p\ln \rho/\p r)/(\gamma-1)$ and $q_{\rm vis} = \nu \rho r^2 [\p (v_{\phi}/r)/\p r]^2$ are expressed in the self-similar formalism as \begin{eqnarray} q_{\rm adv} = - \frac{5-3\gamma}{2(\gamma -1)} \frac{p v_r}{r} \ , \\ q_{\rm vis} = \frac{9}{4} \frac{\alpha p v_{\phi}^2}{r v_{\rm K}} \ , \end{eqnarray} then $Q_{\rm adv}$ and $Q_{\rm vis}$ are given by the vertical integration, \begin{eqnarray} Q_{\rm adv} = \int_{\frac{\pi}{2}-\Delta \theta}^{\frac{\pi}{2}+\Delta \theta} q_{\rm adv} \ r \sin\theta \ d\theta \ , \\ Q_{\rm vis} = \int_{\frac{\pi}{2}-\Delta \theta}^{\frac{\pi}{2}+\Delta \theta} q_{\rm vis} \ r \sin\theta \ d\theta \ , \end{eqnarray} and $f_{\rm adv} \equiv Q_{\rm adv}/Q_{\rm vis}$ is obtained. In our calculations $\alpha = 0.1$ is fixed. \section{Numerical results} We first study the variation of dynamical quantities with the polar angle $\theta$ for a given disk's half-opening angle $\Delta \theta$. Figure 1 shows the profiles of $v_r$ (the dashed line), $v_{\phi}$ (the dot-dashed line), $c_s$ (the solid line), and $\rho$ (the dotted line) for three pairs of parameters, i.e., $\gamma = 4/3$ and $\Delta \theta = 0.25\pi$ for Fig.~1$a$, $\gamma = 4/3$ and $\Delta \theta = 0.45\pi$ for Fig.~1$b$, and $\gamma = 1.65$ and $\Delta \theta = 0.498\pi$ for Fig.~1$c$. The parameters are marked in Figure~3 by filled stars, which clearly show the corresponding values of the advection factor $f_{\rm adv}$. Obviously, advection is not significant for case $a$ ($f_{\rm adv} < 0.1$), but is dominant for cases $b$ and $c$ ($0.5 < f_{\rm adv} < 1$). Comparing our results with Fig.~1 of NY95, it is seen that the profiles of $v_r$ and $\rho$ are similar, i.e., $v_r$ (the absolute value) and $\rho$ increase with increasing $\theta$ and achieve the maximal value at the equatorial plane ($\theta = \pi/2$). On the contrary, the two profiles of $c_s$ are significantly different. 
In their Fig.~1, the value of $c_s$ decreases with increasing $\theta$ and achieves the minimal value at the equatorial plane; in our Fig.~1, however, $c_s$ increases with increasing $\theta$ and achieves the maximal value at the equatorial plane. In our opinion, the difference results from different assumptions, i.e., NY95 assumed an energy advection factor $f'_{\rm adv}$ in advance, whereas we solve for the energy advection factor $f_{\rm adv}$ self-consistently based on a polytropic relation in the vertical direction. We think that our profile for $c_s$ is reasonable for disk-like accretion. For example, in the standard thin disk, the direction of the radiative flux is from the equatorial plane to the surface, which means that the temperature (or the sound speed) decreases from the equatorial plane to the surface. Such a picture agrees with our Fig.~1 but conflicts with Fig.~1 of NY95. Figure 2 shows the variation of $f_{\rm adv}$ with $\Delta \theta$ for the ratio of specific heats $\gamma = 4/3$. Advection dominance means $0.5 < f_{\rm adv} \le 1$. We first explain the two dashed lines and the dotted line that correspond to previous works in the slim disk model, then the solid line that represents our results here, and leave the dot-dashed line for later. Both dashed lines are obtained by assuming vertical hydrostatic equilibrium (eq. [3]) and using the H\={o}shi form of potential (eq. [2]); thus the relation $H \Omega_{\rm K} /c_s =$ constant is adopted. The difference between these two lines is the following. For line $a$, the simple one-zone treatment in the vertical direction is made as in the SSD model; then in equation (3), $\p p/\p z \approx -p/H$, $\p \psi /\p z \approx \Omega_{\rm K}^2 H$, and $H \Omega_{\rm K}/c_s = 1$ is obtained (e.g., Kato et al. 1998, p.80). For line $b$, there is some improvement in the sense that the vertical structure of the disk is considered.
By assuming a polytropic relation, the vertical integration of equation (3) gives $H \Omega_{\rm K}/c_s = 3$ (e.g., Kato et al. 1998, p.242). Because of these different treatments in the vertical direction, these two lines show different variations of $f_{\rm adv}$ with $\Delta \theta$ and different maximum values of $\Delta \theta$. The upper limit of $f_{\rm adv}$ is 1 (full advection dominance), beyond which there would be no thermal equilibrium solutions. It can be analytically derived that for the case of line $a$, the maximum value of $\Delta \theta$ corresponding to $f_{\rm adv} = 1$ is $\Delta \theta_{\rm max} = \arctan(\sqrt{2/7})$, or in cylindrical coordinates the maximum relative thickness $(H/R)_{\rm max} = \sqrt{2/7}$; and for the case of line $b$ it is $\Delta \theta_{\rm max} = \arctan(3/2)$ or $(H/R)_{\rm max} = 3/2$. As mentioned in \S 1, the thickness of the disk in the slim disk model had been underestimated because the vertical gravitational force was overestimated by the H\={o}shi form of potential. Even so, according to the more sophisticated version of the slim disk model (line $b$), advection dominance $f_{\rm adv} > 0.5$ would require $H/R > 1$ ($\Delta \theta > \pi/4$), and full advection dominance would require $H/R = 3/2$, in contradiction with $H/R \lesssim 1$, the supposed feature of the model. The dotted line in Figure 2 is for the results of GL07. The point made in that work was that the explicit potential $\psi (R,z)$, rather than its H\={o}shi approximation (eq. [2]), was used, so that the vertical gravitational force was correctly calculated. But GL07 still kept the assumption of vertical hydrostatic equilibrium (eq. [3]), i.e., the terms containing $v_z$ in equation (4) were incorrectly ignored. 
Because of this, the thickness of the disk was overestimated; and accordingly, it seemed that advection dominance could never be possible, since even for the extreme thickness $\Delta \theta = \pi/2$ (or $H/R \to \infty$) the value of $f_{\rm adv}$ can only marginally reach 0.5. We make improvements over GL07. We use spherical coordinates with the assumption $v_{\theta} = 0$, which is better than $v_z = 0$ in cylindrical coordinates, and then calculate the vertical distribution of velocities ($v_r$ and $v_{\phi}$) and thermal quantities ($\rho$, $p$, and $c_s$). Our results are shown by the solid line in Figure 2. It is seen that advection dominance ($f_{\rm adv} > 0.5$) is possible, but only for $\Delta \theta > 2\pi /5$ (or 72$^{\circ}$). Therefore, advection-dominated disks must be geometrically thick, rather than slim as previously supposed. It is also seen that line $b$, the dotted line, and the solid line in Figure 2 almost coincide with each other for thin disks with $\Delta \theta \lesssim 0.1\pi$. This is natural, since for thin disks both the H\={o}shi approximation of potential and the assumption of vertical hydrostatic equilibrium are valid, and the three approaches represented by the three lines make no significant difference. But the one-zone treatment, i.e., total neglect of the vertical structure of the disk, seems to be too crude, making the resulting line $a$ deviate from the other three lines even for thin disks. The value $\gamma = 4/3$ in Figure 2 corresponds to the optically thick and radiation pressure-dominated case, to which the historical radiation pressure-supported thick disk and the slim disk belong; while it is $\gamma \to 5/3$ for the optically thin and gas pressure-dominated case, to which the historical ion pressure-supported thick disk and the ADAF belong. In Figure 3, the four solid lines show variations of $\Delta \theta$ with $\gamma$ for four given values of $f_{\rm adv}$.
It is seen that advection dominance ($f_{\rm adv} > 0.5$) requires $\Delta \theta$ to be large for any value of $\gamma$; and that for a fixed $f_{\rm adv}$ (the same degree of advection), the required $\Delta \theta$ increases with increasing $\gamma$, that is, for advection to be dominant, optically thin disks must become even geometrically thicker than optically thick ones. For the geometrically thin case, $\Delta \theta \ll 1$, the Taylor expansion of equations (9), (11), and (12) with respect to $\Delta \theta$ can be performed, and we derive an approximate analytic relation: \begin{eqnarray} f_{\rm adv} \approx \frac{(5-3\gamma)(2\gamma -1)}{3\gamma(5\gamma -3)} \cdot \Delta \theta ^2 \ , \end{eqnarray} which is similar to equation (1) in cylindrical coordinates. The dot-dashed lines in Figures 2 and 3 correspond to equation (17) for a fixed $\gamma = 4/3$ and for a fixed $f_{\rm adv} = 0.01$, respectively. It is seen from Figure 2 that, as expected, the analytic approximation of equation (17) agrees well with the correct numerical results (the solid line) for small $\Delta \theta$, but deviates substantially for large $\Delta \theta$. In Figure 3 a good agreement between equation (17) and the numerical results (the lowest solid line) is seen again, especially for small values of $\gamma$. The limitation that equation (17) is valid only for small $\Delta \theta$, and accordingly only for small $f_{\rm adv}$, should also apply to equation (1), because that equation is derived with the H\={o}shi form of potential. \section{Discussion} The key concept of the slim and ADAF disk models is advection dominance. This concept was introduced rather as an assumption; whether and under what physical conditions it can be realized had not been clarified.
The main result of our work is to have shown that, in order for advection to be dominant, the disk must be geometrically thick with the half-opening angle $\Delta \theta > 2\pi /5$, rather than slim as suggested previously in the slim disk and ADAF models. Thus, advection-dominated disks are geometrically similar to the historical thick disks mentioned in \S 1. This result is obvious because, as revealed in GL07, in the slim disk and ADAF models the vertical gravitational force was overestimated by using H\={o}shi's approximate potential, and accordingly the disk's thickness was underestimated. NY95 considered accretion flows with no free surfaces and found that when the given advective factor $f'_{\rm adv} (\equiv q_{\rm adv} / q_{\rm vis}) \to 1$ (full advection dominance), their solutions approach nearly spherical accretion. If ``nearly spherical'' can be regarded as extremely thick, then their results and ours agree with each other, but we take a different approach. We do not give the value of $f_{\rm adv} (\equiv Q_{\rm adv} / Q_{\rm vis})$ in advance, but instead consider accretion flows with free surfaces, i.e., accretion disks. The boundary condition is set to be $p = 0$, which is usually adopted in the literature (e.g., Kato et al. 1998). Then the thickness of the disk, $\Delta \theta$, is well defined, and we calculate $f_{\rm adv}$ to see how it relates to $\Delta \theta$. Many 2D and 3D numerical simulations of viscous radiatively inefficient accretion flows (RIAFs) revealed the existence of convection-dominated accretion flows (CDAFs), while ADAFs could not be obtained (e.g., Stone et al. 1999; Igumenshchev \& Abramowicz 2000; McKinney \& Gammie 2002; Igumenshchev et al. 2003). We think that this fact probably indicates that the existing analytic ADAF models might have hidden inconsistencies, and the incorrect treatment of the vertical structure might be one such inconsistency, as addressed in our work.
Moreover, the recent radiation-MHD simulations (Ohsuga et al. 2009) showed that the disk is geometrically thick in their models A and C (corresponding to slim disks and ADAFs, respectively), which is in agreement with our results. Apart from the convective motion, outflows are found in 2D and 3D MHD simulations of non-radiative accretion flows (e.g., Stone \& Pringle 2001; Hawley \& Balbus 2002). For optically thick flows, circular motion and outflows are found in 2D radiation-HD simulations (e.g., Ohsuga et al. 2005; Ohsuga 2006). The assumption $v_{\theta} = 0$ would break down when convective or outflowing motion is significant; we therefore point out this limitation of our solutions, which are based on the self-similar assumption in the radial direction and, in particular, on $v_{\theta} = 0$. In this paper we have not shown the exact thermal equilibrium solution for a given mass accretion rate. We wish to stress that our main concern here is the relationship between the energy advection factor and the thickness of the disk. The well-known formula (1), which was previously believed to be valid for both optically thick and thin disks, implied that advection-dominated accretion disks are geometrically slim. As shown in Figures 2 and 3, however, formula (1) is inaccurate for disks that are not geometrically thin. We think that the new relationship between $f_{\rm adv}$ and $\Delta \theta$, shown in Figures 2 and 3, should also hold for both optically thick and thin cases. Even without the exact solutions, we can predict that advection-dominated accretion disks ought to be geometrically thick rather than slim. Our next work will concentrate on optically thick disks and take radiative cooling into consideration. In the vertical direction, we will solve the dynamical equations combined with the radiative transfer equations, thus the polytropic assumption will be relaxed.
At that step, we will be able to calculate the thermal equilibrium solutions with given mass accretion rates and show the optical depth, pressure, and luminosity of the disks. \bigskip We thank Marek A. Abramowicz, Ramesh Narayan, and Ken Ohsuga for beneficial discussions and the referee for helpful comments. This work was supported by the National Basic Research Program of China under Grant No. 2009CB824800, the National Natural Science Foundation of China under Grants No. 10778711 and 10833002, the Program for New Century Excellent Talents in University under Grant No. 06-0559, and the China Postdoctoral Science Foundation funded project 20080441038.
\section{Introduction} How did the first objects form after the Big Bang? In hierarchical cosmogonies (\eg Turner 1998), the first gravitationally bound systems may have been stars and small star--forming systems which merge to form galaxies in large dark matter halos. Arising from the end products of stellar evolution and mergers, central black holes could grow to become extremely massive. However, it is not clear how this process would work at very high redshifts, where little time is available. It has been suggested that primordial black holes may form well before their host galaxies (Loeb 1993). In any case, accretion events fueling massive black holes are thought to manifest themselves as active galactic nuclei (AGN; \eg Rees 1984). Due to their extreme luminosity, AGN are convenient beacons for exploring these formative, `Dark Ages' of our Universe. Extragalactic radio sources have played an important role in identifying active galaxies at high redshifts. The most distant known {\it galaxies} have consistently been radio--selected until only very recently. In this Letter we report the discovery of a radio galaxy at $z = 5.19$. At this redshift it is the most distant known AGN, surpassing even quasars for the first time in 36 years. Throughout this paper we use $H_0 = 65 h_{65} \kmsMpc$, $\Omega_M = 0.3$, and $\Lambda = 0$. For these parameters, 1\arcsec\, subtends 7.0 $h_{65}^{-1}$ kpc at $z = 5.19$ and the Universe is only 1.08 Gyr old, corresponding to a lookback time 91.1\% of the age of the Universe. \section{Source Selection} The most efficient method to find high--redshift radio galaxies (HzRGs) is to combine two well--known techniques. The first is to select radio sources with ultra--steep spectra (USS) at radio wavelengths, i.e.\ very red radio colors (\eg Chambers, Miley, \& van Breugel 1990). Most powerful radio galaxies have radio spectral energy distributions which steepen with frequency. 
Therefore, at fixed observing frequencies more distant sources exhibit steeper spectra (\eg van Breugel \etal 1999). A second selection criterion relies upon the magnitude--redshift relationship at infrared wavelengths, or $K-z$ Hubble diagram, for powerful radio galaxies (Figure~\ref{kz}). At low redshifts ($z < 1$), powerful radio galaxies are uniquely associated with massive galaxies. The well--behaved $K-z$ diagram suggests that such galaxies can be found through near--IR identification. This has been confirmed by the discovery of many $3 < z < 4.4$ radio galaxies which approximately follow the $K-z$ relationship, even to the highest redshifts and despite significant morphological evolution (van Breugel \etal 1998). Using several new, large radio surveys we constructed a USS sample ($S_\nu \propto \nu^\alpha; \alpha^{\rm 1.4 GHz}_{\rm 365 MHz} < -1.30$; De Breuck \etal 1999 [DB99]) which is much larger, more accurate, and reaches fainter flux density limits than previous such samples. \tn, with $\alpha^{\rm 1.4 GHz}_{\rm 365 MHz} = -1.63 \pm 0.08$, is among the steepest sources of our sample. VLA observations at 4.85 GHz show the source is a slightly resolved $1\farcs2$ double, with $S_{4.85GHz} = 8.6\pm0.5$ mJy, centered at $\alpha_{\rm J2000} = 09^h24^m19\fs92$, $\delta_{\rm J2000} = -22\arcdeg 01\arcmin 41\farcs5$ (Figure~\ref{kimage}). \section{Observations} We obtained $K_s$ images of \tn\ using NIRC (Matthews \& Soifer 1994) at the Keck~I telescope. We integrated for 32 minutes on UT 1998 April 18 in photometric conditions with $0\farcs5$ seeing, and again for 32 minutes on UT 1998 April 19 through light cirrus with $0\farcs6$ seeing. The observing procedures, calibration, and data reduction techniques were similar to those described in van Breugel \etal (1998). The final image comprising 3840~s of on--source integration is shown in Figure~\ref{kimage}. 
Using circular apertures of $2\farcs1$ diameter, encompassing the entire object, we measure $K = 21.15$ for night 1, and 21.45 for night 2. We estimate that $K = 21.3 \pm 0.3$. If \tn\ is at $z = 5.19$ (\S4), then redshifted {\rm [O~II]}\ at $\lambda = 2.307\mu$m would be included in the $K_s$ passband and some of the $K$-band flux might be due to line emission. We obtained spectra of \tn\ through a 1\farcs5 wide, 3\arcmin\ long slit using LRIS (Oke \etal 1995) at the Keck~II telescope. The integration times were 5400~s on UT 1998 December 19 (position angle 0\ifmmode {^{\circ}}\else {$^\circ$}\fi) and 4400~s on UT 1998 December 20 (position angle 180\ifmmode {^{\circ}}\else {$^\circ$}\fi); both nights were photometric with 0\farcs6 seeing. The observations used the 150 lines mm$^{-1}$ grating ($\lambda_{\rm blaze} \approx 7500$ \AA; $\Delta\lambda_{\rm FWHM} \approx 17$ \AA), sampling the wavelength range 4000 \AA\ to 1$\mu$m. Between each 1800~s exposure, we reacquired offset star A (see Fig.~2), performed 20\arcsec\ spatial shifts to facilitate removal of fringing in the reddest regions of the spectra, and blind offset the telescope to return \tn\ within the slit. We calculated the dispersion using a NeAr lamp spectrum taken immediately subsequent to the observations (RMS variations of 0.50 \AA), and adjusted the zero point according to telluric emission lines. Final wavelength calibration is accurate to 1 \AA. The spectra were flux calibrated using observations of Feige~67 and Feige~110 obtained on each night and were corrected for foreground Galactic extinction using a reddening of $E_{B - V} = 0.0168$ determined from the dust maps of Schlegel, Finkbeiner, \& Davis (1998). We find a strong, single emission line at $\lambda \sim 7530$ \AA\, which shifts by $\approx 16$ \AA\ between the two nights. (Figure~\ref{spectrum}; Table~1). 
The cause of the line offset is unclear, though it may be related to problems LRIS was experiencing with slippage in the movable guider at the time of the observations. The relative brightnesses of other sources on the slit vary between each 1800~s observation, indicating that despite our precautions of reacquiring the target after each exposure, guider slippage must have caused some variations in telescope offsetting. These slight pointing changes may have caused the slit to sample different regions of spatially--extended, line--emitting gas. Indeed, \tn\ shows two separate components at $K$ (Figure~\ref{kimage}), and emission--line regions of HzRGs are known to be kinematically complex (Chambers, Miley \& van Breugel 1990; van Ojik \etal 1997). Line parameters are measured with a Gaussian fit to the emission line and a flat (in $F_\lambda$) fit to the continuum (Table~1). Equivalent width values were derived from a Monte Carlo analysis using the measured line flux and continuum values with errors, subject to the constraint that both are positive. For UT 1998 Dec.\ 19, when no continuum was reliably detected, we quote the 90\% confidence limit, $W^{\rm obs} > 2760$ \AA. For UT 1998 Dec.\ 20, when continuum was marginally detected, we quote the 90\% confidence interval, $W^{\rm obs} = 710 - 1550$ \AA. \section{Redshift Determination} As discussed by Dey \etal (1998) and Weymann \etal (1998) for two $z > 5$ Ly$\alpha$~-emitting field galaxies, a solitary, faint emission line at red wavelengths is most likely to be either low-redshift {\rm [O~II]}\ or high-redshift Ly$\alpha$~. Similar arguments are even more persuasive for HzRGs because of their strong, rich emission line spectra. 
For example, if the line at $\approx$ 7530 \AA\ were [\ion{O}{2}]$\lambda$3727~\ at $z = 1.020$, then composite radio galaxy spectra (McCarthy 1993; Stern \etal 1999a) indicate that the \tn\ spectrum should have shown {\rm CII]}$\lambda$2326~\ at 4699 \AA\, with $\approx 40 - 70$\% the strength of [\ion{O}{2}]$\lambda$3727~, and \ion{Mg}{2}$\lambda\lambda$2796,2803~\ at 5653 \AA\, with $\approx 20 - 60$\% the strength of [\ion{O}{2}]$\lambda$3727~. Similar arguments rule out identifying the emission line with \Ha\ at $z = 0.147$ or {\rm [O~III]}\ at $z = 0.504$, since in these cases even stronger confirming lines should have been seen. The large equivalent widths also argue against identifying the emission line with [\ion{O}{2}]$\lambda$3727~\ at $z = 1.020$, implying $W_{\rm [OII]}^{\rm rest} > 1370$ \AA\ (night 1) and $350 < W_{\rm [OII]}^{\rm rest} < 770$ \AA\ (night 2). Radio galaxy composites typically have rest--frame [\ion{O}{2}]$\lambda$3727~\ equivalent widths of $\approx 130$ \AA\, (McCarthy 1993; Stern \etal 1999a), though active galaxies with extreme $W_{\rm [OII]}^{\rm rest}$ are occasionally observed ($W_{\rm [OII]}^{\rm rest} \approx 750$ \AA; Stern \etal 1999b). The equivalent width of \tn\ is more typical of high-redshift Ly$\alpha$~, which is often observed with rest frame values of several $\times$ 100 \AA\ in HzRGs (Table 2). We also note that the observations from the second night show that Ly$\alpha$~\ is attenuated on the blue side, presumably due to associated and intervening hydrogen gas, as is commonly observed in HzRGs (\eg van Ojik \etal\ 1997; Dey 1997) and normal star-forming galaxies at $z > 5$ (\eg Dey \etal 1998). Finally, the faint $K$-band magnitude of \tn\ conforms to the extrapolation of the $K - z$ relation to $z > 5$ (Figure~\ref{kz}). Identifying the emission line with [\ion{O}{2}]$\lambda$3727~\ would imply a severely underluminous HzRG (by 3 -- 4 mag).
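The candidate identifications considered above amount to simple redshift arithmetic, and the angular scale and age quoted in the introduction follow from the adopted cosmology. As an illustrative numerical cross-check (not part of the original analysis; the rest wavelengths and the integration routine are our own, and the $\Lambda = 0$ open-universe distance formulae assume the parameters stated in \S1):

```python
import math

# Candidate identifications for the single emission line at ~7530 A
# (rest wavelengths in Angstroms; standard values, not from the text).
LAM_OBS = 7530.0
REST = {"Lya": 1215.67, "[OII]": 3727.0, "[OIII]": 5007.0, "Ha": 6563.0}
z_of = {name: LAM_OBS / lam - 1.0 for name, lam in REST.items()}
# -> Lya gives z ~ 5.19; [OII] z ~ 1.020; [OIII] z ~ 0.504; Ha z ~ 0.147

# Adopted cosmology: H0 = 65 km/s/Mpc, Omega_M = 0.3, Lambda = 0 (so Omega_k = 0.7).
H0, OM, OK = 65.0, 0.3, 0.7
C_KMS = 299792.458
D_H = C_KMS / H0                        # Hubble distance in Mpc
T_H = (3.0857e19 / H0) / 3.1557e16      # Hubble time in Gyr

def simpson(f, a, b, n=4000):
    """Composite Simpson integration (n must be even)."""
    h = (b - a) / n
    return (f(a) + f(b)
            + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))) * h / 3.0

z = z_of["Lya"]
E = lambda zp: math.sqrt(OM * (1 + zp) ** 3 + OK * (1 + zp) ** 2)
dc = simpson(lambda zp: 1.0 / E(zp), 0.0, z)               # dimensionless comoving distance
d_m = D_H * math.sinh(math.sqrt(OK) * dc) / math.sqrt(OK)  # transverse comoving distance (Ok > 0)
d_a = d_m / (1 + z)                                        # angular diameter distance, Mpc
kpc_per_arcsec = d_a * 1000.0 * math.radians(1.0 / 3600.0) # ~7.0 kpc per arcsec

# Age of the universe at z: t = (1/H0) * int_0^{1/(1+z)} da / sqrt(OM/a + OK) -> ~1.08 Gyr
a_z = 1.0 / (1 + z)
age = T_H * simpson(lambda a: math.sqrt(a / (OM + OK * a)) if a > 0 else 0.0, 0.0, a_z)
```

These values reproduce the scales quoted in the introduction (1\arcsec\ $\simeq 7.0$ kpc; age $\simeq 1.08$ Gyr at $z = 5.19$).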
Therefore, the most plausible identification of the emission line in \tn\ is with Ly$\alpha$~\ at a (mean) observed wavelength of 7530 \AA\ and $z = 5.19$. Table~1 gives the dereddened emission--line fluxes. \section{Discussion} Among all known $z \gtrsim 3.8$ HzRGs, \tn\ is fairly typical in radio luminosity, equivalent width, and velocity width (Table~2). But this source has the steepest radio spectrum, consistent with the $\alpha - z$ relationship for radio galaxies (\eg\ R\"ottgering \etal 1997). \tn\ also has the smallest linear size, perhaps indicating that the source is relatively young and/or embedded in a denser environment compared to the other HzRGs, commensurate with its large velocity width (van Ojik \etal 1997) and very high redshift. Together with 8C~1435$+$63, \tn\ appears underluminous in Ly$\alpha$~, which might be caused by absorption in a relatively dense cold and dusty medium. Evidence for cold gas and dust in some of the most distant HzRGs has been found from sub--mm continuum and CO--line observations of 8C~1435$+$63 and 4C~41.17 (\eg Ivison \etal 1998). Our observations of \tn\ extend the Hubble $K-z$ diagram for powerful radio galaxies to $z = 5.19$. Simple stellar evolution models are shown in Figure~\ref{kz} for comparison with the HzRG. Despite the enormous $k$--correction effect (from $U_{\rm rest}$ at $z = 5.19$ to $K_{\rm rest}$ at $z = 0$) and strong morphological evolution (from radio--aligned to elliptical structures), the $K-z$ diagram remains a powerful phenomenological tool for finding radio galaxies at extremely high redshifts. Deviations from the $K-z$ relationship may exist (Eales \etal 1997; but see McCarthy 1998), and scatter in the $K-z$ values appears to increase with redshift. The clumpy, radio--aligned $U_{\rm rest}$ morphology resembles that of other HzRGs (van Breugel \etal 1998; Pentericci \etal 1998).
If the continuum is dominated by star light, as appears to be the case in the radio--aligned HzRG 4C~41.17 at $z = 3.798$ (Dey \etal 1997), then $M(U) = -24.4$ for \tn. We can then derive an SFR of $\sim$200 M$_\odot$ yr$^{-1}$, assuming a Bruzual \& Charlot (1999) GISSEL stellar evolution model with metallicity $Z = 0.008$, no extinction, and a Salpeter IMF. This SFR value is highly uncertain due to the unknown, but competing, effects of extinction and [\ion{O}{2}]$\lambda$3727~\ emission--line contamination, but is not unreasonable. It is 2.5 times {\it less} than in 4C~41.17, which has $M(U) = -25.2$ using the same aperture (Chambers \etal 1990). \tn\ may be a massive, active galaxy in its formative stage, in which the SFR is boosted by induced star formation (\eg Dey \etal 1997). For comparison, other `normal' star--forming galaxies at $z > 5$ have 10 -- 30 times lower SFR ($\sim 6 - 20\,\msun\,{\rm yr}^{-1}$; Dey \etal 1998; Weymann \etal 1998; Spinrad \etal 1998). Recent $z \sim 3$ and $z \sim 4$ Lyman--break galaxy observations have suggested a possible divergence of star formation and AGN activity at high redshift (Steidel \etal 1999), contrary to what was previously thought (\eg Haehnelt, Natarajan \& Rees 1998). However, if starbursts and AGN are closely coupled, as suggested to explain the ultraluminous infrared galaxies (Sanders \& Mirabel 1996), then young AGN may inhabit especially dusty, obscured galaxy systems. To obtain a proper census of the AGN population at the very highest redshifts therefore requires samples which avoid optical photometric selection and extinction bias, such as our cm--wavelength/$K$-band radio galaxy sample. As emphasized by Loeb (1993), if massive black holes form in a hierarchical fashion together with their host galaxies, this process must be quick and efficient, as available timescales are short: at $z = 5.19$ the Universe is only 1 Gyr old.
It is unclear how this could be done, so other models, where primordial massive black holes form soon after the Big Bang and {\it prior} to the beginning of galaxy formation, may require additional investigation. \acknowledgments We thank G.\ Puniwai, W.\ Wack, R.\ Goodrich and R.\ Campbell for their expert assistance during our observing runs at the W.M.\ Keck Observatory, and A.\ Dey, J.R.\ Graham and H.\ Spinrad for useful discussions. The work by W.v.B., C.D.B. and S.A.S.\ at IGPP/LLNL was performed under the auspices of the US Department of Energy under contract W-7405-ENG-48. W.v.B.\ also acknowledges support from NASA grant GO 5940, and D.S. from IGPP/LLNL grant 98--AP017.
\section{Introduction} Recall that a type $p$ over a set $A$ in a simple theory is {\em one-based} if for any tuple $\bar a$ of realizations of $p$ and any $B\supseteq A$ the canonical base $\Cb(\bar a/B)$ is contained in $\bdd(\bar aA)$. One-basedness implies that the forking geometry is particularly well-behaved; for instance one-based groups are bounded-by-abelian-by-bounded. Ehud Hrushovski showed in \cite[Proposition 3.4.1]{udi} that for stable stably embedded types one-basedness is preserved under analyses: If $p$ is stable stably embedded in a supersimple theory, and analysable (in the technical sense defined in the next section) in one-based types, then $p$ is itself one-based. Zo\'e Chatzidakis then gave another proof for supersimple structures \cite[Theorem 3.10]{zoe}, using semi-regular analyses. We shall give an easy direct proof of the theorem stated in the abstract, thus removing the hypotheses of stability, stable embedding, or supersimplicity; it is similar to Hrushovski's proof, but does not use germs of definable functions (which work less well in simple unstable theories), and has to deal with non-stationarity of types. While we are at it, we shall also generalize the notions of bounded closure and one-basedness to $\Sigma$-closure and $\Sigma$-basedness, where $\Sigma$ is an $\emptyset$-invariant collection of partial types (thought of as small). This may for instance be applied to consider one-basedness {\em modulo types of finite $SU$-rank}, or {\em modulo superstable types}. Our notation is standard and follows \cite{wa00}. Throughout the paper, the ambient theory will be simple, and we shall be working in $\M^{heq}$, where $\M$ is a sufficiently saturated model of the ambient theory. Thus tuples are tuples of hyperimaginaries, and $\dcl=\dcl^{heq}$. \section{$\Sigma$-closure} In this section $\Sigma$ will be an $\emptyset$-invariant family of partial types. We first recall the notions of internality and analysability.
\defn Let $\pi$ be a partial type over $A$. Then $\pi$ is\begin{itemize} \item ({\em almost,} resp.)\ {\em internal\/} in $\Sigma$, or ({\em almost,} resp.)\ {\em $\Sigma$-internal}, if for every realization $a$ of $\pi$ there is $B\ind_Aa$ and $\bar b$ realizing types in $\Sigma$ based on $B$, such that $a\in\dcl(B\bar b)$ (or $a\in\bdd(B\bar b)$, respectively). \item {\em analysable\/} in $\Sigma$, or {\em $\Sigma$-analysable}, if for any $a\models\pi$ there are $(a_i:i<\alpha)\in\dcl(A,a)$ such that $\tp(a_i/A,a_j:j<i)$ is $\Sigma$-internal for all $i<\alpha$, and $a\in\bdd(A,a_i:i<\alpha)$.\end{itemize} A type $\tp(a/A)$ is {\em foreign} to $\Sigma$ if $a\ind_{AB}\bar b$ for all $B\ind_Aa$ and $\bar b$ realizing types in $\Sigma$ over $B$.\edefn \defn The {\em $\Sigma$-closure\/} $\Pcl(A)$ of a set $A$ is the collection of all hyperimaginaries $a$ such that $\tp(a/A)$ is $\Sigma$-analysable.\edefn We think of $\Sigma$ as a family of small types. For instance, if $\Sigma$ is the family of all bounded types, then $\Pcl(A)=\bdd(A)$. Other possible choices might be the family of all types of $SU$-rank $<\omega^\alpha$, for some ordinal $\alpha$, or the family of all superstable types. If $P$ is an $\emptyset$-invariant family of types, and $\Sigma$ is the family of all $P$-analysable types to which all types in $P$ are foreign, then $\Pcl(A)=\cl_P(A)$ as defined in \cite[Definition 3.5.1]{wa00}; if $P$ consists of a single regular type $p$, this in turn is the $p$-closure from \cite{udi1} (see also \cite[p.\ 265]{pill}). \bem In general $\bdd(A)\subseteq\Pcl(A)$; if the inequality is strict, then $\Pcl(A)$ has the same cardinality as the ambient monster model, and hence violates the usual conventions. However, this is usually harmless. Note that $\Pcl(.)$ is a closure operator.\ebem \tats\label{forequ} The following are equivalent:\begin{enumerate} \item $\tp(a/A)$ is foreign to $\Sigma$. \item $a\ind_A\Pcl(A)$. \item $a\ind_A\dcl(aA)\cap\Pcl(A)$. 
\item $\dcl(aA)\cap\Pcl(A)\subseteq\bdd(A)$.\end{enumerate}\etats \bew This follows immediately from \cite[Proposition 3.4.12]{wa00}; see also \cite[Lemma 3.5.3]{wa00}.\qed $\Sigma$-closure is well-behaved with respect to independence. \lmm\label{clind} Suppose $A\ind_BC$. Then $\Pcl(A)\ind_{\Pcl(B)}\Pcl(C)$. More precisely, for any $A_0\subseteq\Pcl(A)$ we have $A_0\ind_{B_0}\Pcl(C)$, where $B_0=\dcl(A_0B)\cap\Pcl(B)$. In particular, $\Pcl(AB)\cap\Pcl(BC)=\Pcl(B)$.\elmm \bew Let $B_1=\Pcl(B)\cap\dcl(BC)$. Then $C\ind_BA$ implies $C\ind_{B_1}A$, and $\tp(C/B_1)$ is foreign to $\Sigma$ by Fact \ref{forequ}~$(3\Rightarrow1)$. Hence $C\ind_{B_1}\Pcl(A)$, and $C\ind_{B_1}A_0$. Since $\tp(A_0/B_0)$ is foreign to $\Sigma$ by Fact \ref{forequ}, we obtain $A_0\ind_{B_0}\Pcl(B_0)$. But $\Pcl(B_0)=\Pcl(B)\supseteq B_1$, whence $A_0\ind_{B_0}C$ by transitivity, and finally $A_0\ind_{B_0}\Pcl(C)$ by foreignness to $\Sigma$ again.\qed \section{$\Sigma$-basedness} Again, $\Sigma$ will be an $\emptyset$-invariant family of partial types. \defn A type $p$ over $A$ is {\em $\Sigma$-based} if $\Cb(\bar a/\Pcl(B))\subseteq\Pcl(\bar aA)$ for any tuple $\bar a$ of realizations of $p$ and any $B\supseteq A$.\edefn \bem Equivalently, $p\in S(A)$ is $\Sigma$-based if $\bar a\ind_{\Pcl(\bar aA)\cap\Pcl(B)}\Pcl(B)$ for any tuple $\bar a$ of realisations of $p$ and any $B\supseteq A$.\ebem \lmm\label{clbase} Suppose $\tp(a/A)$ is $\Sigma$-based, $A\subseteq B$, and $a_0\in\Pcl(\bar aB)$, where $\bar a$ is a tuple of realizations of $\tp(a/A)$. Then $\tp(a_0/B)$ is $\Sigma$-based.\elmm \bew Let $\bar a_0$ be a tuple of realizations of $\tp(a_0/B)$, and $C\supseteq B$. There is a tuple $\tilde a$ of realizations of $\tp(a/A)$ such that $\bar a_0\in\Pcl(\tilde aB)$; we may choose it such that $\tilde a\ind_{\bar a_0B}C$. Then $\Pcl(\tilde aB)\cap\Pcl(C)\subseteq\Pcl(\bar a_0B)$ by Lemma \ref{clind}. Put $X=\Cb(\tilde a/\Pcl(C))$. 
By $\Sigma$-basedness of $\tp(a/A)$ we have $$X\subseteq\Pcl(\tilde aA)\cap\Pcl(C)\subseteq\Pcl(\bar a_0B).$$ As $\tilde a\ind_X\Pcl(C)$ we get $\tilde aB\ind_{XB}\Pcl(C)$, and hence $\bar a_0\ind_Y\Pcl(C)$ by Lemma \ref{clind}, where $Y=\Pcl(XB)\cap\dcl(\bar a_0XB)$. As $Y\subseteq\Pcl(C)$, we have $$\Cb(\bar a_0/\Pcl(C))\subseteq Y\subseteq\Pcl(XB)\subseteq\Pcl(\bar a_0B).\qed$$ \lmm\label{unionbase} If $\tp(a)$ and $\tp(b)$ are $\Sigma$-based, so is $\tp(ab)$.\elmm \bew Let $\bar a$ and $\bar b$ be tuples of realizations of $\tp(a)$ and $\tp(b)$, respectively, and consider a set $A$ of parameters. We add $\Pcl(\bar a\bar b)\cap\Cb(\bar a\bar b/\Pcl(A))$ to the language. By $\Sigma$-basedness of $\tp(a)$ we get $$\Cb(\bar a/\Pcl(A))\subseteq\Pcl(\bar a)\cap\Cb(\bar a\bar b/\Pcl(A))=\dcl(\emptyset),$$ whence $\bar a\ind\Pcl(A)$; similarly $\bar b\ind\Pcl(A)$. Put $b_1=\Cb(\bar b/\Pcl(\bar aA))$, and choose $\bar a'A'\models\tp(\bar aA/b_1)$ with $\bar a'A'\ind_{b_1}\bar a\bar bA$. Then $b_1\in\Pcl(\bar a'A')$; by $\Sigma$-basedness of $\tp(a)$ and Lemma \ref{clbase} applied to $\bar ab_1\in\Pcl(\bar a\bar a'A')$ we have $\Cb(\bar ab_1/\Pcl(AA'))\subseteq\Pcl(\bar ab_1A')$. If $Y=\Pcl(\emptyset)\cap\dcl(b_1)$, then $A\ind_Yb_1$ by Lemma \ref{clind}, as $b_1\in\Pcl(\bar b)$ by $\Sigma$-basedness of $\tp(b)$ and because $\bar b\ind\Pcl(A)$; since $\tp(A'/b_1)=\tp(A/b_1)$ we also have $A'\ind_Yb_1$, whence $A'\ind_Y\bar ab_1A$, and $A'\ind_{YA}\bar ab_1$. As $\Pcl(YA)=\Pcl(A)$, Lemma \ref{clind} implies $$\begin{aligned}\Cb(\bar ab_1/\Pcl(A))&=\Cb(\bar ab_1/\Pcl(AA')) \subseteq\Pcl(\bar ab_1A')\cap\Pcl(A)\\ &\subseteq\Pcl(\bar ab_1)\subseteq\Pcl(\bar a\bar b),\end{aligned}$$ by Lemma \ref{clind} since $A'\ind_{\bar ab_1Y}A$. On the other hand, put $C=\Cb(\bar ab_1/\Pcl(A))$. 
Then $\bar b\ind_{b_1}\Pcl(\bar aA)$ by definition of $b_1$, whence $\bar a\bar b\ind_{\bar ab_1}\Pcl(A)$; as $\bar ab_1\ind_C\Pcl(A)$ we get $\bar a\bar b\ind_C\Pcl(A)$, whence $\Cb(\bar a\bar b/\Pcl(A))\subseteq C$. So $$\begin{aligned}\Cb(\bar a\bar b/\Pcl(A))&=\Cb(\bar ab_1/\Pcl(A))\cap\Cb(\bar a\bar b/\Pcl(A))\\ &\subseteq\Pcl(\bar a\bar b)\cap\Cb(\bar a\bar b/\Pcl(A))=\dcl(\emptyset),\end{aligned}$$ whence $\bar a\bar b\ind\Pcl(A)$.\qed \kor\label{limit} If $\tp(a_i)$ is $\Sigma$-based for all $i<\alpha$, so is $\tp(\bigcup_{i<\alpha}a_i)$.\ekor \bew We use induction on $\beta$ to show that $\tp(\bigcup_{i<\beta}a_i)$ is $\Sigma$-based, for $\beta\le\alpha$. This is clear for $\beta=0$; it follows from Lemma \ref{unionbase} for successor ordinals. And if $\beta$ is a limit ordinal, then for any set $A$ $$\Cb(\bigcup_{i<\beta}a_i/\Pcl(A))=\bigcup_{i<\beta}\Cb(\bigcup_{j\le i}a_i/\Pcl(A))\subseteq\Pcl(\bigcup_{i<\beta}a_i).\qed$$ \lmm\label{indbase} If $\tp(a/A)$ is $\Sigma$-based and $a\ind A$, then $\tp(a)$ is $\Sigma$-based.\elmm \bew Let $\bar a$ be a tuple of realizations of $\tp(a)$, and consider a set $B$ of parameters. For every $a_i\in\bar a$ choose $A_i$ with $\tp(a_iA_i)=\tp(aA)$ and $A_i\ind_{a_i}(\bar a,B,A_j:j<i)$. As $A_i\ind a_i$ we obtain $A_i\ind(\bar a,B,A_j:j<i)$, whence $A_i\ind_{(A_j:j<i)}\bar aB$, and inductively $(A_j:j\le i)\ind\bar aB$. Put $\bar A=\bigcup_{a_i\in\bar a}A_i$; we just saw that $\bar A\ind\bar aB$. Now $\tp(a_i/\bar A)$ is $\Sigma$-based for all $a_i\in\bar a$, and so is $\tp(\bar a/\bar A)$ by Corollary \ref{limit}. 
As $\bar a\ind_B\bar A$, Lemma \ref{clind} implies $$\Cb(\bar a/\Pcl(B))=\Cb(\bar a/\Pcl(\bar AB))\subseteq\Pcl(\bar a\bar A)\cap\Pcl(B)=\Pcl(\bar a)\cap\Pcl(B),$$ where the last equality follows from $\bar aA\ind_{\bar a}B$ and Lemma \ref{clind}.\qed \kor\label{internal} If $p$ is almost internal in $\Sigma$-based types, then $p$ is $\Sigma$-based.\ekor \bew Suppose $p=\tp(a/A)$, and choose $B\ind_Aa$ and $\bar b$ such that $a\in\bdd(B\bar b)$ and $\tp(b/B)$ is $\Sigma$-based for all $b\in\bar b$. Then $\tp(\bar b/AB)$ is $\Sigma$-based by Lemma \ref{limit}, as is $\tp(a/AB)$ by Lemma \ref{clbase}, and $\tp(a/A)$ by Lemma \ref{indbase}.\qed \lmm\label{succ} If $\tp(a)$ and $\tp(b/a)$ are $\Sigma$-based, so is $\tp(ab)$.\elmm \bew Consider a tuple $\bar a\bar b$ of realizations of $\tp(ab)$, and a set $A$ of parameters. As $\tp(\bar a)$ and $\tp(\bar b/\bar a)$ are both $\Sigma$-based, we may suppose $a=\bar a$ and $b=\bar b$. Put $C=\Cb(ab/\Pcl(A))$; again we add $\Pcl(ab)\cap C$ to the language. By $\Sigma$-basedness of $\tp(a)$ we get $a\ind\Pcl(A)$. Consider a Morley sequence $(a_ib_i:i<\omega)$ in $\lstp(ab/C)$; we may assume that $(a_ib_i:i<\omega)\ind_CabA$. Since $(a_i:i<\omega)\ind C$ we get $ab\ind(a_i:i<\omega)$. Moreover, as $\tp(ab/C)$ is foreign to $\Sigma$, we have $ab\ind_C\Pcl(a_ib_i:i<\omega)$. On the other hand $C\in\dcl(a_ib_i:i<\omega)$, whence $$C=\Cb(ab/\Pcl(a_ib_i:i<\omega)).$$ Put $b'=\Cb(ab/\Pcl(a,a_ib_i:i<\omega))$. Then $a\in b'$, and $b'\in\Pcl(ab)$ by $\Sigma$-basedness of $\tp(b/a)$. Put $X=\Pcl(\emptyset)\cap\dcl(b')$. Then $b'\ind_X(a_i:i<\omega)$ by Lemma \ref{clind}; as $\tp(b'/a_i:i<\omega)$ is $\Sigma$-based by Lemma \ref{clbase} and Corollary \ref{limit} applied to $b'\in\Pcl(a,a_ib_i:i<\omega)$, so is $\tp(b'/X)$ by Lemma \ref{indbase}. Put $C'=\Cb(b'/\Pcl(a_ib_i:i<\omega))$, then $C'\subseteq\Pcl(b')\subseteq\Pcl(ab)$ by $\Sigma$-basedness. 
Now $ab\ind_{b'}\Pcl(a_ib_i:i<\omega)$ by definition of $b'$; as $b'\ind_{C'}\Pcl(a_ib_i:i<\omega)$ by definition, we get $ab\ind_{C'}\Pcl(a_ib_i:i<\omega)$, whence $C\subseteq C'$. We obtain $$C=C'\cap C\subseteq\Pcl(ab)\cap C=\dcl(\emptyset),$$ whence $ab\ind\Pcl(A)$.\qed \satz Let $p$ be analysable in $\Sigma$-based types. Then $p$ is $\Sigma$-based.\esatz \bew Suppose $p=\tp(a/A)$. Then there is a sequence $(a_i:i<\alpha)\subseteq\dcl(aA)$ such that $a\in\bdd(A,a_i:i<\alpha)$ and $\tp(a_i/A,a_j:j<i)$ is internal in $\Sigma$-based types for all $i<\alpha$. So $\tp(a_i/A,a_j:j<i)$ is $\Sigma$-based for all $i<\alpha$ by Corollary \ref{internal}; we use induction on $i$ to show that $\tp(a_j:j<i/A)$ is $\Sigma$-based. This is clear for $i=0$ and $i=1$; by Lemma \ref{limit} it is true for limit ordinals, and by Lemma \ref{succ} it holds for successor ordinals.\qed \kor If $p$ is analysable in one-based types, then $p$ is itself one-based.\qed\ekor
\section{Introduction} \label{intro} Gaussian Processes (GP) are popular and expressive nonparametric models, and considerable effort has gone into alleviating their cubic runtime complexity. Notable successes include inducing point methods \citep[e.g.][]{snelson2006sparse,titsias2009variational,hensman2013gaussian}, finite-basis expansions \citep[e.g.][]{rahimi2008random,mutny2019efficient,wilson2020efficiently, loper2020general}, nearest neighbor truncations \citep[e.g.][]{datta2016hierarchical,katzfuss2021general}, and iterative numerical methods \citep[e.g.][]{cunningham2008fast,cutajar2016preconditioning,gardner2018gpytorch}. Common to these techniques is the classic \emph{speed-bias tradeoff}: coarser GP approximations afford faster but more biased solutions that in turn affect both the model's predictions and learned hyperparameters. While a few papers analyze the bias of inducing point methods \citep{bauer2017understanding,burt2019rates}, the biases of other approximation techniques, and their subsequent impact on learned GP models, have not been rigorously studied. Here we scrutinize the biases of two popular techniques -- random Fourier features (RFF) \citep{rahimi2008random} and conjugate gradients (CG) \citep[e.g.][]{cunningham2008fast,cutajar2016preconditioning,gardner2018gpytorch}. These methods are notable due to their popularity and because they allow dynamic control of the speed-bias tradeoff: at any model evaluation, the user can adjust the number of CG iterations or RFF features to a desired level of approximation accuracy. In practice, it is common to truncate these methods to a fixed number of iterations/features that is deemed adequate. However, such truncation will stop short of an exact (machine precision) solution and potentially lead to biased optimization outcomes. We provide a novel theoretical analysis of the biases resulting from RFF and CG on the GP log marginal likelihood objective. 
Specifically, we prove that CG is biased towards hyperparameters that underfit the data, while RFF is biased towards overfitting. In addition to yielding suboptimal hyperparameters, these biases hurt posterior predictions, regardless of the inference method used at test-time. Perhaps surprisingly, this effect is not subtle, as we will demonstrate. Our analysis suggests there is value in debiasing GP learning with CG and RFF. To do so, we turn to recent work that shows the merits of exchanging the speed-bias tradeoff for a \emph{speed-variance tradeoff} \citep{beatson2019efficient,chen2020residual,luo2020sumo, oktay2020randomized}. These works all introduce a randomization procedure that reweights elements of a fast truncated estimator, eliminating its bias at the cost of increasing its variance. We thus develop bias-free versions of GP learning with CG and RFF using randomized truncation estimators. In short, we randomly truncate the number of CG iterations and RFF features, while reweighting intermediate solutions to maintain unbiasedness. Our variant of CG uses the {\bf R}ussian {\bf R}oulette estimator \cite{kahn1955use}, while our variant of RFF uses the {\bf S}ingle {\bf S}ample estimator of \citet{lyne2015russian}. We believe our {\bf RR-CG} and {\bf SS-RFF} methods to be the first to produce unbiased estimators of the GP log marginal likelihood with $< \bigo{N^3}$ computation. Finally, through extensive empirical evaluation, we find that our methods and their biased counterparts indeed constitute a bias-variance tradeoff. Both RR-CG and SS-RFF are unbiased, recovering nearly the same optimum as the exact GP method, while GPs trained with CG and RFF often converge to solutions with worse likelihood. We note that bias elimination is not always practical. For SS-RFF, the optimization is slow due to the large auxiliary variance needed to counteract the slowly decaying bias of RFF.
On the other hand, RR-CG incurs a minimal variance penalty, likely due to the favorable convergence properties of CG. In a wide range of benchmark datasets, RR-CG demonstrates similar or better predictive performance compared to CG using the same expected computational time. To summarize, this work offers three main contributions: \begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt] \item theoretical analysis of the bias of CG- and RFF-based GP approximation methods (\S 3) \item RR-CG and SS-RFF: bias-free versions of these popular GP approximation methods (\S 4) \item results demonstrating the value of our RR-CG and SS-RFF methods (\S 3 and 5). \end{itemize} \section{Background} \label{background} We consider observed data $\mathcal{D} = \{(\ensuremath{\mathbf{x}}_i, y_i)\}_{i=1}^N$ for $\ensuremath{\mathbf{x}}_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$, and the standard GP model: \begin{equation*} \begin{gathered} f(\cdot) \sim \mathcal{GP} ( \mu(\cdot), k(\cdot, \cdot)), \\ \ensuremath{\mathbf{y}}_i = f(\ensuremath{\mathbf{x}}_i) + \epsilon_i, \qquad \epsilon_i \sim \normaldist{0}{\sigma^2} \end{gathered} \end{equation*} where $k(\cdot, \cdot)$ is the covariance kernel, $\mu(\cdot)$ is set to zero without loss of generality, and hyperparameters are collected into the vector $\ensuremath{\boldsymbol{\theta}}$, which is optimized as: % \begin{equation} \begin{aligned} \ensuremath{\boldsymbol{\theta}}^* &= {\textstyle \argmin_{\ensuremath{\boldsymbol{\theta}}} } \: \mathcal{L}(\ensuremath{\boldsymbol{\theta}}) \\ \mathcal{L}(\ensuremath{\boldsymbol{\theta}}) &= - \log p( \ensuremath{\mathbf{y}} \! \mid \! \ensuremath{\boldmatrix{X}}; \ensuremath{\boldsymbol{\theta}}) \\ & = \frac{1}{2} \big( \underbrace{\log \vert \ensuremath{\widehat{\bK}_{\bX\bX}} \vert}_{\textrm{model complexity}} + \underbrace{\ensuremath{\mathbf{y}}^{\! 
\top} \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}}}_{\textrm{data fit}} + N \log 2 \pi \big) \end{aligned} \label{eqn:log_lik} \end{equation} where $\ensuremath{\widehat{\bK}_{\bX\bX}} \in \ensuremath{\mathbb{R}}^{N \times N}$ is the Gram matrix of all data points with diagonal observational noise: $$ \ensuremath{\widehat{\bK}_{\bX\bX}}[i, j] = k(\ensuremath{\mathbf{x}}_i, \ensuremath{\mathbf{x}}_j) + \sigma^2 \mathbb{I}_{i = j}. $$ Following standard practice, we optimize $\ensuremath{\boldsymbol{\theta}}$ with gradients: % \begin{align} {\textstyle \frac{\partial \mathcal{L}}{\partial \ensuremath{\boldsymbol{\theta}}} } &= \frac{1}{2} \left( \tr{\ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} {\textstyle \frac{\partial \ensuremath{\widehat{\bK}_{\bX\bX}}}{\partial \ensuremath{\boldsymbol{\theta}}}} } - \ensuremath{\mathbf{y}}^{\top} \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} {\textstyle \frac{\partial \ensuremath{\widehat{\bK}_{\bX\bX}}}{\partial \ensuremath{\boldsymbol{\theta}}}} \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}} \right). \label{eqn:log_lik_deriv} \end{align} % Three terms thus dominate the computational complexity: $\ensuremath{\mathbf{y}}^\top \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}}$, $\log \vert \ensuremath{\widehat{\bK}_{\bX\bX}} \vert$, and $\text{tr} \{ \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1}\frac{\partial \ensuremath{\widehat{\bK}_{\bX\bX}}}{\partial \ensuremath{\boldsymbol{\theta}}} \}$. The common approach to computing this triad is the Cholesky factorization, requiring $\bigo{N^3}$ time and $\bigo{N^2}$ space. Extensive literature has accelerated the inference and hyperparameter learning of GP. 
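For reference, this Cholesky-based computation of \cref{eqn:log_lik} can be sketched in a few lines (an illustrative sketch rather than the paper's implementation; the RBF kernel and the hyperparameter defaults here are our own assumptions):

```python
import numpy as np

def gp_nll(X, y, lengthscale=1.0, outputscale=1.0, noise=0.1):
    """Exact GP negative log marginal likelihood via Cholesky: O(N^3) time."""
    # RBF (squared exponential) kernel, an illustrative choice of k(., .)
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K_hat = outputscale * np.exp(-0.5 * sq_dists / lengthscale ** 2) \
            + noise * np.eye(len(X))                     # \hat{K}_{XX}
    L = np.linalg.cholesky(K_hat)                        # K_hat = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K_hat^{-1} y via two triangular solves
    log_det = 2.0 * np.sum(np.log(np.diag(L)))           # log |K_hat| from the Cholesky diagonal
    return 0.5 * (log_det + float(y @ alpha) + len(X) * np.log(2.0 * np.pi))
```

The two dominating terms of \cref{eqn:log_lik} appear explicitly; approximation methods aim to bypass exactly this cubic-cost factorization.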
Two very popular strategies are using \emph{conjugate gradients} \citep{cunningham2008fast, cutajar2016preconditioning, gardner2018gpytorch, wang2019exact} to approximate the linear solves in \cref{eqn:log_lik_deriv}, and \emph{random Fourier features} \citep{rahimi2008random}, which constructs a randomized finite-basis approximation of the kernel. \subsection{Conjugate Gradients} \label{sec:CG_background} To apply conjugate gradients to GP learning, we begin by replacing the gradient in \cref{eqn:log_lik_deriv} with a stochastic estimate \citep{cutajar2016preconditioning,gardner2018gpytorch}: \begin{align} {\textstyle \frac{\partial \mathcal{L}}{\partial \ensuremath{\boldsymbol{\theta}}} } &\approx \frac{1}{2} \left( \ensuremath{\mathbf{z}}^\top \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} {\textstyle \frac{\partial \ensuremath{\widehat{\bK}_{\bX\bX}}}{\partial \ensuremath{\boldsymbol{\theta}}}} \ensuremath{\mathbf{z}} - \ensuremath{\mathbf{y}}^{\top} \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} {\textstyle \frac{\partial \ensuremath{\widehat{\bK}_{\bX\bX}}}{\partial \ensuremath{\boldsymbol{\theta}}}} \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}} \right), \label{eqn:log_lik_deriv_stochastic} \end{align} where $\ensuremath{\mathbf{z}}$ is a random variable such that $\mathbb{E}[\ensuremath{\mathbf{z}}] = 0$ and $\mathbb{E}[\ensuremath{\mathbf{z}}\bz^T] = \ensuremath{\boldmatrix{I}}$. Note that the first term constitutes a stochastic estimate of the trace term \cite{hutchinson1989stochastic}. Thus, stochastic optimization of \cref{eqn:log_lik} can be reduced to computing the linear solves $\ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}}$ and $\ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{z}}$. Conjugate gradients (CG) \cite{hestenes1952methods} is an iterative algorithm for computing solves $\ensuremath{\boldmatrix{A}}^{-1} \ensuremath{\mathbf{b}}$ with a symmetric positive definite matrix $\ensuremath{\boldmatrix{A}}$.
It consists of a three-term recurrence, where each new term requires only a matrix-vector multiplication with $\ensuremath{\boldmatrix{A}}$. More formally, each CG iteration computes a new term of the following summation: \begin{align} \textstyle \ensuremath{\boldmatrix{A}}^{-1} \ensuremath{\mathbf{b}} = \sum_{i=1}^N \gamma_i \ensuremath{\mathbf{d}}_i, \qquad \label{eqn:cg_series} \end{align} where the $\gamma_i$ are coefficients and the $\ensuremath{\mathbf{d}}_i$ are conjugate search directions \cite{golub2012matrix}. $N$ iterations of CG produce all $N$ summation terms and recover the exact solution. In practice, exact convergence may require more than $N$ iterations due to the inaccuracies of floating-point arithmetic. However, the summation converges exponentially, and so $J \ll N$ iterations may suffice to achieve high accuracy. CG is an appealing method for computing $\ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}}$ and $\ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{z}}$ due to its computational complexity and its potential for GPU-accelerated matrix products. $J$ iterations take at most $\bigo{JN^2}$ time and $\bigo{N}$ space if the matrix-vector products are performed in a map-reduce fashion \cite{wang2019exact}. However, ill-conditioned kernel matrices hinder the convergence rate \cite{cutajar2016preconditioning}, and so the $J^\text{th}$ CG iteration may yield an inaccurate approximation of \cref{eqn:log_lik_deriv_stochastic}.
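For concreteness, the recurrence behind \cref{eqn:cg_series} can be sketched in a few lines. The following is a minimal illustration (assuming \texttt{numpy}; the matrix and right-hand side are arbitrary stand-ins for $\ensuremath{\widehat{\bK}_{\bX\bX}}$ and $\ensuremath{\mathbf{y}}$, not our actual implementation), showing that the partial sums $\sum_{i \le J} \gamma_i \ensuremath{\mathbf{d}}_i$ recover the exact solve at $J = N$ while early truncation yields an approximation:

```python
import numpy as np

def cg_partial_sums(A, b, J):
    """J iterations of CG for the solve A^{-1} b (A symmetric positive definite).

    Returns x_J = sum_{i=1}^{J} gamma_i d_i, i.e. the first J terms of the series.
    """
    x = np.zeros_like(b)
    r = b.copy()                      # residual b - A @ x  (starting from x_0 = 0)
    d = r.copy()                      # first conjugate search direction
    for _ in range(J):
        Ad = A @ d
        gamma = (r @ r) / (d @ Ad)    # coefficient gamma_i
        x = x + gamma * d             # add one term of the summation
        r_new = r - gamma * Ad
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d          # next A-conjugate direction
        r = r_new
    return x

rng = np.random.default_rng(0)
G = rng.standard_normal((6, 6))
A = G @ G.T + 6 * np.eye(6)           # well-conditioned SPD stand-in for the kernel matrix
b = rng.standard_normal(6)

x_exact = np.linalg.solve(A, b)
x_full = cg_partial_sums(A, b, 6)     # N iterations: exact up to round-off
x_early = cg_partial_sums(A, b, 3)    # early truncation: an approximation
```

Only the matrix-vector product `A @ d` touches the full matrix, which is what makes the iteration amenable to map-reduce and GPU acceleration.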
\subsection{Random Fourier Features} \label{RFF_background} \citet{rahimi2008random} introduce a randomized finite-basis approximation to stationary kernels: \begin{equation} k(\ensuremath{\mathbf{x}}, \ensuremath{\mathbf{x}}') = k(\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{x}}') \approx \ensuremath{\boldsymbol{\phi}}(\ensuremath{\mathbf{x}})^{\top}\ensuremath{\boldsymbol{\phi}}(\ensuremath{\mathbf{x}}') \end{equation} where $\ensuremath{\boldsymbol{\phi}}(\ensuremath{\mathbf{x}}) \in \ensuremath{\mathbb{R}}^J$ and $J \ll N$. The RFF approximation relies on Bochner's theorem \citep{bochner1959lectures}: letting $\tau=\ensuremath{\mathbf{x}}-\ensuremath{\mathbf{x}}'$, all stationary kernels $k(\tau)$ on $\mathbb{R}^d$ can be exactly expressed as the Fourier dual of a nonnegative measure $\mathbb{P}(\ensuremath{\boldsymbol{\omega}})$: $ k(\tau) = \textstyle \int \mathbb{P}(\ensuremath{\boldsymbol{\omega}}) \exp(i\ensuremath{\boldsymbol{\omega}}^\top\tau) \, d\ensuremath{\boldsymbol{\omega}}.
\label{eqn:bochner} $ A Monte Carlo approximation of this Fourier transform (keeping only the real part, since $k$ is real-valued) yields: \begin{align*} k(\tau) \approx \frac{2}{J} \sum_{j=1}^{J/2} \cos(\ensuremath{\boldsymbol{\omega}}_j^\top\tau), \quad \ensuremath{\boldsymbol{\omega}}_j \sim \mathbb{P}(\ensuremath{\boldsymbol{\omega}}), \end{align*} % which simplifies to a finite-basis approximation: % \begin{gather*} \ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{X}}\bX} \approx [\ensuremath{\boldsymbol{\phi}}(\ensuremath{\mathbf{x}}_1) \ldots \ensuremath{\boldsymbol{\phi}}(\ensuremath{\mathbf{x}}_n)] [\ensuremath{\boldsymbol{\phi}}(\ensuremath{\mathbf{x}}_1) \ldots \ensuremath{\boldsymbol{\phi}}(\ensuremath{\mathbf{x}}_n)]^\top, \\ \ensuremath{\boldsymbol{\phi}}(\ensuremath{\mathbf{x}}) = \sqrt{2/J} \, [ \cos(\ensuremath{\boldsymbol{\omega}}_i^\top \ensuremath{\mathbf{x}}), \: \sin(\ensuremath{\boldsymbol{\omega}}_i^\top \ensuremath{\mathbf{x}})]_{i=1}^{J/2}, \quad \ensuremath{\boldsymbol{\omega}}_i \sim \mathbb{P}(\ensuremath{\boldsymbol{\omega}}). \end{gather*} For many common kernels, $\mathbb{P}(\ensuremath{\boldsymbol{\omega}})$ can be computed in closed-form (e.g. RBF kernels have zero-mean Gaussian spectral densities). The approximated log likelihood can be computed in $\bigo{J^3 + NJ^2}$ time and $\bigo{JN}$ space using the Woodbury inversion lemma and the matrix determinant lemma, respectively. The number of random features $J/2$ is a user choice, with typical values between 100 and 1000. More features lead to more accurate kernel approximations. \begin{figure*}[ht] \vskip 0.1in \begin{center} \includegraphics[scale=0.4]{figures/unbiasedness_cg.png} \includegraphics[scale=0.4]{figures/unbiasedness_rff.png} \caption{ CG (left) systematically overestimates $\ensuremath{\log \vert \trainK \vert}$ and underestimates $\ensuremath{\by^{\top} \trainK^{-1} \by}$ whereas RFF (right) does the opposite. The dashed orange line shows the exact values computed by Cholesky.
Our unbiased methods, RR-CG (left) and SS-RFF (right), recover the true $\ensuremath{\log \vert \trainK \vert}$ and $\ensuremath{\by^{\top} \trainK^{-1} \by}$ values. For these two methods, the $x$-axis indicates the expected number of iterations/features.} \label{fig:debias_cg_rff} \end{center} \vskip -0.2in \end{figure*} \subsection{Unbiased Randomized Truncation}\label{subsec:randomized_truncation_intro} We will now briefly introduce Randomized Truncation Estimators, which are the primary tool we use to unbias the CG and RFF log marginal likelihood estimates. At a high level, assume that we wish to estimate some quantity $\psi$ that can be expressed as a (potentially-infinite) series: $$ \textstyle \psi = \sum_{j=1}^H \Delta_j, \quad H \in \mathbb{N} \cup \{\infty\}. $$ Here and in the following sections, $\Delta_j$ can either be random or deterministic. To avoid the expensive evaluation of the full summation, a randomized truncation estimator chooses a random term $J \in \{1, \ldots, H\}$ with probability mass function $\mathbb{P}(J) = \mathbb{P}(\mathcal{J}=J)$ after which to truncate computation. In the following, we introduce two means of deriving unbiased estimators by upweighting the summation terms. \textbf{The Russian Roulette estimator} \citep{kahn1955use} obtains an unbiased estimator $\bar{\psi}_J$ by truncating the sum after $J \sim \mathbb{P}(J)$ terms and dividing the surviving terms by their survival probabilities: \begin{equation} \label{eqn:rr} \bar {\psi}_J = \sum_{j=1}^{J} \frac{\Delta_j}{\mathbb{P}(\mathcal{J} \geq j)} = \sum_{j=1}^{H} \left( \frac{\mathbb{I}_{J \geq j}}{\mathbb{P}(\mathcal{J} \geq j)} \right) \Delta_j, \end{equation} so that $ \mathbb{E}[ \bar{\psi}_J ] = \sum_{j=1}^{H} \Delta_j = \psi $. (See appendix for further derivation.) The choice of $\mathbb P (J)$ determines both the computational efficiency and the variance of $\bar {\psi}_J$.
A thin-tailed $\mathbb P(J)$ will often truncate sums after only a few terms ($J \ll H$). However, tail events $(J \approx H)$ are upweighted inversely to their low survival probability, and so thin-tailed truncation distributions may lead to high variance. \textbf{The Single Sample estimator} \citep{lyne2015russian} implements an alternative reweighting scheme. After drawing $J \sim \mathbb{P}(J)$, it computes a single summation term $\Delta_J$, which it upweights by $1/\mathbb{P}(J)$: \begin{equation} \bar{\psi}_J = \frac{\Delta_J}{\mathbb{P}(J)} = \sum_{j=1}^{H} \left( \frac{\mathbb{I}_{J = j}}{\mathbb{P}(\mathcal{J} = j)} \right) \Delta_j . \label{eqn:ss_estimator} \end{equation} This procedure is unbiased, and it amounts to estimating $\psi$ using a single importance-weighted sample from $\mathbb{P}(J)$ (see appendix). Again, $\mathbb{P}(J)$ controls the speed/variance trade-off. We refer the reader to \cite{beatson2019efficient} for a detailed comparison of these two estimators. We emphasize that both estimators remain unbiased even if $\Delta_j$ is a random variable, as long as it is independent of the random truncation integer $J$. \begin{figure}[ht] \vskip 0.1in \begin{center} \centerline{\includegraphics[scale=0.4]{figures/ls_recovery.png}} \caption{ Kernel lengthscale values learned by optimizing (biased) CG and RFF log marginal likelihood approximations. CG overestimates the optimal kernel lengthscale whereas RFF underestimates it. We plot the divergence (in log-ratio scale) between the learned and true lengthscales as a function of the number of CG iterations (left) and of the number of RFF samples (right).} \label{lengthscale-bias} \end{center} \vskip -0.2in \end{figure} \section{GP Learning with CG and RFF is Biased}\label{sec:bias} Here we prove that early-truncated CG and RFF provide biased approximations to the terms comprising the GP log marginal likelihood (Eq.~\ref{eqn:log_lik}). We also derive the bias decay rates for each method.
We then empirically demonstrate these biases and show they affect the hyperparameters learned through optimization. Remarkably, we find that the above biases are \emph{highly systematic}: CG-based GP learning favors underfitting hyperparameters while RFF-based learning favors overfitting hyperparameters. \subsection{CG Biases GP Towards Underfitting} \label{subsec:CG_bias} In the GP literature, CG has often been considered an ``exact'' method for computing the log marginal likelihood \cite{cunningham2008fast,cutajar2016preconditioning}, as the iterations are only truncated after reaching a pre-specified residual error threshold (e.g. $10^{-10}$). However, as CG is applied to ever-larger kernel matrices, it is common to truncate the CG iterations before convergence is reached \cite{wang2019exact}. While this accelerates the hyperparameter learning process, the resulting solves and gradients can no longer be considered ``exact.'' In what follows, we show that the early-truncated CG optimization objective is not only approximate but also systematically biased towards underfitting. To analyze the early-truncation bias, we adopt the analysis of \citet{gardner2018gpytorch}, which recovers the GP log marginal likelihood (Eq.~\ref{eqn:log_lik}) from the stochastic gradient's CG estimates of $\ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}}$ and $\ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{z}}$ (Eq.~\ref{eqn:log_lik_deriv_stochastic}). Recall that the two terms in the log marginal likelihood are the ``data fit'' term $\ensuremath{\mathbf{y}}^\top \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}}$ and the ``model complexity'' term $\ensuremath{\log \vert \trainK \vert}$.
The first term falls directly out of the CG estimate of $\ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}}$, while a stochastic estimate of $ \ensuremath{\log \vert \trainK \vert} $ can be obtained through the byproducts of CG's computation for $\ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{z}}$. \citet{gardner2018gpytorch} show that the CG coefficients in \cref{eqn:cg_series} can be manipulated to produce a partial tridiagonalization: $\ensuremath{\boldmatrix{T}}^{(J)}_{\ensuremath{\mathbf{z}}} = \ensuremath{\boldmatrix{Q}}^{(J)\top}_\ensuremath{\mathbf{z}} \ensuremath{\widehat{\bK}_{\bX\bX}} \ensuremath{\boldmatrix{Q}}^{(J)}_\ensuremath{\mathbf{z}},$ where $\ensuremath{\boldmatrix{T}}^{(J)}_\ensuremath{\mathbf{z}} \in \ensuremath{\mathbb{R}}^{J \times J}$ is tridiagonal. $\ensuremath{\boldmatrix{T}}^{(J)}_\ensuremath{\mathbf{z}}$ can then be used to compute the stochastic Lanczos quadrature estimate of $\ensuremath{\log \vert \trainK \vert}$ \citep{ubaru2017fast, dong2017scalable}: \begin{align}\label{eqn:logdet_estimate} \log |\ensuremath{\widehat{\bK}_{\bX\bX}} | &= \mathbb{E} \left[ \ensuremath{\mathbf{z}}^\top (\log \ensuremath{\widehat{\bK}_{\bX\bX}}) \ensuremath{\mathbf{z}} \right] \nonumber \\ &\approx \Vert \ensuremath{\mathbf{z}} \Vert^2 \ensuremath{\mathbf{e}}_1^\top \left( \log \ensuremath{\boldmatrix{T}}_\ensuremath{\mathbf{z}}^{(J)} \right) \ensuremath{\mathbf{e}}_1, \end{align} where $\log (\cdot )$ is the matrix logarithm and $\ensuremath{\mathbf{e}}_1$ is the first row of the identity matrix.
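A concrete sketch of \cref{eqn:logdet_estimate} follows (assuming \texttt{numpy}; the explicit Lanczos routine with full reorthogonalization below is an illustrative stand-in for the tridiagonalization recovered from CG's byproducts, and the matrix is a toy stand-in for $\ensuremath{\widehat{\bK}_{\bX\bX}}$). For a single probe $\ensuremath{\mathbf{z}}$ and $J = N$ steps, the quadrature term equals $\ensuremath{\mathbf{z}}^\top (\log \ensuremath{\widehat{\bK}_{\bX\bX}}) \ensuremath{\mathbf{z}}$ exactly, so averaging over probes estimates the log determinant:

```python
import numpy as np

def slq_probe(A, z, J):
    """One SLQ probe: ||z||^2 * e1^T log(T_z^(J)) e1, where T_z^(J) comes from
    J steps of Lanczos on A started at z (full reorthogonalization for stability)."""
    n = len(z)
    Q = np.zeros((n, J))
    T = np.zeros((J, J))
    q, q_prev, beta = z / np.linalg.norm(z), np.zeros(n), 0.0
    for j in range(J):
        Q[:, j] = q
        w = A @ q - beta * q_prev
        alpha = q @ w
        T[j, j] = alpha
        if j + 1 < J:
            w = w - alpha * q
            w = w - Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)   # reorthogonalize
            beta = np.linalg.norm(w)
            T[j, j + 1] = T[j + 1, j] = beta
            q_prev, q = q, w / beta
    evals, evecs = np.linalg.eigh(T)                  # T is symmetric tridiagonal
    quad = np.sum(np.log(evals) * evecs[0, :] ** 2)   # e1^T log(T) e1
    return (z @ z) * quad

rng = np.random.default_rng(1)
G = rng.standard_normal((5, 5))
A = G @ G.T + 5 * np.eye(5)                           # SPD toy "kernel" matrix

z = rng.choice([-1.0, 1.0], size=5)                   # Rademacher probe vector
lam, U = np.linalg.eigh(A)
exact_quad = z @ (U @ (np.log(lam) * (U.T @ z)))      # z^T log(A) z, computed exactly

est = slq_probe(A, z, J=5)                            # J = N: exact for this probe
```

Averaging `slq_probe` over many independent probes $\ensuremath{\mathbf{z}}$ then approximates $\mathbb{E}[\ensuremath{\mathbf{z}}^\top (\log \ensuremath{\widehat{\bK}_{\bX\bX}}) \ensuremath{\mathbf{z}}] = \log \vert \ensuremath{\widehat{\bK}_{\bX\bX}} \vert$.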
The following theorem analyzes the bias of these $\ensuremath{\mathbf{y}}^\top \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}}$ and $\ensuremath{\log \vert \trainK \vert}$ estimates: \begin{theorem}\label{CG_bias_theorem} Let $u_J$ and $v_J$ be the estimates of $\ensuremath{\mathbf{y}}^\top \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}}$ and $\ensuremath{\log \vert \trainK \vert}$ respectively after $J $ iterations of CG; i.e.: \begin{align*} u_J = \ensuremath{\mathbf{y}}^\top \left({\textstyle \sum_{i=1}^J } \gamma_i \ensuremath{\mathbf{d}}_i \right), \quad v_J = \Vert \ensuremath{\mathbf{z}} \Vert^2 \ensuremath{\mathbf{e}}_1^\top \left(\log \ensuremath{\boldmatrix{T}}_\ensuremath{\mathbf{z}}^{(J)} \right) \ensuremath{\mathbf{e}}_1. \end{align*} If $J<N$, CG underestimates the inverse quadratic term and overestimates the log determinant in expectation: \begin{equation} u_J \leq \ensuremath{\by^{\top} \trainK^{-1} \by}, \quad \mathbb{E}_{\ensuremath{\mathbf{z}}} [ v_J ] \geq \ensuremath{\log \vert \trainK \vert}. \end{equation} The biases of both terms decay at a rate of $\bigo{C^{-2J}}$, where $C$ is a constant that depends on the conditioning of $\ensuremath{\widehat{\bK}_{\bX\bX}}$. \end{theorem} \textit{Proof sketch.} The direction of the biases can be proved using a connection between CG and numeric quadrature. $u_J$ and $v_J$ are exactly equal to the $J$-point Gauss quadrature approximation of $\ensuremath{\mathbf{y}}^\top \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}}$ and $\Vert \ensuremath{\mathbf{z}} \Vert^2 \ensuremath{\mathbf{e}}_1^\top ( \log \ensuremath{\boldmatrix{T}}^{(N)}_\ensuremath{\mathbf{z}} ) \ensuremath{\mathbf{e}}_1$ represented as Riemann-Stieltjes integrals. 
The sign of the CG approximation bias follows from the standard Gauss quadrature error bound, which is negative for $\ensuremath{\mathbf{y}}^\top \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}}$ and positive for $\ensuremath{\log \vert \trainK \vert}$. The convergence rates follow from standard bounds on CG \cite{golub2012matrix} and the analysis of \citet{ubaru2017fast}. See appendix for a full proof. \cref{fig:debias_cg_rff} confirms our theoretical analysis and demonstrates the systematic biases of CG. We plot the log marginal likelihood terms for a subset of the PoleTele UCI dataset, varying the number of CG iterations ($J$) used to produce the estimates. Compared against the exact terms (computed with Cholesky), we see an overestimation of $\ensuremath{\log \vert \trainK \vert}$ and an underestimation of $\ensuremath{\by^{\top} \trainK^{-1} \by}$. These biases are most prominent when using few CG iterations. We now turn to the effect of CG-based learning on the hyperparameters. Since the log marginal likelihood is a nonconvex function of $\ensuremath{\boldsymbol{\theta}}$, it is not possible to directly prove how the bias affects $\ensuremath{\boldsymbol{\theta}}$. Nevertheless, we know intuitively that underestimating $\ensuremath{\by^{\top} \trainK^{-1} \by}$ de-prioritizes model fit while overestimating $\ensuremath{\log \vert \trainK \vert}$ over-penalizes model complexity. Thus, the learned hyperparameters will likely underfit the data. Underfitting may manifest in an overestimation of the learned lengthscale $\ell$, as low values of $\ell$ increase the flexibility and the complexity of the model. This hypothesis is empirically confirmed in \cref{lengthscale-bias} (left panel). We train a GP regression model on a toy dataset: $y = x \sin(5 \pi x) + \varepsilon$ and $\varepsilon \sim \mathcal{N}(0, 0.01)$. We fix all hyperparameters other than the lengthscale, which is learned using both CG-based optimization and (exact) Cholesky-based optimization.
The overestimation of $\ell$ decays with the number of CG iterations. \subsection{RFF Biases GP Towards Overfitting} \label{subsec:RFF_bias} Previous work has studied the accuracy of RFF's approximation to the entries of the Gram matrix $k(\ensuremath{\mathbf{x}}, \ensuremath{\mathbf{x}}') \approx \ensuremath{\boldsymbol{\phi}}(\ensuremath{\mathbf{x}})^\top \ensuremath{\boldsymbol{\phi}}(\ensuremath{\mathbf{x}}')$ \citep{rahimi2008random,sutherland2015error}. However, to the best of our knowledge there has been little analysis of nonlinear functions of this approximate Gram matrix, such as $\ensuremath{\by^{\top} \trainK^{-1} \by}$ and $\ensuremath{\log \vert \trainK \vert}$ appearing in the GP objective. Interestingly, we find that RFF systematically biases these terms: \begin{theorem}\label{RFF_bias_theorem} Let $\widetilde{\ensuremath{\boldmatrix{K}}}_{J}$ be the RFF approximation with $J/2$ random features. In expectation, $\widetilde{\ensuremath{\boldmatrix{K}}}_{J}$ overestimates the inverse quadratic and underestimates the log determinant: \begin{equation} {\textstyle \Evover{\mathbb{P}(\ensuremath{\boldsymbol{\omega}})}{\ensuremath{\mathbf{y}}^{\top} \widetilde{\ensuremath{\boldmatrix{K}}}_{J}^{-1} \ensuremath{\mathbf{y}}} } \geq \ensuremath{\by^{\top} \trainK^{-1} \by} \end{equation} \begin{equation} {\textstyle \Evover{\mathbb{P}(\ensuremath{\boldsymbol{\omega}})}{\log |\widetilde{\ensuremath{\boldmatrix{K}}}_{J}|} } \leq \ensuremath{\log \vert \trainK \vert}. \end{equation} The biases of both terms decay at a rate of $\bigo{1/J}$. \end{theorem} \textit{Proof sketch.} The direction of the biases is a straightforward application of Jensen's inequality, noting that $\ensuremath{\mathbf{y}}^\top \ensuremath{\boldmatrix{K}}^{-1} \ensuremath{\mathbf{y}}$ is a convex function of the positive definite matrix $\ensuremath{\boldmatrix{K}}$ while $\log \vert \ensuremath{\boldmatrix{K}} \vert$ is a concave one. The magnitude of the bias is derived from a second-order Taylor expansion that closely resembles the analysis of \citet{nowozin2018debiasing}.
See appendix for full proof. Again, \cref{fig:debias_cg_rff} confirms the systematic biases of RFF, which decay at a rate inversely proportional to the number of features, as predicted by \cref{RFF_bias_theorem}. Hence, RFF should affect the learned hyperparameters in a manner opposite to CG. Overestimating $\ensuremath{\by^{\top} \trainK^{-1} \by}$ emphasizes data fitting while underestimating $\ensuremath{\log \vert \trainK \vert}$ reduces the model complexity penalty, overall resulting in overfitting behavior. Following the intuition presented in \cref{subsec:CG_bias}, we expect the lengthscale to be underestimated, as empirically confirmed by \cref{lengthscale-bias} (right panel). The figure also illustrates the slow decay of the RFF bias. \section{Bias-free Scalable Gaussian Processes} We debias the estimates of both the GP training objective in \cref{eqn:log_lik} and its gradient in \cref{eqn:log_lik_deriv} (as approximated by CG and RFF) using unbiased randomized truncation estimators. To see how such estimators apply to GP hyperparameter learning, we note that both CG and RFF recover the true log marginal likelihood (or an unbiased estimate thereof) in their limits: % \begin{obs} CG recovers the exact log marginal likelihood in expectation in at most $N$ iterations: \begin{align} \ensuremath{\mathbf{y}}^\top \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}} &= \ensuremath{\mathbf{y}}^\top \left( \textstyle {\sum_{j=1}^N } \gamma_j \ensuremath{\mathbf{d}}_j \right), \label{eqn:lim_cg_invquad} \\ \ensuremath{\log \vert \trainK \vert} &= {\textstyle \Evover{\ensuremath{\mathbf{z}}}{\Vert \ensuremath{\mathbf{z}} \Vert^2 \ensuremath{\mathbf{e}}_1^\top (\log \ensuremath{\boldmatrix{T}}^{(N)}_\ensuremath{\mathbf{z}}) \ensuremath{\mathbf{e}}_1} }.
\label{eqn:lim_cg_logdet} \end{align} % By the law of large numbers, RFF converges almost surely to the exact log marginal likelihood as the number of random features goes to infinity: \begin{align} \ensuremath{\mathbf{y}}^\top \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}} &= \lim_{J \to \infty} \ensuremath{\mathbf{y}}^\top \widetilde{\ensuremath{\boldmatrix{K}}}_J^{-1} \ensuremath{\mathbf{y}}, \label{eqn:lim_rff_invquad} \\ \ensuremath{\log \vert \trainK \vert} &= \lim_{J \to \infty} \log \vert \widetilde{\ensuremath{\boldmatrix{K}}}_J \vert. \label{eqn:lim_rff_logdet} \end{align} \end{obs} % To maintain the scalability of CG and RFFs while eliminating bias, we express the log marginal likelihood terms in \cref{eqn:lim_cg_invquad,eqn:lim_cg_logdet,eqn:lim_rff_invquad,eqn:lim_rff_logdet} as summations amenable to randomized truncation. We then apply the Russian Roulette and Single Sample estimators of \cref{subsec:randomized_truncation_intro} to avoid computing all summation terms while obtaining the same result in expectation. \subsection{Russian Roulette-Truncated CG (RR-CG)} \label{sec:debias_cg} The stochastic gradient in \cref{eqn:log_lik_deriv_stochastic} requires performing two solves: $\ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}}$ and $\ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{z}}$. Using the summation formulation of CG (Eq.~\ref{eqn:cg_series}), we can write these two solves as series: $$ \textstyle \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}} = \sum_{i=1}^N \gamma_i \ensuremath{\mathbf{d}}_i, \quad \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{z}} = \sum_{i=1}^N \gamma_i' \ensuremath{\mathbf{d}}_i', $$ where each CG iteration computes a new term of the summation. 
By applying the Russian Roulette estimator from \cref{eqn:rr}, we obtain the following unbiased estimates: \begin{equation}\label{eqn:rrcg-linear-solve} \textstyle \begin{split} \textstyle \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}} &\approx {\textstyle \sum_{j=1}^{J} } (\gamma_j \ensuremath{\mathbf{d}}_j) / \mathbb{P}(\mathcal J \geq j), \quad J \sim \mathbb{P}(J) \\ \textstyle \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{z}} &\approx {\textstyle \sum_{j=1}^{J'} } (\gamma_j' \ensuremath{\mathbf{d}}_j') / \mathbb{P}(\mathcal J \geq j), \quad J' \sim \mathbb{P}(J'). \end{split} \end{equation} These unbiased solves produce an unbiased optimization gradient in \cref{eqn:log_lik_deriv_stochastic}; we refer to this approach as Russian Roulette CG ({\bf RR-CG}). With the appropriate truncation distribution $\mathbb{P}(J)$, this estimate affords the same computational complexity as standard CG \emph{without} its bias. We must compute two independent estimates of $\ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}}$ with different $J \sim \mathbb{P}(J)$ in order for the $\ensuremath{\mathbf{y}}^\top \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \frac{ \partial \ensuremath{\widehat{\bK}_{\bX\bX}} }{ \partial \ensuremath{\boldsymbol{\theta}} } \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}}$ term in \cref{eqn:log_lik_deriv_stochastic} to be unbiased. Thus, the unbiased gradient requires 3 calls to RR-CG, as opposed to the 2 CG calls needed for the biased gradient. Nevertheless, RR-CG has the same $\bigo{JN^2}$ complexity as standard CG -- and the additional solve can be computed in parallel with the others. We can also use the Russian Roulette estimator to compute the log marginal likelihood itself, though this is not strictly necessary for gradient-based optimization. (See appendix.)
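The reweighting in \cref{eqn:rr} can be checked numerically in a self-contained way (assuming \texttt{numpy}; the geometrically decaying terms and the decay rate below are illustrative stand-ins for the CG terms $\gamma_j \ensuremath{\mathbf{d}}_j$). Enumerating the truncation distribution, rather than sampling from it, verifies that the Russian Roulette estimate matches the full sum in expectation:

```python
import numpy as np

def rr_estimate(deltas, pmf, J):
    """Russian Roulette estimate of sum(deltas): keep the first J terms,
    reweighting term j by its survival probability P(J >= j)."""
    survival = np.cumsum(pmf[::-1])[::-1]      # survival[j-1] = P(J >= j)
    return sum(deltas[j] / survival[j] for j in range(J))

H = 8
deltas = 0.5 ** np.arange(1, H + 1)            # stand-in for the series terms
pmf = np.exp(-0.7 * np.arange(1, H + 1))       # an exponential truncation distribution
pmf = pmf / pmf.sum()

psi = deltas.sum()                             # the full, untruncated sum
# Exact expectation over the truncation variable J (enumerated, not sampled):
expectation = sum(pmf[J - 1] * rr_estimate(deltas, pmf, J) for J in range(1, H + 1))
```

In practice $J$ is sampled once per solve, so most draws stop early while the reweighting preserves the expectation.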
\paragraph{Choosing the truncation distribution.} Since the Russian Roulette estimator is unbiased for any choice of $\mathbb{P}\left(J\right)$, we wish to choose a truncation distribution that balances computational cost and variance\footnote{Solely minimizing variance is not appealing, as this is achieved by $\mathbb{P}\left(J\right) = \mathbb{I}_{J=N}$ which has no computational savings.}. \citet{beatson2019efficient} propose the \emph{relative optimization efficiency} (ROE) metric, which is the ratio of the expected improvement of taking an optimization step with our gradient estimate to its computational cost. A critical input to the ROE analysis is the expected rate of decay of our approximations in terms of the number of CG iterations $J$. We summarize our estimates and choices of distribution as follows: \begin{theorem}\label{th:optimalrrcg} The approximation to $\ensuremath{\log \vert \trainK \vert}$ and to $\ensuremath{\by^{\top} \trainK^{-1} \by}$ using RR-CG decays at a rate of $\bigo{C^{-2J}}$. Therefore the truncation distribution that maximizes the ROE is $\mathbb{P}^*(J) \propto C^{-2J}$, where $C$ is a constant that depends on the conditioning of $\ensuremath{\widehat{\bK}_{\bX\bX}}$. The expected computation and variance of $\mathbb{P}^*\left(J\right)$ are finite. \end{theorem} \textit{Proof sketch.} \citet{beatson2019efficient} show that the truncation distribution that maximizes the ROE is proportional to the rate of decay of our approximation divided by its computational cost. The error of CG decays as $\bigo{C^{-2J}}$, and the cost of each summation term is constant with respect to $J$. In practice, we vary the exponential decay rate to control the expectation and variance of $\mathbb P(J)$. To further reduce the variance of our estimator, we set a minimum number of CG iterations to be computed, as in \cite{luo2020sumo}. See appendix for full proof.
In practice, however, we do not have access to $C$, since computing the conditioning of $\ensuremath{\widehat{\bK}_{\bX\bX}}$ is impractical. A sensible alternative is to change the base of the exponential to $e$ and introduce a temperature parameter $\lambda$ that controls the rate of decay of the truncation distribution. Thus, we use the more general exponential decay distribution: \begin{align}\label{eqn:exp-decay-dist} \mathbb{P} (J) & \propto e^{-\lambda J}, \qquad J=J_{\min}, \cdots, N \end{align} where $J_{\min}$ is the minimum truncation number. By varying the values of $\lambda$ and $J_{\min}$, we can control the expectation and standard deviation of $\mathbb{P}(J)$. In practice, we found that a standard deviation between 10 and 20 yields a stable GP learning process; this can be obtained by tuning $\lambda$ between $0.05$ and $0.1$ for sufficiently large datasets (e.g. $N \geq 500$). We also observed that the method is not sensitive to these hyperparameter choices, which work well across all our experiments. The expected truncation number can be further tuned by varying $J_{\min}$. We emphasize that these choices impact the speed-variance tradeoff. Setting a larger $J_{\min}$ decreases speed, as it requires more baseline computation, but also decreases variance (since the earliest truncations deviate most from the ground truth). \paragraph{Toy problem.} In \cref{fig:debias_cg_rff} we plot the empirical mean of the RR-CG estimator using $10^4$ samples from an exponential truncation distribution. We find that RR-CG produces unbiased estimates of the $\ensuremath{\by^{\top} \trainK^{-1} \by}$ and $\ensuremath{\log \vert \trainK \vert}$ terms that are indistinguishable from the exact values computed with Cholesky. Reducing the expected truncation iteration $\mathbb{E}(J)$ ($x$-axis) increases the standard error of the empirical means, demonstrating the speed-variance trade-off.
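The exponential truncation distribution of \cref{eqn:exp-decay-dist} and its moments can be sketched as follows (assuming \texttt{numpy}; the particular values of $\lambda$, $J_{\min}$, and $N$ are illustrative). With $\lambda = 0.05$ the standard deviation lands near $20$, consistent with the range discussed above:

```python
import numpy as np

def truncation_pmf(lam, j_min, n):
    """P(J) proportional to exp(-lam * J) on the support {j_min, ..., n}."""
    js = np.arange(j_min, n + 1)
    w = np.exp(-lam * (js - j_min))   # shifting by j_min leaves the pmf unchanged
    return js, w / w.sum()

def pmf_moments(js, pmf):
    mean = float(np.sum(js * pmf))
    std = float(np.sqrt(np.sum((js - mean) ** 2 * pmf)))
    return mean, std

js, pmf = truncation_pmf(lam=0.05, j_min=10, n=2000)
mean_J, std_J = pmf_moments(js, pmf)

# Drawing the truncation point for one RR-CG solve:
rng = np.random.default_rng(0)
J = int(rng.choice(js, p=pmf))
```

Increasing $\lambda$ shrinks both $\mathbb{E}(J)$ and the standard deviation, while increasing $J_{\min}$ shifts the whole distribution to the right.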
\subsection{Single Sample-Truncated RFF (SS-RFF)} Denoting $\widetilde \ensuremath{\boldmatrix{K}}_j$ as the kernel matrix estimated by $j$ random Fourier features, we can write $\ensuremath{\log \vert \trainK \vert}$ as the following telescoping series: \begin{align} \ensuremath{\log \vert \trainK \vert} &= { \log \vert \widetilde \ensuremath{\boldmatrix{K}}_{1} \vert + \sum_{j=2}^{N/2 - 1} \left( \log \vert \widetilde \ensuremath{\boldmatrix{K}}_j \vert - \log \vert \widetilde \ensuremath{\boldmatrix{K}}_{j-1} \vert \right) } \\ &+ \ensuremath{\log \vert \trainK \vert} - \log \vert \widetilde \ensuremath{\boldmatrix{K}}_{N/2-1} \vert \nonumber \\ &= {\textstyle \log \vert \widetilde \ensuremath{\boldmatrix{K}}_1 \vert + \sum_{j=2}^{N/2} \Delta_j, } \label{eqn:rff_logdet_series} \end{align} where $\Delta_j$ is defined as $\log \vert \widetilde \ensuremath{\boldmatrix{K}}_j \vert - \log \vert \widetilde \ensuremath{\boldmatrix{K}}_{j-1} \vert$ for all $j < N/2$, and $\Delta_{N/2}$ is defined as $\log \vert \ensuremath{\widehat{\bK}_{\bX\bX}} \vert - \log \vert \widetilde \ensuremath{\boldmatrix{K}}_{N/2 - 1} \vert$. Note that each $\Delta_j$ is now a random variable, since it depends on the \textit{random} Fourier frequencies $\omega$. Crucially, we only include $N/2$ terms in the series so that no term requires more than $\bigo{N^3}$ computation in expectation. (For any $j > N/2$, $\widetilde \ensuremath{\boldmatrix{K}}_{j}$ is a full-rank matrix and thus is as computationally expensive as the true $\ensuremath{\widehat{\bK}_{\bX\bX}}$ matrix.) We construct a similar telescoping series for $\ensuremath{\by^{\top} \trainK^{-1} \by}$. 
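The Single Sample reweighting of \cref{eqn:ss_estimator} applied to a telescoping series of this form can be checked numerically (assuming \texttt{numpy}; the partial estimates below are illustrative stand-ins for the $\log \vert \widetilde{\ensuremath{\boldmatrix{K}}}_j \vert$ values, with the final entry playing the role of the exact $\ensuremath{\log \vert \trainK \vert}$). Enumerating the truncation variable shows the estimator is unbiased:

```python
import numpy as np

def ss_estimate(partials, pmf, J):
    """Single Sample estimate of partials[-1]: the telescoping series
    partials[0] + sum_j (partials[j] - partials[j-1]) is truncated at a
    single random index J, whose term is reweighted by 1 / P(J)."""
    delta_J = partials[J] - partials[J - 1]
    return partials[0] + delta_J / pmf[J - 1]

H = 10                                          # number of telescoping terms
partials = 3.0 - 2.0 / np.arange(1, H + 2)      # partials[0..H]; index H holds the exact value
psi = partials[H]                               # the quantity being estimated

pmf = 1.0 / np.arange(1, H + 1)                 # P(J) proportional to 1/J on {1, ..., H}
pmf = pmf / pmf.sum()

# Exact expectation over J (enumerated): the estimator is unbiased.
expectation = sum(pmf[J - 1] * ss_estimate(partials, pmf, J) for J in range(1, H + 1))
```

Unlike the Russian Roulette scheme, only the two partial estimates adjacent to the sampled index are ever computed, which is what keeps the cost comparable to standard RFF.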
As with \cref{eqn:rrcg-linear-solve}, we approximate the series in \cref{eqn:rff_logdet_series} with a randomized truncation estimator, this time using the Single Sample estimator \eqref{eqn:ss_estimator}: \begin{align} \ensuremath{\log \vert \trainK \vert} \approx \log \vert \widetilde \ensuremath{\boldmatrix{K}}_1 \vert + \Delta_{J} / \mathbb{P}\left(J\right), \label{eqn:ssrff_est} \end{align} where $J$ is drawn from the truncation distribution $\mathbb{P}(J)$ with support over $\{ 2, 3, \ldots, N/2 \}$. Note that the Single Sample estimate requires computing 3 log determinants ($\log \vert \widetilde \ensuremath{\boldmatrix{K}}_1 \vert$, $\log \vert \widetilde \ensuremath{\boldmatrix{K}}_{J-1} \vert$, and $\log \vert \widetilde \ensuremath{\boldmatrix{K}}_{J} \vert$) for a total of $\bigo{J^3 + NJ^2}$ computations and $\bigo{NJ}$ memory. This is asymptotically the same requirement as standard RFF. The Russian Roulette estimator, on the other hand, incurs a computational cost of $\bigo{NJ^3 + J^4}$, as it requires computing $\log \vert \widetilde \ensuremath{\boldmatrix{K}}_1 \vert$ through $\log \vert \widetilde \ensuremath{\boldmatrix{K}}_{J} \vert$, which quickly becomes impractical for large $J$. A similar Single Sample estimator constructs an unbiased estimate of $\ensuremath{\by^{\top} \trainK^{-1} \by}$. Backpropagating through these Single Sample estimates produces unbiased estimates of the log marginal likelihood gradient in \cref{eqn:log_lik_deriv}. \paragraph{Choosing the truncation distribution.} For the Single Sample estimator we do not have to optimize the ROE, since minimizing the variance of this estimator does not result in a degenerate distribution. \begin{theorem}\label{th:optimalssrff} The truncation distribution that minimizes the variance of the SS-RFF estimators for $\ensuremath{\log \vert \trainK \vert}$ and $\ensuremath{\by^{\top} \trainK^{-1} \by}$ is $\mathbb{P}^{*}\left(J\right) \propto 1/J$.
The expected variance and expected computation under $\mathbb{P}^{*}(J)$ are finite. \end{theorem} \textit{Proof sketch.} The minimum-variance distribution can be found by solving a constrained optimization problem. In practice, we can further decrease the variance of our estimator by fixing a minimum number of RFF features to be used in \cref{eqn:rff_logdet_series} and by increasing the step size $c \in \mathbb{N}$ between consecutive elements, so that $\Delta_{j} = \log|\widetilde{\ensuremath{\boldmatrix{K}}}_{cj}| - \log|\widetilde{\ensuremath{\boldmatrix{K}}}_{c(j-1)}|$. See appendix for the full proof. For the experiments we started with 500 features and tried various step sizes $c \in \{1, 10, 100 \}$. The variance of the estimator decreases as we increase $c$, since the probability weights decrease in magnitude. Yet, despite using the optimal truncation distribution, starting with a large number of features, and taking long steps ($c=100$), the variance of the estimator still forces optimization to take many steps before converging, making SS-RFF computationally impractical. \paragraph{Toy problem.} As with RR-CG, in \cref{fig:debias_cg_rff} we plot the empirical mean of the SS-RFF estimator using $10^4$ samples. We find that SS-RFF produces unbiased estimates of the $\ensuremath{\by^{\top} \trainK^{-1} \by}$ and $\ensuremath{\log \vert \trainK \vert}$ terms. However, these estimates have higher variance than those of RR-CG. Reducing the expected truncation iteration $\mathbb{E}(J)$ (x-axis) increases the standard error of the empirical means, demonstrating the speed-variance trade-off. \subsection{Analysis of the Bias-free Methods} Randomized truncations and conjugate gradients have existed for many decades \citep{hestenes1952methods,kahn1955use}, but have rarely been used in conjunction.
\citet{filippone2015enabling} proposed a method closely related to our RR-CG, which performs randomized early truncation of CG iterations to obtain unbiased posterior samples of the GP covariance parameters. We differ by tackling the GP hyperparameter learning problem: we provide the first theoretical analysis of the biases incurred by both CG and RFF, and proceed to tailor unbiased estimators for each method. To some extent, randomized truncation methods are antithetical to the original intention of CG: producing deterministic and nearly exact solves. For large-scale applications, where early truncation is necessary for computational tractability, the ability to trade bias for variance is beneficial. This is especially true in the context of GP learning, where the bias of early truncation is systematic and cannot simply be explained away as numerical imprecision. Randomized truncation estimators are often used to estimate infinite series, where it is challenging to design truncation distributions with finite expected computation and/or variance. We avoid such issues since CG and the telescoping RFF summations are both finite. \section{Results}\label{sec:results} First, we show that our bias-free methods recover nearly the same hyperparameters as exact methods (i.e. Cholesky-based optimization), whereas models that use CG and RFF converge to suboptimal hyperparameters. Since RR-CG and SS-RFF eliminate bias at the cost of increased variance, we then examine the optimization convergence rate and draw conclusions on our methods' applicability. Finally, we compare models optimized with RR-CG against a host of approximate GP methods across a wide range of UCI datasets \citep{asuncion2007uci}.
All experiments are implemented in GPyTorch \citep{gardner2018gpytorch}. \begin{figure}[!t] \vskip 0.1in \begin{center} \includegraphics[scale=0.25]{figures/loss_landscapes_1.png} \\ \includegraphics[scale=0.25]{figures/loss_landscapes_2.png} \caption{ Optimization landscape of a GP with two hyperparameters. The SS-RFF and RR-CG models converge to similar hyperparameter values that are nearly optimal, while the RFF and CG models converge to suboptimal solutions. In addition, the stochastic effect of the randomized truncation is visible in the trajectories of RR-CG and SS-RFF. CG (and RR-CG) models truncate after $20$ iterations (in expectation); RFF (and SS-RFF) models use $700$ features (in expectation). } \label{fig:loss_landscapes} \end{center} \vskip -0.2in \end{figure} \paragraph{Optimization trajectories of bias-free GP.} \label{res:loss-landscape-optim} \cref{fig:loss_landscapes} displays the optimization landscape -- the log marginal likelihood of the PoleTele dataset -- as a function of the (RBF) kernel lengthscale $\ell$ and noise $\sigma^2$. As expected, an exact GP (optimized using Cholesky, \cref{fig:loss_landscapes} upper left) recovers the optimum. Notably, the GPs trained with standard CG and RFF converge to suboptimal hyperparameters (upper right/lower right). RR-CG and SS-RFF models (trained with 20 iterations and 700 features in expectation, respectively) successfully eliminate this bias, and recover nearly the same parameters as the exact model (upper center/lower left). These plots also show the speed-variance trade-off of randomized truncation. SS-RFF and RR-CG have noisy optimization trajectories due to auxiliary truncation variance. \paragraph{Convergence of GP hyperparameter optimization.} \label{res:loss_in_training} \cref{fig:loss_evolution_cg,fig:loss_evolution_rff} plot the exact GP log marginal likelihood of the parameters learned by each method during optimization.
Each trajectory corresponds to an RBF-kernel GP trained on the PoleTele dataset (\cref{fig:loss_evolution_rff}) and the Bike dataset (\cref{fig:loss_evolution_cg}). \begin{figure}[t!] \vskip 0.1in \begin{center} \includegraphics[scale=0.4]{figures/exact_evolution_rff2.png} \caption{ The GP optimization objective for models trained with RFF and SS-RFF. (PoleTele dataset, RBF kernel, Adam optimizer.) RFF models converge to sub-optimal log marginal likelihoods. SS-RFF models converge to (near) optimum values, yet require more than $100\times$ as many optimization steps. } \label{fig:loss_evolution_rff} \end{center} \vskip -0.2in \end{figure} \begin{figure}[t!] \vskip 0.1in \begin{center} \includegraphics[scale=0.4]{figures/exact_evolution_cg1.png} \caption{ The GP optimization objective for models trained with CG and RR-CG. (Bike dataset, RBF kernel, Adam optimizer.) RR-CG models converge to optimal solutions, while the (biased) CG models diverge. Increasing the expected truncation of RR-CG only slightly improves optimization convergence; models converge in $<100$ steps of Adam. } \label{fig:loss_evolution_cg} \end{center} \vskip -0.2in \end{figure} \begin{figure*}[ht] \vskip 0.1in \begin{center} \includegraphics[scale=0.45]{figures/RMSE_comprison.png} \includegraphics[scale=0.45]{figures/NLL_comprison.png} \caption{ Root-mean-square error (RMSE) and negative log likelihood (NLL) of GPs trained with CG (light purple), RR-CG (dark purple), and various approximate methods (grey). Dashed red lines indicate Cholesky-based GP performance (when applicable). Results are averaged over 3 dataset splits. Missing RFF and sgGP results correspond to (very high) outlier NLL / RMSE values.
In almost all experiments, GP learning with RR-CG achieves similar or better performance compared to that with CG at the same computational cost.} \label{fig:nll_rmse} \end{center} \vskip -0.2in \end{figure*} \cref{fig:loss_evolution_rff} shows that RFF models converge to solutions with worse log likelihoods, and that more RFF features slow the rate of optimization. Additionally, we see the cost of the auxiliary variance needed to debias SS-RFF: while SS-RFF models achieve better optima than their biased counterparts, they take 2--3 orders of magnitude longer to converge, despite using a truncation distribution that minimizes variance. We thus conclude that SS-RFF has too much variance to be practical for GP hyperparameter learning. \cref{fig:loss_evolution_cg}, on the other hand, shows that RR-CG is minimally affected by its auxiliary variance. The GP trained with RR-CG converges in roughly $100$ iterations, nearly matching Cholesky-based optimization. Decreasing the expected truncation value from $\mathbb{E}[J] = 40$ to $20$ slightly slows this convergence. We note that the bias induced by standard CG can be especially detrimental to GP learning. On this dataset, the biased models deviate from their Cholesky counterparts and eventually diverge away from the optimum. \paragraph{Predictive performance of bias-free GP.} Lastly, we compare the predictive performance of GPs that use RR-CG, CG, and Cholesky for hyperparameter optimization. We emphasize that the RR-CG and CG methods only make use of early-truncation approximations during training. At test time, we compute the predictive posterior by running CG to a tolerance of $\leq 10^{-4}$, which we believe can be considered ``exact'' for all practical purposes.
Additionally, we include four other (biased) scalable GP approximation methods as baselines: {\bf RFF}, Stochastic Variational Gaussian Processes ({\bf SVGP}) \citep{hensman2013gaussian}, generalized Product of Experts Gaussian Processes ({\bf POE}) \citep{cao2014generalized,deisenroth2015distributed}, and stochastic gradient-based Gaussian Processes ({\bf sgGP}) \citep{chen2020stochastic}. We note that the RFF, SVGP, and sgGP methods introduce \emph{both} bias and variance, as these methods rely on randomization and approximation. We use CG with $J=100$ iterations, and RR-CG with $\mathbb{E}[J]=100$ expected iterations; both methods use the preconditioner of \citet{gardner2018gpytorch}. All RFF models use $700$ random features. For SVGP, we use $1{,}024$ inducing points and minibatches of size $1{,}024$, as in \citet{wang2019exact}. The POE models consist of GP experts that are each trained on $1{,}024$-data-point subsets. For sgGP, the subsampled datasets are constructed by selecting a random point $(\ensuremath{\mathbf{x}}, y)$ and its 15 nearest neighbors, as in \citet{chen2020stochastic}. Each dataset is randomly split into 64\% training, 16\% validation, and 20\% testing sets. All kernels are RBF with a separate lengthscale per dimension. See appendix for more details. We report prediction accuracy (RMSE) and negative log likelihood (NLL) in \cref{fig:nll_rmse} (see appendix for full tables on predictive performance and training time). We make two key observations: \emph{(i)} RR-CG meaningfully debiases CG. When the bias of CG is not detrimental to optimization (e.g. CG with 100 iterations is close to convergence on the Elevators dataset), RR-CG has similar performance. However, when the CG bias is more significant (e.g. on the KEGG dataset), the bias-free RR-CG improves the GP predictive RMSE and NLL. We also include a figure displaying the predictive performance of RR-CG and CG with an increasing number of (expected) CG iterations in the appendix.
\emph{(ii)} RR-CG recovers the same optimum as the ``ground-truth'' method (i.e. Cholesky), as indicated by the red dashed line in \cref{fig:nll_rmse}. This result provides additional evidence that RR-CG achieves unbiased optimization. While RR-CG obtains the lowest RMSE on all but two datasets, we note that the (biased) GP approximations sometimes achieve lower NLL. For example, SVGP has a lower NLL than RR-CG on the Bike dataset, despite having a higher RMSE. We emphasize that this is not a failing of RR-CG inference. The SVGP NLL is even better than that of the exact (Cholesky) GP, suggesting a potential model misspecification for this particular dataset. Since SVGP overestimates the observational noise $\sigma^2$ \citep{bauer2017understanding}, it may obtain a better NLL when outliers are abundant. Though we cannot compare against the Cholesky posterior on larger datasets, we hypothesize that the NLL/RMSE discrepancy on these datasets is due to a similar modeling issue. \section{Conclusion} We prove that CG and RFF introduce systematic biases into the GP log marginal likelihood objective: CG-based training will favor underfitting models, while RFF-based training will promote overfitting. Modifying these methods with randomized truncation converts these biases into variance, enabling unbiased stochastic optimization. Our results show that this bias-to-variance exchange indeed constitutes a trade-off. The convergence of SS-RFF is impractically slow, likely due to the truncation variance needed to eliminate RFF's slowly-decaying bias. However, for CG-based training, we find that variance is almost always preferable to bias. Models trained with RR-CG achieve better performance than those trained with standard CG, and tend to recover the hyperparameters learned with exact methods. Though models trained with CG do not always exhibit noticeable bias, RR-CG's negligible computational overhead is a small price to pay to guard against the cases where the bias is significant.
We reported experiments with at most 300K observations for our methods and baselines, which is substantial for GPs. We emphasize that RR-CG can be extended to datasets with over one million data points as in \citet{wang2019exact}. However, the computational cost is much higher, requiring multiple GPUs for training and testing. We note that the RR-CG algorithm is not limited to GP applications. Future work should explore applying RR-CG to other optimization problems with large-scale solves. \section*{Acknowledgements} This work was supported by the Simons Foundation, McKnight Foundation, the Grossman Center, and the Gatsby Charitable Trust. \subsection{Preliminaries} Define $\Delta_i = Y_{i} - Y_{i-1}$. The randomized truncation can be applied either to the full log marginal likelihood objective, $Y_i \propto \log \vert \hat{K}_{i}\vert + \ensuremath{\mathbf{y}}^{\top}\hat{K}_{i}^{-1}\ensuremath{\mathbf{y}}$, or to each of the terms that comprise it: $Y_i := \log \vert \hat{K}_{i}\vert$ or $Y_i := \ensuremath{\mathbf{y}}^{\top}\hat{K}_{i}^{-1}\ensuremath{\mathbf{y}}$. \textbf{RFF biases the log-likelihood terms}. Although $\ensuremath{\widetilde{\bK}}_M$ is an unbiased estimator of $\ensuremath{\widehat{\bK}_{\bX\bX}}$, we are left with biased estimates for the two terms comprising our learning objective (\cref{eqn:log_lik}): % \begin{align*} \Evover{\mathbb{P}(\omega)}{\log \vert \ensuremath{\widetilde{\bK}}_M \vert} &\ne \log \vert \ensuremath{\widehat{\bK}_{\bX\bX}} \vert, \\ \Evover{\mathbb{P}(\omega)}{\ensuremath{\mathbf{y}}^\top \ensuremath{\widetilde{\bK}}_M^{-1} \ensuremath{\mathbf{y}}} &\ne \ensuremath{\mathbf{y}}^\top \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1} \ensuremath{\mathbf{y}}.
\end{align*} % For the inverse quadratic, this follows from Jensen's inequality, since the matrix inverse is operator convex: % \begin{align*} \Evover{\mathbb{P}(\omega)}{\ensuremath{\widetilde{\bK}}_M^{-1}} & \succeq \Evover{\mathbb{P}(\omega)}{\ensuremath{\widetilde{\bK}}_M}^{-1} = \ensuremath{\widehat{\bK}_{\bX\bX}}^{-1}. \end{align*} % These biases have clear implications for the hyperparameters. Underestimating $\ensuremath{\log \vert \trainK \vert}$ implies that, given a fixed set of parameters $\ensuremath{\boldsymbol{\theta}}$, a model employing RFF pretends to be simpler than it is; equivalently, the complexity penalty is lessened. After training in this manner, our model should be more complex than a GP trained with exact inference, and will have a larger $\ensuremath{\log \vert \trainK \vert}$. This could manifest as smaller values of the lengthscale parameter $l^2$: for $l^2 \rightarrow 0$, we obtain a diagonal kernel matrix, i.e., $[\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{X}}\bX}]_{ij}=o^2 \; \text{when}\; i = j$ and zero otherwise. Another way to achieve a high $\ensuremath{\log \vert \trainK \vert}$ is by setting a higher observation noise $\sigma^2$, resulting in a kernel matrix with a large diagonal component. The function $k(\ensuremath{\mathbf{x}}, \ensuremath{\mathbf{x}}')$ is the \emph{kernel}, defining the covariance between two training examples; we collect all pairwise kernel evaluations in an $n \times n$ matrix $\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{X}}\bX}$, i.e., $[K_{XX}]_{ij} = k(\ensuremath{\mathbf{x}}_i, \ensuremath{\mathbf{x}}_j)$. $\ensuremath{\widehat{\bK}_{\bX\bX}}$ is the kernel matrix with additional homoskedastic Gaussian observation noise, $\ensuremath{\widehat{\bK}_{\bX\bX}}= K_{XX} + \sigma^2 I$. Typically $\mu(\ensuremath{\mathbf{x}})$ is set constant, and without loss of generality we will assume $\mu(\ensuremath{\mathbf{x}})=0$.
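The Jensen-gap directions discussed above can be checked numerically. The sketch below uses a hypothetical fixed matrix and a generic unbiased estimator of it (not the paper's RFF construction): averaging unbiased estimates $\widetilde{\ensuremath{\boldmatrix{K}}}$ systematically underestimates the log determinant and overestimates the inverse quadratic:

```python
import numpy as np

# Toy check of the Jensen gaps: E[K~] = K, yet E[log|K~|] < log|K| (log det is
# concave) and E[y' K~^{-1} y] > y' K^{-1} y (the inverse quadratic is convex).
rng = np.random.default_rng(0)
n, m, sigma2 = 20, 20, 0.5
A = rng.normal(size=(n, n))
K = A @ A.T / n                               # a fixed "true" kernel matrix
L = np.linalg.cholesky(K + 1e-9 * np.eye(n))  # for sampling w ~ N(0, K)
y = rng.normal(size=n)

logdets, quads = [], []
for _ in range(500):
    Wm = L @ rng.normal(size=(n, m))          # columns w_i ~ N(0, K)
    K_tilde = Wm @ Wm.T / m                   # unbiased: E[K_tilde] = K
    Khat = K_tilde + sigma2 * np.eye(n)
    logdets.append(np.linalg.slogdet(Khat)[1])
    quads.append(y @ np.linalg.solve(Khat, y))

true_logdet = np.linalg.slogdet(K + sigma2 * np.eye(n))[1]
true_quad = y @ np.linalg.solve(K + sigma2 * np.eye(n), y)
assert np.mean(logdets) < true_logdet   # log det is underestimated
assert np.mean(quads) > true_quad       # inverse quadratic is overestimated
```

The bias directions match the text: a lessened complexity penalty (too-small log determinant) and an inflated data-fit term.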
The kernel function $k(\ensuremath{\mathbf{x}}, \ensuremath{\mathbf{x}}')$ is often chosen to be the squared exponential (a.k.a.\ RBF) kernel, $k(\ensuremath{\mathbf{x}}, \ensuremath{\mathbf{x}}') = o^2 \exp\left( -\frac{\Vert \ensuremath{\mathbf{x}} - \ensuremath{\mathbf{x}}' \Vert^2}{2 l^2} \right)$, or a Matérn kernel. This regression model has three \emph{hyperparameters}, $\ensuremath{\boldsymbol{\theta}} = \{o^2, l^2, \sigma^2\}$: the outputscale (controlling the magnitude of the GP signal), the lengthscale (controlling the decay of the correlation between neighboring datapoints), and the observation noise (accounting for unexplained variance).\\ Modern GPs are trained using stochastic optimization \citep{gardner2018gpytorch}, often in conjunction with neural network models, on GPUs. Moreover, GP training is now routinely accelerated not only with CG (\cref{sec:CG_background}) and RFF (\cref{RFF_background}) but also with inducing point methods and sparse variational methods \citep{cutajar2016preconditioning, bauer2017understanding}. The user has to specify a fixed number of CG iterations, Fourier features, or inducing points to use, trading off accuracy for speed. \begin{itemize} \item Unbiasedness: we show that fast GP methods like RFF and CG, when truncated early, result in biased estimates of the GP objective. \item When training models in this manner, we find that the resulting hyperparameters are biased with respect to their true values and the values obtained by training with the Cholesky decomposition. \item By viewing both CG and RFF as truncated series, we replace the deterministic truncation with a randomized one. Specifically, we deploy Randomized Telescope estimators, which allow us to truncate early and still maintain unbiasedness. \item We show that debiasing the loss function results in better predictions on test data and higher-fidelity hyperparameters. Our method allows us to use very small numbers of CG iterations or RFF features while maintaining predictive performance equivalent to the Cholesky decomposition. \item Moreover, on large datasets where the Cholesky decomposition is infeasible, our method outperforms other scalable alternatives. \end{itemize} GP training is sped up with fewer CG iterations or fewer Fourier features. Here, we quantify the bias that both CG and RFF introduce to the log marginal likelihood objective, and test the hypothesis that early truncation of CG and RFF comes at the price of biased hyperparameters.
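For concreteness, the RBF kernel matrix and its RFF approximation can be sketched as follows. The data are synthetic and the unit outputscale and lengthscale are illustrative assumptions; the only claim demonstrated is that the Monte Carlo error of the RFF Gram matrix shrinks as the number of features grows:

```python
import numpy as np

# Exact RBF Gram matrix vs. its random-Fourier-feature approximation
# (hypothetical data; o^2 = l^2 = 1 assumed for simplicity).
rng = np.random.default_rng(0)
n, d = 100, 3
X = rng.normal(size=(n, d))

def rbf_kernel(X, outputscale=1.0, lengthscale=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return outputscale * np.exp(-sq / (2.0 * lengthscale ** 2))

def rff_kernel(X, m, lengthscale=1.0, rng=rng):
    # Bochner's theorem: the RBF kernel's spectral density is Gaussian, so
    # frequencies are drawn from N(0, 1/l^2) and features are random cosines.
    W = rng.normal(scale=1.0 / lengthscale, size=(d, m))
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)
    Z = np.sqrt(2.0 / m) * np.cos(X @ W + b)   # n x m feature map
    return Z @ Z.T                             # rank-min(n, m) approximation

K = rbf_kernel(X)
err = {m: np.abs(rff_kernel(X, m) - K).max() for m in (10, 100, 10000)}
# Monte Carlo error decays at the usual O(1/sqrt(m)) rate.
assert err[10000] < err[10]
```

This is the low-rank structure that makes the telescoping log determinant estimates in the main text cheap for small feature counts.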
We generated synthetic data by sampling from the prior of a GP regression model with fixed hyperparameters: \begin{align*} f(\ensuremath{\mathbf{x}}) &\sim \GP{m(\ensuremath{\mathbf{x}})}{k({\mathbf{x}, \mathbf{x'}})} \\ \ensuremath{\mathbf{y}} &= f(\ensuremath{\mathbf{x}}) + \epsilon \quad \text{where} \quad \epsilon \sim \normaldist{0}{\sigma^2} \end{align*} and where $m(\ensuremath{\mathbf{x}}) = \mathbf{0}$ and $k({\mathbf{x}, \mathbf{x'}})$ is the RBF kernel, parametrized by lengthscale and outputscale hyperparameters. Together with the observation noise $\sigma^2$, the model has three hyperparameters. In this experiment, we sweep over lengthscale values in $(0.1, 1)$ and draw a 500-dimensional sample $\ensuremath{\mathbf{y}}$. We then fit two models: an RFF model that approximates the covariance matrix using $J$ features, and a CG model that computes $\ensuremath{\widehat{\bK}_{\bX\bX}}^{-1}\ensuremath{\mathbf{y}}$ using $J$ iterations. In all models, the observation noise and outputscale were fixed to their true values, while the lengthscale was trained. We first train the hyperparameters of an \emph{exact GP model} that uses the Cholesky decomposition to compute the objective at every training iteration. We used the Adam optimizer, with a learning rate schedule of \db{Insert}, for 500 iterations. Fixing the exact GP's optimized hyperparameters, we compute $\ensuremath{\log \vert \trainK \vert}$ and $\ensuremath{\by^{\top} \trainK^{-1} \by}$ using (1) an exact GP, (2) RFF, and (3) CG. We varied the number of Fourier features between \db{insert X and X} and the number of CG iterations between \db{insert X and X}. \paragraph{RFF and SS-RFF} In the right panel of \cref{fig:loss_evolution_rff} we see that RFF converges to an impoverished loss value, though it improves as $J$ increases, as predicted by the bias decay rate. SS-RFF ($\mathbb{E}[J]=700$) is unbiased and converges to a loss equivalent to Cholesky.
Crucially, however, the convergence of SS-RFF is an order of magnitude slower. We attribute the slow convergence to the especially large truncation variance needed to counteract the slow $\bigo{1/J}$ decay of the RFF bias (see \cref{RFF_bias_theorem}). In the left panel, we see that the exact training loss is qualitatively similar to the approximate one. The loss has less variance since we do not compute it using \emph{random} Fourier features. Here too, SS-RFF is slow to converge yet eventually outperforms RFF. To summarize, for RFF it may be preferable to optimize the exact objective using $\bigo{N^3}$ operations rather than using a slow-to-train unbiased truncation. \paragraph{CG and RR-CG} In the right panel of \cref{fig:loss_evolution_cg}, we see that RR-CG converges as quickly and as accurately as an exact model, whereas CG converges to an inferior loss value. Here too, CG's bias decreases with $J$. RR-CG ($\mathbb{E}[J]=20$ \db{verify Luhuan?}) exhibits more variance than CG, yet the variance is only moderate and inconsequential due to the fast decay of CG's bias. The exact loss in the left panel of \cref{fig:loss_evolution_cg} illustrates again that RR-CG and Cholesky are comparable in performance and running time. Interestingly, however, while CG successfully optimizes its biased training objective, it fails to optimize the exact one. In fact, the two are inversely correlated: better values of the approximate CG objective lead to worse values of the exact objective. We conclude that early-truncated CG successfully optimizes a distorted loss landscape. However, at least in this experiment, the exact loss landscape is quite different, leading to poor performance.
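The synthetic-data generation described in this experiment can be sketched as follows. The sizes and hyperparameter values here are illustrative stand-ins for the placeholders in the text:

```python
import numpy as np

# Sample y from a GP prior with an RBF kernel plus observation noise
# (illustrative hyperparameters; the paper sweeps lengthscales in (0.1, 1)).
rng = np.random.default_rng(0)
n, lengthscale, outputscale, noise = 500, 0.5, 1.0, 0.1
x = rng.uniform(-2.0, 2.0, size=(n, 1))

sq = (x - x.T) ** 2                              # pairwise squared distances
K = outputscale * np.exp(-sq / (2.0 * lengthscale ** 2))
L = np.linalg.cholesky(K + 1e-6 * np.eye(n))     # jitter for numerical stability
f = L @ rng.normal(size=n)                       # f ~ N(0, K)
y = f + np.sqrt(noise) * rng.normal(size=n)      # y = f + eps, eps ~ N(0, sigma^2)
assert y.shape == (n,)
```

Fitting then amounts to maximizing the log marginal likelihood of $\ensuremath{\mathbf{y}}$ over the lengthscale, with the noise and outputscale held at their true values as described above.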
\subsection{Unbiasedness of the RR Estimator} The RR estimator is unbiased, i.e., $\mathbb{E} \left[ \bar{\psi}_J \right] = \psi$. \begin{proof} \begin{align*} \mathbb{E} \left[ \bar{\psi}_J \right] &= \mathbb{E} \left[ \sum_{j=1}^J \frac{\Delta_j}{\mathbb{P} (\mathcal{J} \geq j )}\right] \\ &= \sum_{J=1}^H \left[ \left(\sum_{j=1}^J \frac{\Delta_j}{\mathbb{P} (\mathcal{J} \geq j)} \right) \mathbb{P} (\mathcal{J}=J) \right]. \end{align*} Exchanging the summation order, \begin{align*} &= \sum_{j=1}^H \frac{\Delta_j}{\mathbb{P} (\mathcal{J} \geq j)} \left[\sum_{J=j}^H \mathbb{P} (\mathcal{J} = J) \right] \\ &= \sum_{j=1}^H \frac{\Delta_j}{ \mathbb{P} (\mathcal{J} \geq j )} \mathbb{P} (\mathcal{J} \geq j ) \\ &= \psi. \end{align*} \end{proof} \subsection{Unbiasedness of the SS Estimator} \label{appendix:SS_unbiased} The SS estimator of \cref{eqn:ss_estimator} is unbiased, i.e., $\mathbb{E} \left[ \bar{\psi}_J \right] = \psi$. \begin{proof} \begin{align*} \mathbb{E} \left[ \bar{\psi}_J \right] &= \mathbb{E}_J \left[ \sum_{j=1}^H \frac{\Delta_j}{\mathbb{P} (\mathcal{J} = j )} \cdot \mathbbm{1}_{j=J} \right] \\ &= \sum_{j=1}^H \frac{\Delta_j}{\mathbb{P} (\mathcal{J} = j )} \mathbb{E}_J[\mathbbm{1}_{j=J}]\\ &= \sum_{j=1}^H \frac{\Delta_j}{\mathbb{P} (\mathcal{J} = j )} \sum_{J=1}^H \mathbb{P}(\mathcal{J}=J)\cdot \mathbbm{1}_{J=j}\\ &= \sum_{j=1}^H \frac{\Delta_j}{\mathbb{P} (\mathcal{J} = j )} \mathbb{P} (\mathcal{J} = j ) = \psi. \end{align*} \end{proof} \section{Krylov GPs} The objective is \begin{align}\label{eq:objective} L(\ensuremath{\boldsymbol{\theta}} \mid \ensuremath{\boldmatrix{X}}, \ensuremath{\mathbf{y}}) &= -\log p(\ensuremath{\mathbf{y}}| \ensuremath{\boldmatrix{X}}, \ensuremath{\boldsymbol{\theta}}) \propto \log | \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}} | + \ensuremath{\mathbf{y}}^T \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}}\bX}^{-1} \ensuremath{\mathbf{y}}.
\end{align} The derivative of \cref{eq:objective} is given by: \begin{align} \frac{\partial -\log p(\ensuremath{\mathbf{y}} | \ensuremath{\boldmatrix{X}}, \ensuremath{\boldsymbol{\theta}})}{\partial \theta} & \propto \textrm{Tr}\left( \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}}^{-1} \frac{\partial \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}}\bX}}{\partial \theta}\right) - \ensuremath{\mathbf{y}}^T \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}}\bX}^{-1} \frac{\partial \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}}\bX}}{\partial \theta} \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}}\bX}^{-1} \ensuremath{\mathbf{y}}. \end{align} We call Algorithm~\ref{alg:mbcg}, which takes $[\ensuremath{\mathbf{y}} \quad \ensuremath{\mathbf{z}}_1 \quad \cdots \quad \ensuremath{\mathbf{z}}_t]$ as input and outputs \begin{align} [\ensuremath{\mathbf{u}}_0 \quad \ensuremath{\mathbf{u}}_1 \quad \cdots \quad \ensuremath{\mathbf{u}}_t] &= \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}}^{-1} [\ensuremath{\mathbf{y}} \quad \ensuremath{\mathbf{z}}_1 \quad \cdots \quad \ensuremath{\mathbf{z}}_t] \textrm{ and } \tilde{T}_1, \cdots, \tilde{T}_t, \end{align} where $\tilde{T}_1, \cdots, \tilde{T}_t$ are partial Lanczos tridiagonalizations of $\hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}}\bX}$ with respect to the vectors $\ensuremath{\mathbf{z}}_1, \cdots, \ensuremath{\mathbf{z}}_t$. \begin{itemize} \item Computing $\ensuremath{\mathbf{y}}^T \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}}\bX}^{-1} \ensuremath{\mathbf{y}}$: use the direct output from mBCG. \item Computing $\log | \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}} |$: use stochastic trace estimation + Lanczos quadrature.
\begin{align} \log | \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}} | &= \textrm{Tr} (\log \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}})\\ & = \mathbb{E}\left[ \ensuremath{\mathbf{z}}^T (\log \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}}) \ensuremath{\mathbf{z}} \right] \\ & \approx \frac{1}{t} \sum_{i=1}^t \ensuremath{\mathbf{z}}_i^T (\log \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}}) \ensuremath{\mathbf{z}}_i \\ & \approx \frac{1}{t} \sum_{i=1}^t \left[ (\ensuremath{\boldmatrix{V}}_i^T \ensuremath{\mathbf{e}}_1)^T (\log \ensuremath{\boldgreekmatrix{\Lambda}}_i ) (\ensuremath{\boldmatrix{V}}_i^T \ensuremath{\mathbf{e}}_1) \right] \end{align} where $\tilde{\ensuremath{\boldmatrix{T}}}_{i} =\ensuremath{\boldmatrix{V}}_i \ensuremath{\boldgreekmatrix{\Lambda}}_i \ensuremath{\boldmatrix{V}}_i^T$ is the eigenvalue decomposition of the tridiagonal matrix $\tilde{\ensuremath{\boldmatrix{T}}}_i$. \item Computing $\textrm{Tr}\left( \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}}^{-1} \frac{\partial \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}}\bX}}{\partial \theta}\right)$: use stochastic trace estimation.
\begin{align} \textrm{Tr}\left( \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}}^{-1} \frac{\partial \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}}\bX}}{\partial \theta}\right) &= \mathbb{E} \left[ \ensuremath{\mathbf{z}}^T \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}}^{-1} \frac{\partial \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}}\bX}}{\partial \theta} \ensuremath{\mathbf{z}} \right] \\ &\approx \frac{1}{t} \sum_{i=1}^t \left(\ensuremath{\mathbf{z}}_i^T \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}}\bX}^{-1} \right) \left( \frac{\partial \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}}}{\partial \theta} \ensuremath{\mathbf{z}}_i \right) \end{align} \end{itemize} In summary, the objective is estimated by \begin{align} L(\ensuremath{\boldsymbol{\theta}} \mid \ensuremath{\boldmatrix{X}}, \ensuremath{\mathbf{y}}) & \approx \frac{1}{t} \sum_{i=1}^t \ensuremath{\mathbf{z}}_i^T \left(\log \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}} \right) \ensuremath{\mathbf{z}}_i + \ensuremath{\mathbf{y}}^T \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}}^{-1} \ensuremath{\mathbf{y}}, \end{align} and the derivative is estimated by \begin{align} \frac{\partial L(\ensuremath{\boldsymbol{\theta}} \mid \ensuremath{\boldmatrix{X}}, \ensuremath{\mathbf{y}})}{\partial \theta} &\approx \frac{1}{t} \sum_{i=1}^t \left( \ensuremath{\mathbf{z}}_i^T \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}}^{-1} \right) \left( \frac{\partial \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}}\bX}}{\partial \theta} \ensuremath{\mathbf{z}}_i \right) - \ensuremath{\mathbf{y}}^T \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}}^{-1}
\frac{\partial \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}}}{\partial \theta} \hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}}^{-1} \ensuremath{\mathbf{y}}. \end{align} \begin{algorithm}[H] \SetAlgoLined \KwInput{$mmm_A()$ -- function for matrix-matrix multiplication with $A$ \\ $B$ -- $n\times t$ matrix to solve against \\ $\hat{P}^{-1}$ -- func. for preconditioner \\ $\epsilon$ -- convergence tolerance\\} \KwOutput{$A^{-1}B$, $\tilde{T}_1, \cdots, \tilde{T}_t$} $U_0 \leftarrow \mathbf{0}$ \gray{// Current solutions} \\ $R_0 \leftarrow B - mmm_A(U_0)$ \gray{// Current residuals} \\ $Z_0 \leftarrow \hat{P}^{-1}(R_0)$ \gray{// Preconditioned residuals} \\ $D_0 \leftarrow Z_0$ \gray{// Search directions for next solutions} \\ $\tilde{T}_1, \cdots, \tilde{T}_t \leftarrow 0$ \gray{// Tridiag matrices} \\ \For{$j \leftarrow 1$ \KwTo $t$}{ $V_j \leftarrow mmm_A (D_{j-1})$ \\ $\ensuremath{\boldsymbol{\alpha}}_j \leftarrow (R_{j-1} \circ Z_{j-1})^T \mathbf{1} / (D_{j-1} \circ V_j)^T \mathbf{1}$ \\ $U_j\leftarrow U_{j-1} + \textrm{diag}(\ensuremath{\boldsymbol{\alpha}}_j) D_{j-1}$\\ $R_j \leftarrow R_{j-1} - \textrm{diag}(\ensuremath{\boldsymbol{\alpha}}_j) V_j$ \\ \textbf{if} $\forall i \quad \| r_j^{(i)}\|_2 < \epsilon$ \textbf{then return} $U_j$ \; $Z_j \leftarrow \hat{P}^{-1} (R_j)$ \\ $\ensuremath{\boldsymbol{\beta}}_j \leftarrow (R_j \circ Z_j)^T \mathbf{1} / (R_{j-1} \circ Z_{j-1})^T \mathbf{1}$ \\ $D_j \leftarrow Z_j + \textrm{diag} (\ensuremath{\boldsymbol{\beta}}_j) D_{j-1}$ \\ $\forall i \quad [\tilde{T}_i]_{j,j} \leftarrow 1/[\ensuremath{\boldsymbol{\alpha}}_j]_i + [\ensuremath{\boldsymbol{\beta}}_{j-1}]_i / [\ensuremath{\boldsymbol{\alpha}}_{j-1}]_i$ \\ $\forall i \quad [\tilde{T}_i]_{j-1, j}, [\tilde{T}_i]_{j, j-1} \leftarrow \sqrt{[\ensuremath{\boldsymbol{\beta}}_{j-1}]_i} / [\ensuremath{\boldsymbol{\alpha}}_{j-1}]_i$ } \Return{$U_{t}, \tilde{T}_1, \cdots, \tilde{T}_t$} \caption{Modified Batch Conjugate Gradient (mBCG)} \label{alg:mbcg} \end{algorithm} \section{RR-CG} For notational simplicity, we write $\ensuremath{\boldmatrix{K}} = 
\hat{\ensuremath{\boldmatrix{K}}}_{\ensuremath{\boldmatrix{X}} \ensuremath{\boldmatrix{X}}}$. Denote the system size, i.e., the size of the matrix $\ensuremath{\boldmatrix{K}}$, by $N$. \begin{enumerate} \item CG solution estimates. For $\ensuremath{\boldmatrix{K}}^{-1} \ensuremath{\mathbf{y}}$, CG computes: \begin{align} \ensuremath{\boldmatrix{K}}^{-1} \ensuremath{\mathbf{y}} &= \sum_{j=1}^N \gamma_j \ensuremath{\mathbf{d}}_j \end{align} where the $\ensuremath{\mathbf{d}}_j$ are conjugate search directions and the $\gamma_j$ are the corresponding coefficients. The RR estimate is: \begin{align} \ensuremath{\boldmatrix{K}}^{-1} \ensuremath{\mathbf{y}} \approx \sum_{j=1}^J \frac{\gamma_j \ensuremath{\mathbf{d}}_j}{1-F(j-1)} \end{align} where $J \sim p(J)$ and $F(\cdot)$ is the cumulative distribution function associated with $p(J)$. Similarly, we can compute the RR estimate for $\ensuremath{\boldmatrix{K}}^{-1}\ensuremath{\mathbf{z}}$. \textbf{Note:} for the backward pass computation \begin{align*} \ensuremath{\mathbf{y}}^T \ensuremath{\boldmatrix{K}}^{-1} \frac{\partial \ensuremath{\boldmatrix{K}}}{\partial \theta} \ensuremath{\boldmatrix{K}}^{-1} \ensuremath{\mathbf{y}}, \end{align*} the unbiased estimate is \begin{align} \widehat{\ensuremath{\mathbf{y}}^T \ensuremath{\boldmatrix{K}}^{-1}}_1 \frac{\partial \ensuremath{\boldmatrix{K}}}{\partial \theta} \widehat{\ensuremath{\boldmatrix{K}}^{-1} \ensuremath{\mathbf{y}}}_2, \end{align} where we use two independent RR estimates for the left linear-solve and the right linear-solve, i.e. \begin{align} \widehat{\ensuremath{\mathbf{y}}^T \ensuremath{\boldmatrix{K}}^{-1}}_1 &= \sum_{j=1}^{J_1} \frac{\gamma_j \ensuremath{\mathbf{d}}_j^T} {1-F_1 (j-1)}, \\ \widehat{\ensuremath{\boldmatrix{K}}^{-1} \ensuremath{\mathbf{y}}}_2 &= \sum_{j=1}^{J_2} \frac{\gamma_j \ensuremath{\mathbf{d}}_j}{1-F_2 (j-1)}. \end{align} Here $J_1 \sim p_1(J)$, $J_2 \sim p_2(J)$, $J_1$ and $J_2$ are independent, and $F_1(\cdot)$ and $F_2(\cdot)$ are the cdfs of the separate distributions. 
\item The log-determinant term is computed as \begin{align} \log | \ensuremath{\boldmatrix{K}} | &\approx \frac{1}{t} \sum_{i=1}^t [(\ensuremath{\boldmatrix{V}}_i \ensuremath{\mathbf{e}}_1)^T (\log \ensuremath{\boldgreekmatrix{\Lambda}}_i ) (\ensuremath{\boldmatrix{V}}_i \ensuremath{\mathbf{e}}_1)] \end{align} For notational simplicity, consider $i=1$ (a single probe vector), so we drop the subscript $i$. We use the subscript $j$ to denote the size of the matrix $\ensuremath{\boldmatrix{T}}$. That is, $\ensuremath{\boldmatrix{T}}_j$ is the partial Lanczos tridiagonalization of $\ensuremath{\boldmatrix{K}}$ after $j$ iterations, with eigendecomposition $\ensuremath{\boldmatrix{T}}_j = \ensuremath{\boldmatrix{V}}_j \ensuremath{\boldgreekmatrix{\Lambda}}_j \ensuremath{\boldmatrix{V}}_j^T$. Writing the full-rank estimate as a telescoping sum, \begin{align} \log | \ensuremath{\boldmatrix{K}} | &\approx (\ensuremath{\boldmatrix{V}}_N \ensuremath{\mathbf{e}}_1)^T (\log \ensuremath{\boldgreekmatrix{\Lambda}}_N ) (\ensuremath{\boldmatrix{V}}_N \ensuremath{\mathbf{e}}_1) \\ &= (\ensuremath{\boldmatrix{V}}_1 \ensuremath{\mathbf{e}}_1)^T (\log \ensuremath{\boldgreekmatrix{\Lambda}}_1 ) (\ensuremath{\boldmatrix{V}}_1 \ensuremath{\mathbf{e}}_1) + \\ &\qquad \sum_{j=2}^N \big[ (\ensuremath{\boldmatrix{V}}_j \ensuremath{\mathbf{e}}_1)^T (\log \ensuremath{\boldgreekmatrix{\Lambda}}_j) (\ensuremath{\boldmatrix{V}}_j \ensuremath{\mathbf{e}}_1) \\ & \qquad \qquad - (\ensuremath{\boldmatrix{V}}_{j-1} \ensuremath{\mathbf{e}}_1)^T (\log \ensuremath{\boldgreekmatrix{\Lambda}}_{j-1} ) (\ensuremath{\boldmatrix{V}}_{j-1} \ensuremath{\mathbf{e}}_1) \big]. 
\end{align} The RR estimate truncates this telescoping sum at a random level $J \sim p(J)$ and reweights each kept increment: \begin{align} \log | \ensuremath{\boldmatrix{K}} | &\approx (\ensuremath{\boldmatrix{V}}_1 \ensuremath{\mathbf{e}}_1)^T (\log \ensuremath{\boldgreekmatrix{\Lambda}}_1 ) (\ensuremath{\boldmatrix{V}}_1 \ensuremath{\mathbf{e}}_1) + \\ &\qquad \sum_{j=2}^J \big[ (\ensuremath{\boldmatrix{V}}_j \ensuremath{\mathbf{e}}_1)^T (\log \ensuremath{\boldgreekmatrix{\Lambda}}_j) (\ensuremath{\boldmatrix{V}}_j \ensuremath{\mathbf{e}}_1) \\ & \qquad \qquad - (\ensuremath{\boldmatrix{V}}_{j-1} \ensuremath{\mathbf{e}}_1)^T (\log \ensuremath{\boldgreekmatrix{\Lambda}}_{j-1} ) (\ensuremath{\boldmatrix{V}}_{j-1} \ensuremath{\mathbf{e}}_1) \big] / \left(1- F(j-1)\right), \end{align} where $F(\cdot)$ is again the cumulative distribution function of $p(J)$. \end{enumerate} \bibliographystyle{apalike}
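As a standalone illustration of the Russian-roulette reweighting used above, here is a minimal numpy sketch (the function name `rr_estimate` and the toy series are our own, not from any library): drawing a truncation level $J \sim p(J)$ and dividing the $j$-th kept term by $1-F(j-1)$ makes the truncated sum unbiased for the full series, since term $j$ survives with probability $1-F(j-1)$.

```python
import numpy as np

def rr_estimate(terms, p, rng):
    """One Russian-roulette estimate of sum(terms).

    terms : series terms a_1, ..., a_N
    p     : probabilities p(J = j) for j = 1..N (must sum to 1)
    """
    N = len(terms)
    cdf = np.cumsum(p)                        # cdf[j-1] = F(j)
    J = rng.choice(np.arange(1, N + 1), p=p)  # truncation level J ~ p(J)
    est = 0.0
    for j in range(1, J + 1):
        F_prev = cdf[j - 2] if j > 1 else 0.0  # F(j-1), with F(0) = 0
        est += terms[j - 1] / (1.0 - F_prev)   # reweight each kept term
    return est

rng = np.random.default_rng(0)
terms = np.array([4.0, 2.0, 1.0, 0.5])
p = np.array([0.4, 0.3, 0.2, 0.1])
avg = np.mean([rr_estimate(terms, p, rng) for _ in range(50000)])
# averaging many estimates should approach terms.sum() = 7.5 (unbiasedness)
```

A single such estimate is what the backward-pass expressions use, with two independent draws ($J_1$, $J_2$) for the left and right linear solves.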
\section{Introduction} \label{sec:intro} Redshift surveys of galaxies are a promising way of exploring the nature of the dark energy and testing gravity on cosmological scales. Recent results from the baryon oscillation spectroscopic survey (BOSS) data release (DR) 11 of the Sloan Digital Sky Survey (SDSS) III have demonstrated the usefulness of redshift surveys \cite{Anderson,Percival}. A possible tension in the cosmological parameters between the results of the Planck satellite and the BOSS has been reported \cite{Samushia,Chuang,Sanchez,Beutler}, which has attracted the interest of researchers. An interesting question that arises is whether this tension could be resolved in models where gravity is modified from its usual general relativistic form. The redshift-space distortion plays an important role in testing gravity \cite{Guzzo,YSH}, as it reflects the information on the velocities of galaxies. One of the targets of redshift surveys is a measurement of the redshift-space distortions in the linear regime of the density perturbations \cite{Hikage2014}, which provides us with a chance of testing gravity through the linear growth rate. On the other hand, the Finger-of-God (FoG) effect is the redshift-space distortion in the nonlinear regime of density perturbations, reflecting the random motion of galaxies. The primary purpose of the present paper is to investigate an effective method to evaluate the random velocities of galaxies in halos, which might provide us with a unique chance of testing gravity on halo scales. This can be achieved by precisely modeling the FoG effect on the basis of the halo model. In order to quantify the redshift-space distortions, the multipole power spectrum is useful; it is defined as a multipole coefficient of the multipole expansion of the anisotropic power spectrum (e.g., \cite{Yamamoto2006,YSH,Beutler}). 
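For concreteness, the multipole coefficients $P_\ell(k) = \frac{2\ell+1}{2}\int_{-1}^{1} P(k,\mu)\,{\cal L}_\ell(\mu)\,d\mu$ can be extracted numerically. A minimal numpy sketch, using the linear Kaiser form $(1+\beta\mu^2)^2 P_{\rm lin}(k)$ as a stand-in anisotropic spectrum (the Kaiser input and the parameter values are illustrative assumptions, not this paper's model):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def multipole(P_of_mu, ell, n_mu=32):
    """P_ell = (2*ell+1)/2 * integral_{-1}^{1} P(mu) L_ell(mu) dmu,
    evaluated by Gauss-Legendre quadrature (exact for polynomial P)."""
    mu, w = leggauss(n_mu)
    coeffs = np.zeros(ell + 1)
    coeffs[ell] = 1.0                 # select the Legendre polynomial L_ell
    return 0.5 * (2 * ell + 1) * np.sum(w * P_of_mu(mu) * legval(mu, coeffs))

beta, P_lin = 0.5, 1.0                # toy values
P = lambda mu: (1.0 + beta * mu**2) ** 2 * P_lin

P0 = multipole(P, 0)   # Kaiser result: (1 + 2*beta/3 + beta**2/5) * P_lin
P2 = multipole(P, 2)   # Kaiser result: (4*beta/3 + 4*beta**2/7) * P_lin
P4 = multipole(P, 4)   # Kaiser result: (8*beta**2/35) * P_lin
```

Only even multipoles survive because the anisotropic spectrum is even in $\mu$; the same quadrature applies to any tabulated $P(k,\mu)$.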
Recently, the authors of Ref.~\cite{HY} found that a halo model describes well the small-scale behavior of the higher multipole power spectra of the luminous red galaxy (LRG) sample of SDSS DR7. Based on this new finding, we consider the potential of measuring the velocity of satellite galaxies in halos and testing the gravity theory on halo scales with the multipole power spectrum. The key of the method is the random motion of the satellite galaxies and their 1-dimensional velocity dispersion in a halo with mass $M$, for which we adopt a simple formula, \begin{eqnarray} &&\sigma^2_{\rm v}(M)=\beta{GM\over 2r_{\rm vir}}, \label{sigmav2} \end{eqnarray} where $\beta$ is a constant parameter, $G$ is Newton's gravitational constant, and $r_{\rm vir}$ is the virial radius defined by $r_{\rm vir}=(3M/4\pi\bar{\rho}_{\rm m}(z) \Delta_{\rm vir}(z))^{1/3}$, where $\bar\rho_{\rm m}(z)$ and $\Delta_{\rm vir}(z)$ are the mean matter density and the density contrast of a halo, respectively, at the redshift $z$. We adopt $\Delta_{\rm vir}=265$ at $z=0.3$, appropriate for the redshift of our LRG mock samples. We carefully check this velocity dispersion relation using numerical simulations, as well as the validity of the theoretical model for the higher multipole power spectra. This theoretical model is compared with the SDSS LRG sample, and we put a useful constraint on the velocity dispersion and the gravitational constant on halo scales. \label{sec:fm} We here briefly review the multipole spectrum in a halo model according to Refs.~\cite{HY,Hikage2014}. Following the general prescription of the halo approach~\cite{Seljak2001,White2001,CooraySheth2002}, we write the anisotropic power spectrum in redshift space as the sum of 1-halo and 2-halo terms, $P_{\rm LRG}(k,\mu)=P^{\rm 1h}(k,\mu)+P^{\rm 2h}(k,\mu)$. We here consider the model which consists of the central galaxies and the satellite galaxies. 
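Equation~(\ref{sigmav2}) is easy to evaluate numerically. A small Python sketch, with our own illustrative unit conventions and cosmological numbers (masses in $M_\odot/h$, $G$ in ${\rm Mpc}\,({\rm km/s})^2/M_\odot$, and the mean matter density taken as the physical density at $z$; none of these choices is prescribed by the paper):

```python
import numpy as np

# illustrative parameter choices (not a fit): flat LCDM-like values
Omega_m, z, Delta_vir, beta = 0.273, 0.3, 265.0, 1.0
rho_crit0 = 2.775e11     # critical density today [h^2 Msun / Mpc^3]
G = 4.30091e-9           # Newton's constant [Mpc (km/s)^2 / Msun]

def sigma_v(M):
    """1-d velocity dispersion sigma_v(M) = sqrt(beta*G*M / (2 r_vir)) in km/s.

    M in Msun/h; r_vir = (3M / (4 pi rho_m(z) Delta_vir))^(1/3) in Mpc/h,
    with rho_m(z) = Omega_m * rho_crit0 * (1+z)^3; the h factors cancel
    in the combination G*M/r_vir.
    """
    rho_m = Omega_m * rho_crit0 * (1.0 + z) ** 3                          # [h^2 Msun / Mpc^3]
    r_vir = (3.0 * M / (4.0 * np.pi * rho_m * Delta_vir)) ** (1.0 / 3.0)  # [Mpc/h]
    return np.sqrt(beta * G * M / (2.0 * r_vir))

sigma_v(1e14)   # a cluster-scale halo gives a few hundred km/s
```

Since $r_{\rm vir}\propto M^{1/3}$, the scaling is $\sigma_{\rm v}\propto M^{1/3}$: a halo eight times more massive is twice as "hot", a quick sanity check on any tabulated dispersion.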
We adopt the following expression (\ref{eq:pk_1h}) for the one halo term, \begin{eqnarray} &&P^{\rm 1h}(k,\mu)=\frac{\displaystyle 1}{\displaystyle \bar{n}^2}\int\!dM~ \frac{\displaystyle dn}{\displaystyle dM} \Bigl[2\langle N_{\rm sat}\rangle \tilde u_{\rm NFW}(k,M) e^{-(\sigma_c^2+\sigma_s^2) k^2\mu^2/2a^2H^2} \nonumber\\ &&\hspace{5cm}+\langle N_{\rm sat}\rangle^2 \tilde u^2_{\rm NFW}(k,M) e^{-\sigma_s^2 k^2\mu^2/a^2H^2} \Bigr], \label{eq:pk_1h} \end{eqnarray} where we adopt the halo mass function $dn/dM$ given by \cite{ShethTormen1999}, $\bar{n}$ is the mean number density of LRGs given by $\bar{n}=\int dM (dn/dM) N_{\rm HOD}(M)$, and $N_{\rm HOD}(M)=\langle N_{\rm cen}\rangle +\langle N_{\rm sat}\rangle$ is the halo occupation distribution (HOD), for which we adopt the following form \cite{Zheng2005}, \begin{eqnarray} &&\langle N_{\rm cen}\rangle =\frac{1}{2}\left[1+{\rm erf}\left(\frac{\log_{10}(M)-\log_{10} (M_{\rm min})}{\sigma_{\log M}}\right)\right], \\ &&\langle N_{\rm sat}\rangle = \langle N_{\rm cen}\rangle \left(\frac{M-M_{\rm cut}}{M_1}\right)^{\alpha}, \label{eq:HOD} \end{eqnarray} with the error function ${\rm erf}(x)$; $\sigma^2_c(M)$ and $\sigma^2_s(M)$ are the velocity dispersions of the central LRGs and the satellite LRGs, respectively. Table 1 lists the HOD parameters matching the SDSS DR7 LRG catalog in Ref.~\cite{Reid2009a}. We assume that the distribution of the satellite galaxies follows the NFW profile \cite{NFW1996}, and $\tilde{u}_{\rm NFW}(k)$ denotes the Fourier transform of the truncated NFW profile \cite{Scoccimarro2001}. Results of Ref.~\cite{Guo} support this assumption. We may assume that central LRGs reside near the halo center, so that their velocity relative to the host halo should be small (cf.~\cite{Hikage2012}). On the other hand, satellite LRGs are off-centered, and their random velocity should be the main source of the FoG effect. 
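The mean occupations of the HOD above, with the parameters of Table 1, can be coded directly; a minimal pure-Python sketch (the function names are ours):

```python
from math import erf, log10

# HOD parameters from Table 1 (masses in Msun/h)
M_min, sigma_logM = 5.7e13, 0.7
M_cut, M_1, alpha = 3.5e13, 3.5e14, 1.0

def N_cen(M):
    """Mean central occupation: 0.5*(1 + erf((log10 M - log10 M_min)/sigma_logM))."""
    return 0.5 * (1.0 + erf((log10(M) - log10(M_min)) / sigma_logM))

def N_sat(M):
    """Mean satellite occupation: N_cen * ((M - M_cut)/M_1)^alpha, zero below M_cut."""
    if M <= M_cut:
        return 0.0
    return N_cen(M) * ((M - M_cut) / M_1) ** alpha
```

By construction, halos at $M_{\rm min}$ host a central half the time, and satellites appear only above $M_{\rm cut}$, so the strongest FoG damping is contributed by the most massive halos.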
Here we assume \begin{eqnarray} &&\sigma^2_c(M)=\alpha_c^2 \sigma^2_{\rm v}(M), \\ &&\sigma^2_s(M)=\alpha_s^2 \sigma^2_{\rm v}(M), \end{eqnarray} where $\alpha_c$ and $\alpha_s$ are constant parameters. In the previous paper \cite{HY}, the 2-halo term was modeled with an analytic fitting formula from N-body simulations. In the present paper, however, we adopt a very simple treatment for the two halo term of the higher multipole power spectrum, because it is not trivial to construct a precise analytic model in redshift space which is applicable even at large wavenumbers. Using mock catalogs corresponding to the LRG sample is an alternative way to incorporate precise theoretical predictions for the two halo term. For this modeling, we adopt the results of the previous paper \cite{Hikage2014}, which constructed mock catalogs corresponding to the SDSS LRG sample and investigated the behavior of the multipole spectra. In the present paper, we use the following modeling for $P^{2h}_4(k)$ and $P^{2h}_6(k)$. The results of \cite{Hikage2014} demonstrate that the contribution from the two halo term to $P_6(k)$ is negligible, i.e., $P^{2h}_6(k)\simeq 0$, and that $P_4^{2h}(k)$ is simply expressed as $kP^{2h}_4(k)\simeq 15 [h{\rm Mpc}^{-1}]^2$, which we also adopt here. Because the contribution of the two halo term to $P_2(k)$ is rather large compared to those to $P_4(k)$ and $P_6(k)$ \cite{HY}, $P_2(k)$ is not included in our analysis. \begin{table*}[h] \begin{center} \begin{tabular}{lc} \hline \hline ~ & Simulation/LRG \\ \hline $M_{\rm min}$ & $5.7\times 10^{13}M_\odot/h$\\ $\sigma_{\log M}$ & 0.7 \\ $M_{\rm cut}$ & $3.5\times 10^{13}M_\odot/h$ \\ $M_1$ & $3.5\times 10^{14}M_\odot/h$ \\ $\alpha$ & 1 \\ \hline \end{tabular} \caption{HOD parameters of the LRG sample \cite{Reid2009a}.} \label{tab:lrgHOD} \end{center} \end{table*} We first demonstrate the validity of our theoretical model by comparing with the results of N-body simulations. 
The simulations assume the spatially flat cold dark matter model with a cosmological constant, adopting $\Omega_0=0.273$ and $\sigma_8=0.82$. We run 10 realizations of N-body simulations using the Gadget-2 code \citep{Springel05} with Gaussian initial conditions. Each simulation has a side length of 600$h^{-1}$Mpc and a particle number of $800^3$ (each particle mass is $2.8\times 10^{10}h^{-1}M_\odot$). We use $z=0.3$ snapshots and identify halos with the Friends-of-Friends algorithm with a linking length of 0.2. Mock catalogs are constructed so that the bias and the HOD match the SDSS DR7 LRG catalog in Ref.~\cite{Reid2009a}. The position of a central LRG is given by the potential minimum of the host halo, and its velocity is given as the averaged velocity of all particles within the halo. We substitute randomly picked dark matter particles for satellite LRGs. In this analysis we constructed mock samples both with and without the fiber collision effect \cite{Blanton03}. We first make uncollided samples by removing one of each pair of adjacent subhalos within 55 arcsec at $z=0.3$, and we then randomly return removed subhalos with 10 percent probability, representing the overlapped area of the tiling where both spectra of collided pairs can be measured. In our simulations, the central LRGs are located near the halo center, and their velocity relative to the halo is negligible. We assume no velocity bias for satellites. Thus, our mock catalogs should be understood as having $\alpha_c=0$ and $\alpha_s=1$. Using the mock catalogs, we show the validity of our expression~(\ref{sigmav2}) for the velocity dispersion of satellite galaxies. The velocity dispersion of satellite galaxies in a halo has not been well understood, though there are a few works that investigate the velocity dispersion of LRGs \cite{Hikage2012b,Masaki}. Recently, Guo et al. studied the velocity bias of galaxies in the SDSS III CMASS sample in the context of a halo model \cite{GuoVD}. 
Their results have implications for our results, as will be discussed below. Figure \ref{fig:sigmav} compares the velocity dispersion $\sigma_{\rm v}(M)$ of satellites as a function of the host halo mass $M$. Here the cross symbols show the results of the N-body simulations, while the curve shows $(GM/2r_{\rm vir})^{1/2}$, i.e., Eq.~(\ref{sigmav2}) with $\beta=1$. This demonstrates that Eq.~(\ref{sigmav2}) with $\beta=1$ reproduces well the relation between the velocity dispersion of satellites and the halo mass in our N-body simulations. The effect of the fiber collision, which misses galaxies located close to each other, could be crucial in the analysis of the redshift-space clustering on small scales \cite{Guofiber}. The fiber collision dominantly occurs for pairs in the same halo. In the previous work \cite{HY}, the effect of the fiber collision was included by a multiplicative factor reducing the satellite fraction. In the present paper, we adopt a similar prescription, for simplicity. Instead of introducing the satellite fraction, we float the HOD parameter $M_1$, which changes the satellite fraction, as a fitting parameter in our MCMC analysis. \begin{figure}[t] \begin{center} \vspace{.0cm} \hspace{0mm}\scalebox{.45}{\includegraphics{f1.eps}} \vspace{0.cm} \caption{1-dimensional velocity dispersion $\sigma_{\rm v}(M)$ as a function of halo mass. The crosses are from the N-body simulations, while the curve is Eq.~(\ref{sigmav2}) with $\beta=1$. \label{fig:sigmav}} \end{center} \end{figure} \begin{table}[h] \begin{center} \begin{tabular}{ccccc} \hline \hline ~~~~~~~~ & (A)~Mock with F.C. & (B)~Mock without F.C. 
& ~~~~(C)~LRG~~& ~~(D)~LRG with BLRG~~\\ \hline $\beta$ & ~$1.17^{+0.56}_{-0.40}~\bigl(0.96^{+0.43}_{-0.30}\bigr)$ & $0.97^{+0.30}_{-0.24}~\bigl(0.83^{+0.23}_{-0.20}\bigr)$ & $1.70^{+0.83}_{-0.55}~\bigl(1.79^{+0.83}_{-0.59}\bigr)$ & $1.35^{+0.68}_{-0.45}~\bigl(1.38^{+0.65}_{-0.47}\bigr)$ \\ $~M_{\rm 1}[10^{14}M_\odot/h]$ & ~$6.5^{+0.9}_{-1.0}~\bigl(6.1^{+1.0}_{-1.1}\bigr)$ & $4.1^{+0.4}_{-0.4}~\bigl(4.0^{+0.6}_{-0.6}\bigr)$ & $4.0^{+0.4}_{-0.4}~\bigl(4.0^{+0.4}_{-0.4}\bigr)$ & $4.0^{+0.4}_{-0.5}~\bigl(4.0^{+0.4}_{-0.5}\bigr)$ \\ {\small \rm satellite ~fraction(\%)}&~ $3.8^{+0.7}_{-0.5}~\bigl(4.1^{+0.8}_{-0.5}\bigr)$ & $5.9^{+0.6}_{-0.5} ~\bigl(6.2^{+0.7}_{-0.6}\bigr)$ & $6.3^{+0.5}_{-0.4} ~\bigl(6.3^{+0.5}_{-0.4}\bigr)$ & $6.4^{+0.5}_{-0.4} ~\bigl(6.4^{+0.5}_{-0.4}\bigr)$ \\ $\chi^2$ &~ $16 ~(58)$ & $18 ~(59)$ & $47 ~(56)$ & $6.4 ~(7.3)$ \\ d.o.f. &~ $10 ~(12)$ & $10 ~(12)$ & $60 ~(80)$ & $60 ~(80)$ \\ \hline\hline \end{tabular} \caption{ Results of our MCMC analysis floating the two parameters $\beta$ and $M_1$, with $\alpha_c=0$, $\alpha_s=1$, and the other cosmological parameters fixed. The best fitting values with one-sigma statistical errors, the satellite fraction $\int dM (dn/dM)\langle N_{\rm sat}\rangle/\bar n$, and the chi-squared along with the number of degrees of freedom (d.o.f.) are presented when fitted with (A) the simulation with the fiber collision (F.C.), (B) the simulation without the fiber collision, (C) the LRG sample, and (D) the LRG sample with the two halo term modeled using the BLRG sample, from the left to the right column, respectively. The results are obtained using the data in the range of wavenumbers $0.3\leq k/[h{\rm Mpc}^{-1}]\leq 0.6$; the values in parentheses are the same but obtained using the data in the range $0.2\leq k/[h{\rm Mpc}^{-1}]\leq 0.6$. 
} \label{tab:simulationnop0} \end{center} \end{table} We compare the results of the numerical simulations with the theoretical multipole power spectra, floating the two parameters $\beta$ and $M_1$. In particular, we fixed $\alpha_c=0$ and $\alpha_s=1$ for consistency with our numerical simulations. Note that the HOD parameters other than $M_1$ are fixed. In the MCMC analysis we only use $P_4(k)$ and $P_6(k)$ in the range of wavenumbers $0.3\leq k/[h{\rm Mpc}^{-1}]\leq0.6$, in order to reduce the influence of the uncertain contribution of the two halo term. Table II summarizes our results, where the best fitting values with one-sigma statistical errors are presented for (A) the simulation with the fiber collision (F.C.), (B) the simulation without the fiber collision, (C) the LRG sample, and (D) the LRG sample with the two halo term modeled with the brightest LRG (BLRG) sample \cite{HY}, from the left to the right column, respectively. The chi-squared and the degrees of freedom are also shown. In this table, the values within parentheses are the results for the data in the range $0.2\leq k/[h{\rm Mpc}^{-1}]\leq 0.6$. The left hand two columns of Figure \ref{fig:234} show the best fit curves of the HOD and the multipole power spectra for the simulations (A) and (B). These results demonstrate that our theoretical model reproduces the results of the numerical simulations. We next apply our method to the multipole power spectra measured with the SDSS DR7 \cite{Abazajian}. The DR7 LRG sample is selected to cover the redshift range $0.16 < z < 0.36$, only in the northern cap, in order to reduce systematic uncertainties and to match the analysis in Ref.~\cite{Reid2009a}. Thus, the sky coverage is limited to $7189 ~{\rm deg.}^2$ and the total number of LRGs is $61\,899$. 
We adopt the same method for the measurement as that in Refs.~\cite{Oka,HY,Yamamoto2010}, but with the fiducial cosmological background for the distance-redshift relation of the spatially flat $\Lambda $CDM cosmology with $\Omega_m = 0.3$. The right hand two columns in Table II list the results of the MCMC analysis with the LRG multipole spectra, which differ in the modeling of the two halo term. The right two columns of Figure \ref{fig:234} show the best fitting curves and the data. The results of the MCMC analysis with the SDSS LRG sample can be used for testing the gravitational constant on halo scales. This is because the velocity dispersion in modified gravity models could be written as $\sigma^2_{\rm v}(M)={G_{\rm eff}M/ 2r_{\rm vir}}$, where $G_{\rm eff}$ is an effective gravitational constant. Identifying $G_{\rm eff}=\beta G$, we may put a constraint on the effective gravitational constant on halo scales from the SDSS LRG sample: $\beta =1.70^{+0.83}_{-0.55}$ from column (C) in Table II, in which we adopted the same modeling for the two halo term as that of the mock catalogs. This value is rather larger than the prediction of the numerical simulations, although the error is not small. Though the contribution of the two halo term to $P_4$ and $P_6$ is rather small compared with that of the one halo term, it might be influential to our results. As a check of our results, we also model the contribution of the two halo term using the BLRG sample \cite{HY}. Because the BLRG catalog roughly corresponds to a catalog of the central galaxies, we may model the two halo term by computing the multipole spectrum of the BLRG catalog. Column (D) of Table II presents the result, $\beta =1.35^{+0.68}_{-0.45}$. Compared with the case where the two halo term is modeled from the numerical simulation, the value of $\beta$ becomes smaller, and $\beta=1$ is within the one-sigma error of the results. 
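The arithmetic behind interpreting such a constraint is simple. Since the satellite fraction is small, the spectrum mostly constrains the combination $\beta(\alpha_c^2+\alpha_s^2)$; a two-line back-of-the-envelope check of what the best-fit central value from column (D) would imply for the central-galaxy velocity within general relativity (an illustrative computation only, not a quoted result):

```python
from math import sqrt

combo = 1.35    # measured combination beta*(alpha_c**2 + alpha_s**2), case (D) best fit
alpha_s = 1.0   # no satellite velocity bias assumed

# within general relativity beta = 1, so alpha_c**2 = combo - alpha_s**2
alpha_c = sqrt(combo - alpha_s**2)
# alpha_c comes out near 0.6
```

This is the sense in which a central-galaxy velocity bias could mimic an enhanced gravitational constant.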
Let us discuss the reason why a higher value of $\beta$ is obtained from the analysis of the LRG sample. It could be a smoking gun of modified gravity. For example, an $f(R)$ gravity model has an effective gravitational constant $G_{\rm eff}=4G/3$, as long as the chameleon mechanism does not work. However, we should discuss possible systematics that may lead to a larger value of $\beta$. Because the satellite fraction is small, around $6$\% of the total LRGs, the first term dominates the right hand side of Eq.~(\ref{eq:pk_1h}). Then, taking into account the degeneracy between the central and the satellite galaxy velocities, we should understand the constraint as \begin{eqnarray} {\sigma_c^2+\sigma_s^2 \over \sigma_{\rm v}^2}=\beta(\alpha_c^2+\alpha_s^2)=1.35^{+0.68}_{-0.45}, \end{eqnarray} in case (D), where we use the BLRG sample for modeling the two halo term. The results might be explained by a larger velocity dispersion of the central galaxies in multiple systems. Recently, Guo et al. reported the velocity bias of galaxies in the SDSS CMASS samples \cite{GuoVD}. Their sample is different from ours, but they report $\alpha_c\sim0.3$. However, this value is rather too small to explain our results; $\alpha_c\sim0.6$ would be required within general relativity, $\beta=1$. Another possible systematic is the modeling of the two halo term in $P_\ell(k)$, as we obtained somewhat different values between (C) and (D) in Table II. More sophisticated simulations based on subhalo catalogs from N-body simulations could be necessary and useful. We checked that our treatment of the fiber collision effect works within the error. However, this effect could be more complicated, and a more careful modeling of the fiber collision might be necessary. In summary, we have investigated the potential of the higher multipole power spectra of the galaxy distribution in redshift space. 
This method is based on the recent finding that a halo model accounts well for the behavior of the multipole power spectrum of LRGs on small scales. Our method uses the data of the spectrum on small scales, $0.3\leq k/[h{\rm Mpc}^{-1}]\leq 0.6$. This is quite in contrast to the usual method of testing gravity by measuring the linear growth rate on very large scales. Our method relies on the fact that the one halo term makes the dominant contribution to the higher multipole power spectra at large wavenumbers, which reflects the random motions of the satellite galaxies. We carefully investigated the relation between the velocity dispersion of the random motions of satellite galaxies and the host halo mass on the basis of the mock catalogs from N-body simulations. The validity of our theoretical model for the higher multipole power spectrum was tested using the results of the mock catalogs. By confronting our theoretical model with the observed multipole spectra of the SDSS LRG samples, we obtained a value of the effective gravitational constant somewhat larger than that predicted by the numerical simulations. This could be a smoking gun of modified gravity. However, we might need to check our theoretical model for the two halo term and the fiber collision effect more carefully. \vspace{2mm} K. Y. thanks the workshop, APC-YITP collaboration: Mini-Workshop on Gravitation and Cosmology, held at YITP, Kyoto University, for a useful chance for discussions on the topic of the present paper. This work is supported by a research support program of Hiroshima University. The research by C.H. is supported in part by a Grant-in-Aid for Scientific Research of the Japanese Ministry of Education, Culture, Sports, Science, and Technology (No.~24740160).
\section{Introduction} Since time immemorial, the night skies have been a source of inspiration for humankind. People have been influenced by the stars and their arrangements in the sky, and have studied the positions of the stars for agriculture, timekeeping, and navigation --- some indigenous communities continue to use this knowledge today. Yet, in modern times, the majority of celestial objects are obscured by light pollution; more than 80\% of the world’s population live under light-polluted skies (\citealt{Falchi16}). This has led to an increased desire to seek out the dark, and reconnect with our cultural heritage. \section{Astrotourism for development} Astronomy and sustainable development go hand in hand, as several examples on the African continent have already shown (\citealt{Povic18,McBride18,Strubbe21}). Where development through astronomy-related capacity-building projects has received notable attention, new possibilities arise through travel and tourism. As far as we are aware, the potential of astrotourism for development was first realised in South Africa (\citealt{Govender09}). The construction of SALT (Southern African Large Telescope)\footnote{\url{https://www.salt.ac.za/}} in the Northern Cape attracted tourists to the area, which in turn stimulated the creation of jobs and small businesses. Since then, astrotourism has been identified as one of the Flagships of the International Astronomical Union's Office of Astronomy for Development (IAU OAD)\footnote{\url{http://www.astro4dev.org/flagship-projects/}}. The benefits of astrotourism can also be understood through the United Nations' Sustainable Development Goals (SDGs)\footnote{\url{https://sdgs.un.org/}}. This includes, but is by no means limited to, Goals 4, 5, 8, 9, 10, and 11: quality education; gender equality; decent work and economic growth; industry, innovation, and infrastructure; reduced inequalities; and sustainable communities (\citealt{Dalgleish20}). 
Where astrotourism is often centred around observatories, it is also possible to attract tourists to rural, dark sky areas which do not have any significant astronomical infrastructure. Dark sky tourism, a branch of astrotourism, focuses exclusively on activities which require dark skies, and can therefore bring significant advantages to rural communities (\citealt{Dalgleish21a}). Added benefits are generated through the conservation of the night sky, via organisations like the International Dark-Sky Association (IDA)\footnote{\url{https://www.darksky.org/}}. Communities can apply to the IDA for dark sky status, enhancing the visibility and protection of dark sky oases, and fostering increased tourism and local economic activity (\citealt{Mitchell19}). There are currently more than 130 certified International Dark Sky Places in the world, although only two of these are in Africa. \section{Astrotourism in Namibia} Namibia is ideal for astronomy-related tourism experiences. The country has a very low population density (3 people/km$^2$), and so there is little-to-no light pollution in the vast majority of the country. Namibian weather is an additional advantage --- dry conditions make for fewer clouds and clearer conditions to observe celestial objects. Alongside the amazing skies, the country is developing its capacity for astrophysics research (\citealt{Backes18}) and has many exciting astronomy-related sites to offer, including the Hoba meteorite (the largest meteorite in the world); the NamibRand Nature Reserve\footnote{\url{http://www.namibrand.com/dark-sky.html}} (Africa's first International Dark Sky Reserve); and the world-leading High Energy Spectroscopic System (H.E.S.S.) observatory (\citealt{deNaurois18}). Tourism is one of Namibia's key industries, accounting for 10.3\% of Gross National Product in 2019 (\citealt{wttc20}). 
Although Namibia is one of the best places in the world to see unpolluted night skies, its tourism typically revolves around safaris, hiking, indigenous culture, and trophy hunting. There is, however, a small community of amateur astronomers who have been visiting ``astrofarms'' over the past few decades to do astrophotography. Namibia is also at the forefront of community conservation; $\sim$20\% of Namibia is covered by 86 communal conservancies --- self-governing, democratic entities dedicated to the management and conservation of natural resources within social, cultural and economic contexts. Although not explicitly recognised, parallels can be drawn between these conservancies and the ethos of dark sky preservation. A recent meeting with the Ju/’Hoansi Traditional Authority (TA) in the Nyae Nyae conservancy showed that astrotourism was unknown to them, but they were eager to learn more. Tourism in the area has essentially come to a standstill (due to the Covid-19 pandemic), and so astrotourism presents an opportunity to reignite tourism activity in post-Covid times. The TA also shared that much of their indigenous starlore has been lost, and so astrotourism provides further opportunities to help recover and retain this endangered knowledge. One of the main hurdles in implementing astrotourism is the training of tour guides in astronomy and stargazing. A tour guide qualification on this topic already exists under the Namibia Training Authority, but it has been mostly unobtainable given a lack of accessible training, and is also in need of review. We plan to update the qualification, as well as devise an astronomy course for tour guides (\citealt{Dalgleish21b}) --- we hope to deliver some of this training ourselves, as well as to `train the trainer', to ensure sustainability. Overall, there is great scope for Namibia to expand and diversify its tourist demographic via astrotourism, and at little cost (\citealt{Stone19}). 
\section{Summary} Astrotourism makes use of unpolluted nightscapes as a natural and infinite resource. As dark skies become more scarce, many countries in the Global South have a unique opportunity to offer their dark skies to travellers who seek to reconnect with the heavens. Astronomy activities can also offer a sustainable and meaningful tourism experience, while keeping in line with the conservation of natural resources and cultural heritage. Moreover, it can be implemented at minimal cost, and with minimal infrastructure. In all, astrotourism presents opportunities for development through education, sustainable job creation, reduced inequalities, and the preservation of indigenous knowledge. For these reasons, astrotourism fulfils several of the SDGs and shows great potential for sustainable development in countries like Namibia. \acknowledgements We acknowledge support from the UKRI STFC Global Challenges Research Fund project ST/S002952/1 and Exeter College, Oxford.
\section{Introduction} Under extreme thermodynamic conditions the restoration of chiral symmetry is expected in Quantum Chromodynamics (QCD). The spectral functions of chiral partners become degenerate as a consequence. The most accessible of these chiral pairs is composed of $(\rho,a_1)$, the vector and axial-vector mesons with the lowest masses. In particular, the $\rho$ meson's dilepton decay allows direct experimental access to in-medium modifications of its hadronic properties. Therefore, a large experimental and theoretical effort has been undertaken to study the behavior of the spectral function of the $\rho$ meson, in and out of thermal equilibrium. Experimentally, QCD and particularly the properties of resonances in a hot and dense medium are studied through nucleus-nucleus collisions. Dileptons are particularly attractive for studying in-medium properties, since they are not subject to the strong force and only interact electromagnetically with the surrounding fireball, escaping the medium essentially undisturbed. The dilepton emission was therefore studied by various experiments, starting with the CERES/NA45 experiment at the CERN-SPS at high beam energies \cite{wessels2003latest}, exhibiting an excess attributed to the direct radiation of the fireball. Afterwards, the NA60 experiment \cite{arnaldi2006first} settled the debate about the nature of this excess: it was found consistent with a strong broadening of the $\rho$ spectral function with no apparent mass shift. The latter had been predicted by a handful of theoretical models in the `dropping mass' scenario, along with an increasing width \cite{pisarski1982phenomenology,hatsuda1992qcd,li1995enhancement}. More recently, the HADES experiment at GSI investigated the kinematic regime of low beam energies for p+p, p+A, and A+A collisions.
An excess in the measured electron pair yield at $M_{ee}\sim 0.15$-$0.5$ GeV/c$^2$ \cite{agakichiev2007dielectron,agakishiev2012first,agakishiev2012inclusive, agakishiev2012inclusive2,agakishiev2011dielectron} was found, in agreement with previous results from the DLS experiment at the BEVALAC \cite{porter1997dielectron}, again connected to a strong coupling of the $\rho$ meson to the baryonic sector \cite{agakichiev2010origin, rapp2009chiral}. Future experiments, such as CBM at FAIR, will add more high-quality experimental data on dilepton emission in the high net-baryon density region, by probing the intermediate energy range \cite{hohne2014measurement}. Different theoretical techniques are employed to study the vector meson spectral functions. A well-established approach stems from hadronic many-body theory \cite{rapp1997rho,rapp1999low,rapp2005vector}, evaluating the $\rho$ propagator including several modifications to its self-energy. Convolving it with a simple fireball model resulted in a dilepton yield consistent with SPS data \cite{van2008dilepton,van2006comprehensive}. One more recent advance comes from the Functional Renormalization Group approach, a non-perturbative framework capable of taking critical fluctuations into account \cite{tripolt2014spectral,rennecke2015vacuum}, which is therefore suited to studying medium modifications in cold and dense nuclear matter close to the liquid-gas phase transition, as well as the effects of the corresponding critical endpoint \cite{jung2017medium,tripolt2021vector}. Transport approaches make it possible to connect theoretical approaches for the spectral function to experimental measurements. They describe the full evolution in a collision and, therefore, have the advantage of allowing direct access to the spectral information of particles at all times.
In some approaches, the resonances can propagate \emph{off-shell}, and their spectral function can dynamically change, with the constraint of a vacuum behavior in the absence of surrounding matter. One example is (P)HSD \cite{cassing2000semiclassical,bratkovskaya2008dilepton}, which computes the spectral function at each time step via transition amplitudes; another is GiBUU \cite{buss2012transport,larionov2020dilepton}, in which the invariant mass of the particle becomes an independent variable, determined self-consistently through the evolution. Other transport approaches, such as UrQMD \cite{bass1998microscopic}, deal with \emph{on-shell} resonances, always assuming vacuum properties. SMASH (Simulating Many Accelerated Strongly-interacting Hadrons) \cite{weil2016particle}, the hadronic transport used for this work, falls in this category as well. One possibility for supplementing vacuum hadronic transport approaches with direct medium modifications of the spectral function is the coarse-graining method, where the local energy and net-baryon densities of an ``average event'' are converted into temperature and baryo-chemical potential. These are used as input parameters for medium modifications, e.g. when calculating electromagnetic radiation \cite{endres2016energy,endres2016photon,staudenmaier2018dilepton}. This improves agreement with experimental measurements, as shown for dilepton radiation with the SMASH approach in \cite{staudenmaier2018dilepton}. A description of the resonance dynamics solely based on on-shell propagation is not sufficient; instead, a mixed approach including the coarse-graining method leads to a better agreement for large collision systems. Nevertheless, even with a resonance description based on vacuum properties, the dynamical evolution of resonances is dramatically different in vacuum and in medium. Of particular interest for this work is the shortening of resonance lifetimes by inelastic scatterings, i.e.
absorption inside the medium, often referred to as \emph{collisional broadening}. A shortening of the lifetime ($\tau$) translates to an effective, dynamically-generated increase of the width ($\Gamma$) and subsequently a broadening of the spectral function ($\Gamma_{\rm eff}={1}/{\tau_{\rm eff}}$). The goal of this work is to investigate this dynamical broadening quantitatively, using the example of the $\rho$ meson, with the transport approach SMASH. The effective width and spectral function are reconstructed by analyzing the resonance lifetimes. Both equilibrium and non-equilibrium systems are studied and compared to assess the role of the different dynamics. This paper is organized as follows: Sec. \ref{sec:SMASH} describes SMASH, the hadronic transport approach used in this work, and details its relevant features. Sec. \ref{sec:Broadening} defines how the collisional broadening is calculated, with the results for different scenarios presented in Sec. \ref{sec:results} and summarized in Sec. \ref{sec:Conclusion}. \section{Model Description}\label{sec:SMASH} For the investigation of the $\rho$ meson in this work, the hadronic transport approach SMASH-2.1 \cite{weil2016particle,dmytro_oliinychenko_2021_5796168} is employed. It gives access to the full phase-space information at all times. Particles can be followed individually, and their lifetimes and interactions are directly accessible. Hadrons evolve in spacetime according to an effective solution of the relativistic Boltzmann equation. Particle species, their pole masses $M_0$, and corresponding decay widths $\Gamma_0$ are taken from the Particle Data Group \cite{Zyla:2020zbs} up to $M_0\sim2.3\ \mathrm{GeV}$. Hadrons with $\Gamma_0\leq10\ \mathrm{keV}$ are considered stable; otherwise, they are regarded as resonances with a non-singular vacuum spectral function, and can decay with a probability given by the mass-dependent decay width.
Only two-body decays and scatterings are included in the calculations, with a geometrical collision criterion, in order to maintain detailed balance. Resonances go through a $1\to2$ decay, or are absorbed, either in a $2\to1$ resonance formation or a $2\to2$ inelastic collision \cite{weil2016particle}. Equilibrium properties are studied in infinite matter calculations. This is achieved by using a finite box with periodic boundary conditions. The initial multiplicities are sampled from a Poisson distribution, simulating a grand-canonical ensemble. Particle momenta are sampled from a Maxwell-Boltzmann distribution with the given $(T,\mu_B)$, which approximates thermal and chemical equilibrium. For nucleus-nucleus collisions, the nuclei travel towards each other along the longitudinal axis with a given kinetic energy, and are offset along the transverse axis by a given impact parameter. For the initial condition, the positions of the nucleons in each nucleus are sampled according to the Woods-Saxon distribution, without Fermi momentum. Densities in this work are computed in the Eckart rest frame with a Gaussian smearing. For the hadron density, each particle in SMASH has the same unit weight. \subsection{Decay widths} The hadronic decay widths in SMASH follow the treatment of \cite{manley1992multichannel}, where the two-body decay $R\to ab$ has a mass-dependent width of \begin{equation}\label{SMASH:partial_width_def} \Gamma^\mathrm{dec}_{R\to ab}(m)=\Gamma_{R\to ab}^0\frac{\rho_{ab}(m)}{\rho_{ab}(M_0)}, \end{equation} where $m$ is the off-shell mass, $M_0$ and $\Gamma_{R\to ab}^0$ are the pole mass and corresponding width, and $\rho(m)$ is a parametrization, described in full detail in \cite{weil2016particle}.
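As an illustration of Eq. \eqref{SMASH:partial_width_def}, the mass dependence of the width can be sketched for the dominant $\rho\to\pi\pi$ channel. The phase-space factor below is a simplified $p$-wave stand-in for the full Manley parametrization used in SMASH, with approximate PDG input values:

```python
import math

M_PI = 0.138    # pion mass [GeV] (approximate PDG value)
M0 = 0.776      # rho pole mass [GeV]
GAMMA0 = 0.149  # rho pole width [GeV]

def q(m):
    """Pion momentum in the rest frame of a rho with mass m (rho -> pi pi)."""
    return math.sqrt(max(m**2 / 4.0 - M_PI**2, 0.0))

def rho_ab(m):
    """Simplified p-wave (l = 1) phase-space factor; a stand-in for the
    full Manley parametrization described in the text."""
    return q(m)**3 / m**2

def gamma_dec(m):
    """Mass-dependent decay width of Eq. (partial_width_def)."""
    return GAMMA0 * rho_ab(m) / rho_ab(M0)
```

By construction, the width reduces to $\Gamma_0$ at the pole mass and vanishes at the two-pion threshold, growing with the off-shell mass in between.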
The proper lifetime of a resonance with mass $m$ is defined as \begin{equation}\label{SMASH:proper_lifetime_def} \tau=\frac{1}{\Gamma^\mathrm{dec}(m)}, \end{equation} where $\Gamma^\mathrm{dec}(m)$ is the \emph{total} decay width, computed as the sum of the partial widths \eqref{SMASH:partial_width_def} over all decay channels $\{ab\}$. The probability for the resonance to decay in a sufficiently small time interval in its rest frame is \begin{equation}\label{SMASH:prob_decay} P(\mathrm{decay\ in\ }\Delta t)=\frac{\Delta t}{\tau}=\Gamma^\mathrm{dec}(m)\Delta t. \end{equation} SMASH uses this probability at each timestep to decay resonances. When it happens, a decay channel $R\to X$ is randomly chosen from the list of possible processes, with a mass-dependent branching ratio $\Gamma^\mathrm{dec}_{R\to X}(m)/\Gamma^\mathrm{dec}(m)$. \subsection{Spectral function} The spectral function of a resonance relates to the imaginary part of its propagator, and hence carries information about the mass probability distribution. In general, it can depend on temperature and density; however, such in-medium modifications are currently neglected in SMASH, and vacuum properties are assumed for all particles. The spectral function of a resonance is given by the relativistic Breit-Wigner distribution \begin{equation}\label{SMASH:Breit-Wigner} \mathcal{A}(m)=\frac{2\mathcal{N}}{\pi}\frac{m^2\Gamma^\mathrm{dec}(m)}{(m^2-M_0^2)^2+m^2\Gamma^\mathrm{dec}(m)^2}, \end{equation} where $\mathcal{N}$ is a normalization factor. When a resonance is formed, SMASH samples its mass using \eqref{SMASH:Breit-Wigner} within the available phase-space. The mass distribution in a simple gas in equilibrium, for instance, amounts to folding \eqref{SMASH:Breit-Wigner} with a thermal distribution: \begin{equation}\label{SMASH:eq_folding} \frac{1}{N}\dv{N}{m}\ (m;T,\mu)\propto(mT)^{3/2}e^{(\mu-m)/T}\mathcal{A}(m). 
\end{equation} Depending on kinematic limitations, different channels for the production of a $\rho$ will be available, which can lead to interesting non-thermal structures in the mass distribution \cite{schumacher2006theoretical,vogel2006reconstructing}. \section{Collisional broadening}\label{sec:Broadening} In a hadronic medium, absorption of particles decreases the average lifetime of resonances compared to that in vacuum. Such a decrease can be considered as an effective increase of the total decay width, consequently widening the spectral function. This effect is known as \emph{collisional broadening}. In a medium, absorptions are the main mechanism that determines resonance lifetimes. The effective total width is computed by extracting the average lifetime of a collection of particles, and inverting \eqref{SMASH:proper_lifetime_def} to define \begin{equation}\label{Broadening:Gamma_eff_def} \Gamma^\mathrm{eff}=\frac{1}{\avg{\tau}}=\avg{\frac{\gamma}{t_f-t_i}}, \end{equation} where $\gamma$ is the Lorentz factor of the resonance with respect to the computational system, computed with the momentum of the resonance, which is taken from the interaction history provided by SMASH along with the initial and final times $t_{i,f}$. The average in \eqref{Broadening:Gamma_eff_def} can be computed differentially, for instance depending on the invariant mass or the local hadron density. The additional contribution to the width, the collisional width, is defined as $\Gamma^\mathrm{col}=\Gamma^\mathrm{eff}-\Gamma^\mathrm{dec}$. In order to study the dynamical effects on the spectral function, the mass-dependent $\Gamma^\mathrm{eff}(m)$ replaces the regular vacuum decay width $\Gamma^\mathrm{dec}$ in \eqref{SMASH:Breit-Wigner}. As particles have a shorter average lifetime in the medium due to absorption processes, $\Gamma^\mathrm{eff}$ is larger than $\Gamma^\mathrm{dec}$. 
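A minimal sketch of Eq. \eqref{Broadening:Gamma_eff_def} and of the replacement $\Gamma^\mathrm{dec}\to\Gamma^\mathrm{eff}$ in the Breit-Wigner \eqref{SMASH:Breit-Wigner} follows; the $(t_i, t_f, \gamma)$ record format is a hypothetical stand-in for the actual SMASH interaction history:

```python
import math

M0, GAMMA0 = 0.776, 0.149  # rho pole mass and width [GeV]
HBARC = 0.1973             # hbar * c [GeV fm]

def effective_width(records):
    """Gamma_eff = 1/<tau> in GeV from (t_i, t_f, lorentz_gamma) records;
    the proper lifetime of each resonance is tau = (t_f - t_i) / gamma."""
    taus = [(tf - ti) / g for ti, tf, g in records]
    return HBARC * len(taus) / sum(taus)

def breit_wigner(m, width):
    """Relativistic Breit-Wigner of Eq. (Breit-Wigner), unnormalized."""
    return (2.0 / math.pi) * m**2 * width / ((m**2 - M0**2)**2 + (m * width)**2)

def dynamic_sf(masses, gamma_eff):
    """Dynamic spectral function on a mass grid: the vacuum width is
    replaced by the extracted (mass-binned) effective width, and the
    result is normalized to the integral of the vacuum Breit-Wigner
    over the same finite support (trapezoid rule)."""
    vac = [breit_wigner(m, GAMMA0) for m in masses]
    dyn = [breit_wigner(m, g) for m, g in zip(masses, gamma_eff)]

    def trapz(y):
        return sum(0.5 * (y[i] + y[i + 1]) * (masses[i + 1] - masses[i])
                   for i in range(len(masses) - 1))

    return [trapz(vac) / trapz(dyn) * a for a in dyn]
```

For a vacuum $\rho$ at rest, $\tau=\hbar c/\Gamma_0\approx1.3$ fm, so a single record $(0,\,1.3\ \mathrm{fm},\,1)$ recovers $\Gamma^\mathrm{eff}\approx\Gamma_0$; in-medium absorption shortens the lifetimes and enlarges $\Gamma^\mathrm{eff}$ accordingly.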
The obtained ``dynamic'' spectral function $\mathcal{A}^\mathrm{dyn}(m)$ is therefore broader than $\mathcal{A}(m)$. Since resonances that are annihilated in absorptions do not decay, this quantity can be thought of as the spectral function of the $\rho$ mesons which effectively contribute to the dilepton yields in \cite{staudenmaier2018dilepton}. In order to allow for a comparison between different systems, the spectral function must be properly normalized. This is not trivial since higher masses are increasingly rare, so the support of $\mathcal{A}^\mathrm{dyn}(m)$ is not infinite as in the vacuum. Each dynamic spectral function is normalized to the integral of the vacuum Breit-Wigner over the available support. This introduces a small scaling error, which has no impact on the analyses below. \section{Results}\label{sec:results} \subsection{Thermal systems} First, the collisional broadening of $\rho$ mesons is computed inside a box with an equilibrated hadron gas. This allows for an assessment of the thermodynamic behavior of the effective width, as well as for the comparison of $\mathcal{A}^\mathrm{dyn}$ to well-established model calculations of full in-medium modifications \cite{rapp1999low,van2008dilepton}. The box is initialized as explained in Sec. \ref{sec:SMASH} at different temperatures $T\in\{120,150,180\}$ MeV, each with three baryochemical potential values $\mu_B\in\{0,330,450\}$ MeV. Results are only considered after $t=10^4$ fm, which was checked to guarantee equilibration. \begin{figure}[ht] \centering \includegraphics[width=0.98\linewidth]{mass_nom_box.pdf}\caption{Probability distribution of $\rho$ meson masses in thermal equilibrium, at $\mu_B=330\ \mathrm{MeV}$ and $T\in\{120,150,180\}\ \mathrm{MeV}.$ SMASH production shown in the solid histogram, and the thermal folding \eqref{SMASH:eq_folding} in dashed lines.
The vacuum Breit-Wigner \eqref{SMASH:Breit-Wigner} is shown in black dash-dotted lines.}\label{fig:box_nominal} \end{figure} The mass distribution of $\rho$ mesons is shown to closely match the folding \eqref{SMASH:eq_folding} in Fig. \ref{fig:box_nominal}, as the simulated matter indeed corresponds to a thermalized hadron gas. The thermal weight exponentially favors the creation of smaller masses in comparison to the vacuum \eqref{SMASH:Breit-Wigner}, and there is not much difference between the production at the selected temperatures. The value of the baryochemical potential does not alter the distribution -- at least in this region of the phase diagram -- and therefore only $\mu_B=330$ MeV is displayed. This is consistent with the folding, in which the dependence on $\mu_B$ is cancelled by the normalization. In simpler systems exact matches between the analytic expectation and SMASH were found, and the slight deviations can be attributed to additional production channels in the full hadron gas via the $N^*(1520)$ resonance. \begin{figure}[ht] \centering \includegraphics[width=0.92\linewidth]{mass_width_box.pdf}\caption{Effective width of $\rho$ mesons in thermal equilibrium. Error bands are statistical.}\label{fig:box_width} \end{figure} Applying \eqref{Broadening:Gamma_eff_def} to the $\rho$ mesons separated into bins of mass, the mass-dependent effective width, shown in Fig. \ref{fig:box_width}, is extracted. $\Gamma^\mathrm{eff}(m)$ depends strongly on the thermodynamic conditions of the system, in contrast to the mass distribution of Fig. \ref{fig:box_nominal}. Lower masses are more affected by changes in the thermodynamic parameters, since the cross-section for $2\to1$ and $2\to2$ processes decreases with the masses of the incoming particles in this energy range \cite{weil2016particle}. Hence, heavier $\rho$ mesons are less likely to be absorbed.
The baryochemical potential is only relevant below the pole mass of $M_0=776$ MeV; an increase in $\mu_B$ favors the creation of baryons, suggesting that their coupling to the $\rho$ dominates the low-mass region. \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{mass_dyn+Rapp_SF.pdf}\caption{(Upper) Dynamic spectral function of $\rho$ mesons in thermal equilibrium. (Lower) Momentum-integrated in-medium spectral function for $\mu_B=330$ MeV \cite{rapp1999low}.}\label{fig:box_dynamic} \end{figure} Figure \ref{fig:box_dynamic} shows a comparison between the dynamic spectral function from SMASH and the modifications to the $\rho$ meson propagator from a full in-medium model \cite{rapp1999low}. Qualitatively, the behavior of $\mathcal{A}^\mathrm{dyn}(m)$ is similar to the full in-medium spectral function: it is broadened with increasing temperature, and the peak mass shifts slightly. Notably, the baryochemical potential makes almost no difference in the dynamic spectral function. However, quantitative differences are present. The high-mass tail is less broadened in SMASH, and the opposite happens in the low-mass tail. This is likely due to the ``tree-level'' character of hadronic transport. Quantum corrections -- loops -- are taken into account in the matching of elementary cross-sections to experimental data; that is, only vacuum corrections are correctly described. The medium effects on these diagrams are not present, unlike in the in-medium model description, which modifies the propagator self-consistently, including interference terms. \subsection{Collision systems} Next, the emergence of collisional broadening is studied in the off-equilibrium matter created by low-energy nuclear collisions, following the selection of the HADES collaboration. Experiments assess the medium effects in these systems by analyzing the excess dilepton yields in comparison to a hadronic cocktail \cite{arnaldi2006first}.
The present study is restricted to low beam energies, where the evolution is appropriately described by the kinetic transport approach. Several nuclear systems are considered to obtain insight about how the broadening depends on system size, centrality, and beam energy. \begin{figure}[ht] \centering \includegraphics[width=0.98\linewidth]{mass_nom_HIC.pdf}\caption{Nominal spectral function of $\rho$ mesons in different collision systems.}\label{fig:HIC_nominal} \end{figure} The mass distribution of $\rho$ mesons for nuclear collisions at HADES beam energies, shown in Fig. \ref{fig:HIC_nominal}, does not follow a thermal folding, because the system is very far from equilibrium. There is an enhanced production peak at $\sim0.5\ \mathrm{GeV}$, stemming from the decay $N^*(1520)\to N+\rho $, since the mass of this resonance does not allow it to produce a pole-mass $\rho$ \cite{schumacher2006theoretical,vogel2006reconstructing}. This is most apparent in the C+C collisions at $E_\mathrm{kin}=1$ AGeV, where the low beam energy changes the preferred production channel. This effect is purely kinematic, since the production of resonances does not take lifetimes nor in-medium effects into account\footnote{The contribution of $\pi\pi$ scatterings to the production of $\rho$ mesons is at most 20\%.}. \begin{figure}[ht] \centering \includegraphics[width=0.98\linewidth]{mass_width_HIC.pdf}\caption{Effective width of $\rho$ mesons in different collision systems. Error bands are statistical.}\label{fig:HIC_width} \end{figure} The effective width shown in Figure \ref{fig:HIC_width} reveals system size as a dominant factor. In more central collisions the dynamic width is larger, as evidenced by the four centrality classes of Au+Au collisions. Similarly, heavier nuclei reveal a stronger collisional broadening. The p+p collision follows the vacuum decay, since essentially no medium is formed with which the $\rho$ mesons could interact.
However, system size is not the only factor, given that a Au+Au collision at 30-40\% broadens as much as a central Ag+Ag, even though it has far fewer participants on average\footnote{Using the Woods-Saxon model with an inelastic NN-cross section of $\sigma_{NN}=30$ mb, a 30-40\% Au+Au collision has $\sim98$ participants on average, whereas a central Ag+Ag collision has $\sim168$.}. The beam energy plays a role as well: the higher velocity of the ions in the larger system lets the medium dissipate faster, leading to fewer binary collisions overall. This is also seen in C+C collisions: the width at $E_\mathrm{kin}=1$ AGeV is larger than at $2$ AGeV. \begin{figure}[ht] \centering \includegraphics[width=0.98\linewidth]{mass_dyn_HIC.pdf}\caption{Dynamic spectral function of $\rho$ mesons in different collision systems.}\label{fig:HIC_dynamic} \end{figure} Unlike in the thermal system, the broadening below $m\sim0.5$ GeV always decreases, because most low-mass resonances are created in the later stages of the collision, when the medium is already dilute. Furthermore, the peak mass of the dynamic spectral function, shown in Fig. \ref{fig:HIC_dynamic}, does not shift to a great extent, as these nuclear systems do not reach temperatures comparable to the selected box values of Fig. \ref{fig:box_dynamic}. \subsubsection*{Time evolution} It is also interesting to observe how the collisional broadening evolves with the expansion of the fireball. This is seen by computing \eqref{Broadening:Gamma_eff_def} as a function of the time $t_i$ at which the resonance is created (in the computational frame). The corresponding width is shown in Fig. \ref{fig:HIC_evolution}, normalized by the width at the pole.
\begin{figure}[ht] \centering \includegraphics[width=0.98\linewidth]{timedep_HIC_width.pdf}\caption{Time evolution of the effective width of $\rho$ mesons in different collision systems.}\label{fig:HIC_evolution} \end{figure} One way to understand this evolution is by probing the lifetime of the medium itself. This is done in Fig. \ref{fig:HIC_evolution_density}, where $n_0=0.16~\mathrm{fm}^{-3}$ is the nuclear ground state density, by using the average hadron density $n_\mathrm{h}$ as a proxy. The densities in SMASH are computed at interactions, so the hadron density is chosen to be calculated at $t_i$. \begin{figure}[ht] \centering \includegraphics[width=0.98\linewidth]{density_ev_HIC.pdf}\caption{Time evolution of the average hadron density at the interaction in which a $\rho$ meson is removed from the system.}\label{fig:HIC_evolution_density} \end{figure} During the first few fm of the collision, the effective width grows as the nuclei traverse each other, until the number of binary collisions is maximal. This occurs when $n_\mathrm{h}$ is largest, and so the position of this maximum changes with nuclear mass and beam energy. As the medium expands, it becomes more dilute and consequently the effective width decreases. In light of this, the difference in the effective widths of 30-40\% Au+Au and central Ag+Ag systems -- which have a similar mass-dependent width in Figure \ref{fig:HIC_width} -- becomes evident. Having more participating nucleons, the initial broadening in the latter is larger, but it decreases faster than in the former. At later times, particles are essentially travelling in vacuum, hence $\Gamma^\mathrm{eff}\approx\Gamma^\mathrm{dec}$. Since higher masses have higher vacuum decay widths (as per Figs. \ref{fig:box_width} and \ref{fig:HIC_width}), and $\avg{m}<M_0$ as seen in Fig. \ref{fig:HIC_evolution_mass}, the widths in all systems fall to values below $\Gamma_0=0.149$ GeV.
This also explains why, for instance, the width in a C+C collision falls below the one in p+p at this stage. \begin{figure}[ht] \centering \includegraphics[width=0.98\linewidth]{mass_ev_HIC.pdf}\caption{Time evolution of the average $\rho$ meson mass in different nuclear collisions.}\label{fig:HIC_evolution_mass} \end{figure} \subsubsection*{Density dependence} Figures \ref{fig:HIC_evolution} and \ref{fig:HIC_evolution_density} in the previous subsection suggest a monotonic dependence of the effective width on the local hadron density, as seen in Figure \ref{fig:HIC_width_density}. Here, the density is computed at the final interaction (at time $t_f$) of the resonance, so as to probe the medium conditions where the $\rho$ is absorbed or decays. This is an approximate way of probing $\Gamma^\mathrm{eff}(n_\mathrm{h})$, as it does not consider how the density changes during propagation. \begin{figure}[ht] \centering \includegraphics[width=0.98\linewidth]{density_width_HIC.pdf}\caption{Density dependence of the effective width of $\rho$ mesons in different collision systems.}\label{fig:HIC_width_density} \end{figure} A near-universal relation is recognizable, with most systems following similar curves, while in Figs. \ref{fig:HIC_width} and \ref{fig:HIC_evolution} the widths of the $\rho$ mesons are distinct in different systems. This universality is reminiscent of the semiclassical calculation $\Gamma^\mathrm{col}\sim \gamma n_N\avg{v\sigma_{VN}^\mathrm{tot}}$, used in models in which the mass is not constant and the collisional width is proportional to the local nucleon density \cite{bratkovskaya2008dilepton, larionov2020dilepton}. A deviation from this appears in the p+p and C+C systems, when the density is high.
This is because the density calculation starts to break down for very dilute systems; an influence of the reaction partners in the specific binary interaction cannot be ruled out, as there are never many particles close to the interaction point, which is potentially the reason for the spuriously high densities seen for such small systems. \subsection{Non-equilibrium effects} Having quantified the collisional broadening in these two different scenarios, it is tempting to ask: how does the effective width in a non-equilibrated collision system compare to the thermalized value of a box? \begin{figure}[ht] \centering \includegraphics[width=0.95\linewidth]{mass_broad_nonequilibrium.pdf}\caption{Collisional width of a central Au+Au collision at $E_\mathrm{kin}=1.23$ AGeV, in the full phase space and restricted to a region of constant $(T,\mu_B)$; and in a box initialized with these thermodynamic parameters.}\label{fig:noneq_width} \end{figure} There is a spacetime region of a heavy-ion collision in which the temperature and chemical potential\footnote{These are computed by assuming the HRG equation of state with local densities of energy and baryon number.} are nearly constant \cite{staudenmaier2018dilepton}. This makes it possible to compare the effective width in a collision system with that in a box at the same thermodynamic conditions (shown in Figure \ref{fig:noneq_width}). In the central cell of a Au+Au collision, the system remains in the same thermodynamic state of around $(T,\mu_B)=(80,900)\ \mathrm{MeV}$ early in the collision, in the interval $7\leq t\leq15\ \mathrm{fm}$ \cite{staudenmaier2018dilepton}. The system is much denser on average in this spacetime region, so the width is correspondingly enhanced. Since this restriction removes the late-stage resonances, the collisional width at the hadronic threshold is no longer approximately zero. A thermalized hadron gas at the same $(T,\mu_B)$ manifests a similar broadening for masses above $0.5$ GeV.
Below this value, the collision system displays a larger broadening, meaning that the non-equilibrium character of a heavy-ion system results in a suppression of the lifetimes of low-mass resonances. The suppression is explained by considering the hadronic content of each scenario in Fig. \ref{fig:noneq_species}. The invariant yield is computed by considering the average number of particles over the relevant region of the phase space, and normalizing by the size of that region. The species displayed are first ordered by multiplicity in the central cell of the Au+Au collision up to the $N^*(1535)$, then by the multiplicity of the remaining species in the box. During the interval of constant thermodynamic conditions, the central cell of the nuclear system is composed mostly of baryons, being dominated by nuclear and $\Delta$-baryon resonances. As discussed in the introduction \cite{agakichiev2010origin,rapp2009chiral,salabura2021dilepton}, these couple strongly with low-mass $\rho$ mesons, enhancing the effective width. These particles are not as abundant in the thermal gas. Instead, the energy is distributed in the form of lighter hyperons, since the multiplicity is $\propto e^{-m/T}$ in an equilibrated system. Such particles do not couple as much with the $\rho$, and hence do not increase the width in the thermal box. \section{Conclusion}\label{sec:Conclusion} In this work, the collisional broadening of $\rho$ mesons was investigated and quantified by computing their effective width via a lifetime analysis. The employed transport approach, SMASH \cite{weil2016particle}, relies on vacuum properties of hadrons, so that the mass distribution is given by a vacuum Breit-Wigner function adjusted to the kinematically available energy. The ``dynamic'' spectral function is computed using the effective width, as the collisional broadening emerges from absorption processes as part of the evolution of the hadronic medium.
First, an infinite hadron gas in equilibrium has been calculated for different temperatures and baryochemical potentials. The resulting spectral functions are compared to full in-medium model calculations \cite{rapp1999low}, which take into account modifications to self-energies and higher-order interactions; a qualitatively similar broadening is observed. The shapes, as expected, are numerically different. This shows that the collisional broadening in SMASH alone is not enough to reproduce the experimental dilepton yield in heavy-ion collisions, so other methods using full in-medium modifications need to be applied, such as coarse-graining \cite{staudenmaier2018dilepton}. Furthermore, the emergence of collisional broadening is studied in non-equilibrium systems created by nuclear collisions (pp, CC, ArKCl, AgAg, AuAu) at different beam energies and centrality classes. The effective width exhibits a clear dependence on system size, as a larger medium enhances the broadening. It also reveals that larger beam energies lead to smaller widths, which is understood through the time evolution of the system. These observations are caused by a universal dependence on the local hadron density. Lastly, the two scenarios are compared in order to assess non-equilibrium effects. This has been achieved by simulating a box with similar thermodynamic conditions to those present in a spacetime region of Au+Au collisions. Above $m\sim0.5$ GeV the observed broadening is similar, while below it the collision system exhibits an enhanced width, which is explained by the different hadronic composition of the two scenarios. The lifetime analysis employed throughout this work can be used to understand how inelastic scatterings of vector mesons affect the decay width dynamically, and to quantify this effect in a transport approach in contrast to genuine in-medium modifications.
In the future, it will be interesting to investigate whether assumptions about the resonance properties, such as the exact way to calculate the decay probability, also affect the effective width and therefore potentially the emission of dileptons. \section*{Acknowledgments} This work was supported by the Helmholtz Forschungsakademie Hessen für FAIR (HFHF) and in part by the National Science Foundation (NSF) within the framework of the JETSCAPE collaboration, under grant numbers ACI-1550228 and OAC-2004571. The authors also acknowledge the support by the State of Hesse within the Research Cluster ELEMENTS (Project ID 500/10.006), and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project number 315477589 – TRR 211. Computational resources have been provided by the GreenCube at GSI. \onecolumngrid \begin{center} \begin{figure}[h] \includegraphics[width=0.75\linewidth]{HIC_composition.pdf}\caption{Multiplicity of each species in the thermodynamically stable region of a Au+Au collision, and in a box under the same $(T,\mu_B)$.} \label{fig:noneq_species} \end{figure} \end{center} \twocolumngrid
\section{Benchmark} \label{sec:benchmark} \subsection{Experimental Setting} \textbf{Data.} We split our proposed dataset into a train set and a test set, with an approximate ratio of $4:1$. For data augmentation, we utilize online random flips (vertical and horizontal) and Gaussian blur during training. To fit the input size of the networks, all images are resized to $224 \times 224$. \textbf{CNNs.} We conduct two types of experiments using six state-of-the-art CNNs: AlexNet, VGG-16, VGG-19, ResNet-34, ResNet-50 and ResNet-101. The two types are pre-training initialization and training from scratch. All CNNs are imported from PyTorch \cite{pytorch}. In the case of pre-training initialization, since all networks are pre-trained on ImageNet \cite{imagenet_cvpr09}, which includes $1,000$ classes, we adjust their final output layers to $19$ units accordingly, i.e. the number of species in our dataset. For simplicity and consistency, we set the same hyperparameters for all CNNs: a learning rate of $1.0\times10^{-3}$, a weight decay of $10^{-4}$, a batch size of $32$, a cross-entropy loss function and SGD as the optimizer. We train all CNNs for $100$ epochs with pre-training initialization, and for $150$ epochs when training from scratch. In both cases, we select the trained models with the best performance for fair comparisons. We use a desktop workstation with one NVIDIA TITAN V and one NVIDIA QUADRO RTX 6000 for training. \textbf{Evaluation metric.} To evaluate the classification or recognition outcomes, we employ two common metrics: \textit{Mean Class Accuracy} and \textit{Overall Accuracy}, abbreviated as Mean Cls. Acc. and Overall Acc., respectively, in the tables. In particular, Mean Class Accuracy evaluates the average accuracy over all classes, while Overall Accuracy is the average accuracy over all sample images. 
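Concretely, given per-image correctness indicators grouped by class, the two metrics (formalized below) can be computed as in the following minimal NumPy sketch; the function and variable names are our own, not part of the benchmark code:

```python
import numpy as np

def mean_class_and_overall_accuracy(correct_per_class):
    """correct_per_class: one 0/1 array per class, where entry j of
    array i indicates whether the j-th image of the i-th class was
    classified correctly."""
    per_class = [float(np.mean(c)) for c in correct_per_class]  # accuracy of each class
    mean_cls_acc = sum(per_class) / len(per_class)              # average over classes
    total_correct = sum(int(np.sum(c)) for c in correct_per_class)
    total_images = sum(len(c) for c in correct_per_class)
    overall_acc = total_correct / total_images                  # average over all images
    return mean_cls_acc, overall_acc
```

With imbalanced classes, as in our dataset, the two metrics can differ substantially: a class with few images weighs as much as a large class in Mean Class Accuracy, but contributes little to Overall Accuracy.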
They are defined as \begin{equation}\label{eq:metric} \begin{aligned} acc_{class} = \frac{1}{c}\sum_{i=1}^c \frac{1}{n_i} \sum_{j=1}^{n_i} a_j^i,\\ acc_{overall} = \frac{\sum_{i=1}^c{\sum_{j=1}^{n_i}} a_j^i } {\sum_{i=1}^c n_i}, \end{aligned} \end{equation} where $acc_{class}$ denotes Mean Class Accuracy, and $acc_{overall}$ indicates Overall Accuracy. $c$ is the number of classes, i.e. $19$ in this work. $n_i$ is the number of images in the $i$-th class, and $a_j^i \in \{0, 1\}$ indicates whether the $j$-th image in the $i$-th class is correctly classified. Note that the train set and test set of our dataset are kept separate for the metric calculation, and we report both metrics on the test set. \begin{table*}[thbp] \centering \caption{ Classification or recognition results of different CNNs on our dataset. Numbers before $/$ denote the test results of using pre-trained models (on ImageNet) for training CNNs on our proposed dataset. Numbers after $/$ indicate the test results of training CNNs on our dataset from scratch. All numbers are in $\%$. We can see that the ResNet series are usually better than other CNNs on our dataset. ``sp.'' represents ``species''. ``Mean Cls. Acc'' and ``Overall Acc.'' denote Mean Class Accuracy and Overall Accuracy, respectively. 
}\label{table:classificationaccuracy} \begin{tabular}{l c c c c c c} \toprule Species & \tabincell{l}{AlexNet} & \tabincell{l}{VGG16} & \tabincell{l}{VGG19} & \tabincell{l}{ResNet34} & \tabincell{l}{ResNet50} & \tabincell{l}{ResNet101} \\ \midrule \tabincell{l}{\textit{Acrobeles} sp.} & 28.6/0.0 & 50.0/21.4 & 57.1/28.6 & 64.3/28.6 & 64.3/7.1 & 71.4/21.4 \\ \tabincell{l}{\textit{Acrobeloides} sp.} & 13.9/36.1 & 33.3/16.7 & 58.3/16.7 & 66.7/30.6 & 77.8/27.8 & 80.6/8.3 \\ \tabincell{l}{\textit{Amplimerlinius} sp.} & 0.0/0.0 & 0.0/0.0 & 0.0/0.0 & 20.0/0.0 & 20.0/0.0 & 20.0/0.0 \\ \tabincell{l}{\textit{Aphelenchoides} sp.} & 49.3/26.1 & 84.1/37.7 & 92.8/36.2 & 87.0/30.4 & 85.5/65.2 & 91.3/55.1 \\ \tabincell{l}{\textit{Aporcelaimus} sp.} & 32.0/16.0 & 16.0/32.0 & 8.0/44.0 & 72.0/4.0 & 76.0/8.0 & 36.0/0.0 \\ \tabincell{l}{\textit{Axonchium} sp.} & 82.4/50.0 & 82.4/52.9 & 79.4/64.7 & 82.4/50.0 & 85.3/47.1 & 91.2/61.8 \\ \tabincell{l}{\textit{Discolimus} sp.} & 0.0/0.0 & 0.0/0.0 & 0.0/0.0 & 0.0/0.0 & 7.7/0.0 & 0.0/0.0 \\ \tabincell{l}{\textit{Ditylenchus} sp.} & 73.5/39.7 & 79.4/77.9 & 88.2/69.1 & 72.1/67.7 & 88.2/64.7 & 85.3/57.4 \\ \tabincell{l}{\textit{Dorylaimus} sp.} & 14.3/0.0 & 28.6/14.3 & 42.9/14.3 & 57.1/14.3 & 42.9/0.0 & 42.9/0.0 \\ \tabincell{l}{\textit{Eudorylaimus} sp.} & 35.3/0.0 & 82.3/17.7 & 58.8/11.8 & 64.7/23.5 & 64.7/5.9 & 58.8/0.0 \\ \tabincell{l}{\textit{Helicotylenchus} sp.} & 40.0/0.0 & 53.3/20.0 & 93.3/6.7 & 80.0/26.7 & 93.3/20.0 & 93.3/20.0 \\ \tabincell{l}{\textit{Mesodorylaimus} sp.} & 10.5/0.0 & 15.8/0.0 & 21.1/0.0 & 47.4/0.0 & 57.9/0.0 & 26.3/0.0 \\ \tabincell{l}{\textit{Miconchus} sp.} & 45.5/0.0 & 90.9/45.5 & 54.6/18.2 & 90.9/27.3 & 90.9/9.1 & 100.0/18.2 \\ \tabincell{l}{\textit{Mylonchulus} sp.} & 62.1/0.0 & 89.7/13.8 & 75.9/6.9 & 93.1/0.0 & 96.6/0.0 & 96.6/0.0 \\ \tabincell{l}{\textit{Panagrolaimus} sp.} & 38.5/23.1 & 61.5/16.9 & 73.9/12.3 & 76.9/18.5 & 73.9/16.9 & 83.1/12.3 \\ \tabincell{l}{\textit{Pratylenchus} sp.} & 80.0/65.0 & 90.0/63.3 
& 88.3/65.0 & 98.3/60.0 & 96.7/60.0 & 98.3/61.7 \\ \tabincell{l}{\textit{Pristionchus} sp.} & 69.2/7.7 & 74.4/71.8 & 64.1/66.7 & 79.5/59.0 & 69.2/61.5 & 71.8/59.0 \\ \tabincell{l}{\textit{Rhbiditis} sp.} & 60.0/0.0 & 66.7/20.0 & 53.3/26.7 & 46.7/6.7 & 46.7/46.7 & 93.3/26.7 \\ \tabincell{l}{\textit{Xenocriconema} sp.} & 36.4/0.0 & 100.0/27.3 & 72.7/27.3 & 100.0/0.0 & 100.0/0.0 & 81.8/0.0 \\ \tabincell{l}{\textbf{Mean Cls. Acc.}} & 40.6/13.9 & 57.8/28.9 & 57.0/27.1 & 68.4/23.5 & 70.4/23.2 & 69.6/21.1 \\ \tabincell{l}{\textbf{Overall Acc.}} & 50.7/24.6 & 67.0/38.6 & 69.4/36.8 & 76.1/33.3 & 78.6/36.4 & 79.0/32.8 \\ \bottomrule \end{tabular} \end{table*} \subsection{Species Recognition} We perform species identification (or recognition) of nematodes using the six CNNs above on our proposed dataset. The classification results are reported in Table \ref{table:classificationaccuracy}. The results consist of the accuracy for recognizing each species of nematodes, the mean class accuracy and the overall accuracy. Note that the numbers before $/$ are the results of fine-tuning the pre-trained models on our dataset, while the numbers after $/$ are obtained by training each CNN on our dataset from scratch. It is clear that training from scratch produces inferior test results to fine-tuning with pre-training. On the one hand, this follows the general observation that a good weight initialization helps improve training stability and outcomes. On the other hand, it also reveals the challenges of our dataset for these state-of-the-art CNNs, which is evidenced by the poor classification accuracy ($<40.0\%$) when training from scratch as well as the limited recognition accuracy ($<80.0\%$) when fine-tuning. \subsubsection{Pre-training initialization} It can be seen from Table \ref{table:classificationaccuracy} that the ResNet series often outperform other CNNs in the case of pre-training initialization. 
This is probably because the residual concept makes the training of deep networks easier than in other CNNs. The highest accuracy is $79.0\%$, achieved by fine-tuning ResNet101. By contrast, AlexNet obtains the lowest recognition accuracy, $50.7\%$. VGG16 and VGG19 achieve $67.0\%$ and $69.4\%$, respectively. With respect to species, we see that the highest test accuracy is $100.0\%$, reported for \textit{Xenocriconema} sp. and \textit{Miconchus} sp. and achieved by the ResNet family and VGG16. By contrast, nearly all CNNs fail to identify \textit{Discolimus} sp. correctly. This is probably because the images of this species have few discriminative features and CNNs mistake them for other species. In general, the species with more images are more likely to be correctly recognized. As shown in Table \ref{table:classificationaccuracy}, the classifiers tend to perform well on the species for which sufficient data is provided. Although the number of samples is important for test accuracy, CNNs may also be able to recognize species with less training data. For instance, the numbers of samples for two different species, \textit{Panagrolaimus} sp. and \textit{Miconchus} sp., are $326$ and $57$ respectively, but most CNNs produce higher test accuracy in identifying \textit{Miconchus}. This may be attributed to high image quality and discriminative features or patterns in the images, which provide useful information for the CNNs to learn. \subsubsection{Training from scratch} Regarding training from scratch, all CNNs lead to poor recognition outcomes, below $40.0\%$. Specifically, AlexNet attains the lowest recognition accuracy, $24.6\%$. The other CNNs are somewhat better than AlexNet, achieving over $30.0\%$ accuracy. We further notice that effective features are learned for only a few species, for example, \textit{Pristionchus}, \textit{Pratylenchus}, \textit{Ditylenchus} and \textit{Axonchium}. 
We deduce that this is due to the large number of images for these species. By contrast, other species have fewer samples and/or less discriminative features/patterns. We observe that the recognition accuracy of some species is even $0\%$. We suspect that our dataset involves challenging patterns or features (e.g. twisted worms, random poses of worms, disturbance information, etc.) for learning, and that a random weight initialization is too arbitrary to learn these patterns or features. Thanks to the pre-training initialization, we achieve much better recognition outcomes than by training from scratch. \begin{table}[thbp] \centering \caption{ Comparisons of different augmentation strategies. ``Mean Cls. Acc'' and ``Overall Acc.'' denote Mean Class Accuracy and Overall Accuracy, respectively. }\label{table:augdistricmp} \begin{tabular}{l c c c} \toprule Accuracy (\%) & None & \tabincell{l}{Flip} & \tabincell{l}{Flip \& blur} \\ \midrule Mean Cls. Acc. (ResNet34) & 61.3 & 68.0 & 68.4 \\ Overall Acc. (ResNet34) & 71.9 & 76.1 & 76.1 \\ Mean Cls. Acc. (ResNet50) & 58.1 & 65.0 & 70.4 \\ Overall Acc. (ResNet50) & 68.7 & 74.1 & 78.6 \\ Mean Cls. Acc. (ResNet101) & 68.0 & 66.1 & 69.6 \\ Overall Acc. (ResNet101) & 76.8 & 76.8 & 79.0 \\ \bottomrule \end{tabular} \end{table} \subsection{Discussion} Besides the above species recognition results, we also discuss the effect of augmentation and the research supported by our dataset and benchmark. The former compares no augmentation, a single type of augmentation (random flip) and two types of augmentation (random flip, Gaussian blur). The latter explains some crucial research directions that can benefit from our work. \textbf{Augmentation.} As mentioned above, we augment data during training. We implement two types of augmentation: random flip (vertical and horizontal) and Gaussian blur. We use the ResNet series in the case of pre-training initialization for this experiment. 
We observe from Table \ref{table:augdistricmp} that augmentation is able to improve the classification accuracy. The complete augmentation (random flip, Gaussian blur) attains a better accuracy than both the single augmentation (random flip) and no augmentation, and the single flip augmentation usually obtains a better accuracy than no augmentation. Though there is some improvement, it is not significant and the highest accuracy is still below $80.0\%$. This further demonstrates the challenges of our proposed dataset, and that augmentation can hardly compensate for these challenges. \textbf{Research to support.} Our proposed dataset and benchmark can serve many relevant research activities, some of which are listed as follows. \begin{itemize} \item \textit{Pest control.} Crop rotation is the practice of planting different crops sequentially on the same plot of land, in order to improve soil health and control pests. The success of crop rotation relies on a proper selection of an alternative non-host crop for the pest. For the control of plant-parasitic nematodes, the nematode species as prior information is essential, since different species usually have differing host ranges. \item \textit{Soil ecology.} Soil nematodes are abundant in number and respond sensitively to pollutants and environmental disturbance. Therefore the presence and abundance of certain soil nematode species is one of the most important bio-indicators to evaluate soil health, quality, and physical or pollution-induced disturbances. In this direction, species recognition is critical and our work supports it. \item \textit{Bio-geography.} Many nematode species are cosmopolitan. Their worldwide distribution is affected by biological and geographic factors. Understanding the involved species and their assemblage (e.g. species recognition) is key to interpreting the interactions of these factors, and how they contribute to the environment. 
Our work supports this direction as well. \item \textit{3D modeling.} 2D image-based understanding of nematodes may lose some important information. 3D reconstruction and rendering of nematodes would give researchers more freedom in understanding nematodes. These can be based on our proposed image dataset (e.g. 3D reconstruction from 2D images). \end{itemize} \subsection{Future Work} The above results and discussion reveal the challenges of our dataset, especially for training a CNN from scratch. We hope this will motivate researchers to analyze the properties of our data and propose innovative techniques (e.g. CNNs, data balancing methods, etc.) in order to attain higher classification accuracy. In the future, we would like to add new data, and invite users to submit their test data, to our dataset. With more samples and species gathered, this dataset will be expanded to advance the species recognition of nematodes in the foreseeable future. \section{Conclusion} \label{sec:conclusion} In this paper, we have presented an open-access imaging dataset for nematode recognition. The dataset has been collected and annotated manually, requiring rigorous effort and time. We have investigated the efficacy of state-of-the-art deep learning methods on nematode recognition, a \textit{biological computing} task. We have conducted a benchmark of various state-of-the-art Convolutional Neural Networks (CNNs) on this dataset, and discussed and analyzed the results. We found that augmentation and pre-training initialization can help boost the performance of species recognition. However, there is still considerable room for improvement, given the current highest overall accuracy of $79.0\%$. It is possible to improve the performance by designing sophisticated deep learning networks. We hope our dataset and benchmark will be instructive to relevant researchers from different fields in their future research. 
\section{Our Dataset: I-Nema} \label{sec:dataset} \subsection{Overview} \label{sec:overview} In this section, we explain how we create our dataset, referred to as ``I-Nema''. At first, we conduct a country-wide soil sample collection in order to gather specimens representing different lineages in the tree of Nematoda \cite{holterman2006phylum}. The individual nematodes are extracted from soil, fixed, and subsequently processed into glass slide specimens with $10\sim20$ nematode worms per glass slide. The species is manually identified and the undesirable taxa are excluded. Images are then captured for each selected specimen using the camera attached to a microscope. The identified species label is then assigned to the involved images. We finally perform some pre-processing on the initially collected images. Figure \ref{fig:overview} shows the pipeline for creating I-Nema. \subsection{Sample Collection} \label{sec:samplecollection} Sample collection consists of soil collection, nematode extraction and specimen preparation. Soil samples are collected from various ecosystems in Mainland China. Aside from collecting samples in the natural environment, we also use nematode cultures maintained in the laboratory (two species: \textit{Pratylenchus} sp. and \textit{Acrobeloides} sp.). For soil samples, nematodes are extracted with a Baermann tray after an incubation at room temperature for 24 hours. The nematode extract is concentrated using a 500 mesh sieve ($25\,\mu$m opening). For the laboratory-maintained species, nematodes are directly washed from either carrot discs (\textit{Pratylenchus} sp.) or culture medium (\textit{Acrobeloides} sp.). After removing water, nematodes are fixed with $4\%$ formalin and gradually transferred to anhydrous glycerin, following the protocol \cite{sohlenius1987vertical} which is modified based on \cite{seinhorst1962killing}. The glycerin-preserved nematodes are manually picked up and placed in a glass slide. 
Each permanent slide contains about $10\sim20$ individual worms. \subsection{Specimen Recognition} \label{sec:specimenid} The species is manually identified before capturing images. This is to collect diverse species of nematodes, rather than simply increasing the number of images of a few species. Nematodes are identified based on morphological characters (e.g. the presence of teeth or a stylet in the buccal cavity, the shape of the pharynx and tail, the type of reproductive system, etc.) and morphometric measurements (e.g. the length of the stylet, body width and length, tail length, etc.). These characters can be straightforwardly observed through microscopy and/or measured in a professional software connected to the camera equipped with the microscope. The acquired information is checked against the available taxonomic keys (e.g. \cite{bongers1988nematoden,andrassy2005free}) and is further confirmed with the original descriptions. The recovered taxon is considered undesirable and excluded if it is rare in population (i.e. it is difficult to acquire sufficient specimens) or evolutionarily redundant with an already captured species. If the taxon is abundant in number and represents a different evolutionary lineage, we continue with the following image acquisition step. We selected and identified a total of $2$ laboratory cultured species (\textit{Pratylenchus} sp. and \textit{Acrobeloides} sp.) and $17$ naturally collected species (see Table \ref{table:datasetstatistics}), covering $16$ families of common soil species and all nematode trophic guilds. Figure \ref{fig:overview} shows the microscope setup for the manual specimen identification. 
\begin{figure}[htbp] \centering \begin{minipage}[b]{0.32\linewidth} \subfigure[]{\label{}\includegraphics[width=1\linewidth]{figures/Apor.png}} \end{minipage} \begin{minipage}[b]{0.32\linewidth} \subfigure[]{\label{}\includegraphics[width=1\linewidth]{figures/Apor_head}} \end{minipage} \begin{minipage}[b]{0.32\linewidth} \subfigure[]{\label{}\includegraphics[width=1\linewidth]{figures/Apor_tail}} \end{minipage} \\ \begin{minipage}[b]{0.32\linewidth} \subfigure[]{\label{}\includegraphics[width=1\linewidth]{figures/Mylonchulus3.jpg}} \end{minipage} \begin{minipage}[b]{0.32\linewidth} \subfigure[]{\label{}\includegraphics[width=1\linewidth]{figures/Mylonchulus-head1.jpg}} \end{minipage} \begin{minipage}[b]{0.32\linewidth} \subfigure[]{\label{}\includegraphics[width=1\linewidth]{figures/Mylonchulus-head2.jpg}} \end{minipage} \\ \begin{minipage}[b]{0.32\linewidth} \subfigure[]{\label{}\includegraphics[width=1\linewidth]{figures/Pristionchus1.jpg}} \end{minipage} \begin{minipage}[b]{0.32\linewidth} \subfigure[]{\label{}\includegraphics[width=1\linewidth]{figures/Pristionchus-head.png}} \end{minipage} \begin{minipage}[b]{0.32\linewidth} \subfigure[]{\label{}\includegraphics[width=1\linewidth]{figures/Pristionchus3.jpg}} \end{minipage} \caption{ Examples of nematode images used in this study. (a-c): \textit{Aporcelaimus} sp.; (d-f): \textit{Mylonchulus} sp.; (g-i): \textit{Pristionchus} sp.; (a), (b), (e), (f), (g), (h): head region; (c), (d): tail region; (i): middle body. } \label{fig:sampleimages} \end{figure} \subsection{Image Acquisition and Annotation} We capture images after the step of identifying specimens. The glass slides are examined and photographed using an Olympus BX51 DIC Microscope (Olympus Optical, Tokyo, Japan) equipped with a camera. The system (see Figure \ref{fig:overview}) has also been utilized for specimen identification in Section \ref{sec:specimenid}. The procedure for image acquisition is as follows. 
The nematode in the specimen is moved to the view center of the microscope, to facilitate further image alignment. Head regions are given a higher priority, as they involve more informative taxonomic characters. Tail and middle body regions are also photographed, since these regions are also informative for species recognition and easy to locate. For \textit{Mylonchulus} sp., \textit{Miconchus} sp., \textit{Pristionchus} sp. and \textit{Acrobeles} sp., $5\sim10$ images are taken at different image planes to ensure that each of the taxonomic characters is properly captured, as these species have contrasting morphological structures at different layers. For the other species, $3\sim5$ images are taken from the lowest, middle and highest image planes, which allows large variations to be captured. Both juveniles and adult worms are included when capturing images. We eventually acquire a total of $2,769$ images, ranging from $24$ to $347$ images per species. As for image annotation, since the species of a specimen has been identified in the previous step, the species label can simply be allocated to the acquired images of the corresponding nematodes. Table \ref{table:datasetstatistics} shows some statistics of our proposed image dataset (I-Nema), and Figure \ref{fig:sampleimages} shows some nematode images from I-Nema. As shown in the histogram (Figure \ref{fig:imagehistogram}), the data distribution is diverse. Species with fewer than $114$ images account for more than half of the whole dataset, while three species have over $324$ images each, and two species have fewer than $54$ images. Species with between $84$ and $294$ images show little variation (except blanks) in terms of the number of classes. \begin{table}[thbp] \centering \caption{ Numbers of image samples for each species in our dataset. ``sp.'' represents ``species''. ``\#. Samples'' denotes the number of samples. }\label{table:datasetstatistics} \begin{tabular}{l c} \toprule Species & \#. 
Samples \\ \midrule \tabincell{l}{\textit{Acrobeles} sp.} & 71 \\ \tabincell{l}{\textit{Acrobeloides} sp.} & 184 \\ \tabincell{l}{\textit{Amplimerlinius} sp.} & 24 \\ \tabincell{l}{\textit{Aphelenchoides} sp.} & 347 \\ \tabincell{l}{\textit{Aporcelaimus} sp.} & 128 \\ \tabincell{l}{\textit{Axonchium} sp.} & 170 \\ \tabincell{l}{\textit{Discolimus} sp.} & 64 \\ \tabincell{l}{\textit{Ditylenchus} sp.} & 330 \\ \tabincell{l}{\textit{Dorylaimus} sp.} & 38 \\ \tabincell{l}{\textit{Eudorylaimus} sp.} & 86 \\ \tabincell{l}{\textit{Helicotylenchus} sp.} & 77 \\ \tabincell{l}{\textit{Mesodorylaimus} sp.} & 96 \\ \tabincell{l}{\textit{Miconchus} sp.} & 57 \\ \tabincell{l}{\textit{Mylonchulus} sp.} & 139 \\ \tabincell{l}{\textit{Panagrolaimus} sp.} & 326 \\ \tabincell{l}{\textit{Pratylenchus} sp.} & 286 \\ \tabincell{l}{\textit{Pristionchus} sp.} & 196 \\ \tabincell{l}{\textit{Rhbiditis} sp.} & 81 \\ \tabincell{l}{\textit{Xenocriconema} sp.} & 69 \\ \tabincell{l}{\textbf{Total}} & 2,769 \\ \bottomrule \end{tabular} \end{table} \begin{figure}[htbp] \centering \begin{minipage}[b]{0.9\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/plot.png}} \end{minipage} \caption{ Histogram for our proposed dataset (number of images versus number of classes). } \label{fig:imagehistogram} \end{figure} \subsection{Image Pre-processing} \label{sec:preprocessing} The captured images suffer from several issues, for example, nematode worms appearing in different regions of the images and varying background colors. To alleviate these issues, we first crop the images based on the detected edge information. Specifically, we use the Canny edge detector \cite{canny1986computational}, which is sufficient to detect edges (i.e. the worm area) in each image, and calculate the min/max pixel coordinates of the detected edges. We crop each image according to the corresponding min/max pixel coordinates. Note that this is not a worm segmentation, as shown in Figure \ref{fig:preprocessing}. 
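The crop step can be sketched as follows. For a self-contained example we use a simple gradient-magnitude threshold as a stand-in for the Canny detector, and the threshold value is our assumption; an actual implementation would call a Canny routine from an image-processing library.

```python
import numpy as np

def crop_to_edges(gray, thresh=30.0):
    """Crop a grayscale image (2D array) to the bounding box of its
    strong-gradient pixels -- a stand-in for the Canny-based crop."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)                  # gradient magnitude per pixel
    rows, cols = np.where(mag > thresh)     # "edge" pixel coordinates
    if rows.size == 0:                      # no edges found: keep image
        return gray
    r0, r1 = rows.min(), rows.max()         # min/max pixel coordinates
    c0, c1 = cols.min(), cols.max()
    return gray[r0:r1 + 1, c0:c1 + 1]
```

Cropping to the min/max edge coordinates keeps the full rectangular region around the worm, which is why this step is not a segmentation: background pixels inside the bounding box survive.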
We then convert the cropped images to grayscale images. Figure \ref{fig:preprocessing} illustrates the image pre-processing procedure. \begin{figure}[htbp] \centering \begin{minipage}[b]{1.0\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/preprocessing.png}} \end{minipage} \caption{ Preprocessing of the initially captured images. We first crop the less informative regions and then process the result into a grayscale image. } \label{fig:preprocessing} \end{figure} \section{Introduction} \label{sec:introduction} Nematode worms are important for the following reasons: (1) parasitic nematodes threaten the health of plants on a global scale, causing at least $80$ billion US dollars of loss per year \cite{nicol2011current}; (2) interstitial nematodes pervade sediment and soil ecosystems in overwhelming numbers \cite{lambshead1993recent}; and (3) \textit{Caenorhabditis elegans} (\textit{C. elegans}) is a favourable experimental model system \cite{riddle1997developmental}. Accurate recognition or identification of nematodes is of great value for pest control (e.g. choosing a specific nematicide or rotation crop), soil ecology (e.g. evaluation of soil quality), bio-geography, habitat conservation and climate maintenance \cite{coomans2002present}. However, nematode recognition is challenging, due to high phenotypic plasticity (i.e. high morphological variations within a single species) \cite{coomans2002present,nadler2002species}, vague diagnostic characteristics \cite{derycke2008disentangling,erwin1982tropical}, and frequently encountered juveniles \cite{anderson2000nematode}. More importantly, manual identification is extremely time-consuming and labor-intensive, especially when dealing with large-scale ecological data, and often requires solid training and well-established experience. As a result, the taxonomy of nematode species appears to be a significant bottleneck for many relevant fields. 
\begin{table}[t]\fontsize{9pt}{\baselineskip}\selectfont \centering \caption{Datasets for Nematode recognition. \#. Samples and \#. Classes denote the numbers of samples and classes, respectively. Our proposed dataset is the first open-access one which involves both naturally isolated and laboratory cultured nematodes, covering diverse morphology and life strategies. }\label{table:datasetcontrast} \begin{tabular}{l c c c} \toprule Datasets & \tabincell{l}{ \#. Samples } & \tabincell{l}{ \#. Classes } & \tabincell{l}{ Open to public } \\ \midrule \tabincell{l}{\cite{liu2010-xray,liu2010-xraymultilinear}} & 50 & 5 & NO \\ \tabincell{l}{\cite{liu2018-cnn}} & 500 & 5 & NO \\ \tabincell{l}{\cite{liu2018-projection}} & 1,000 & 5 & NO \\ \tabincell{l}{\cite{Liu2017-1-imagefusion,liu2017-2-projection}} & 500 & 10 & NO \\ \tabincell{l}{\cite{zhou2006automatic}} & 480,000 frames & 8 & NO \\ \tabincell{l}{\cite{javer2018identification}-single} & 10,476 clips & 365 & NO \\ \tabincell{l}{\cite{javer2018identification}-multi} & 308 clips & 12 & NO \\ \tabincell{l}{\textbf{Our dataset}} & 2,769 & 19 & YES \\ \bottomrule \end{tabular} \end{table} Although molecular barcoding \cite{floyd2002molecular,blaxter2004promise} has been consolidated as a powerful tool for the species identification and biodiversity investigation, its applications depend on the availability of molecular biology facilities, sequencing instruments, as well as the background knowledge. 
Alternatively, imaging data are often more accessible and economical for broader users, and have been utilized for nematode research, such as nematode detection/segmentation \cite{silva2003intelligent,nguyen2006improved,ochoa2007contour,rizvandi2008machine,Rizvandi-2,Rizvandi-3,chou2017edge,wang2020celeganser,chen2020cnn}, classification \cite{javer2018identification,zhou2006automatic,liu2010-xray,liu2010-xraymultilinear,liu2017-informationfusion,Liu2017-1-imagefusion,liu2017-2-projection,liu2018-projection,liu2018-cnn}, nematode (egg) counting \cite{akintayo2018deep,holladay2016high}, etc. It should be noted that current identification or recognition research is either for nematode image stacks \cite{liu2010-xray,liu2010-xraymultilinear,liu2017-informationfusion,Liu2017-1-imagefusion,liu2017-2-projection,liu2018-projection,liu2018-cnn}, or predominantly designed for the model organism \textit{C. elegans} \cite{zhou2006automatic,javer2018identification}. However, the issues involved in these identification tasks are: (1) very few classes in the image stacks, e.g. $5$ classes in Table \ref{table:datasetcontrast}; (2) low diversity coverage, i.e. species limited to the laboratory cultured \textit{C. elegans} only (Table \ref{table:datasetcontrast}: \cite{zhou2006automatic,javer2018identification}). Importantly, to our knowledge, all relevant imaging data were used only in the authors' own research, and none of them are publicly available. In summary, an image dataset covering diverse kinds of nematodes and a standard benchmark using state-of-the-art deep learning techniques are still missing! Given the above analysis, we are motivated to create an image dataset for nematodes and complete a benchmark of state-of-the-art deep learning methods. Although laboratory cultured species are much easier to acquire, for example, \textit{C. 
elegans} completes its reproductive life cycle in $3.5$ days at $20^\circ$C and a huge population can be reproduced, our samples are mostly collected from the natural environment (many are not cultivable in the laboratory). As a result, the data collection is more time-consuming and labor-intensive (e.g. sampling in different ecosystems, manual nematode picking for target species in a mixed population, etc.) than using pure laboratory culture. In particular, we first collect soil from a wide range of natural environments, including temperate broadleaf and mixed forest, crop field, and tundra. The nematodes are then extracted and further placed in glass slides. With the aid of the microscope system, manual identification is performed to determine whether a nematode is needed for further image capturing. If it is needed, we continue image capturing using the microscope imaging system (a camera, a microscope and a software) and assign the manually identified species label to the involved images; it is discarded otherwise. We also take images for $2$ laboratory cultured species. We constrain each image to contain a single nematode worm during image capture. The dataset has a total of $2,769$ images and $19$ different species ($17$ species from the natural environment and $2$ laboratory species). To the best of our knowledge, \textit{this proposed dataset is the first open-access image dataset including diverse nematode species and life strategies (plant-parasitic, fungi feeder, bacteria feeder, omnivores, predator)}. In addition to the dataset, we further employ state-of-the-art deep learning networks for the species recognition of nematodes. We analyze the results in different respects: (1) different deep learning networks, and (2) pre-training as initialization versus training from scratch. We also conduct experiments and analysis on augmentation and discuss the research supported by our work. The \textit{contributions} of this work are summarized as follows. 
\begin{itemize} \item We propose an image dataset for diverse species of nematodes. It is, to our knowledge, the first open-access biological image dataset for diverse nematodes, representing different evolutionary lineages and life strategies (isolated both from the natural environment and the laboratory). \item We conduct a benchmark by adjusting and training state-of-the-art deep learning networks on our dataset. We compare and analyze their performance. We also discuss the research supported by our work. \end{itemize} \section{Related Work} \label{sec:relatedwork} In this section, we review the research most relevant to this work. We first look back on nematode detection/segmentation, then review species recognition of nematodes, and finally summarize some state-of-the-art CNNs. \subsection{Nematode Detection/Segmentation} \label{related-detection} An early nematode detection paper presented a computational system that supports the detection of nematodes in digital images \cite{silva2003intelligent}. Nguyen et al. proposed an improved watershed segmentation algorithm using water diffusion and local shape priors, which has been applied to the segmentation of nematode worms \cite{nguyen2006improved}. Ochoa et al. introduced an approach to extract a considerable number of individuals, even in cluttered regions, from population images \cite{ochoa2007contour}. Later, novel methods based on skeleton analysis were designed for the detection of individual nematode worms in population images in the presence of worm overlap \cite{rizvandi2008machine,Rizvandi-2,Rizvandi-3}. Recently, Chou et al. put forward an efficient CNN-based regression network for accurate edge detection \cite{chou2017edge}. Chen et al. proposed a framework for detecting worm-shaped objects in microscopic images using convolutional neural networks (CNNs) \cite{chen2020cnn}. The authors used curved lines as annotations rather than bounding boxes. Wang et al.
introduced a pipeline for automated analysis of \textit{C. elegans} imagery \cite{wang2020celeganser}, which detects and segments the worm and predicts the age of individual worms with high accuracy. \begin{figure*}[thbp] \centering \begin{minipage}[b]{1.0\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/overview.png}} \end{minipage} \caption{ The pipeline for creating our dataset. Soil samples are first collected (soil collection), and nematodes are then extracted from the soil (extraction). The nematodes are further picked up and placed on a glass slide (specimen preparation). After manual identification (identification), we determine which specimens will undergo image capturing and annotation (image acquisition and annotation). } \label{fig:overview} \end{figure*} \subsection{Nematode Recognition} \label{related-classification} A major proportion of nematode classification research is conducted on image stacks, with each stack (or each species) consisting of multiple focal planes of the specimen \cite{liu2010-xray,liu2010-xraymultilinear,liu2017-informationfusion,liu2018-projection,liu2018-cnn,Liu2017-1-imagefusion,liu2017-2-projection}. Various methods have been developed to handle classification based on image stacks, for example, information fusion based approaches \cite{liu2017-informationfusion,Liu2017-1-imagefusion}, a 3D X-Ray Transform based method \cite{liu2010-xray}, a 3D X-Ray Transform based multilinear analysis method \cite{liu2010-xraymultilinear}, projection-based methods \cite{liu2017-2-projection,liu2018-projection}, and a deep convolutional neural network (CNN) image fusion based multilinear approach \cite{liu2018-cnn}. Zhou et al. proposed an identification method for mutant types and other types, based on locomotion patterns \cite{zhou2006automatic}. Javer et al. presented a fully convolutional neural network to discern genetically diverse strains of C.
elegans, based only on their recorded spontaneous activity, and explored how its performance changes as different embeddings are used as input \cite{javer2018identification}. \textbf{Datasets.} \cite{liu2010-xray,liu2010-xraymultilinear} used $5$ species and $10$ samples per species for classification. \cite{liu2018-cnn} adopted $500$ samples from $5$ classes, and \cite{liu2018-projection} utilized $1,000$ samples from $5$ species. \cite{Liu2017-1-imagefusion,liu2017-2-projection} used $500$ samples of $10$ categories. \cite{zhou2006automatic} used a total of $480,000$ frames from the wild type and $7$ other mutant types of \textit{C. elegans}. \cite{javer2018identification} involves two datasets of \textit{C. elegans}: a single-worm dataset and a multi-worm dataset. The former includes $10,476$ video clips of individual worms divided among $365$ different classes. The latter contains a total of $308$ video clips from $12$ strains. We summarize the dataset information in Table \ref{table:datasetcontrast}. \textit{It is not surprising to collect a great number of samples for C. elegans, since C. elegans is a common organism with rapid reproduction in the laboratory. Also, the data are typically used in their own research and are not released to the public.} \subsection{Deep CNNs} \label{related-cnns} Nowadays, deep convolutional neural networks (CNNs) are powerful tools for data-driven research. State-of-the-art CNNs include AlexNet \cite{alexnet}, VGG \cite{vgg}, ResNet \cite{resnet}, Inception \cite{inception}, Xception \cite{xception}, etc. They are well defined, built on image grids, and capable of learning discriminative features from input images. In this work, we adjust AlexNet, VGG-16, VGG-19, ResNet-34, ResNet-50 and ResNet-101, train them on the training set, and test them on the test set of our dataset for the task of nematode recognition.
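When building such a benchmark, the train/test partition should keep every species represented in both sets. The sketch below is illustrative only: the record format (a list of image-path/species pairs) and the 80/20 ratio are assumptions, not the exact protocol used in this work.

```python
import random

def stratified_split(records, test_frac=0.2, seed=0):
    """Split (image_path, species) records so that each species keeps
    roughly the same train/test proportion (hypothetical helper)."""
    by_species = {}
    for path, species in records:
        by_species.setdefault(species, []).append(path)
    rng = random.Random(seed)
    train, test = [], []
    for species, paths in by_species.items():
        rng.shuffle(paths)
        # Reserve at least one image per species for the test set.
        n_test = max(1, int(len(paths) * test_frac))
        test += [(p, species) for p in paths[:n_test]]
        train += [(p, species) for p in paths[n_test:]]
    return train, test
```

A per-species split like this avoids the degenerate case of rare species appearing only in training or only in testing, which matters for a dataset with strongly imbalanced class sizes.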
\section{Introduction} The application of computer vision techniques for object recognition is gathering increasing attention in industry. Among others, prominent applications in this space include robot picking in assembly lines and surface inspection. To address these tasks, the vision system must estimate the 6 DoF (degrees-of-freedom) pose of the sought objects, which calls for a 3D object recognition approach. Moreover, in industrial settings robustness, accuracy as well as run-time performance are particularly important. Reliance on RGB-D sensors providing both depth and color information is conducive to 3D object recognition. Yet, typical nuisances to be dealt with in 3D object recognition applications include clutter, occlusions and the significant degree of noise which affects most RGB-D cameras. Many studies, such as \cite{Johnson1999,Guo2015IJCV}, have investigated these problems and highlighted how local 3D descriptors can effectively withstand clutter, occlusions and noise in 3D object recognition. The local descriptors pipeline for 3D object recognition is, however, quite slow. Indeed, RGB-D cameras generate a high amount of data (over 30 MB/s) and, as this may hinder performance in embedded and real-time applications, sampling strategies are needed. To reduce processing time, keypoint extraction techniques are widely used. In addition, some solutions propose to assign higher priority to specific image areas, as in, for example, the foveation technique \cite{Gomes2013}. Another approach, inspired by human perception and widely explored for 2D image segmentation, consists in saliency detection, which identifies the most prominent points within an image \cite{Aytekin2018}. Unlike foveation, which processes arbitrary regions, the use of saliency allows for highlighting image regions that are known to be more important.
This work proposes a solution to improve the performance of the standard local descriptors pipeline for 3D object recognition from point clouds. The idea consists in adding a preliminary step, referred to as Saliency Boost, which filters the point clouds using a saliency mask in order to reduce the number of processed points and, consequently, the whole processing time. Besides, by selecting only salient regions, our approach may yield a reduction in the number of false positives, thereby often also enhancing object recognition accuracy. \section{Related Works} 3D object recognition systems based on local descriptors typically deploy two stages, one carried out offline and the other online, referred to as training and testing, respectively. The training stage builds the database of objects, storing their features for later use. In the testing stage, features are extracted from scene images. Given a scene, the typical pipeline, depicted in Figure \ref{fig:proposed-pipeline} and described, e.g., in \cite{Chen2007}, consists of the following steps: 1) Keypoints extraction; 2) Local descriptors calculation; 3) Matching; 4) Grouping correspondences; and 5) Absolute orientation estimation. The first two, described in more detail below, are those which really differentiate the various approaches and impact performance most directly. \subsection{Keypoints Extraction} This step concerns selecting some surface points, either from images or point clouds. According to \cite{Tombari2012}, keypoint extraction must reduce data dimensionality without losing discriminative capacity. In this work, we explore techniques which work in 3D, such as Uniform Sampling and Intrinsic Shape Signatures (ISS), as well as in 2D, i.e. SIFT and FAST. Uniform Sampling downsamples the point cloud by segmenting it into voxels of a given leaf size and selecting as keypoint, for each voxel, the point nearest to the voxel centroid \cite{PCL}.
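The Uniform Sampling strategy just described (voxelize at a given leaf size, then keep the point nearest to each voxel centroid) can be sketched in a few lines; this is an illustrative reimplementation of the idea, not the actual PCL code.

```python
import math

def uniform_sample(points, leaf):
    """Voxel-grid downsampling: keep, per voxel, the point closest
    to the voxel centroid (sketch of the Uniform Sampling idea)."""
    voxels = {}
    for p in points:
        # Integer voxel coordinates at the given leaf size.
        key = tuple(math.floor(c / leaf) for c in p)
        voxels.setdefault(key, []).append(p)
    keypoints = []
    for key, pts in voxels.items():
        centroid = tuple((k + 0.5) * leaf for k in key)
        best = min(pts, key=lambda q: sum((a - b) ** 2
                                          for a, b in zip(q, centroid)))
        keypoints.append(best)
    return keypoints
```

Larger leaf sizes merge more points per voxel and thus yield fewer keypoints, which is why the experiments below sweep the leaf size from 2 to 5 cm.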
ISS \cite{Zhong2009} selects keypoints based on a local surface saliency criterion, so as to extract 3D points that exhibit a large surface variation in their neighbourhood. The keypoint detector proposed in SIFT \cite{sift1999} is arguably the most prominent proposal for RGB images. It is based on the detection of blob-like, high-contrast local features amenable to computing highly distinctive, similarity-invariant image descriptors. The FAST keypoint extractor \cite{fast2006} is a 2D corner detector based on a machine learning approach, which is widely used in real-time computer vision applications due to its remarkable computational efficiency. \subsection{Local Descriptors} \label{sec:local-features} A local 3D descriptor processes the neighborhood of a keypoint to produce a feature vector discriminative with respect to clutter and robust to noise. Many descriptors have been proposed in recent years and several works, e.g. \cite{Guo2015IJCV}, have investigated their relative merits and limitations. In this work, we explore both descriptors which process only depth information, such as Signatures of Histograms of OrienTations (SHOT) \cite{Salti2014} and Fast Point Feature Histogram (FPFH) \cite{Rusu2009}, and descriptors which exploit both depth and color, like Point Feature Histogram for Color (PFHRGB) \cite{Rusu2008} and Color SHOT (CSHOT) \cite{Salti2014}. Introduced by \cite{Salti2014}, SHOT describes a keypoint based on spatial and geometric information. To calculate the descriptor, a 3D Local Reference Frame (LRF) is first established around the keypoint. Then, a canonical spherical grid is divided into 32 segments. Each segment results in a histogram that describes the angles between the normal at the keypoint and the normals at the neighboring points. The authors also proposed a variation that works with color information at the points, called CSHOT. The color value is encoded according to the CIELab color space and added to the angular information deployed in SHOT.
This descriptor is known to yield better results than SHOT when applied to colored point clouds. PFHRGB \cite{Rusu2008} is based on the Point Feature Histogram (PFH) and stores geometrical information by analyzing the angular variation of the normals between each pair of points in a set composed of the keypoint and all its k-neighbors. PFHRGB works on RGB data and also stores the color ratio between the keypoint and its neighbors, increasing its effectiveness on RGB-D data \cite{Alexandre2012}. In order to speed up the descriptor calculation, \cite{Rusu2009} proposed a simplified solution, called FPFH, which considers only the differences between the keypoint and its k-neighbors. In addition, an influence weight is stored, resulting in a descriptor which can be calculated faster while maintaining its discriminative capacity. \subsection{Saliency Detection} Salient object detection is a topic inspired by human perception, which holds that humans tend to select visual information based on attention mechanisms in the brain \cite{Kastner2000}. Its objective is to emphasize regions of interest in a scene \cite{Aytekin2018}. Many applications benefit from the use of saliency, such as object tracking and recognition, image retrieval, restoration and segmentation. The majority of recent works perform saliency detection using either RGB \cite{Aytekin2018,Hou2019} or RGB-D \cite{Chen2018,Li2018} images and are based on Deep Learning algorithms. \section{Proposed Approach} We present a way to significantly improve the time performance, and also the memory efficiency, of the standard pipeline described above by adding an additional step to the original pipeline. We refer to this step as \textit{saliency boost}. It leverages the RGB scene image by detecting salient regions within it, which are then used to filter the point cloud and to execute the local descriptors' pipeline only on salient regions.
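The filtering idea can be sketched as follows for a point cloud registered to the RGB image through a pinhole model; the intrinsics (fx, fy, cx, cy) and the binary mask representation are assumptions for illustration, not the actual implementation.

```python
def filter_by_saliency(points, mask, fx, fy, cx, cy):
    """Keep 3D points (x, y, z) whose pinhole projection lands on a
    salient pixel of the registered RGB image (illustrative sketch)."""
    h, w = len(mask), len(mask[0])
    kept = []
    for x, y, z in points:
        if z <= 0:
            continue  # point behind the camera, cannot be projected
        # Project the point to pixel coordinates (u, v).
        u = int(round(fx * x / z + cx))
        v = int(round(fy * y / z + cy))
        if 0 <= u < w and 0 <= v < h and mask[v][u]:
            kept.append((x, y, z))
    return kept
```

Only the retained points are then fed to the keypoint detector and descriptor, which is where the processing-time savings reported later come from.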
In particular, we use the saliency mask to reduce the search space for 3D keypoints by letting detectors run only on the part of the point cloud which corresponds to the salient regions of the image. To project saliency information from the 2D domain of the RGB image to the point cloud, we leverage the registration information provided by RGB-D cameras. Figure \ref{fig:proposed-pipeline} presents a graphical overview of the approach. In the case of 2D keypoint detectors, instead, we run them on the full RGB image and then filter out keypoints not belonging to the salient regions: we do not filter the image before the keypoint extraction step because 2D detectors also exploit pixels from the background to define the blobs and edges/corners they detect. In the 3D case, instead, points from the background are usually far away and outside the sphere used to define the keypoint neighborhood, so it is possible to filter them out beforehand without affecting the detector performance. Our approach does not depend on a specific saliency detection technique. In this work, we choose the DSS algorithm \cite{Hou2019}, and we detect salient areas by running the trained model provided by the authors. \begin{figure}[htb] \centering \includegraphics[width=\textwidth]{pipeline-saliency.png} \caption{Local descriptor pipeline with saliency boost.} \label{fig:proposed-pipeline} \end{figure} \section{Experimental Results} \subsection{Dataset} The experiments were performed on the Kinect dataset from the University of Bologna, presented in \cite{Salti2014}. This dataset has sixteen scenes and six models with pose annotation. Each model is represented as a set of 2.5D views from different angles and has from thirteen to twenty samples. Figure \ref{fig:kinect-dataset} depicts some examples of models and scenes in this dataset. \begin{figure}[!htb] \centering \includegraphics[width=\textwidth]{kinect.png} \caption{Examples of models and scenes from the Kinect dataset \cite{Salti2014}.
Models of Mario and Rabbit (leftmost figures), and scenes (rightmost figures).} \label{fig:kinect-dataset} \end{figure} \subsection{Local Descriptors Pipeline} In the local feature pipeline for object recognition, the choice of the keypoint extraction and description methods is key, and depends on the application, the kind of 3D representation available and its resolution, the sensor noise, etc. In order to evaluate the performance of the proposed approach in an application-agnostic scenario, we test combinations of several descriptors and detectors. The selected descriptors are: SHOT and CSHOT (Color SHOT) \cite{Salti2014}, FPFH \cite{Rusu2009} and PFHRGB \cite{Rusu2008}. The keypoint detectors working on 3D data are Uniform Sampling (with leaf sizes ranging from 2 to 5 cm with a step of 1 cm) and ISS \cite{Zhong2009}, while on images we test SIFT \cite{sift1999} and FAST \cite{fast2006}, run on the RGB image and projected onto the point cloud, as discussed. The matching step is performed by nearest neighbor (NN) search implemented by the FLANN library, integrated in the Point Cloud Library (PCL) \cite{PCL}. A KdTree is built for each view of each model in the database, and each keypoint in the scene is matched to only one point of one view of one model in the database by selecting the closest descriptor among views and models. After this process, all matches pointing to a view of a model are processed by the Geometric Consistency Grouping algorithm \cite{Chen2007}, which selects all the subsets of geometrically consistent matches between the view and the scene, and estimates the aligning transformation. The transformation obtained from the largest correspondence group among all the views of an object is considered the best estimate of the aligning transformation for that object. If an object fails to have a geometrically consistent subset with at least 3 matches among all its views, it is deemed not present in the considered scene.
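The matching step, in which each scene descriptor is assigned to its single nearest model descriptor, can be illustrated with a brute-force nearest-neighbour search; the FLANN kd-tree used in the pipeline returns the same matches, only much faster.

```python
def match_descriptors(scene_desc, model_desc):
    """For each scene descriptor, return the index of the closest
    model descriptor under squared Euclidean distance (brute force)."""
    matches = []
    for s in scene_desc:
        # Squared distances to every model descriptor.
        dists = [sum((a - b) ** 2 for a, b in zip(s, m))
                 for m in model_desc]
        matches.append(min(range(len(dists)), key=dists.__getitem__))
    return matches
```

The cost of this step grows with the number of scene keypoints times the number of model descriptors, which is why reducing the keypoint count through saliency filtering directly reduces matching time.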
\subsection{Evaluation Protocol} In order to evaluate the performance of the proposed object detection pipeline, the correctness of predictions of both object presence and pose is evaluated. We adopt the Intersection over Union (IoU) metric (Equation \ref{eq:IoU}), also known as the Jaccard index, defined as the ratio between the intersection and the union of the estimated bounding box ($BB_{Est}$) and the ground-truth bounding box ($BB_{GT}$). \begin{equation} \label{eq:IoU} IoU = \frac{BB_{GT} \cap BB_{Est}}{BB_{GT} \cup BB_{Est}} \end{equation} A detection is evaluated as correct if its IoU with the ground truth is at least 0.25, as in \cite{Song2016}. Given detections and ground-truth boxes, we calculate precision and recall (Equations \ref{eq:precision} and \ref{eq:recall}) by considering a correct estimation as a True Positive ($TP$), i.e. $IoU \geq 0.25$, an estimation of an absent object as a False Positive ($FP$), and misdetections or detections with $IoU < 0.25$ as False Negatives ($FN$). \begin{equation} \label{eq:precision} precision = \frac{TP}{(TP + FP)} \end{equation} \begin{equation} \label{eq:recall} recall = \frac{TP}{(TP + FN)} \end{equation} To calculate precision-recall curves (PRC), we varied the threshold on the number of geometrically consistent correspondences required to declare a detection, increasing it from the minimum value of 3 up to the point where no more detections are found in a scene. The area under the PRC (AUC) is then computed for each detector/descriptor combination and used to compare and rank the pipelines. \subsection{Implementation Details} Tests were performed on a Linux Ubuntu 16.04 LTS machine, using the Point Cloud Library (PCL), Version 1.8.1, OpenCV 3.4.1 and the VTK 6.2 library. For comparison purposes, all trials were performed on the same computer, equipped with an Intel Core i7-3632QM processor and 8GB of RAM. When available in PCL, the parallel version of each descriptor was used (e.g. for SHOT, CSHOT, and FPFH).
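For concreteness, Equations \ref{eq:IoU}--\ref{eq:recall} can be computed as below for axis-aligned 3D bounding boxes, each given as a (min corner, max corner) pair; this is a sketch of the metrics, not the exact evaluation code.

```python
def iou_3d(box_a, box_b):
    """IoU of two axis-aligned 3D boxes, each given as
    ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    (a_lo, a_hi), (b_lo, b_hi) = box_a, box_b
    inter = 1.0
    for lo_a, hi_a, lo_b, hi_b in zip(a_lo, a_hi, b_lo, b_hi):
        overlap = min(hi_a, hi_b) - max(lo_a, lo_b)
        if overlap <= 0:
            return 0.0  # boxes disjoint along this axis
        inter *= overlap
    vol = lambda lo, hi: ((hi[0] - lo[0]) * (hi[1] - lo[1])
                          * (hi[2] - lo[2]))
    union = vol(a_lo, a_hi) + vol(b_lo, b_hi) - inter
    return inter / union

def precision_recall(tp, fp, fn):
    """Precision and recall from true/false positive and negative counts."""
    return tp / (tp + fp), tp / (tp + fn)
```

A detection would then count as a true positive when `iou_3d` against the ground-truth box is at least 0.25, matching the threshold adopted above.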
As for the parameters of the detectors, the ISS Non-Maxima Suppression radius was set to $0.6$ cm and the neighborhood radius to $1$ cm, while for SIFT and FAST we used the default values provided in OpenCV. As for the descriptors, to estimate the normals we used the first ten neighbors of each point, while the description radius was set to $5$ cm for all the considered descriptors. \subsection{Results} In this section, we present the results obtained in the experiments. All trials were performed on the Kinect dataset, comparing the original pipeline (blue part in Figure \ref{fig:proposed-pipeline}) with the proposed pipeline with saliency boosting. For each descriptor and each pipeline we tested seven keypoint extractors, totaling 56 trials. The scene processing time, which comprises the saliency detection (only for the boosted pipeline), keypoint extraction, description, correspondence matching, clustering and pose estimation, was measured to verify the impact of the proposed modification on processing time. Results in terms of the number of keypoints extracted are presented in Table \ref{tab:keypoints-extracted}. The saliency filtering significantly reduces the average number of keypoints extracted by each detector: the reduction using saliency boost ranges from $24.58\%$ to almost $80\%$, with an average of $56\%$. \begin{table}[!htb] \centering \caption{Average number of keypoints extracted from scenes in the trials with the traditional local pipeline (LP) and boosted by saliency (Boost). The column ``\%'' represents the variation between them.
Best value in bold.}\label{tab:keypoints-extracted} \begin{tabular}{p{3cm}|r|r|r} \hline \textbf{Keypoints} & \multicolumn{1}{c|}{\textbf{LP}} & \multicolumn{1}{c|}{\textbf{Boost}} & \multicolumn{1}{c}{\textbf{\%}} \\ \hline FAST & 489.71 & 369.36 & -24.58 \\ ISS & 4201.16 & 846.75 & \textbf{-79.84} \\ SIFT & 282.79 & 199.79 & -29.35 \\ US$_{0.02}$ & 4559.80 & 1457.86 & -68.03 \\ US$_{0.03}$ & 2144.07 & 731.36 & -65.89 \\ US$_{0.04}$ & 1266.00 & 446.29 & -64.75 \\ US$_{0.05}$ & 820.57 & 303.71 & -62.99 \\ \hline \textbf{Average} & \multicolumn{1}{c}{-} & \multicolumn{1}{c}{-} & \multicolumn{1}{r}{-56.49} \\ \hline \end{tabular} \end{table} The number of keypoints extracted directly impacts the running time of the pipeline, mainly through two factors: the number of descriptors that have to be computed and the time it takes to match them. The SHOT and CSHOT descriptors are calculated relatively fast, but due to their length (352 and 1344 bins, respectively) the matching phase is slower, accounting for 97 and 99\% of the processing time, respectively. PFHRGB and FPFH are shorter descriptors (250 and 33 bins, respectively), but their computation is slower, requiring 94 and 89\% of the overall time, respectively. As shown in Table \ref{tab:results-time}, extracting keypoints only in salient regions drastically reduces the processing time for both kinds of descriptors. In the best case, the reduction in processing time is as high as 80\%, i.e. the boosted pipeline is 5 times faster thanks to the proposed saliency boosting. For all the considered detector/descriptor combinations, deployment of the saliency boosting step always reduces the processing time significantly, from the 22\% obtained by FAST/SHOT to 83\% for ISS and US$_{0.05}$ with FPFH. \begin{table}[!htb] \scriptsize \centering \caption{Average scene processing time (s) in the trials with the traditional Local Pipeline (LP) and boosted by saliency (Boost). The column ``\%'' represents the variation between them.
Best value in each column in bold.}\label{tab:results-time} \begin{tabular}{l|rrr|rrr|rrr|rrr} \hline & \multicolumn{3}{c|}{\textbf{CSHOT}} & \multicolumn{3}{c|}{\textbf{SHOT}} & \multicolumn{3}{c|}{\textbf{PFHRGB}} & \multicolumn{3}{c}{\textbf{FPFH}} \\ \hline \textbf{Keypoints} & \multicolumn{1}{c}{\textbf{LP}} & \multicolumn{1}{c}{\textbf{Boost}} & \multicolumn{1}{c|}{\textbf{\%}} & \multicolumn{1}{c}{\textbf{LP}} & \multicolumn{1}{c}{\textbf{Boost}} & \multicolumn{1}{c|}{\textbf{\%}} & \multicolumn{1}{c}{\textbf{LP}} & \multicolumn{1}{c}{\textbf{Boost}} & \multicolumn{1}{c|}{\textbf{\%}} & \multicolumn{1}{c}{\textbf{LP}} & \multicolumn{1}{c}{\textbf{Boost}} & \multicolumn{1}{c}{\textbf{\%}} \\ \hline FAST & 244.0 & 174.8 & -28.36 & 59.1 & 45.9 & -22.31 & 351.4 & 238.8 & -32.06 & 46.6 & 19.0 & -59.14 \\ ISS & 226.1 & \textbf{47.7} & \textbf{-78.90} & 72.4 & \textbf{17.1} & \textbf{-76.45} & 2580.4 & 489.1 & \textbf{-81.05} & 141.9 & 24.3 & -82.92 \\ SIFT & \textbf{132.2} & 94.5 & -28.50 & \textbf{34.3} & 25.7 & -25.29 & \textbf{195.3} & 138.2 & -29.24 & \textbf{31.3} & \textbf{17.7} & -43.39 \\ US$_{0.02}$ & 2167.9 & 668.8 & -69.15 & 505.7 & 174.5 & -65.50 & 2100.9 & 455.5 & -78.32 & 150.6 & 29.6 & -80.32 \\ US$_{0.03}$ & 988.0 & 335.6 & -66.04 & 238.8 & 88.0 & -63.16 & 913.2 & 191.5 & -79.03 & 137.4 & 24.1 & -82.47 \\ US$_{0.04}$ & 583.4 & 205.2 & -64.83 & 139.7 & 54.1 & -61.29 & 506.1 & 103.5 & -79.56 & 130.9 & 22.2 & -83.08 \\ US$_{0.05}$ & 378.1 & 139.9 & -63.01 & 90.7 & 37.2 & -58.99 & 304.3 & \textbf{62.1} & -79.61 & 128.7 & 20.9 & \textbf{-83.76} \\ \hline \textbf{Average} & & & -56.97 & & & -53.29 & & & -65.55 & & & -73.58 \\ \hline \end{tabular} \end{table} Reducing processing time is only beneficial if it doesn't harm recognition and localization performance. Interestingly, deployment of the saliency boosting step very often improves AUC with respect to the traditional pipeline, as shown in Table \ref{tab:results-AUC}. 
In particular, for 19 of the 28 trials which included the saliency boosting step, the pipeline boosted by saliency performed better also on AUC, with massive improvements of more than $50\%$ for PFHRGB and FPFH. Vice versa, when the AUC decreases due to the deployment of the saliency boost, it usually does so only marginally, by $1$ or $2\%$, with the worst decrease in AUC exceeding $10\%$ only once, when using the SIFT detector. \begin{table}[!htb] \scriptsize \centering \caption{AUC results in the trials with the traditional Local Pipeline (LP) and boosted by saliency (Boost). The column ``\%'' represents the variation between them. Best value in each column in bold.}\label{tab:results-AUC} \begin{tabular}{l|rrr|rrr|rrr|rrr} \hline & \multicolumn{3}{c|}{\textbf{CSHOT}} & \multicolumn{3}{c|}{\textbf{SHOT}} & \multicolumn{3}{c|}{\textbf{PFHRGB}} & \multicolumn{3}{c}{\textbf{FPFH}} \\ \hline \textbf{Keypoints} & \multicolumn{1}{c}{\textbf{LP}} & \multicolumn{1}{c}{\textbf{Boost}} & \multicolumn{1}{c|}{\textbf{\%}} & \multicolumn{1}{c}{\textbf{LP}} & \multicolumn{1}{c}{\textbf{Boost}} & \multicolumn{1}{c|}{\textbf{\%}} & \multicolumn{1}{c}{\textbf{LP}} & \multicolumn{1}{c}{\textbf{Boost}} & \multicolumn{1}{c|}{\textbf{\%}} & \multicolumn{1}{c}{\textbf{LP}} & \multicolumn{1}{c}{\textbf{Boost}} & \multicolumn{1}{c}{\textbf{\%}} \\ \hline FAST & 0.946 & 0.874 & -7.61 & 0.915 & 0.892 & -2.45 & 0.743 & 0.761 & 2.43 & 0.631 & 0.668 & 5.89 \\ ISS & 0.868 & 0.881 & 1.52 & 0.866 & 0.912 & \textbf{5.30} & \textbf{0.745} & \textbf{0.900} & 20.68 & 0.491 & \textbf{0.752} & \textbf{53.04} \\ SIFT & 0.864 & 0.889 & 2.83 & 0.903 & 0.820 & -9.15 & 0.472 & 0.549 & 16.41 & 0.529 & 0.476 & -10.13 \\ US$_{0.02}$ & \textbf{0.949} & \textbf{0.948} & -0.07 & \textbf{0.941} & \textbf{0.938} & -0.31 & 0.739 & 0.807 & 9.19 & \textbf{0.641} & 0.728 & 13.48 \\ US$_{0.03}$ & 0.861 & 0.905 & 5.08 & 0.875 & 0.843 & -3.58 & 0.731 & 0.814 & 11.37 & 0.488 & 0.621 & 27.26 \\ US$_{0.04}$ & 0.832 &
0.875 & 5.23 & 0.824 & 0.817 & -0.92 & 0.564 & 0.700 & 24.22 & 0.289 & 0.368 & 27.14 \\ US$_{0.05}$ & 0.582 & 0.619 & \textbf{6.19} & 0.682 & 0.644 & -5.64 & 0.373 & 0.599 & \textbf{60.76} & 0.145 & 0.162 & 11.77 \\ \hline \textbf{Average} & & & 1.88 & & & -2.39 & & & 20.72 & & & 18.35 \\ \hline \end{tabular} \end{table} While the AUC generally increases with the boosted pipeline, it does not do so on average when deployed with the SHOT descriptor. However, it does increase by $5\%$ in the very relevant case of combining SHOT with the ISS detector, the combination that delivers the fastest running time among all the tested variants (as shown in Table \ref{tab:results-time}). \begin{figure}[!htb] \centering \subfigure{ \includegraphics[width=0.47\textwidth]{Bologna_AUC_Time_CSHOT.eps} \label{fig:cshot-results} } \subfigure{ \includegraphics[width=0.47\textwidth]{Bologna_AUC_Time_SHOT.eps} \label{fig:shot-results} } \subfigure{ \includegraphics[width=0.47\textwidth]{Bologna_AUC_Time_PFHRGB.eps} \label{fig:pfhrgb-results} } \subfigure{ \includegraphics[width=0.47\textwidth]{Bologna_AUC_Time_FPFH.eps} \label{fig:fpfh-results} } \caption{AUC $\times$ Time results for the descriptors: \subref{fig:cshot-results} CSHOT, \subref{fig:shot-results} SHOT, \subref{fig:pfhrgb-results} PFHRGB and \subref{fig:fpfh-results} FPFH. The boosted pipeline is denoted by an asterisk (*) next to the keypoint name.} \label{fig:AUC-time-results} \end{figure} Finally, in Figure \ref{fig:AUC-time-results}, we report a Pareto analysis on the data for all descriptors. We can see that the points (i.e. detector/descriptor pairs) closest to the ideal point (that is, $AUC = 1$ and Time as low as possible) are obtained by the execution of the boosted pipeline. In this analysis, CSHOT, SHOT and FPFH obtained the best performance when paired with the boosted ISS (ISS$^{*}$), while PFHRGB did so when paired with the boosted Uniform Sampling at $r = 3$ cm (US$_{0.03}^{*}$).
Hence, the boosted pipeline outperforms the traditional one for all tested descriptors when taking into account the combined effect of processing time and recognition performance. \section{Conclusion} In this work, we presented an approach based on saliency detection to reduce the processing time of the traditional local descriptor pipeline. A significant processing-time reduction, from 22 to 83\%, was verified in all tested cases. Interestingly, the processing-time reduction did not generally decrease the object recognition performance, as measured by the AUC of the precision-recall curves. Actually, an improvement in recognition performance was found for all descriptors in at least one pairing, up to 5\% for SHOT and CSHOT, and more than 50\% for FPFH and PFHRGB. In spite of the improvements, the overall processing time is not yet suitable for real-time applications. However, the proposed approach offers a considerable speed-up without negatively impacting recognition performance, which brings us a step closer to an effective, real-time local feature pipeline for 3D object recognition. \bibliographystyle{splncs04}
\section{Introduction} \label{sec:introduction} Heavy elements, ``metals'', are produced in the universe through nucleosynthesis in massive stars, and are partly returned to the interstellar medium via the explosive collapse of supernovae. The gas-phase metallicity of galaxies is therefore a powerful diagnostic of the different processes in galaxy evolution, including gas inflow, star formation in cold gas clouds, and wind-driven outflows of gas from galaxies. The oxygen abundance, mostly produced on short time-scales (a few Myr) by the rapid collapse and violent explosion of massive stars, i.e. Type II supernovae, is widely used as an observationally accessible proxy of the metallicity of the gas in galaxies \citep[e.g.][]{Lequeux-79, Wheeler-89, Kewley-02, Pettini-04, Tremonti-04}. Observationally, the gas-phase metallicity is usually measured from the flux ratios of emission lines in star-forming HII regions \citep[e.g.][]{Kobulnicky-04, Tremonti-04, Pettini-04, Maiolino-08, Kewley-08, Perez-Montero-09, Pilyugin-10, Rosales-Ortega-12, Marino-13, Dopita-13, Vogt-15, Dopita-16, Pilyugin-16}, such as [OII]$\lambda$3727, [OIII]$\lambda$4363, H$\beta$, [OIII]$\lambda$5007, H$\alpha$, [NII]$\lambda$6584, and [SII]$\lambda\lambda$6717,6731. These emission lines are mostly excited by O and B stars, which must have formed recently, within the last 10 Myr, and the gas-phase metallicity measured in this way can therefore be treated as the current ``instantaneous'' metallicity of the gas out of which the stars have formed. This timescale is much shorter than the $\sim 10^{10}$ year lifetime of the galaxies or even the $\sim 10^9$ year gas depletion timescale. In the literature, there are a number of empirical relations to derive the oxygen abundance based on combinations of some of these emission lines \citep[see][and references therein]{Kewley-08, Sanchez-17}.
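As a concrete illustration of such strong-line calibrations, the N2 and O3N2 indices of \cite{Pettini-04} map emission-line ratios to oxygen abundance through simple linear fits. The coefficients below are the commonly quoted Pettini \& Pagel (2004) values, reproduced here for illustration only and worth checking against the original paper before use.

```python
import math

def oh_n2(f_nii, f_halpha):
    """12 + log(O/H) from N2 = log([NII]6584 / Halpha),
    using the Pettini & Pagel (2004) linear calibration."""
    return 8.90 + 0.57 * math.log10(f_nii / f_halpha)

def oh_o3n2(f_oiii, f_hbeta, f_nii, f_halpha):
    """12 + log(O/H) from O3N2 = log(([OIII]5007/Hbeta) /
    ([NII]6584/Halpha)), same calibration family."""
    o3n2 = math.log10((f_oiii / f_hbeta) / (f_nii / f_halpha))
    return 8.73 - 0.32 * o3n2
```

Because different calibrations adopt different samples and line sets, abundances from `oh_n2` and `oh_o3n2` will in general not agree in absolute value, which is precisely the inconsistency discussed next.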
As a whole, measurements of oxygen abundance based on these different approaches are certainly positively correlated, but the absolute values and the ranges of these measurements are not consistent with each other \citep[e.g.][]{Kewley-08, Blanc-15, Sanchez-19}, due to the different methods, different samples and different emission lines used in the calibrations. Despite the inconsistency among gas-phase metallicity estimates from different approaches, there is no dispute that the gas-phase metallicity is strongly correlated with the stellar mass \citep[e.g.][]{Lequeux-79, Tremonti-04}. Based on fibre spectra of the central regions of large numbers of galaxies from SDSS \citep[the Sloan Digital Sky Survey;][]{Stoughton-02}, \cite{Tremonti-04} established a tight stellar-mass/gas-metallicity relation (MZR) for star-forming (SF) galaxies spanning over three orders of magnitude in mass and one order of magnitude in gas-phase metallicity. The relation is relatively steep at the low-mass end, but flattens at stellar masses above $10^{10.5}\,{\rm M}_\odot$. Furthermore, the gas-phase metallicity appears to have a larger dispersion towards lower stellar masses. The MZR is found to evolve with redshift in the sense that galaxies at higher redshift tend to be more metal-poor with respect to their counterparts in the local universe \citep[e.g.][]{Savaglio-05, Maier-06, Maiolino-08}. The existence of the MZR can be explained by one, or a combination, of the following factors: outflows driven by supernova winds \citep{Larson-74,Tremonti-04, Finlator-08}, different star formation efficiencies in galaxies \citep{Brooks-07, Mouhcine-07, Calura-09}, and variations in the initial mass function across the galaxy population \citep{Koppen-07}.
The gas-phase metallicity is also found to be correlated with other global properties of galaxies, such as the SFR \citep[e.g.][]{Ellison-08, Mannucci-10, Andrews-13} and half-light radius \citep[e.g.][]{Ellison-08, Wang-18b}. Based on a large sample of galaxies from SDSS, \cite{Mannucci-10} found that the negative correlation with SFR is strong at the low-stellar-mass end, and becomes less significant with increasing stellar mass. Furthermore, they claimed that there is a universal, epoch-independent mass-metallicity-SFR relation $Z_{\rm gas}(M_*,{\rm SFR})$, i.e. that the apparent evolution of the MZR can be accounted for, phenomenologically, by the higher SFR encountered in high-redshift galaxies. This universal $Z_{\rm gas}(M_*,{\rm SFR})$ is therefore known as the ``fundamental metallicity relation'' \citep[FMR;][]{Mannucci-10, Richard-11, Nakajima-12, Cresci-12, Salim-14, Cresci-19, Huang-19, Curti-20}. \cite{Cresci-19} found an anti-correlation between specific SFR and gas-phase metallicity at a given stellar mass, regardless of which metallicity and SFR indicators are used. Recently, the emergence of widespread integral field spectroscopy (IFS) galaxy surveys, such as MaNGA \citep{Bundy-15}, CALIFA \citep{Sanchez-12} and SAMI \citep{Croom-12}, has produced a wealth of spatially resolved spectroscopic data for relatively nearby galaxies. This enables the investigation of the relations of metallicity with mass (surface density) and star formation rate within galaxies. A strong correlation between local stellar surface density and local gas-phase metallicity, an apparent analog of the global MZR, has been found in spatially resolved spectroscopic data by many authors \citep[e.g.][]{Moran-12, Rosales-Ortega-12, Barrera-Ballesteros-16, Zhu-17, Gao-18}. However, whether the SFR (or sSFR) is a second parameter in the sub-galactic, resolved MZR has been debated.
By using 38 nearby galaxies from the PINGS survey, \cite{Rosales-Ortega-12} found a negative correlation between gas-phase metallicity and the local specific SFR, as indicated by the H$\alpha$ equivalent width \citep[also see][]{Zhu-17}. However, \cite{Moran-12} and \cite{Barrera-Ballesteros-17} argued that there is no evidence for the local sSFR (or SFR surface density) being a robust second parameter in the resolved MZR. More recently, based on an analysis of MaNGA galaxies (with a spatial resolution of 1-2 kpc), \cite{Berhane-Teklu-20} found a negative correlation between local metallicity and local sSFR when using the {\tt N2} and {\tt O3N2} metallicity indicators, but the correlation nearly disappears for the {\tt N2O2} and {\tt N2S2} indicators. Furthermore, using the HII regions of eight nearby galaxies mapped by the Multi-Unit Spectroscopic Explorer (MUSE), \cite{Kreckel-19} found that the regions with the highest H$\alpha$ luminosity tend to have higher gas-phase metallicity on $\sim$100 pc scales, indicating a {\it positive} correlation between metallicity and sSFR. Similarly, \cite{Ho-18} found that the oxygen abundances are higher in the spiral arms than in the inter-arm regions of NGC 2997, at a similar spatial resolution. A clear picture that accounts for these seemingly contradictory findings is still lacking. Efforts have been made to understand the global star formation rates and metal content of galaxies by looking at the balance between inflow, outflow and star formation \citep[e.g.][]{Schaye-10, Bouche-10, Dave-11, Lilly-13, Belfiore-19}. In particular, \cite{Dave-12} gave an analytic formalism that describes the evolution of the stellar mass, gas mass and metallicity of galaxies, assuming an equilibrium state in which the mass of the gas reservoir is constant, i.e. $\dot{M}_{\rm gas}\sim$0. This scenario is also known as the ``reservoir'' or ``bathtub'' model \citep{Bouche-10, Dave-12}.
Because the gas mass does not change, this model is not able to regulate the SFR. \cite{Lilly-13} relaxed this restriction and allowed the mass of the gas reservoir to change, so that the SFR is regulated by the changing gas mass adjusting to the inflow rate. This is known as the ``gas-regulator'' model. The gas-regulator model of \cite{Lilly-13} produces an analytic form of the mass-metallicity relation that naturally has the SFR as a second parameter, i.e. $Z_{\rm gas}(M_*,{\rm SFR})$. Further, the form of this relation is set by the basic parameters of the regulator model, specifically the star-formation efficiency (SFE) and the mass-loading of the wind $\lambda$, both of which may vary with the overall stellar mass. However, if these mass-dependent parameters are independent of epoch, as is quite plausible, then the form of $Z_{\rm gas}(M_*,{\rm SFR})$ will also not evolve. The gas-regulator model is therefore naturally able to produce an epoch-independent FMR. The whole point of the \cite{Lilly-13} gas-regulator model is that the mass of gas in the galaxy can change. In previous papers, we have explored the dynamical behaviour of the gas-regulator model as it adjusts to variations in the inflow rate or other parameters, and found that it can explain several features of the galaxy population, especially observations of the variation of SFR within galaxies and across the galaxy population. Based on a well-defined SF galaxy sample from MaNGA, \citet[][hereafter \citetalias{Wang-19}]{Wang-19} found that galaxies above the Star Formation Main Sequence (SFMS) show elevated SFR surface density ($\Sigma_{\rm SFR}$) at all galactic radii compared with the median $\Sigma_{\rm SFR}$ profile of the whole SF population, and, vice versa, galaxies below the SFMS show depressed $\Sigma_{\rm SFR}$ profiles.
They found that the dispersion in the relative $\Sigma_{\rm SFR}$ (correcting for different effective radii) at a given relative radius in galaxies of similar stellar mass decreases with increasing gas depletion time. The latter was inferred from the stellar surface mass density using the ``extended Schmidt law'' of \cite{Shi-11}. By driving a gas-regulator system with a periodic time-varying inflow rate, \citetalias{Wang-19} showed that the resulting time-varying SFR is also a periodic function of time, with an amplitude given by the amplitude of the inflow variations multiplied by an analytic function of the ratio of the gas depletion time to the driving period. Regions with shorter gas depletion times are better able to follow variations in the inflow rate, and therefore produce a larger dispersion in $\Sigma_{\rm SFR}$ at a given driving period and amplitude. It was suggested that this feature of the gas-regulator model could produce the observed relation between the scatter of $\Sigma_{\rm SFR}$ and the inferred gas depletion time (see more details in \citetalias{Wang-19}). Similarly, the dynamical gas-regulator model can also qualitatively explain the observed dependence of the dispersion of the overall SFMS on stellar mass and stellar surface density \citep[][]{Wang-18b, Davies-19}. Consistent with, but quite independent of, our \citetalias{Wang-19} analysis, \cite{Wang-20a} found that regions with shorter gas depletion times also exhibit larger dispersions in the temporal changes of the SFR, as parameterized by SFR$_{\rm 5Myr}$/SFR$_{\rm 800Myr}$, the ratio of the SFR averaged over the last 5 Myr to that averaged over the last 800 Myr. The SFR$_{\rm 5Myr}$/SFR$_{\rm 800Myr}$ was estimated from the equivalent widths of H$\alpha$ emission and H$\delta$ absorption.
The results of \cite{Wang-20a} therefore confirm that the variations in the $\Sigma_{\rm SFR}$ profiles found in \cite{Wang-19} are indeed due to real {\it temporal} variations of the SFR within galaxies, rather than to intrinsic differences between galaxies in the population. Furthermore, based on the same dataset as \cite{Wang-20a}, \cite{Wang-20b} constrained the power spectral density (PSD) of the star formation histories of galaxies, i.e. the contribution of variations in the SFR on different timescales. This too is highly consistent with our earlier results in \citetalias{Wang-19} and \cite{Wang-20a}. All these results strongly support the importance of the dynamical response of the simple gas-regulator system to a time-varying inflow in producing the variations of SFR or $\Sigma_{\rm SFR}$ on galactic scales. Since the dynamical gas-regulator model evidently has great success in interpreting the variation of SFR between galaxies and on large scales within galaxies, it is interesting to further explore the behaviour of this system, in particular to look again at its response to variations also in star-formation efficiency, and to explore further the gas-phase metallicity as a diagnostic tool. This is the focus of the current paper. In this work, we extend the work of \cite{Lilly-13} and \citetalias{Wang-19} and look at the metal-enrichment process in the dynamical gas-regulator framework. We present the basic assumptions and continuity equations of the dynamical gas-regulator model in Section \ref{sec:2.1} and examine how the SFR, and the total mass, metal mass and gas-phase metallicity of the gas reservoir, vary in response to time-variations in the inflow rate of gas into the system and/or time-variations in the SFE (Sections \ref{sec:2.2} and \ref{sec:2.3}).
In addition, we explore how the wind mass-loading factor, the metallicity of the inflowing gas, and the yield (defined as the mass of metals returned to the interstellar medium per unit mass that is locked up into long-lived stars) can all modify these responses (Section \ref{sec:2.4}). We then turn to look for evidence of the predicted responses of the dynamic gas regulator in observational data. In Section \ref{sec:3}, we introduce the data used in this work, including the IFS data from the MaNGA survey and from the MUSE Atlas of Disks \citep[MAD;][]{Erroz-Ferrer-19}. The MaNGA sample is taken from \cite{Wang-18a} and \citetalias{Wang-19}, and includes nearly 1000 SF galaxies with typical spatial resolutions of 1-2 kpc, while the MAD sample has only 38 SF galaxies but with spatial resolution down to 100 pc or even less. The MaNGA sample is therefore suitable for studying the global effects on galactic and sub-galactic scales, while the MAD galaxies can be used to study the effects on the scale of HII regions or individual molecular clouds. In Sections \ref{sec:4} and \ref{sec:5}, we present the main observational results and compare them with the predictions of the dynamical gas-regulator model. In Section \ref{sec:6}, we discuss our results in comparison with previous findings, and present the implications for our understanding of the relationship between SFR, cold gas mass and gas-phase metallicity on different physical scales. We summarize the main results of this work in Section \ref{sec:7}. Throughout this paper, we assume a flat cold dark matter cosmology with $\Omega_m=0.27$, $\Omega_\Lambda=0.73$ and $h=0.7$ when computing distance-dependent quantities. The metallicity referred to in this work is the gas-phase [O/H] metallicity (denoted $Z$ or $Z_{\rm gas}$) rather than the stellar metallicity, unless otherwise specified.
\section{The dynamic response of the gas-regulator model} \label{sec:2} \subsection{Basic continuity equations} \label{sec:2.1} The basic idea of the gas-regulator model is that the formation of stars is instantaneously determined by the mass of a cold gas reservoir, which is regulated by the interplay between inflow, outflow and star formation \citep{Lilly-13}. The instantaneous SFR can be written as: \begin{equation} \label{eq:1} {\rm SFR}(t) = M_{\rm gas}(t) \cdot {\rm SFE}(t), \end{equation} where SFE$(t)$ is the instantaneous star formation efficiency. We note that the SFE is by definition the inverse of the gas depletion time ($\tau_{\rm dep}$), i.e. SFE$\equiv 1/\tau_{\rm dep}$. Following \cite{Lilly-13} and \citetalias{Wang-19}, we assume that the mass loss due to outflow scales with the instantaneous SFR$(t)$ with a mass-loading factor $\lambda$, i.e. $\lambda$SFR$(t)$. The effective gas depletion timescale is therefore reduced by a factor of (1+$\lambda$). We denote the inflow rate as $\Phi(t)$, and the metallicity of the infalling gas as $Z_0$. The metal mass in the gas reservoir is denoted $M_{\rm Z}(t)$. The yield, i.e. the mass of metals returned to the interstellar medium per unit mass that is locked up into long-lived stars, is denoted $y$.
The basic continuity equations for gas and metals are \citep[see equations 9 and 20 in][]{Lilly-13}: \begin{equation} \label{eq:2} \begin{split} \frac{dM_{\rm gas}(t)}{dt} = & \ \Phi(t) - {\rm SFR}(t) - \lambda \cdot {\rm SFR}(t) \\ = & \ \Phi(t) - {\rm SFE}(t)\cdot M_{\rm gas}(t) - \lambda \cdot {\rm SFE}(t)\cdot M_{\rm gas}(t) \end{split} \end{equation} \begin{equation} \label{eq:3} \begin{split} \frac{dM_{\rm Z}(t)}{dt} = &y \cdot {\rm SFR}(t) - Z(t) \cdot (1+\lambda) \cdot {\rm SFR}(t) + \Phi(t) \cdot Z_{\rm 0} \\ = & y \cdot {\rm SFE}(t)\cdot M_{\rm gas}(t) - (1+\lambda)\cdot {\rm SFE}(t) \cdot M_{\rm Z}(t) \\ & + \Phi(t) \cdot Z_{\rm 0} \end{split} \end{equation} where $Z(t) = M_{\rm Z}(t)/M_{\rm gas}(t)$ by definition. In principle, there is a time delay of a few Myr, the lifetime of the massive stars that collapse into Type-II supernovae, between the current star formation activity and the associated metal enrichment. But since this time delay is much less than the gas depletion time of galaxies, we can apply the instantaneous recycling approximation in this work, i.e. we assume that the metal enrichment by star formation is instantaneous\footnote{Observationally, if the SFR is measured from emission lines, it represents the SFR averaged over the last 5 Myr \citep{Wang-20a}. This compensates for the time delay to some extent.}. In Equations \ref{eq:2} and \ref{eq:3}, there are a total of five quantities driving the solution: the possibly time-varying $\Phi(t)$ and SFE$(t)$, and the (assumed constant) $\lambda$, $Z_0$ and $y$. The $M_{\rm gas}(t)$, SFR$(t)$, $M_{\rm Z}(t)$ and $Z(t)$ are then the response of the regulator system to these five quantities. In this work we assume that $\lambda$, $Z_0$ and $y$ are time-independent.
The mass-loading factor is tightly correlated with the stellar mass of galaxies \citep{Hayward-17}, and is not likely to change significantly on Gyr timescales given the current relatively low sSFR$\sim$0.1 Gyr$^{-1}$ of local SFMS galaxies. The $Z_{\rm 0}$ reflects the metallicity of the circumgalactic medium surrounding galaxies, which is also not likely to change significantly with time. The yield $y$ is a physical parameter reflecting nucleosynthesis and the relative number of massive stars; it is expected to be closely tied to the initial mass function (IMF), and is therefore also not likely to change significantly on Gyr timescales. However, we note that $\lambda$, $Z_{\rm 0}$ and even $y$ may well change from galaxy to galaxy, and even from region to region within individual galaxies. Equations \ref{eq:2} and \ref{eq:3} are the basic continuity equations of the gas-regulator model. The analysis in the remainder of Section \ref{sec:2} will focus on the solution of these two equations when the regulator system is driven with a periodic $\Phi(t)$ or a periodic SFE$(t)$, and investigate the correlation between the resulting instantaneous SFR$(t)$ and the gas-phase metallicity $Z(t)$. This correlation will be the main diagnostic used in our later analysis of the observational data.
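Equations \ref{eq:2} and \ref{eq:3} are straightforward to integrate numerically. The sketch below is our own illustration (not part of the published analysis): a simple forward-Euler integration in pure Python, with all parameter values chosen arbitrarily for demonstration (times in Gyr, masses in arbitrary units).

```python
import math

# Minimal forward-Euler integration of the gas-regulator continuity
# equations (Equations 2 and 3). Illustrative sketch only; parameter
# values below are arbitrary assumptions, not fits to data.

def evolve(phi, sfe, lam=0.0, y=0.02, z0=0.0, t_end=20.0, dt=1e-3):
    """Return a list of (t, SFR, M_gas, Z) for inflow phi(t) and SFE sfe(t)."""
    m_gas = phi(0.0) / ((1.0 + lam) * sfe(0.0))   # start at the mean equilibrium
    m_z = (y / (1.0 + lam) + z0) * m_gas          # equilibrium metal mass
    track, t = [], 0.0
    for _ in range(int(t_end / dt)):
        sfr = sfe(t) * m_gas                                   # Equation 1
        dm = phi(t) - (1.0 + lam) * sfr                        # Equation 2
        dmz = y * sfr - (m_z / m_gas) * (1.0 + lam) * sfr \
              + phi(t) * z0                                    # Equation 3
        m_gas += dm * dt
        m_z += dmz * dt
        t += dt
        track.append((t, sfr, m_gas, m_z / m_gas))
    return track

# Sinusoidal inflow with period 1 Gyr and a constant SFE (tau_dep = 0.5 Gyr):
phi = lambda t: 1.0 + 0.1 * math.sin(2.0 * math.pi * t)
sfe = lambda t: 2.0
track = evolve(phi, sfe)
```

After the transients decay, the mean gas mass settles at $\Phi_0\tau_{\rm dep,eff}$ and the resulting $Z(t)$ anti-correlates with SFR$(t)$, anticipating the behaviour derived analytically in the following subsections.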
\subsection{Driving the gas-regulator system with a time-varying inflow rate} \label{sec:2.2} \begin{figure*} \begin{center} \epsfig{figure=./fig/model/inflow/sine/example_sine_inflow.eps,clip=true,width=0.32\textwidth} \epsfig{figure=./fig/model/inflow/sine/plot_zgas_sfr_2Gyr_l0_Am01.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/model/inflow/sine/plot_zgas_mgas_2Gyr_l0_Am01.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/model/inflow/step/example_step_inflow.eps,clip=true,width=0.32\textwidth} \epsfig{figure=./fig/model/inflow/step/plot_zgas_sfr_2Gyr_l0.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/model/inflow/step/plot_zgas_mgas_2Gyr_l0.eps,clip=true,width=0.33\textwidth} \end{center} \caption{Illustration of the SFR, $M_{\rm gas}$\ and $Z_{\rm gas}$\ in response to a sinusoidally varying inflow rate (upper panels) and a periodic step function inflow rate (lower panels), both with a constant SFE, in the gas regulator framework. Upper left panel: Examples of the SFR$(t)$, $M_{\rm gas}$$(t)$, $M_{\rm Z}$$(t)$, and $Z_{\rm gas}$$(t)$ (scaled to their mean values) in response to the sinusoidal inflow rate at three different $\xi$ (see text). The cases of different $\xi$ are separated with arbitrary offsets ($-0.5$, 0.0, $+$0.5) in the y-axis for display purposes. Upper middle panel: The correlation of SFR and $Z_{\rm gas}$\ in logarithmic space for different $\xi$. Upper right panel: The correlation of SFR and $M_{\rm gas}$\ in logarithmic space for different $\xi$ (see text). The lower panels are similar to the top panels, but for a periodic step function of the inflow rate. For illustration, the period of the step-function is set to be 2 Gyr, and the $\tau_{\rm dep,eff}$ is set to be 1 Gyr. The duration of the ``high-phase'' inflow rate ($\tau_{\rm s}$) varies from 0.1$\tau_{\rm dep,eff}$ to $\tau_{\rm dep,eff}$.
The different colors in the lower middle and right panels are for the cases of different $\tau_{\rm dep,eff}/\tau_{\rm s}$, and the data points are equally spaced in time, so that their density reflects the speed at which the model changes. Since the SFE is set to be constant over time, SFR$(t)$ always overlaps with $M_{\rm gas}$$(t)$ in the two left panels, and the middle panels are the same as the right-most panels. } \label{fig:1} \end{figure*} We first drive the gas-regulator system with a time-varying inflow and time-invariant SFE. As in \citetalias{Wang-19}, we consider the simple case in which the inflow rate is a sinusoidal function of time with period $T_{\rm p}$: \begin{equation} \label{eq:4} \Phi(t) = \Phi_{\rm 0} + \Phi_{\rm t} \cdot {\rm sin}(2\pi t/T_{\rm p}). \end{equation} Then, we look for solutions in which $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ are sinusoidal functions phase-shifted from the inflow rate. \begin{equation} \label{eq:5} \begin{split} M_{\rm gas}(t) = &\ M_{\rm 0} + M_{\rm t} \cdot {\rm sin}(2\pi t/T_{\rm p} - \delta) \\ M_{\rm Z}(t) = &\ M_{\rm Z0} + M_{\rm Zt} \cdot {\rm sin}(2\pi t/T_{\rm p} - \beta). \end{split} \end{equation} In \citetalias{Wang-19}, by substituting $M_{\rm gas}(t)$ into Equation \ref{eq:2} and equating the various time-dependent terms in the usual way, we obtained the solution for $M_{\rm gas}(t)$: \begin{equation} \label{eq:6} \begin{split} M_{\rm 0} = & \ \Phi_{\rm 0}\tau_{\rm dep,eff} \\ \delta \ \ = & \ {\rm arctan}(\xi) \\ M_{\rm t} = & \ \frac{\Phi_{\rm t}\tau_{\rm dep,eff}}{(1+\xi^2)^{1/2}}, \end{split} \end{equation} where $\tau_{\rm dep,eff}$ is the effective gas depletion time, defined as $\tau_{\rm dep}\cdot (1+\lambda)^{-1}$ or ${\rm SFE}^{-1}\cdot (1+\lambda)^{-1}$, and $\xi$ is the ratio of the effective gas depletion timescale to the timescale of the variation of the inflow rate, $T_{\rm p}(2\pi)^{-1}$, i.e. \begin{equation} \label{eq:6.1} \xi \equiv 2\pi \tau_{\rm dep,eff}/T_{\rm p}.
\end{equation} Therefore, based on Equation \ref{eq:6}, we also have \begin{equation} \label{eq:7} \frac{M_{\rm t}}{M_{\rm 0}} = \frac{1}{(1+\xi^2)^{1/2}} \times \frac{\Phi_t}{\Phi_0}. \end{equation} As we discussed in \citetalias{Wang-19}, the amplitude and phase-delay of the output $M_{\rm gas}(t)$ strongly depend on the parameter $\xi$, the ratio of the gas depletion timescale to the driving period. At fixed $T_{\rm p}$, galaxies or regions with shorter gas depletion times are better able to follow the changes in the inflow rate, leading to a smaller phase-delay and larger amplitude, and vice versa (see more discussion in section 4 of \citetalias{Wang-19}). In a similar way, we substitute Equations \ref{eq:5} and \ref{eq:6} into Equation \ref{eq:3}, and equate the various time-dependent terms to find the solution for $M_{\rm Z}(t)$: \begin{equation} \label{eq:8} \begin{split} M_{\rm Z0} = &\ (y_{\rm eff}+Z_{\rm 0})\Phi_{\rm 0}\tau_{\rm dep,eff} \\ \beta \ \ = & \ {\rm arctan}[\frac{2y_{\rm eff}\xi + Z_{\rm 0}\xi(1+\xi^2)}{y_{\rm eff}(1-\xi^2)+Z_{\rm 0}(1+\xi^2)}] \\ M_{\rm Zt} = &\ \frac{(1+\eta^2)^{1/2}}{1+\xi^2}\times (y_{\rm eff}+Z_{\rm 0})\Phi_{\rm t}\tau_{\rm dep,eff}, \end{split} \end{equation} where \begin{equation} \label{eq:9} y_{\rm eff} \equiv y\cdot(1+\lambda)^{-1} \end{equation} and \begin{equation} \label{eq:10} \eta = \xi Z_0 \cdot (y_{\rm eff}+Z_0)^{-1}. \end{equation} If the $\beta$ obtained in this way is less than zero, $\pi$ should be added to it. The shorthands $y_{\rm eff}$ and $\eta$ are defined for convenience. We remind readers that the effective yield $y_{\rm eff}$ defined in this way is {\it different} from that in some previous papers \citep[e.g.][]{Edmunds-90, Garnett-02}. We prefer this definition because we believe it is more fundamental. Based on the solution for $M_{\rm Z}(t)$, we also have \begin{equation} \label{eq:11} \frac{M_{\rm Zt}}{M_{\rm Z0}} = \frac{(1+\eta^2)^{1/2}}{1+\xi^2}\times \frac{\Phi_{\rm t}}{\Phi_{\rm 0}}.
\end{equation} If we assume that the inflowing gas is pristine, i.e. $Z_0\sim0$, then the solution for $M_{\rm Z}(t)$ can be simplified further to \begin{equation} \label{eq:12} \begin{split} M_{\rm Z0} = &\ y_{\rm eff}\Phi_{\rm 0}\tau_{\rm dep,eff} \\ \beta \ \ = &\ {\rm arctan}[\frac{2\xi}{1-\xi^2}] \\ \frac{M_{\rm Zt}}{M_{\rm Z0}} = & \ \frac{1}{1+\xi^2}\times \frac{\Phi_{\rm t}}{\Phi_{\rm 0}}. \end{split} \end{equation} Interestingly, in this specific case with $Z_0\sim0$, the phase-delay of $M_{\rm Z}(t)$ is twice that of $M_{\rm gas}(t)$, i.e. $\beta=2\delta$. Similar to $M_{\rm gas}(t)$, the phase-delay $\beta$ and relative amplitude $M_{\rm Zt}/M_{\rm Z0}$ of $M_{\rm Z}(t)$ strongly depend on the parameter $\xi$. At fixed $T_{\rm p}$, galaxies or regions with shorter (effective) gas depletion times can more easily follow the changes in the inflow rate and gas mass, resulting in smaller $\beta$ and larger $M_{\rm Zt}/M_{\rm Z0}$. Specifically, if $\xi$ is much less than unity, then both $\delta$ and $\beta$ are close to zero, and both $M_{\rm t}/M_{\rm 0}$ and $M_{\rm Zt}/M_{\rm Z0}$ are close to $\Phi_{\rm t}/\Phi_{\rm 0}$. In other words, when the (effective) gas depletion time is much less than the driving period, i.e. $\xi \ll 1$, the mass of the gas reservoir and the mass of metals in the gas-regulator system can nearly follow the change of inflow rate, with little phase-delay and with nearly the same relative amplitude of variation. If, however, $\xi$ is much larger than 1, then $\delta$ is close to $\pi/2$, $\beta$ is close to $\pi$, and both $M_{\rm t}/M_{\rm 0}$ and $M_{\rm Zt}/M_{\rm Z0}$ are close to zero. This means that, when the (effective) gas depletion time is much longer than the driving period, i.e. $\xi \gg 1$, the gas-regulator system is unable to follow the relatively fast changes in the inflow rate, resulting in little variation in either $M_{\rm gas}(t)$ or $M_{\rm Z}(t)$.
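The double-angle relation $\beta=2\delta$ for pristine inflow follows directly from Equations \ref{eq:6} and \ref{eq:8}; the snippet below is a minimal numerical sanity check of the phase-delay expressions (the function name and parameter values are ours, chosen only for illustration).

```python
import math

# Check the analytic phase delays of Equations 6 and 8. Using atan2
# (rather than atan) automatically implements the prescription of
# adding pi when the naive arctan would be negative.

def phase_delays(xi, z0=0.0, y_eff=0.02):
    """Return (delta, beta) for given xi, inflow metallicity z0 and y_eff."""
    delta = math.atan(xi)
    num = 2.0 * y_eff * xi + z0 * xi * (1.0 + xi**2)
    den = y_eff * (1.0 - xi**2) + z0 * (1.0 + xi**2)
    beta = math.atan2(num, den)   # stays in (0, pi) since num > 0
    return delta, beta

for xi in (0.3, 1.0, 3.0):
    delta, beta = phase_delays(xi)              # z0 = 0: pristine inflow
    assert abs(beta - 2.0 * delta) < 1e-12      # Equation 12: beta = 2*delta
```

With a non-zero inflow metallicity the double-angle relation no longer holds exactly, but $\beta$ remains between 0 and $\pi$, i.e. the metal mass always lags the gas mass.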
The dependence of $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ on $\xi$ can be clearly seen in the top left panel of Figure \ref{fig:1}, where we show examples of the evolution of $M_{\rm gas}$ (blue), SFR (red), $M_{\rm Z}$ (orange) and $Z$ (purple) when driving the gas-regulator system with a periodic sinusoidal inflow. For illustrative purposes, we set $Z_{\rm 0}=0$, $\Phi_{\rm t}/\Phi_{\rm 0}=0.1$, $T_{\rm p}=1$ Gyr, and $\log \xi=-0.5$, 0.0, and 0.5. Given the solutions for $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ in Equations \ref{eq:6} and \ref{eq:8}, the resulting SFR$(t)$ and $Z(t)$ can be easily obtained. Since the SFE is assumed here (in this subsection) to be time-invariant, the change of SFR exactly follows the change of cold gas mass. Therefore, the blue and red lines in the top left panel of Figure \ref{fig:1} overlap. However, $Z(t)$, i.e. the ratio of $M_{\rm Z}(t)$ to $M_{\rm gas}(t)$, shows a more complicated behavior than ${\rm SFR}(t)$, because it is not a sinusoidal function. The variation of the metallicity depends on the amplitudes of the variations in $M_{\rm Z}(t)$ and $M_{\rm gas}(t)$, as well as on the phase-delay between the two. To clarify the correlation between the instantaneous SFR$(t)$ and $Z(t)$, we plot $\log {\rm SFR}(t)/\langle {\rm SFR}\rangle$ vs. $\log Z(t)/\langle {Z}\rangle$ and $\log M_{\rm gas}(t)/\langle {M_{\rm gas}}\rangle$ vs. $\log Z(t)/\langle {Z}\rangle$ for a set of different $\xi$ in the top middle and right panels of Figure \ref{fig:1}. Since $\log {\rm SFR}(t)/\langle {\rm SFR}\rangle$ is a relative quantity, i.e. $\log {\rm SFR}(t) - \log \langle {\rm SFR}\rangle$, we also denote $\log {\rm SFR}(t)/\langle {\rm SFR}\rangle$ as $\Delta$SFR. In the same way, we denote $\log Z(t)/\langle {Z}\rangle$ as $\Delta Z$, and $\log M_{\rm gas}(t)/\langle {M_{\rm gas}}\rangle$ as $\Delta M_{\rm gas}$.
As shown, for all the different $\xi$ shown here, the gas-regulator model predicts that $\Delta$SFR and $\Delta Z$ are {\it negatively} correlated when the system is driven with a sinusoidal inflow rate. The slope and tightness of the $\Delta$SFR-$\Delta Z$ correlation strongly depend on $\xi$. Generally speaking, at fixed $T_{\rm p}$, the $\Delta$SFR-$\Delta Z$ correlation becomes weaker and steeper with increasing effective gas depletion time. The slope of the $\Delta$SFR-$\Delta Z$ relation is always steeper than $-1$. This means that the gas-regulator model requires that the scatter of $\Delta Z$ is always less than or equal to the scatter of $\Delta$SFR. We will come back to this point later. A pure sinusoidal variation of the inflow may or may not be realistic (though we note that it should apply to any individual Fourier component of a more complex time series, as in \cite{Wang-19}). In addition to the sinusoidal inflow rate, we also, for completeness, explored the effect of a periodic step function in the inflow rate. The bottom panels of Figure \ref{fig:1} show the resulting $M_{\rm gas}(t)$, SFR$(t)$, $M_{\rm Z}(t)$ and $Z(t)$, as well as the resulting correlation between $\Delta$SFR (and $\Delta M_{\rm gas}$) and $\Delta Z$. In generating the plots, we set the period of the step function to 2 Gyr, and varied the duration of the high-inflow state ($\tau_{\rm s}$) from $0.1\tau_{\rm dep,eff}$ to $\tau_{\rm dep,eff}$. As shown in the bottom-left panel of Figure \ref{fig:1}, a sudden increase of the inflow rate causes an increase of the SFR (or cold gas mass) and a decrease of the gas-phase metallicity, and vice versa. This therefore also leads to a {\it negative} correlation between SFR (or cold gas mass) and metallicity, i.e. between $\Delta$SFR (or $\Delta M_{\rm gas}$) and $\Delta Z$, consistent with the result for the sinusoidal variation in the top panels of Figure \ref{fig:1}.
Although we have illustrated the properties of the gas-regulator model using only two types of inflow, we argue that the basic result of a negative correlation must hold for more complicated inflow histories, since any given inflow function can be expressed as a linear combination of sinusoidal functions of different frequencies via the Fourier transform. The gas-regulator system will respond to these individual sinusoidal components independently according to their $\xi$, via Equations \ref{eq:6} and \ref{eq:8}. This is also why the analysis in \cite{Lilly-13} gave an FMR with a negative correlation between $Z$ and SFR. Of course, observationally we cannot follow the temporal evolution of a single gas-regulated galaxy. In \citetalias{Wang-19} we therefore explored the scatter of the instantaneous SFR in a population of gas regulators, $\sigma({\rm SFR})$, when they are driven with simple sinusoidal $\Phi(t)$, and showed that within this population $\sigma({\rm SFR})$/$\sigma(\Phi)$ is a monotonically decreasing function of $\xi$: \begin{equation} \label{eq:13} \frac{\sigma({\rm SFR})}{\sigma(\Phi)} = \frac{1}{(1+\xi^2)^{1/2}} \cdot (1+\lambda)^{-1}. \end{equation} In Equation \ref{eq:13}, the scatters of SFR and $\Phi$ are calculated in linear space, while in the observations the scatter of SFR is usually measured in logarithmic space. Here we present an approximate analytical solution for $\sigma({\rm \log SFR})/\sigma(\log \Phi)$, which can be written as: \begin{equation} \label{eq:14} \frac{\sigma({\rm \log SFR})}{\sigma(\log \Phi)} \approx \frac{1}{(1+\xi^2)^{1/2}}. \end{equation} As can be seen in Equation \ref{eq:14}, if the scatter is measured in logarithmic space, the factor $1+\lambda$ vanishes, since it can be viewed as a constant ``inefficiency'' of the star formation \citep[see][]{Lilly-13}. The details of the derivation of Equation \ref{eq:14} are given in Appendix \ref{sec:A}.
The left panel of Figure \ref{fig:3} shows the numerical solution (black solid curve) and the approximate analytical solution (gray dashed curve) of $\sigma({\rm \log SFR})/\sigma(\log \Phi)$ as a function of $\log\xi$, which are in excellent agreement. As proposed in \citetalias{Wang-19} and \cite{Wang-20b}, Equation \ref{eq:14} provides the basic link between the PSD of the resulting SFR$(t)$ and the PSD of the input $\Phi(t)$ in the gas-regulator framework: the PSD of SFR$(t)$ should be the PSD of $\Phi(t)$ multiplied by $1/(1+\xi^2)$. However, the inflow history of galaxies is of course not a directly observable quantity. Finally, we come to the scatter of the gas-phase metallicities in the population. As shown in the top middle panel of Figure \ref{fig:1}, the ratio of the scatter in $Z(t)$ to the scatter in SFR$(t)$ (i.e. what would be observed in a given population of regulators at fixed time) is predicted to be strongly dependent on $\xi$. Here we present the approximate analytical solution for $\sigma({\rm \log Z})/\sigma(\log {\rm SFR})$, which can be written as: \begin{equation} \label{eq:15} \frac{\sigma(\log Z)}{\sigma(\log {\rm SFR})} \approx \frac{\xi}{(1+\xi^2)^{1/2}} \cdot \frac{1}{1+Z_0/y_{\rm eff}}. \end{equation} The detailed derivation of Equation \ref{eq:15} is given in Appendix \ref{sec:A}. Since $\sigma({\rm \log Z})/\sigma(\log {\rm SFR})$ depends only on $\xi$ at fixed $Z_{\rm 0}/y_{\rm eff}$, we present both the numerical (the blue, green and red curves) and the analytic solutions (the gray dashed curves) of $\sigma(\log Z)/\sigma(\log {\rm SFR})$ in the left panel of Figure \ref{fig:3} at three different $Z_{\rm 0}/y_{\rm eff}$, which are again in excellent agreement. For all three $Z_{\rm 0}/y_{\rm eff}$, $\sigma({\rm \log Z})/\sigma(\log {\rm SFR})$ monotonically increases with $\log \xi$, which is opposite to the behaviour of $\sigma({\rm \log SFR})/\sigma(\log \Phi)$.
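As an independent check of Equation \ref{eq:15}, one can sample the analytic small-amplitude solutions over one period and measure the two logarithmic scatters directly. The sketch below is our own illustrative check, restricted to the pristine-inflow case $Z_0=0$ (so the approximation reduces to $\xi/(1+\xi^2)^{1/2}$); the function name and amplitude are assumptions for demonstration.

```python
import math

# Sample the analytic small-amplitude solutions (Equations 5, 6 and 12,
# pristine inflow Z_0 = 0) over one period and compare the measured
# sigma(log Z)/sigma(log SFR) with the approximation of Equation 15.

def scatter_ratio(xi, a=0.02, n=4000):
    """a = Phi_t/Phi_0; return the sampled sigma(log Z)/sigma(log SFR)."""
    delta = math.atan(xi)
    beta = 2.0 * delta                      # phase delay of M_Z when Z_0 = 0
    am = a / math.sqrt(1.0 + xi**2)         # M_t / M_0   (Equation 7)
    az = a / (1.0 + xi**2)                  # M_Zt / M_Z0 (Equation 12)
    log_sfr, log_z = [], []
    for i in range(n):
        wt = 2.0 * math.pi * i / n
        m = 1.0 + am * math.sin(wt - delta)
        mz = 1.0 + az * math.sin(wt - beta)
        log_sfr.append(math.log10(m))       # SFR tracks M_gas at fixed SFE
        log_z.append(math.log10(mz / m))
    def std(x):
        mu = sum(x) / len(x)
        return math.sqrt(sum((v - mu) ** 2 for v in x) / len(x))
    return std(log_z) / std(log_sfr)

checks = []
for xi in (0.3, 1.0, 3.0):
    approx = xi / math.sqrt(1.0 + xi**2)    # Equation 15 with Z_0 = 0
    checks.append((xi, scatter_ratio(xi), approx))
```

For small driving amplitudes the sampled ratio reproduces the analytic approximation to well below a percent, confirming that $\sigma(\log Z)$ never exceeds $\sigma(\log {\rm SFR})$ in this regime.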
Intriguingly, if $Z_0=0$, Equations \ref{eq:14} and \ref{eq:15} are strictly symmetric about the axis $\log \xi=0$. Unlike Equation \ref{eq:14}, Equation \ref{eq:15} relates two readily observable quantities, the instantaneous SFR and the instantaneous gas-phase metallicity. \subsection{Driving the gas-regulator system with time-varying SFE} \label{sec:2.3} \begin{figure*} \begin{center} \epsfig{figure=./fig/model/sfe/sine/example_sine_sfe.eps,clip=true,width=0.32\textwidth} \epsfig{figure=./fig/model/sfe/sine/plot_zgas_sfr_2Gyr_l0.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/model/sfe/sine/plot_zgas_mgas_2Gyr_l0.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/model/sfe/step/example_step_sfe.eps,clip=true,width=0.32\textwidth} \epsfig{figure=./fig/model/sfe/step/plot_zgas_sfr_2Gyr_l0.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/model/sfe/step/plot_zgas_mgas_2Gyr_l0.eps,clip=true,width=0.33\textwidth} \end{center} \caption{Illustration of the SFR, $M_{\rm gas}$\ and $Z_{\rm gas}$\ in response to a time-varying SFE$(t)$ of the sinusoidal form (upper panels) and the periodic step-function form (lower panels) with constant inflow rate, in the gas-regulator framework. The panels, lines and colors are the same as in Figure \ref{fig:1}, except that the gas-regulator system is now driven with a periodic SFE$(t)$ and constant inflow rate. } \label{fig:2} \end{figure*} \begin{figure*} \begin{center} \epsfig{figure=./fig/model/inflow/sine/scatter_sine_inflow.eps,clip=true,width=0.42\textwidth} \epsfig{figure=./fig/model/sfe/sine/scatter_sine_sfe.eps,clip=true,width=0.42\textwidth} \end{center} \caption{Left panel: The ratio of $\sigma$($\log$O/H) to $\sigma$($\log$SFR) as a function of $\xi$, determined by numerical calculation, when the gas regulator is driven with a sinusoidal inflow rate ($\Phi_{\rm t}/\Phi_{\rm 0}=0.1$) and constant SFE.
The colored lines show the relations for three different $Z_{\rm 0}/y_{\rm eff}$. We also display the ratio of $\sigma$($\log$SFR) to $\sigma$($\log \Phi$) as a function of $\xi$ as a black line. Each line is followed by a gray dashed line, which shows the approximate analytic solution (see Equations \ref{eq:14} and \ref{eq:15}). Right panel: The same as the left panel but for the case in which the gas regulator system is driven with a sinusoidal SFE and constant inflow rate (see Equations \ref{eq:19} and \ref{eq:20}). } \label{fig:3} \end{figure*} In the previous subsection, we looked at the behavior of the gas-regulator system with time-varying inflow and time-invariant SFE. In this subsection, we will explore the behavior when the regulator is driven with a constant inflow rate but experiences a time-varying SFE. Similar to Section \ref{sec:2.2}, we first input a time-invariant inflow, i.e. $\Phi(t)=\Phi_0$, and a sinusoidally varying SFE(t): \begin{equation} \label{eq:16} {\rm SFE}(t) = {\rm SFE_0} + {\rm SFE_t} \cdot {\rm sin}(2\pi t/T_{\rm p}). \end{equation} The driving period of SFE$(t)$ is again denoted as $T_{\rm p}$. As before, we look for the solution of $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ in terms of Equation \ref{eq:5}. However, we note that, unlike in Section \ref{sec:2.2}, the solutions for $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ are not exactly sinusoidal, but can be approximated by sinusoids. We assume the variation of the input SFE(t) is small, i.e. SFE$_{\rm t}\ll $SFE$_{\rm 0}$. Therefore, the variations of the resulting $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ are also small, i.e. $M_{\rm t}\ll M_{\rm 0}$ and $M_{\rm Zt}\ll M_{\rm Z0}$.
By substituting Equations \ref{eq:5} and \ref{eq:16} into Equations \ref{eq:2} and \ref{eq:3} and ignoring second-order terms, we obtain the approximate solutions for $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$, which can be written as: \begin{equation} \label{eq:17} \begin{split} M_{\rm 0} = &\ \Phi_{\rm 0}\tau_{\rm dep,eff} \\ \delta \ \ = &\ {\rm arctan}(\xi) \\ \frac{M_{\rm t}}{M_{\rm 0}} = &\ - \frac{1}{(1+\xi^2)^{1/2}}\times \frac{{\rm SFE}_{\rm t}}{{\rm SFE}_{\rm 0}}, \end{split} \end{equation} and \begin{equation} \label{eq:18} \begin{split} M_{\rm Z0} = &\ (y_{\rm eff}+Z_{\rm 0})\Phi_{\rm 0}\tau_{\rm dep,eff} \\ \beta \ \ = & \ {\rm arctan}[\frac{2y_{\rm eff}\xi + Z_{\rm 0}\xi(1+\xi^2)}{y_{\rm eff}(1-\xi^2)+Z_{\rm 0}(1+\xi^2)}] \\ \frac{M_{\rm Zt}}{M_{\rm Z0}} = &\ -\frac{(1+\eta^2)^{1/2}}{1+\xi^2}\times \frac{{\rm SFE}_{\rm t}}{{\rm SFE}_{\rm 0}}. \end{split} \end{equation} We emphasize that the definitions of $\xi$ and $\eta$ are the same as in Section \ref{sec:2.2}. Note the similarity of the solutions in Equations \ref{eq:17} and \ref{eq:18} to those in Equations \ref{eq:6} and \ref{eq:8}. Specifically, the expression for the phase-delay is the same, and the response amplitude of both $M_{\rm gas}(t)$ and $M_{Z}(t)$, relative to the amplitude of the variation in the driving parameter, is also the same. The difference in sign reflects the fact that, when driving the regulator with time-varying $\Phi(t)$, an increase of $\Phi(t)$ produces an increase of both $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ with some time-delay, while when driving the regulator with time-varying SFE$(t)$, an increase of SFE$(t)$ leads to decreases of both $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$, again with some time-delay.
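The sign and amplitude relations in Equation \ref{eq:17} can be checked with a small numerical experiment (our own sketch and unit choices, with ${\rm SFE}_0=\Phi_0=1$ and $\lambda=0$, so that $\tau_{\rm dep,eff}=1$ and $\omega=\xi$):

```python
import math

def sfe_driven_response(xi, amp=0.02, dt=1e-3, n_burn=50.0, n_periods=15):
    """Drive the regulator with SFE(t) = 1 + amp*sin(omega*t) and constant
    inflow, then measure (a) the relative amplitude of M_gas, expected to be
    amp/(1+xi^2)^0.5, and (b) the covariance of M_gas with SFE, expected to
    be negative (the minus sign in Eq. 17).  Names and choices are ours."""
    omega, M, t = xi, 1.0, 0.0       # start at equilibrium M_0 = Phi_0 * tau
    t_end = n_burn + n_periods * 2.0 * math.pi / omega
    ms, sfes = [], []
    while t < t_end:
        sfe = 1.0 + amp * math.sin(omega * t)
        if t >= n_burn:              # discard the transient
            ms.append(M)
            sfes.append(sfe)
        M += (1.0 - sfe * M) * dt    # dM/dt = Phi_0 - SFE(t)*(1+lam)*M
        t += dt
    mean_m = sum(ms) / len(ms)
    rel = [m / mean_m - 1.0 for m in ms]
    # for a sinusoid, amplitude = sqrt(2) * rms
    amp_m = math.sqrt(2.0 * sum(v * v for v in rel) / len(rel))
    cov = sum(r * (s - 1.0) for r, s in zip(rel, sfes)) / len(rel)
    return amp_m, cov
```

At $\xi=1$ the measured $M_{\rm gas}$ amplitude is close to ${\rm SFE}_{\rm t}/({\rm SFE}_{\rm 0}\sqrt{2})$ and the covariance is negative, i.e. the gas mass responds in anti-phase, as Equation \ref{eq:17} states.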
We note that the difference in sign cannot be simply treated as an additional phase-delay of $\pi$, because it would be physically unreasonable for $M_{\rm gas}(t)$ to follow the change of SFE$(t)$ in the same direction in a gas-regulator system with time-invariant inflow rate. This is clearly illustrated in the top-left panel of Figure \ref{fig:2}, where we show examples of the evolution of $M_{\rm gas}$ (blue), SFR (red), $M_{\rm Z}$ (orange) and $Z$ (purple) when driving the gas-regulator system with a periodic sinusoidal SFE. As previously, we also investigate the correlation between $\Delta$SFR (and $\Delta M_{\rm gas}$) and $\Delta Z$ for a set of different $\xi$, as shown in the top-middle (and top-right) panel of Figure \ref{fig:2}. In contrast with the result in Section \ref{sec:2.2}, $\Delta$SFR and $\Delta Z$ now show a strong {\it positive} correlation. However, we note that $\Delta M_{\rm gas}$ and $\Delta Z$ remain negatively correlated, as would be expected, independent of whether the gas-regulator is driven with time-varying inflow or time-varying SFE. In this sense, metallicity variations fundamentally reflect variations in the total gas mass in the gas regulator reservoir \citep[see][]{Lilly-13, Bothwell-13, Bothwell-16}. As in Section \ref{sec:2.2}, we also look at periodic step functions for the time-variation of SFE. Such changes in SFE may well be more realistic than sinusoidal changes. The bottom panels of Figure \ref{fig:2} demonstrate the resulting $M_{\rm gas}(t)$, SFR$(t)$, $M_{\rm Z}(t)$ and $Z(t)$, as well as the correlation between $\Delta$SFR (and $\Delta M_{\rm gas}$) and $\Delta Z$. In generating the plots, we set the period of the step function to be 2 Gyr, and vary the upper-state duration $\tau_{\rm s}$ from $0.1\tau_{\rm dep,eff}$ to $\tau_{\rm dep,eff}$, where $\tau_{\rm dep,eff}$ is calculated from the SFE in its lower state.
A sudden increase of SFE causes an immediate increase, followed by a decline, of the SFR, together with a subsequent decrease of $M_{\rm gas}$ and a subsequent increase of the gas-phase metallicity. As a whole, it is not immediately clear what the sign of the $\Delta$SFR-$\Delta Z$ correlation will be, given the fact that the lower branch has many more data points (reflecting the longer interval of time) than the upper branch in the bottom-middle panel of Figure \ref{fig:2}. There is a strong {\it asymmetry} in the distribution of SFR through the cycle, indicated by the number density of the data points in the bottom-middle panel of Figure \ref{fig:2}. Specifically, SFR$(t)$ stays close to its median value for most of the time, but shows a strong increase for a short period. The asymmetry becomes more significant as the relative duration of the increased phase of SFE is decreased. However, one thing is clear: the states with strongly {\it enhanced} SFR are always {\it metal-enhanced} with respect to the mean metallicity. These phases are represented in the upper locus of points in the figure which have $Z_{\rm gas} > \langle Z \rangle$. Consistent with the top-right panel of Figure \ref{fig:2}, we conclude that $\Delta M_{\rm gas}$ and $\Delta Z$ will always show an overall negative correlation, seen most clearly in the highest-SFR points. Similar to Section \ref{sec:2.2}, we again present the approximate analytical solutions for $\sigma(\log {\rm SFR})/\sigma(\log {\rm SFE})$ and $\sigma(\log Z)/\sigma(\log {\rm SFR})$ when driving the gas-regulator with sinusoidal SFE. These quantities can be written as: \begin{equation} \label{eq:19} \frac{\sigma({\rm \log SFR})}{\sigma(\log {\rm SFE})} \approx \frac{\xi}{(1+\xi^2)^{1/2}} \end{equation} and \begin{equation} \label{eq:20} \frac{\sigma(\log Z)}{\sigma(\log {\rm SFR})} \approx \frac{1}{(1+\xi^2)^{1/2}} \cdot \frac{1}{1+Z_0/y_{\rm eff}}.
\end{equation} The right panel of Figure \ref{fig:3} shows the numerical solution (solid curves) and the approximate analytical solution (gray dashed curves) of $\sigma(\log {\rm SFR})/\sigma(\log {\rm SFE})$ and $\sigma(\log Z)/\sigma(\log {\rm SFR})$ as a function of $\log\xi$. As shown, the numerical solution is well matched by the analytical solution. Intriguingly, Equations \ref{eq:19} and \ref{eq:20} are the exact mirror images of Equations \ref{eq:14} and \ref{eq:15}, respectively, about the axis $\log \xi=0$. When driving the gas-regulator with a time-varying SFE, $\sigma(\log {\rm SFR})/\sigma(\log {\rm SFE})$ is predicted to increase with $\xi$, while $\sigma(\log Z)/\sigma(\log {\rm SFR})$ is predicted to decrease with $\xi$. Although Equations \ref{eq:14}, \ref{eq:15}, \ref{eq:19} and \ref{eq:20} are only approximate solutions, obtained in the limit of small variations of the inflow rate or SFE, we have verified numerically that they remain reasonable approximations even when the variations of the inflow rate or SFE are quite significant. \subsection{The effects of mass-loading, $Z_{\rm 0}$ and the yield} \label{sec:2.4} \begin{figure*} \begin{center} \epsfig{figure=./fig/model/lyz/plot_zgas_sfr_all_inflow1.eps,clip=true,width=0.95\textwidth} \epsfig{figure=./fig/model/lyz/plot_zgas_sfr_all_inflow2.eps,clip=true,width=0.95\textwidth} \epsfig{figure=./fig/model/lyz/plot_zgas_sfr_all_inflow.eps,clip=true,width=0.95\textwidth} \end{center} \caption{Illustration of the role of $Z_{\rm 0}$ and the mass-loading factor $\lambda$ in shaping the correlation of SFR and $Z_{\rm gas}$, when driving the gas regulator system with a sinusoidal inflow rate and constant SFE. From top to bottom, we set $Z_{\rm 0}$ to be 0.0, 0.5$y$ and 0.5$y_{\rm eff}$. The $y_{\rm eff}$ is defined as $y/(1+\lambda)$. In each panel, we explore the cases of five different mass-loading factors, i.e. $\lambda=$0.0, 0.5, 1.0, 2.0, and 4.0.
The lines are color-coded with $\xi$ as before. The black line segments indicate the median gas-phase metallicity for the five different mass-loading factors in the three panels, which is exactly equal to 12+$\log(y_{\rm eff}+Z_{\rm 0})$. } \label{fig:4} \end{figure*} \begin{figure*} \begin{center} \epsfig{figure=./fig/model/lyz/plot_zgas_sfr_all_sfe1.eps,clip=true,width=0.95\textwidth} \epsfig{figure=./fig/model/lyz/plot_zgas_sfr_all_sfe2.eps,clip=true,width=0.95\textwidth} \epsfig{figure=./fig/model/lyz/plot_zgas_sfr_all_sfe.eps,clip=true,width=0.95\textwidth} \end{center} \caption{The same as Figure \ref{fig:4}, but driving the gas regulator system with sinusoidal SFE and constant inflow. } \label{fig:5} \end{figure*} In Sections \ref{sec:2.2} and \ref{sec:2.3}, we have explored the behavior of the gas-regulator system when it is driven by time-varying inflow and time-varying SFE, respectively. In this subsection, we will explore how the metallicities are modified by changes in the wind mass-loading factor and the metallicity of the inflowing gas. The assumed yield enters as a simple scaling factor throughout. Following Section \ref{sec:2.2}, we drive the gas-regulator with a sinusoidally varying inflow rate for a set of different mass-loading factors and $Z_{\rm 0}$. First, we set $Z_{\rm 0} = 0$, and show the $\Delta$SFR vs. $\log Z$ in the top panel of Figure \ref{fig:4} for different mass-loading factors. The yield $y$ is set to be 0.001 \citep{Lilly-13}, which accounts only for the yield of Oxygen; this makes comparison with observations convenient. We note that assuming a different yield does not change any of our conclusions. At $Z_{\rm 0}=0$, the relative changes in metallicity and star formation, i.e. the $\Delta$SFR-$\Delta Z$ relation, stay the same as $\lambda$ varies, while the {\it mean} metallicity decreases with increasing $\lambda$. We then set $Z_{\rm 0} = 0.5y$, and obtain the $\Delta$SFR vs.
$\log Z$ shown in the middle panel of Figure \ref{fig:4}. As shown, the basic negative correlation between $\Delta$SFR and $\Delta Z$ is retained, but the relative change in metallicity, i.e. the scatter of the metallicity in the population, decreases with increasing $Z_{\rm 0}$ and increasing $\lambda$ (if $Z_{\rm 0}\ne 0$), compared to the top panel of Figure \ref{fig:4}. Finally, we set $Z_{\rm 0}= 0.5y_{\rm eff}$, and the $\Delta$SFR vs. $\log Z$ is shown in the bottom panel of Figure \ref{fig:4}. The correlation between $\Delta$SFR and $\Delta Z$ is the same when $Z_0/y_{\rm eff}$ is fixed. These results follow from Equation \ref{eq:15}, where it is clear that the ratio of $\sigma(\log Z)$ to $\sigma(\log {\rm SFR})$ depends only on $Z_{\rm 0}/y_{\rm eff}$ once $\xi$ is fixed. At a given $\xi$, the scatter of $\log Z$ is scaled by the factor $(1+Z_{\rm 0}/y_{\rm eff})^{-1}$ with respect to $\sigma(\log Z)$ at $Z_{\rm 0}=0$. Another interesting point is that the mean metallicity does depend on $Z_0$, $\lambda$ and $y$. As a whole, the mean gas-phase metallicity increases with increasing $Z_0$ and/or decreasing $\lambda$. In fact, the mean SFR and metallicity can be obtained analytically from Equations \ref{eq:6} and \ref{eq:8} (or Equations \ref{eq:17} and \ref{eq:18}): \begin{equation} \label{eq:21} \langle {\rm SFR}\rangle = M_{\rm 0}\cdot {\rm SFE} = \Phi_{\rm 0} \cdot (1+\lambda)^{-1}. \end{equation} and \begin{equation} \label{eq:22} \langle {Z}\rangle = \frac{M_{\rm Z0}}{M_{\rm 0}} = Z_{\rm 0}+y_{\rm eff}. \end{equation} For completeness, we also look at driving the gas-regulator system with sinusoidal SFE. The results are shown in Figure \ref{fig:5}. As shown, the effects of $Z_{\rm 0}$, $\lambda$ and $y$ on the correlation between $\Delta$SFR and $\Delta Z$ follow Equation \ref{eq:20}, and their effects on the mean SFR and metallicity follow Equations \ref{eq:21} and \ref{eq:22}.
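Equations \ref{eq:21} and \ref{eq:22} can also be verified numerically. The sketch below (our own parameter and unit choices, with ${\rm SFE}_0=\Phi_0=1$) time-averages a sinusoidally driven regulator and recovers $\langle{\rm SFR}\rangle=\Phi_0/(1+\lambda)$ and $\langle Z\rangle=Z_0+y_{\rm eff}$:

```python
import math

def long_term_means(lam=1.0, Z0=0.0005, y=0.001, amp=0.2,
                    omega=2.0, dt=1e-3, n_burn=50.0, n_periods=20):
    """Time-averaged SFR and Z for a sinusoidal inflow and constant SFE.
    SFE0 = 1 and Phi0 = 1, so tau_dep_eff = 1/(1+lam)."""
    sfe = 1.0
    tau = 1.0 / (sfe * (1.0 + lam))          # effective depletion time
    M = tau                                   # equilibrium M0 = Phi0 * tau
    MZ = (y / (1.0 + lam) + Z0) * M           # equilibrium: Z = Z0 + y_eff
    t, t_end = 0.0, n_burn + n_periods * 2.0 * math.pi / omega
    sfrs, zs = [], []
    while t < t_end:
        phi = 1.0 + amp * math.sin(omega * t)
        sfr = sfe * M
        if t >= n_burn:
            sfrs.append(sfr)
            zs.append(MZ / M)
        M += (phi - (1.0 + lam) * sfr) * dt
        MZ += (Z0 * phi + y * sfr - (1.0 + lam) * sfe * MZ) * dt
        t += dt
    return sum(sfrs) / len(sfrs), sum(zs) / len(zs)
```

With $\lambda=1$, $y=0.001$ and $Z_0=0.5y_{\rm eff}$, the averages come out close to $\langle{\rm SFR}\rangle=0.5\,\Phi_0$ and $\langle Z\rangle=0.001$, independent of the driving amplitude to first order.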
Although here we do not try to set a different yield, we argue that the effect of varying the yield would also follow Equation \ref{eq:15} or Equation \ref{eq:20}. Based on Equations \ref{eq:21} and \ref{eq:22}, we find that the mean SFR is determined by the mean inflow rate and the mass-loading factor regardless of the SFE, simply because the gas reservoir adjusts to maintain long-term balance, while the mean metallicity depends only on $Z_{\rm 0}$ and $y_{\rm eff}$, regardless of the magnitude of the inflow rate or SFE. We conclude that, in the gas-regulator framework, the metallicity is primarily determined by $Z_{\rm 0}$ and $y_{\rm eff}$, with a secondary dependence on SFR (or cold gas mass) due to the time-variation of the inflow rate or SFE. It is important to note that the metallicity does not depend on the {\it absolute} values of the inflow rate and SFE, but rather on the changes in them. Therefore, when investigating whether the SFR is a secondary parameter in determining the metallicity, one should look at the correlation of the relative values of SFR and $Z$ (or residuals like $\Delta$SFR and $\Delta Z$ in Figure \ref{fig:1} and Figure \ref{fig:2}), rather than the absolute values, in order to eliminate the effect of different $\langle {\rm SFR}\rangle$ and $\langle Z\rangle$ for different galaxies or regions. In Sections \ref{sec:2.2} and \ref{sec:2.3}, we have investigated the properties of the gas regulator by driving the system with 1) time-varying $\Phi$ and time-invariant SFE, and 2) time-varying SFE and time-invariant $\Phi$. The most important result is that there are opposite correlations (negative and positive, respectively) between $\Delta$SFR and $\Delta Z$ for these two modes. However, in the real universe, the inflow rate and SFE could both be time-varying. We have examined the simplest case of simultaneously time-varying $\Phi$ and time-varying SFE.
We assume that both the inflow rate and the SFE are step functions (not periodic), jumping at the same time $t_0$, i.e. $\Phi(t) = \Phi_1$ at $t<t_0$ and $\Phi(t)=\Phi_2$ at $t\ge t_0$, and SFE(t) = SFE$_1$ at $t<t_0$ and SFE(t) = SFE$_2$ at $t\ge t_0$. If $\Phi$ and SFE jump by the same factor $f$, the resulting metallicity does not change and the subsequent SFR jumps by a factor of $f^2$. If the jump in the inflow rate dominates over that in the SFE, the subsequent metallicity decreases and the SFR increases, leading to a negative correlation between the two parameters, and vice versa. That is to say, the correlation between $\Delta$SFR and $\Delta Z$ is a powerful and observationally accessible diagnostic of the {\it dominant} mechanism for the variation of SFR, changes in inflow or changes in SFE, even if both of them vary with time. \section{Data} \label{sec:3} In Section \ref{sec:2}, we established links between the variations of SFR, cold gas mass, and gas-phase metallicity when a gas-regulator system is driven by changes in inflow or SFE, and showed that the sign of the correlation between changes in the easily observable SFR and changes in the metallicity is a key diagnostic of whether changes are driven by variations in inflow or SFE. While these relations have been constructed based on the {\it temporal} changes in a single system, they would of course also apply to an ensemble of such systems observed at a single epoch (assuming the phases are random), provided the assumption of ergodicity applies \citep[see][for a discussion]{Wang-20b}. The goal of the observational part of the paper is therefore to examine the correlation of $\Delta$SFR and $\Delta Z$, computed relative to suitably chosen fiducial values, at different locations.
We will wish to examine this correlation both from galaxy to galaxy, and also at different locations within galaxies, in order to try to assess the relative importance of changes in inflow and SFE on different physical scales. In this section, we briefly introduce the data used in the observational part of this work, namely the 38 SF galaxies from the MUSE Atlas of Disks \citep[MAD;][]{Erroz-Ferrer-19} survey, and the nearly 1000 well-defined SF galaxies from the Mapping Nearby Galaxies at APO (MaNGA) survey (\citetalias{Wang-19}). We refer the readers to \cite{Erroz-Ferrer-19} and \citetalias{Wang-19} for more details of these two galaxy samples. \subsection{The MAD galaxy sample} \label{sec:3.1} The final released sample\footnote{https://www.mad.astro.ethz.ch} of the MAD survey includes 38 weakly inclined spiral galaxies, spanning a large range of stellar mass from $10^{8.5}$ to $10^{11.2}$${\rm M}_\odot$. These galaxies were observed during the MUSE Guaranteed Time Observing runs on the Very Large Telescope (VLT) between 2015 April and 2017 September. The on-source exposure time was 1 hour per galaxy, and the seeing ranged between 0.4 and 0.9 arcsec. These galaxies are very nearby, with $z<0.013$, leading to an average spatial resolution of $\sim$100 pc or better. The MUSE field of view is 1 arcmin$^2$, and the wavelength coverage of the spectra is from 4650 to 9300 \AA. The data were reduced using the MUSE pipeline \citep{Weilbacher-NG}, including bias and dark subtraction, flat fielding, wavelength calibration and so on. The data downloaded from the data release are not the original two-dimensional spectra, but rather the derived measurements from the spectra. These include the strengths of the emission lines, such as the flux maps of H$\beta$, [OIII]$\lambda$4959,5007, H$\alpha$, [NII]$\lambda$6548,6583, and [SII]$\lambda$6717,6731. The emission lines are modelled by single Gaussian profiles.
The fluxes of the emission lines are corrected for dust attenuation using the Balmer decrement, assuming Case B recombination. The intrinsic flux ratio of H$\alpha$ to H$\beta$ is taken to be 2.86, assuming an electron temperature of 10$^4$ K and an electron density of $10^2$ cm$^{-3}$. The adopted attenuation curve is the CCM \citep[][]{Cardelli-89, ODonnell-94} extinction curve with $R_{\rm V}=3.1$. In addition to the maps of emission lines, the released MAD data also include the maps of derived quantities, such as SFR surface density ($\Sigma_{\rm SFR}$) and stellar mass surface density ($\Sigma_*$). The SFR is derived from the H$\alpha$ luminosity \citep{Kennicutt-98, Hao-11}, assuming the \cite{Kroupa-01} IMF. The stellar mass density is derived by fitting stellar population models to the stellar continuum twice: the first fit is performed spaxel-by-spaxel, and the second on a stellar Voronoi tessellation (signal-to-noise of 50 in the continuum), using the stellar templates from {\tt MILES} \citep{Sanchez-Blazquez-06} with {\tt pPXF} \citep[][]{Cappellari-04}. The Voronoi-binned results are then mapped back onto individual spaxels to obtain the $\Sigma_*$ map, assuming that the continuum is the same within each stellar Voronoi bin. We note that in the released maps, spaxels that are located within the Seyfert and LINER regions of the BPT diagram \citep[e.g.][]{Baldwin-81, Kewley-01} are masked out. The SFR and gas-phase metallicity cannot be well measured in these masked regions. Their exclusion should not affect our analysis and, therefore, in this work we only focus on the SF and ``composite'' regions. This also means that for each galaxy, we only use some fraction of the spaxels within the MUSE field. The fraction of valid spaxels varies from galaxy to galaxy, with a median value of 0.33. We have checked that our results do not depend on this fraction of used spaxels.
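The Balmer-decrement correction described above can be sketched as follows. The $k(\lambda)$ values are representative of a CCM-like $R_{\rm V}=3.1$ curve and are our own assumption for illustration, not the exact values used in the MAD pipeline:

```python
import math

# Assumed extinction-curve values at Halpha and Hbeta for R_V = 3.1
# (representative numbers, not taken from the MAD reduction itself).
K_HALPHA, K_HBETA = 2.53, 3.61

def ebv_from_balmer(f_halpha, f_hbeta, intrinsic=2.86):
    """Colour excess E(B-V) from the observed Balmer decrement,
    assuming an intrinsic Halpha/Hbeta ratio of 2.86 (Case B)."""
    decrement = f_halpha / f_hbeta
    return 2.5 / (K_HBETA - K_HALPHA) * math.log10(decrement / intrinsic)

def deredden(flux, k_lambda, ebv):
    """Correct an emission-line flux using k(lambda) and E(B-V)."""
    return flux * 10.0 ** (0.4 * k_lambda * ebv)
```

By construction, applying `deredden` to both lines with the derived $E(B-V)$ restores the intrinsic ratio of 2.86.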
The highly-resolved MAD data enable us to investigate the correlation between star formation and metal enhancement down to the scale of giant molecular clouds (GMCs). However, the MAD sample only includes 38 galaxies, which limits the statistical power when examining the galaxy population as a whole. Therefore, we utilize complementary data on the integrated spectra of galaxies from the MaNGA survey. We do not use the individual spaxel data for the MaNGA galaxies because the resolution is so much worse than for MAD. \subsection{The MaNGA galaxy sample} \label{sec:3.2} MaNGA is the largest IFS survey of nearby galaxies to date, and aims at mapping the 2-dimensional spectra of $\sim$10,000 galaxies with redshifts in the range $0.01<z<0.15$. Using the two dual-channel BOSS spectrographs on the Sloan Telescope \citep{Gunn-06, Smee-13}, MaNGA covers the wavelength range 3600-10300 \AA\ at R$\sim$2000. The spatial coverage of individual galaxies is usually larger than 1.5{$R_{\rm e}$}\ with a resolution of 1-2 kpc. The flux calibration, including the flux loss due to atmospheric absorption and instrument response, is accurate to better than 5\% for more than 89\% of MaNGA’s wavelength range \citep{Yan-16}. In this work, we use the well-defined sample of SF galaxies from \citetalias{Wang-19}. Here we only briefly describe the sample definition, and refer the reader to \citetalias{Wang-19} for further details. This galaxy sample was originally selected from the SDSS Data Release 14 \citep{Abolfathi-18}, excluding quenched galaxies, mergers, irregulars, and heavily disturbed galaxies. The quenched galaxies are identified and excluded based on the stellar mass versus SFR diagram. For each individual galaxy, the stellar mass and SFR are measured within the effective radius, i.e. $M_*(<R_{\rm e})$ and SFR($<${$R_{\rm e}$}), based on the MaNGA 2-dimensional spectra. The final MaNGA sample includes 976 SF galaxies, and is a good representation of normal SFMS galaxies.
Similar to the measurements of SFR for the MAD galaxies, the map of SFR surface density of MaNGA galaxies is also determined from the extinction-corrected H$\alpha$ luminosity \citep{Kennicutt-98}. The correction for dust attenuation follows the same approach as for the MAD galaxies, as described in Section \ref{sec:3.1}, using the Balmer decrement and adopting the CCM extinction curve. The maps of stellar mass surface density for MaNGA galaxies are obtained by fitting the stellar continuum using the public fitting code {\tt STARLIGHT} \citep{Cid-Fernandes-04}, using single stellar populations with {\tt Padova} isochrones from \cite{Bruzual-03}. However, we note that in determining the $\Sigma_{\rm SFR}$ and $\Sigma_*$ for MaNGA galaxies, the \cite{Chabrier-03} IMF is assumed, which is different from the one adopted for MAD galaxies. We argue that the two IMFs are quite close to each other, with only a small overall shift in SFR and $M_*$ (or $\Sigma_{\rm SFR}$ and $\Sigma_*$) which does not change any of our conclusions in this work. \subsection{The estimation of gas-phase metallicity} \label{sec:3.3} The $T_{\rm e}$-based method is widely understood to represent the ``gold standard'' in determining the gas-phase metallicity \citep[e.g.][]{Skillman-89, Garnett-02, Bresolin-07, Berg-15, Bian-18}. However, this requires measurement of the weak [OIII]$\lambda$4363 emission line, which is often not detected in the available spectra. Therefore, a number of empirical recipes have been proposed to derive the gas-phase metallicity based only on the strong emission lines \citep[e.g.][]{Kobulnicky-04, Pettini-04, Maiolino-08, Perez-Montero-09, Pilyugin-10, Marino-13, Vogt-15}, such as [OII]$\lambda$3727, H$\beta$, [OIII]$\lambda$5007, H$\alpha$, [NII]$\lambda$6584, and [SII]$\lambda\lambda$6717,6731. However, systematic offsets of 0.2 dex or more are found between these different empirical calibrations, even using the same line measurements \citep{Kewley-08, Blanc-15}.
Not only are there systematic offsets, but the range of the derived metallicities is also different. This is unfortunately important, since we have argued that the variation of the gas-phase metallicity (or the residuals of the metallicity) as the SFR varies is an important diagnostic. The dispersion in metallicities must therefore be considered in the context of the different ranges of the Oxygen abundance measurements obtained with the different methods. Unfortunately, the wavelength coverage of MUSE does not include the [OII]$\lambda$3727 and [OIII]$\lambda$4363 lines, leading to a limited number of usable strong-line prescriptions. In this work, we adopt the empirical relations from \citet[][hereafter \citetalias{Dopita-16}]{Dopita-16} and \citet[][hereafter \citetalias{Pilyugin-16}]{Pilyugin-16}. These two empirical relations are the most recently constructed and have advantages over the previous methods, although ultimately it is not known whether they are more accurate than the others. In the following, we briefly introduce these two calibrators. \subsubsection{{\tt N2S2H$\alpha$}} The {\tt N2S2H$\alpha$} index is a remarkably simple diagnostic proposed by \citetalias{Dopita-16}, which can be written as: \begin{equation} \label{eq:23} {\tt N2S2H\alpha} = \log([{\rm NII}]/[{\rm SII}]) + 0.264\log([{\rm NII}]/{\rm H}\alpha) \end{equation} where [NII] is the flux of [NII]$\lambda$6584 and [SII] is the total flux of [SII]$\lambda\lambda$6717,6731. The empirical relation for the metallicity can then be written as: \begin{equation} \label{eq:24} 12 + \log ({\rm O/H}) = 8.77 + {\tt N2S2H\alpha} + 0.45({\tt N2S2H\alpha}+ 0.3)^5. \end{equation} This simple empirical relation is suitable over the full range of metallicity. The H$\alpha$, [NII]$\lambda$6584 and [SII]$\lambda\lambda$6717,6731 lines are located close together in wavelength, limiting the spectral range needed, and making the {\tt N2S2H$\alpha$} diagnostic insensitive to reddening.
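For concreteness, Equations \ref{eq:23} and \ref{eq:24} can be implemented directly (a sketch; the function and argument names are ours):

```python
import math

def n2s2ha_metallicity(f_nii_6584, f_sii_6717, f_sii_6731, f_halpha):
    """12 + log(O/H) from the N2S2Halpha diagnostic of Dopita et al. (2016):
    the index of Eq. 23 inserted into the polynomial of Eq. 24."""
    sii = f_sii_6717 + f_sii_6731          # total [SII] doublet flux
    index = (math.log10(f_nii_6584 / sii)
             + 0.264 * math.log10(f_nii_6584 / f_halpha))
    return 8.77 + index + 0.45 * (index + 0.3) ** 5
```

For example, when [NII] equals both the total [SII] flux and the H$\alpha$ flux, the index is zero and the relation returns $12+\log({\rm O/H})\simeq8.771$.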
The [NII]/H$\alpha$ term provides a correction for the weak dependence on the ionization parameter and gas pressure. \citetalias{Dopita-16} argued that this diagnostic should be a fairly reliable metallicity estimator with only a small residual dependence on other physical parameters, and that it can be used in a wide range of environments. However, this metallicity estimator depends strongly on the relative N/O ratio. In the calibration of {\tt N2S2H$\alpha$}, \citetalias{Dopita-16} assumed a universal correlation between N/O and O/H, determined from a mixture of both stellar and nebular sources \citep{Nicholls-17}. This means that any galaxies or regions that deviate from the adopted N/O-O/H relation would carry an extra uncertainty in the metallicity measurement when using {\tt N2S2H$\alpha$} as the metallicity estimator. \subsubsection{{\tt Scal}} The S-calibration ({\tt Scal}) metallicity estimator was proposed by \citetalias{Pilyugin-16}, based on three standard diagnostic line ratios: \begin{equation} \begin{split} {\tt N2} = \ & [{\rm NII}]\lambda\lambda6548,6584/{\rm H}\beta, \\ {\tt S2} = \ & [{\rm SII}]\lambda\lambda6717,6731/{\rm H}\beta, \\ {\tt R3} = \ & [{\rm OIII}]\lambda\lambda4959,5007/{\rm H}\beta. \end{split} \end{equation} The {\tt Scal} diagnostic is defined separately for the upper and lower branches of $\log {\tt N2}$.
The {\tt Scal} indicator for the upper branch ($\log {\tt N2}\ge -0.6$) can be written as: \begin{equation} \label{eq:26} \begin{split} 12 + \log({\rm O/H}) = \ & 8.424 + 0.030\log({\tt R3/S2}) + 0.751\log {\tt N2} \\ \ & + (-0.349 + 0.182\log({\tt R3/S2}) \\ \ & + 0.508\log {\tt N2})\times \log {\tt S2}, \end{split} \end{equation} and the {\tt Scal} indicator for the lower branch ($\log {\tt N2}<-0.6$) can be written as: \begin{equation} \label{eq:27} \begin{split} 12 + \log({\rm O/H}) = \ & 8.072 + 0.789\log({\tt R3/S2}) + 0.726\log {\tt N2} \\ \ & + (1.069 - 0.170\log({\tt R3/S2}) \\ \ & + 0.022\log {\tt N2})\times \log {\tt S2}. \end{split} \end{equation} The {\tt Scal} prescription is calibrated to $T_{\rm e}$-based metallicity measurements of several hundred nearby HII regions. \citetalias{Pilyugin-16} found that the {\tt Scal} indicator gives metallicities in very good agreement with the $T_{\rm e}$-based methods, with a scatter of only $\sim$0.05 dex across the full metallicity range. Furthermore, the {\tt Scal} indicator takes advantage of three emission-line ratios, an improvement over previous strong-line methods. In principle, given the wavelength coverage of MUSE, the {\tt N2} and {\tt O3N2} diagnostics, calibrated by \cite{Marino-13}, are also applicable. However, as pointed out by \cite{Kreckel-19}, using {\tt O3N2} (or {\tt N2}) rather than {\tt Scal} can yield qualitatively different results for the MUSE data. In this work, we prefer to use the {\tt N2S2H$\alpha$} and {\tt Scal} metallicity indicators, rather than {\tt N2} and {\tt O3N2}. Indeed, \cite{Marino-13} calibrated the {\tt O3N2} and {\tt N2} diagnostics against the $T_{\rm e}$-based method, and found that they result in uncertainties in the Oxygen abundance of 0.18 dex and 0.16 dex, respectively.
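Similarly, Equations \ref{eq:26} and \ref{eq:27} can be implemented as (a sketch; the function and argument names are ours, and the inputs are the summed doublet fluxes entering {\tt N2}, {\tt S2} and {\tt R3}):

```python
import math

def scal_metallicity(f_nii, f_sii, f_oiii, f_hbeta):
    """12 + log(O/H) from the Pilyugin & Grebel (2016) S calibration.
    f_nii, f_sii, f_oiii are the summed fluxes of [NII]6548,6584,
    [SII]6717,6731 and [OIII]4959,5007, respectively."""
    n2 = math.log10(f_nii / f_hbeta)       # log N2
    s2 = math.log10(f_sii / f_hbeta)       # log S2
    r3s2 = math.log10(f_oiii / f_sii)      # log(R3/S2) = log R3 - log S2
    if n2 >= -0.6:                         # upper branch (Eq. 26)
        return (8.424 + 0.030 * r3s2 + 0.751 * n2
                + (-0.349 + 0.182 * r3s2 + 0.508 * n2) * s2)
    return (8.072 + 0.789 * r3s2 + 0.726 * n2     # lower branch (Eq. 27)
            + (1.069 - 0.170 * r3s2 + 0.022 * n2) * s2)
```

When all three ratios equal unity the logarithms vanish and the upper branch returns its constant term, 8.424, which provides a quick consistency check of the transcription.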
Given the similar ranges of metallicity determined by {\tt Scal}, {\tt O3N2} and {\tt N2} for a given data set, the smaller uncertainty of the {\tt Scal} diagnostic indicates a significant improvement over the previous {\tt O3N2} and {\tt N2} diagnostics. This improvement may come from the fact that 1) {\tt Scal} uses more emission-line ratios, and 2) these break the degeneracy between metallicity and ionization parameter \citep[also see][]{Kreckel-19}. The latter is also true for the {\tt N2S2H$\alpha$} indicator. \subsubsection{The contamination from diffuse ionized gas} The fluxes of emission lines cannot be attributed to star formation activity alone. The diffuse ionized gas (DIG) makes a substantial contribution to the total emission-line flux from disk galaxies \citep{Walterbos-94, Ferguson-96, Greenawalt-98}, especially in regions of low H$\alpha$ surface brightness \citep{Oey-07, Zhang-17}. The line ratios for emission from HII regions and DIG are different, reflecting their different physical origins. The empirical relations for deriving metallicity and SFR from line ratios and line strengths always assume that all of the line emission is due to star formation. This assumption is not unreasonable if the target regions are SF regions on the BPT diagram, while significant contamination from DIG is expected in the ``composite'' and LINER regions. Compared with HII regions, the DIG shows enhanced line ratios of [NII]$\lambda$6584/H$\alpha$ and [SII]$\lambda\lambda$6717,6731/H$\alpha$ \citep{Reynolds-85, Hoopes-03, Madsen-06}. The emission from DIG moves the position of SF regions towards the composite or LINER regions on the BPT diagram \citep{Sarzi-06, Yan-12, Gomes-16}. In this work, we therefore only use the SF and composite regions of galaxies, and exclude those spaxels that are classified as Seyfert or LINER. However, the contamination by DIG in the composite regions may still be significant.
\cite{Erroz-Ferrer-19} have identified the regions in MAD galaxies in which star formation or DIG is dominant. Following the method developed in \cite{Blanc-09}, they measured the fraction of flux coming from DIG and from HII regions, and further defined the DIG regions to be those in which the flux contribution of HII regions is less than 60\%. They found that the HII regions show on average $\sim$0.1 dex higher metallicity than the DIG, while the metallicity radial gradient in both components is similar. Following the analysis of \cite{Erroz-Ferrer-19}, in this work we use all the spaxels in the SF or composite regions in our analysis, regardless of whether they are classified as HII regions or DIG. However, we have recalculated our main result using only the HII regions, and find that the basic result remains unchanged. This indicates that the contamination by DIG is not a major concern in the present work. \section{Observational analysis of MAD galaxies} \label{sec:4} \subsection{Maps and profiles of sSFR and $Z_{\rm gas}$ for MAD galaxies} \label{sec:4.1} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/example/example_prof_ssfr_i27.eps,clip=true,width=0.3\textwidth} \epsfig{figure=./fig/obs/example/example_prof_zgas_i27_DOP16.eps,clip=true,width=0.3\textwidth} \epsfig{figure=./fig/obs/example/example_prof_zgas_i27_PG16.eps,clip=true,width=0.3\textwidth} \epsfig{figure=./fig/obs/example/example_ssfr_i27.eps,clip=true,width=0.3\textwidth} \epsfig{figure=./fig/obs/example/example_zgas_i27_DOP16.eps,clip=true,width=0.3\textwidth} \epsfig{figure=./fig/obs/example/example_zgas_i27_PG16.eps,clip=true,width=0.3\textwidth} \epsfig{figure=./fig/obs/example/example_dssfr_i27.eps,clip=true,width=0.3\textwidth} \epsfig{figure=./fig/obs/example/example_dzgas_i27_DOP16.eps,clip=true,width=0.3\textwidth} \epsfig{figure=./fig/obs/example/example_dzgas_i27_PG16.eps,clip=true,width=0.3\textwidth} \end{center} \caption{An example of the profiles and 2-d maps of
sSFR and $Z_{\rm gas}$\ for a representative MAD galaxy, NGC 1483. Top three panels: The profiles of sSFR, $Z_{\rm gas}$-\citetalias{Dopita-16} and $Z_{\rm gas}$-\citetalias{Pilyugin-16} for NGC 1483. In each panel, the small dots show individual spaxels from NGC 1483, the blue line is the running median profile, and the red line is a linear fit to the data points. Middle three panels: The 2-d maps of sSFR, $Z_{\rm gas}$-\citetalias{Dopita-16} and $Z_{\rm gas}$-\citetalias{Pilyugin-16} for NGC 1483. In each panel, the white regions within the MUSE field of view correspond to spaxels that fall in the Seyfert or LINER regions of the BPT diagram, where the SFR and $Z_{\rm gas}$\ cannot be well determined from the emission lines. Bottom three panels: The maps of $\Delta$sSFR, $\Delta$$Z_{\rm gas}$-\citetalias{Dopita-16} and $\Delta$$Z_{\rm gas}$-\citetalias{Pilyugin-16} for this galaxy. The $\Delta$sSFR, $\Delta$$Z_{\rm gas}$-\citetalias{Dopita-16} and $\Delta$$Z_{\rm gas}$-\citetalias{Pilyugin-16} of each individual spaxel are defined as the deviations from the red lines in the corresponding top panels. } \label{fig:6} \end{figure*} In Section \ref{sec:2}, we investigated the behavior of the SFR$(t)$, the cold gas mass $M_{\rm gas}(t)$ and the gas-phase metallicity $Z(t)$ of the gas regulator system in response to variations in the inflow rate $\Phi(t)$ and star-formation efficiency SFE$(t)$, and how this response depends on the (assumed constant) wind mass-loading factor $\lambda$ and the metallicity of the inflowing gas $Z_{\rm 0}$. Specifically, we found a {\it negative} correlation between $\Delta$SFR and $\Delta Z$ (i.e. $\log {\rm SFR}(t)/\langle {\rm SFR}\rangle$ vs. $\log Z(t)/\langle {Z}\rangle$) when driving the gas-regulator with a time-varying inflow rate, and a {\it positive} correlation between $\Delta$SFR and $\Delta Z$ when driving with a time-varying SFE$(t)$.
Therefore, one can in principle identify the driving mechanism of star formation activity by looking at the sign of the correlation between SFR and gas-phase metallicity in observational data. However, as pointed out above in Section \ref{sec:2.4}, one should look at the correlations of the relative values of SFR and $Z$ (i.e. the residuals $\Delta$SFR and $\Delta Z$ in Figure \ref{fig:1} and Figure \ref{fig:2}), rather than the absolute values, in order to take out the effects of different $\langle {\rm SFR}\rangle$ and $\langle Z\rangle$ for different galaxies or of different regions within them, e.g. the overall mass-metallicity or mass-sSFR relations or radial gradients in metallicity or sSFR within galaxies. In this section, we will therefore construct radial profiles of sSFR and $Z$ for the MAD galaxies, and use these to construct localized $\Delta$sSFR and $\Delta Z$ data points from the observations. Figure \ref{fig:6} shows an example of the radial profiles and 2-dimensional maps of sSFR and $Z_{\rm gas}$ for one individual MAD galaxy, NGC 1483. The top three panels of Figure \ref{fig:6} show the sSFR$(r)$, and the $Z_{\rm gas}(r)$ as estimated by {\tt N2S2H$\alpha$} and {\tt Scal} as a function of radius for individual spaxels. The $Z_{\rm gas}$ based on {\tt N2S2H$\alpha$} diagnostic is denoted as $Z_{\rm gas}$-\citetalias{Dopita-16}, and the $Z_{\rm gas}$ based on {\tt Scal} approach is denoted as $Z_{\rm gas}$-\citetalias{Pilyugin-16}. The radius used here is the de-projected radius scaled by the effective radius of the galaxy. In computing this de-projected radius we use the disk inclination based on the measured minor-to-major axis ratio and the position angle taken from the S4G photometry \citep{Salo-15}, assuming an infinitely thin disk. In each of the top panels, the blue line is a running median of 201 spaxels. 
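For concreteness, the de-projection described above can be sketched as follows. This is a minimal illustration (not the actual MAD reduction code), assuming an infinitely thin disk so that $\cos i$ equals the measured minor-to-major axis ratio $b/a$; the function name and inputs are hypothetical.

```python
import numpy as np

def deprojected_radius(x, y, pa_deg, axis_ratio, r_e):
    """Deprojected galactocentric radius in units of the effective radius.

    x, y       : spaxel offsets from the galaxy centre (same units as r_e)
    pa_deg     : position angle of the major axis, in degrees
    axis_ratio : minor-to-major axis ratio b/a
    r_e        : effective radius
    Assumes an infinitely thin disk, so cos(i) = b/a.
    """
    pa = np.radians(pa_deg)
    # rotate coordinates into the frame of the major axis
    x_maj = x * np.cos(pa) + y * np.sin(pa)
    y_min = -x * np.sin(pa) + y * np.cos(pa)
    # stretch the apparent minor-axis offset by 1/cos(i)
    cos_i = axis_ratio
    r = np.hypot(x_maj, y_min / cos_i)
    return r / r_e
```

For a face-on galaxy ($b/a=1$) this reduces to the ordinary projected radius; for an inclined disk, offsets along the apparent minor axis are stretched by $1/\cos i$ before the radius is computed.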
As shown, for NGC 1483 the distribution of the sSFR at a given radius is quite strongly asymmetric, over nearly the whole range of galactic radius. While the sSFR of most spaxels is close to the median profile (or slightly below it), a small fraction of spaxels have sSFR that is enhanced by up to an order of magnitude relative to the median profile. This is due to the fact that star formation activity is not uniform across the disk, but happens mostly in spiral arms or other star-forming regions. The regions with strongly enhanced sSFR can clearly be seen in the sSFR map in the middle-left panel of Figure \ref{fig:6}. In addition, the sSFR profile shows a positive radial gradient, which is consistent with the inside-out growth expected in disk galaxies \citep[e.g.][]{Perez-13, Li-15, Ibarra-Medel-16, Lilly-16, Goddard-17, Rowlands-18, Wang-18a}. The impression of strong asymmetry is reduced in $Z_{\rm gas}$, for both metallicity indicators. In addition, the overall $Z_{\rm gas}$ profiles for both indicators have a negative radial gradient, consistent with previous studies of disk galaxies \citep[e.g.][]{Pilkington-12, Belfiore-17, Erroz-Ferrer-19}. This feature can also be seen in the maps of $Z_{\rm gas}$, shown in the middle row of Figure \ref{fig:6}. The measurements of $Z_{\rm gas}$ based on {\tt N2S2H$\alpha$} and {\tt Scal} are qualitatively consistent. However, for a given dataset, $Z_{\rm gas}$-\citetalias{Dopita-16} is usually larger than $Z_{\rm gas}$-\citetalias{Pilyugin-16}, and the range of $Z_{\rm gas}$-\citetalias{Dopita-16} is nearly twice that of $Z_{\rm gas}$-\citetalias{Pilyugin-16}. We note that this particular galaxy is typical of the sample: most of the SF disk galaxies show similar features in sSFR and $Z_{\rm gas}$ to those shown for this galaxy.
As pointed out in Section \ref{sec:2.4}, the gas-regulator predicts that the average SFR depends only on the average inflow rate $\Phi_{\rm 0}$ and the mass-loading factor $\lambda$, while the average $Z_{\rm gas}$ is determined by the effective yield, defined from the wind mass-loading as $y(1+\lambda)^{-1}$, and the metallicity of the inflowing gas $Z_{\rm 0}$. In the gas-regulator framework, the fitted average sSFR$(r)$ and $Z_{\rm gas}(r)$ profiles should therefore reflect the radial dependence of the average inflow rate, wind mass-loading and/or $Z_{\rm 0}$ (see further discussion in Section \ref{sec:4.2.2}). The correlation of the median sSFR vs. the median $Z_{\rm gas}$ is therefore not the one we want when comparing with the model predictions in Figure \ref{fig:1} and Figure \ref{fig:2}. Instead, we should eliminate this effect, which is caused by the time-invariant factors, and focus on the values of individual spaxels relative to these underlying trends, i.e. on the $\Delta$sSFR and $\Delta Z_{\rm gas}$ residuals obtained when the underlying sSFR$(r)$ and $Z_{\rm gas}(r)$ profiles are subtracted from each spaxel. To achieve this, we first perform a linear fit to the sSFR$(r)$ and $Z_{\rm gas}(r)$ profiles based on all the individual spaxels. These fits are shown as red lines in the top panels of Figure \ref{fig:6}. As shown, for both sSFR and $Z_{\rm gas}$, the linear fit is quite a good representation of the median profile, although it is not perfect. We then define the $\Delta$sSFR and $\Delta Z_{\rm gas}$ of each individual spaxel as its deviation from the fitted profile of sSFR$(r)$ or $Z_{\rm gas}(r)$ respectively. In this way, we eliminate the overall radial dependences of sSFR and $Z_{\rm gas}$, as well as global differences between galaxies, such as the overall sSFR or effects due to the mass-metallicity relation.
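The construction of these residuals can be sketched as follows; this is a minimal illustration with hypothetical spaxel arrays, not the code actually used in this work.

```python
import numpy as np

def delta_from_radial_fit(r, q):
    """Residuals of a logarithmic quantity q (e.g. log sSFR or 12+log(O/H))
    about a linear fit in radius, q = slope*r + intercept.

    r : de-projected radii of the spaxels (e.g. in units of R_e)
    q : the quantity measured for each spaxel
    Returns (delta_q, slope, intercept).
    """
    # least-squares linear fit; np.polyfit returns [slope, intercept] for deg=1
    slope, intercept = np.polyfit(r, q, deg=1)
    delta_q = q - (slope * r + intercept)
    return delta_q, slope, intercept
```

By construction the residuals average to (approximately) zero, so $\Delta$sSFR and $\Delta Z_{\rm gas}$ measure only the deviations of each spaxel from the fitted radial trend of its own galaxy.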
In the gas-regulator framework, these changes in $\langle {\rm SFR} \rangle$ or $\langle Z_{\rm gas} \rangle$ will reflect differences (radially within galaxies or from galaxy to galaxy) in the overall inflow rate, mass-loading factor and $Z_{\rm 0}$. The bottom three panels of Figure \ref{fig:6} show the maps of $\Delta$sSFR and $\Delta Z_{\rm gas}$ that are obtained for NGC 1483 after removing the radial gradients in sSFR and $Z_{\rm gas}$. It is immediately apparent that regions with enhanced SFR, indicated by red bumps, nearly always show enhanced metallicity for both of the two metallicity indicators. This is consistent with the previous analyses of \cite{Kreckel-19} and \cite{Erroz-Ferrer-19}, at a similar spatial resolution of $\sim$100 pc. It should be noted that the color scale used for $\Delta$Z$_{\rm gas}$-\citetalias{Dopita-16} has twice the range of that used for $\Delta$Z$_{\rm gas}$-\citetalias{Pilyugin-16}. Figure \ref{fig:7} shows the fitted linear profiles of sSFR$(r)$, $Z_{\rm gas}(r)$-\citetalias{Dopita-16} and Z$_{\rm gas}(r)$-\citetalias{Pilyugin-16} for all 38 MAD galaxies. In displaying these profiles, we separate galaxies into three mass bins: $\log (M_\ast/$\msolar)$<10.0$ (blue lines), $10.0<$$\log (M_\ast/$\msolar)$<10.8$ (green lines), and $10.8<$$\log (M_\ast/$\msolar)\ (red lines). As shown, almost all of the MAD galaxies have positive radial gradients in sSFR, with only four exceptions. There is no strong dependence of the sSFR profile on the stellar mass of the galaxies. While the profiles in $Z_{\rm gas}$ have similar slopes, the overall values show a very strong dependence on global stellar mass, reflecting the MZR.
\begin{figure*} \begin{center} \epsfig{figure=./fig/obs/DOP16_radius/global_rsfms.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/obs/DOP16_radius/global_rmzr.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/obs/PG16_radius/global_rmzr.eps,clip=true,width=0.33\textwidth} \end{center} \caption{ The linearly fitted profiles of sSFR (left panel), $Z_{\rm gas}$-\citetalias{Dopita-16} (middle panel), and $Z_{\rm gas}$-\citetalias{Pilyugin-16} (right panel) for all MAD galaxies. In each panel, the MAD galaxies are separated into three color-coded stellar mass bins: $\log (M_\ast/$\msolar)$<10.0$ (blue), $10.0<$$\log (M_\ast/$\msolar)$<10.8$ (green), and $10.8<$$\log (M_\ast/$\msolar)\ (red). } \label{fig:7} \end{figure*} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/DOP16_radius/global_define_delta.eps,clip=true,width=0.95\textwidth} \end{center} \caption{ The sSFR (left panel), $Z_{\rm gas}$-\citetalias{Dopita-16} (middle panel) and $Z_{\rm gas}$-\citetalias{Pilyugin-16} (right panel) at 0.5{$R_{\rm e}$}\ as a function of the stellar mass for the MAD galaxies. In each panel, the data points are color-coded with the stellar mass, and the black solid line is the linear fit to the data points. } \label{fig:8} \end{figure*} The definition of $\Delta$sSFR and $\Delta Z_{\rm gas}$ for the individual spaxels, based on Figure \ref{fig:7}, enables us to investigate the correlations of $\Delta$sSFR vs. $\Delta Z_{\rm gas}$ for small-scale regions ($\sim$100 pc) within individual MAD galaxies. However, one can well imagine that the physical processes driving star formation on small scales may be very different from those driving it on galactic scales. This motivates defining analogous quantities, $\Delta$sSFR and $\Delta Z_{\rm gas}$, that reflect the {\it global} properties of galaxies within the MAD population.
For this purpose, we choose the fitted values of sSFR and $Z_{\rm gas}$ at 0.5{$R_{\rm e}$}\ (see the red lines in the top panels of Figure \ref{fig:6}) as representative of the global properties of individual galaxies\footnote{We realize that the sSFR and $Z_{\rm gas}$ at one specific galactic radius cannot perfectly reflect the global sSFR and gas-phase metallicity, because both sSFR and $Z_{\rm gas}$ show radial gradients. In Section \ref{sec:5.1}, we will treat the sSFR and $Z_{\rm gas}$ measured within 1.5{$R_{\rm e}$}\ as representative of the global quantities for MaNGA galaxies. }, because the spatial coverage for MAD galaxies generally extends to at least 0.5{$R_{\rm e}$}. Figure \ref{fig:8} shows the sSFR(0.5{$R_{\rm e}$}), $Z_{\rm gas}$(0.5{$R_{\rm e}$})-\citetalias{Dopita-16} and $Z_{\rm gas}$(0.5{$R_{\rm e}$})-\citetalias{Pilyugin-16} as a function of the overall stellar mass of the galaxies. The sSFR(0.5{$R_{\rm e}$}) decreases slightly with stellar mass, and $Z_{\rm gas}$(0.5{$R_{\rm e}$}) increases significantly with stellar mass, for both metallicity indicators. Both of these trends are of course well-established for SF galaxies in the literature. As pointed out above, in the framework of the gas-regulator model, the dependence of sSFR and $Z_{\rm gas}$ on stellar mass is due to the stellar-mass dependence of the inflow rate, $\lambda$ and $Z_{\rm 0}$ \citep[see Equation \ref{eq:21} and Equation \ref{eq:22} and discussion in][]{Lilly-13}. To eliminate the mass dependence, we perform a linear fit to each of these relations, as shown with the black lines in Figure \ref{fig:8}. In a similar way, for each individual MAD galaxy, we can then define the $\Delta$sSFR(0.5{$R_{\rm e}$}) or $\Delta$Z$_{\rm gas}$(0.5{$R_{\rm e}$}) as the deviation of that galaxy from the linearly fitted relation. This is useful for studying the driving mechanisms of star formation from galaxy to galaxy within the population.
We note that in this section we have defined the change parameter as $\Delta$sSFR, rather than the $\Delta$SFR used in exploring the model predictions (see Figure \ref{fig:1} and \ref{fig:2}). This is to eliminate the effect that regions of different $\Sigma_*$ may have different $\Sigma_{\rm SFR}$ even at the same galactic radius for a given disk galaxy. In Appendix \ref{sec:B}, we repeat our basic analysis by defining the $\Delta$sSFR and $\Delta Z_{\rm gas}$ for individual spaxels using average relations with $\Sigma_*$ rather than with galactic radius, and find that the basic results remain the same. \subsection{Correlations between $\Delta$sSFR and $\Delta$Z$_{\rm gas}$ on different spatial scales} \label{sec:4.2} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/DOP16_radius/data_dist_all.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/obs/DOP16_radius/data_dist_all_bifit.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/obs/DOP16_radius/global_intermedian_DOP16.eps,clip=true,width=0.663\textwidth} \epsfig{figure=./fig/obs/DOP16_radius/global_intermedian_res_DOP16.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/obs/DOP16_radius/global_05Re_DOP16_mad.eps,clip=true,width=0.33\textwidth} \end{center} \caption{ (a), The $\Delta$sSFR-$\Delta$$Z_{\rm gas}$\ diagram for all the usable spaxels of MAD galaxies. The grayscale shows the number density of spaxels in logarithmic space. The contours show constant number density of spaxels, decreasing by 0.5 dex from the innermost contour outwards. (b), Bisector fits for the individual spaxels on the $\Delta$sSFR-$\Delta$$Z_{\rm gas}$\ diagram. The lines correspond to the individual MAD galaxies, color-coded by the overall stellar mass. The length of each line is determined by the range of $\Delta$sSFR for that galaxy. (c), The sSFR-$Z_{\rm gas}$\ relation of the fitted profiles of sSFR and $Z_{\rm gas}$\ (shown in Figure \ref{fig:7}) for MAD galaxies, color-coded by the stellar mass.
The triangles show the values of sSFR and $Z_{\rm gas}$\ at 0.5{$R_{\rm e}$}\ for each individual galaxy. The black solid line in this panel shows the sSFR-$Z_{\rm gas}$\ relation constructed from the fitted sSFR(0.5{$R_{\rm e}$})-$M_*$ and $Z_{\rm gas}$(0.5{$R_{\rm e}$})-$M_*$ relations, which are determined in Figure \ref{fig:8}. (d), The $\Delta$sSFR-$\Delta$$Z_{\rm gas}$\ relation of the fitted profiles of sSFR and $Z_{\rm gas}$\ for individual MAD galaxies. The colored lines in panel (d) are taken from panel (c), but shifted so that the sSFR and $Z_{\rm gas}$\ at 0.5{$R_{\rm e}$}\ (indicated by the triangles) are zero. (e), The $\Delta$sSFR(0.5{$R_{\rm e}$})-$\Delta$$Z_{\rm gas}$(0.5{$R_{\rm e}$}) diagram for the 38 MAD galaxies, color-coded by stellar mass. In panel (e), the black line is the bisector fit to the data points. In all five panels, the scales of the x-axis (and y-axis) are the same, so that readers can directly compare the slopes of the lines in all five panels. We note that, in all the panels, the gas-phase metallicity is measured based on the empirical formula from \citetalias{Dopita-16}. See text for further details. } \label{fig:9} \end{figure*} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/PG16_radius/data_dist_all.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/obs/PG16_radius/data_dist_all_bifit.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/obs/PG16_radius/global_intermedian_PG16.eps,clip=true,width=0.663\textwidth} \epsfig{figure=./fig/obs/PG16_radius/global_intermedian_res_PG16.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/obs/PG16_radius/global_05Re_PG16_mad.eps,clip=true,width=0.33\textwidth} \end{center} \caption{The same as Figure \ref{fig:9}, but using the gas-phase metallicity based on the empirical formula from \citetalias{Pilyugin-16}.
} \label{fig:10} \end{figure*} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/DOP16_radius/coef_dist_all.eps,clip=true,width=0.4\textwidth} \epsfig{figure=./fig/obs/PG16_radius/coef_dist_all.eps,clip=true,width=0.4\textwidth} \end{center} \caption{ Left panel: The distribution of the Pearson correlation coefficients for the $\Delta$sSFR-$\Delta$$Z_{\rm gas}$-\citetalias{Dopita-16} relation of the individual spaxels for the 38 MAD galaxies. We measure two kinds of Pearson correlation coefficient: one without weighting the spaxels (black histogram), and the other after weighting the spaxels by their SFR (red histogram). Right panel: The same as the left panel, but using $Z_{\rm gas}$-\citetalias{Pilyugin-16}. } \label{fig:11} \end{figure*} To compare with the theoretical expectations of the gas-regulator model constructed in Section \ref{sec:2}, we have defined in the previous section $\Delta$sSFR and $\Delta Z_{\rm gas}$, both on the 100-pc scale of individual spaxels in MAD galaxies, and on the much larger galactic scale of MAD galaxies as a whole (defined at 0.5{$R_{\rm e}$}). In this section, we will explore the correlations between $\Delta$sSFR and $\Delta Z_{\rm gas}$ on these scales, and interpret the results in terms of the predictions of the gas-regulator model. \subsubsection{100-pc scales} \label{sec:4.2.1} We first look at the $\Delta$sSFR-$\Delta Z_{\rm gas}$ relation on scales of $\sim$100 pc. Panel (a) of Figure \ref{fig:9} shows the distribution of all valid MAD spaxels on the $\Delta$sSFR vs. $\Delta Z_{\rm gas}$-\citetalias{Dopita-16} diagram. The grayscale shows the number density of spaxels in logarithmic space. In presenting panel (a), we assign each MAD galaxy the same weight. In other words, for a given MAD galaxy, we weight the individual spaxels of that galaxy by $N_{\rm spaxel}^{-1}$, where $N_{\rm spaxel}$ is the total number of valid spaxels for that particular galaxy.
As a whole, we find a significant positive correlation between $\Delta$sSFR and $\Delta Z_{\rm gas}$-\citetalias{Dopita-16} for all the individual spaxels in the 38 MAD galaxies. Furthermore, we also investigate the correlation of $\Delta$sSFR vs. $\Delta Z_{\rm gas}$ for each individual MAD galaxy. Panel (b) of Figure \ref{fig:9} shows bisector fits to the $\Delta$sSFR vs. $\Delta Z_{\rm gas}$-\citetalias{Dopita-16} relation of the individual spaxels for each of the 38 MAD galaxies. Here we adopt bisector fitting because there is no reason for us to prefer a regression of $\Delta Z_{\rm gas}$ on $\Delta$sSFR over one of $\Delta$sSFR on $\Delta Z_{\rm gas}$. As can be seen, consistent with the result of panel (a), $\Delta$sSFR and $\Delta$Z$_{\rm gas}$-\citetalias{Dopita-16} show a positive correlation for 37 of the 38 MAD galaxies. The single exception is NGC 4030, the most massive galaxy in the MAD sample. Inspection of the color-coding suggests that this result does not depend on the mass of the galaxy. The same analysis is repeated for Z$_{\rm gas}$-\citetalias{Pilyugin-16} in panels (a) and (b) of Figure \ref{fig:10}. Overall, we find that $\Delta$sSFR and $\Delta Z_{\rm gas}$-\citetalias{Pilyugin-16} still show a positive correlation, although not as significant as in panel (a) of Figure \ref{fig:9}. Consistent with this, panel (b) shows that 32 of the MAD galaxies show positive correlations of $\Delta$sSFR vs. $\Delta Z_{\rm gas}$-\citetalias{Pilyugin-16}, but now 6 MAD galaxies show negative correlations. To quantify the significance of the correlation, we calculate the distribution of Pearson correlation coefficients for the 38 MAD galaxies in Figure \ref{fig:11}. For each individual MAD galaxy, the coefficient is calculated in two ways: by weighting the individual spaxels equally, and by weighting them by the SFR of the spaxels.
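For reference, the two statistics used here can be sketched as follows. This is a minimal sketch assuming the common OLS-bisector form (Isobe et al. 1990) and a straightforwardly weighted Pearson coefficient; the actual implementation used in this work may differ in detail.

```python
import numpy as np

def bisector_slope(x, y):
    """Slope of the OLS bisector: the line bisecting the forward
    regression (y on x) and the inverse regression (x on y).
    A symmetric fit for when neither variable is 'independent'."""
    x = x - x.mean()
    y = y - y.mean()
    b1 = np.sum(x * y) / np.sum(x * x)   # y-on-x slope
    b2 = np.sum(y * y) / np.sum(x * y)   # x-on-y slope, expressed as dy/dx
    # OLS-bisector combination of the two slopes
    return (b1 * b2 - 1.0 + np.sqrt((1.0 + b1**2) * (1.0 + b2**2))) / (b1 + b2)

def weighted_pearson(x, y, w):
    """Pearson correlation coefficient with per-point weights
    (here, e.g., the SFR of each spaxel)."""
    w = w / np.sum(w)
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    sx = np.sqrt(np.sum(w * (x - mx) ** 2))
    sy = np.sqrt(np.sum(w * (y - my) ** 2))
    return cov / (sx * sy)
```

With equal weights, `weighted_pearson` reduces to the standard Pearson coefficient, and for perfectly correlated data the bisector slope coincides with both ordinary regression slopes.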
As shown, for both approaches to computing the correlation coefficient, and for both metallicity indicators, we find that most of the coefficients are positive, with only a few less than zero. This is consistent with the results in panels (a) and (b) of Figure \ref{fig:9} and Figure \ref{fig:10}. The correlation of $\Delta$sSFR vs. $\Delta Z_{\rm gas}$ becomes more significant when weighting the spaxels by their SFR. This is due to the fact that regions with strongly enhanced star formation always show enhanced gas-phase metallicity (see also panel (a) of Figure \ref{fig:9} and \ref{fig:10}). This is both visible to the eye in Figure \ref{fig:6} and may reflect the point made in the context of Figure \ref{fig:2} in Section \ref{sec:2.3}, that the positive correlations caused by step-function changes to the SFE are most clearly seen when the SFR is highest. In the gas-regulator framework, we showed that $\Delta$SFR (or $\Delta$sSFR) and $\Delta Z_{\rm gas}$ will be positively correlated when the gas-regulator system is driven by a time-varying SFE (see Figure \ref{fig:2}). Comparing the model predictions with panel (a) of Figure \ref{fig:9} or Figure \ref{fig:10}, we can conclude that at $\sim$100 pc scales within galaxies, the variation of SFR (and $Z_{\rm gas}$) is due to a time-varying SFE experienced by a particular packet of gas. This conclusion rests on the assumption that different {\it spatial} regions of galaxies, around an annulus at a given galactic radius, represent different temporal phases of a gas packet, as modelled in Figure \ref{fig:2}. This assumption is not unreasonable, and is supported by other strong observational evidence in favour of a time-varying SFE on $\sim$100 pc scales. \cite{Leroy-13} and \cite{Kreckel-18} have found that the dispersion of the SFE based on molecular gas measurements increases significantly towards smaller scales \citep[see also][]{Kruijssen-14}.
Specifically, the scatter of the SFE is $\sim$0.4 dex at $\sim$100 pc \citep{Kreckel-18}. Consistent with this, \cite{Kruijssen-19} and \cite{Chevance-20} have shown that molecular gas mass and star formation rate are spatially de-correlated on the scales of individual GMCs in nearby disk galaxies, contrary to the tight correlation seen on kpc and galactic scales \citep[e.g.][]{Shi-11, Bigiel-08}. This de-correlation implies rapid temporal cycling between GMCs and star formation, likely involving feedback processes. Using this de-correlation, \cite{Chevance-20} constrained the properties of GMCs and claimed that GMC lifetimes are typically 10--30 Myr, consisting of a long inert phase without massive star formation and a short phase (of less than 5 Myr) with a burst of star formation. These observational results indicate a strong spatial variation of SFE on $\sim$100 pc scales, and thus suggest a strong temporal variation of SFE for a given packet of gas. In Section \ref{sec:2.3}, we constructed two models of time-varying SFE$(t)$, in the form of a sinusoidal function and of a periodic step-function. According to the above discussion, the SFE$(t)$ on $\sim$100 pc scales may be better characterized by a step-function with a short top phase and a long bottom phase, rather than by the sinusoidal model. Consistent with this, we find that the model prediction with the step-function is in good qualitative agreement with the observational results. Specifically, the distribution of $\Delta$sSFR is strongly asymmetric, with a very small fraction of spaxels showing strongly enhanced SFR, and with these regions also showing strongly enhanced metallicity, as found in panel (a) of both Figure \ref{fig:9} and \ref{fig:10}. These features can also be found in the model prediction shown in the bottom-middle panel of Figure \ref{fig:2}.
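The qualitative behaviour described here can be reproduced by numerically integrating the gas-regulator equations with a periodic step-function SFE$(t)$, i.e. a short high-SFE top phase and a long low-SFE bottom phase. The following sketch uses purely illustrative parameter values (inflow rate, yield, mass-loading, period), not fits to the data, and adopts the instantaneous-recycling equations of the gas-regulator framework.

```python
import numpy as np

def run_gas_regulator(t_end=20.0, dt=1e-3, phi0=1.0, lam=0.5,
                      y_yield=0.02, z0=0.0, sfe_lo=0.5, sfe_hi=5.0,
                      period=4.0, top_frac=0.1):
    """Forward-Euler integration of a gas regulator driven by a periodic
    step-function SFE(t): a short 'top' phase with high SFE and a long
    'bottom' phase with low SFE.  All parameter values are illustrative."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    sfe = np.where((t % period) < top_frac * period, sfe_hi, sfe_lo)
    m_gas = phi0 / ((1 + lam) * sfe_lo)           # bottom-phase equilibrium gas mass
    m_z = m_gas * (z0 + y_yield / (1 + lam))      # equilibrium metal mass
    sfr = np.empty(n)
    z = np.empty(n)
    for i in range(n):
        sfr[i] = sfe[i] * m_gas
        z[i] = m_z / m_gas
        # dM/dt = inflow - (star formation + wind); dM_Z/dt adds the yield term
        m_gas += (phi0 - (1 + lam) * sfr[i]) * dt
        m_z += (phi0 * z0 + (y_yield - (1 + lam) * z[i]) * sfr[i]) * dt
    return t, sfr, z

t, sfr, z = run_gas_regulator()
# discard the initial transient, then correlate the log residuals
mask = t > 4.0
dsfr = np.log10(sfr[mask]) - np.mean(np.log10(sfr[mask]))
dz = np.log10(z[mask]) - np.mean(np.log10(z[mask]))
rho = np.corrcoef(dsfr, dz)[0, 1]
```

With these illustrative parameters the correlation coefficient between $\Delta\log$SFR and $\Delta\log Z$ should come out positive, and the distribution of $\Delta\log$SFR is strongly asymmetric, with only a small fraction of the time spent at strongly enhanced SFR, mirroring the features seen in the data.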
It goes without saying that the models explored in Section \ref{sec:2} are simple heuristic models, which cannot be expected to explain all the details of the situation. Not least, timescales of star formation of only 10--30 Myr \citep{Chevance-20} are comparable to the timescales for chemical enrichment, an effect neglected by our use of the instantaneous recycling approximation in Section \ref{sec:2}. Nevertheless, we can conclude qualitatively that the variation of SFR and gas-phase metallicity on 100 pc scales is primarily due to a time-varying star formation efficiency experienced by the gas. \subsubsection{Sub-galactic scales} \label{sec:4.2.2} The two Panels (c) of Figures \ref{fig:9} and \ref{fig:10} show the correlation between the average sSFR and the average $Z_{\rm gas}$ at a given galactic radius, for all 38 MAD galaxies, color-coded by their stellar mass. These are obtained by combining the linear fits of sSFR$(r)$ and $Z_{\rm gas}(r)$ (see Figure \ref{fig:7} in Section \ref{sec:4.1}) to eliminate $r$ and thereby produce a linear sSFR-$Z_{\rm gas}$ relation for each galaxy. The triangular points on each line show the values of sSFR and $Z_{\rm gas}$ at a fiducial 0.5{$R_{\rm e}$}, chosen to be representative of the global quantities for the galaxies. Since the metallicity profiles of MAD galaxies always show negative radial gradients, the central regions of each galaxy correspond to the higher-$Z_{\rm gas}$ end of its line segment, which generally also has lower sSFR. These individual lines in the Panels (c) therefore represent the radial variations of sSFR and $Z_{\rm gas}$ {\it within} a given galaxy, i.e. on {\it sub-galactic} scales. All azimuthal variations are eliminated, and the radial variations are greatly smoothed out by the linear fits to sSFR and metallicity.
Shifting these lines to align the triangles therefore produces a residual plot that is in principle directly analogous to that in the Panels (b). This is done in the Panels (d). Comparing the lines in the panels (d) with those in the panels (b) of Figure \ref{fig:9} and \ref{fig:10}, it is clear that $\Delta$sSFR and $\Delta Z_{\rm gas}$ on these larger ``sub-galactic'' scales show the {\it opposite} correlation to that seen on 100-pc scales. Almost all the individual galaxies show positive correlations on 100-pc scales in the Panels (b), but most, especially at low stellar masses, show a negative correlation in the Panels (d). It should be noted that the trend with the stellar mass of the galaxy is much clearer than in the Panels (b). The (anti-)correlation of $\Delta$sSFR and $\Delta Z_{\rm gas}$ radially across the galaxy is less trivial to interpret than the positive correlation on 100-pc scales in Section \ref{sec:4.2.1} or, we will argue, on larger galactic scales in Section \ref{sec:4.2.3} below. Positive radial gradients of sSFR are certainly expected from the inside-out growth of disks \citep[see e.g.][]{Lilly-16}. How can this obvious effect be incorporated into the gas-regulator formalism? Inside-out growth of disk galaxies trivially produces a strong radial dependence of the mean specific inflow rate since, as defined by \cite{Lilly-13}, this is just the current inflow rate divided by the integral over all time of the past inflow rate. Since a basic feature of the gas-regulator is to set the sSFR equal to the specific inflow rate \citep[see][]{Lilly-13}, this implied gradient in specific inflow rate would be expected to produce an equivalent gradient in sSFR. The negative correlation of $\Delta$sSFR and $\Delta Z_{\rm gas}$ in the Panels (d) could then again be interpreted as the signature of changes in the inflow rate, albeit occurring on cosmological timescales.
However, an important difference between these radial ``sub-galactic'' variations and those on both smaller (Section \ref{sec:4.2.1}) and larger (Section \ref{sec:4.2.3} below) scales concerns the potential effects of the wind mass-loading term $\lambda$ and/or possibly the metallicity of the inflowing gas $Z_{\rm 0}$. On 100-pc scales, we normalised each spaxel by the average properties of all the spaxels at the {\it same} galactic radius in the {\it same} galaxy. We might expect $\lambda$ and $Z_{\rm 0}$ to be the same for all these spaxels if they are determined by the location within the galactic potential well. Likewise, on the larger scales when we consider the integrated properties of galaxies, we will normalise by the average properties of galaxies with the same integrated stellar mass, which again we may argue are likely to have similar overall values of $\lambda$ and $Z_{\rm 0}$. But, in the current subsection, where we are looking at radial variations of sSFR and $Z_{\rm gas}$ {\it within} galaxies, it is very likely that there will be a positive radial gradient in $\lambda(r)$ (and possibly a negative radial gradient in $Z_{\rm 0}$). Since, according to Equation \ref{eq:22}, the average $Z_{\rm gas}$ is determined by $Z_{\rm 0}+y_{\rm eff}$, there is no difficulty in producing a negative gradient in $Z_{\rm gas}$ with a positive gradient in $\lambda$ and/or a negative gradient in $Z_{\rm 0}$. We will return to the radial profiles of (s)SFR and $\Delta Z_{\rm gas}$ below in Section \ref{sec:5.2}, where we analyze the MaNGA sample, which is not only much larger but also extends over a much larger radial range, albeit at much poorer spatial resolution. \subsubsection{Galactic scales} \label{sec:4.2.3} Finally, the two Panels (e) of Figures \ref{fig:9} and \ref{fig:10} show the correlations between the residuals of the overall sSFR and metallicity, i.e.
$\Delta$sSFR(0.5{$R_{\rm e}$}) and $\Delta Z_{\rm gas}$(0.5{$R_{\rm e}$}), for each MAD galaxy, once the overall trends of sSFR(0.5{$R_{\rm e}$}) and $Z_{\rm gas}$(0.5{$R_{\rm e}$}) with galactic stellar mass (shown in Figure \ref{fig:8}) are taken out. Each triangle represents an individual MAD galaxy, color-coded by its stellar mass. The Panels (e) on the two figures therefore show whether a given MAD galaxy is {\it overall} elevated or depressed in sSFR and $Z_{\rm gas}$ relative to other galaxies of the same mass. They are therefore completely analogous (albeit with a vastly different number of points) to the Panels (a), which showed whether individual spaxels within a MAD galaxy were elevated or depressed in these two quantities relative to the other spaxels at the same radial location within the same galaxy. As argued in the previous subsection, the effects of any systematic variations of wind mass-loading and of the metallicity of the inflowing gas with stellar mass should not be present in this diagram. The black solid lines in the Panels (e) show a bisector fit to the data points. For both metallicity indicators, a negative correlation between $\Delta$sSFR(0.5{$R_{\rm e}$}) and $\Delta Z_{\rm gas}$(0.5{$R_{\rm e}$}) on galactic scales can clearly be seen. The Pearson correlation coefficients are $-$0.23 and $-$0.17 for the {\tt N2S2H$\alpha$} and {\tt Scal} indicators, respectively. This negative correlation is in clear contrast to the positive correlations on 100-pc scales that are shown in the (directly analogous) Panels (a) and (b). Given the limited number of MAD galaxies available, one might be concerned about the statistical validity of this reversal of sign in the Panels (e). Therefore, in the next section of the paper, we perform a similar analysis on the much larger sample of 976 MaNGA galaxies, and find completely consistent results in this much larger sample.
Comparing the above result with the model prediction in Section \ref{sec:2.2}, we conclude that the inverse correlation between variations in the {\it overall} sSFR and $Z_{\rm gas}$ across the galaxy population at a given mass is due to temporal variations in the inflow rate onto the galaxies. This is in marked contrast to the situation on 100-pc scales, where we argued that the positive correlation between these quantities was the clear signature of temporal variations in the SFE as a given packet of gas enters and leaves regions of high SFE. Finally, it should be noted in passing that the observed negative correlation between the overall $\Delta$sSFR and $\Delta Z_{\rm gas}$ in the Panels (e) is a straightforward manifestation of the existence of SFR as a second parameter in the mass-metallicity relation \citep[e.g.][]{Mannucci-10, Salim-14, Cresci-19, Curti-20}. It has further been claimed that the $Z(M_*,{\rm SFR})$ relation is epoch-independent, i.e. that there is a so-called FMR \citep[e.g.][]{Richard-11, Nakajima-12, Huang-19}. One of the successes of the gas-regulator model presented by \cite{Lilly-13} was to provide a natural analytic explanation for the presence of SFR as a second parameter, and even to predict that the $Z(M_*,{\rm SFR})$ relation could well be more or less epoch-independent. The \cite{Lilly-13} analysis considered in the first instance a constant specific inflow rate. But a steady specific inflow implicitly produces an {\it increase} in the inflow rate. If the specific inflow changes in such a way that the inflow rate is constant, then the sensitivity to the SFR vanishes (see \cite{Lilly-13} for discussion, and also the Appendix of \citet{Onodera-16}).
This emphasizes both that the anti-correlation in Panels (e) is not a new or controversial observational result, and also, in a very general sense, that this negative correlation between overall metallicity and star-formation rate on galactic scales is fundamentally driven by {\it changes} to the inflow rate, as discussed in this paper. \subsection{Quantitative interpretation of the dispersion of gas-phase metallicity and sSFR on 100-pc scales in MAD galaxies} \label{sec:4.3} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/DOP16_radius/scatter_mstar_all.eps,clip=true,width=0.45\textwidth} \epsfig{figure=./fig/obs/PG16_radius/scatter_mstar_all.eps,clip=true,width=0.45\textwidth} \end{center} \caption{ Left panel: The ratio of $\sigma$($\Delta$$Z_{\rm gas}$) to $\sigma$($\Delta$sSFR) for the 38 MAD galaxies. For each MAD galaxy, $\sigma$($\Delta$$Z_{\rm gas}$) (or $\sigma$($\Delta$sSFR)) is the standard deviation of $\Delta$$Z_{\rm gas}$\ (or $\Delta$sSFR) for all the individual spaxels in this galaxy. The data points are color-coded by stellar mass. Right panel: The same as the left panel, but using $Z_{\rm gas}$-\citetalias{Pilyugin-16}. } \label{fig:16} \end{figure*} We showed in Section \ref{sec:2} that the relative strength of variations in the SFR and metallicity of the gas-regulator should depend on the response timescale of the system, set by the gas depletion timescale, relative to the timescales of any driving variations in the inflow or in the SFE. Equation \ref{eq:20} describes the relative amplitude of these variations, characterized by $\sigma(\Delta Z_{\rm gas})$/$\sigma(\Delta {\rm SFR})$ across the population, as a function of $\xi$ when driving the gas-regulator system with a sinusoidal SFE$(t)$. According to Equation \ref{eq:20}, $\sigma(\Delta Z_{\rm gas})$/$\sigma(\Delta {\rm SFR})$ should decrease with increasing $\xi$, i.e.
decrease with increasing effective gas depletion time if we ignore the possible variation of the driving period of SFE$(t)$ (see Figure \ref{fig:5}). We have therefore calculated the dispersions across the spaxels of the residual quantities $\Delta Z_{\rm gas}$ and $\Delta {\rm SFR}$ that were plotted in the two Panels (a) in Figures \ref{fig:9} and \ref{fig:10} for the two metallicity indicators respectively. We calculate this dispersion for each of the MAD galaxies separately. Figure \ref{fig:16} shows the resulting ratios of $\sigma(\Delta Z_{\rm gas})$ to $\sigma(\Delta {\rm sSFR})$ for each of the 38 MAD galaxies for the two metallicity indicators. It is obvious that the $\sigma(\Delta Z_{\rm gas})$ based on {\tt N2S2H$\alpha$} is overall greater than that based on {\tt Scal}. This is not a noise issue, but is due to the fact that the range of $Z_{\rm gas}$-\citetalias{Dopita-16} is nearly twice the range of $Z_{\rm gas}$-\citetalias{Pilyugin-16} for a given dataset, as mentioned earlier in this paper. This systematic uncertainty hinders the quantitative interpretation of these dispersions, although trends established within a single estimator (i.e. within a single panel of Figure \ref{fig:16}) should have some validity. As can be seen, the $\sigma(\Delta Z_{\rm gas})$/$\sigma(\Delta {\rm sSFR})$ based on {\tt N2S2H$\alpha$} is in the range of 0.12 to 0.42 with a median value of 0.24. The $\sigma(\Delta Z_{\rm gas})$/$\sigma(\Delta {\rm sSFR})$ based on {\tt Scal} is about half this value, mostly in the range of 0.06-0.17 with a median value of 0.11. We do not find a significant dependence of $\sigma(\Delta Z_{\rm gas})$/$\sigma(\Delta {\rm sSFR})$ on stellar mass, except possibly for an increase at the lowest stellar masses below $10^9$${\rm M}_\odot$. The MAD galaxy with the largest $\sigma(\Delta Z_{\rm gas})$/$\sigma(\Delta {\rm sSFR})$ is ESO499-G37.
A small fraction of spaxels in ESO499-G37 have $\log${\tt N2}$<-0.6$, resulting in very low metallicity in these regions (see Equation \ref{eq:27}), and producing a large dispersion in $\Delta Z_{\rm gas}$-\citetalias{Pilyugin-16}. Although a sinusoidally time-varying SFE is unlikely to be realistic in the universe on $\sim$100-pc scales, Equation \ref{eq:20} (see also Figure \ref{fig:3}) does permit a rough order-of-magnitude estimate of whether these relative dispersions are reasonable, and of the approximate timescales involved. If we examine what values of $\xi$ in Equation \ref{eq:20} produce the observed $\sigma(\Delta Z_{\rm gas})$/$\sigma(\Delta {\rm sSFR})$ for typical MAD galaxies, i.e. 0.24 and 0.11 for the two metallicity estimators, then these are $\log \xi=$ 0.607 for {\tt N2S2H$\alpha$}, and $\log \xi=$ 0.956 for {\tt Scal}, at $Z_{\rm 0}=0$. If we take 1 Gyr as a reasonable estimate for the effective gas depletion timescale for the overall galaxy population (see \citetalias{Wang-19}), we get rough estimates of $T_{\rm p}=$ 1.5 Gyr for {\tt N2S2H$\alpha$}, and $T_{\rm p}=$ 0.7 Gyr for {\tt Scal}, as the nominal period of a time-varying SFE. Intriguingly, in the Milky Way, a periodic star formation history with a period of $\sim$0.5 Gyr has been suggested in the solar neighborhood, from analysis of the resolved stellar population \citep{Hernandez-00, de-la-Fuente-Marcos-04}. As further pointed out by \cite{Egusa-09}, this periodic star formation history may be associated with the passage of the spiral potential in the density wave theory \citep{Lin-69}. Assuming that the potential has a two-armed pattern as suggested by \cite{Drimmel-00}, the pattern speed can be calculated as $\Omega_{\rm P}=\Omega(r=R_{\odot})-\pi/(0.5\ {\rm Gyr}) = 21$ km s$^{-1}$ kpc$^{-1}$. This is also consistent with the result from numerical simulations of the stellar and gaseous response to the spiral potential, presented by \cite{Martos-04}.
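The inversion of Equation \ref{eq:20} sketched above, and the pattern-speed arithmetic, can be reproduced in a few lines. This is a sketch under our assumptions: at $Z_{\rm 0}=0$ the SFE-driven form reduces to $\sigma(\Delta Z_{\rm gas})/\sigma(\Delta {\rm sSFR})=(1+\xi^2)^{-1/2}$ (consistent with the quoted numbers), and for the Milky Way check we adopt $V_{\rm c}=220$ km s$^{-1}$ at $R_\odot=8$ kpc, values not given in the text:

```python
import numpy as np

def xi_from_sigma_ratio(r, z0_over_yeff=0.0):
    # Invert sigma(dZ)/sigma(dsSFR) = (1 + xi^2)^(-1/2) / (1 + Z0/y_eff),
    # the SFE-driven form of Eq. (20), for xi.
    a = r * (1.0 + z0_over_yeff)
    return np.sqrt(1.0 / a**2 - 1.0)

tau_eff = 1.0  # assumed effective gas depletion timescale [Gyr]
for name, ratio in [("N2S2Ha", 0.24), ("Scal", 0.11)]:
    xi = xi_from_sigma_ratio(ratio)
    T_p = 2.0 * np.pi * tau_eff / xi      # from xi = 2*pi*tau_eff/T_p
    print(f"{name}: log xi = {np.log10(xi):.3f}, T_p = {T_p:.2f} Gyr")

# Milky Way pattern-speed check (assumed V_c = 220 km/s at R_sun = 8 kpc):
omega_sun = 220.0 / 8.0                       # km/s/kpc
gyr_per_kpc_kms = 3.0857e16 / 3.1557e16       # kpc/(km/s) expressed in Gyr
omega_p = omega_sun - (np.pi / 0.5) * gyr_per_kpc_kms   # km/s/kpc
```

This recovers $\log \xi =$ 0.607 and 0.956, $T_{\rm p} \simeq$ 1.5 and 0.7 Gyr, and $\Omega_{\rm P} \approx 21$ km s$^{-1}$ kpc$^{-1}$.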
It is quite suggestive that this same sort of timescale emerges in our own analysis of the metallicities in terms of a periodically varying SFE, and suggests that the periodic (or time-varying) SFE$(t)$ relevant on 100-pc scales may be explained by the passage of orbiting packets of gas through the spiral density wave. However, given the many steps and uncertainties involved, we certainly caution against over-interpretation of this result. \section{Analysis of MaNGA galaxies} \label{sec:5} In the previous section of the paper, we presented results based on the MAD galaxies. The high spatial resolution of the MAD data enables a robust statistical analysis of the $\Delta$sSFR-$\Delta$$Z_{\rm gas}$\ correlation on 100-pc scales. However, the analysis on galactic scales needs to be verified because of the limited size of the MAD sample. In this section, we therefore repeat the analysis of Section \ref{sec:4} using a well-defined set of SF MaNGA galaxies. The MaNGA sample used in this work includes 976 SF galaxies, and is therefore $\sim$25 times larger than the MAD sample. The spatial coverage of the MaNGA sample also extends beyond 1.5{$R_{\rm e}$}, larger than that of the MAD sample as a whole. However, the spatial resolution of the MaNGA data (1-2 kpc) is much poorer than that of MAD. Therefore, in this section we focus only on galactic and ``sub-galactic'' scales for the MaNGA galaxies, rather than on individual spaxels.
\subsection{Correlations of the integrated $\Delta$sSFR and $\Delta Z_{\rm gas}$ for MaNGA galaxies} \label{sec:5.1} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/DOP16_radius/manga_15Re_DOP16.eps,clip=true,width=0.43\textwidth} \epsfig{figure=./fig/obs/PG16_radius/manga_15Re_PG16.eps,clip=true,width=0.516\textwidth} \end{center} \caption{ The $\Delta$sSFR($<$1.5{$R_{\rm e}$})-$\Delta$$Z_{\rm gas}$($<$1.5{$R_{\rm e}$}) diagram for MaNGA SF galaxies, color-coded by integrated stellar mass. Unlike in the Panels (e) of Figures \ref{fig:9} and \ref{fig:10}, we use the integrated sSFR (and $Z_{\rm gas}$) measured within 1.5{$R_{\rm e}$}\ to be representative of the global quantity, rather than using the values at one specific galactic radius. In both panels, the black lines are bisector fits to the data points. } \label{fig:12} \end{figure*} In Section \ref{sec:4.2.3} we examined the relation between the overall, i.e. ``galactic-scale'', $\Delta$sSFR and $\Delta$$Z_{\rm gas}$\ for MAD galaxies, taking as measures of these quantities the values of the radial profiles at a fiducial radius of 0.5 {$R_{\rm e}$}, as shown for the 38 MAD galaxies in the two Panels (e) in Figures \ref{fig:9} and \ref{fig:10}. Since the MaNGA coverage usually extends further than 1.5{$R_{\rm e}$}\ for individual galaxies, we can now measure the integrated sSFR and $Z_{\rm gas}$ within 1.5{$R_{\rm e}$}. The $Z_{\rm gas}$ within 1.5{$R_{\rm e}$}\ is computed as the H$\alpha$-luminosity-weighted $Z_{\rm gas}$\ over all the spaxels within 1.5{$R_{\rm e}$}. This is probably more representative of the global quantities than the sSFR (or $Z_{\rm gas}$) measured at one particular radius, as used for the MAD sample. As before, we first construct the sSFR($<$1.5{$R_{\rm e}$}) vs. mass and $Z_{\rm gas}$($<$1.5{$R_{\rm e}$}) vs. mass relations, and use these to normalize the measurements of individual galaxies.
The first is obtained with a linear fit of the sSFR($<$1.5{$R_{\rm e}$}) vs. stellar mass relation. For the metallicity, we perform a polynomial fit to the $Z_{\rm gas}$($<$1.5{$R_{\rm e}$}) vs. stellar mass relation, since this relation clearly flattens at the high-mass end. For each individual MaNGA galaxy, we thereby define $\Delta$sSFR($<$1.5{$R_{\rm e}$}) and $\Delta Z_{\rm gas}$($<$1.5{$R_{\rm e}$}) to be the (logarithmic) deviations from these relations. The correlation between $\Delta$sSFR($<$1.5{$R_{\rm e}$}) and $\Delta Z_{\rm gas}$($<$1.5{$R_{\rm e}$}) for the MaNGA sample is shown for the two metallicity indicators in the two panels of Figure \ref{fig:12}. With the sample size enlarged by a factor of about 25, the negative correlation between $\Delta$sSFR and $\Delta Z_{\rm gas}$ is very clearly seen for both metallicity indicators. The Pearson correlation coefficients for the MaNGA sample are $-$0.23 and $-$0.36 for the {\tt N2S2H$\alpha$} and {\tt Scal} indicators, respectively. For each metallicity indicator, the linear slopes obtained in MaNGA by bisector fitting are very similar to those seen in the MAD sample (Panels (e) of Figures \ref{fig:9} and \ref{fig:10}). However, we note again that the slopes for the two metallicity indicators are significantly different, due to the range of $Z_{\rm gas}$-\citetalias{Dopita-16} being nearly twice that of $Z_{\rm gas}$-\citetalias{Pilyugin-16}. As also noted above, this clear inverse correlation is a re-statement of the existence of SFR as a (negative) second parameter in the mass-metallicity relation. \subsection{Radial profiles of sSFR and $Z_{\rm gas}$ for MaNGA galaxies} \label{sec:5.2} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/DOP16_radius/zgas_radius_mass_DOP16.eps,clip=true,width=0.84\textwidth} \end{center} \caption{The median gas-phase metallicity profile for MaNGA sample galaxies at a given stellar mass and a given $\Delta$sSFR($<${$R_{\rm e}$}).
The $\Delta$sSFR($<${$R_{\rm e}$}) is defined to be the vertical deviation (i.e. in sSFR) from the ``nominal'' SFMS, i.e. the sSFR($<${$R_{\rm e}$})-$M_*$($<${$R_{\rm e}$}) relation \citep{Wang-19}. The blue, green, yellow and red profiles are the median $Z_{\rm gas}$\ profiles of galaxies with $0.33<\Delta$sSFR($<${$R_{\rm e}$}), $0.0<\Delta$sSFR($<${$R_{\rm e}$})$<0.33$, $-0.33<\Delta$sSFR($<${$R_{\rm e}$})$<0.0$ and $\Delta$sSFR($<${$R_{\rm e}$})$<-0.33$, respectively. The width of each profile is calculated using the bootstrap method from the sample in question. We note that the gas-phase metallicity profiles here are based on $Z_{\rm gas}$-\citetalias{Dopita-16}.} \label{fig:14} \end{figure*} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/PG16_radius/zgas_radius_mass_PG16.eps,clip=true,width=0.84\textwidth} \end{center} \caption{ The same as Figure \ref{fig:14}, but using $Z_{\rm gas}$-\citetalias{Pilyugin-16}. } \label{fig:15} \end{figure*} In the previous analyses, we focused on the correlation between $\Delta$sSFR and $\Delta Z_{\rm gas}$ on different spatial scales: 100-pc scales within galaxies and the scale of entire galaxies. We discussed how these correlations reveal the driving of the SFR in the gas-regulator framework, arguing that on the smaller scales temporal variations in SFE are responsible, while on galactic scales it is temporal variations in the accretion rate. These conclusions result from the opposite signs of the correlations between the residuals $\Delta$sSFR and $\Delta Z_{\rm gas}$ obtained by normalizing the observed values by appropriate fiducial values. We now turn to look more closely at the radial variations of SFR and $Z_{\rm gas}$ within galaxies, and at how these vary across the galaxy population, using again the MaNGA sample. This analysis may be regarded as an extension of the analysis of SFR profiles that was presented in \citetalias{Wang-19}.
In \citetalias{Wang-19}, we investigated the dependence of the normalized star-formation surface density profiles, $\Sigma_{\rm SFR}$(R/{$R_{\rm e}$}), of star-forming MaNGA galaxies on the overall stellar mass and on $\Delta$sSFR($<${$R_{\rm e}$}), the vertical deviation of the galaxy from the ``nominal'' SFMS\footnote{The ``nominal'' SFMS is the linear fitted relation of SFR($<${$R_{\rm e}$}) versus $M_*$($<${$R_{\rm e}$}) for the SF galaxies of MaNGA (see more details in section 2 of \citetalias{Wang-19}). }. We showed (figure 5 of that paper) that star-forming MaNGA galaxies that lie significantly above (or below) the overall SFMS show an elevation (or suppression) of SFR at all radii. In addition, we showed that whereas at low stellar masses this elevation (or suppression) of star-formation is more or less uniform with radius, for the more massive galaxies the elevation (or suppression) of star-formation becomes more pronounced in the central regions of the galaxies. As a direct consequence of this, the dispersion in the (normalized) $\Sigma_{\rm SFR}$ across the galaxy population, which we designated $\sigma(\Delta\Sigma_{\rm SFR})$, was found to be correlated with the local surface mass density, which is equivalent to an inverse correlation with the gas depletion timescale, since in the extended Schmidt law \citep{Shi-11} the local surface density and the gas depletion timescale are directly related. This result was shown in figure 9 of \citetalias{Wang-19}. In this subsection, we will carry out an analogous study of the metallicity profiles for the same MaNGA galaxies used in \citetalias{Wang-19}. We will investigate the dependence of these metallicity profiles on stellar mass and on the deviation from the SFMS, analogously to the analysis of $\Sigma_{\rm SFR}$ in that paper. For consistency with that earlier work, we use $\Delta$sSFR($<${$R_{\rm e}$}) to separate galaxies.
Galaxies are classified into four sub-samples: 0.33$<\Delta$sSFR($<${$R_{\rm e}$}), 0.0$<\Delta$sSFR($<${$R_{\rm e}$})$<$0.33, $-$0.33$<\Delta$sSFR($<${$R_{\rm e}$})$<$0.0 and $\Delta$sSFR($<${$R_{\rm e}$})$<-0.33$, and are further split into five bins of overall stellar mass, as in \citetalias{Wang-19}. For each individual galaxy, we first compute a radial profile of $Z_{\rm gas}$, determined as the median $Z_{\rm gas}$\ of the spaxels located within each radial bin. In each of these subsamples we then construct the median $Z_{\rm gas}$($r$/{$R_{\rm e}$}) radial profiles, using the two metallicity estimators in turn, and estimating uncertainties by boot-strapping the sample galaxies. Figure \ref{fig:14} shows the median $Z_{\rm gas}$-\citetalias{Dopita-16} profiles for these sub-samples of galaxies. In each stellar mass bin, the metallicity profiles are displayed in blue, green, yellow and red in descending order of their overall $\Delta$sSFR($<${$R_{\rm e}$}). Figure \ref{fig:15} is the same as Figure \ref{fig:14} but for Z$_{\rm gas}$-\citetalias{Pilyugin-16}. The first-order result is clear: independent of which metallicity indicator is used, low-mass galaxies that lie significantly above (or below) the SFMS in their overall sSFR have systematically lower (or higher) $Z_{\rm gas}$ at all galactic radii. This dependence of $Z_{\rm gas}$ on $\Delta$sSFR however decreases (and may even vanish or reverse) with increasing stellar mass. There is some evidence that it also decreases towards the centers of the galaxies (see for example the higher mass bins in Figure \ref{fig:14}). The result of Figure \ref{fig:14} is broadly consistent with that of Figure \ref{fig:15}, except for the highest mass bins. In the highest mass bin of Figure \ref{fig:14}, a positive correlation between $\Delta$sSFR and $\Delta Z_{\rm gas}$ can be seen, which is clearly opposite to the result in the other mass bins.
This might be due to a failure of the {\tt N2S2H$\alpha$} indicator, in that the assumed N/O-O/H relation may not hold for the most massive SF galaxies. The negative correlation between the overall $\Delta$sSFR and $Z_{\rm gas}$ shown by the sets of profiles in Figures \ref{fig:14} and \ref{fig:15} is another manifestation of the inverse correlations in the Panels (e) of Figures \ref{fig:9} and \ref{fig:10} and in Figure \ref{fig:12}. We can then argue that these indicate that time-varying inflows are the primary drivers of variations of sSFR and $Z_{\rm gas}$ across the galaxy population \citep[also see \citetalias{Wang-19};][]{Wang-20a}. As noted above, this result is a manifestation of the general presence of SFR as a (negative) second parameter in the overall mass-metallicity relation \citep[e.g.][]{Mannucci-10}. The fact that we see the range of $Z_{\rm gas}$ (at given R/{$R_{\rm e}$}) decreasing with stellar mass is also consistent with previous studies of the overall $Z(M_*,{\rm SFR})$ relation \citep[e.g.][]{Mannucci-10, Curti-20}. It is quite striking, however, when comparing Figures \ref{fig:14} and \ref{fig:15} with figure 5 in \citetalias{Wang-19}, how the {\it dispersion} of $Z_{\rm gas}$ (at a given mass and radius) behaves in the {\it opposite} way to the dispersion in the (normalized) $\Sigma_{\rm SFR}$ shown in our previous work. Whereas the former decreases with increasing stellar mass (and possibly towards the centers of galaxies), the dispersion of $\Sigma_{\rm SFR}$ (or sSFR) does not vary much with mass but increases towards the centers of galaxies. We will discuss this in detail in Section \ref{sec:5.3}.
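The construction of the median profiles and their bootstrap widths described above can be sketched as follows; this is a minimal illustration on synthetic input, and the function name and array shapes are ours rather than the actual pipeline:

```python
import numpy as np

def bootstrap_median_profile(profiles, n_boot=500, seed=42):
    # profiles: (n_gal, n_rad) array of per-galaxy Z_gas radial profiles
    # within one (mass, Delta-sSFR) sub-sample. Galaxies (rows) are
    # resampled with replacement; the spread of the resampled medians
    # gives the width of the median profile.
    rng = np.random.default_rng(seed)
    profiles = np.asarray(profiles, float)
    n = len(profiles)
    boot = np.array([np.median(profiles[rng.integers(0, n, n)], axis=0)
                     for _ in range(n_boot)])
    return np.median(profiles, axis=0), np.std(boot, axis=0)

# Synthetic sub-sample: 100 galaxies, 8 radial bins, a common negative
# gradient plus 0.1 dex galaxy-to-galaxy scatter.
rng = np.random.default_rng(1)
r = np.linspace(0.1, 1.5, 8)
profiles = (8.7 - 0.2 * r) + 0.1 * rng.normal(size=(100, 1))
med, width = bootstrap_median_profile(profiles)
```

Resampling galaxies, rather than spaxels, is the appropriate choice here because the spaxels within one galaxy are not independent.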
\subsection{Quantitative interpretation of the dispersion of gas-phase metallicity and sSFR in MaNGA galaxies} \label{sec:5.3} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/manga/sigma_dsfr7_DOP16_old.eps,clip=true,width=0.45\textwidth} \epsfig{figure=./fig/obs/manga/sigma_dsfr7_PG16_old.eps,clip=true,width=0.45\textwidth} \end{center} \caption{Left panel: The ratio of $\sigma$($\Delta$$Z_{\rm gas}$)-\citetalias{Dopita-16} to $\sigma$($\Delta\Sigma_{\rm SFR}$) as a function of $\tau_{\rm dep}$ derived using the extended Schmidt law \citep{Shi-11}, based on the MaNGA galaxies. The different colors represent different stellar mass bins, as denoted in the top-left corner. Data points with radius larger than {$R_{\rm e}$}\ are indicated in gray. The uncertainties are estimated with the bootstrap method. Right panel: The same as the left panel, but using $Z_{\rm gas}$-\citetalias{Pilyugin-16}. } \label{fig:17} \end{figure*} As discussed in Section \ref{sec:2}, the simple gas-regulator model predicts not only the sign of the correlation between $\Delta$SFR and $\Delta Z_{\rm gas}$, but also the variation of $\Delta$SFR and $\Delta Z_{\rm gas}$ as a function of $\xi$ (see Figure \ref{fig:3}), defined in Equation \ref{eq:6.1} in terms of the ratio of the effective gas depletion timescale to the driving period. In this subsection, we will investigate more quantitatively the variation of $\Delta$SFR and $\Delta Z_{\rm gas}$ in MaNGA galaxies. This will inevitably run up against the systematic uncertainties arising from the significantly different outputs of our two chosen metallicity indicators. We can however try to minimize the effects of these by looking at relative effects across the galaxy population, expecting that systematic effects will thereby cancel out.
In \citetalias{Wang-19}, we constructed the $\Sigma_{\rm SFR}$ profiles for MaNGA galaxies in five different stellar mass bins and defined the parameter $\Delta \Sigma_{\rm SFR}$ as the deviation of a given galaxy from the median $\Sigma_{\rm SFR}$ profile at a given galactic radius and in a given stellar mass bin. We then computed the dispersion of this quantity across the population within five stellar mass bins and within five radial bins of $r$/{$R_{\rm e}$}. It was found that the scatter of $\Delta \Sigma_{\rm SFR}$, which we denoted as $\sigma(\Delta \Sigma_{\rm SFR})$, increases significantly with stellar surface mass density, $\Sigma_*$. We interpreted this trend in terms of a decreasing gas depletion time, linking the stellar surface mass density to the gas depletion timescale via the extended Schmidt law \citep{Shi-11}, i.e. $\tau_{\rm dep} \propto \Sigma_*^{-1/2}$. The trend with the inferred gas depletion time was found to be consistent with the model prediction for the scenario driven by a time-varying inflow rate, provided that the driving period of the inflow was more or less the same for all galaxies. In this subsection, we look at the analogous variation of $\Delta Z_{\rm gas}$ in the population, and compare the result with the model predictions. Similar to the definition of $\Delta \Sigma_{\rm SFR}$, we define $\Delta Z_{\rm gas}$ to be the deviation from the median Z$_{\rm gas}$ at a given radius and in a given stellar mass bin. We then compute the dispersion of this quantity, $\sigma$($\Delta$$Z_{\rm gas}$), and compare this to the dispersion of the completely independent (normalised) star-formation surface density, $\sigma(\Delta \Sigma_{\rm SFR})$, used in \citetalias{Wang-19}. Figure \ref{fig:17} shows the ratio of $\sigma$($\Delta$$Z_{\rm gas}$) to $\sigma$($\Delta\Sigma_{\rm SFR}$) as a function of the inferred gas depletion time, for both metallicity indicators.
In each panel of Figure \ref{fig:17}, different colors are used for different stellar mass bins. The different data points of the same color represent the different radial bins, evenly spaced with a bin width of 0.2{$R_{\rm e}$}. In a fixed stellar mass bin, $\tau_{\rm dep}$ monotonically increases with galactic radius as $\Sigma_*$ decreases. As in \citetalias{Wang-19}, data points at galactic radii larger than {$R_{\rm e}$}\ are indicated in gray, since these may be more easily affected by environmental effects \citep[also see \citetalias{Wang-19};][]{Wang-20a}. Readers are invited to consult figure 9 in \citetalias{Wang-19} and figures 15 and 17 in \cite{Wang-20a} for further insights into the variations of $\Sigma_{\rm SFR}$. As a whole, the ratio $\sigma(\Delta Z_{\rm gas})$/$\sigma(\Delta \Sigma_{\rm SFR})$ increases quite sharply with the inferred gas depletion time for both metallicity indicators. This is individually true for each of the five stellar mass bins (where it reflects radial changes), except for the highest stellar mass bin for $Z_{\rm gas}$-\citetalias{Dopita-16}, and it is true when comparing galaxies of different stellar masses. These trends reflect quantitatively a combination of the effects that are seen in Figures \ref{fig:14} and \ref{fig:15} of this paper and figure 5 of \citetalias{Wang-19}. It should however be noted that the dispersions $\sigma$ are calculated using the individual galaxies, whereas these figures plot the median values within the four bins of $\Delta$sSFR, and so are not directly comparable. We have discussed the particular case of the highest mass bin for $Z_{\rm gas}$-\citetalias{Dopita-16} in Section \ref{sec:5.2}. It is again clear that the dispersions $\sigma(\Delta Z_{\rm gas})$ obtained using the different metallicity estimators differ by a factor of two, reflecting their two-fold difference in range within the sample.
In the absence of a reliable reason to prefer one over the other, this makes any precise quantitative comparison with the predictions of the simple gas-regulator model (see Figure \ref{fig:3}) impossible. However, three points may be made, independent of the choice of estimator. First, both metallicity estimators show about a factor of four {\it increase} in the ratio $\sigma(\Delta Z_{\rm gas})$/$\sigma(\Delta \Sigma_{\rm SFR})$ as the inferred gas depletion timescale increases by an order of magnitude from $\log \tau_{\rm dep}$ $\sim$ 8.8 to 9.8. This is as expected for variations in inflow rate (around $\log \xi \sim 0$) and quite opposite to the trend expected for variations in SFE, which would have predicted a decrease in this ratio as the gas depletion timescale increases. Recalling that $\xi = 2\pi \tau_{\rm dep,eff}/T_{\rm p}$, we may infer $\log [(1+\lambda)T_{\rm p}/2\pi] \sim 9.5-9.8$ (in years), or driving periods of the inflow $T_{\rm p}$ of a few to several Gyr, by matching Figure \ref{fig:17} with the left panel of Figure \ref{fig:3}. This is broadly consistent with the independent argument presented in \cite{Wang-20a} that the variation of $\Delta$sSFR is mainly produced by a variation of the inflow rate on relatively long timescales. The driving period of the inflow appears to be considerably longer than the period of the temporal variations in SFE discussed in Section \ref{sec:4.3}. Second, it can be seen that at a given gas depletion time (stellar surface mass density), more massive galaxies tend to have a {\it lower} value of $\sigma(\Delta Z_{\rm gas})$/$\sigma(\Delta \Sigma_{\rm SFR})$. This could possibly reflect the quite plausible expectation that more massive galaxies have higher $Z_0/y_{\rm eff}$ than less massive galaxies, due to either a lower wind mass-loading $\lambda$ or a higher inflow metallicity $Z_0$, leading to a reduction in $\sigma(\Delta Z_{\rm gas})$/$\sigma(\Delta \Sigma_{\rm SFR})$, as shown in Figure \ref{fig:3}.
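The first point can be checked numerically. A sketch, assuming the inflow-driven form of Equation \ref{eq:20}, $\sigma(\Delta Z_{\rm gas})/\sigma(\Delta {\rm SFR}) = \xi(1+\xi^2)^{-1/2}(1+Z_0/y_{\rm eff})^{-1}$, and adopting $\log[(1+\lambda)T_{\rm p}/2\pi]=9.5$ as the pivot (our choice within the quoted range):

```python
import numpy as np

def inflow_driven_ratio(tau_dep, log_pivot=9.5, z0_over_yeff=0.0):
    # xi = 2*pi*tau_dep,eff/T_p = tau_dep / [(1+lambda)*T_p/(2*pi)];
    # the pivot (1+lambda)*T_p/(2*pi) is taken as 10**log_pivot in yr.
    xi = tau_dep / 10.0**log_pivot
    return xi / np.sqrt(1.0 + xi**2) / (1.0 + z0_over_yeff)

tau = np.logspace(8.8, 9.8, 50)          # yr, observed range of tau_dep
ratio = inflow_driven_ratio(tau)
factor = ratio[-1] / ratio[0]            # increase over one dex in tau_dep
```

This reproduces a roughly factor-of-four rise of the ratio over the observed decade in $\tau_{\rm dep}$, shows that the ratio stays below unity, and confirms that a larger $Z_0/y_{\rm eff}$ lowers the whole curve, as invoked for the more massive galaxies.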
Finally, it is noticeable that, even with the larger range of $Z_{\rm gas}$-\citetalias{Dopita-16}, the ratio $\sigma(\Delta Z_{\rm gas})$/$\sigma(\Delta \Sigma_{\rm SFR})$ never exceeds unity, the maximum value permitted by the gas-regulator model, as shown in Figure \ref{fig:3}. \section{Discussion} \label{sec:6} \subsection{The scale effect of the $\Delta${\rm sSFR}-$\Delta Z_{\rm gas}$ relation} \label{sec:6.1} In this work, we find that on $\sim$100-pc scales within individual galaxies, the local $\Delta Z_{\rm gas}$ appears to be positively correlated with the local $\Delta$sSFR, whereas, when looking at the integrated quantities of the same galaxies across the galaxy population, $\Delta Z_{\rm gas}$ is found to be negatively correlated with $\Delta$sSFR. These results are quite consistent with previous findings, as discussed in Section \ref{sec:introduction}. Specifically, based on $\sim$1000 SAMI galaxies, \cite{Sanchez-19} found that $\Delta$sSFR and $\Delta Z_{\rm gas}$ (defined in a similar way as in the present work) show a negative correlation across the galaxy population for a wide range of metallicity indicators, with correlation coefficients between $-$0.32 and $-$0.14. On highly resolved scales ($\sim$100 pc), many authors have found that regions with strongly enhanced star formation show enhanced gas-phase metallicity \citep[e.g.][]{Ho-18, Erroz-Ferrer-19, Kreckel-19}. Our results are consistent with these previous findings. The opposite signs of the correlation between $\Delta$sSFR and $\Delta Z_{\rm gas}$ on 100-pc (GMC) scales and on galactic and large sub-galactic scales indicate that different physical processes regulate the star formation and chemical enhancement on these different scales, in the context of the gas-regulator framework.
A positive $\Delta$sSFR-$\Delta Z_{\rm gas}$ relation arises from driving the gas-regulator with a time-varying SFE$(t)$, while a negative $\Delta$sSFR-$\Delta Z_{\rm gas}$ relation is the result of driving it with a time-varying inflow rate. A time-varying SFE$(t)$ on $\sim$100-pc scales \citep[see][]{Kruijssen-19, Chevance-20}, and a time-varying inflow rate on galactic scales \citep[see][]{Wang-19, Wang-20b}, are also suggested by other recent works. In this work, we have not examined the intermediate scales of, say, 1 kpc. However, it is not difficult to infer that, as the scale is increased, the effect of a time-varying SFE$(t)$ becomes weaker and that of a time-varying inflow rate becomes stronger. This is likely the reason why, on $\sim$1-kpc scales, the correlation between $\Delta$sSFR and $\Delta Z_{\rm gas}$ is weaker or even absent, as seen in previous works \citep{Moran-12, Barrera-Ballesteros-17, Berhane-Teklu-20}. \subsection{What determines the gas-phase metallicity?} \label{sec:6.2} As shown in Equation \ref{eq:22}, the metallicity of the steady state (constant inflow rate and constant SFE) is determined only by the metallicity of the inflow gas, $Z_0$, and the effective yield including the effects of any wind, $y(1+\lambda)^{-1}$. These two parameters are expected to be strongly correlated with the global stellar mass, in the sense that $Z_0$ increases with stellar mass while $\lambda$ decreases with it. This is probably the origin of the observed mass-metallicity relation. In addition, we emphasize that, within the gas-regulator framework, $Z_{\rm gas}$ does not depend on the {\it absolute} value of the SFE or inflow rate, but on their changes with time. Based on the analysis of the present work (and also some previous works), the so-called ``fundamental metallicity relation'' is clearly not valid at sub-kpc scales.
In the gas-regulator framework, we predict that the mass of the gas reservoir is always negatively correlated with $Z_{\rm gas}$ (see Figures \ref{fig:1} and \ref{fig:2}), regardless of whether the driving mechanism is a time-varying SFE or a time-varying inflow rate. Observationally, \cite{Bothwell-13} and \cite{Bothwell-16} found that the cold gas mass appears to be more fundamental than the SFR in determining the MZR, for both atomic and molecular gas, consistent with this picture. In this sense, the mass of the gas reservoir is a better second parameter than the SFR in determining the metallicity, from GMC to galactic scales. The importance of the SFR is that, in studying the correlation between $\Delta$SFR and $\Delta Z_{\rm gas}$ (as in this paper), we can distinguish the underlying physical processes that are driving variations in gas content, SFR and metallicity, even for individual galaxies. Specifically, even though the negative $\Delta$SFR vs. $\Delta Z_{\rm gas}$ relation across the galaxy population indicates that time-varying inflow is the main driver of variations in SFR, it is entirely possible that, in some particular galaxies, existing reservoirs of cold gas are undergoing strong gravitational instability, leading to a temporary increase in their SFE. For instance, \cite{Jin-16} identified 10 SF galaxies with kinematically misaligned gas and stars from the MaNGA survey. These galaxies have intense on-going star formation and high gas-phase metallicity in their central regions with respect to normal SF galaxies, which can readily be interpreted as evidence of a temporary increase in SFE due to gas-gas collisions between the pre-existing gas and the misaligned inflowing gas.
\subsection{The variability of SFR and gas-phase metallicity} \label{sec:6.3} In addition to the correlation between $\Delta$SFR and $\Delta Z_{\rm gas}$, we have also predicted the correlation between their dispersions within a population in the gas-regulator framework, shown in Equations \ref{eq:14} and \ref{eq:15} (or Equations \ref{eq:19} and \ref{eq:20}). This can be directly compared with the observations and puts further constraints on the variability of the inflow rate or SFE on different physical scales. As pointed out in \citetalias{Wang-19} and \cite{Wang-20b}, when driving the gas-regulator with a time-varying inflow rate, the amplitude of the variations in SFR is reduced from the amplitude of the variations in the inflow rate by a frequency-dependent ($\nu=1/T_{\rm p}$) response curve, i.e. Equation \ref{eq:14} or equation 9 in \citetalias{Wang-19}. In other words, for a given inflow rate with any given PSD$_{\Phi}$($\nu$), the power spectral distribution of the resulting SFR, PSD$_{\rm SFR}$($\nu$), can be written as: \begin{equation} \label{eq:28} \begin{split} {\rm PSD}_{\rm SFR}(\nu) = & \frac{\sigma^2(\log {\rm SFR})}{\sigma^2(\log \Phi)}\cdot {\rm PSD}_{\Phi}(\nu) \\ = & \frac{1}{1+\xi^2} \cdot {\rm PSD}_{\Phi}(\nu) \\ = & \frac{1}{1+(2\pi\tau_{\rm dep,eff}\nu)^2} \cdot {\rm PSD}_{\Phi}(\nu). \end{split} \end{equation} This is because any given inflow rate $\Phi(t)$ can be written as a linear combination of sinusoidal components with different frequencies via the Fourier transform \citep[also see][]{Tacchella-20}. Equation \ref{eq:28} theoretically predicts the connection between the PSDs of $\log$SFR and $\log \Phi$, when driving the gas-regulator with a time-varying inflow rate and time-invariant SFE. By using the SFR measured on different timescales, \cite{Wang-20b} have constrained the PSD of the variations in $\Delta$sSFR$(t)$, i.e. the movement of galaxies up and down relative to the SFMS.
We found that, when the frequency is scaled by the effective depletion time $\tau_{\rm dep,eff}$, the resulting PSDs for different radial bins and stellar mass bins largely overlap each other. In the same way, when driving the gas-regulator with a time-varying inflow rate, the relation between the PSDs of SFR$(t)$ and $Z_{\rm gas}(t)$ can be written as: \begin{equation} \label{eq:29} \begin{split} {\rm PSD}_{\rm Z}(\nu) = & \frac{\sigma^2(\log {Z})}{\sigma^2(\log {\rm SFR})}\cdot {\rm PSD}_{\rm SFR}(\nu) \\ = & \frac{\xi^2}{1+\xi^2} \cdot \frac{1}{(1+Z_0/y_{\rm eff})^2} \cdot {\rm PSD}_{\rm SFR}(\nu) \\ = & \frac{1}{1+(2\pi\tau_{\rm dep,eff}\nu)^{-2}} \cdot \frac{1}{(1+Z_0/y_{\rm eff})^2} \cdot {\rm PSD}_{\rm SFR}(\nu). \end{split} \end{equation} We note that Equations \ref{eq:28} and \ref{eq:29} are applicable when the variation of the inflow rate is dominant, i.e. on galactic scales. These two equations predict the link between the variability of inflow rate, SFR and gas-phase metallicity, which opens a new perspective on the interplay between gas accretion, star formation, and chemical enhancement of galaxies. At the smaller 100-pc (GMC) scale, the variation of SFE$(t)$ becomes dominant. In a similar way, according to Equations \ref{eq:19} and \ref{eq:20}, we can write the relation between the PSDs of SFR$(t)$ and SFE$(t)$, and the relation between the PSDs of $Z_{\rm gas}(t)$ and SFR$(t)$, as: \begin{equation} \label{eq:30} {\rm PSD}_{\rm SFR}(\nu) = \frac{1}{1+(2\pi\tau_{\rm dep,eff}\nu)^{-2}} \cdot {\rm PSD}_{\rm SFE}(\nu) \end{equation} and \begin{equation} \label{eq:31} {\rm PSD}_{\rm Z}(\nu) = \frac{1}{1+(2\pi\tau_{\rm dep,eff}\nu)^2} \cdot \frac{1}{(1+Z_0/y_{\rm eff})^2} \cdot {\rm PSD}_{\rm SFR}(\nu). \end{equation} Equations \ref{eq:30} and \ref{eq:31} provide clues for understanding the GMC-scale physics of gravitational instability of gas clouds, the triggering of star formation and the chemical enhancement of HII regions.
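The frequency-dependent attenuation entering Equation \ref{eq:28} can be checked numerically. The following sketch (with illustrative, hypothetical parameter values) drives a regulator with a small single-frequency sinusoidal inflow at fixed SFE and compares the measured relative SFR amplitude with the predicted factor $1/\sqrt{1+(2\pi\tau_{\rm dep,eff}\nu)^2}$:

```python
import numpy as np

def regulator_response(tau_eff, nu, n_periods=40, steps_per_period=2000):
    """Relative SFR amplitude when a regulator with effective depletion
    time tau_eff is driven by a small sinusoidal inflow of frequency nu."""
    dt = 1.0 / (nu * steps_per_period)
    n = n_periods * steps_per_period
    m = tau_eff                                  # mean steady-state gas mass
    sfr = np.empty(n)
    for i in range(n):
        phi = 1.0 + 0.1 * np.sin(2 * np.pi * nu * i * dt)
        sfr[i] = m / tau_eff                     # SFR tracks gas mass (fixed SFE)
        m += (phi - m / tau_eff) * dt            # explicit Euler step
    tail = sfr[n // 2:]                          # discard the transient
    return (tail.max() - tail.min()) / 2 / 0.1   # amplitude relative to driving

tau_eff = 1.0
for nu in (0.05, 0.2, 1.0):
    xi = 2 * np.pi * tau_eff * nu
    print(f"nu={nu}: measured {regulator_response(tau_eff, nu):.3f}, "
          f"predicted {1 / np.sqrt(1 + xi**2):.3f}")
```

Low frequencies pass through almost unchanged, while frequencies well above $1/(2\pi\tau_{\rm dep,eff})$ are strongly suppressed, which is the low-pass behaviour encoded in the transfer functions above.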
It is not easy to measure the variability of SFR and $Z_{\rm gas}$ from observations at present, while it is definitely feasible in hydrodynamical simulations. Our simple theoretical model potentially provides guidance for improving the baryonic physics at both galactic and GMC scales in hydrodynamical simulations. \subsection{Caveats} \label{sec:6.4} In Section \ref{sec:2.2} (or Section \ref{sec:2.3}), we always explore the effects of a time-varying inflow rate (or SFE) while assuming the other to be time-invariant. However, we note that in the real universe, both the inflow rate and the SFE could vary with time simultaneously. On galactic scales, the negative correlation between $\Delta$sSFR and $\Delta Z_{\rm gas}$ in the observations indicates that time-varying inflow rates are dominant. However, this does not mean that the SFE is fully time-invariant at galactic scale for all SF galaxies. Actually, as also mentioned in Section \ref{sec:6.2}, the SFE may be temporarily enhanced in some galaxies, due to physical processes such as a merger or interaction with close companions. At the small 100-pc scale, although the variation of SFE is dominant, we can probably ignore the possible variation of the inflow rate for different regions within individual galaxies. In addition, we do not consider other feedback processes of star formation in the model, except for the outflow, which is assumed to be simply proportional to the SFR. In the model, we assume that the yield $y$ of metals is uniform both within galaxies and across the galaxy population. The yield $y$ is closely related to the relative number of Type II supernovae, and therefore to the IMF. In the real universe, the IMF may differ from galaxy to galaxy, or even between different parts of the same galaxy.
Indeed, by using a sensitive index of the IMF, $^{13}$CO/C$^{18}$O, \cite{Zhang-18} found that the IMF in dusty star-burst galaxies at redshift $\sim$2-3 may be more top-heavy than the \cite{Chabrier-03} IMF. A top-heavy IMF would result in a larger $y$ than a bottom-heavy one, which increases the complexity of understanding the metal enhancement process by star formation. In Section \ref{sec:4}, we presented the observational results obtained using the {\tt N2S2H$\alpha$} and {\tt Scal} metallicity indicators. The two indicators produce broadly consistent results. As discussed in Section \ref{sec:3.3}, these two indicators, proposed by \citetalias{Dopita-16} and \citetalias{Pilyugin-16} respectively, offer significant improvements and advantages over previous indicators, like {\tt N2} and {\tt O3N2}. However, we note that when using {\tt O3N2}, the derived results differ in part from those presented here. Specifically, a negative correlation between $\Delta$sSFR and $\Delta Z_{\rm gas}$ across the MaNGA galaxy population can still be seen with {\tt O3N2}, while the $\Delta$sSFR-$\Delta Z_{\rm gas}$ relation of individual spaxels for MAD galaxies is found to depend quite strongly on stellar mass. For galaxies with stellar masses above $\sim10^{10.5}$${\rm M}_\odot$, the $\Delta$sSFR and $\Delta Z_{\rm gas}$ of spaxels show a positive correlation, which is similar to the results based on {\tt N2S2H$\alpha$} and {\tt Scal}. But for galaxies with stellar masses below $\sim10^{10.5}$${\rm M}_\odot$, the correlation between $\Delta$sSFR and $\Delta Z_{\rm gas}$ of spaxels becomes negative, different from the results of {\tt N2S2H$\alpha$} and {\tt Scal} shown in this paper. This may be due to the fact that both the {\tt N2S2H$\alpha$} and {\tt Scal} indicators break the degeneracy between metallicity and ionization parameter.
Although we prefer the {\tt N2S2H$\alpha$} and {\tt Scal} indicators, we mention this change in the results when using {\tt O3N2} here for those readers who may prefer that metallicity indicator. In the present work, we have only compared the observational results on low redshift galaxies with the model predictions. However, we note that our model predictions are also applicable to high-redshift galaxies. One may expect to push the analysis in the current work to higher redshift in the near future, based on near-infrared spectroscopic galaxy surveys with the JWST. \section{Summary and Conclusions} \label{sec:7} \label{sec:conclusion} The present work consists mainly of two parts. One is the theoretical prediction of the correlation between SFR and gas-phase metallicity in the gas-regulator framework (see Section \ref{sec:2}). The other is the study of this correlation directly from the observations and the comparison of the results with the model predictions (see Sections \ref{sec:4} and \ref{sec:5}). We summarize the results of these two parts in the following. The gas-regulator model is based on the interplay between inflow, outflow and star formation, assuming that the star formation is instantaneously determined by the mass of the cold gas reservoir \citep{Lilly-13, Wang-19}. According to the continuity equations for the mass of metals and of the gas reservoir, we construct the two basic continuity equations, shown in Equation \ref{eq:2} and Equation \ref{eq:3}. There are in total five quantities that determine the solution of the equations: the (assumed here varying) inflow rate $\Phi(t)$ and SFE$(t)$, and the (assumed here constant) mass-loading factor $\lambda$, metallicity of the inflow gas $Z_{\rm 0}$ and the yield $y$. Once these five quantities are specified, the solutions for SFR$(t)$, $M_{\rm gas}(t)$ and $Z_{\rm gas}(t)$ are unique. The model predictions are listed below.
\begin{itemize} \item When driving the gas-regulator system with a sinusoidal inflow rate and a time-invariant SFE, the resulting SFR$(t)$, $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ are also exact sinusoidal functions of time, but with some phase-delay with respect to the inflow rate (see Equations \ref{eq:6} and \ref{eq:8}). The $\Delta$SFR and $\Delta Z_{\rm gas}$, defined as $\log {\rm SFR}(t)/\langle {\rm SFR}\rangle$ and $\log Z(t)/\langle {Z}\rangle$, are found to be negatively correlated, and the ratio of $\sigma(\Delta {\rm SFR})$ to $\sigma(\Delta Z_{\rm gas})$ increases with increasing $\xi$, defined in terms of the effective gas depletion timescale as $2\pi\tau_{\rm dep,eff}/T_{\rm p}$ (see Equation \ref{eq:15}). If the gas-regulator is driven by a periodic step-function in the inflow rate, a similar negative correlation between $\Delta$SFR and $\Delta Z_{\rm gas}$ is produced. \item When driving the gas-regulator system with a sinusoidal SFE and a time-invariant inflow rate, the resulting SFR$(t)$, $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ can be approximated by sinusoidal functions if the variation of SFE is small (see the approximate solutions in Equations \ref{eq:17} and \ref{eq:18}). In contrast to the case of a time-varying inflow rate, $\Delta$SFR and $\Delta Z_{\rm gas}$ are now positively correlated, and the ratio of $\sigma(\Delta {\rm SFR})$ to $\sigma(\Delta Z_{\rm gas})$ decreases with increasing $\xi$ (see Equation \ref{eq:20}). When driving the gas-regulator with a periodic SFE in the form of a step-function, we find the positive correlation between $\Delta$SFR and $\Delta Z_{\rm gas}$ becomes less significant with respect to the case of a sinusoidal SFE. However, one thing is clear: the states with highly enhanced SFR are always metal-enhanced with respect to the mean metallicity.
\item Regardless of whether the gas-regulator is driven with a time-varying inflow rate or a time-varying SFE, $\Delta M_{\rm gas}$ is always predicted to be negatively correlated with $\Delta Z_{\rm gas}$ (see Figure \ref{fig:1} and Figure \ref{fig:2}). \item The scatter of $\Delta$Z$_{\rm gas}$ always decreases with increasing $Z_0/y_{\rm eff}$, where $y_{\rm eff}$ is defined as $y(1+\lambda)^{-1}$ (see Equation \ref{eq:15}, Equation \ref{eq:20} and Figure \ref{fig:3}). \item The mean SFR is determined by the mean inflow rate and the mass-loading factor, and the mean metallicity is determined by $Z_{\rm 0}+y_{\rm eff}$ (see Equations \ref{eq:21} and \ref{eq:22}). The resulting Z$_{\rm gas}$ does not depend on the SFE (or inflow rate) itself, but does depend on its temporal {\it changes}. \end{itemize} The key point is that a time-varying inflow rate leads to the opposite correlation between $\Delta$SFR and $\Delta Z_{\rm gas}$ from that produced by a time-varying SFE. Therefore, studying the $\Delta$SFR-$\Delta Z_{\rm gas}$ relation observationally on different spatial scales can in principle directly distinguish the driving mechanisms of the variation of star formation and gas-phase metallicity in galaxies. We then utilize the two-dimensional spectroscopic data of 38 SF galaxies from the MAD survey \citep{Erroz-Ferrer-19}, as well as a well-defined SF sample of $\sim$1000 galaxies from the MaNGA survey (\citetalias{Wang-19}). The spatial resolution of the MAD galaxies is $\sim$100 pc or better, while the spatial resolution of the MaNGA galaxies is 1-2 kpc. The MAD sample enables us to study the $\Delta$SFR-$\Delta Z_{\rm gas}$ relation down to 100-pc (GMC) scales, while the large sample size of MaNGA enables us to statistically study the $\Delta$SFR-$\Delta Z_{\rm gas}$ relation at galactic or large (radial) sub-galactic scales across the galaxy population. The SFR is measured based on the dust-attenuation-corrected H$\alpha$ luminosity \citep{Kennicutt-98}.
The two versions of the gas-phase metallicity are measured by adopting two recently-developed indicators: {\tt N2S2H$\alpha$} (\citetalias{Dopita-16}) and {\tt Scal} (\citetalias{Pilyugin-16}), which represent improvements and advantages over the previously widely-used indicators. The results from these two metallicity indicators are very similar. Here we summarize the main observational results, which are valid for both metallicity indicators. \begin{itemize} \item Consistent with previous studies, we find that MAD galaxies generally show a positive sSFR profile, confirming an inside-out growth scenario. As a whole, the gas-phase metallicity increases strongly with stellar mass, and decreases with galactic radius within individual galaxies, as expected. \item At $\sim$100 pc scale in MAD galaxies, we find that $\Delta$sSFR and $\Delta Z_{\rm gas}$ are positively correlated. This positive correlation shows little or no dependence on the global stellar mass. \item At galactic and sub-galactic scales, we find in contrast that $\Delta$sSFR and $\Delta Z_{\rm gas}$ are negatively correlated across the galaxy population. This is true for both the MAD and the larger MaNGA samples. The correlation between $\Delta$sSFR and $\Delta Z_{\rm gas}$ shows a strong dependence on global stellar mass and galactic radius. \item At the $\sim$100 pc scale, the ratio of $\sigma(\Delta Z_{\rm gas})$ to $\sigma(\Delta \Sigma_{\rm SFR})$ shows almost no dependence on the global stellar mass. However, at galactic or sub-galactic scales, $\sigma(\Delta Z_{\rm gas})$/$\sigma(\Delta \Sigma_{\rm SFR})$ increases with the inferred gas depletion time (inferred from the surface mass density using the extended Schmidt law). At fixed gas depletion time, $\sigma(\Delta Z_{\rm gas})$/$\sigma(\Delta \Sigma_{\rm SFR})$ appears to be smaller for galaxies of higher stellar mass. \end{itemize} We interpret the observational results in the framework of the gas-regulator model.
The overall increase of $Z_{\rm gas}$ with global stellar mass and the decrease of $Z_{\rm gas}$ with galactic radius can be well explained by the mass and radial dependence of the metallicity of the inflow gas $Z_0$ and the mass-loading factor $\lambda$. At 100-pc scales, the positive correlation between $\Delta$sSFR and $\Delta Z_{\rm gas}$ indicates that the time-varying SFE plays a dominant role in governing the star formation and metal enhancement. This is also consistent with the fact that the variation of SFE increases strongly towards smaller scales \citep{Kreckel-18, Chevance-20} and is likely caused by the passage of orbiting gas through regions of higher SFE, such as spiral arms. At galactic or sub-galactic scales, the negative correlation indicates that the time-varying inflow rate plays a dominant role. In addition, the variations of $\Delta$sSFR and $\Delta Z_{\rm gas}$ as a function of gas depletion time are in quite good agreement with the model predictions. This strongly supports the conclusion that on galactic scales the star formation and metal-enhancement are primarily regulated by the time-varying inflow rate of gas from the surrounding medium. We emphasize that the sign of the correlation between gas-phase metallicity and SFR is a powerful diagnostic of the driving mechanisms of star formation. Our study provides a new perspective in understanding the correlation between star formation rate, gas-phase metallicity and the mass of the cold gas reservoir, that is applicable from 100-pc scales up to galactic scales, from individual galaxies up to the overall galaxy population, and at both low and high redshifts. \acknowledgments Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatory of China, New Mexico State University, New York University, University of Notre Dame, Observat\'ario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. \section{Introduction} \label{sec:introduction} Heavy elements, ``metals'', are produced in the universe through nucleosynthesis in massive stars, and are partly returned to the interstellar medium via supernova explosions. The gas-phase metallicity of galaxies is therefore a powerful diagnostic of the different processes in galaxy evolution, including gas inflow, star formation in cold gas clouds, and wind-driven outflows of gas from galaxies. The Oxygen abundance, mostly produced on short time-scales (a few Myr) by the rapid collapse and violent explosion of massive stars, i.e.
the Type II supernovae, is widely used as an observationally accessible proxy for the metallicity of the gas in galaxies \citep[e.g.][]{Lequeux-79, Wheeler-89, Kewley-02, Pettini-04, Tremonti-04}. Observationally, the gas-phase metallicity is usually measured based on the flux ratios of emission lines in star-forming HII regions \citep[e.g.][]{Kobulnicky-04, Tremonti-04, Pettini-04, Maiolino-08, Kewley-08, Perez-Montero-09, Pilyugin-10, Rosales-Ortega-12, Marino-13, Dopita-13, Vogt-15, Dopita-16, Pilyugin-16}, such as [OII]$\lambda$3727, [OIII]$\lambda$4363, H$\beta$, [OIII]$\lambda$5007, H$\alpha$, [NII]$\lambda$6584, and [SII]$\lambda\lambda$6717,6731. These emission lines are mostly excited by O and B stars, which must have formed recently, within the last $<$10 Myr, and the gas-phase metallicity measured in this way can therefore be treated as the current ``instantaneous'' metallicity of the gas out of which the stars have formed. This timescale is much shorter than the $\sim 10^{10}$ year lifetime of the galaxies or even the $\sim 10^9$ year gas depletion timescale \citep{Leroy-08, Bigiel-08, Shi-11}. In the literature, there are a number of empirical relations to derive the Oxygen abundance based on combinations of some of these emission lines \citep[see][and references therein]{Kewley-08, Sanchez-17}. As a whole, measurements of the Oxygen abundance based on these different approaches are certainly positively correlated, but the absolute values and the ranges of these measurements are not consistent with each other \citep[e.g.][]{Kewley-08, Blanc-15, Sanchez-19}, due to the different methods, different samples and different emission lines used in the calibration. Even given the inconsistency between estimates of the gas-phase metallicity from different approaches, there is no dispute that the galaxy-integrated gas-phase metallicity is strongly correlated with the stellar mass \citep[e.g.][]{Lequeux-79, Tremonti-04}.
Based on fibre spectra of the central regions of large numbers of galaxies from SDSS \citep[the Sloan Digital Sky Survey;][]{Stoughton-02}, \cite{Tremonti-04} established a tight stellar-mass/gas metallicity relation (MZR) for star-forming (SF) galaxies spanning over three orders of magnitude in mass and one order of magnitude in gas-phase metallicity. The relation is relatively steep at the low-mass end, but flattens at stellar masses above $10^{10.5}$${\rm M}_\odot$. Furthermore, the gas-phase metallicity appears to have a larger dispersion towards lower stellar masses. The MZR is found to evolve with redshift in the sense that galaxies at higher redshift tend to be more metal-poor with respect to galaxies of similar stellar mass in the local universe \citep[e.g.][]{Savaglio-05, Maier-06, Maiolino-08}. The existence of the MZR can be explained by one or a combination of the following factors: outflows driven by supernova winds \citep{Larson-74,Tremonti-04, Finlator-08}, the different star formation efficiencies in galaxies \citep{Brooks-07, Mouhcine-07, Calura-09}, and the variations in the initial mass function across the galaxy population \citep{Koppen-07}. The gas-phase metallicity is also found to be correlated with other global properties of galaxies, such as the SFR \citep[e.g.][]{Ellison-08, Mannucci-10, Andrews-13} and the half-light radius \citep[e.g.][]{Ellison-08, Wang-18b}. Based on a large sample of galaxies from SDSS, \cite{Mannucci-10} found that the negative correlation with star formation rate (SFR) is strong at the low stellar mass end, and becomes less significant with increasing stellar mass. Furthermore, they claimed that there is a universal epoch-independent mass-metallicity-SFR relation $Z(M_*,{\rm SFR})$, i.e. that the apparent evolution in the MZR could be accounted for, phenomenologically, by the higher SFR encountered in high redshift galaxies.
This universal $Z(M_*,{\rm SFR})$ is therefore known as the ``fundamental metallicity relation'' \citep[FMR;][]{Mannucci-10, Lara-Lopez-10, Richard-11, Nakajima-12, Cresci-12, Dayal-13, Salim-14, Cresci-19, Huang-19, Curti-20}. \cite{Cresci-19} find an anti-correlation between the specific SFR (sSFR, defined as the SFR divided by the stellar mass) and the gas-phase metallicity at a given stellar mass, regardless of which metallicity and SFR indicators are used. Recently, the emergence of widespread integral field spectroscopy (IFS) galaxy surveys, such as MaNGA \citep{Bundy-15}, CALIFA \citep{Sanchez-12} and SAMI \citep{Croom-12}, has produced a wealth of spatially resolved spectroscopic data for relatively nearby galaxies. This enables the investigation of the relations of metallicity with mass (surface density) and star-formation rate within galaxies. A strong correlation between local stellar surface density and local gas-phase metallicity, an apparent analog of the global MZR, has been found in the spatially-resolved spectroscopic data by many authors \citep[e.g.][]{Moran-12, Rosales-Ortega-12, Barrera-Ballesteros-16, Zhu-17, Gao-18}. However, whether the SFR or sSFR is a second parameter in the sub-galactic resolved MZR has been debated. By using 38 nearby galaxies from the PINGS survey, \cite{Rosales-Ortega-12} found a negative correlation between the gas-phase metallicity and the local specific SFR, indicated by the H$\alpha$ equivalent width \citep[also see][]{Zhu-17, Sanchez-Almeida-18, Hwang-19}. However, \cite{Moran-12} and \cite{Barrera-Ballesteros-17} argued that there is no evidence for the local sSFR (or SFR surface density) being a robust second parameter in the resolved MZR.
More recently, based on an analysis of MaNGA galaxies (with a spatial resolution of 1-2 kpc), \cite{Berhane-Teklu-20} found a negative correlation between local metallicity and local sSFR when using the {\tt N2} and {\tt O3N2} metallicity indicators, but the correlation nearly disappears for the {\tt N2O2} and {\tt N2S2} metallicity indicators \citep[also see][]{Sanchez-Menguiano-19}. Furthermore, by using the HII regions of eight nearby galaxies mapped by the Multi-Unit Spectroscopic Explorer (MUSE), \cite{Kreckel-19} found that the regions with the highest H$\alpha$ luminosity tend to have higher gas-phase metallicity on $\sim$100 pc scales, indicating a {\it positive} correlation between metallicity and sSFR. Similarly, \cite{Ho-18} found that the oxygen abundances are higher in the spiral arms than in the inter-arm regions for NGC 2997, at a similar spatial resolution. A clear picture to understand these seemingly contradictory findings is still lacking. Efforts have been made to understand the global star formation rates and metal content of galaxies by looking at the balance between inflow, outflow and star formation \citep[e.g.][]{Schaye-10, Bouche-10, Dave-11, Lilly-13, Belfiore-19}. In particular, \cite{Dave-12} gave an analytic formalism that describes the evolution of the stellar mass, gas mass and metallicity of galaxies, assuming an equilibrium state in which the mass of the gas reservoir is constant, i.e. $\dot{M}_{\rm gas}\sim$0. This scenario is also known as the ``reservoir'' or ``bathtub'' model \citep{Bouche-10, Dave-12}. Because the gas mass does not change, this model is not able to regulate the SFR. \cite{Lilly-13} relaxed this restriction and allowed the mass of the gas reservoir to change, so that the SFR is regulated by the changing gas mass adjusting to the inflow rate. This is known as the ``gas-regulator'' model.
The gas-regulator model of \cite{Lilly-13} produces an analytic form of the mass-metallicity relation that has the SFR naturally as a second parameter, i.e. $Z(M_*,{\rm SFR})$. Further, the form of this relation is set by the basic parameters of the regulator model, specifically the star-formation efficiency and the mass-loading of the wind $\lambda$, both of which may vary with the overall stellar mass. However, if these mass-dependent parameters are independent of epoch, as is quite plausible, then the form of $Z(M_*,{\rm SFR})$ will also not evolve. The gas-regulator model is therefore naturally able to produce an epoch-independent FMR. The whole point of the \cite{Lilly-13} gas-regulator model is that the mass of gas in the galaxy can change. In previous papers, we have explored the dynamical behaviour of the gas-regulator model as it adjusts to variations in the inflow rate or other parameters, and found that it can well explain several features of the galaxy population, and especially observations of the variation of SFR within galaxies and across the galaxy population. Based on a well-defined sample of galaxies on the Star Formation Main Sequence \citep[SFMS; e.g.][]{Brinchmann-04, Daddi-07, Salim-07} from MaNGA, \citet[][hereafter \citetalias{Wang-19}]{Wang-19} investigated their SFR surface density ($\Sigma_{\rm SFR}$), and found that the dispersion in the relative $\Sigma_{\rm SFR}$ (correcting for different effective radii) at a given relative radius in galaxies of similar stellar mass decreases with increasing gas depletion time. The gas depletion timescale ($\tau_{\rm dep}$) is defined as the total cold gas mass divided by the current SFR for individual galaxies, which is exactly the inverse of the star formation efficiency (SFE).
By driving a gas-regulator system with a periodic time-varying inflow rate, \citetalias{Wang-19} found that regions with shorter gas depletion times are better able to follow variations in the inflow rate, and therefore produce a larger dispersion in $\Sigma_{\rm SFR}$ at a given driving period and amplitude. It was suggested that this feature of the gas-regulator model could produce the observed relation between the scatter of $\Sigma_{\rm SFR}$ and the inferred gas depletion time (see more details in \citetalias{Wang-19}). Similarly, the dynamical gas-regulator model can also qualitatively explain the observed dependence of the dispersion of the overall SFMS on stellar mass and stellar surface density \citep[][]{Wang-18b, Davies-19}. Consistent with, but quite independent of, our \citetalias{Wang-19} analysis, \cite{Wang-20a} found that regions with shorter gas depletion times also exhibit larger dispersions in the temporal changes of the SFR, as parameterized by SFR$_{\rm 5Myr}$/SFR$_{\rm 800Myr}$, the ratio of the SFR averaged within the last 5 Myr to the SFR averaged within the last 800 Myr. The SFR$_{\rm 5Myr}$/SFR$_{\rm 800Myr}$ was estimated from the equivalent widths of H$\alpha$ emission and H$\delta$ absorption. The results in \cite{Wang-20a} therefore confirm that the variations in the $\Sigma_{\rm SFR}$ profiles in \citetalias{Wang-19} are indeed due to real {\it temporal} variations in the SFR within galaxies, rather than any intrinsic differences between galaxies in the population. Furthermore, based on the same dataset as in \cite{Wang-20a}, \cite{Wang-20b} constrained the power spectral distribution (PSD) of the star formation histories of galaxies, i.e. the contribution of the variations in SFR on different timescales. This too gave results highly consistent with our earlier findings in \citetalias{Wang-19} and \cite{Wang-20a}.
All these results strongly support the importance of the dynamical response of the simple gas-regulator system to a time-varying inflow in producing the variations of SFR or $\Sigma_{\rm SFR}$ at galactic scale. Since the dynamical gas-regulator model has been successful in reproducing the un-evolving FMR \citep{Lilly-13}, and in interpreting the dispersion of $\Sigma_{\rm SFR}$ across the galaxy population (\citetalias{Wang-19}), it is interesting to further explore the behaviour of this system, and in particular to look again at its response to variations also in the star-formation efficiency, and to explore further the gas-phase metallicity as a diagnostic tool. This is the focus of the current paper. In this work, we extend the work of \cite{Lilly-13} and \citetalias{Wang-19} and look at the metal-enrichment process in the dynamical gas-regulator framework. We will present the basic assumptions and continuity equations of the dynamical gas-regulator model in Section \ref{sec:2.1} and examine how the SFR, and the total mass, metal mass and gas-phase metallicity of the gas reservoir, vary in response to time-variations in the inflow rate of gas into the system and/or time-variations in the SFE (Sections \ref{sec:2.2} and \ref{sec:2.3}). In addition, we will also explore how the wind mass-loading factor, the metallicity of the inflowing gas, and the yield (defined as the mass of metals returned to the interstellar medium per unit mass that is locked up into long-lived stars) can all modify these responses (Section \ref{sec:2.4}). We then turn to look for evidence of the predicted responses of the dynamic gas-regulator in observational data. In Section \ref{sec:3}, we introduce the data used in this work, including the IFS data from the MaNGA survey and from the MUSE Atlas of Disks \citep[MAD;][]{Erroz-Ferrer-19}.
The MaNGA sample is taken from \cite{Wang-18a} and \citetalias{Wang-19}, and includes nearly 1000 SF galaxies with typical spatial resolutions of 1-2 kpc, while the MAD sample has only 38 SF galaxies but with spatial resolution down to 100 pc or even less. Therefore, the MaNGA sample is suitable for studying the global effects at galactic and sub-galactic scales, while the MAD galaxies can be used to study the effects on the scale of HII regions or individual molecular clouds. In Sections \ref{sec:4} and \ref{sec:5}, we present the main observational results and compare them with the predictions of the dynamical gas-regulator model. In Section \ref{sec:6}, we discuss our results in comparison with previous findings, and present their implications for our understanding of the relationship between SFR, cold gas mass and gas-phase metallicity at different physical scales. We summarize the main results of this work in Section \ref{sec:7}. Throughout this paper, we assume a flat cold dark matter cosmology with $\Omega_m=0.27$, $\Omega_\Lambda=0.73$ and $h=0.7$ when computing distance-dependent parameters. Throughout this work, ``metallicity'' refers to the gas-phase metallicity. In constructing the dynamical gas-regulator model (see Section \ref{sec:2}), we denote the gas-phase metallicity as $Z$, defined as the mass ratio of metals to cold gas. Observationally, however, the gas-phase metallicity is usually characterized by the Oxygen abundance, 12+$\log$(O/H), where O/H is the ratio of the number densities of Oxygen and Hydrogen atoms. Therefore, we use 12+$\log$(O/H) to indicate the gas-phase metallicity derived from observations, as in Sections \ref{sec:3}, \ref{sec:4}, and \ref{sec:5}. 
\section{The dynamic response of the gas-regulator model} \label{sec:2} \subsection{Basic continuity equations} \label{sec:2.1} The basic idea of the gas-regulator model is that the formation of stars is instantaneously determined by the mass of a cold gas reservoir, which is regulated by the interplay between inflow, outflow and star formation \citep{Lilly-13}. The instantaneous SFR can be written as: \begin{equation} \label{eq:1} {\rm SFR}(t) = M_{\rm gas}(t) \cdot {\rm SFE}(t), \end{equation} where SFE$(t)$ is the instantaneous star formation efficiency. We note that the SFE is by definition the inverse of the gas depletion time ($\tau_{\rm dep}$), i.e. SFE$\equiv 1/\tau_{\rm dep}$. Following the work of \cite{Lilly-13} and \citetalias{Wang-19}, we assume that the mass loss due to outflow scales with the instantaneous SFR$(t)$ via a mass-loading factor $\lambda$, i.e. $\lambda$SFR$(t)$. The fraction of stellar mass that is returned to the interstellar medium through winds and supernova explosions is denoted as $R$. We adopt the instantaneous return approximation and take $R = 0.4$ from stellar population models \citep[e.g.][]{Bruzual-03}. We define the effective gas depletion timescale ($\tau_{\rm dep,eff}$) as the gas depletion timescale accounting not only for star formation but also for the mass-loaded outflow, i.e. $\tau_{\rm dep,eff}= \tau_{\rm dep}/(1-R+\lambda)$. We denote the inflow rate as $\Phi(t)$, and the metallicity of the infalling gas as $Z_0$. The metal mass of the gas reservoir is denoted as $M_{\rm Z}(t)$. The yield, i.e. the mass of metals returned to the interstellar medium per unit mass of instantaneously formed stars, is denoted as $y$. 
The basic continuity equations for gas and metals are \citep[see equations 9 and 20 in][]{Lilly-13}: \begin{equation} \label{eq:2} \begin{split} \frac{dM_{\rm gas}(t)}{dt} = & \ \Phi(t) - (1-R)\cdot{\rm SFR}(t) - \lambda \cdot {\rm SFR}(t) \\ = & \ \Phi(t) - (1-R+\lambda) \cdot {\rm SFE}(t)\cdot M_{\rm gas}(t) \end{split} \end{equation} \begin{equation} \label{eq:3} \begin{split} \frac{dM_{\rm Z}(t)}{dt} = & \ y\cdot {\rm SFR}(t) - Z(t) \cdot (1-R+\lambda) \cdot {\rm SFR}(t) + \Phi(t)\cdot Z_{\rm 0} \\ = & \ y\cdot {\rm SFE}(t)\cdot M_{\rm gas}(t) - (1-R+\lambda)\cdot {\rm SFE}(t) \cdot M_{\rm Z}(t) \\ & + \Phi(t) \cdot Z_{\rm 0} \end{split} \end{equation} where $Z(t) = M_{\rm Z}(t)/M_{\rm gas}(t)$ by definition. We apply the instantaneous recycling approximation in this work, i.e. we assume that the metal enrichment by star formation is instantaneous. We thereby ignore the timescale for supernova ejecta to mix with the interstellar medium and begin forming a new generation of stars (see more discussion in Section \ref{sec:6.4}), which can be a few tens of Myr \citep{Roy-95}. In Equations \ref{eq:2} and \ref{eq:3}, there are a total of five quantities driving the solution: the possibly time-varying $\Phi(t)$ and SFE$(t)$, and the (assumed constant) $\lambda$, $Z_0$ and $y$. The $M_{\rm gas}(t)$, SFR$(t)$, $M_{\rm Z}(t)$ and $Z(t)$ are then the responses of the regulator system to these five quantities. In this work we assume that $\lambda$, $Z_0$ and $y$ are time-independent. The mass-loading factor is tightly correlated with the stellar mass of galaxies \citep{Hayward-17}, and is not likely to change significantly on timescales of a Gyr given the current relatively low sSFR$\sim$0.1 Gyr$^{-1}$ of local SFMS galaxies. The yield $y$ is a physical parameter reflecting the nucleosynthetic activity and the relative number of massive stars, which is expected to be tightly correlated with the initial mass function (IMF) but which is likewise not likely to change significantly on Gyr timescales. 
However, we note that $\lambda$, $Z_{\rm 0}$ and even $y$ may well change from galaxy to galaxy and even from region to region within individual galaxies. Equations \ref{eq:2} and \ref{eq:3} are the two basic continuity equations of the gas-regulator model. The following analysis in Section \ref{sec:2} will focus on the solution of these two equations when the regulator system is driven with sinusoidal $\Phi(t)$ or sinusoidal SFE$(t)$, and investigate the correlation between the resulting instantaneous SFR$(t)$ and the gas-phase metallicity $Z(t)$. This correlation will be the main diagnostic used in our later analysis of observational data. In Sections \ref{sec:2.2} and \ref{sec:2.3}, we will investigate the properties of the gas-regulator model using a sinusoidally varying inflow rate or SFE. Although a pure sinusoid is undoubtedly artificial in isolation, any given inflow rate or SFE can be expressed as a linear combination of sinusoidal functions of different frequencies via the Fourier transform. Since Equation \ref{eq:2} is linear in $M_{\rm gas}$ when the SFE is held constant, the gas-regulator system responds to these individual sinusoidal components independently. For any given time-varying inflow rate, the resulting $M_{\rm gas}$ (or $M_{\rm Z}$) is then the superposition of the responses to the individual sinusoidal components. The same statement holds for a time-varying SFE, provided that the variation of SFE is small (see details in Section \ref{sec:2.3}). 
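The continuity equations above are straightforward to integrate numerically. The following minimal Python sketch (not code from the paper; $R=0.4$ follows the text, while the values of $\lambda$, $y$ and $Z_0$ are arbitrary illustrative assumptions) integrates Equations \ref{eq:2} and \ref{eq:3} with a simple forward-Euler scheme:

```python
import numpy as np

# Forward-Euler integration of the two continuity equations (Eqs. 2-3).
# Illustrative sketch only; default lambda, y and Z0 are assumptions.
def evolve_regulator(phi, sfe, dt, R=0.4, lam=0.0, Z0=0.0, y=0.03,
                     M_init=None, MZ_init=None):
    """Return M_gas(t), M_Z(t), SFR(t) and Z(t) for sampled inflow rate
    phi(t) and star-formation efficiency sfe(t), with time step dt."""
    fac = 1.0 - R + lam                      # the factor (1 - R + lambda)
    n = len(phi)
    M = np.empty(n)
    MZ = np.empty(n)
    tau_eff = 1.0 / (sfe[0] * fac)           # effective depletion timescale
    y_eff = y / fac                          # effective yield
    M[0] = phi[0] * tau_eff if M_init is None else M_init
    MZ[0] = (y_eff + Z0) * phi[0] * tau_eff if MZ_init is None else MZ_init
    for i in range(n - 1):
        sfr = M[i] * sfe[i]                  # Eq. 1
        M[i + 1] = M[i] + (phi[i] - fac * sfr) * dt              # Eq. 2
        MZ[i + 1] = MZ[i] + (y * sfr - fac * sfe[i] * MZ[i]
                             + phi[i] * Z0) * dt                 # Eq. 3
    sfr = M * sfe
    return M, MZ, sfr, MZ / M
```

For constant $\Phi$ and SFE, the system relaxes on the timescale $\tau_{\rm dep,eff}$ to the equilibrium state $M_{\rm gas}=\Phi_0\tau_{\rm dep,eff}$ and $Z=y_{\rm eff}+Z_0$, which is the mean state about which the sinusoidal solutions below oscillate.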
\subsection{Driving the gas-regulator system with a time-varying inflow rate} \label{sec:2.2} \begin{figure*} \begin{center} \epsfig{figure=./fig/model/inflow/sine/example_sine_inflow.eps,clip=true,width=0.32\textwidth} \epsfig{figure=./fig/model/inflow/sine/plot_zgas_sfr_2Gyr_l0_Am01.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/model/inflow/sine/plot_zgas_mgas_2Gyr_l0_Am01.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/model/inflow/step/example_step_inflow.eps,clip=true,width=0.32\textwidth} \epsfig{figure=./fig/model/inflow/step/plot_zgas_sfr_2Gyr_l0.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/model/inflow/step/plot_zgas_mgas_2Gyr_l0.eps,clip=true,width=0.33\textwidth} \end{center} \caption{Illustration of the SFR, $M_{\rm gas}$\ and $Z$ in response to a sinusoidally varying inflow rate (upper panels) and a periodic step-function inflow rate (lower panels), both with constant SFE, in the gas-regulator framework. Upper left panel: Examples of SFR$(t)$, $M_{\rm gas}$$(t)$, $M_{\rm Z}$$(t)$, and $Z(t)$ (scaled to their mean values) in response to the sinusoidal inflow rate at three different $\xi$ (see text). The cases of different $\xi$ are separated with arbitrary offsets ($-0.5$, 0.0, $+$0.5) along the y-axis for display purposes. Upper middle panel: The correlation of SFR and $Z$ in logarithmic space for different $\xi$. Upper right panel: The correlation of SFR and $M_{\rm gas}$\ in logarithmic space for different $\xi$ (see text). The lower panels are similar to the top panels, but for a periodic step function of the inflow rate. For illustration, the period of the step function is set to 2 Gyr, and $\tau_{\rm dep,eff}$ is set to 1 Gyr. We denote the period during which the inflow rate is at its higher value as the ``high-phase'', and the rest as the ``low-phase''. The duration of the high-phase inflow rate ($\tau_{\rm s}$) within one period varies from 0.1$\tau_{\rm dep,eff}$ to $\tau_{\rm dep,eff}$. 
The different colors in the lower middle and right panels correspond to different $\tau_{\rm dep,eff}/\tau_{\rm s}$, and the data points are equally spaced in time, so that their density reflects the speed of the change of the model. Since the SFE is set to be constant over time, SFR$(t)$ always overlaps with $M_{\rm gas}$$(t)$ in the two left panels, and the middle panels are identical to the right-most panels. } \label{fig:1} \end{figure*} We now first drive the gas-regulator system with a time-varying inflow rate and time-invariant SFE. As in \citetalias{Wang-19}, we consider the simple case in which the inflow rate is a sinusoidal function of time with period $T_{\rm p}$: \begin{equation} \label{eq:4} \Phi(t) = \Phi_{\rm 0} + \Phi_{\rm t} \cdot {\rm sin}(2\pi t/T_{\rm p}). \end{equation} Then, we look for solutions in which $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ are sinusoidal functions phase-shifted from the inflow rate. \begin{equation} \label{eq:5} \begin{split} M_{\rm gas}(t) = &\ M_{\rm 0} + M_{\rm t} \cdot {\rm sin}(2\pi t/T_{\rm p} - \delta) \\ M_{\rm Z}(t) = &\ M_{\rm Z0} + M_{\rm Zt} \cdot {\rm sin}(2\pi t/T_{\rm p} - \beta). \end{split} \end{equation} In \citetalias{Wang-19}, by substituting $M_{\rm gas}(t)$ into Equation \ref{eq:2} and equating the various time-dependent terms in the usual way, we obtained the solution for $M_{\rm gas}(t)$: \begin{equation} \label{eq:6} \begin{split} M_{\rm 0} = & \ \Phi_{\rm 0}\tau_{\rm dep,eff} \\ \delta \ \ = & \ {\rm arctan}(\xi) \\ \frac{M_{\rm t}}{M_{\rm 0}} = & \ \frac{1}{(1+\xi^2)^{1/2}} \times \frac{\Phi_t}{\Phi_0}, \end{split} \end{equation} where $\tau_{\rm dep,eff}$ is the effective gas depletion time, defined as $\tau_{\rm dep}\cdot (1-R+\lambda)^{-1}$ or ${\rm SFE}^{-1}\cdot (1-R+\lambda)^{-1}$ in Section \ref{sec:2.1}, and $\xi$ is the ratio of the effective gas depletion timescale to $T_{\rm p}(2\pi)^{-1}$, i.e. \begin{equation} \label{eq:6.1} \xi \equiv 2\pi \tau_{\rm dep,eff}/T_{\rm p}. 
\end{equation} As we discussed in \citetalias{Wang-19}, the amplitude and phase-delay of the output $M_{\rm gas}(t)$ strongly depend on the parameter $\xi$, i.e. on the gas depletion timescale relative to the driving period. At fixed $T_{\rm p}$, galaxies or regions with shorter gas depletion times are better able to follow the changes of the inflow rate, leading to a smaller phase-delay and larger amplitude, and vice versa (see more discussion in section 4 of \citetalias{Wang-19}). In a similar way, we substitute Equations \ref{eq:5} and \ref{eq:6} into Equation \ref{eq:3}, and equate the various time-dependent terms to find the solution for $M_{\rm Z}(t)$: \begin{equation} \label{eq:8} \begin{split} M_{\rm Z0} = &\ (y_{\rm eff}+Z_{\rm 0})\Phi_{\rm 0}\tau_{\rm dep,eff} \\ \beta \ \ = & \ {\rm arctan}[\frac{2y_{\rm eff}\xi + Z_{\rm 0}\xi(1+\xi^2)}{y_{\rm eff}(1-\xi^2)+Z_{\rm 0}(1+\xi^2)}] \\ \frac{M_{\rm Zt}}{M_{\rm Z0}} = &\ \frac{(1+\eta^2)^{1/2}}{1+\xi^2}\times \frac{\Phi_{\rm t}}{\Phi_{\rm 0}}, \end{split} \end{equation} where \begin{equation} \label{eq:9} y_{\rm eff} \equiv y\cdot(1-R+\lambda)^{-1} \end{equation} and \begin{equation} \label{eq:10} \eta = \xi Z_0 \cdot (y_{\rm eff}+Z_0)^{-1}. \end{equation} If $\beta$ is less than zero, then $\pi$ should be added to $\beta$. The shorthand $\eta$ is defined for convenience. We remind readers that the effective yield $y_{\rm eff}$ defined in this way is {\it different} from that used in some previous papers \citep[e.g.][]{Edmunds-90, Garnett-02}. We prefer this definition because we believe it is more fundamental. If we assume that the inflowing gas is pristine, i.e. $Z_0\sim0$, then the solution for $M_{\rm Z}(t)$ simplifies further to \begin{equation} \label{eq:12} \begin{split} M_{\rm Z0} = &\ y_{\rm eff}\Phi_{\rm 0}\tau_{\rm dep,eff} \\ \beta \ \ = &\ {\rm arctan}[\frac{2\xi}{1-\xi^2}] \\ \frac{M_{\rm Zt}}{M_{\rm Z0}} = & \ \frac{1}{1+\xi^2}\times \frac{\Phi_{\rm t}}{\Phi_{\rm 0}}. 
\end{split} \end{equation} Interestingly, in this specific case with $Z_0\sim0$, the phase-delay of $M_{\rm Z}(t)$ is twice that of $M_{\rm gas}(t)$, i.e. $\beta=2\delta$. Similar to $M_{\rm gas}(t)$, the phase-delay $\beta$ and relative amplitude $M_{\rm Zt}/M_{\rm Z0}$ of $M_{\rm Z}(t)$ strongly depend on the parameter $\xi$. At fixed $T_{\rm p}$, galaxies or regions with shorter (effective) gas depletion times can more easily follow the change of inflow rate and gas mass, resulting in smaller $\beta$ and larger $M_{\rm Zt}/M_{\rm Z0}$. Specifically, if $\xi$ is much less than unity, then both $\delta$ and $\beta$ are close to zero, and both $M_{\rm t}/M_{\rm 0}$ and $M_{\rm Zt}/M_{\rm Z0}$ are close to $\Phi_{\rm t}/\Phi_{\rm 0}$. In other words, when the (effective) gas depletion time is much less than the driving period, i.e. $\xi \ll 1$, the changes in the mass of the gas reservoir and in the mass of metals in the gas-regulator system can nearly follow the change of the inflow rate, with little phase-delay and with nearly the same relative amplitude of variation. If, however, $\xi$ is much larger than 1, then $\delta$ is close to $\pi/2$, $\beta$ is close to $\pi$, and both $M_{\rm t}/M_{\rm 0}$ and $M_{\rm Zt}/M_{\rm Z0}$ are close to zero. This means that, when the (effective) gas depletion time is much longer than the driving period, i.e. $\xi \gg 1$, the gas-regulator system is unable to follow the relatively fast changes in the inflow rate, resulting in little variation in either $M_{\rm gas}(t)$ or $M_{\rm Z}(t)$. The dependence of $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ on $\xi$ can be clearly seen in the top left panel of Figure \ref{fig:1}, where we show examples of the evolution of $M_{\rm gas}$ (blue), SFR (red), $M_{\rm Z}$ (orange) and $Z$ (purple) when driving the gas-regulator system with a periodic sinusoidal inflow. For illustrative purposes, we set $Z_{\rm 0}=0$, $\Phi_{\rm t}/\Phi_{\rm 0}=0.1$, $T_{\rm p}=1$ Gyr, and $\log \xi=-0.5$, 0.0, and 0.5. 
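The analytic solution of Equation \ref{eq:6} is easily checked by direct numerical integration. The short sketch below (illustrative only; the choices $\tau_{\rm dep,eff}=1$ Gyr, $T_{\rm p}=2$ Gyr and a 10 per cent inflow amplitude are assumptions, not the values used in the figures) integrates the gas-mass equation and recovers the predicted relative amplitude $1/(1+\xi^2)^{1/2}$ and phase delay $\arctan(\xi)$:

```python
import numpy as np

# Numerical check of Eq. (6): integrate dM/dt = Phi(t) - M/tau_eff for a
# sinusoidal inflow and measure the amplitude and phase of the response.
# tau_eff, T_p and the 10 per cent amplitude are illustrative assumptions.
tau_eff, Tp, eps = 1.0, 2.0, 0.1
xi = 2.0 * np.pi * tau_eff / Tp
dt = 1e-4
t = np.arange(0.0, 30.0, dt)
phi = 1.0 + eps * np.sin(2.0 * np.pi * t / Tp)
M = np.empty_like(t)
M[0] = tau_eff                      # start at the mean value Phi_0 * tau_eff
for i in range(len(t) - 1):
    M[i + 1] = M[i] + (phi[i] - M[i] / tau_eff) * dt
sel = t > t[-1] - 10.0 * Tp         # last ten periods: transient has decayed
w = 2.0 * np.pi * t[sel] / Tp
dm = M[sel] - M[sel].mean()
# least-squares projection onto sin and cos yields amplitude and phase
a = 2.0 * np.mean(dm * np.sin(w))   # = M_t cos(delta)
b = 2.0 * np.mean(dm * np.cos(w))   # = -M_t sin(delta)
amp = np.hypot(a, b) / tau_eff      # relative amplitude M_t / M_0
delta = np.arctan2(-b, a)           # phase delay
print(amp, eps / np.sqrt(1.0 + xi**2))   # these two values agree
print(delta, np.arctan(xi))              # and so do these
```

The same projection applied to $M_{\rm Z}(t)$ recovers Equation \ref{eq:12} in the $Z_0=0$ case.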
Given the solutions of $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ in Equations \ref{eq:6} and \ref{eq:8}, the resulting SFR$(t)$ and $Z(t)$ can easily be obtained. Since the SFE is assumed here (in this subsection) to be time-invariant, the change of SFR exactly follows the change of cold gas mass. Therefore, the blue and red lines in the top left panel of Figure \ref{fig:1} overlap. However, $Z(t)$, i.e. the ratio of $M_{\rm Z}(t)$ to $M_{\rm gas}(t)$, shows a more complicated behavior than SFR$(t)$, because it is not a sinusoidal function. The variation of the metallicity depends on the amplitudes of the variations in $M_{\rm Z}(t)$ and $M_{\rm gas}(t)$, as well as on the phase-delay between the two. To clarify the correlation between the instantaneous SFR$(t)$ and $Z(t)$, we plot $\log ({\rm SFR}(t)/\langle {\rm SFR}\rangle)$ vs. $\log (Z(t)/\langle {Z}\rangle)$ and $\log (M_{\rm gas}(t)/\langle {M_{\rm gas}}\rangle)$ vs. $\log (Z(t)/\langle {Z}\rangle)$ for a set of different $\xi$ in the top middle and right panels of Figure \ref{fig:1}, where $\langle {Z}\rangle$, $\langle {\rm SFR}\rangle$ and $\langle {M_{\rm gas}}\rangle$ are the average metallicity, SFR and cold gas mass, respectively. Since $\log ({\rm SFR}(t)/\langle {\rm SFR}\rangle)$ is a relative quantity, i.e. $\log {\rm SFR}(t) - \log \langle {\rm SFR}\rangle$, we also denote it as $\Delta \log$SFR. In the same way, we denote $\log (Z(t)/\langle {Z}\rangle)$ as $\Delta \log Z$, and $\log (M_{\rm gas}(t)/\langle {M_{\rm gas}}\rangle)$ as $\Delta \log M_{\rm gas}$. As shown, at all the different $\xi$ shown here, the gas-regulator model predicts that $\Delta \log$SFR and $\Delta \log Z$ are {\it negatively} correlated when the system is driven with a sinusoidal inflow rate. The slope and the tightness of the $\Delta \log$SFR-$\Delta \log Z$ correlation strongly depend on $\xi$. 
Generally speaking, at fixed $T_{\rm p}$, the correlation of $\Delta \log$SFR-$\Delta \log Z$ becomes weaker and steeper with increasing effective gas depletion time. The slope of the $\Delta \log$SFR-$\Delta \log$Z relation is always steeper than $-1$. This means that the gas-regulator model requires the scatter of $\Delta \log Z$ to always be less than or equal to the scatter of $\Delta \log$SFR. We will come back to this point later. In addition to the sinusoidal inflow rate, we also explored, for completeness, the effect of a periodic step function in the inflow rate. We solve Equations \ref{eq:2} and \ref{eq:3} numerically for this case. The bottom panels of Figure \ref{fig:1} show the resulting $M_{\rm gas}(t)$, SFR$(t)$, $M_{\rm Z}(t)$ and $Z(t)$, as well as the resulting correlation between $\Delta \log$SFR (and $\Delta \log M_{\rm gas}$) and $\Delta \log Z$. In generating the plots, we set the period of the step function to 2 Gyr, and vary the upper-state duration of the inflow rate ($\tau_{\rm s}$) from $0.1\tau_{\rm dep,eff}$ to $\tau_{\rm dep,eff}$. As shown in the bottom-left panel of Figure \ref{fig:1}, a sudden increase of the inflow rate causes an increase of the SFR (or cold gas mass) and a decrease of the gas-phase metallicity, and vice versa. This therefore also leads to a {\it negative} correlation between SFR (or cold gas mass) and metallicity, i.e. between $\Delta \log$SFR (or $\Delta \log M_{\rm gas}$) and $\Delta \log Z$, consistent with the result for the sinusoidal variation in the top panels of Figure \ref{fig:1}. Of course, observationally we cannot follow the temporal evolution of a single galaxy. 
In \citetalias{Wang-19} we therefore explored the scatter of the instantaneous SFR in a population of gas regulators, $\sigma({\rm SFR})$, when they are driven with simple sinusoidal $\Phi(t)$, and showed that within this population $\sigma({\rm SFR})$/$\sigma(\Phi)$ is a monotonically decreasing function of $\xi$: \begin{equation} \label{eq:13} \frac{\sigma({\rm SFR})}{\sigma(\Phi)} = \frac{1}{(1+\xi^2)^{1/2}} \cdot (1-R+\lambda)^{-1}. \end{equation} In Equation \ref{eq:13}, the scatters of SFR and $\Phi$ are calculated in linear space, while in the observations the scatter of SFR is usually measured in logarithmic space. Here we present an approximate analytical solution for $\sigma({\rm \log SFR})/\sigma(\log \Phi)$, which can be written as: \begin{equation} \label{eq:14} \frac{\sigma({\rm \log SFR})}{\sigma(\log \Phi)} \approx \frac{1}{(1+\xi^2)^{1/2}}. \end{equation} As can be seen in Equation \ref{eq:14}, if the scatter is measured in logarithmic space, the factor $1-R+\lambda$ vanishes, since this can be viewed as a constant ``inefficiency'' in the star formation \citep[see][]{Lilly-13}. The details of the derivation of Equation \ref{eq:14} are given in Appendix \ref{sec:A}. The left panel of Figure \ref{fig:3} shows the numerical solution (black solid curve) and the approximate analytical solution (gray dashed curve) of $\sigma({\rm \log SFR})/\sigma(\log \Phi)$ as a function of $\log\xi$, which are in excellent agreement. Specifically, in obtaining the numerical solution, we first solve Equation \ref{eq:2} to obtain SFR$(t)$ at a set of different $\xi$, and then calculate $\sigma(\log {\rm SFR})$ within a single period for each $\xi$. The $\sigma(\log Z)$ is calculated in a similar way. 
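This numerical procedure can be sketched compactly. The snippet below (illustrative; $T_{\rm p}$, the 10 per cent amplitude and the $\xi$ grid are assumed values) integrates the regulator for several $\xi$ and compares the measured $\sigma(\log {\rm SFR})/\sigma(\log \Phi)$ with the approximation of Equation \ref{eq:14}:

```python
import numpy as np

# Numerical check of Eq. (14): sigma(log SFR)/sigma(log Phi) versus xi
# for a sinusoidal inflow rate and constant SFE.
def sigma_ratio(xi, Tp=2.0, eps=0.1, dt=1e-3, n_per=30):
    tau = xi * Tp / (2.0 * np.pi)    # effective depletion time for this xi
    t = np.arange(0.0, n_per * Tp, dt)
    phi = 1.0 + eps * np.sin(2.0 * np.pi * t / Tp)
    M = np.empty_like(t)
    M[0] = tau                       # start at the mean gas mass
    for i in range(len(t) - 1):
        M[i + 1] = M[i] + (phi[i] - M[i] / tau) * dt
    sel = t > 10.0 * Tp              # discard the initial transient
    # with constant SFE, log SFR varies exactly as log M_gas
    return np.std(np.log10(M[sel])) / np.std(np.log10(phi[sel]))

for logxi in (-0.5, 0.0, 0.5):
    xi = 10.0**logxi
    print(logxi, sigma_ratio(xi), 1.0 / np.sqrt(1.0 + xi**2))
```

The measured ratios track the analytic curve $1/(1+\xi^2)^{1/2}$ closely across the $\xi$ range, as in the left panel of Figure \ref{fig:3}.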
The analytic solution provides physical insight into how $\xi$ determines the response of the gas-regulator model, while the numerical solution provides a check of the analytic solution, and in particular a test of the validity of the {\it approximate} analytic solution. As pointed out in \citetalias{Wang-19} and \cite{Wang-20b}, when driving the gas-regulator with a time-varying inflow rate, the amplitude of the variations in SFR is reduced from the amplitude of variations in the inflow rate by a frequency-dependent ($\nu=1/T_{\rm p}$) response curve, i.e. Equation \ref{eq:14} or equation 9 in \citetalias{Wang-19}. In other words, for a given inflow rate with any given PSD$_{\log\Phi}$($\nu$), the power spectral distribution of the resulting $\log$SFR, PSD$_{\rm \log SFR}$($\nu$), can be written as: \begin{equation} \label{eq:28} \begin{split} {\rm PSD}_{\rm \log SFR}(\nu) \approx & \frac{\sigma^2(\log {\rm SFR})}{\sigma^2(\log \Phi)}\cdot {\rm PSD}_{\log \Phi}(\nu) \\ \approx & \frac{1}{1+\xi^2} \cdot {\rm PSD}_{\log \Phi}(\nu) \\ = & \frac{1}{1+(2\pi\tau_{\rm dep,eff}\nu)^2} \cdot {\rm PSD}_{\log \Phi}(\nu). \end{split} \end{equation} Equation \ref{eq:28} is established under the condition that the variation of the inflow rate is small, and therefore that $\log$SFR$(t)$ and $\log\Phi(t)$ are close to sinusoidal functions. However, as examined in Appendix \ref{sec:C}, when we directly input a sinusoidal $\Phi(t)$ in logarithmic space with a large amplitude (0.5 dex), we find that Equation \ref{eq:28} is still valid. Therefore, we conclude that Equation \ref{eq:28} theoretically predicts the connection between the PSDs of $\log$SFR and $\log \Phi$, when driving the gas-regulator with a time-varying inflow rate \citep[also see \citetalias{Wang-19};][]{ Wang-20a,Tacchella-20}. However, the inflow history of galaxies is of course not a directly observable quantity. 
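The frequency-by-frequency suppression expressed by Equation \ref{eq:28} can be illustrated by driving the regulator with a superposition of two sinusoids and measuring the power of $\log$SFR at each input frequency. The frequencies and 5 per cent amplitudes in this sketch are illustrative assumptions:

```python
import numpy as np

# Illustration of Eq. (28): each Fourier component of log Phi is
# suppressed by 1/(1 + (2 pi tau_eff nu)^2) in the power of log SFR.
tau = 1.0                               # tau_dep,eff in Gyr (assumed)
dt = 1e-3
t = np.arange(0.0, 200.0, dt)
nu1, nu2 = 0.1, 1.0                     # driving frequencies in Gyr^-1
phi = (1.0 + 0.05 * np.sin(2.0 * np.pi * nu1 * t)
           + 0.05 * np.sin(2.0 * np.pi * nu2 * t))
M = np.empty_like(t)
M[0] = tau
for i in range(len(t) - 1):
    M[i + 1] = M[i] + (phi[i] - M[i] / tau) * dt

def power_at(x, nu, tt):
    """Power of the Fourier component of x at frequency nu."""
    x = x - x.mean()
    c = np.mean(x * np.cos(2.0 * np.pi * nu * tt))
    s = np.mean(x * np.sin(2.0 * np.pi * nu * tt))
    return c**2 + s**2

sel = t > 100.0                         # integer number of periods remains
for nu in (nu1, nu2):
    ratio = (power_at(np.log10(M[sel]), nu, t[sel])
             / power_at(np.log10(phi[sel]), nu, t[sel]))
    print(nu, ratio, 1.0 / (1.0 + (2.0 * np.pi * tau * nu)**2))
```

Because Equation \ref{eq:2} is linear in $M_{\rm gas}$ at constant SFE, the two components pass through the regulator independently, each attenuated by its own factor of $1/[1+(2\pi\tau_{\rm dep,eff}\nu)^2]$.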
Finally, we come to the scatter of the gas-phase metallicities in the population. As shown in the top middle panel of Figure \ref{fig:1}, the ratio of the scatter in $Z(t)$ to the scatter of SFR$(t)$ (i.e. what would be observed in a given population of regulators at a fixed time) is predicted to be strongly dependent on $\xi$. Here we present the approximate analytical solution for $\sigma({\rm \log Z})/\sigma(\log {\rm SFR})$, which can be written as: \begin{equation} \label{eq:15} \frac{\sigma(\log Z)}{\sigma(\log {\rm SFR})} \approx \frac{\xi}{(1+\xi^2)^{1/2}} \cdot \frac{1}{1+Z_0/y_{\rm eff}}. \end{equation} The detailed derivation of Equation \ref{eq:15} is given in Appendix \ref{sec:A}. Similar to Equation \ref{eq:14}, Equation \ref{eq:15} provides the link between the variability of $\log$SFR and that of $\log Z$. However, one cannot write a relation between PSD$_{\rm \log SFR}$ and PSD$_{\rm \log Z}$ in the same way as Equation \ref{eq:28}, because the resulting metallicity is not a sinusoidal function. Since $\sigma({\rm \log Z})/\sigma(\log {\rm SFR})$ depends only on $\xi$ at fixed $Z_{\rm 0}/y_{\rm eff}$, we present both the numerical (the blue, green and red curves) and the analytic solutions (the gray dashed curves) of $\sigma(\log Z)/\sigma(\log {\rm SFR})$ in the left panel of Figure \ref{fig:3} at three different $Z_{\rm 0}/y_{\rm eff}$, which are again in excellent agreement. For all these $Z_{\rm 0}/y_{\rm eff}$, the $\sigma({\rm \log Z})/\sigma(\log {\rm SFR})$ increases monotonically with $\log \xi$, opposite to the behaviour of $\sigma({\rm \log SFR})/\sigma(\log \Phi)$. Intriguingly, if $Z_0=0$, Equations \ref{eq:14} and \ref{eq:15} are exactly symmetric about the axis $\log \xi=0$. Unlike Equation \ref{eq:14}, the prediction of Equation \ref{eq:15} relates two readily observable quantities, the instantaneous SFR and the instantaneous gas-phase metallicity. 
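Equation \ref{eq:15} can likewise be verified numerically by integrating both continuity equations for a sinusoidal inflow. In the sketch below, $R=0.4$ follows the text, while $\lambda=0$, $y=0.03$ (so $y_{\rm eff}=0.05$), the 10 per cent amplitude and $T_{\rm p}=2$ Gyr are illustrative assumptions:

```python
import numpy as np

# Numerical check of Eq. (15): sigma(log Z)/sigma(log SFR) for a
# sinusoidal inflow rate at constant SFE.
def z_sfr_ratio(xi, Z0=0.0, Tp=2.0, eps=0.1, R=0.4, lam=0.0, y=0.03,
                dt=1e-3):
    fac = 1.0 - R + lam
    sfe = 2.0 * np.pi / (xi * Tp * fac)   # chosen so that this xi holds
    tau_eff = 1.0 / (sfe * fac)
    y_eff = y / fac
    t = np.arange(0.0, 30.0 * Tp, dt)
    phi = 1.0 + eps * np.sin(2.0 * np.pi * t / Tp)
    M = np.empty_like(t)
    MZ = np.empty_like(t)
    M[0] = tau_eff                        # start at the mean state
    MZ[0] = (y_eff + Z0) * tau_eff
    for i in range(len(t) - 1):
        sfr = M[i] * sfe
        M[i + 1] = M[i] + (phi[i] - fac * sfr) * dt
        MZ[i + 1] = MZ[i] + (y * sfr - fac * sfe * MZ[i]
                             + phi[i] * Z0) * dt
    sel = t > 10.0 * Tp                   # drop the transient
    Z = MZ[sel] / M[sel]
    # with constant SFE, log SFR varies exactly as log M_gas
    return np.std(np.log10(Z)) / np.std(np.log10(M[sel]))

for logxi in (-0.5, 0.0, 0.5):
    xi = 10.0**logxi
    print(logxi, z_sfr_ratio(xi), xi / np.sqrt(1.0 + xi**2))
```

Passing a non-zero $Z_0$ suppresses the ratio by the additional factor $1/(1+Z_0/y_{\rm eff})$, as in Equation \ref{eq:15}.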
\subsection{Driving the gas-regulator system with time-varying SFE} \label{sec:2.3} \begin{figure*} \begin{center} \epsfig{figure=./fig/model/inflow/sine/scatter_sine_inflow.eps,clip=true,width=0.42\textwidth} \epsfig{figure=./fig/model/sfe/sine/scatter_sine_sfe.eps,clip=true,width=0.42\textwidth} \end{center} \caption{Left panel: The ratio of $\sigma$($\log Z$) to $\sigma$($\log$SFR) as a function of $\xi$ determined by numerical calculation, when the gas regulator is driven with a sinusoidal inflow rate ($\Phi_{\rm t}/\Phi_{\rm 0}=0.1$) and constant SFE. The colored lines show the relations for three different $Z_{\rm 0}/y_{\rm eff}$. We also display the ratio of $\sigma$($\log$SFR) to $\sigma$($\log \Phi$) as a function of $\xi$ as a black line. Each solid line is accompanied by a gray dashed line, which shows the approximate analytic solution (see Equations \ref{eq:14} and \ref{eq:15}). Right panel: The same as the left panel but for the case in which the gas-regulator system is driven with a sinusoidal SFE and constant inflow rate (see Equations \ref{eq:19} and \ref{eq:20}). } \label{fig:3} \end{figure*} \begin{figure*} \begin{center} \epsfig{figure=./fig/model/sfe/sine/example_sine_sfe.eps,clip=true,width=0.32\textwidth} \epsfig{figure=./fig/model/sfe/sine/plot_zgas_sfr_2Gyr_l0.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/model/sfe/sine/plot_zgas_mgas_2Gyr_l0.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/model/sfe/step/example_step_sfe.eps,clip=true,width=0.32\textwidth} \epsfig{figure=./fig/model/sfe/step/plot_zgas_sfr_2Gyr_l0.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/model/sfe/step/plot_zgas_mgas_2Gyr_l0.eps,clip=true,width=0.33\textwidth} \end{center} \caption{Illustration of the SFR, $M_{\rm gas}$\ and $Z$ in response to a time-varying SFE$(t)$ in the form of a sinusoidal function (upper panels) or a periodic step function (lower panels), with constant inflow rate, in the gas-regulator framework. 
The panels, lines and colors are the same as those in Figure \ref{fig:1}, except that the gas-regulator system is now driven with a periodic SFE$(t)$ and constant inflow rate. } \label{fig:2} \end{figure*} In the previous subsection, we looked at the behavior of the gas-regulator system with time-varying inflow and time-invariant SFE. In this subsection, we will explore the behaviour when the regulator is driven with a constant inflow rate but experiences a time-varying SFE. Similar to Section \ref{sec:2.2}, we first input a time-invariant inflow, i.e. $\Phi(t)=\Phi_0$, and a sinusoidally varying SFE$(t)$: \begin{equation} \label{eq:16} {\rm SFE}(t) = {\rm SFE_0} + {\rm SFE_t} \cdot {\rm sin}(2\pi t/T_{\rm p}). \end{equation} The driving period of SFE$(t)$ is again denoted as $T_{\rm p}$. As before, we look for solutions for $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ of the form of Equation \ref{eq:5}. However, we note that, unlike in Section \ref{sec:2.2}, the solutions for $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ are not exact sinusoidal functions, but can be approximated as sinusoids. We assume that the variation of the input SFE$(t)$ is small, i.e. SFE$_{\rm t}\ll $SFE$_{\rm 0}$, so that the variations of the resulting $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ are also small, i.e. $M_{\rm t}\ll M_{\rm 0}$ and $M_{\rm Zt}\ll M_{\rm Z0}$. Indeed, based on Equations \ref{eq:2} and \ref{eq:3}, in this small-variation limit, varying the SFE is mathematically (but of course not physically) equivalent to varying the inflow rate with ${\rm SFE}_{\rm t}/{\rm SFE}_{\rm 0} = -\Phi_{\rm t}/\Phi_{\rm 0}$. 
Therefore, the solution of Equations \ref{eq:2} and \ref{eq:3} can be directly written as: \begin{equation} \label{eq:17} \begin{split} M_{\rm 0} = &\ \Phi_{\rm 0}\tau_{\rm dep,eff} \\ \delta \ \ = &\ {\rm arctan}(\xi) \\ \frac{M_{\rm t}}{M_{\rm 0}} = &\ - \frac{1}{(1+\xi^2)^{1/2}}\times \frac{{\rm SFE}_{\rm t}}{{\rm SFE}_{\rm 0}}, \end{split} \end{equation} and \begin{equation} \label{eq:18} \begin{split} M_{\rm Z0} = &\ (y_{\rm eff}+Z_{\rm 0})\Phi_{\rm 0}\tau_{\rm dep,eff} \\ \beta \ \ = & \ {\rm arctan}[\frac{2y_{\rm eff}\xi + Z_{\rm 0}\xi(1+\xi^2)}{y_{\rm eff}(1-\xi^2)+Z_{\rm 0}(1+\xi^2)}] \\ \frac{M_{\rm Zt}}{M_{\rm Z0}} = &\ -\frac{(1+\eta^2)^{1/2}}{1+\xi^2}\times \frac{{\rm SFE}_{\rm t}}{{\rm SFE}_{\rm 0}}. \end{split} \end{equation} We emphasize that the definitions of $\xi$ and $\eta$ are the same as in Section \ref{sec:2.2}. Comparing Equations \ref{eq:17} and \ref{eq:18} with Equations \ref{eq:6} and \ref{eq:8}, the difference in sign reflects the fact that, when driving the regulator with a time-varying $\Phi(t)$, an increase of $\Phi(t)$ produces an increase of both $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ with some time delay, while when driving the regulator with a time-varying SFE$(t)$, an increase of SFE$(t)$ leads to decreases of both $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$, again with some time delay. We note that the difference in sign cannot simply be treated as an additional phase-delay of $\pi$, i.e. a time lag of $T_{\rm p}/2$, because an inverse connection between the mass of cold gas and the metallicity is physically expected. The solutions in Equations \ref{eq:17} and \ref{eq:18} are illustrated in the top-left panel of Figure \ref{fig:2}, where we show examples of the evolution of $M_{\rm gas}$ (blue), SFR (red), $M_{\rm Z}$ (orange) and $Z$ (purple). 
As before, we investigate the correlation between $\Delta\log$SFR (and $\Delta\log M_{\rm gas}$) and $\Delta\log Z$ at a set of different $\xi$, as shown in the top-middle (and top-right) panel of Figure \ref{fig:2}. In contrast with the result in Section \ref{sec:2.2}, $\Delta\log$SFR and $\Delta\log Z$ now show a strong {\it positive} correlation. However, we note that $\Delta\log M_{\rm gas}$ and $\Delta\log Z$ remain negatively correlated, as would be expected, independent of whether the gas-regulator is driven with time-varying inflow or time-varying SFE. In this sense, metallicity variations fundamentally reflect variations in the total gas mass in the gas-regulator reservoir \citep[see][]{Lilly-13, Bothwell-13, Bothwell-16}. As in Section \ref{sec:2.2}, we also look at periodic step functions for the time-variation of SFE. Such changes in SFE may well be more realistic than sinusoidal changes. The bottom panels of Figure \ref{fig:2} show the resulting $M_{\rm gas}(t)$, SFR$(t)$, $M_{\rm Z}(t)$ and $Z(t)$, as well as the correlation between $\Delta\log$SFR (and $\Delta\log M_{\rm gas}$) and $\Delta\log Z$. In generating the plots, we set the period of the step function to 2 Gyr, and vary the upper-state duration ($\tau_{\rm s}$) from $0.1\tau_{\rm dep,eff}$ to $\tau_{\rm dep,eff}$, where $\tau_{\rm dep,eff}$ is calculated based on the SFE in its lower state. A sudden increase of SFE causes a sudden increase of the SFR followed by a decline, together with a subsequent decrease of $M_{\rm gas}$ and a subsequent increase of the gas-phase metallicity. As a whole, it is not immediately clear what the sign of the $\Delta\log$SFR and $\Delta\log$Z correlation will be, given the fact that the lower branch has many more data points (reflecting the longer interval of time) than the upper branch in the bottom-middle panel of Figure \ref{fig:2}. 
Therefore, it appears that the sign of the correlation between SFR and metallicity depends on the detailed form of the input time-varying SFE. There is a strong {\it asymmetry} in the distribution of SFR through the cycle, indicated by the number density of the data points in the bottom-middle panel of Figure \ref{fig:2}. Specifically, SFR$(t)$ stays close to its median value for most of the time, but shows a strong increase for a short period. The asymmetry becomes more significant as the relative duration of the increased phase of SFE is decreased. However, one thing is clear: the states with strongly {\it enhanced} SFR are always {\it metal-enhanced} with respect to the mean metallicity. These phases are represented by the upper locus of points in the figure, which have $Z > \langle Z \rangle$. Consistent with the top-right panels of Figure \ref{fig:2}, $\Delta\log M_{\rm gas}$ and $\Delta\log Z$ always show an overall negative correlation, which will be most clearly seen in the highest-SFR points. Similar to Section \ref{sec:2.2}, we again present the approximate analytical solutions for $\sigma(\log {\rm SFR})/\sigma(\log {\rm SFE})$ and $\sigma(\log Z)/\sigma(\log {\rm SFR})$ when driving the gas-regulator with a sinusoidal SFE. These quantities can be written as: \begin{equation} \label{eq:19} \frac{\sigma({\rm \log SFR})}{\sigma(\log {\rm SFE})} \approx \frac{\xi}{(1+\xi^2)^{1/2}} \end{equation} and \begin{equation} \label{eq:20} \frac{\sigma(\log Z)}{\sigma(\log {\rm SFR})} \approx \frac{1}{(1+\xi^2)^{1/2}} \cdot \frac{1}{1+Z_0/y_{\rm eff}}. 
\end{equation} In a similar way as in Section \ref{sec:2.2}, according to Equation \ref{eq:19}, we can write the relation between the PSDs of $\log$SFR$(t)$ and $\log$SFE$(t)$ as: \begin{equation} \label{eq:30} {\rm PSD}_{\rm \log SFR}(\nu) \approx \frac{1}{1+(2\pi\tau_{\rm dep,eff}\nu)^{-2}} \cdot {\rm PSD}_{\rm \log SFE}(\nu). \end{equation} Equation \ref{eq:30} holds when the variation of SFE is small, i.e. SFE$_{\rm t}\ll $SFE$_{\rm 0}$. The right panel of Figure \ref{fig:3} shows the numerical solution (solid curves) and the approximate analytical solution (gray dashed curves) of $\sigma(\log {\rm SFR})/\sigma(\log {\rm SFE})$ and $\sigma(\log Z)/\sigma(\log {\rm SFR})$ as a function of $\log\xi$. As shown, the numerical solution is well matched by the analytical solution. Intriguingly, Equations \ref{eq:19} and \ref{eq:20} are strictly symmetric with Equations \ref{eq:14} and \ref{eq:15}, respectively, about the axis $\log \xi=0$. When driving the gas-regulator with a time-varying SFE, $\sigma(\log {\rm SFR})/\sigma(\log {\rm SFE})$ is predicted to increase with $\xi$, while $\sigma(\log Z)/\sigma(\log {\rm SFR})$ is predicted to decrease with $\xi$. Although Equations \ref{eq:14}, \ref{eq:15}, \ref{eq:19} and \ref{eq:20} are only approximate solutions obtained in the limit of small variations of inflow rate or SFE, we have verified numerically that they remain reasonable approximations even when the variations of inflow rate or SFE are quite significant. In Appendix \ref{sec:C}, we examine the correlation of $\Delta \log$SFR vs. $\Delta \log Z$ when there is a large variation (0.5 dex) of inflow rate or SFE. We confirm that the model predictions of $\Delta \log$SFR vs. $\Delta \log Z$ in the top-middle panels of Figures \ref{fig:1} and \ref{fig:2} carry over to the case in which the variation of the driving factor is quite significant.
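Equations \ref{eq:19}, \ref{eq:20} and \ref{eq:30} can be transcribed directly; the sketch below (with purely illustrative arguments) also checks that the PSD transfer factor of Equation \ref{eq:30} is the square of Equation \ref{eq:19}, under the identification $\xi = 2\pi\tau_{\rm dep,eff}\nu$ implied by comparing the two equations:

```python
# Approximate SFE-driven response ratios (Eqs. 19, 20) and the
# corresponding PSD transfer factor (Eq. 30); arguments are illustrative.
import math

def sfr_response(xi):
    """sigma(logSFR)/sigma(logSFE) for sinusoidal SFE (Eq. 19)."""
    return xi / math.sqrt(1.0 + xi * xi)

def z_response(xi, Z0_over_yeff=0.0):
    """sigma(logZ)/sigma(logSFR) (Eq. 20)."""
    return 1.0 / math.sqrt(1.0 + xi * xi) / (1.0 + Z0_over_yeff)

def psd_transfer(nu, tau_dep_eff):
    """PSD_logSFR / PSD_logSFE at frequency nu (Eq. 30)."""
    x = 2.0 * math.pi * tau_dep_eff * nu
    return 1.0 / (1.0 + x ** -2)
```

The limits behave as stated in the text: the SFR response tends to unity at large $\xi$ and is suppressed at small $\xi$, while the metallicity response decreases with both $\xi$ and $Z_0/y_{\rm eff}$.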
\subsection{The effects of mass-loading, $Z_{\rm 0}$ and the yield} \label{sec:2.4} \begin{figure*} \begin{center} \epsfig{figure=./fig/model/lyz/plot_zgas_sfr_all_inflow1.eps,clip=true,width=0.95\textwidth} \epsfig{figure=./fig/model/lyz/plot_zgas_sfr_all_inflow2.eps,clip=true,width=0.95\textwidth} \epsfig{figure=./fig/model/lyz/plot_zgas_sfr_all_inflow.eps,clip=true,width=0.95\textwidth} \end{center} \caption{Illustration of the role of $Z_{\rm 0}$ and the mass-loading factor $\lambda$ in shaping the correlation of SFR and $Z$, when driving the gas-regulator system with a sinusoidal inflow rate and constant SFE. From top to bottom, we set $Z_{\rm 0}$ to be 0.0, 0.5$y$ and 0.5$y_{\rm eff}$. The $y_{\rm eff}$ is defined as $y/(1-R+\lambda)$. In each panel, we explore the cases of five different mass-loading factors, i.e. $\lambda=$0.0, 0.5, 1.0, 2.0, and 4.0. The lines are color-coded by $\xi$ as before. In each of the three panels, the black line segments indicate the median gas-phase metallicity for the five different mass-loading factors, which is exactly $Z=y_{\rm eff}+Z_{\rm 0}$. } \label{fig:4} \end{figure*} \begin{figure*} \begin{center} \epsfig{figure=./fig/model/lyz/plot_zgas_sfr_all_sfe1.eps,clip=true,width=0.95\textwidth} \epsfig{figure=./fig/model/lyz/plot_zgas_sfr_all_sfe2.eps,clip=true,width=0.95\textwidth} \epsfig{figure=./fig/model/lyz/plot_zgas_sfr_all_sfe.eps,clip=true,width=0.95\textwidth} \end{center} \caption{The same as Figure \ref{fig:4}, but driving the gas-regulator system with sinusoidal SFE and constant inflow. } \label{fig:5} \end{figure*} In Sections \ref{sec:2.2} and \ref{sec:2.3}, we explored the behavior of the gas-regulator system when it is driven by time-varying inflow and time-varying SFE, respectively. In this subsection, we explore how the metallicities are modified by changes in the wind mass-loading factor and the metallicity of the inflowing gas. The assumed yield enters as a simple scaling factor throughout.
Following Section \ref{sec:2.2}, we drive the gas-regulator with a sinusoidally varying inflow rate for a set of different mass-loading factors and $Z_{\rm 0}$. First, we set $Z_{\rm 0} = 0$, and show $\Delta\log$SFR vs. $\log Z - \log y$ in the top panel of Figure \ref{fig:4} for different mass-loading factors. To eliminate the dependence on the value of $y$, we set the x-axis to be $\log Z - \log y$ rather than $\log Z$. At $Z_{\rm 0}=0$, the relative changes in metallicity and star formation, i.e. the $\Delta\log$SFR-$\Delta\log Z$ relation, stay the same with varying $\lambda$, while the {\it mean} metallicity decreases with increasing $\lambda$. We then set $Z_{\rm 0} = 0.5y$, and obtain the $\Delta\log$SFR vs. $\log Z-\log y$ shown in the middle panel of Figure \ref{fig:4}. As shown, the basic negative correlation between $\Delta\log$SFR and $\Delta\log Z$ is retained, but the relative change in metallicity, i.e. the scatter of metallicity in the population, decreases with increasing $Z_{\rm 0}$ and increasing $\lambda$ (if $Z_{\rm 0}\ne 0$), compared to the top panel of Figure \ref{fig:4}. Finally, we set $Z_{\rm 0}= 0.5y_{\rm eff}$, and the $\Delta\log$SFR vs. $\log Z-\log y$ is shown in the bottom panel of Figure \ref{fig:4}. The correlation between $\Delta\log$SFR and $\Delta\log Z$ is the same when $Z_0/y_{\rm eff}$ is fixed. These results follow Equation \ref{eq:15}, where it is clear that the ratio of $\sigma(\log Z)$ to $\sigma(\log {\rm SFR})$ only depends on $Z_{\rm 0}/y_{\rm eff}$ once $\xi$ is fixed. At a given $\xi$, the scatter of $\log Z$ is scaled by the factor $(1+Z_{\rm 0}/y_{\rm eff})^{-1}$ with respect to $\sigma(\log Z)$ at $Z_{\rm 0}=0$. Another interesting point is that the mean metallicity does depend on $Z_0$, $\lambda$ and $y$. As a whole, the mean gas-phase metallicity increases with increasing $Z_0$ and/or decreasing $\lambda$.
Actually, the mean SFR and metallicity can be solved for analytically based on Equations \ref{eq:6} and \ref{eq:8} (or Equations \ref{eq:17} and \ref{eq:18}), which gives \citep[also see][]{Sanchez-Almeida-14}: \begin{equation} \label{eq:21} \langle {\rm SFR}\rangle = M_{\rm 0}\cdot {\rm SFE} = \Phi_{\rm 0} \cdot (1-R+\lambda)^{-1}, \end{equation} and \begin{equation} \label{eq:22} \langle {Z}\rangle = \frac{M_{\rm Z0}}{M_{\rm 0}} = Z_{\rm 0}+y_{\rm eff}. \end{equation} For completeness, we also look at driving the gas-regulator system with sinusoidal SFE. The results are shown in Figure \ref{fig:5}. As shown, the effects of $Z_{\rm 0}$, $\lambda$ and $y$ on the correlation between $\Delta\log$SFR and $\Delta\log Z$ follow Equation \ref{eq:20}, and their effects on the mean SFR and metallicity follow Equations \ref{eq:21} and \ref{eq:22}. Although here we do not explore different yields, we argue that the effect of varying the yield would also follow Equation \ref{eq:15} or Equation \ref{eq:20}. Based on Equations \ref{eq:21} and \ref{eq:22}, we find that the mean SFR is determined by the mean inflow rate and mass-loading factor regardless of the SFE, simply because the gas reservoir adjusts to maintain long-term balance, while the mean metallicity only depends on $Z_{\rm 0}$ and $y_{\rm eff}$, regardless of how large the inflow or SFE is. We conclude that, within the gas-regulator framework, the metallicity is primarily determined by $Z_{\rm 0}$ and $y_{\rm eff}$, with a secondary dependence on SFR (or cold gas mass) arising from the time-variation of the inflow rate or SFE. It is important to note that the metallicity does not depend on the {\it absolute} values of the inflow rate and SFE, but rather on the changes in them.
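The equilibrium means of Equations \ref{eq:21} and \ref{eq:22} can be checked by directly integrating the regulator equations to a steady state. The following is a minimal sketch with arbitrary illustrative parameters, not the code used for the figures:

```python
# Integrate the regulator to equilibrium and compare with
# <SFR> = Phi0/(1-R+lambda) and <Z> = Z0 + y/(1-R+lambda)  (Eqs. 21, 22).

def equilibrium(phi0=2.0, sfe=0.8, R=0.4, lam=1.0, y=0.02, Z0=0.01,
                t_end=100.0, dt=1e-2):
    lam_eff = 1.0 - R + lam
    Mg, MZ = 1.0, 0.0                  # start far from equilibrium
    t = 0.0
    while t < t_end:
        sfr = sfe * Mg
        Z = MZ / Mg
        Mg += dt * (phi0 - lam_eff * sfr)
        MZ += dt * (Z0 * phi0 + y * sfr - Z * lam_eff * sfr)
        t += dt
    return sfe * Mg, MZ / Mg           # (SFR, Z) after convergence
```

Note that changing the SFE alone leaves both converged values unchanged, as the gas reservoir adjusts to maintain balance.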
Therefore, when investigating the question of whether the SFR is a secondary parameter in determining the metallicity, one should look at the correlation of the relative values of SFR and $Z$ (or residuals like $\Delta\log$SFR and $\Delta\log Z$ in Figures \ref{fig:1} and \ref{fig:2}), rather than the absolute values, in order to eliminate the different $\langle {\rm SFR}\rangle$ and $\langle Z\rangle$ of different galaxies or regions. In Sections \ref{sec:2.2} and \ref{sec:2.3}, we investigated the properties of the gas-regulator by driving the system with 1) time-varying $\Phi$ and time-invariant SFE, and 2) time-varying SFE and time-invariant $\Phi$. The most important result is that there are opposite correlations (negative and positive, respectively) between $\Delta\log$SFR and $\Delta\log Z$ for these two modes. However, in the real universe, the inflow rate and SFE could both be time-varying. If the variation of the inflow rate dominates over the variation of SFE, a negative correlation between $\Delta\log$SFR and $\Delta\log Z$ is expected according to the analysis of Section \ref{sec:2.2}, and vice versa. This implies that the correlation between $\Delta\log$SFR and $\Delta\log Z$ (at least for regions of significantly enhanced star formation) is a powerful and observationally accessible diagnostic of the {\it dominant} mechanism for the variation of SFR, i.e. changes in inflow or changes in SFE, even if both of them may vary with time. Throughout Section \ref{sec:2}, we have used $\Delta\log$SFR to indicate the temporal elevation and suppression of star formation. We note that one could also use the displacement of the sSFR from its {\it smooth cosmic evolution} in logarithmic space, $\Delta \log$sSFR. These are essentially equivalent provided that the timescales of interest (essentially a single period of oscillation) are much shorter than the inverse sSFR, so that the mass changes negligibly through the oscillation cycle.
We have verified that the difference is negligible for sSFR$\sim10^{-10}$yr$^{-1}$, typical of local SF galaxies, when replacing $\Delta \log$SFR with $\Delta \log$sSFR. This holds for the present work, since the galaxies used in the observational part are low-redshift galaxies. As discussed below, there are good reasons to use $\Delta \log$sSFR as the observational measure of the relative star-formation rate rather than $\Delta \log$SFR. In addition, the metallicity we measure from the observations is the Oxygen abundance, defined as the number ratio of Oxygen to Hydrogen. We argue that in Equation \ref{eq:3}, the mass of metals can be replaced by the number of Oxygen atoms, if $y$ and $Z_0$ are defined in terms of the number fraction of Oxygen. Therefore, the model predictions throughout Section \ref{sec:2} in terms of $Z$ are also valid in terms of O/H, including Figures \ref{fig:1}-\ref{fig:5}, as well as the corresponding equations. The models predict the correlation of the temporal changes in SFR and $Z$ within a given period for an individual gas-regulator system. Observationally, it is of course not possible to monitor a single galaxy or region over the timescales of interest. Instead, we have the SFR and metallicity for a large population of galaxies or regions. Therefore, we must make the assumption of {\it ergodicity} in order to compare the models with the observational data. In other words, we must assume that the variations of the SFR and $Z$ that we see across the observed galaxy population (or between different regions within a galaxy) at a single epoch do indeed reflect the typical temporal changes of the SFR and $Z$ within a given system \citep[see more details in Section 3.2 of][]{Wang-20b}.
\section{Data} \label{sec:3} In Section \ref{sec:2}, we established links between the variations of SFR, cold gas mass, and gas-phase metallicity when a gas-regulator system is driven by changes in inflow or SFE, and showed that the sign of the correlation between changes in the easily observable SFR and changes in the metallicity is a key diagnostic for establishing whether changes are driven by variations in inflow or SFE. While these relations have been constructed based on the {\it temporal} changes in a single system, they would of course apply also to an ensemble of such systems observed at a single epoch (assuming the phases are random), provided the assumption of ergodicity applies \citep[see][for a discussion]{Wang-20b}. The goal of the observational part of the paper is therefore to examine the correlation of $\Delta\log$SFR and $\Delta\log Z$, computed relative to suitably chosen fiducial values, at different locations. We wish to examine this correlation both from galaxy to galaxy and between different locations within galaxies, in order to assess the relative importance of changes in inflow and SFE on different physical scales. In this section, we briefly introduce the data used in the observational part of this work, namely the 38 SF galaxies from the MUSE Atlas of Disks \citep[MAD;][]{Erroz-Ferrer-19} survey, and the nearly 1000 well-defined SF galaxies from the Mapping Nearby Galaxies at APO (MaNGA) survey (\citetalias{Wang-19}). We refer the reader to \cite{Erroz-Ferrer-19} and \citetalias{Wang-19} for more details of these two galaxy samples. \subsection{The MAD galaxy sample} \label{sec:3.1} The final released sample\footnote{https://www.mad.astro.ethz.ch} of the MAD survey includes 38 weakly inclined, spiral galaxies, spanning a large range of stellar mass from $10^{8.5}$ to $10^{11.2}\,{\rm M}_\odot$.
These galaxies were observed during the MUSE Guaranteed Time Observing runs on the Very Large Telescope (VLT) between 2015 April and 2017 September. The on-source exposure time for each galaxy was 1 h, and the seeing ranged between 0.4 and 0.9 arcsec. These galaxies are very nearby, with $z<0.013$, leading to an average spatial resolution of $\sim$100 pc or better. The MUSE field of view is 1 arcmin$^2$, and the wavelength coverage of the spectra is from 4650 to 9300 \AA. The coverage for MAD galaxies is in the range of $\sim$0.5-3 effective radii ({$R_{\rm e}$}), with a median value of 1.3{$R_{\rm e}$}. The data were reduced using the MUSE pipeline \citep{Weilbacher-NG}, including bias and dark subtraction, flat fielding, wavelength calibration and so on. The data downloaded from the data release are not the original two-dimensional spectra, but rather measurements derived from the spectra. These include the strengths of the emission lines, such as the flux maps of H$\beta$, [OIII]$\lambda$4959,5007, H$\alpha$, [NII]$\lambda$6548,6583, and [SII]$\lambda$6717,6731. The emission lines are modelled by single Gaussian profiles. The fluxes of the emission lines are corrected for dust attenuation using the Balmer decrement with case B recombination. The intrinsic flux ratio of H$\alpha$ to H$\beta$ is taken to be 2.86, assuming an electron temperature of 10$^4$ K and an electron density of $10^2$ cm$^{-3}$. The adopted attenuation curve is the CCM \citep[][]{Cardelli-89, ODonnell-94} extinction curve with $R_{\rm V}=3.1$. In addition to the maps of emission lines, the released MAD data also include maps of derived quantities, such as the SFR surface density ($\Sigma_{\rm SFR}$) and stellar mass surface density ($\Sigma_*$). The SFR is derived from the H$\alpha$ luminosity \citep{Kennicutt-98, Hao-11}, assuming the \cite{Kroupa-01} IMF. The stellar mass density is derived by fitting stellar population models to the stellar continuum spectra twice.
The first continuum fitting is performed spaxel-by-spaxel, and the second is performed on the stellar Voronoi tessellation (signal-to-noise of 50 on the continuum) using the stellar templates from {\tt MILES} \citep{Sanchez-Blazquez-06} with {\tt pPXF} \citep[][]{Cappellari-04}. The resulting $\Sigma_*$ map is then a spaxel-by-spaxel map, assuming that $\Sigma_*$ is the same for all spaxels within a single Voronoi bin. We note that in the released maps, spaxels that are located within the Seyfert and LINER regions of the BPT diagram \citep[e.g.][]{Baldwin-81, Kewley-01} are masked out. The SFR and gas-phase metallicity cannot be well measured in these masked regions. Their exclusion should not affect our analysis and, therefore, in this work we only focus on the SF and ``composite'' regions. The SF and composite regions are identified by the demarcation lines of \cite{Kauffmann-03} and \cite{Kewley-01} on the BPT diagram \citep[see figure 1 in ][]{Erroz-Ferrer-19}. This also means that for each galaxy, we only use some fraction of the spaxels within the MUSE field. The fraction of valid spaxels varies from galaxy to galaxy, with a median value of 0.33. We have checked that our results do not depend on the fraction of spaxels that are used. The highly-resolved MAD data enable us to investigate the correlation between star formation and metal enhancement down to the scale of giant molecular clouds (GMCs). However, the MAD sample only includes 38 galaxies, which limits the statistical power when examining the galaxy population as a whole. Therefore, we utilize complementary data on the integrated spectra of galaxies from the MaNGA survey. We do not use the individual spaxel data for the MaNGA galaxies because the resolution is so much worse than for MAD.
\subsection{The MaNGA galaxy sample} \label{sec:3.2} MaNGA is the largest IFS survey of nearby galaxies to date, and aims at mapping the 2-dimensional spectra of $\sim$10,000 galaxies with redshifts in the range of $0.01<z<0.15$. Using the two dual-channel BOSS spectrographs at the Sloan Telescope \citep{Gunn-06, Smee-13}, MaNGA covers the wavelength range of 3600-10300 \AA\ at R$\sim$2000. The spatial coverage of individual galaxies is usually larger than 1.5{$R_{\rm e}$}\ with a resolution of 1-2 kpc. The flux calibration, including the flux loss due to atmospheric absorption and instrument response, is accurate to better than 5\% for more than 89\% of MaNGA's wavelength range \citep{Yan-16}. In this work, we use the well-defined sample of SF galaxies from \citetalias{Wang-19}. Here we only briefly describe the sample definition, and refer the reader to \citetalias{Wang-19} for further details. This galaxy sample was originally selected from the SDSS Data Release 14 \citep{Abolfathi-18}, excluding quenched galaxies, mergers, irregulars, and heavily disturbed galaxies. The quenched galaxies are identified and excluded based on the stellar mass versus SFR diagram. For each individual galaxy, the stellar mass and SFR are measured within the effective radius, i.e. $M_*(<R_{\rm e})$ and SFR($<${$R_{\rm e}$}), based on the MaNGA 2-dimensional spectra. The final MaNGA sample includes 976 SF galaxies, and is a good representation of normal SFMS galaxies. Similar to the measurement of SFR for the MAD galaxies, the map of SFR surface density for MaNGA galaxies is also determined from the extinction-corrected H$\alpha$ luminosity \citep{Kennicutt-98}. The correction for dust attenuation follows the same approach as for MAD galaxies, as described in Section \ref{sec:3.1}, using the Balmer decrement and adopting the CCM extinction curve.
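The Balmer-decrement correction applied to both samples can be sketched as follows. The attenuation-curve values $k({\rm H}\alpha)\approx2.53$ and $k({\rm H}\beta)\approx3.61$ used below are approximate CCM ($R_{\rm V}=3.1$) values assumed here for illustration, and the intrinsic ratio of 2.86 corresponds to case B recombination as described above:

```python
# Sketch of the Balmer-decrement dust correction; k-values are
# approximate CCM (R_V = 3.1) assumptions, fluxes are illustrative.
import math

K_HA, K_HB = 2.53, 3.61  # assumed k(lambda) at H-alpha and H-beta

def ebv_from_balmer(f_ha, f_hb, intrinsic=2.86):
    """Colour excess E(B-V) from the observed H-alpha/H-beta flux ratio."""
    return 2.5 / (K_HB - K_HA) * math.log10((f_ha / f_hb) / intrinsic)

def deredden(flux, k_lambda, ebv):
    """Correct an observed flux for attenuation A(lambda) = k(lambda) * E(B-V)."""
    return flux * 10 ** (0.4 * k_lambda * ebv)
```

By construction, dereddening both lines with the derived $E(B-V)$ restores the intrinsic ratio of 2.86.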
The maps of stellar mass surface density for MaNGA galaxies are obtained by fitting the stellar continuum using the public fitting code {\tt STARLIGHT} \citep{Cid-Fernandes-04}, using single stellar populations with {\tt Padova} isochrones from \cite{Bruzual-03}. However, we note that in determining the $\Sigma_{\rm SFR}$ and $\Sigma_*$ for MaNGA galaxies, the \cite{Chabrier-03} IMF is assumed, which is different from the one adopted for MAD galaxies. We argue that the two IMFs are quite close to each other, producing only a small overall shift in SFR and $M_*$ (or $\Sigma_{\rm SFR}$ and $\Sigma_*$) which does not change any of our conclusions in this work. \subsection{The estimation of gas-phase metallicity} \label{sec:3.3} The $T_{\rm e}$-based method is widely understood to represent the ``gold standard'' in determining the gas-phase metallicity \citep[e.g.][]{Skillman-89, Garnett-02, Bresolin-07, Berg-15, Bian-18}. However, it requires the measurement of the weak [OIII]$\lambda$4363 emission line, which is often not detected in the available spectra. Therefore, a set of empirical recipes have been proposed to derive the gas-phase metallicity based only on the strong emission lines \citep[e.g.][]{Kobulnicky-04, Pettini-04, Maiolino-08, Perez-Montero-09, Pilyugin-10, Marino-13, Vogt-15}, such as [OII]$\lambda$3727, H$\beta$, [OIII]$\lambda$5007, H$\alpha$, [NII]$\lambda$6584, and [SII]$\lambda\lambda$6717,6731. However, systematic offsets of 0.2 dex or more are found between these different empirical calibrations, even using the same line measurements \citep{Kewley-08, Blanc-15}. Not only are there systematic offsets, but the ranges of the derived metallicities are also different. This is unfortunately important, since we have argued that the variation of gas-phase metallicity (or the residuals of metallicity) as the SFR varies is an important diagnostic.
The dispersion in metallicities must be considered in the context of the different ranges of the Oxygen abundance measurements obtained with the different methods. Unfortunately, the wavelength coverage of MUSE does not include the [OII]$\lambda$3727 and [OIII]$\lambda$4363 lines, leading to a limited number of usable strong-line prescriptions. In this work, we adopt the empirical relations from \citet[][hereafter \citetalias{Dopita-16}]{Dopita-16} and \citet[][hereafter \citetalias{Pilyugin-16}]{Pilyugin-16}. These two empirical relations are the most recently constructed and have advantages over the previous methods, although ultimately it is not known whether they are more accurate than the other methods. In the following, we briefly introduce these two calibrators. \subsubsection{{\tt N2S2H$\alpha$}} {\tt N2S2H$\alpha$} is a remarkably simple diagnostic proposed by \citetalias{Dopita-16}, which can be written as: \begin{equation} \label{eq:23} {\tt N2S2H\alpha} = \log([{\rm NII}]/[{\rm SII}]) + 0.264\log([{\rm NII}]/{\rm H}\alpha) \end{equation} where [NII] is the flux of [NII]$\lambda$6584 and [SII] is the total flux of [SII]$\lambda\lambda$6717,6731. Then, the empirical relation for the metallicity can be written as: \begin{equation} \label{eq:24} 12 + \log ({\rm O/H}) = 8.77 + {\tt N2S2H\alpha} + 0.45({\tt N2S2H\alpha}+ 0.3)^5. \end{equation} This simple empirical relation is calibrated in the range of $7.7<12+\log({\rm O/H})<9.2$, which is valid for both MAD and MaNGA galaxies. The H$\alpha$, [NII]$\lambda$6584 and [SII]$\lambda\lambda$6717,6731 lines are located close together in wavelength, limiting the spectral range needed, and making the {\tt N2S2H$\alpha$} diagnostic insensitive to reddening. The [NII]/H$\alpha$ term provides a correction for the weak dependence on the ionization parameter and gas pressure.
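Equations \ref{eq:23} and \ref{eq:24} translate directly into code; the input fluxes in the example below are purely illustrative:

```python
# Direct transcription of the Dopita et al. (2016) N2S2Halpha
# calibration (Eqs. 23 and 24); input fluxes are illustrative only.
import math

def n2s2ha(f_nii, f_sii, f_ha):
    """Diagnostic from [NII]6584, total [SII]6717+6731 and H-alpha fluxes."""
    return math.log10(f_nii / f_sii) + 0.264 * math.log10(f_nii / f_ha)

def oh_dop16(f_nii, f_sii, f_ha):
    """12 + log(O/H); the calibration is valid for 7.7 < 12+log(O/H) < 9.2."""
    x = n2s2ha(f_nii, f_sii, f_ha)
    return 8.77 + x + 0.45 * (x + 0.3) ** 5
```

Because only flux ratios enter, a wavelength-independent attenuation factor cancels out, which is the reddening insensitivity noted above.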
\citetalias{Dopita-16} argued that this diagnostic should be a fairly reliable metallicity estimator with only a small residual dependence on other physical parameters, and that it can be used in a wide range of environments. However, this metallicity estimator depends strongly on the relative N/O ratio. In the calibration of {\tt N2S2H$\alpha$}, \citetalias{Dopita-16} assumed a universal correlation between N/O and O/H, which is determined based on a mixture of both stellar and nebular sources \citep{Nicholls-17}. This means that any galaxies or regions that deviate from the adopted N/O-O/H relation would have an extra uncertainty in their metallicity measurement when using {\tt N2S2H$\alpha$} as the metallicity estimator. The metallicity also depends on the relative S/O ratio; however, the [NII]/[SII] term adopted by \citetalias{Dopita-16} in the metallicity indicator {\tt N2S2H$\alpha$} likely accounts for some of the dependence of metallicity on S/O. \subsubsection{{\tt Scal}} \label{sec:3.3.2} The S-calibration ({\tt Scal}) metallicity estimator has been proposed by \citetalias{Pilyugin-16}, based on three standard diagnostic line ratios: \begin{equation} \begin{split} {\tt N2} = \ & [{\rm NII}]\lambda\lambda6548,6584/{\rm H}\beta, \\ {\tt S2} = \ & [{\rm SII}]\lambda\lambda6717,6731/{\rm H}\beta, \\ {\tt R3} = \ & [{\rm OIII}]\lambda\lambda4959,5007/{\rm H}\beta. \end{split} \end{equation} The {\tt Scal} diagnostic is defined separately for the upper and lower branches of $\log {\tt N2}$.
The {\tt Scal} indicator for the upper branch ($\log {\tt N2}\ge -0.6$) can be written as: \begin{equation} \label{eq:26} \begin{split} 12 + \log({\rm O/H}) = \ & 8.424 + 0.030\log({\tt R3/S2}) + 0.751\log {\tt N2} \\ \ & + (-0.349 + 0.182\log({\tt R3/S2}) \\ \ & + 0.508\log {\tt N2})\times \log {\tt S2}, \end{split} \end{equation} and the {\tt Scal} indicator for the lower branch ($\log {\tt N2}<-0.6$) can be written as: \begin{equation} \label{eq:27} \begin{split} 12 + \log({\rm O/H}) = \ & 8.072 + 0.789\log({\tt R3/S2}) + 0.726\log {\tt N2} \\ \ & + (1.069 - 0.170\log({\tt R3/S2}) \\ \ & + 0.022\log {\tt N2})\times \log {\tt S2}. \end{split} \end{equation} The {\tt Scal} prescription is calibrated to metallicity measurements, from the $T_{\rm e}$-based method, of several hundred nearby HII regions. \citetalias{Pilyugin-16} found that the {\tt Scal} indicator gives metallicities in very good agreement with the $T_{\rm e}$-based methods, with a scatter of only $\sim$0.05 dex across the metallicity range of $7.0<12+\log({\rm O/H})<8.8$. Furthermore, the {\tt Scal} indicator takes advantage of three emission-line ratios, which is an improvement over previous strong-line methods. In principle, given the wavelength coverage of MUSE, the {\tt N2} and {\tt O3N2} diagnostics, calibrated by \cite{Marino-13}, are also applicable. As pointed out by \cite{Kreckel-19}, using {\tt O3N2} (or {\tt N2}) versus {\tt Scal} can yield qualitatively different results from the MUSE data. In this work, we prefer to use the metallicity indicators {\tt N2S2H$\alpha$} and {\tt Scal}, rather than {\tt N2} and {\tt O3N2}. Indeed, \cite{Marino-13} calibrated the {\tt O3N2} and {\tt N2} diagnostics to the $T_{\rm e}$-based method, and found that the {\tt O3N2} and {\tt N2} diagnostics carry uncertainties in the Oxygen abundance of 0.18 dex and 0.16 dex, respectively.
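The two branches of the {\tt Scal} calibration (Equations \ref{eq:26} and \ref{eq:27}) can likewise be transcribed directly; the input line ratios below are illustrative:

```python
# Direct transcription of the two-branch Scal calibration
# (Eqs. 26 and 27); all line ratios are relative to H-beta.
import math

def oh_scal(n2, s2, r3):
    """12 + log(O/H) from the N2, S2 and R3 line ratios."""
    ln2, ls2, lrs = math.log10(n2), math.log10(s2), math.log10(r3 / s2)
    if ln2 >= -0.6:                              # upper branch (Eq. 26)
        return (8.424 + 0.030 * lrs + 0.751 * ln2
                + (-0.349 + 0.182 * lrs + 0.508 * ln2) * ls2)
    return (8.072 + 0.789 * lrs + 0.726 * ln2    # lower branch (Eq. 27)
            + (1.069 - 0.170 * lrs + 0.022 * ln2) * ls2)
```

The branch switch at $\log {\tt N2}=-0.6$ follows the definition above.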
Given the similar ranges of the metallicity determined by {\tt Scal}, {\tt O3N2} and {\tt N2} for a given data set, the smaller uncertainty of the {\tt Scal} diagnostic indicates a significant improvement over the previous {\tt O3N2} and {\tt N2} diagnostics. This improvement may come from the facts that 1) {\tt Scal} uses more emission-line ratios, and 2) these break the degeneracy between metallicity and ionization parameter \citep[also see][]{Kreckel-19}. The latter is also true for the {\tt N2S2H$\alpha$} indicator. \subsubsection{The contamination from diffuse ionized gas} The fluxes of the emission lines cannot be fully attributed to star formation activity alone. The diffuse ionized gas (DIG) makes a substantial contribution to the total emission-line flux from disk galaxies \citep{Walterbos-94, Ferguson-96, Greenawalt-98}, especially in regions of low H$\alpha$ surface brightness \citep{Oey-07, Zhang-17}. The line ratios for emission from HII regions and DIG are different, reflecting their different physical origins. The empirical relations for deriving metallicity and SFR from line ratios and line strengths always assume that all of the line emission is due to star formation. This assumption is not unreasonable if the target regions are SF regions on the BPT diagram, while significant contamination ($\sim$40\% or more) from DIG is expected in the ``composite'' and LINER regions \citep{Erroz-Ferrer-19}. Compared with HII regions, the DIG shows enhanced line ratios in [NII]$\lambda$6584/H$\alpha$ and [SII]$\lambda\lambda$6717,6731/H$\alpha$ \citep{Reynolds-85, Hoopes-03, Madsen-06}. The emission from DIG moves the position of SF regions towards the composite or LINER regions on the BPT diagram \citep{Sarzi-06, Yan-12, Gomes-16}. In this work, we therefore only use the SF and composite regions of galaxies, and exclude those spaxels that are classified as Seyfert or LINER. However, the contamination by DIG in the composite regions may still be significant.
\cite{Erroz-Ferrer-19} have identified the regions in MAD galaxies in which star formation or DIG is dominant. Following the method developed in \cite{Blanc-09}, they measured the fraction of flux coming from DIG and from HII regions, and further defined the DIG regions to be those in which the flux contribution of HII regions is less than 60\%. They found that the HII regions show on average $\sim$0.1 dex higher metallicity than the DIG, while the metallicity radial gradient in both components is similar. Following the analysis of \cite{Erroz-Ferrer-19}, in this work we will use all the spaxels in the SF or composite regions for our analysis, regardless of whether they are classified as HII regions or DIG. However, we have recalculated our main result using only the HII regions, and find that the basic result remains unchanged. This indicates that the contamination by DIG is not a major concern in the present work. \section{Observational analysis of MAD galaxies} \label{sec:4} \subsection{Maps and profiles of sSFR and Oxygen abundance for MAD galaxies} \label{sec:4.1} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/example/example_prof_ssfr_i27.eps,clip=true,width=0.3\textwidth} \epsfig{figure=./fig/obs/example/example_prof_zgas_i27_DOP16.eps,clip=true,width=0.3\textwidth} \epsfig{figure=./fig/obs/example/example_prof_zgas_i27_PG16.eps,clip=true,width=0.3\textwidth} \epsfig{figure=./fig/obs/example/example_ssfr_i27.eps,clip=true,width=0.3\textwidth} \epsfig{figure=./fig/obs/example/example_zgas_i27_DOP16.eps,clip=true,width=0.3\textwidth} \epsfig{figure=./fig/obs/example/example_zgas_i27_PG16.eps,clip=true,width=0.3\textwidth} \epsfig{figure=./fig/obs/example/example_dssfr_i27.eps,clip=true,width=0.3\textwidth} \epsfig{figure=./fig/obs/example/example_dzgas_i27_DOP16.eps,clip=true,width=0.3\textwidth} \epsfig{figure=./fig/obs/example/example_dzgas_i27_PG16.eps,clip=true,width=0.3\textwidth} \end{center} \caption{An example of the profiles and 2-d maps
of SFR and log(O/H) for a representative MAD galaxy, NGC 1483. Top three panels: the profiles of sSFR, log(O/H)-\citetalias{Dopita-16} and log(O/H)-\citetalias{Pilyugin-16} for NGC 1483. In each panel, the small dots show individual spaxels from NGC 1483, the blue line is the running median profile, and the red line is a linear fit to the data points. Middle three panels: the 2-d maps of sSFR, log(O/H)-\citetalias{Dopita-16} and log(O/H)-\citetalias{Pilyugin-16} for NGC 1483. In each panel, the white regions within the MUSE field of view correspond to spaxels located in the Seyfert or LINER regions of the BPT diagram, where the SFR and log(O/H) cannot be well determined from the emission lines. Bottom three panels: the maps of $\Delta\log$sSFR, $\Delta$log(O/H)-\citetalias{Dopita-16} and $\Delta$log(O/H)-\citetalias{Pilyugin-16} for this galaxy. The $\Delta\log$sSFR, $\Delta$log(O/H)-\citetalias{Dopita-16} and $\Delta$log(O/H)-\citetalias{Pilyugin-16} of each individual spaxel are defined as the deviations from the red lines in the corresponding top panels. } \label{fig:6} \end{figure*} In Section \ref{sec:2}, we investigated the behavior of the SFR$(t)$, the cold gas mass $M_{\rm gas}(t)$ and the gas-phase metallicity $Z(t)$ of the gas-regulator system in response to variations in the inflow rate $\Phi(t)$ and star-formation efficiency SFE$(t)$, and how this response depends on the (assumed constant) wind mass-loading factor $\lambda$ and the metallicity of the inflowing gas $Z_{\rm 0}$. Specifically, we found a {\it negative} correlation between $\Delta \log$SFR and $\Delta \log Z$ (i.e. $\log ({\rm SFR}/\langle {\rm SFR}\rangle)$ vs. $\log (Z/\langle {Z}\rangle)$) when driving the gas-regulator with a time-varying inflow rate, and a {\it positive} correlation between $\Delta \log$SFR and $\Delta \log Z$ when driving with time-varying SFE$(t)$.
Therefore, one can in principle identify the driving mechanism of star formation activity by looking at the sign of the correlation between SFR and gas-phase metallicity in observational data. However, as pointed out in Section \ref{sec:2.4}, one should look at the correlations of the relative values of SFR and $Z$ (i.e. the residuals $\Delta \log$SFR and $\Delta \log Z$ in Figures \ref{fig:1} and \ref{fig:2}), rather than the absolute values, in order to take out the effects of different $\langle {\rm SFR}\rangle$ and $\langle Z\rangle$ for different galaxies or for different regions within them, e.g. the overall mass-metallicity or mass-sSFR relations, or radial gradients in metallicity or sSFR within galaxies. In this section, we therefore construct radial profiles of sSFR and log(O/H) for the MAD galaxies, and use these to construct localized $\Delta \log$sSFR and $\Delta \log$(O/H) data points from the observations. Here we clarify why the sSFR is better suited than the SFR to characterize variations in star formation within and between galaxies. Ideally, in order to compare observations with the model in Section \ref{sec:2}, we would wish to follow a given galaxy, or a region within a galaxy, as it changes temporally. This is of course not possible. Instead, we must compare different galaxies in the population, or different regions at the same radius within a galaxy, and, invoking ergodicity, assume that these reflect the temporal variations that we are interested in. For each observational point, we therefore need the best estimate of the average (long-term) state of that location. It is an observational fact that, both from galaxy to galaxy and within galaxies, the range of sSFR is smaller than the range of SFR. For this reason, the average sSFR will be better defined than the average SFR.
Related to this, normalizing by the local $\Sigma_*$, even for spaxels at the same galactic radius, also removes the effect of azimuthal variations in the density of gas, to the extent that both gas and stars vary in the same way. We emphasize again that, in the model predictions, $\Delta\log$SFR can be replaced by $\Delta\log$sSFR, and $\Delta\log Z$ by $\Delta\log$(O/H), as mentioned at the end of Section \ref{sec:2}. Figure \ref{fig:6} shows an example of the radial profiles and 2-dimensional maps of sSFR and 12+log(O/H) for one individual MAD galaxy, NGC 1483. This galaxy was not specially selected in any way, and is shown for illustration purposes as being representative of the general sample. The top three panels of Figure \ref{fig:6} show the sSFR$(r)$, and the 12+log(O/H) as estimated by {\tt N2S2H$\alpha$} and {\tt Scal}, as a function of radius for individual spaxels. The log(O/H) based on the {\tt N2S2H$\alpha$} diagnostic is denoted as log(O/H)-\citetalias{Dopita-16}, and the log(O/H) based on the {\tt Scal} approach is denoted as log(O/H)-\citetalias{Pilyugin-16}. The radius used here is the de-projected radius scaled by the effective radius of the galaxy. In computing this de-projected radius we use the disk inclination based on the measured minor-to-major axis ratio and the position angle taken from the S4G photometry \citep{Salo-15}, assuming an infinitely thin disk. In each of the top panels, the blue line is a running median of 201 spaxels. As shown, for NGC 1483 the distribution of the sSFR at a given radius is quite strongly asymmetric over nearly the whole range of galactic radius. While the sSFR of most spaxels is close to the median profile (or slightly below it), a small fraction of spaxels have sSFR that is enhanced by up to an order of magnitude relative to the median profile. This reflects the fact that star formation activity is not uniform across the disk, but happens mostly in spiral arms or other star-forming regions.
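As an illustration of the de-projection just described, the following minimal Python sketch computes the de-projected galactocentric radius of a spaxel from its sky-plane offsets, the position angle of the major axis, and the minor-to-major axis ratio $b/a$, with $\cos i = b/a$ for an infinitely thin disk. The function name, coordinate convention and inputs are our own illustrative assumptions, not taken from the MAD pipeline:

```python
import numpy as np

def deprojected_radius(dx, dy, pa_deg, axis_ratio):
    """De-projected galactocentric radius for an infinitely thin disk.

    dx, dy     : sky-plane offsets of the spaxel from the galaxy centre
    pa_deg     : position angle of the major axis in degrees
                 (measured from the +y axis towards +x, an assumed convention)
    axis_ratio : minor-to-major axis ratio b/a, so that cos(i) = b/a
    """
    pa = np.deg2rad(pa_deg)
    # rotate the coordinates so that the major axis lies along x_maj
    x_maj = dx * np.sin(pa) + dy * np.cos(pa)
    x_min = -dx * np.cos(pa) + dy * np.sin(pa)
    # stretch the minor-axis coordinate by 1/cos(i) to undo the projection
    return np.hypot(x_maj, x_min / axis_ratio)
```

Dividing the result by the effective radius then gives the scaled radius $r/R_{\rm e}$ used in the profiles above.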
The regions with strongly enhanced sSFR can clearly be seen in the sSFR map in the middle-left panel of Figure \ref{fig:6}. In addition, the sSFR profile shows a positive radial gradient, which is consistent with the inside-out growth expected in disk galaxies \citep[e.g.][]{Perez-13, Li-15, Ibarra-Medel-16, Lilly-16, Goddard-17, Rowlands-18, Wang-18a}. The impression of strong asymmetry is reduced in 12+$\log({\rm O/H})$, for both metallicity indicators. In addition, the overall 12+$\log({\rm O/H})$ profiles for both indicators have a negative radial gradient, consistent with previous studies of disk galaxies \citep[e.g.][]{Pilkington-12, Belfiore-17, Erroz-Ferrer-19}. This feature can also be seen in the maps of 12+$\log ({\rm O/H})$, shown in the middle row of Figure \ref{fig:6}. The measurements of $\log ({\rm O/H})$ based on {\tt N2S2H$\alpha$} and {\tt Scal} are qualitatively consistent. However, for a given dataset, $\log ({\rm O/H})$-\citetalias{Dopita-16} is usually larger than $\log ({\rm O/H})$-\citetalias{Pilyugin-16}, and the range of $\log ({\rm O/H})$-\citetalias{Dopita-16} is nearly twice that of $\log ({\rm O/H})$-\citetalias{Pilyugin-16}. We note that this particular galaxy is typical of the sample: most of the SF disk galaxies show similar features in sSFR and $\log ({\rm O/H})$. As pointed out in Section \ref{sec:2.4}, the gas-regulator model predicts that the average SFR will depend only on the average inflow rate $\Phi_{\rm 0}$ and mass-loading factor $\lambda$, while the average $Z$ is determined by the effective yield, defined from the wind mass-loading as $y(1-R+\lambda)^{-1}$, and the metallicity of the inflowing gas $Z_{\rm 0}$. However, $\lambda$ (and possibly also $Z_{\rm 0}$) may well change radially within individual galaxies, because $\lambda$ should be directly related to the depth of the gravitational potential well.
If so, the fitted average $12+\log({\rm O/H})$ profile should therefore reflect the radial dependence of the wind mass-loading and/or $Z_{\rm 0}$ (see further discussion in Section \ref{sec:4.2.2}). We are not interested in this effect if it is caused by time-invariant factors ($\lambda$ and $Z_0$). Rather, we are interested in the spaxel-by-spaxel variations that are superposed on this smooth radial profile, because these are presumably caused by shorter-term temporal variations. Therefore, we focus on the values of individual spaxels relative to these underlying trends, i.e. on the $\Delta \log$sSFR and $\Delta \log$(O/H) residuals obtained when the underlying sSFR$(r)$ and $\log ({\rm O/H})(r)$ profiles are subtracted from each spaxel (compare with the $\Delta \log$SFR vs. $\Delta \log Z$ relations in Figures \ref{fig:1} and \ref{fig:2}). In this way, we are effectively not allowing a variation of $\lambda$ and $Z_0$ between spaxels at a given radius for an individual galaxy, but allowing these quantities to vary radially, and removing the effect of this from our analysis. To achieve this, we first perform a linear fit to the sSFR$(r)$ and 12+$\log ({\rm O/H})(r)$ profiles based on all the individual spaxels. These fits are shown as red lines in the top panels of Figure \ref{fig:6}. As shown, for both sSFR and 12+$\log ({\rm O/H})$, the linear fit is quite a good representation of the median profile, although it is not perfect. We then define the $\Delta \log$sSFR and $\Delta \log ({\rm O/H})$ of each individual spaxel as its deviation from the fitted profile of sSFR$(r)$ or 12+$\log ({\rm O/H})(r)$ respectively. In this way, we eliminate the overall radial dependences of sSFR and $\log ({\rm O/H})$, as well as global differences between galaxies, such as the overall sSFR or effects due to the mass-metallicity relation.
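The residual definition above amounts to a straight-line fit in radius followed by a spaxel-by-spaxel subtraction. A minimal sketch (our own illustration, not the actual MAD reduction code) is:

```python
import numpy as np

def radial_residuals(radius, logq):
    """Residuals of a logarithmic quantity (e.g. log sSFR or 12+log(O/H))
    about a linear fit to its radial profile.

    radius : de-projected radius of each spaxel (e.g. in units of Re)
    logq   : log of the quantity for each spaxel

    Returns (slope, intercept, delta), where delta = logq - fit(radius)
    plays the role of Delta-log sSFR or Delta-log(O/H) in the text.
    """
    slope, intercept = np.polyfit(radius, logq, 1)
    return slope, intercept, logq - (slope * radius + intercept)
```

The same function applied to both log sSFR and 12+log(O/H) yields the pair of residuals whose correlation is studied in the following subsections.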
As shown in Section \ref{sec:2}, in the gas-regulator framework, these changes in $\langle {\rm SFR} \rangle$ or $\langle Z \rangle$ will reflect differences (radially within galaxies or from galaxy to galaxy) in the overall inflow rate, mass-loading factor and $Z_{\rm 0}$. The bottom three panels of Figure \ref{fig:6} show the maps of $\Delta \log$sSFR and $\Delta \log ({\rm O/H})$ that are obtained for NGC 1483 after removing the radial gradients in $\log$sSFR and $\log ({\rm O/H})$. It is immediately apparent that regions with enhanced SFR, indicated by red bumps, nearly always show enhanced metallicity for both of the two metallicity indicators. This is consistent with the previous analyses of \cite{Kreckel-19} and \cite{Erroz-Ferrer-19}, at a similar spatial resolution of $\sim$100 pc. It should be noted that the color scale used for $\Delta \log ({\rm O/H})$-\citetalias{Dopita-16} has twice the range of that used for $\Delta \log ({\rm O/H})$-\citetalias{Pilyugin-16}. Figure \ref{fig:7} shows the fitted linear profiles of sSFR, 12+$\log ({\rm O/H})$-\citetalias{Dopita-16} and 12+$\log ({\rm O/H})$-\citetalias{Pilyugin-16} for all the 38 MAD galaxies. In displaying these profiles, we separate galaxies into three mass bins: $\log (M_\ast/$\msolar)$<10.0$ (blue lines), $10.0<$$\log (M_\ast/$\msolar)$<10.8$ (green lines), and $10.8<$$\log (M_\ast/$\msolar)\ (red lines). As shown, almost all of the MAD galaxies have positive radial gradients in sSFR, with only four exceptions. There is no strong dependence of the sSFR profile on the stellar mass of galaxies. While the profiles of 12+$\log ({\rm O/H})$ have similar slopes, their overall values show a very strong dependence on global stellar mass, reflecting the MZR.
\begin{figure*} \begin{center} \epsfig{figure=./fig/obs/DOP16_radius/global_rsfms.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/obs/DOP16_radius/global_rmzr.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/obs/PG16_radius/global_rmzr.eps,clip=true,width=0.33\textwidth} \end{center} \caption{ The linearly fitted profiles of sSFR (left panel), 12+log(O/H)-\citetalias{Dopita-16} (middle panel), and 12+log(O/H)-\citetalias{Pilyugin-16} (right panel) for all MAD galaxies. In each panel, the MAD galaxies are separated into three color-coded stellar mass bins: $\log (M_\ast/$\msolar)$<10.0$ (blue), $10.0<$$\log (M_\ast/$\msolar)$<10.8$ (green), and $10.8<$$\log (M_\ast/$\msolar)\ (red). } \label{fig:7} \end{figure*} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/DOP16_radius/global_define_delta.eps,clip=true,width=0.95\textwidth} \end{center} \caption{ The sSFR (left panel), 12+log(O/H)-\citetalias{Dopita-16} (middle panel) and 12+log(O/H)-\citetalias{Pilyugin-16} (right panel) at 0.5{$R_{\rm e}$}\ as a function of stellar mass for the MAD galaxies. In each panel, the data points are color-coded by stellar mass, and the black solid line is a linear fit to the data points. } \label{fig:8} \end{figure*} The definition of $\Delta\log$sSFR and $\Delta \log ({\rm O/H})$ for the individual spaxels, based on Figure \ref{fig:7}, enables us to investigate the correlations of $\Delta\log$sSFR vs. $\Delta \log ({\rm O/H})$ for small-scale regions ($\sim$100 pc) within individual MAD galaxies. However, one can well imagine that the physical processes driving small-scale star formation may be very different from those driving star formation on galactic scales, and this motivates defining analogous quantities, $\Delta\log$sSFR and $\Delta \log ({\rm O/H})$, that reflect the {\it global} properties of galaxies within the MAD population.
For this purpose, we choose the fitted values of sSFR and 12+$\log ({\rm O/H})$ at 0.5{$R_{\rm e}$}\ (see the red lines in the top panels of Figure \ref{fig:6}) as representative of the global properties of individual galaxies\footnote{We realize that the sSFR and 12+$\log ({\rm O/H})$ at one specific galactic radius cannot perfectly reflect the global sSFR and gas-phase metallicities, because both sSFR and 12+$\log ({\rm O/H})$ show radial gradients. In Section \ref{sec:5.1}, we will treat the sSFR and 12+$\log ({\rm O/H})$ measured within 1.5{$R_{\rm e}$}\ as representative of the global quantities for MaNGA galaxies. }, because the spatial coverage for MAD galaxies generally extends to at least 0.5{$R_{\rm e}$}. Figure \ref{fig:8} shows the sSFR$_{\rm 0.5Re}$, 12+$\log ({\rm O/H})_{\rm 0.5Re}$-\citetalias{Dopita-16} and 12+$\log ({\rm O/H})_{\rm 0.5Re}$-\citetalias{Pilyugin-16} as a function of the overall stellar mass of the galaxies. The overall stellar mass is obtained by broad-band spectral energy distribution fitting \citep{Erroz-Ferrer-19}. The sSFR$_{\rm 0.5Re}$ decreases slightly with stellar mass, and 12+$\log ({\rm O/H})_{\rm 0.5Re}$ increases significantly with stellar mass, for both metallicity indicators. Both of these trends are of course well-established for SF galaxies in the literature. As pointed out above, in the framework of the gas-regulator model, the dependence of SFR and $Z$ on stellar mass is due to the stellar mass-dependence of the inflow rate, $\lambda$ and $Z_{\rm 0}$ \citep[see Equation \ref{eq:21} and Equation \ref{eq:22} and discussion in][]{Lilly-13}. To eliminate the mass dependence, we perform a linear fit to each of these relations, as shown by the black lines in Figure \ref{fig:8}. In a similar way, for each individual MAD galaxy, we can then define the $\Delta \log$sSFR$_{\rm 0.5Re}$ or $\Delta \log ({\rm O/H})_{\rm 0.5Re}$ to be the deviation of an individual galaxy from the linearly fitted relation.
This is useful to study the driving mechanisms of star formation from galaxy to galaxy within the population. In Appendix \ref{sec:B}, we repeat our basic analysis by defining the $\Delta\log$sSFR and $\Delta \log ({\rm O/H})$ for individual spaxels using average relations with $\Sigma_*$ rather than with galactic radius, and find that the basic results remain the same. \subsection{Correlations between $\Delta\log$sSFR and $\Delta\log$(O/H) on different spatial scales} \label{sec:4.2} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/DOP16_radius/data_dist_all_model.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/obs/DOP16_radius/data_dist_all_bifit.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/obs/DOP16_radius/global_intermedian_DOP16.eps,clip=true,width=0.663\textwidth} \epsfig{figure=./fig/obs/DOP16_radius/global_intermedian_res_DOP16.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/obs/DOP16_radius/global_05Re_DOP16_mad_model.eps,clip=true,width=0.33\textwidth} \end{center} \caption{ (a), The $\Delta\log$sSFR-$\Delta\log$(O/H) diagram for all the usable spaxels of MAD galaxies. The grayscale shows the number density of spaxels in logarithmic space. The black contours show equal number densities of spaxels, decreasing by 0.5 dex from the inside outwards. For comparison, model predictions ($\log \xi$=0.6 and $Z_{\rm 0}=0$) from the top-middle panel of Figure \ref{fig:2} are shown as white ellipses, scaled to match the scatter of $\Delta\log$sSFR as described in the text. (b), Bisector fits for the individual spaxels on the $\Delta\log$sSFR-$\Delta\log$(O/H) diagram. The lines correspond to the individual MAD galaxies, color-coded by overall stellar mass. The length of each line is determined by the range of $\Delta\log$sSFR for that galaxy. (c), The sSFR-metallicity relation of the fitted profiles of sSFR and log(O/H) (shown in Figure \ref{fig:7}) for MAD galaxies, color-coded by stellar mass.
The triangles show the values of sSFR and log(O/H) at 0.5{$R_{\rm e}$}\ for each individual galaxy. The black solid line in this panel shows the sSFR-log(O/H) relation obtained by combining the fitted sSFR$_{\rm 0.5Re}$-$M_*$ and log(O/H)$_{\rm 0.5Re}$-$M_*$ relations determined in Figure \ref{fig:8}. (d), The $\Delta\log$sSFR-$\Delta\log$(O/H) relation of the fitted profiles of sSFR and log(O/H) for individual MAD galaxies. The colored lines in panel (d) are taken from panel (c), but shifted so that the sSFR and log(O/H) at 0.5{$R_{\rm e}$}\ (indicated by the triangles) lie at zero. (e), The $\Delta\log$sSFR$_{\rm 0.5Re}$-$\Delta$log(O/H)$_{\rm 0.5Re}$ diagram for the 38 MAD galaxies, color-coded by stellar mass. In panel (e), the black line is the bisector fit to the data points. The model predictions (for $\log \xi=-0.2$ and $Z_{\rm 0}=0$) from the top-middle panel of Figure \ref{fig:1} are overlaid as black ellipses, scaled to match the scatter of the $\Delta \log$sSFR of MaNGA galaxies (see Figure \ref{fig:12}). In all five panels, the x-axis and y-axis scales are the same, so that readers can directly compare the slopes of the lines across panels. We note that, in all the panels, the gas-phase metallicity is log(O/H)-\citetalias{Dopita-16}.
} \label{fig:9} \end{figure*} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/PG16_radius/data_dist_all_model.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/obs/PG16_radius/data_dist_all_bifit.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/obs/PG16_radius/global_intermedian_PG16.eps,clip=true,width=0.663\textwidth} \epsfig{figure=./fig/obs/PG16_radius/global_intermedian_res_PG16.eps,clip=true,width=0.33\textwidth} \epsfig{figure=./fig/obs/PG16_radius/global_05Re_PG16_mad_model.eps,clip=true,width=0.33\textwidth} \end{center} \caption{The same as Figure \ref{fig:9}, but using the gas-phase metallicity based on the empirical formula from \citetalias{Pilyugin-16}. } \label{fig:10} \end{figure*} To compare with our theoretical expectations from the gas-regulator model, we have defined in the previous section (Section \ref{sec:4.1}) $\Delta\log$sSFR and $\Delta\log$(O/H) both on the 100-pc scale of individual spaxels in MAD galaxies, and on the much larger galactic scale of MAD galaxies as a whole (defined at 0.5{$R_{\rm e}$}). In this section, we will explore the correlations between $\Delta\log$sSFR and $\Delta\log$(O/H) on these scales, and interpret the results in terms of the predictions of the gas-regulator model. \subsubsection{100-pc scales} \label{sec:4.2.1} We first look at the $\Delta\log$sSFR-$\Delta\log$(O/H) relation on scales of $\sim$100 pc. Panel (a) of Figure \ref{fig:9} shows the distribution of all valid MAD spaxels on the $\Delta\log$sSFR vs. $\Delta\log$(O/H)-\citetalias{Dopita-16} diagram. The grayscale shows the number density of spaxels in logarithmic space. In constructing panel (a), we give each MAD galaxy the same weight.
In other words, for a given MAD galaxy, we weight the individual spaxels of that galaxy by $N_{\rm spaxel}^{-1}$, where $N_{\rm spaxel}$ is the total number of valid spaxels for that particular galaxy. This ensures that the sum of the weights of all the spaxels in each galaxy is the same, and therefore that each galaxy is equally represented in the figure. As a whole, we find a significant positive correlation between $\Delta\log$sSFR and $\Delta\log$(O/H)-\citetalias{Dopita-16} for all the individual spaxels in the 38 MAD galaxies. Furthermore, we also investigate the correlation of $\Delta\log$sSFR vs. $\Delta\log$(O/H) for each individual MAD galaxy. Panel (b) of Figure \ref{fig:9} shows bisector fits of the $\Delta\log$sSFR vs. $\Delta\log$(O/H)-\citetalias{Dopita-16} relation of the individual spaxels for each of the 38 MAD galaxies. Here we adopt bisector fitting \citep{Isobe-90}, because there is no reason for us to prefer regression of $\Delta\log$(O/H) on $\Delta\log$sSFR over regression of $\Delta\log$sSFR on $\Delta\log$(O/H). As can be seen, consistent with the result of panel (a), $\Delta\log$sSFR and $\Delta\log$(O/H)-\citetalias{Dopita-16} show a positive correlation for 37 of the 38 MAD galaxies. The single exception is NGC 4030, the most massive galaxy in the MAD sample. Inspection of the color-coding suggests that this result does not depend on the mass of the galaxy. The same analysis is repeated for log(O/H)-\citetalias{Pilyugin-16} in panels (a) and (b) of Figure \ref{fig:10}. Overall, we find that $\Delta\log$sSFR and $\Delta$log(O/H)-\citetalias{Pilyugin-16} still show a positive correlation, although not as significant as in panel (a) of Figure \ref{fig:9}. Consistent with this, panel (b) shows that 32 of the MAD galaxies show positive correlations of $\Delta\log$sSFR vs. $\Delta\log$(O/H)-\citetalias{Pilyugin-16}, while 6 MAD galaxies now show negative correlations.
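The ordinary-least-squares bisector treats the two variables symmetrically: it bisects the regression of $y$ on $x$ and the regression of $x$ on $y$. A compact sketch of the slope formula of \cite{Isobe-90} follows (our own minimal implementation, omitting the uncertainty estimates given in that paper):

```python
import numpy as np

def bisector_fit(x, y):
    """OLS-bisector fit (Isobe et al. 1990), treating x and y symmetrically.

    Returns (slope, intercept); the line passes through the centroid.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    cov = np.mean((x - xm) * (y - ym))
    b1 = cov / np.var(x)           # slope of y regressed on x
    b2 = np.var(y) / cov           # inverse slope of x regressed on y
    # bisector of the two regression lines (Isobe et al. 1990)
    slope = (b1 * b2 - 1.0
             + np.sqrt((1.0 + b1 ** 2) * (1.0 + b2 ** 2))) / (b1 + b2)
    return slope, ym - slope * xm
```

For data with no scatter the two regressions coincide and the bisector recovers the underlying line exactly.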
We note that when using {\tt O3N2}, the result is qualitatively different from that obtained with {\tt Scal} (or {\tt N2S2H$\alpha$}) for galaxies with stellar mass below $\sim10^{10.5}$${\rm M}_\odot$. This is likely due to the fact that the {\tt O3N2} indicator has a larger uncertainty ($\sim$0.18 dex) than {\tt Scal} ($\sim$0.05 dex) in determining the metallicity (see more discussion in Sections \ref{sec:3.3.2} and \ref{sec:6.4}). For comparison purposes, we also show the model predictions of $\Delta \log$SFR vs. $\Delta \log Z$ in panels (a) of Figures \ref{fig:9} and \ref{fig:10}. The two white, roughly elliptical shapes are based on the model predictions shown in the top-middle panel of Figure \ref{fig:2}. As discussed in Section \ref{sec:2}, the correlation of $\Delta \log Z$ vs. $\Delta \log$SFR depends on both $\xi$ and $Z_0/y_{\rm eff}$. For simplicity, we only consider here the model predictions for $Z_0=0$. We then choose the $\log \xi$ (i.e. the ellipse in Figure \ref{fig:2}) that has the same ratio of the dispersion in $\Delta \log Z$ (x-axis) and $\Delta \log$SFR (y-axis) as the observed data points in panels (a), and then scale this model ellipse so as to match the 1$\sigma$ and 2$\sigma$ dispersions of the observed quantities (see further discussion in Section \ref{sec:4.3}). Given the simplicity of the models in Section \ref{sec:2}, and especially given the different ranges of metallicity returned by the two metallicity indicators, we do not believe that a quantitative fitting would be meaningful. Nominally, we get $\log \xi=0.6$ for log(O/H)-\citetalias{Dopita-16}, and $\log \xi=0.96$ for log(O/H)-\citetalias{Pilyugin-16}. Within the limitations of the model and the metallicity estimators, this qualitative matching suggests that the gas-regulator model with time-varying SFE is in good agreement with the data, in predicting a positive correlation between $\Delta \log$sSFR and $\Delta \log$(O/H) on 100 pc scales.
We emphasize that given the highly idealised nature of the model, any precise comparison of the model with the data would not be very useful. Rather, the model was intended to establish two things: the {\it sign} of the correlation between changes in the SFR and Z (positive or negative), and the approximate {\it relative} variation of these two quantities. Throughout the paper, the comparisons between the model and the data should be treated in this spirit. \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/DOP16_radius/coef_dist_all.eps,clip=true,width=0.4\textwidth} \epsfig{figure=./fig/obs/PG16_radius/coef_dist_all.eps,clip=true,width=0.4\textwidth} \end{center} \caption{ Left panel: The distribution of the Pearson correlation coefficient for the $\Delta\log$sSFR-$\Delta\log$(O/H)-\citetalias{Dopita-16} relation of the individual spaxels for 38 MAD galaxies. We measure two kinds of Pearson correlation coefficient: one is without weighting the spaxels (black histogram), and the other is after weighting the spaxels by their SFR (red histogram). Right panel: The same as the left panel, but using $Z_{\rm gas}$-\citetalias{Pilyugin-16}. } \label{fig:11} \end{figure*} To try to quantify the significance of this positive correlation, we calculate the distribution of Pearson correlation coefficients for the 38 MAD galaxies in Figure \ref{fig:11}. For each individual MAD galaxy, the coefficient is calculated in two ways, both by weighting the individual spaxels equally and by weighting them by the SFR of the spaxels. There are two reasons for doing the SFR-weighting. First it ensures that the inner parts of galaxies, which have fewer but brighter pixels, are not swamped by the outer parts. 
Secondly, it is clear in Figure \ref{fig:2} that while the sinusoidal variation in SFE produces symmetric changes in SFR and metallicity, the (possibly more realistic) step-function changes produce asymmetric changes, in which the positive correlation is most clearly established by the regions of highest SFR. As shown, for both approaches of computing the correlation coefficient, and for both metallicity indicators, we find that most of the coefficients are positive, with only a few less than zero. This is consistent with the results in panels (a) and (b) of Figures \ref{fig:9} and \ref{fig:10}. The correlation of $\Delta \log$sSFR vs. $\Delta \log$(O/H) becomes more significant when weighting the spaxels by their SFR. This is due to the fact that regions with strongly enhanced star formation always show enhanced gas-phase metallicity (see also panel (a) of Figures \ref{fig:9} and \ref{fig:10}). Both the positive correlation of $\Delta \log$sSFR vs. $\Delta \log$(O/H) and the increased significance of the correlation for regions of enhanced star formation are also visible to the eye in Figure \ref{fig:6}. This may reflect the point made in the context of Figure \ref{fig:2} in Section \ref{sec:2.3}, that the positive correlations caused by step-function changes to the SFE are most clearly seen when the SFR is highest. We have also examined the significance of the correlation based on the p-value\footnote{https://en.wikipedia.org/wiki/P-value}, which is the probability of obtaining the observed result under the assumption of no correlation. A smaller p-value means that the correlation is more significant. For both metallicity indicators, we find the p-values are very close to zero ($p<0.001$) for all except one or two (depending on the metallicity indicator) of the 38 MAD galaxies.
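The SFR-weighted coefficient used above can be computed by replacing the means and variances in the standard Pearson formula with their weighted counterparts; a minimal sketch (our own illustration):

```python
import numpy as np

def weighted_pearson(x, y, w=None):
    """Pearson correlation coefficient with optional weights
    (e.g. the per-spaxel SFR); w=None reduces to the usual coefficient."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    w = np.ones_like(x) if w is None else np.asarray(w, float)
    xm = np.average(x, weights=w)
    ym = np.average(y, weights=w)
    cov = np.average((x - xm) * (y - ym), weights=w)
    vx = np.average((x - xm) ** 2, weights=w)
    vy = np.average((y - ym) ** 2, weights=w)
    return cov / np.sqrt(vx * vy)
```

With integer weights this is exactly equivalent to duplicating the corresponding data points, which makes the SFR-weighting intuitive: high-SFR spaxels simply count for more.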
This test shows that the positive correlation between $\Delta \log$sSFR and $\Delta \log$(O/H) is present within individual galaxies as well as in the ensemble analysis. In the gas-regulator framework, we showed that $\Delta\log$SFR (or $\Delta\log$sSFR) and $\Delta\log Z$ will be positively correlated when the gas-regulator system is driven by a time-varying SFE (see Figure \ref{fig:2}). Comparing the model predictions with panel (a) of Figure \ref{fig:9} or Figure \ref{fig:10}, we can conclude that on $\sim$100 pc scales within galaxies, the variation of SFR (and $Z$) is due to a time-varying SFE experienced by a particular packet of gas. This conclusion rests on the assumption that different {\it spatial} regions of galaxies, around an annulus of given galactic radius, constitute different temporal phases of a gas packet, as modelled in Figure \ref{fig:2}. This assumption is not unreasonable, and is supported by other strong observational evidence in favour of a time-varying SFE on $\sim$100 pc scales. \cite{Leroy-13} and \cite{Kreckel-18} have found that the dispersion of the SFE based on molecular gas measurements increases significantly towards smaller scales \citep[see also][]{Kruijssen-14}. Specifically, the scatter of the SFE is $\sim$0.4 dex at $\sim100$ pc \citep{Kreckel-18}. Consistent with this, \cite{Kruijssen-19} and \cite{Chevance-20} have shown that molecular gas mass and star formation rates are spatially de-correlated on the scales of individual GMCs in nearby disk galaxies, contrary to the tight correlation seen on kpc and galactic scales \citep[e.g.][]{Shi-11, Bigiel-08}. This de-correlation implies rapid temporal cycling between GMCs and star formation, likely involving feedback processes.
By using this de-correlation, \cite{Chevance-20} constrained the properties of GMCs and found that GMC lifetimes are typically 10-30 Myr, consisting of a long inert phase without massive star formation and a short phase (of less than 5 Myr) with a burst of star formation. These observational results indicate a strong spatial variation of SFE on $\sim$100 pc scales, and hence suggest a strong temporal variation of SFE for a given packet of gas. In Section \ref{sec:2.3}, we constructed two models of time-varying SFE$(t)$, in the form of a sinusoidal function and of a periodic step-function. According to the above discussion, the SFE$(t)$ on $\sim$100 pc scales may be better characterized by a step-function with a short duration of the top-phase and a long duration of the bottom-phase, rather than by the sinusoidal model. Consistent with this, we find the model prediction with the step-function is in good qualitative agreement with the observational results. Specifically, the distribution of $\Delta\log$sSFR is strongly asymmetric, with a very small fraction of spaxels showing strongly enhanced SFR, and with these regions also showing strongly enhanced metallicity, as found in panel (a) of both Figures \ref{fig:9} and \ref{fig:10}. These features can also be found in the model prediction shown in the bottom-middle panel of Figure \ref{fig:2}. It goes without saying that the models explored in Section \ref{sec:2} are simple heuristic models, which cannot be expected to explain all the details of the situation. Not least, timescales of star formation of only 10-30 Myr \citep{Chevance-20} are comparable to the timescales for chemical enrichment, an effect neglected by our use of the instantaneous recycling approximation in Section \ref{sec:2}. Nevertheless, we can conclude qualitatively that the variation of SFR and gas-phase metallicity on 100 pc scales is primarily due to a time-varying star formation efficiency experienced by the gas.
\subsubsection{Sub-galactic scales} \label{sec:4.2.2} Panels (c) of Figures \ref{fig:9} and \ref{fig:10} show the correlation between the average sSFR and the average 12+log(O/H) at a given galactic radius, for all 38 MAD galaxies, color-coded by their stellar mass. These are obtained by combining the linear fits of sSFR$(r)$ and 12+log(O/H)$(r)$ (see Figure \ref{fig:7} in Section \ref{sec:4.1}) to eliminate $r$ and thereby produce a linear sSFR-metallicity relation for each galaxy. The triangular points on each line show the values of sSFR and 12+log(O/H) at a fiducial 0.5{$R_{\rm e}$}, chosen to be representative of the global quantities for the galaxies. Since the metallicity profiles of MAD galaxies always show negative radial gradients, the central regions of the galaxies correspond to the high-log(O/H) end of each line segment, which generally also has lower sSFR. These individual lines in Panels (c) therefore represent the radial variations of sSFR and 12+log(O/H) {\it within} a given galaxy, i.e. on {\it sub-galactic} scales. All azimuthal variations are eliminated, and the radial variations are greatly smoothed out by the linear fits to sSFR and metallicity. Shifting these lines to align the triangles would therefore produce a residual plot that would in principle be directly analogous to that in Panels (b). This is done in Panels (d). Comparing the lines in panels (d) with those in panels (b) of Figures \ref{fig:9} and \ref{fig:10}, it is clear that the correlation of $\Delta\log$sSFR and $\Delta\log$(O/H) on these larger ``sub-galactic'' scales is the {\it opposite} of that seen on 100-pc scales. Almost all the individual galaxies show positive correlations on 100-pc scales in Panels (b), but most, especially at low stellar masses, show a negative correlation in Panels (d).
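Eliminating radius between the two linear fits, as done for Panels (c), is simple algebra: if $\log{\rm sSFR} = a_1 + b_1 r$ and $12+\log({\rm O/H}) = a_2 + b_2 r$, then $\log{\rm sSFR} = (a_1 - a_2 b_1/b_2) + (b_1/b_2)\,[12+\log({\rm O/H})]$. A one-function sketch (our own illustration; it requires a non-zero metallicity gradient $b_2$, which holds here since all MAD metallicity profiles have negative gradients):

```python
def eliminate_radius(a1, b1, a2, b2):
    """Combine log sSFR = a1 + b1*r and 12+log(O/H) = a2 + b2*r into
    log sSFR = alpha + beta * (12+log(O/H)) by eliminating r.
    Requires b2 != 0."""
    beta = b1 / b2
    alpha = a1 - beta * a2
    return alpha, beta
```

With a positive sSFR gradient ($b_1>0$) and a negative metallicity gradient ($b_2<0$), the combined slope $\beta$ is negative, which is the radial anti-correlation seen in Panels (c) and (d).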
It should be noted that the trend with the stellar mass of the galaxy is much clearer than in Panels (b). The (anti-)correlation of $\Delta\log$sSFR and $\Delta\log$(O/H) radially across the galaxy is less trivially interpreted than the positive correlation on 100-pc scales in Section \ref{sec:4.2.1} or, as we will argue, on larger galactic scales in Section \ref{sec:4.2.3} below. An important difference between these radial ``sub-galactic'' variations and those on both smaller (Section \ref{sec:4.2.1}) and larger (Section \ref{sec:4.2.3} below) scales concerns the potential effects of the wind mass-loading term $\lambda$ and/or possibly the metallicity of the inflowing gas, $Z_{\rm 0}$. On 100-pc scales, we normalised each spaxel by the average properties of all the spaxels at the {\it same} galactic radius in the {\it same} galaxy. We might expect $\lambda$ and $Z_{\rm 0}$ to be the same for all these spaxels if they are determined by the location within the galactic potential well. Likewise, on the larger scales when we consider the integrated properties of galaxies, we will normalise by the average properties of galaxies with the same integrated stellar mass, which again we may argue are likely to have similar overall values of $\lambda$ and $Z_{\rm 0}$. But in the current subsection, in which we are looking at radial variations of sSFR and log(O/H) {\it within} galaxies, it is very likely that there will be a positive radial gradient in $\lambda(r)$ (and possibly a negative radial gradient in $Z_{\rm 0}$). From Equation \ref{eq:22}, in which the average $Z$ is determined by $Z_{\rm 0}+y_{\rm eff}$, it is easy to produce a negative gradient in $Z$ with a positive gradient in $\lambda$ and/or a negative gradient in $Z_{\rm 0}$. This explanation of the negative gradient of metallicity with radius does not take into account possible radial flows within the galaxy.
Hydrodynamical simulations, including EAGLE \citep{Schaye-15} and IllustrisTNG \citep{Nelson-18}, show that inflow is more substantial along the galaxy major axis, while outflow is strongest along the minor axis \citep{Peroux-20}. In parallel work (E. Wang et al., in preparation), we find that the negative metallicity gradient can be naturally produced by assuming that a radial gas inflow is dominant within the disk. We will return to the radial profiles of (s)SFR and log(O/H) below in Section \ref{sec:5.2}, where we analyze the MaNGA sample, which is not only much larger but also extends over a much larger range of radii, albeit with much poorer spatial resolution. \subsubsection{Galactic scales} \label{sec:4.2.3} Finally, Panels (e) of Figures \ref{fig:9} and \ref{fig:10} show the correlations between the residuals of the overall sSFR and metallicity, i.e. $\Delta \log$sSFR$_{\rm 0.5Re}$ and $\Delta \log ({\rm O/H})_{\rm 0.5Re}$, for each MAD galaxy, once the overall trends of sSFR$_{\rm 0.5Re}$ and $\log({\rm O/H})_{\rm 0.5Re}$ with galactic stellar mass (shown in Figure \ref{fig:8}) are taken out. Each triangle represents an individual MAD galaxy, color-coded by its stellar mass. Panels (e) of the two figures therefore show whether a given MAD galaxy is {\it overall} elevated or depressed in sSFR and log(O/H) relative to other galaxies of the same mass. They are therefore completely analogous (albeit with a vastly different number of points) to Panels (a), which showed whether individual spaxels within a MAD galaxy were elevated or depressed in these two quantities relative to the other spaxels at the same radial location within the same galaxy. As argued in the previous subsection, the effects of any systematic variations of wind mass-loading and metallicity of the inflowing gas with stellar mass should not be present in this diagram. The black solid lines in Panels (e) show a bisector fit to the data points.
For both metallicity indicators, a negative correlation between $\Delta\log$sSFR$_{\rm 0.5Re}$ and $\Delta \log({\rm O/H})_{\rm 0.5Re}$ on galactic scales can clearly be seen. The Pearson correlation coefficients are $-$0.23 and $-$0.17 for the {\tt N2S2H$\alpha$} and {\tt Scal} indicators, respectively. This negative correlation and fit is in clear contrast to the positive correlation and fits on 100-pc scales that are shown in the (directly analogous) Panels (a) and (b). Given the limited number of MAD galaxies that are available, one might be concerned about the statistical significance of the negative correlations in the Panels (e), and hence about the reality of the reversal of sign relative to the Panels (a). We therefore examined the p-values of the correlations in both Panels (e), which are 0.171 and 0.317 for $\log({\rm O/H})$-\citetalias{Dopita-16} and $\log({\rm O/H})$-\citetalias{Pilyugin-16}, respectively. Also, in the next section of the paper, we perform a similar analysis on the much larger sample of 976 MaNGA galaxies, and find completely consistent results in that much larger sample. For comparison, we also present the model predictions from Section \ref{sec:2.2}, in which the gas-regulator is driven with a sinusoidal inflow rate. Similar to the Panels (a), we scaled the model prediction with an overall factor so as to match the scatter of $\Delta \log$sSFR for MaNGA galaxies, since the large number of MaNGA galaxies enables a reliable estimate of the scatter (see Figure \ref{fig:12}). We have $\log \xi=-0.2$ for log(O/H)-\citetalias{Dopita-16}, and $\log \xi=-0.6$ for log(O/H)-\citetalias{Pilyugin-16} at $Z_0=0$. As can be seen in the Panels (e) of Figures \ref{fig:9} and \ref{fig:10}, the model of time-varying inflow rate is in good agreement with the observed diagram of $\Delta \log$sSFR vs. $\Delta \log$(O/H) on galactic scales (again in the sense of reproducing the sign of the correlation and the relative amplitudes of the variations of these two quantities).
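As a sketch of where these quoted statistics come from: the Pearson coefficient $r$, and the $t$-statistic from which the two-sided p-value follows via the Student-$t$ distribution with $n-2$ degrees of freedom (in practice one would call {\tt scipy.stats.pearsonr}; this standalone version is for illustration only):

```python
# Pearson correlation coefficient and its t-statistic, written out in
# pure Python for illustration.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def t_statistic(r, n):
    # The two-sided p-value is the tail probability of |t| for a
    # Student-t distribution with n - 2 degrees of freedom.
    return r * ((n - 2) / (1.0 - r * r)) ** 0.5
```

With $n=38$ galaxies and $r=-0.23$, $|t|\approx 1.4$, corresponding to a p-value of order 0.17, consistent with the value quoted above.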
We conclude that the inverse correlation between variations in the {\it overall} sSFR and $\log({\rm O/H})$ across the galaxy population at a given mass is due to temporal variations in the inflow rate onto the galaxies. This is in marked contrast to the situation on 100-pc scales, where we argued that the positive correlation between these quantities was the clear signature of temporal variations in the SFE as a given packet of gas enters and leaves regions of high SFE. Finally, it should be noted in passing that the observed negative correlation between the overall $\Delta\log$sSFR and $\Delta\log$(O/H) in the Panels (e) is a straightforward manifestation of the existence of SFR as a second parameter in the mass-metallicity relation \citep[e.g.][]{Mannucci-10, Salim-14, Cresci-19, Curti-20}. It has further been claimed that the $Z(M_*,{\rm SFR})$ relation is epoch-independent, i.e. that there is a so-called FMR \citep[e.g.][]{Richard-11, Nakajima-12, Huang-19}. One of the successes of the gas-regulator model presented by \cite{Lilly-13} was to provide a natural analytic explanation for the presence of SFR as a second parameter, and even to predict that the $Z(M_*,{\rm SFR})$ relation could well be more or less epoch-independent. The \cite{Lilly-13} analysis considered in the first instance a constant specific inflow rate. But a steady specific inflow implicitly produces an {\it increase} in the inflow rate. If the specific inflow changes in such a way that the inflow rate is constant, then the sensitivity to the SFR vanishes (see \cite{Lilly-13} for discussion, and also the Appendix of \citet{Onodera-16}). This emphasizes both that the anti-correlation in the Panels (e) is not a new or controversial observational result, and also, in a very general sense, that this negative correlation between overall metallicity and star-formation rate on galactic scales is fundamentally driven by {\it changes} in the inflow rate, as discussed in this paper.
\subsection{Quantitative interpretation of the dispersion of gas-phase metallicity and sSFR on 100-pc scales in MAD galaxies} \label{sec:4.3} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/DOP16_radius/scatter_mstar_all.eps,clip=true,width=0.45\textwidth} \epsfig{figure=./fig/obs/PG16_radius/scatter_mstar_all.eps,clip=true,width=0.45\textwidth} \end{center} \caption{ Left panel: The ratio of $\sigma$($\Delta\log$(O/H)) to $\sigma$($\Delta\log$sSFR) as a function of stellar mass for the 38 MAD galaxies. For each MAD galaxy, $\sigma$($\Delta\log$(O/H)) (or $\sigma$($\Delta\log$sSFR)) is the standard deviation of $\Delta\log$(O/H) (or $\Delta\log$sSFR) for all the individual spaxels in this galaxy. The data points are color-coded by stellar mass. Right panel: The same as the left panel, but using log(O/H)-\citetalias{Pilyugin-16}. } \label{fig:16} \end{figure*} We showed in Section \ref{sec:2} that the relative strength of variations in the SFR and metallicity of the gas-regulator should depend on the response timescale of the system, set by the gas depletion timescale, relative to the timescales of any driving variations in the inflow or in the SFE. Equation \ref{eq:20} describes the relative amplitude of these variations, characterized by $\sigma(\Delta\log({\rm O/H}))$/$\sigma(\Delta\log {\rm SFR})$ across the population, as a function of $\xi$ when driving the gas-regulator system with a sinusoidal SFE$(t)$. According to Equation \ref{eq:20}, the $\sigma(\Delta\log({\rm O/H}))$/$\sigma(\Delta\log {\rm SFR})$ should decrease with increasing $\xi$, i.e. decrease with increasing effective gas depletion time, if we ignore the possible variation of the driving period of SFE$(t)$ (see Figure \ref{fig:5}).
We have therefore calculated the dispersions across the spaxels of the residual quantities $\Delta \log({\rm O/H})$ and $\Delta \log {\rm sSFR}$ that were plotted in the two Panels (a) in Figures \ref{fig:9} and \ref{fig:10} for the two metallicity indicators. We calculate this dispersion for each MAD galaxy separately. Figure \ref{fig:16} shows the resulting ratios of $\sigma(\Delta\log({\rm O/H}))$ to $\sigma(\Delta\log {\rm sSFR})$ for each of the 38 MAD galaxies for the two metallicity indicators. It is obvious that the $\sigma(\Delta\log({\rm O/H}))$ based on {\tt N2S2H$\alpha$} is overall greater than that based on {\tt Scal}. This is not a noise issue, but is due to the fact that the range of log(O/H)-\citetalias{Dopita-16} is nearly twice the range of log(O/H)-\citetalias{Pilyugin-16} for a given dataset, as mentioned earlier in this paper. This systematic uncertainty hinders the quantitative interpretation of these dispersions, although trends established within a single estimator (i.e. within a single panel of Figure \ref{fig:16}) should have some validity. As can be seen, the $\sigma(\Delta\log({\rm O/H}))$/$\sigma(\Delta\log {\rm sSFR})$ based on {\tt N2S2H$\alpha$} is in the range of 0.12 to 0.42 with a median value of 0.24. The $\sigma(\Delta\log({\rm O/H}))$/$\sigma(\Delta\log {\rm sSFR})$ based on {\tt Scal} is about half this value, mostly in the range of 0.06 to 0.17 with a median value of 0.11. We do not find a significant dependence of $\sigma(\Delta\log({\rm O/H}))$/$\sigma(\Delta\log {\rm sSFR})$ on stellar mass, except possibly for an increase at the lowest stellar masses below $10^9~{\rm M}_\odot$. The MAD galaxy with the largest $\sigma(\Delta\log({\rm O/H}))$/$\sigma(\Delta\log {\rm sSFR})$ is ESO499-G37.
A small fraction of spaxels in ESO499-G37 have $\log${\tt N2}$<-0.6$, resulting in very low metallicity in these regions (see Equation \ref{eq:27}), and producing a large dispersion in $\Delta\log({\rm O/H})$-\citetalias{Pilyugin-16}. Although we suspect that a sinusoidally time-varying SFE is unlikely to be literally realistic on $\sim$100 pc scales, Equation \ref{eq:20} (see also Figure \ref{fig:3}) does permit a rough order-of-magnitude estimate of whether these relative dispersions are reasonable, and of the approximate timescales involved. If we examine what values of $\xi$ in Equation \ref{eq:20} produce the observed $\sigma(\Delta\log({\rm O/H}))$/$\sigma(\Delta\log {\rm sSFR})$ for typical MAD galaxies, i.e. 0.24 and 0.12 for the two metallicity estimators, we find $\log \xi= 0.607$ for {\tt N2S2H$\alpha$}, and $\log \xi= 0.956$ for {\tt Scal}, at $Z_{\rm 0}=0$. If we take 1 Gyr as a reasonable estimate for the effective gas depletion timescale for the overall galaxy population (see \citetalias{Wang-19}), we get rough estimates of $T_{\rm p}=$ 1.5 Gyr for {\tt N2S2H$\alpha$}, and $T_{\rm p}=$ 0.7 Gyr for {\tt Scal}, as the nominal period of a time-varying SFE. Intriguingly, in the Milky Way, a periodic star formation history with a period of $\sim$0.5 Gyr has been suggested in the solar neighborhood, from analysis of the resolved stellar population \citep{Hernandez-00, de-la-Fuente-Marcos-04}. As further pointed out by \cite{Egusa-09}, this periodic star formation history may be associated with the passage of the spiral potential in density wave theory \citep{Lin-69}. Assuming that the potential has a two-armed pattern as suggested by \cite{Drimmel-00}, the pattern speed can be calculated as $\Omega_{\rm P}=\Omega(r=R_{\odot})-\pi/(0.5\ {\rm Gyr}) = 21$ km s$^{-1}$ kpc$^{-1}$. This is also consistent with the result from numerical simulations of the stellar and gaseous response to the spiral potential, presented by \cite{Martos-04}.
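The pattern-speed arithmetic quoted above can be verified directly. The rotation-curve values ($V_{\rm c}\sim 220$ km s$^{-1}$, $R_\odot\sim 8$ kpc) are assumed here for illustration, since the text does not state which values were adopted:

```python
import math

# Worked check of Omega_P = Omega(R_sun) - pi / (0.5 Gyr) ~ 21 km/s/kpc.
# For a two-armed pattern, a 0.5 Gyr interval between arm passages
# corresponds to a frequency offset of pi / (0.5 Gyr).
KM_PER_KPC = 3.086e16
S_PER_YR = 3.156e7

omega_sun = 220.0 / 8.0                       # Omega(R_sun) in km/s/kpc (assumed V_c, R_sun)
half_gyr_s = 0.5e9 * S_PER_YR                 # 0.5 Gyr in seconds
offset = math.pi / half_gyr_s * KM_PER_KPC    # rad/s converted to km/s/kpc
omega_p = omega_sun - offset                  # ~21 km/s/kpc
assert abs(omega_p - 21.0) < 1.0
```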
It is quite suggestive that this same sort of timescale emerges in our own analysis of the metallicities in terms of a periodically varying SFE, and suggests that the periodic (or time-varying) SFE$(t)$ that is relevant on 100-pc scales may be explained by the passage of orbiting packets of gas through the spiral density wave. It should be noted in the context of the gas regulator that changes in the density of the gas due to {\it local} flows associated with the passage of a spiral density wave have no effect, except (quite possibly) to change the SFE because of the changed density of gas, and should not be considered as ``inflows'' or ``outflows'', which are explicitly concerned with changes in the total (baryonic) mass of the gas-regulator ``system'' being considered. Both the metallicity and the specific star-formation rate are formally {\it intrinsic} quantities that make no statement about the size of the system, i.e. whether the boundary of the reservoir is of fixed physical size or not. Put another way, neither the measurements of metallicity nor the sSFR will change (per se) due to a compression of the whole system of gas and stars (unless there are associated physical changes, e.g. in the SFE). However, given the many steps and uncertainties in our analysis, we certainly caution against over-interpretation of these results. \section{Analysis of MaNGA galaxies} \label{sec:5} In the previous section of the paper, we presented results based on MAD galaxies. The high spatial resolution of the MAD data enables a robust statistical analysis of the $\Delta\log$sSFR-$\Delta\log$(O/H) correlation on 100-pc scales. However, the analysis on galactic scales needs to be verified because of the limited sample size of MAD galaxies. In this section, we therefore present an analysis similar to that in Section \ref{sec:4}, using a well-defined set of SF MaNGA galaxies.
The MaNGA sample used in this work includes 976 SF galaxies, which is $\sim$25 times larger than the MAD sample. The spatial coverage of the MaNGA sample also extends beyond 1.5{$R_{\rm e}$}, which is larger than the MAD coverage as a whole. However, the spatial resolution of MaNGA galaxies (1-2 kpc) is much worse than that of MAD galaxies. Therefore, we only focus on the analysis on galactic and ``sub-galactic'' scales for MaNGA galaxies in this section, rather than on individual spaxels. \subsection{Correlations of the integrated $\Delta\log$sSFR and $\Delta\log$(O/H) for MaNGA galaxies} \label{sec:5.1} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/DOP16_radius/manga_15Re_DOP16_model.eps,clip=true,width=0.43\textwidth} \epsfig{figure=./fig/obs/PG16_radius/manga_15Re_PG16_model.eps,clip=true,width=0.516\textwidth} \end{center} \caption{ The $\Delta\log$sSFR$_{\rm <1.5Re}$-$\Delta\log$Z$_{\rm <1.5Re}$ diagram for MaNGA SF galaxies, color-coded by integrated stellar mass. Unlike in the Panels (e) of Figures \ref{fig:9} and \ref{fig:10}, we use the integrated sSFR (and metallicity) measured within 1.5{$R_{\rm e}$}\ to be representative of the global quantity, rather than using the values at one specific galactic radius. In both panels, the black lines are the bisector fits to the data points. For comparison, the model predictions ($\log \xi=-0.2$ for log(O/H)-\citetalias{Dopita-16} and $\log \xi=-0.6$ for log(O/H)-\citetalias{Pilyugin-16}) are overlaid as black curves, taken from the top-middle panel of Figure \ref{fig:1}. The model predictions are scaled to match the scatter of the $\Delta \log$sSFR$_{\rm <1.5Re}$ of MaNGA galaxies. } \label{fig:12} \end{figure*} In Section \ref{sec:4.2.3} we examined the relation between the overall, i.e.
``galactic-scale'', $\Delta\log$sSFR and $\Delta\log$(O/H) for MAD galaxies, taking as measures of these quantities the values from the radial profiles at a fiducial radius of 0.5 {$R_{\rm e}$}, as shown for the 38 MAD galaxies in the two Panels (e) in Figures \ref{fig:9} and \ref{fig:10}. Since the MaNGA coverage usually extends further than 1.5{$R_{\rm e}$}\ for individual galaxies, we can now measure the integrated sSFR and metallicity within 1.5{$R_{\rm e}$}. The metallicity within 1.5{$R_{\rm e}$}\ is computed as the H$\alpha$-luminosity-weighted 12+log(O/H) of all the spaxels within 1.5{$R_{\rm e}$}. This is probably more representative of the global quantities than the sSFR (or metallicity) that was measured in the MAD sample at one particular radius. As before, we first construct the sSFR$_{\rm <1.5Re}$ vs. mass and log(O/H)$_{\rm <1.5Re}$ vs. mass relations, and use these to normalize the measurements of individual galaxies. The first is obtained with a linear fit to the sSFR$_{\rm <1.5Re}$ vs. stellar mass relation. For the metallicity, we do a polynomial fit to the log(O/H)$_{\rm <1.5Re}$ vs. stellar mass relation, since it clearly flattens at the high-mass end. For each individual MaNGA galaxy, we thereby define $\Delta\log$sSFR$_{\rm <1.5Re}$ and $\Delta\log$(O/H)$_{\rm <1.5Re}$ to be the (logarithmic) deviations from these relations. The correlation between $\Delta\log$sSFR$_{\rm <1.5Re}$ and $\Delta\log$(O/H)$_{\rm <1.5Re}$ for the MaNGA sample is shown for the two metallicity indicators in the two panels of Figure \ref{fig:12}. With the sample size enlarged by a factor of about 25, the negative correlation between $\Delta\log$sSFR and $\Delta\log$(O/H) is very clearly seen for both metallicity indicators. The Pearson correlation coefficients for the MaNGA sample are $-$0.23 and $-$0.36 for the {\tt N2S2H$\alpha$} and {\tt Scal} indicators, respectively.
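The normalisation step described above, i.e. fitting the mean relation against stellar mass and taking residuals, can be sketched as follows. The closed-form linear fit stands in for whatever fitting routine was actually used (the polynomial fit used for log(O/H) is analogous), and the sample values are invented for illustration:

```python
# Sketch of the normalisation: fit a linear relation of a quantity against
# log stellar mass, and define Delta as the residual from that fit.
def delta_from_linear_fit(logmass, quantity):
    n = len(logmass)
    mx = sum(logmass) / n
    my = sum(quantity) / n
    sxx = sum((m - mx) ** 2 for m in logmass)
    sxy = sum((m - mx) * (q - my) for m, q in zip(logmass, quantity))
    slope = sxy / sxx
    icpt = my - slope * mx
    return [q - (slope * m + icpt) for m, q in zip(logmass, quantity)]

masses = [9.0, 9.5, 10.0, 10.5, 11.0]          # illustrative log M*
ssfr = [-9.8, -10.0, -10.3, -10.4, -10.7]      # illustrative log sSFR
dssfr = delta_from_linear_fit(masses, ssfr)
# residuals of a least-squares fit sum to zero by construction
```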
The p-values are zero ($<10^{-10}$) for both metallicity indicators, substantially strengthening the results found in the Panels (e) of Figures \ref{fig:9} and \ref{fig:10} for the much smaller MAD sample. For each metallicity indicator, the linear slopes obtained in MaNGA by bisector fitting are very similar to those seen in the MAD sample (Panels (e) of Figures \ref{fig:9} and \ref{fig:10}). However, we note again that the slopes for the two metallicity indicators are significantly different, due to the range of log(O/H)-\citetalias{Dopita-16} being nearly twice that of log(O/H)-\citetalias{Pilyugin-16}. As also noted above, this clear inverse correlation is a re-statement of the existence of SFR as a (negative) second parameter in the mass-metallicity relation. For comparison, we show the model predictions for a sinusoidally time-varying inflow rate as black curves, scaled to the MaNGA data using the same procedure as described in Section \ref{sec:4.2.1}. These same curves were plotted in the Panels (e) of Figures \ref{fig:9} and \ref{fig:10} for comparison with the MAD data. As can be seen, a single model prediction is broadly consistent with both the observed MaNGA and MAD data points. \subsection{Radial profiles of sSFR and Oxygen abundance for MaNGA galaxies} \label{sec:5.2} We now turn to look more closely at the radial variations of SFR and log(O/H) in galaxies, and how these vary across the galaxy population, using again the MaNGA sample. This analysis may be regarded as an extension of the analysis of SFR profiles that was presented in \citetalias{Wang-19}.
In \citetalias{Wang-19}, we investigated the dependence of the normalized star-formation surface density profiles, $\Sigma_{\rm SFR}$(R/{$R_{\rm e}$}), of star-forming MaNGA galaxies on the overall stellar mass and on $\Delta\log$sSFR$_{\rm <Re}$, the vertical deviation of the galaxy from the ``nominal'' SFMS\footnote{The ``nominal'' SFMS is the linear fit to the relation of SFR($<${$R_{\rm e}$}) versus $M_*$($<${$R_{\rm e}$}) for the SF galaxies of MaNGA (see more details in section 2 of \citetalias{Wang-19}). }. We showed (figure 5 of that paper) that star-forming MaNGA galaxies that lie significantly above (or below) the overall SFMS show an elevation (or suppression) of SFR at all radii. In addition, we showed that whereas at low stellar masses this elevation (suppression) of star-formation is more or less uniform with radius, for the more massive galaxies the elevation (or suppression) of star-formation becomes more pronounced in the central regions of the galaxies. As a direct consequence of this, the dispersion in the (normalized) $\Sigma_{\rm SFR}$ across the galaxy population, which we designated $\sigma(\Delta\log\Sigma_{\rm SFR})$, was found to be correlated with the local surface mass density. This is equivalent to an inverse correlation with the gas depletion timescale of the extended Schmidt law \citep{Shi-11}, since in that relation the local surface density and the gas depletion timescale are directly related. This result was shown in figure 9 of \citetalias{Wang-19}. \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/DOP16_radius/zgas_radius_mass_DOP16.eps,clip=true,width=0.84\textwidth} \end{center} \caption{The median gas-phase metallicity profile for MaNGA sample galaxies at a given stellar mass and a given $\Delta\log$sSFR$_{\rm <Re}$. The $\Delta\log$sSFR$_{\rm <Re}$ is defined to be the vertical deviation (i.e. in sSFR) from the ``nominal'' SFMS, i.e. the sSFR$_{\rm <Re}$-$M_{\rm *,<Re}$ relation \citep{Wang-19}.
The blue, green, yellow and red profiles are the median metallicity profiles of galaxies with different $\Delta\log$sSFR$_{\rm <Re}$. The width of each profile is calculated using the bootstrap method from the sample in question. We note that the gas-phase metallicity profile here is generated based on log(O/H)-\citetalias{Dopita-16}.} \label{fig:14} \end{figure*} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/PG16_radius/zgas_radius_mass_PG16.eps,clip=true,width=0.84\textwidth} \end{center} \caption{ The same as Figure \ref{fig:14}, but using log(O/H)-\citetalias{Pilyugin-16}. } \label{fig:15} \end{figure*} In this subsection, we will carry out an analogous study of the metallicity profiles for the same MaNGA galaxies used in \citetalias{Wang-19}. We will investigate the dependence of these metallicity profiles on stellar mass and the deviation from the SFMS, analogously to the analysis of $\Sigma_{\rm SFR}$ in that paper. For consistency with that earlier work, we use $\Delta\log$sSFR$_{\rm <Re}$ to separate galaxies. Galaxies are classified into four sub-samples: 0.33 $<\Delta\log$sSFR$_{\rm <Re}$, 0.0 $<\Delta\log$sSFR$_{\rm <Re}<$ 0.33, $-$0.33 $<\Delta\log$sSFR$_{\rm <Re}<$ 0.0 and $\Delta\log$sSFR$_{\rm <Re}<-0.33$, and the sample is further split into five bins of overall stellar mass, as in \citetalias{Wang-19}. For each individual galaxy, we first compute a radial profile of 12+log(O/H), determined as the median 12+log(O/H) of spaxels located within each radial bin. In each of these subsamples we then construct the median 12+log(O/H)($r$/{$R_{\rm e}$}) radial profiles, using the two metallicity estimators in turn, and estimating uncertainties by bootstrapping the sample galaxies. Figure \ref{fig:14} shows the median 12+log(O/H)-\citetalias{Dopita-16} profiles for these sub-samples of galaxies.
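The profile construction just described, i.e. the median 12+log(O/H) profile of a subsample with its width estimated by bootstrap resampling of the member galaxies, can be sketched as follows (binning scheme and all values are illustrative assumptions):

```python
import random

# Sketch: median metallicity profile of a subsample of galaxies, with the
# profile width estimated by bootstrap resampling over galaxies.
def median(vals):
    s = sorted(vals)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def bootstrap_median_profile(profiles, n_boot=500, seed=1):
    """profiles: one 12+log(O/H) profile (list over radial bins) per galaxy."""
    rng = random.Random(seed)
    n_gal, n_bin = len(profiles), len(profiles[0])
    med = [median([p[i] for p in profiles]) for i in range(n_bin)]
    spreads = []
    for i in range(n_bin):
        boots = []
        for _ in range(n_boot):
            # resample galaxies (not spaxels) with replacement
            resamp = [profiles[rng.randrange(n_gal)] for _ in range(n_gal)]
            boots.append(median([p[i] for p in resamp]))
        spreads.append(max(boots) - min(boots))
    return med, spreads
```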
In each stellar mass bin, the metallicity profiles are displayed in blue, green, yellow and red in descending order of their overall $\Delta\log$sSFR$_{\rm <Re}$. Figure \ref{fig:15} is the same as Figure \ref{fig:14} but for log(O/H)-\citetalias{Pilyugin-16}. The first-order result is clear: independent of which metallicity indicator is used, low mass galaxies that lie significantly above (or below) the SFMS in their overall sSFR have systematically lower (or higher) log(O/H) at all galactic radii. This dependence of log(O/H) on $\Delta\log$sSFR however decreases (and even vanishes and possibly reverses) with increasing stellar mass. There is some evidence that it also decreases towards the centers of the galaxies (see for example the higher mass bins in Figure \ref{fig:14}). The result of Figure \ref{fig:14} is broadly consistent with the result of Figure \ref{fig:15}, except for the highest mass bins. In the highest mass bin of Figure \ref{fig:14}, a positive correlation between $\Delta\log$sSFR and $\Delta\log$(O/H) can be seen, which is clearly opposite to the result in the other mass bins. This might be due to a failure of the {\tt N2S2H$\alpha$} indicator, in that the assumed N/O-O/H relation may not hold for the most massive SF galaxies. The overall negative correlation between the overall $\Delta\log$sSFR and log(O/H) shown by the sets of profiles in Figures \ref{fig:14} and \ref{fig:15} is another manifestation of the inverse correlations in the Panels (e) of Figures \ref{fig:9} and \ref{fig:10} and in Figure \ref{fig:12}. We argue that this indicates that time-varying inflows are the primary driver of the variations of sSFR and log(O/H) across the galaxy population \citep[also see \citetalias{Wang-19};][]{Wang-20a}. As noted above, this result is a manifestation of the general presence of SFR as a (negative) second parameter in the overall mass-metallicity relation \citep[e.g.][]{Mannucci-10}.
The fact that we see the range of log(O/H) (at given R/{$R_{\rm e}$}) decreasing with stellar mass is also consistent with previous studies of the overall $Z(M_*,{\rm SFR})$ relation \citep[e.g.][]{Mannucci-10, Curti-20}. In figure 5 of \citetalias{Wang-19}, the {\it dispersion} in the (normalized) $\Sigma_{\rm SFR}$ increases slightly with increasing stellar mass, and also increases towards the centers of galaxies at a given stellar mass. It is quite striking how the {\it dispersion} of $\log({\rm O/H})$ (at a given mass and radius) behaves in the {\it opposite} way to the dispersion in (normalized) $\Sigma_{\rm SFR}$ shown in our previous work. Whereas the former decreases with increasing stellar mass (and possibly towards the centers of galaxies), the dispersion of $\Sigma_{\rm SFR}$ (or sSFR) increases slightly with mass, and increases towards the centers of galaxies. We will discuss this in detail in Section \ref{sec:5.3}. \subsection{Quantitative interpretation of the dispersion of gas-phase metallicity and sSFR in MaNGA galaxies} \label{sec:5.3} \begin{figure*} \begin{center} \epsfig{figure=./fig/obs/manga/sigma_dsfr7_DOP16.eps,clip=true,width=0.45\textwidth} \epsfig{figure=./fig/obs/manga/sigma_dsfr7_PG16.eps,clip=true,width=0.45\textwidth} \end{center} \caption{Left panel: The ratio of $\sigma$($\Delta \log$(O/H))-\citetalias{Dopita-16} to $\sigma$($\Delta \log \Sigma_{\rm SFR}$) as a function of $\tau_{\rm dep}$ derived using the extended Schmidt law \citep{Shi-11}, based on the MaNGA galaxies. The different colors represent different stellar mass bins, as denoted in the top-left corner. Data points with radius larger than {$R_{\rm e}$}\ are indicated in gray. The uncertainties are measured by the bootstrap method. Right panel: The same as the left panel, but using log(O/H)-\citetalias{Pilyugin-16}.
In both panels, the black solid lines are the model predictions taken from the left panel of Figure \ref{fig:3} with $Z_0/y_{\rm eff}=$0.0, 0.5 and 1.0, but with horizontal shifts. } \label{fig:17} \end{figure*} As discussed in Section \ref{sec:2}, the simple gas-regulator model predicts not only the sign of the correlation between $\Delta \log$SFR and $\Delta \log Z$, but also the variation of $\Delta \log$SFR and $\Delta \log Z$ as a function of $\xi$ (see Figure \ref{fig:3}), where $\xi$ is defined in Equation \ref{eq:6.1} as the ratio of the driving period to the effective gas depletion timescale. In this subsection, we will investigate more quantitatively the variation of $\Delta \log$SFR and $\Delta \log$(O/H) in MaNGA galaxies. This will inevitably run up against the systematic uncertainties arising from the significantly different outputs of our two chosen metallicity indicators. We can however try to minimize these effects by looking at relative trends across the galaxy population, expecting that systematic effects will thereby cancel out. In \citetalias{Wang-19}, we constructed the $\Sigma_{\rm SFR}$ profiles for MaNGA galaxies in five different stellar mass bins and defined the parameter $\Delta \log \Sigma_{\rm SFR}$ as the deviation of a given galaxy from the median $\Sigma_{\rm SFR}$ profile at a given galactic radius and in a given stellar mass bin. We then computed the dispersion of this quantity across the population within five stellar mass bins and within five radial bins of $r$/{$R_{\rm e}$}. It was found that the scatter of $\Delta \log \Sigma_{\rm SFR}$, which we denoted as $\sigma(\Delta \log \Sigma_{\rm SFR})$, increases significantly with stellar surface mass density, $\Sigma_*$. We interpreted this trend in terms of a decreasing gas depletion time, linking the stellar surface mass density to the gas depletion timescale via the extended Schmidt law \citep{Shi-11}, i.e. $\tau_{\rm dep} \propto \Sigma_*^{-1/2}$.
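The mapping from stellar surface density to depletion time used here can be sketched as follows; the normalisation (1 Gyr at $\Sigma_*=10^{8.5}\,{\rm M_\odot\,kpc^{-2}}$) is an assumed anchor for illustration, not a value taken from the paper:

```python
# Sketch of tau_dep inferred from the extended Schmidt law (Shi et al. 2011):
# SFE scales as Sigma_*^{1/2}, hence tau_dep scales as Sigma_*^{-1/2}.
# The reference point (log_tau_ref at log_sigma_ref) is an assumed anchor.
def log_tau_dep(log_sigma_star, log_tau_ref=9.0, log_sigma_ref=8.5):
    return log_tau_ref - 0.5 * (log_sigma_star - log_sigma_ref)

# a factor of 100 rise in Sigma_* shortens tau_dep by a factor of 10
```

The only feature the analysis relies on is the slope: a two-dex rise in $\Sigma_*$ corresponds to a one-dex fall in $\tau_{\rm dep}$, independent of the anchor.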
The trend with the inferred gas depletion time was found to be consistent with the model prediction for the scenario driven by a time-varying inflow rate, provided that the driving period of the inflow was more or less the same for all galaxies. In this subsection, we look at the analogous variation of $\Delta \log ({\rm O/H})$ in the population, and compare the result with the model predictions. Similar to the definition of $\Delta \log \Sigma_{\rm SFR}$, we define $\Delta \log$(O/H) to be the deviation from the median $\log$(O/H) at a given galactic radius and in a given stellar mass bin. We then compute the dispersion of this quantity, $\sigma$($\Delta \log({\rm O/H})$), and compare this to the dispersion of the completely independent (normalised) star-formation surface density, $\sigma(\Delta \log \Sigma_{\rm SFR})$, used in \citetalias{Wang-19}. Figure \ref{fig:17} shows the ratio of $\sigma$($\Delta \log({\rm O/H})$) to $\sigma$($\Delta\log \Sigma_{\rm SFR}$) as a function of the inferred gas depletion time, for both metallicity indicators. The gas depletion time is again derived from $\Sigma_*$ on the basis of the extended Schmidt law \citep{Shi-11}. To be in line with \citetalias{Wang-19}, we here use the scatter of $\Delta\log\Sigma_{\rm SFR}$, rather than the scatter of $\Delta\log{\rm sSFR}$. We have checked that using the latter gives a result consistent with that in Figure \ref{fig:17}. In each panel of Figure \ref{fig:17}, different colors are used for different stellar mass bins. The different data points of the same color represent the different radial bins, evenly spaced with a bin width of 0.2{$R_{\rm e}$}. Within a fixed stellar mass bin, $\tau_{\rm dep}$ monotonically increases with galactic radius as $\Sigma_*$ decreases. As in \citetalias{Wang-19}, data points at galactic radii larger than {$R_{\rm e}$}\ are indicated in gray, since these may be more easily affected by environmental effects \citep[also see \citetalias{Wang-19};][]{Wang-20a}.
Readers are invited to consult figure 9 in \citetalias{Wang-19} and figures 15 and 17 in \cite{Wang-20a} for further insights into the variations of $\Sigma_{\rm SFR}$. As a whole, the ratio $\sigma(\Delta \log ({\rm O/H}))$/$\sigma(\Delta \log \Sigma_{\rm SFR})$ increases quite sharply with the inferred gas depletion time for both metallicity indicators. This is individually true for each of the five stellar mass bins (where it reflects radial changes), except for the highest stellar mass bin for log(O/H)-\citetalias{Dopita-16}, and it is true when comparing galaxies of different stellar masses. These trends reflect quantitatively a combination of the effects that are seen in Figures \ref{fig:14} and \ref{fig:15} of this paper and figure 5 of \citetalias{Wang-19}. It should however be noted that the dispersions $\sigma$ are calculated using the individual galaxies, whereas these figures plot the median values within the four bins of $\Delta \log$SFR, and so they are not directly comparable. We have discussed the particular case of the highest mass bin for log(O/H)-\citetalias{Dopita-16} in Section \ref{sec:5.2}. It is again clear that the dispersions $\sigma(\Delta \log ({\rm O/H}))$ obtained using the different metallicity estimators differ by a factor of two, reflecting their two-fold difference in range within the sample. In the absence of a reliable reason to prefer one over the other, this makes any precise quantitative comparison with the predictions of the simple gas-regulator model (see Figure \ref{fig:3}) impossible. However, three points may be made, independent of the choice of estimator. First, both metallicity estimators show about a factor of four {\it increase} in the ratio $\sigma(\Delta \log ({\rm O/H}))$/$\sigma(\Delta \log \Sigma_{\rm SFR})$ as the inferred gas depletion timescale increases by an order of magnitude from $\log \tau_{\rm dep}$ $\sim$ 8.8 to 9.8.
This is as expected for variations in inflow rate (around $\log \xi \sim 0$), and quite opposite to the trend expected for variations in SFE, which would have predicted a decrease in this ratio as the gas depletion timescale increases. By horizontally shifting the model prediction in the left panel of Figure \ref{fig:3} (according to Equation \ref{eq:15}) so as to match the result of Figure \ref{fig:17} (see the black lines), we get $(1+\lambda)T_{\rm p}/2\pi \sim 10^{9.5}$ yr for log(O/H)-\citetalias{Dopita-16}, and $\sim 10^{9.8}$ yr for log(O/H)-\citetalias{Pilyugin-16}. These are equivalent to driving periods of the inflow $T_{\rm p}$ of a few to several Gyr. This is broadly consistent with the independent argument presented in \cite{Wang-20a} that the variation of $\Delta\log$sSFR is mainly produced by a variation of inflow rate on relatively long timescales. The driving period of changes in the inflow appears to be considerably longer than the period of the temporal variations in SFE discussed in Section \ref{sec:4.3}. Second, it can be seen that at a given gas depletion time (stellar surface mass density), more massive galaxies tend to have a {\it lower} value of $\sigma(\Delta \log({\rm O/H}))$/$\sigma(\Delta \log \Sigma_{\rm SFR})$. This could possibly reflect the quite plausible expectation that more massive galaxies might well have higher $Z_0/y_{\rm eff}$ than less massive galaxies, due either to a lower wind mass-loading $\lambda$ or to a higher inflow metallicity $Z_0$, leading to a reduction in $\sigma(\Delta \log({\rm O/H}))$/$\sigma(\Delta \log \Sigma_{\rm SFR})$, as shown in Figure \ref{fig:3}. Finally, it is noticeable that, even with the larger range of log(O/H)-\citetalias{Dopita-16}, the ratio $\sigma(\Delta \log ({\rm O/H}))$/$\sigma(\Delta \log \Sigma_{\rm SFR})$ never exceeds unity, the maximum value permitted by the gas-regulator model, as shown in Figure \ref{fig:3}.
\section{Discussion} \label{sec:6} \subsection{The scale dependence of the $\Delta\log${\rm sSFR}-$\Delta\log$(O/H) relation} \label{sec:6.1} In this work, we find that on $\sim$100 pc scales within individual galaxies, the local $\Delta \log$(O/H) appears to be positively correlated with the local $\Delta \log$sSFR, whereas, when looking at the integrated quantities of the same galaxies across the galaxy population, the $\Delta \log ({\rm O/H})$ is found to be negatively correlated with $\Delta\log$sSFR. These results are quite consistent with previous findings, as discussed in Section \ref{sec:introduction}. Specifically, based on the $\sim$1000 SAMI galaxies, \cite{Sanchez-19} found that $\Delta \log$sSFR and $\Delta \log$(O/H) (defined in a similar way as in the present work) show a negative correlation across the galaxy population for a wide range of metallicity indicators, with correlation coefficients between $-$0.32 and $-$0.14. At highly-resolved scales ($\sim$100 pc), many authors have found that regions with strongly enhanced star formation show enhanced gas-phase metallicity \citep[e.g.][]{Ho-18, Erroz-Ferrer-19, Kreckel-19}. Our results are consistent with these previous results. In the context of our simple gas-regulator framework, the opposite sign of the correlation between $\Delta \log$sSFR and $\Delta \log$(O/H) on 100-pc (GMC) scales and on galactic and large sub-galactic scales indicates that different physical processes regulate the star formation and chemical enhancement on these different scales. As a whole, a positive $\Delta \log$sSFR-$\Delta \log$(O/H) relation arises from driving the gas-regulator with a time-varying SFE$(t)$, and a negative $\Delta \log$sSFR-$\Delta \log$(O/H) relation is the result of driving it with a time-varying inflow rate.
A time-varying SFE$(t)$ at $\sim$100 pc scales \citep[see][]{Kruijssen-19, Chevance-20}, and a time-varying inflow rate at galactic scales \citep[see][]{Wang-19, Wang-20b}, are also suggested by other recent works. In this work, we have not examined the intermediate scales of, say, 1 kpc. However, it is not difficult to infer that, as the scale is increased, the effect of a time-varying SFE$(t)$ becomes weaker and that of a time-varying inflow rate becomes stronger. This is likely the reason that at $\sim$1 kpc scales, the correlation between $\Delta \log$sSFR and $\Delta \log$(O/H) is weaker or even disappears, as seen in previous works \citep{Moran-12, Barrera-Ballesteros-17, Berhane-Teklu-20}. \subsection{What determines the gas-phase metallicity?} \label{sec:6.2} As shown in Equation \ref{eq:22}, the metallicity of the steady state (i.e. constant inflow rate and constant SFE) is only determined by the metallicity of the inflow gas $Z_0$ and the effective yield including the effects of any wind, $y(1-R+\lambda)^{-1}$. These two parameters are expected to be strongly correlated with the global stellar mass. $Z_0$ may be expected to increase with stellar mass, because the circumgalactic medium of galaxies has been enriched by outflows driven by star formation in the past. The wind-loading $\lambda$ is expected to decrease with stellar mass, because more massive galaxies have deeper gravitational potential wells. This is probably the origin of the observed mass-metallicity relation. In addition, we emphasize that, within the gas-regulator framework, $Z$ does not depend on the {\it absolute} value of the SFE or inflow rate, but on their changes with time. Based on the analysis of the present work (and also some previous works), the so-called ``fundamental metallicity relation'' is clearly not valid at sub-kpc scales.
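Written out explicitly, the steady-state metallicity referred to above takes the form

```latex
\begin{equation*}
Z_{\rm eq}=Z_{0}+y_{\rm eff}\, ,\qquad
y_{\rm eff}\equiv\frac{y}{1-R+\lambda}\, .
\end{equation*}
```

As a purely illustrative example (these values are assumed, not fitted), taking $y=0.04$, $R=0.4$ and $\lambda=0.5$ gives $y_{\rm eff}\approx0.036$, while increasing $\lambda$ to $2$ lowers $y_{\rm eff}$ to $\approx0.015$: a larger wind-loading suppresses the equilibrium enrichment above the inflow metallicity $Z_0$.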
In the gas-regulator framework, we predict that the mass of the gas reservoir is always negatively correlated with metallicity (see Figures \ref{fig:1} and \ref{fig:2}), regardless of whether the driving mechanism is a time-varying SFE or a time-varying inflow rate. Observationally, \cite{Bothwell-13} and \cite{Bothwell-16} found that the cold gas mass appears to be more fundamental in determining the MZR than the SFR, for both atomic and molecular gas, consistent with this picture. In this sense, the mass of the gas reservoir is a better secondary parameter than the SFR in determining the metallicity from GMC scales up to galactic scales. The importance of the SFR is that, in studying the correlation between $\Delta\log$sSFR and $\Delta\log$(O/H) (as in this paper), we can distinguish the underlying physical processes that are driving variations in gas content, SFR and metallicity, even for individual galaxies. Specifically, even though the negative $\Delta\log$sSFR vs. $\Delta\log$(O/H) relation across the galaxy population indicates that time-varying inflow is the main driver of variations in SFR, it is entirely possible that in some particular galaxies, existing reservoirs of cold gas are undergoing strong gravitational instability, leading to a temporary increase in their SFE. For instance, \cite{Jin-16} identified 10 SF galaxies with kinematically misaligned gas and stars from the MaNGA survey. These galaxies have intense on-going star formation and high gas-phase metallicity in their central regions with respect to normal SF galaxies, which can be easily interpreted as evidence of a temporary increase in SFE due to collisions between the pre-existing gas and the misaligned inflowing gas. \subsection{Caveats} \label{sec:6.4} In Section \ref{sec:2.2} (or Section \ref{sec:2.3}), we always explore the effects of a time-varying inflow rate (or SFE) while assuming the other to be time-invariant.
However, we note that in the real universe, both the inflow rate and the SFE could vary with time simultaneously. On galactic scales, the negative correlation between $\Delta\log$sSFR and $\Delta\log$(O/H) in the observations indicates that time-varying inflow rates are dominant. However, this does not mean that the SFE is fully time-invariant at galactic scales for all SF galaxies. Actually, as also mentioned in Section \ref{sec:6.2}, the SFE may be temporarily enhanced in some galaxies, due to physical processes such as a merger or interaction with close companions. Mergers and interactions may trigger gravitational instabilities in the cold gas, and further enhance the SFE. At the small 100-pc scale, where the variation of SFE is dominant, we can probably ignore possible variations of the inflow rate between different regions within individual galaxies. In addition, we do not consider other feedback processes of star formation in the model, except for the outflow, which is assumed to be simply proportional to the SFR. The dispersal and mixing of enriched supernova ejecta with the interstellar medium are assumed for simplicity in this paper to be instantaneous. The mixing timescale is expected to be strongly dependent on physical scale \citep{Roy-95, deAvillez-02}. Specifically, on scales of 100 pc or less, the mixing timescale should be a few tens of Myr, of order ten times shorter than the few hundred Myr variation timescales of the SFE that we proposed in this paper. Therefore, the simplifying assumption of instantaneous mixing is unlikely to invalidate the conclusions drawn here. In the model, we also assume that the yield $y$ of metals is uniform both within galaxies and across the galaxy population. The yield $y$ is closely related to the relative number of Type II supernovae, and therefore to the IMF. In the real universe, the IMF may differ from galaxy to galaxy, or even between different parts of the same galaxy.
Indeed, by using a sensitive index of the IMF, $^{13}$CO/C$^{18}$O, \cite{Zhang-18} found that the IMF in dusty star-burst galaxies at redshift $\sim$2-3 may be more top-heavy with respect to the \cite{Chabrier-03} IMF. A top-heavy IMF would result in a larger $y$ than a bottom-heavy one, which increases the complexity of understanding the metal enhancement process by star formation. In Section \ref{sec:4}, we presented the observational results obtained using the {\tt N2S2H$\alpha$} and {\tt Scal} metallicity indicators. The two indicators produce broadly consistent results. As discussed in Section \ref{sec:3.3}, these two indicators, proposed by \citetalias{Dopita-16} and \citetalias{Pilyugin-16} respectively, offer significant improvements and advantages over previous indicators, like {\tt N2} and {\tt O3N2}. However, we note that when using {\tt O3N2}, the derived results differ in part from those presented here. Specifically, a negative correlation between $\Delta\log$sSFR and $\Delta\log$(O/H) across the MaNGA galaxy population can still be seen with {\tt O3N2}, while the $\Delta\log$sSFR-$\Delta\log$(O/H) relation of individual spaxels for MAD galaxies is found to depend quite strongly on stellar mass. For galaxies with stellar masses above $\sim10^{10.5}$${\rm M}_\odot$, the $\Delta\log$sSFR and $\Delta\log$(O/H) of spaxels show a positive correlation, which is similar to the results based on {\tt N2S2H$\alpha$} and {\tt Scal}. But for galaxies with stellar masses below $\sim10^{10.5}$${\rm M}_\odot$, the correlation between $\Delta\log$sSFR and $\Delta\log$(O/H) of spaxels becomes negative, different from the results of {\tt N2S2H$\alpha$} and {\tt Scal} shown in this paper. This may be due to the fact that both the {\tt N2S2H$\alpha$} and {\tt Scal} indicators break the degeneracy between metallicity and ionization parameter.
Although we prefer to use the {\tt N2S2H$\alpha$} and {\tt Scal} indicators, we mention this alteration of the results when using {\tt O3N2} here for those readers who may prefer that metallicity indicator. In the present work, we have only compared the observational results on low redshift galaxies with the model predictions. However, we note that our model predictions are also applicable to high-redshift galaxies. One may expect to push the analysis of the current work to high redshift in the near future, based on near-infrared spectroscopic galaxy surveys with the JWST. \section{Summary and Conclusions} \label{sec:7} \label{sec:conclusion} The present work consists mainly of two parts. One is the theoretical prediction of the correlation between SFR and gas-phase metallicity in the gas-regulator framework (see Section \ref{sec:2}). The other is the study of this correlation directly from the observations and the comparison of the results with the model predictions (see Sections \ref{sec:4} and \ref{sec:5}). We summarize the results of these two parts in the following. The gas-regulator model is based on the interplay between inflow, outflow and star formation, assuming that the star formation is instantaneously determined by the mass of the cold gas reservoir \citep[][\citetalias{Wang-19}]{Lilly-13}. From the continuity of the mass of metals and of the gas reservoir, we build the two basic continuity equations, shown in Equation \ref{eq:2} and Equation \ref{eq:3}. There are in total five quantities that determine the solution of the equations: the (assumed here varying) inflow rate $\Phi(t)$ and SFE$(t)$, and the (assumed here constant) mass-loading factor $\lambda$, metallicity of the inflow gas $Z_{\rm 0}$ and the yield $y$. Once these five quantities are specified, the solutions for SFR$(t)$, $M_{\rm gas}(t)$ and $Z(t)$ are unique. The model predictions are listed below.
\begin{itemize} \item When driving the gas-regulator system with a sinusoidal inflow rate and a time-invariant SFE, the resulting SFR$(t)$, $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ are also exact sinusoidal functions of time, but with some phase-delay relative to the inflow rate (see Equations \ref{eq:6} and \ref{eq:8}). The $\Delta\log$SFR and $\Delta \log Z$, defined as $\log {\rm SFR}(t)/\langle {\rm SFR}\rangle$ and $\log Z(t)/\langle {Z}\rangle$, are found to be negatively correlated, and the ratio of $\sigma(\Delta \log{\rm SFR})$ to $\sigma(\Delta \log Z)$ increases with increasing $\xi$, defined in terms of the effective gas depletion timescale to be $2\pi\tau_{\rm dep,eff}/T_{\rm p}$ (see Equation \ref{eq:15}). If the gas-regulator is driven by a periodic step-function in the inflow rate, a similar negative correlation between $\Delta \log$SFR and $\Delta \log Z$ is produced. \item When driving the gas-regulator system with a sinusoidal SFE and a time-invariant inflow rate, the resulting SFR$(t)$, $M_{\rm gas}(t)$ and $M_{\rm Z}(t)$ can be approximated by sinusoidal functions if the variation of the SFE is small (see the approximate solutions in Equations \ref{eq:17} and \ref{eq:18}). Opposite to the case of a time-varying inflow rate, $\Delta\log$SFR and $\Delta \log Z$ are now positively correlated, and the ratio of $\sigma(\Delta \log{\rm SFR})$ to $\sigma(\Delta \log Z)$ decreases with increasing $\xi$ (see Equation \ref{eq:20}). When driving the gas-regulator with a periodic SFE in the form of a step-function, we find that the positive correlation between $\Delta\log$SFR and $\Delta \log Z$ becomes less significant with respect to the case of a sinusoidal SFE. However, one thing is clear: the states with highly enhanced SFR are always metal-enhanced with respect to the mean metallicity.
\item Regardless of whether the gas-regulator is driven with a time-varying inflow or a time-varying SFE, $\Delta \log M_{\rm gas}$ is always predicted to be negatively correlated with $\Delta \log Z$ (see Figure \ref{fig:1} and Figure \ref{fig:2}). \item The scatter of $\Delta\log Z$ always decreases with increasing $Z_0/y_{\rm eff}$, where $y_{\rm eff}$ is defined as $y(1-R+\lambda)^{-1}$ (see Equation \ref{eq:15}, Equation \ref{eq:20} and Figure \ref{fig:3}). \item The mean SFR is determined by the mean inflow rate and the mass-loading factor, and the mean metallicity is determined by $Z_{\rm 0}+y_{\rm eff}$ (see Equations \ref{eq:21} and \ref{eq:22}). The resulting $Z$ does not depend on the SFE (or inflow rate) itself, but does depend on its temporal {\it changes}. \end{itemize} The key point is that a time-varying inflow rate leads to the opposite correlation between $\Delta\log$SFR and $\Delta \log Z$ from that produced by a time-varying SFE. Therefore, studying the $\Delta \log$SFR-$\Delta \log Z$ relation in observational data on different spatial scales can in principle directly distinguish the driving mechanisms of the variation of star formation and gas-phase metallicity in galaxies. In the model predictions, we note that the conclusions concerning the $\Delta \log$SFR-$\Delta \log Z$ relation are also valid for the $\Delta \log$sSFR-$\Delta \log$(O/H) relation, when $\Delta \log$sSFR is defined as the displacement of $\log{\rm sSFR}(t)$ from its smooth cosmic evolution. We then utilize the two-dimensional spectroscopic data of 38 SF galaxies from the MAD survey \citep{Erroz-Ferrer-19}, as well as a well-defined SF sample of $\sim$1000 galaxies from the MaNGA survey (\citetalias{Wang-19}). The spatial resolution of the MAD galaxies is $\sim$100 pc or less, while the spatial resolution of the MaNGA galaxies is 1-2 kpc.
The MAD sample enables us to study the $\Delta \log$sSFR-$\Delta \log({\rm O/H})$ relation down to 100-pc (GMC) scales, while the large sample size of MaNGA enables us to statistically study the $\Delta\log$sSFR-$\Delta \log({\rm O/H})$ relation at galactic or large (radial) sub-galactic scales across the galaxy population. The SFR is measured based on the dust-attenuation-corrected H$\alpha$ luminosity \citep{Kennicutt-98}. The two versions of gas-phase metallicity are measured by adopting two recently-developed indicators: {\tt N2S2H$\alpha$} (\citetalias{Dopita-16}) and {\tt Scal} (\citetalias{Pilyugin-16}), which represent improvements and advantages over the previously widely-used indicators. The results of these two metallicity indicators are very similar. Here we summarize the main observational results, which are valid for both of these metallicity indicators. \begin{itemize} \item Consistent with previous studies, we find that MAD galaxies generally show a positive sSFR profile, confirming an inside-out growth scenario. As a whole, the gas-phase metallicity increases strongly with stellar mass, and decreases with galactic radius within individual galaxies, as expected. \item At $\sim$100 pc scales in MAD galaxies, we find that $\Delta\log$sSFR and $\Delta \log$(O/H) are positively correlated. This positive correlation shows little or no dependence on the overall stellar mass of the galaxy. We note that the positive correlation of $\Delta\log$sSFR vs. $\Delta \log$(O/H) at 100 pc scales does not hold when using the {\tt O3N2} metallicity indicator for galaxies with stellar mass below $\sim10^{10.5}$${\rm M}_\odot$. The inconsistency likely arises from the fact that {\tt O3N2} shows a much larger uncertainty than {\tt Scal} in determining the metallicity, as discussed in Section \ref{sec:3.3.2}. \item At galactic scales, we find in contrast that $\Delta \log$sSFR and $\Delta \log({\rm O/H})$ are negatively correlated across the galaxy population.
This is true for both the MAD and the larger MaNGA samples. The correlation between $\Delta\log$sSFR and $\Delta \log({\rm O/H})$ shows a strong dependence on global stellar mass and galactic radius. \item At the $\sim$100 pc scale, the ratio of $\sigma(\Delta \log({\rm O/H}))$ to $\sigma(\Delta \log \Sigma_{\rm SFR})$ shows almost no dependence on the global stellar mass. However, at galactic scales, $\sigma(\Delta \log({\rm O/H}))/\sigma(\Delta \log \Sigma_{\rm SFR})$ increases with the inferred gas depletion time (inferred from the surface mass density using the extended Schmidt law). At fixed gas depletion time, $\sigma(\Delta \log({\rm O/H}))/\sigma(\Delta \log \Sigma_{\rm SFR})$ appears to be smaller for galaxies of higher stellar mass. \end{itemize} We interpret the observational results in the frame of the gas-regulator model. The overall increase of metallicity with global stellar mass, and the decrease of metallicity with galactic radius, can be well explained as the mass and radial dependence of the metallicity of the inflow gas $Z_0$ and the mass-loading factor $\lambda$. At 100-pc scales, the positive correlation between $\Delta \log$sSFR and $\Delta \log({\rm O/H})$ indicates that the time-varying SFE plays a dominant role in governing the star formation and metal enhancement. This is also consistent with the fact that the variation of SFE increases strongly towards smaller scales \citep{Kreckel-18, Chevance-20}, and is likely caused by the passage of orbiting gas through regions of higher SFE, such as spiral arms. At galactic or sub-galactic scales, the negative correlation across the galaxy population indicates that the time-varying inflow rate plays a dominant role. In addition, the variations of $\Delta \log$sSFR and $\Delta \log({\rm O/H})$ as a function of gas depletion time are in quite good agreement with the model predictions.
This further strengthens the conclusion that on galactic scales the star formation and metal-enhancement are primarily regulated by the time-varying inflow rate of gas from the surrounding medium. We emphasize that the sign of the correlation between gas-phase metallicity and SFR is a powerful diagnostic of the driving mechanisms of star formation. Our study provides a new perspective in understanding the correlation between star formation rate, gas-phase metallicity and the mass of the cold gas reservoir, that is applicable from 100-pc scales up to galactic scales, from individual galaxies up to the overall galaxy population, and at both low and high redshifts. \acknowledgments Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatory of China, New Mexico State University, New York University, University of Notre Dame, Observat\'ario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
\section{Introduction} Currently, over 400 extra-Solar planetary systems have been found; most of them around main sequence stars, with a few tens found in wide binary systems. Typically, such planets are thought to form in a protoplanetary disk, left over from the protostellar disk in which the central star formed \citep[e.g. ][for a recent review]{arm07}. Several studies explored the later effects of stellar evolution on the survival and dynamics of planets around an evolving star \citep{deb+02,vil+07} or the possible formation of planets around neutron stars (NSs; see \citet{phi+94,pod95} for reviews). Others studied the formation and stability of planets in binary systems \citep[see][for a review]{hag09}. In this study we focus on the implications of stellar evolution\emph{ in binaries} on the formation and growth of planets. One of the most likely outcomes of stellar evolution in binaries is mass transfer from an evolved donor star {[}which later becomes a compact object; a white dwarf (WD), NS or a black hole (BH); here we mostly focus on low mass stars which evolve to become WDs{]} to its binary companion. If the binary separation is not too large, this process could result in the formation of an accretion disk, containing a non-negligible amount of mass, around the companion. Here we suggest that such a disk could resemble in many ways a protoplanetary disk, and could therefore produce a second generation of planets and debris disks around old stars. In addition, the renewed supply of material to a pre-existing ('first generation') planetary system (if such exists after surviving the post-MS evolution of the host star) is likely to have major effects on this system, possibly leading to the regrowth/rejuvenation of the planets and planetesimals in the system, as well as possibly introducing a second epoch of planetary migration. The later evolution of evolved binaries could, in some cases, even lead to a third generation of planet formation in the system.
When the previously accreting star (the lower mass star in the pre-evolved system) goes through its stellar evolution phase, it too can expand to become a mass donor to its now compact object companion. A new disk of material then forms around the compact object and planet formation may occur again, this time around the compact object (see also \citealp{bee+04}, which tries to explain the formation of the pulsar planet system PSR B1620-26). One could even suggest further next generations of planets in multiple systems (triples, quadruples etc.), in which the post-MS evolution of each of the stellar components could provide new material for another generation of planet formation. Such further generations of planets, however, are less likely, requiring much more fine tuned conditions than those existing more robustly in binary systems. \begin{figure} \includegraphics[scale=0.4]{fig1d} \caption{Second (and third) generation planet formation. The various stages of second generation planet formation are shown schematically. (a) The initial configuration: a binary MS system, possibly having a first generation (I) circumstellar planet around the lower mass star on an allowed (stable) orbit. (b) The higher mass star evolves to the AGB phase, and sheds material which is accreted onto the secondary, and forms a protoplanetary disk. The binary orbit expands, and the allowed stability region expands with it. The existing first generation planet may or may not survive this stage (see text). (c) Second generation debris disk and planets are formed, in regions previously forbidden for planet formation (in the pre-evolved system; panel a). (d) The secondary evolves off the MS, and sheds material to its now WD companion. A protoplanetary disk forms. The binary orbit and the planetary allowed region (around the WD) further expand. Second generation planets may or may not survive this stage (see text).
(e) Third generation debris disk and planets are formed around the WD, in regions previously forbidden for planet formation (in the pre-evolved system; panel a).} \end{figure} In the following we discuss the role of binary stellar evolution in the formation of a second and third generation of planets in evolving binary systems (a schematic overview of this scenario is given in Fig. 1). We begin by discussing the conditions for the formation of a second generation protoplanetary disk and its properties (\S 2). We then explore the role of such disks in the formation of new planets (\S 3), their effects on pre-existing planetary systems (\S 4), and on the formation of planets around compact objects (\S 5). We then review the observational expectations for such second generation planetary systems as suggested from our discussion (\S 6). Finally, we suggest several planetary systems as being candidate second generation planetary systems (\S 7) and then conclude (\S 8). \section{Second generation protoplanetary disks} The first stage in planet formation would be setting the initial conditions of the protoplanetary disk in which the planets could form. We therefore first try to understand whether appropriate disks could be produced following a mass transfer epoch during the stellar evolution of a binary. Studies of planet formation in binary stars suggest that the binary separation should be large, in order for planets to be formed and produce a planetary system around one of the binary stellar components \citep[see][for a review]{hag09}. This is also consistent with the observational picture in which the smallest separations for planet-hosting binaries are of the order of \textasciitilde{}20 AU \citep{egg+04}. We therefore mostly focus on relatively wide wind-accreting binaries in \S2.1.
Nevertheless, mass transfer in close binaries could produce circumbinary disks of material, possibly serving as an environment for the formation of circumbinary planets; we briefly discuss this latter, more complicated possibility in \S2.2. We then also briefly discuss the composition of second generation disks. \subsection{Circumstellar disks in wide binaries} In order to understand the conditions for the formation of circumstellar disks from wind accreted material, we follow \citet{sok+00}. For a disk to form around a star (or compact object), one requires that $j_{a}>j_{2}$, where $j_{a}$ is the specific angular momentum of the accreted material, and $j_{2}=(GM_{2}R_{2})^{1/2}$ is the specific angular momentum of a particle in a Keplerian orbit at the equator of the accreting star of radius $R_{2}$. For typical values for a MS accretor and a mass-losing terminal AGB star, they find the following condition for the formation of a disk \begin{multline} 1<\frac{j_{a}}{j_{2}}\simeq7.2\left(\frac{\eta}{0.2}\right)\left(\frac{M_{1}+M_{2}}{2.5\,M_{\odot}}\right)\left(\frac{M_{2}}{M_{\odot}}\right)^{3/2}\\ \times\left(\frac{R_{2}}{R_{\odot}}\right)^{-1/2}\left(\frac{a}{10\,{\rm AU}}\right)^{-3/2}\left(\frac{v_{r}}{15\,{\rm km\, s^{-1}}}\right)^{-4}\end{multline} where $M_{1}$ and $M_{2}$ are the masses of the donor and accreting stars, respectively, $R_{2}$ is the radius of the accreting star, and $a$ is the semi-major axis of the binary. $v_{r}$ is the relative velocity of the wind and the accretor, and $\eta$ is the parameter indicating the reduction in the specific angular momentum of the accreted gas caused by the increase in the cross-section for accretion from the low-density side (where $\eta\sim0.1$ and $\eta\sim0.3$ for isothermal and adiabatic flows, respectively; \citealp{liv+86}); they approximate $v_{r}$ to be a constant equal to $15$ km s$^{-1}$.
From this condition, it turns out that a disk around a main sequence star with $R_{2}\sim R_{\odot}$ could be formed up to orbital separations of $a\sim37$ AU; for a WD accretor an orbital separation of even $a\sim60$ AU is still possible. Note, however, the strong dependence on the wind velocity, which could produce a few times wider or closer accreting systems for the range of $5-20$ km s$^{-1}$ possible in these winds \citep[e.g. ][]{ken86,hab+96}. Since the peak of the typical binary separation distribution is at this separation range \citep{duq+91}, we conclude that a non-negligible fraction of all binaries should evolve to form accretion disks during their evolution. AGB stars can lose a large fraction of their mass to their companion. In a simple estimate, a fraction of $(R_{a}/2a)^{2}$ of the mass, where $R_{a}$ is the radius of the Bondi-Hoyle accretion cylinder (i.e. gas having impact parameter $b<R_{a}=2GM_{2}/v_{r}^{2}$), is transferred to the companion accretion disk. The total mass transferred from the AGB star could therefore range between $\sim0.1-20$ per cent of the total mass lost from the AGB star (for wind velocities of $5-15$ $km\, s^{-1}$ and separations between $10-40$ AU; with the smallest separation and largest mass corresponding to the largest fraction and vice versa). The total mass going through the accretion disk could therefore range between $\sim10^{-3}-1$ $M_{\odot}$ (for the range of parameters and donor star masses of $1-7$ $M_{\odot}$). Accordingly, the accretion rate onto the star could have a wide range, with rates as high as $\sim10^{-4}$ $M_{\odot}$ yr$^{-1}$ and as low as $10^{-7}$ $M_{\odot}$ yr$^{-1}$ \citep[See also][for detailed discussion of mass loss rates]{win+00}. Red giants before the AGB stage lose mass at slower rates, with $\dot{M}\sim10^{-10}-10^{-8}$ $M_{\odot}$ yr$^{-1}$ \citep{ken86}.
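As a consistency check of the numbers quoted above (a sketch using only the fiducial parameter values appearing in Eq. 1), setting $j_{a}/j_{2}=1$ for a MS accretor with all other factors at their fiducial values yields the maximal separation for disk formation, while the Bondi-Hoyle radius fixes the accreted mass fraction:

```latex
\begin{gather*}
1=7.2\left(\frac{a_{\max}}{10\,{\rm AU}}\right)^{-3/2}
\;\Longrightarrow\;
a_{\max}=10\,{\rm AU}\times7.2^{2/3}\approx37\,{\rm AU}\, ,\\
R_{a}=\frac{2GM_{2}}{v_{r}^{2}}\approx7.9\,{\rm AU}
\quad (M_{2}=1\,M_{\odot},\; v_{r}=15\,{\rm km\,s^{-1}})\, ,\\
\left(\frac{R_{a}}{2a}\right)^{2}\approx\left(\frac{7.9}{20}\right)^{2}\approx0.16
\quad {\rm at}\; a=10\,{\rm AU}\, ,
\end{gather*}
```

consistent with the upper end ($\sim20$ per cent) of the accreted fraction range quoted above.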
Such a range of accretion rates at the different stellar evolutionary stages is comparable to the range expected and observed in regular ('first generation') protoplanetary disks at different stages of their evolution \citep[e.g. ][for a review]{ale08}. \subsection{Circumbinary disks around close binaries} Formation of circumbinary disks in wide binaries is of less interest for the formation of second generation planets. A dynamically stable configuration of disks and/or planets would require them to have very wide separations (a few times the binary orbit; \citealp{hag09}). Disks around wide binaries would have even wider configurations, in which regions planet formation is less likely to occur \citep{dod+09d}. The evolution of close binaries could be quite complicated. Close binaries could evolve through a common envelope stage, which is currently poorly understood. Such binaries, and even wider binaries which do not go through a common envelope stage but tidally interact, are likely to inspiral and form shorter period binaries during the binary post-MS evolution (see 3.1). The formation of circumbinary disks from the material lost from the evolving component of the binary has hardly been discussed in the literature, and suggested models are highly uncertain \citep[see][also Soker, private communication, 2010]{aka+08}. Nevertheless, the higher mass of the binary relative to the evolving AGB star always provides a deeper gravitational potential than that of the AGB star alone. In such a configuration, material from the slow AGB wind which escapes the AGB star potential would still be bound to the binary, and therefore fall back and accrete onto the binary. Indeed, many circumbinary disks are observed around evolved binary systems (see section 7).
Another possibility exists in which a third companion in a hierarchical triple (with masses $M_{1}=m_{1}+m_{2}$ for the close binary and $M_{2}$ for the distant third companion, and semi-major axes $a_{1}$ and $a_{2}$ for the inner and outer system, respectively) evolves and sheds its material on the binary. This possibility is more similar to the case of the formation of a circumstellar disk in a binary, in which case Eq. 1 above could be applied. In this case the radius of the accreting star is replaced by the binary separation of the close binary, i.e. $R_{2}\rightarrow a_{1}$, the mass of the accretor is taken to be that of the close binary system, etc. \subsection{Disks composition} AGB stars are thought to have a major contribution to the chemical enrichment of the galaxy \citep{van+97,tos07}. Such stars could chemically pollute their surroundings and create an environment with higher metallicity, in which later generations of stars form as higher metallicity stars \citep[see e.g.][for a recent review]{tos07}. In addition, large amounts of dust are formed and later on ejected from the atmospheres of evolved stars and their winds \citep[and references therein]{gai+09,gai09}. Given the high metallicity and dust abundances expected from, and observed in, the ejecta of evolved stars, the accretion disk formed from such material is expected to be metal and dust rich. The composition of such disks may therefore serve as an ideal environment for planet formation, as reflected in the correlation between the metallicity of stars and the frequency of exo-planetary systems around them \citep[e.g. ][]{fis+05}. \section{Second generation planets} Studies of first generation planet formation suggest that the accretion rates in regular protoplanetary disks, even in close binaries \citep{jan+08}, could be very similar to those expected in accretion disks formed following mass transfer. 
The composition of such disks may be even more favorable for planet formation due to their increased metallicity. Observationally, first and second generation disks (formed from the protostellar disks of newly formed stars, and from mass transfer in binaries, respectively) seem to be very similar in appearance, raising the possibility of planet formation in these disks, as already mentioned in several studies \citep{jur+98,zuc+08,mel+09}. The lifetime of AGB stars is of the order of up to a few Myrs (e.g. \citealp{mar+07}), comparable to that of regular, first generation, protoplanetary disks. In view of the discussion above, it is quite plausible that new, second generation planets could form on the appropriate timescales in second generation disks around relatively old stars, in much the same way as planets form in the protoplanetary disks of young stars. In the following we discuss the implications and evolution of such second generation planetary systems. Although very similar, the environments in which second generation planets form have several differences from the protoplanetary environment of first generation planets. As mentioned above, the composition of the disk material is likely to be more metal rich than in typical protoplanetary disks, and these planets should all form and be observed in WD binaries (or other evolved binaries or compact objects such as NSs or BHs), with the youngest possibly already existing in post-AGB binaries. Both the binarity and disk composition properties of second generation planets may not be unique, and could be similar to those in which first generation planets form and evolve. Indeed, many planets are found in binary systems or have metal rich host stars. Nevertheless, second generation planets form in much older systems than first generation ones and have a different source of material. 
Such differences could have important effects on the formation and evolution of second generation planets, which are also likely to be reflected in their observational signatures, as we now discuss. Second generation planets should be exclusively found in binaries with compact objects (most likely WDs), including binaries in which both components are compact. A basic expectation would be that the fraction of planet hosting WD binary systems should differ from that in MS binary systems with similar dynamical properties. Moreover, double compact object binaries (WD-WD, WD-NS etc.) may show a larger frequency of planets than single compact objects%
\footnote{Planets around an evolving donor star could be engulfed by the star and be destroyed. We are not interested in comparing the planet frequency around these donor stars, but rather the planet frequency around the accretor. Specifically, the frequency of planets around the MS star in MS-WD systems should be compared with the planet frequency around MS stars in MS-MS systems; similarly, the planet frequency around the WDs in WD-WD systems should be compared with the planet frequency around the WD in WD-MS systems.%
}. A correlation between planetary companions and their host star metallicity could show a difference between WD binary systems and MS binary systems. This may be a weak signature, given that the host star itself may accrete metal rich material from the companion. Nevertheless, the latter case could be an advantage for targeting second generation planet searches, by looking for them around chemically peculiar stars in WD systems \citep[e.g.][]{jef+96,jor+99,bon+03}. The high metallicity of second generation disks originates from the evolved star that produced them, and need not be related to the metallicity of their larger scale environments. For this reason, second generation planets can form even in metal poor environments such as globular clusters, or more generally around metal poor stars. 
This possibility could be tested by searching for WD companions to planet hosting, but metal poor, stars (e.g. \citealp{san+07}). Reversing the argument, one can direct planet searches around metal poor stars (which typically do not find planets; \citealp{soz+09}) to look for them in binary systems composed of a metal poor star and a WD. Similarly, planet hosting stars in the metal poor environments of globular clusters are likely to be members of binary WD systems. Note, however, that given their formation in binaries, second generation planets are not expected to exist in globular cluster cores, where dynamical interactions may destroy even relatively close binaries. Such second generation planetary systems should still be able to exist at the outskirts of globular clusters. Interestingly, the only planet found in a globular cluster (PSR B1620\textminus{}26; \citealp{bac+93}) is a circumbinary planet around a WD-NS binary, found in the outskirts of a globular cluster, as might be expected from our discussion (see section 7 for further discussion of this system). The age of second generation planets, if it could be measured, should be inconsistent with, and much younger than, their stellar host age (such implied inconsistencies could be revealed in some cases, e.g. the WASP-18 planetary system; \citealp{hel+09}). Their typical composition is likely to show irregularities and be more peculiar and metal rich, relative to that of typical first generation planets. The second generation protoplanetary disk in which second generation planets form does not have to be aligned with the original protostellar disk of the star and its rotation direction, but is more likely to be aligned with the binary orbit (though not necessarily; the accretion disk formed from the wind of a wide binary may form at a more arbitrary inclination). 
Studies of the formation and stability of planets on circumbinary orbits (see \citealp{hag09} for a review) suggest that they can form and survive in such systems. Recently, several circumbinary planet candidates have been found \citep{qia+09,qia+10,qia+10b,lee+09}, possibly confirming such theoretical expectations. Circumbinary second generation disks such as discussed above could therefore serve, in principle, as nurseries for the formation and evolution of second generation planets. The evolution of second generation planets in circumstellar and circumbinary disks could be very different. In the following we highlight some important differences between the types of orbits possible for second generation planets, and the ways in which these could serve as a smoking gun signature for the effects and evolution of second generation planets and/or disks. \subsection{Orbital phase space of second generation planets} \subsubsection{Circumstellar planets} Stable circumstellar planetary systems in binaries could be limited to close separations from their host star, since planets may be prohibited from forming at wider orbits, or become unstable at such orbits, which are more susceptible to the perturbations from the stellar binary companion \citep[and references therein]{hag09}. During the post-MS evolution of a wide binary, its orbit typically widens (due to mass loss), therefore allowing planets to form and survive at wider circumstellar orbits. This larger orbital phase space, however, is open only to second generation planets formed after the post-MS evolution of the binary (see also \citealp{the+09}, for a somewhat related discussion of the dynamical evolution of such a forbidden zone due to perturbations in a stellar cluster). Let us illustrate this by a realistic example. Consider a MS binary with stellar components of 1.6 and 0.8 $M_{\odot}$ and a separation of $a_{b}=12$ AU on an orbit of 0.3 eccentricity. 
The secondary's protoplanetary disk would be truncated at about 2-2.5 AU in such a system \citep{art+94}, and a planetary orbit would become unstable at similar separations \citep{hol+99}. Specifically, \citeauthor{hol+99} find \begin{multline} a_{c}/a_{b}=(0.464\pm0.006)+(-0.38\pm0.01)\mu\\+(-0.631\pm0.034)e_{b} +(0.586\pm0.061)\mu e_{b}\\+(0.15\pm0.041)e_{b}^{2}+(-0.198\pm0.047)\mu e_{b}^{2},\label{eq:a_crit_stellar}\end{multline} where $a_{c}$ is the critical semi-major axis below which the orbit is still stable, $\mu=M_{2}/(M_{1}+M_{2})$, $a_{b}$ and $e_{b}$ are the semi-major axis and eccentricity of the binary, and $M_{1}$ and $M_{2}$ are the masses of the primary and secondary stars, respectively. Giant planets are not likely to form in such a system, since such planets are thought to form far from their host star (although they may migrate later on to much smaller separations, e.g. forming hot Jupiters), where icy material is available for the initial growth of their planetary embryos \citep{pol+96}. In fact, it is not clear if any type of planet could form under such hostile conditions, in which strong disk heating is induced by perturbations from the stellar companion. The smallest binary separations in which planets could form are thought to be \textasciitilde{}20 AU for giant planets \citep[and references therein]{hag09}, although some simulations suggest that terrestrial planets may form even in closer systems, near the host star (up to \textasciitilde{}0.2$q_{b}$, where $q_{b}$ is the stellar binary pericenter distance; \citealt{qui+07}). Indeed, the smallest separation observed for planet hosting MS binaries is \textasciitilde{}20 AU. We can conclude that first generation circumstellar giant planets are not likely to form in the binary system considered here. In our example the pericenter distance of the initial system is $q=a_{b}(1-e)=8.4$ AU, i.e. planets, and especially gas giants, are not likely to form in such a system. 
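Eq. \ref{eq:a_crit_stellar} is straightforward to evaluate. A minimal sketch (our own helper, keeping only the best-fit coefficients and dropping their uncertainties; the choice of $\mu$ for the example is ours, following the convention stated in the text):

```python
def a_crit_circumstellar(mu, e_b, a_b=1.0):
    """Holman & Wiegert (1999) fit for the critical semi-major axis
    (in units of a_b, or in AU if a_b is given in AU) below which a
    circumstellar planetary orbit in a binary remains stable."""
    return a_b * (0.464 - 0.380 * mu - 0.631 * e_b
                  + 0.586 * mu * e_b + 0.150 * e_b ** 2
                  - 0.198 * mu * e_b ** 2)

# The example binary in the text: 1.6 + 0.8 Msun, a_b = 12 AU, e_b = 0.3.
print(a_crit_circumstellar(mu=0.8 / 2.4, e_b=0.3, a_b=12.0))  # ~2.6 AU
```

The result of a few AU is consistent with the 2-2.5 AU truncation and instability limit quoted above.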
Back to our example: at a later epoch, the more massive stellar component in this system evolves off the main sequence to end its life as a WD of \textasciitilde{}0.65 $M_{\odot}$ \citep[see e.g. ][]{lag+06}. Due to the adiabatic mass loss from the system it may evolve to a larger final separation, given by \citep[e.g.][and references therein]{lag+06} \begin{equation} a_{f}=\frac{m_{i}}{m_{f}}a_{i},\label{eq:adiabatic}\end{equation} where $m_{f}\,(=0.8+0.65=1.45\, M_{\odot})$ is the final mass of the system after its evolution, and $m_{i}\,(=1.6+0.8=2.4\, M_{\odot})$ and $a_{i}\,(=a_{b}=12$ AU$)$ are the initial mass and separation of the system, respectively (the eccentricity does not change in this case). We now find $a_{f}=(2.4/1.45)\, a_{i}=19.8$ AU [more detailed calculations, using the binary evolution code by \citet{hur+02}, give a similar scenario]. At such a separation even circumstellar giant planets could now form in the system \citep{kle+08}, i.e. second generation planets could form in this binary either at a few AU around the star, or closer if they migrated after their formation. Therefore, any circumstellar giant planet observed around the MS star in this system would imply that such a planet must be a second generation planet, since it could not have formed as a first generation planet in the pre-evolved system. Thus, such cases could serve as a smoking gun signature and a unique tracer for the existence and identification of second generation planets. In fact, the example chosen here is not arbitrary. The final configuration of the system in this example is very similar to that of the planetary system observed in the WD binary system Gl 86. A giant planet of $4\, M_{Jup}$ has been found at $\sim0.1$ AU from its MS host star, which has a WD companion at a separation of $\sim18$ AU \citep{mug+05,lag+06}. 
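The adiabatic widening of Eq. \ref{eq:adiabatic} can be checked with a one-line helper (ours, reproducing the worked example under the stated assumption of slow, isotropic mass loss):

```python
def widened_separation(a_i_au, m_i_msun, m_f_msun):
    """Final binary separation after slow (adiabatic) mass loss,
    a_f = (m_i / m_f) * a_i; the eccentricity is unchanged."""
    return a_i_au * m_i_msun / m_f_msun

# The example in the text: (1.6 + 0.8) -> (0.65 + 0.8) Msun, a_i = 12 AU.
print(widened_separation(12.0, 2.4, 1.45))  # about 19.9 AU
```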
The configuration of this system possibly indicates that it is indeed a bona fide second generation planetary system (see also section 7). In Fig. 2, we use more detailed stellar evolution calculations (using the binary stellar evolution code BSE; \citealp{hur+02}) to show a range of final vs. initial semi-major axes of circular binary systems with initial masses $M_{1}=0.8\, M_{\odot}$ and $M_{2}=1.6\, M_{\odot}$, in which the higher mass star evolves to become a WD (of mass $\sim0.6\, M_{\odot}$). Also shown in this figure is the critical semi-major axis $a_{c}$ in both the initial (pre-evolved MS-MS binary) and the final (evolved MS-WD binary) configuration. As can be seen, the pre and post evolution critical semi-major axes differ significantly, presenting a region of orbital phase space open only to second generation planets but forbidden for first generation ones. \begin{figure} \includegraphics[scale=0.4]{fig2}\caption{The configuration of evolved binary systems vs. their pre-evolved configuration. The pre-evolved binary is a MS-MS binary on a circular orbit with component masses $M_{1}=0.8\, M_{\odot}$ and $M_{2}=1.6\, M_{\odot}$, and the evolved binary is a MS-WD binary on a circular orbit with component masses $M_{1}=0.8\, M_{\odot}$ and $M_{2}=0.6\, M_{\odot}$. The solid thick line shows the final semi-major axis of binary systems vs. their initial, pre-evolved, semi-major axis. The solid thin line shows the critical semi-major axis $a_{c}$ below which circumstellar planetary orbits around $M_{2}$ are stable in the pre-evolved MS-MS system. The dashed line shows the critical semi-major axis $a_{c}$ below which circumstellar planetary orbits around $M_{2}$ are stable in the evolved MS-WD system. 
The region between these lines is forbidden for first generation planetary orbits in the pre-evolved system, but allowed for second generation orbits in the evolved system.} \end{figure} \subsubsection{Circumbinary planets} In a sense, the requirements for a stable circumbinary planetary system present a mirror image of those needed for a circumstellar system in a binary. The orbital separation of circumbinary planets should typically be a few ($>$2-4) times the binary separation \citep{hol+99,mor+04,pie+07,pie+08,hag09}, in order not to be perturbed by the binary orbit. Specifically, \citeauthor{hol+99} find \begin{multline} a_{c}/a_{b}=(1.60\pm0.04)+(5.10\pm0.05)e_{b}+(-2.22\pm0.11)e_{b}^{2}\\+(4.12\pm0.09)\mu+(-4.27\pm0.17)e_{b}\mu\\+(-5.09\pm0.11)\mu^{2}+(4.61\pm0.36)e_{b}^{2}\mu^{2},\label{eq:a_crit_bin}\end{multline} where $a_{c}$ is the critical semi-major axis above which the orbit is still stable, $\mu=M_{2}/(M_{1}+M_{2})$, $a_{b}$ and $e_{b}$ are the semi-major axis and eccentricity of the binary, and $M_{1}$ and $M_{2}$ are the masses of the primary and secondary stars, respectively. Circumbinary planets are therefore not expected to be observed very close to their host stars. As with circumstellar second generation planets, the orbital phase space available for second generation planets differs from that of pre-existing first generation planets. The post-MS orbit of a relatively close pre-evolved binary (e.g. separation of 1-2 AU) could shrink through its evolution due to angular momentum loss in a common envelope or circumbinary disk phase [e.g. \citet{rit08} in the context of forming cataclysmic variables]. Second generation planets could therefore form much closer to such evolved binaries than any pre-existing first generation planets. 
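As with the circumstellar case, Eq. \ref{eq:a_crit_bin} can be evaluated directly. A minimal sketch (our own helper; best-fit coefficients only, uncertainties dropped):

```python
def a_crit_circumbinary(mu, e_b, a_b=1.0):
    """Holman & Wiegert (1999) fit for the critical semi-major axis
    (in units of a_b) above which a circumbinary planetary orbit
    around a binary remains stable."""
    return a_b * (1.60 + 5.10 * e_b - 2.22 * e_b ** 2
                  + 4.12 * mu - 4.27 * e_b * mu
                  - 5.09 * mu ** 2 + 4.61 * e_b ** 2 * mu ** 2)

# An equal-mass circular binary: stable circumbinary orbits lie beyond
# roughly 2.4 binary separations, consistent with the >2-4 a_b range above.
print(a_crit_circumbinary(mu=0.5, e_b=0.0))
```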
Therefore, similar to the circumstellar case (but now considering the inward migration of the binary), circumbinary planetary systems observed to have relatively close orbits around their evolved host binary would point to their second generation origin. As described below (section 7), such candidate planetary systems have recently been observed. We note, however, that a caveat for such an argument is that pre-existing first generation planets, if they survive the post-MS evolution, could migrate inward (together with the binary, or independently) in the second generation disk. Nevertheless, even the latter case would reflect the existence and importance of a second generation protoplanetary disk and its interaction with first generation planets, as briefly discussed in the next section. Whether such inward migration is a likely consequence presents an interesting question for further studies. Fig. 3 is similar to Fig. 2, but now shows the critical semi-major axis $a_{c}$ for a circumbinary orbit in both the initial (pre-evolved MS-MS binary) and the final (evolved MS-WD binary) configuration. Again, the pre and post evolution critical semi-major axes differ significantly, presenting a region of orbital phase space open only to second generation planets but forbidden for first generation ones. \begin{figure} \includegraphics[scale=0.4]{fig3}\caption{The configuration of evolved binary systems vs. their pre-evolved configuration. The pre-evolved binary is a MS-MS binary on a circular orbit with component masses $M_{1}=0.8\, M_{\odot}$ and $M_{2}=1.6\, M_{\odot}$, and the evolved binary is a MS-WD binary on a circular orbit with component masses $M_{1}=0.8\, M_{\odot}$ and $M_{2}=0.6\, M_{\odot}$. The solid thick line shows the final semi-major axis of binary systems vs. their initial, pre-evolved, semi-major axis. The solid thin line shows the critical semi-major axis $a_{c}$ above which circumbinary planetary orbits are stable in the pre-evolved MS-MS system. 
The dashed line shows the critical semi-major axis $a_{c}$ above which circumbinary planetary orbits are stable in the evolved MS-WD system. The region between these lines is forbidden for first generation planetary orbits in the pre-evolved system, but allowed for second generation orbits in the evolved system.} \end{figure} We note in passing that even non-evolved (low mass) MS close binaries with orbits of a few days most likely evolved from wider binaries in triple systems, which shrank to their current orbit through the processes of Kozai cycles and tidal friction \citep{maz+79,egg06,tok+06,fab+07}. Formation of close circumbinary planetary systems even around short period MS binaries is therefore likely to be dynamically excluded, or would require a complicated and fine tuned dynamical history. Similarly, the existence of close planetary systems around blue straggler stars, which were likely formed through similar processes \citep{per+09a}, is also unlikely. \section{First generation planets in second generation disks} At the formation epoch of the second generation disk, the accreting star may already host a planetary system. The first generation planetesimals and/or planets could therefore serve as ``seeds'' for a much more rapid growth of second generation planets. In this case second generation planet formation might behave quite differently than regular planet formation, suggesting a stage where large planetesimals and planets co-exist with, and are embedded in, a large amount of gaseous material. The first generation planets and planetesimals could now go through an epoch of regrowth (rejuvenation) through accretion of the replenished material. The first generation planets could now grow to become much more massive than typical planets. Moreover, such planetary seeds could induce more efficient planet formation and produce more massive planets on higher eccentricity orbits \citep{arm+99}. 
A possible observational signature of these planets could therefore be their relatively larger masses, and possibly higher eccentricities. Whether such regrowth could even lead to the core accretion formation of exceptionally massive planets, effectively becoming brown dwarfs (in this case such brown dwarfs could now form much more often in the so called brown dwarf desert regime), is an interesting possibility. Note that in newly formed young binaries, late infall of material from the protostellar disk of one star onto its companion (the protoplanetary disk of one star may form earlier than that of its companion) may drive similar processes. Such a possibility could explain the existence of more massive close-in planets in planet hosting binary systems. If protoplanetary disks preferentially form earlier around the more (less) massive component of the binary, more massive close-in planets should preferentially form around these components, providing an observational signature for this process. In such regrowth scenarios, more massive planets are likely to form in binaries with closer separations, which could typically form more massive disks. The replenished gas may also drive a new epoch of planetary migration. Together with the change in planetary masses and/or the formation of additional new planets in the system, the previously steady configuration of the planetary system may now dynamically evolve into a new configuration. Such late dynamical reconfiguration could lead to the ejection of planets and planetesimals from the system, as well as to inward migration and/or possibly infall into their host stars. Ejected planets could become free floating planets. Alternatively, these could still be bound to the binary system, in which case further interactions with either the original host star or the companion would produce a more complicated dynamical history. 
Several possibilities open up in this case; the dynamical evolution could lead to a later ejection from the system, infall into one of the stars, or recapture into a more stable circumstellar or circumbinary orbit. All of these possibilities, including the possibility of planet exchange between the binary stars, are not restricted, however, to second generation exo-planets, and could be possible in first generation planetary systems hosted by binaries \citep[e.g. ][]{marz+05}. Infall of planets into their host stars may spin them up; stars in WD binary systems may therefore show higher rotational velocities. This possible signature, however, could also be induced by the material accreted from the second generation disk, rather than a planet infall. Given the possible misalignment between the second generation disk and the pre-existing planetary system, unique disk-planet configurations could be produced. Planets misaligned with the gaseous disk may both affect the disk through warping, and be affected by it \citep{mar+09}. Small relative inclinations are likely to be damped through the planets' interactions with the disk, re-aligning the planets \citep{cre+07}. Observations of two co-existing misaligned planetary systems, or even counter rotating planets in the same system, could, in principle, serve as a spectacular example of second generation planetary formation with pre-existing first generation planets. However, dynamical interactions in multiplanetary systems could possibly produce somewhat similar effects, although likely not the counter rotating configurations \citep{cha+08,jur+08}. Mass loss from the donor star in the binary can result in the dynamical evolution of relatively wide orbits into even wider orbits, due to the reduced gravitational potential of the system (see section 3.1). Alternatively, close binaries could become even closer due to gas drag and common envelope evolution. 
The change in the dynamical configuration of the binary could therefore have a major impact on the evolution of pre-existing planets, not discussed above (where it was assumed that the binary system has already evolved to a stable configuration, and only planetary formation and evolution processes take place). Pre-existing stable planetary systems could be destabilized at this stage due to the migration of the hosting stars, involving direct scattering by the stars and/or resonant interactions, similar, but with a greater amplitude, to asteroids and Kuiper belt objects in the Solar system, thought to scatter resonantly due to planet migration. These complex issues are beyond the scope of this paper, and will be explored elsewhere. Finally, we note in passing that in binaries of small separations (a few AU) even very low mass companions such as brown dwarfs or even massive planets (rather than a stellar binary companion) could develop accretion disks, under favorable conditions such as low wind velocities. Such disks would have very low masses; nevertheless, they might suffice for the formation of a new generation of moons or rings around these planets. \section{Second/third generation planet formation around compact objects and evolved stars} A few planetary systems and debris disks have been found to exist around compact objects, such as the pulsar planets \citep{wol+92}, debris disks around a neutron star \citep{wan+06}, planets around evolved stars \citep{sil+07,lee+09}, and the possible planets observed around WDs \citep{qia+09,qia+10,qia+10b}. These systems are generally thought to either form in a fall back disk of material following the post-MS evolution of the progenitor star (e.g. fall back material from a supernova; \citealt{lin+91}, but see also \citealp{liv+92}); or form around the MS star and survive the post-MS evolution stage of the star. 
Mass transfer in binaries poses an alternative and robust way of providing disk material to compact objects. Second (or possibly third) generation planets could therefore naturally form around many compact objects, and should typically exist around compact objects in double compact binary systems. Such systems may show differences relative to planets formed around MS stars due to the quite different radiation from the hosting (compact) star, as well as the difference in their magnetic fields. Some studies explored planet formation around neutron stars \citep[see ][for a recent review]{sig+08} and WDs \citep{liv+92}, but these issues and the issue of binarity in these systems are yet to be studied in more depth. In this context \citet{tav+92,ban+93} made some pioneering efforts relating to planet formation in NS binaries, although focusing on pulsar planets and the evaporation of the stellar companion to the NS as a source for protoplanetary disk material. Much more directly related are the later studies by \citet{liv+92} and \citet{bee+04}, aimed at explaining the pulsar planet PSR B1620\textminus{}26 (see section 7). This alternative scenario for the formation of planetary systems around compact objects suggests a different interpretation for planets in compact object systems; observations of such planetary systems might not reflect the survival of planets through the post-MS evolution of their host star, but rather their formation at an even later stage from the ashes of a companion star. Such systems \citep[e.g.][]{sch+05,max+06,sil+07,gei+09} should therefore be targets for searches for compact object companions. Note, however, that the formation of sdB stars may require the existence of a close companion \citep[see][and references therein]{heb09}, such as these observed planets. This, in turn, would suggest that a planet had already been in place prior to the sdB star formation in these systems. 
\section{Observational expectations} From the above discussion we can try to summarize and formulate basic expectations regarding second generation planetary systems and their host stars. These could serve both to identify the second generation origin of observed planetary systems and to provide guidelines and directions for targeted searches for second generation planets. \begin{enumerate} \item Second generation planets should exist in evolved binary systems, most frequently in WD-MS or WD-WD binaries. Such systems should therefore be the prime targets for second generation planetary system searches. Since WD binaries are relatively frequent \citep{hol+09}, second generation planets could be a frequent phenomenon (with the caution that our current understanding of planet formation, especially in such systems, is still very limited). Evolved binary systems may show a different frequency of hosting planetary systems (around the MS star in binary MS-WD systems) than MS binaries with similar orbital properties. Planet hosting compact objects are more likely to be part of double compact object binaries rather than be single. \item The host binary separations are likely to be between 20 AU and 200 AU for binaries hosting circumstellar planets, and up to a few AU for binaries hosting circumbinary planets. \item Planets in post-MS binaries could reside in orbital phase space regions inaccessible to pre-existing first generation planets in such systems (section 3.1). Observations of planets in such orbits could serve as a smoking-gun signature of their second generation origin. \item Second generation planets could form even around metal poor stars and/or in metal poor environments such as globular clusters; planetary systems in such environments are likely to be part of evolved binary systems. \item Planets showing an age inconsistency with (i.e. being younger than) their parent host star could be possible second generation planet candidates. 
In such systems one may therefore search for a compact object companion to the host star. \item Stars in WD binaries showing evidence for mass accretion from a companion star (e.g. chemically peculiar stars such as Barium and CH stars) could be more prone to have second generation planetary companions (for adequate binary separations)%
\footnote{Note that material from second generation disks and/or planets which was processed in the AGB star would not pollute the star with $^{6}$Li, as first generation planetary material would [e.g. see \citet{gon+06} for a review]. It is not clear to us, however, whether such differences could be detected, and what the required statistics are.%
}. \item Second generation planets could be more massive than regular exo-planets. This suggests that old planet hosting stars with extremely massive planets (or brown dwarf companions found in the brown dwarf desert) are more likely to be second generation planetary systems. Again, this could point to the existence of a compact object (most likely WD) companion at the appropriate separations. \item Planets around compact objects could be second generation planets formed through mass transfer accretion. These systems are therefore more likely to have a compact object companion. Also, polluted WDs, thought to reflect accretion of asteroids, might be more likely to have WD companions. \item The spin-orbit alignment between second generation planets and their host stars is likely to differ from that of first generation planets, and second generation planets might be more likely to show high relative inclinations between different planets in multiple planetary systems. \item A statistical correlation between planetary companions and their host star metallicity could show a difference between WD binary systems and MS binary systems (but this may be a weak signature, given that the host star itself accretes metal rich material). 
Chemically peculiar stars in WD binaries may serve as good second generation planetary host candidates. \item The MS stars in WD-MS binaries may also show higher rotational velocities, due to planetary infall, although such phenomena could also be induced by the re-accretion of matter from the second generation disk. In either case, stars with high rotational velocities in WD binaries could be better candidate hosts of second generation planets. \end{enumerate} \section{Candidate second generation planetary systems } Observationally, many post-AGB binary stars show evidence for having either circumstellar or circumbinary disks surrounding them \citep{van04,der+06,hin+07,van+09}. Such disks have many similarities with protoplanetary disks as observed in T-Tauri stars, suggesting that new generations of planets could form there \citep{zuc+08,mel+09}. As discussed above, there are several observational expectations for second generation planetary systems. Currently, several observed planetary systems and planetary candidates may be consistent with such expectations, and may therefore be considered as candidate second generation planets. These systems are found in evolved binary systems, and include both circumstellar planets such as in GL 86 \citep{que+00,lag+06,mug+05} and HD 27442 \citep{mug+07} as well as several circumbinary planets and candidates: PSR B1620\textminus{}26 \citep{bac+93}, HW Vir \citep{lee+09}, NN Ser \citep{qia+09} and DP Leo \citep{qia+10b}. These systems are consistent with being second generation planetary systems, as their host binary separations lie in the region where second generation disks could form. Interestingly, according to \citet{lag+06}, the planetary orbit of GL 86 could have been in the forbidden region for first generation planets (i.e. where they could not have formed and survived) for the most plausible parameters for the WD progenitor in this system, which are consistent with its cooling age \citep{lag+06,desi+07}.
Although this discrepancy could possibly be circumvented by taking different age and mass estimates for the progenitor, under some assumptions \citep{desi+07}, a second generation origin for this planet presents an alternative, simple and natural explanation. Besides its consistency with being a second generation planet, Gl 86 may therefore show the smoking gun signature of a second generation planetary system. Similarly, the circumbinary planet candidates in HW Vir, NN Ser and DP Leo are found to be in regions likely to be inaccessible or less favorable for first generation planets in the progenitor pre-evolved binary systems. These planets orbit their evolved binary host at separations of only a few AU, where the pre-stellar-evolution orbits of these binaries might have been as wide as a few AU (e.g. \citealp{rit08}), i.e. these planets could not have formed in-situ as first generation planets in their current position. However, these orbits are accessible for in-situ formation of second generation planets around the already evolved and much shorter period binaries observed today. The globular cluster pulsar planet PSR B1620\textminus{}26 (\citealp{bac+93}) is another highly interesting second generation candidate. It is the only planet found in a globular cluster (a metal poor environment) and it is a circumbinary planet around a WD-NS binary, found in the outskirts of its host cluster, as might be expected from our discussion above of second generation planets in globular clusters (see section 3). Several studies suggested scenarios for the formation of this system and its unique and puzzling configuration \citep{sig+05,sig+08}. These usually require a highly fine-tuned and complex dynamical and evolutionary history. A second generation origin could give an alternative and more robust explanation {[}as also suggested by \citet{liv+03} and studied by \citet{bee+04}{]}.
The high relative inclination observed for this planet suggested that even in the second generation scenario an encounter with another star is required to explain this system \citep{bee+04}. However, recent studies show that high inclinations could also occur from planet-planet scattering in a multiple planet system \citep{cha+08,jur+08}. Note that the existence of another planet in this system is highly unlikely for the favorable formation scenarios discussed by \citet{sig+08}, but could be a natural consequence of the second generation scenario. This could motivate further observational study of this system. Similarly, finding additional globular cluster planets, preferably in WD binary systems, would strongly support the second generation origin of globular cluster planets (see also \citealp{sig+05}). Other, possibly weaker, candidate second generation planets would be those planets found around metal poor stars \citep[e.g. ][]{san+07}. Finding a WD companion for such stars, however, would give strong support to the second generation scenario and the identification of these systems as second generation planets. We conclude that several second generation candidate planets have already been observed, and show properties consistent with the scenario discussed in this paper. Moreover, the current properties of these systems may pose problems for our current (poor) understanding of first generation planet formation and evolution in evolved binary systems, which could be naturally solved in the context of second generation planets. Observed WD-MS binary systems (e.g. \citealp{reb+09}) should therefore serve as potentially promising targets for exo-planet searches. \section{Conclusions} In this paper we suggested and discussed the formation of second generation planetary systems in mass transferring binaries.
We found that this is a possible route for planet formation in old evolved systems and around compact objects in double compact object binaries. We presented possible implications for this process and the planetary systems it could produce, and detailed the possible observational signatures of second generation planetary systems. We also pointed out a few currently observed planetary systems with properties suggestive of a second generation origin. The possibility of second generation planets may open new horizons and suggest new approaches and targets for planetary searches and research. It suggests that stellar evolution processes and stellar deaths may serve as the cradle for the birth and/or rejuvenation of a new generation of planets, rather than just being the death throes or hostile hosts for pre-existing planets. In particular such processes could provide new routes for the formation of habitable planets, opening the possibilities for their existence and discovery even in places previously thought less likely to harbor them. The environments of old stars and more so of compact objects could be very different from those of young stars. Such different environments can strongly affect the formation of second generation planets and possibly introduce unique processes involved in their formation and evolution. The discovery and study of second generation planets could therefore shed new light on our understanding of both planet formation and binary evolution, and drive further research on the wide range of novel processes opened up by this possibility. \acknowledgements{ {I am most grateful to Scott Kenyon for many illuminating discussions. I would also like to thank Scott Kenyon and Noam Soker for helpful comments on an earlier version of this manuscript.} \newpage \bibliographystyle{apj}
\section{Introduction} Bitcoin was born in 2009 and since then its value and popularity have been rapidly increasing, to the point where it is now the most used, assessed and priced cryptocurrency of all. Bitcoin is a pure peer-to-peer cryptocurrency \cite{nakamoto2008bitcoin} where all transactions are stored in a public shared ledger called blockchain that cannot be manipulated or changed \cite{crosby2016blockchain}. Bitcoin is decentralized, which means that it is not controlled by any financial institution but is regulated by everyone in the Bitcoin network: its blockchain architecture maintains the system without ambiguity \cite{narayanan2016bitcoin}. While transactions within the Bitcoin network are openly available, Bitcoin user identity is non-transparent and protected by anonymity. This circumstance, combined with the unregulated nature of the Bitcoin market, has brought many new actors to the Bitcoin network who use the cryptocurrency for illicit operations. Approximately one-quarter of Bitcoin users and half of all Bitcoin transactions are associated with illegal activity \cite{foley2018sex}, accounting for an annual amount of around \$72 billion (2018 report). Conventional law-enforcement strategies tackling illegal financial operations such as money laundering or transactions funding criminal operations are typically based on complete knowledge of each actor's identity, while details about financial transactions are controlled by banks and thus unknown \cite{moser2014towards}. Within the Bitcoin network, these circumstances are reversed: incomplete knowledge of identities restricts traceability and transparency of operations, in turn promoting further increase of illegal activities. This calls for novel methods to attack anonymity within the Bitcoin network, aiming to uncover Bitcoin entity categories.
Among the most active categories of entities is the exchange, which represents a digital marketplace where traders can buy and sell cryptocurrencies using different fiat (money made legal tender by a government decree) or other digital currencies. Exchanges thus constitute the "front and exit doors" to the cryptocurrency world and are ideal to hide illicit operations, as documented in \cite{moore2013beware}. Another category is the darknet market. These markets are e-commerce platforms where users can find drugs, weapons and any kind of goods or services that are illegal in most countries. These cryptomarkets use electronic currencies to facilitate licit and illicit transactions among their users \cite{christin2013traveling}. Further, so-called mixers represent services that allow users to obscure operations, as presented in \cite{moser2013anonymity}. At the same time mixed transactions increase the privacy of the users, and they can be used for money laundering of illegal funds. Being able to classify anonymous Bitcoin entities according to such categories would increase transparency and would facilitate linking blockchain information with real actors to uncover illegal activities. Current techniques attacking anonymity often try to cluster addresses and apply heuristic assumptions combined with labelled data from external sources like markets, forums or social media in order to determine address owners in the real world \cite{meiklejohn2013fistful}. However, gathering external data and combining them with Bitcoin information is tedious and could be limited due to privacy restrictions. This motivates the implementation of a model able to characterize different behaviours in the Bitcoin network by analyzing the pure blockchain information only; by extracting transactions and by recognizing patterns using machine learning approaches. 
In this paper, we present a novel approach to decrease Bitcoin anonymity based on a cascading machine learning model, using entity, address and motifs data as inputs. We apply a "cascade" of classifiers, performing a first entity classification based on address, 1\_motif, and 2\_motif data, which is then used as input for a second classification step, which combines those classification results with entity information from the blockchain. Notably, our approach only requires a few features that can be directly extracted from Bitcoin blockchain data. In order to compare benefits and limits of the proposed approach, two experiments are presented: firstly, a simple classifier is trained based on pure entity information gathered from the blockchain. In the second experiment, a final classifier is trained using the enriched data set generated by our cascading approach. We aimed to detect six different types of Bitcoin entity behaviours. Overall, three classifier models are tested and compared: Adaboost, Random Forest and Gradient Boosting. The rest of the paper is organized as follows. Section~\ref{sec:related} describes the related work. After that, Section~\ref{sec:graph} presents the graph model used and Section~\ref{sec:data} shows an overview of the used data sets. Section~\ref{sec:machine} describes the implemented machine learning models and Section~\ref{sec:result} presents the obtained results. Finally, in Section~\ref{sec:conclusions}, we draw conclusions and provide guidelines for future work. \section{Related work}\label{sec:related} User anonymity has probably been the key factor for the success of cryptocurrencies and has promoted illegal activities within the Bitcoin network. Yet, several studies determine that current measures adopted by the Bitcoin protocol are not sufficient to protect the privacy of its users \cite{meiklejohn2015privacy}, \cite{androulaki2013evaluating}, opening up possibilities to attack Bitcoin anonymity. 
One of the first transaction analyses is documented in \cite{ron2013quantitative}, where typical behaviors of Bitcoin users are detected based on how they spend cryptocurrencies, how they keep the balance in their accounts, and how they move Bitcoins between their various accounts. Herrera-Joancomart{\'\i} \cite{herrera2015research} presents a review on Bitcoin anonymity, concluding that anonymity can be reduced by address clustering or by gathering information from various peer-to-peer networks. This technique is also advocated in \cite{koshy2014analysis}, where conservative constraints (patterns) are applied for address clustering, and in \cite{liao2016behind} where information gathered from online forums is used to characterize the CryptoLocker, a family of ransomware. Similarly, in \cite{fleder2015bitcoin}, information scraped from online forums and social media is decisive for simulating an attacker and for summarizing the activity of both known and unknown Bitcoin users. In \cite{biryukov2014deanonymisation}, a generic method to deanonymize a significant fraction of Bitcoin users by correlating their pseudonyms with public IP addresses is described. Reid et al. \cite{reid2013analysis} demonstrate how it is possible to associate many public-keys with each other, using a map of the topological network and external identifying information in order to investigate a large theft of Bitcoins. Several recent studies have exploited machine learning algorithms for Bitcoin analysis. In \cite{hirshman2013unsupervised}, an unsupervised learning model is presented with the aim to identify atypical transactions related to money laundering. Monamo et al. \cite{monamo2016unsupervised} introduce a k-means classifier for object clustering and fraudulent activity detection in Bitcoin transactions.
Another study on detection of anomalous behavior, suspicious users and transactions is presented in \cite{pham2016anomaly}, where three unsupervised learning methods are applied to two graphs generated by the Bitcoin transaction network. Further, a supervised machine learning algorithm is used by \cite{harlev2018breaking} to uncover Bitcoin anonymity using a method for predicting the type of yet-unidentified entities. In \cite{bartoletti2018data}, data mining techniques are used to implement and train a classifier to identify Ponzi schemes in the Bitcoin blockchain and in \cite{mcnally2018predicting} a Bayesian optimized recurrent neural network (RNN) and a Long Short Term Memory (LSTM) are implemented to predict the direction of Bitcoin price in USD. Recently, an interesting approach is given in \cite{ranshous2017exchange}, where the concept of motifs is introduced to blockchain analysis. The authors performed an analysis of the transaction directed hypergraph in order to identify several distinct statistical properties of exchange addresses. They were able to predict whether an address is owned by an exchange with $>80\%$ accuracy. The introduction of hypergraphs (or directed hypergraphs) proved beneficial due to their significant advantages over a complex graph structure typically derived from Bitcoin networks. In \cite{jourdan2018characterizing}, the motif concept is further developed and is combined with multiple features (entity, address, temporal, centrality) to obtain a comprehensive entity classification into five categories: Exchange, Service, Gambling, Mining Pool and DarkNet marketplace. Using a total of $315$ features, a global accuracy of $0.92$ could be achieved. Inspired by the good classification results presented in \cite{jourdan2018characterizing}, we present here a novel machine-learning-based approach to attack Bitcoin anonymity, making use of motifs as introduced by Ranshous et al.
and allowing for multi-class classification of Bitcoin entities as in \cite{jourdan2018characterizing}, yet aiming to provide a straightforward methodology that relies on fewer, well-defined features. To achieve this, we introduce a novel cascading machine learning model for Bitcoin data analysis. The main idea is to implement a cascade of classifiers, so that outgoing classification results can be joined and can be used to enrich a final classification. \section{Graph Model}\label{sec:graph} \subsection{Blockchain Graph Model}\label{blockchain_model} Bitcoin transactions have a natural graph structure, with a fundamental example being the address-transaction graph (Figure \ref{fig:address}). This graph is directly obtained by using the information gathered from the blockchain and provides an estimation of the flow of Bitcoins linking public key addresses over time. The vertices represent the addresses $(a\textsubscript{1},a\textsubscript{2},...,a\textsubscript{N})$ and the transactions $(tx\textsubscript{1}, tx\textsubscript{2},..., tx\textsubscript{M})$. The directed edges (arrows) between entities and transactions indicate the incoming relations, while directed edges between transactions and entities correspond to outgoing relations. Each directed edge can also include additional features such as values, time-stamps, etc. \begin{figure}[!htbp] \centering \includegraphics[scale=0.5]{images/address.eps} \caption{\textit{Example of address-transaction graph}} \label{fig:address} \end{figure} To improve anonymity in the network, users are encouraged to generate a new Bitcoin address for each new transaction, which is a common advice for the correct usage of Bitcoin\footnote{https://bitcoin.org/en/protect-your-privacy}. Due to this procedure, several addresses belong to the same logical user, so that a simplification is possible by introducing the concept of \textit{entities}. 
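As a concrete illustration, the address-transaction graph described above can be held as a simple adjacency map, with edges from input addresses to the transaction and from the transaction to output addresses. This is a minimal sketch; the transaction records and address names are hypothetical toy data, not actual blockchain content:

```python
from collections import defaultdict

# Hypothetical sample transactions: each entry maps a transaction id to the
# addresses funding it (inputs) and the addresses receiving from it (outputs).
transactions = {
    "tx1": {"inputs": ["a1", "a2"], "outputs": ["a3"]},
    "tx2": {"inputs": ["a3"], "outputs": ["a1", "a4"]},
}

def address_transaction_graph(txs):
    """Build the directed address-transaction graph as an adjacency map.

    Edges run from each input address to the transaction (incoming relation)
    and from the transaction to each output address (outgoing relation).
    """
    graph = defaultdict(list)
    for tx_id, tx in txs.items():
        for addr in tx["inputs"]:
            graph[addr].append(tx_id)   # address -> transaction
        for addr in tx["outputs"]:
            graph[tx_id].append(addr)   # transaction -> address
    return dict(graph)

g = address_transaction_graph(transactions)
```

Edge attributes such as values or time-stamps could be attached by storing tuples instead of bare identifiers.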
An entity is defined as a person or organization that controls or can control multiple public key addresses. This definition allows us to transform the address-transaction graph into the entity-transaction graph (Figure \ref{fig:entity}). \begin{figure}[!htbp] \centering \includegraphics[scale=0.5]{images/entity.eps} \caption{\textit{Example of entity-transaction graph obtained by address clustering}} \label{fig:entity} \end{figure} The new graph is obtained by grouping addresses belonging to the same user into entities (address clustering). This operation is not trivial; however, several heuristic properties have already been presented with the aim of aiding the clustering process, for example in \cite{androulaki2013evaluating}, \cite{koshy2014analysis} and \cite{ermilov2017automatic}. In the obtained graph, vertices represent the entities $(e\textsubscript{1},e\textsubscript{2},...,e\textsubscript{K})$ and the transactions $(tx\textsubscript{1},tx\textsubscript{2},...,tx\textsubscript{M})$. Similar to the address-transaction graph, directed edges between entities and transactions indicate the incoming relations, while directed edges between transactions and entities correspond to outgoing relations. The entity-transaction graph (Figure \ref{fig:entity}) summarizes the network well and constitutes an easily understandable representation of the money flow within the network. \subsection{Motifs Graph Model}\label{motifs_model} Graph motifs were introduced in \cite{lacroix2006motif} and were motivated by applications in bioinformatics, specifically in metabolic network analysis. However, as shown in Section \ref{sec:related}, prior studies such as \cite{ranshous2017exchange} have introduced the concept of motifs to Bitcoin analysis. In this paper, a definition of $N\_motif$ is used, starting from the generalized concept introduced in \cite{jourdan2018characterizing}.
\begin{theorem} A $N\_motif$ is a path from the entity-transaction graph with length $2N$ that starts and ends with an entity. Let $(e\textsubscript{1},..,e\textsubscript{M}) \in{E}$ be a class of entities and $(t\textsubscript{1},..,t\textsubscript{N}) \in{T}$ be a class of transactions, with $M\leq{N+1}$, then: \[N\_motif ={(e\textsubscript{1},t\textsubscript{1},...,t\textsubscript{N},e\textsubscript{M})}\] in which at least one output from each transaction must be an input to the next transaction. The term \textit{branch} is used here to refer to a path in the motif graph that begins and ends with an entity passing through exactly one transaction. If a single branch of the graph has the same entity as input and output ($e\textsubscript{j}=e\textsubscript{j+1}$), the branch is called Direct Loop, otherwise it is called Direct Distinct. From the motif definition it is clear that all transactions are ordered in time, which means that $\tau(t\textsubscript{1})<\tau(t\textsubscript{2})<..<\tau(t\textsubscript{N})$, where $\tau$ represents a transaction time. \end{theorem} Here, we use the $1\_motif$ and $2\_motif$ concepts. The $1\_motif$ represents the relation between two entities (at least one distinct), while the $2\_motif$ is the relation between three entities (at least one distinct) involved in two consecutive transactions. \section{Data Overview}\label{sec:data} We considered the whole Bitcoin blockchain data created until February $5$th $2019$, $08$:$13$:$31$ AM, corresponding to $561$,$620$ blocks, which contain about $380$,$000$,$000$ transactions and involve more than $1$,$000$,$000$,$000$ addresses. This data was then combined with information available on the WalletExplorer\footnote{https://www.walletexplorer.com/}, a benchmark platform for entities detection, which represents a collection of information about different known entities that have been detected until today. 
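A minimal sketch of how $1\_motif$ and $2\_motif$ paths could be enumerated from time-ordered, entity-level transactions. This simplifies the real data by assuming a single sending entity per transaction; all identifiers are illustrative:

```python
# Hypothetical entity-level transactions: a single sending entity ("src"),
# receiving entities ("dst") and a timestamp (a simplification of real data).
txs = [
    {"id": "t1", "time": 1, "src": "e1", "dst": ["e2"]},
    {"id": "t2", "time": 2, "src": "e2", "dst": ["e1", "e3"]},
]

def one_motifs(txs):
    """1_motifs: (entity, transaction, entity) branches; a branch whose two
    entities coincide is a Direct Loop, otherwise Direct Distinct."""
    return [(t["src"], t["id"], e, "loop" if e == t["src"] else "distinct")
            for t in txs for e in t["dst"]]

def two_motifs(txs):
    """2_motifs: (e1, t1, e2, t2, e3) paths where an output entity of t1
    funds t2 and the transactions are ordered in time."""
    paths = []
    for t1 in txs:
        for t2 in txs:
            if t1["time"] < t2["time"] and t2["src"] in t1["dst"]:
                for e3 in t2["dst"]:
                    paths.append((t1["src"], t1["id"], t2["src"], t2["id"], e3))
    return paths
```

On the toy data above, `two_motifs` yields the paths $(e_1, t_1, e_2, t_2, e_1)$ and $(e_1, t_1, e_2, t_2, e_3)$.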
The data set is thus composed of 311 different samples, divided into six classes (see Table \ref{tab:walexpl}): \begin{itemize} \item \textit{Exchange}: entities that allow their customers to trade fiat currencies for Bitcoins (or vice versa) \item \textit{Service}: entities that offer Bitcoin payment methods as solutions to their business (financial services, trading, lending, etc.) \item \textit{Gambling}: entities that offer gambling services (casino, betting, roulette, etc.) \item \textit{Mining Pool}: entities composed of a group of miners that work together sharing their resources in order to reduce the volatility of their returns \item \textit{Mixer}: entities that offer a service to obscure the traceability of their clients' transactions \item \textit{Marketplace}: entities allowing to buy any kind of goods or services that are illegal in most countries paying with Bitcoin \end{itemize} \begin{table}[!htbp] \centering \begin{tabular}{lcccc} \hline Class & \ Abbreviation & \# Entities & \# Address & \% Address \\ \hline \textit{Exchange} &Ex & 137 & 9,943,512 & 61.63 \\ \textit{Gambling} &Gmb & 76 & 3,054,238 & 18.93 \\ \textit{Marketplace} &Mrk & 20 & 2,349,210 & 14.56 \\ \textit{Mining Pool} &Pool & 25 & 76,104 & 0.47 \\ \textit{Mixer} &Mxr & 37 & 475,714 & 2.95 \\ \textit{Service} &Serv & 16 & 235,629 & 1.46 \\\hline \textbf{Total} & & \textbf{311} & \textbf{16,134,407} & \textbf{100} \\\hline \end{tabular} \caption{\textit{Overview of WalletExplorer data used for this study}} \label{tab:walexpl} \end{table} As shown in Table \ref{tab:walexpl}, the \textit{Exchange} is the top class represented by more than $60\%$ of samples, while the \textit{Mining Pool} class is the least represented with just $0.47\%$ (even though it has more distinct entities than the \textit{Marketplace} and the \textit{Service}). 
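The address percentages in Table \ref{tab:walexpl} follow directly from the per-class address counts; the small computation below reproduces them from the counts in the table:

```python
# Address counts per class, taken from Table 2 (WalletExplorer data).
counts = {
    "Exchange": 9_943_512,
    "Gambling": 3_054_238,
    "Marketplace": 2_349_210,
    "Mining Pool": 76_104,
    "Mixer": 475_714,
    "Service": 235_629,
}
total = sum(counts.values())                      # 16,134,407 addresses
shares = {cls: round(100 * n / total, 2) for cls, n in counts.items()}
```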
Cross-references between Bitcoin blockchain data and labelled data from the WalletExplorer allow us to re-size the original data set by removing all the unlabelled and unusable data. As such, we focus our analysis on known entities only. From this new data set, four dataframes (2-dimensional labelled data structure or data table with samples as rows and extracted features as columns) were extracted for the proposed analysis: \begin{itemize} \item \textit{Entity dataframe} contains all features related to an entity that can be directly extracted from the blockchain. They are: the amount of BTC received/sent, the balance of the entity, the number of transactions in which this entity is the receiver/sender, and the number of addresses belonging to this entity used for receiving/sending money. (This dataframe was composed of $311$ samples and $7$ features) \item \textit{Address dataframe} contains all features related to Bitcoin addresses. Features are: the number of transactions in which a certain address is detected such as receiver/sender, the amount of BTC received/sent from/to this address, the balance, uniqueness (if this address is just used in one transaction) and siblings. (This dataframe was composed of $16$,$134$,$407$ samples and $7$ features) \item \textit{1\_motif dataframe} contains the information directly extracted from the $1\_motif$ graph. In this case, each row contains: the amount received/sent in the transaction, number of distinct addresses used for receiving/sending money, number of similar received/sent transactions between the entities in the branch, the fee, and if the branch realizes a Direct Loop or Direct Distinct path. (This dataframe was composed of $58$,$076$,$963$ samples and $9$ features) \item \textit{2\_motif dataframe} contains information gathered from the $2\_motif$ graph. 
The features analyzed are: the number of addresses as input/output for the first and second path in $2\_motif$ graph, the amount received/sent in the first and second branch, the fee of both considered transactions, number of similar sent transactions between the entities in the first and second branch, Direct Loop or Direct Distinct path for the first and the second branch and Direct Loop or Direct Distinct path considering the whole $2\_motif$ path, see Figure~\ref{fig:2motifs}. (This dataframe was composed of $83$,$443$,$055$ samples and $18$ features) \end{itemize} \begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{images/2motifs.eps} \caption{\textit{2\_motif representation with extracted features highlighted}} \label{fig:2motifs} \end{figure} \section{Machine learning}\label{sec:machine} \subsection{Classifier Models}\label{classifier} To demonstrate benefits and limits of our approach, we conducted two different experiments. Firstly, we created a simple classifier, called \textit{C\_entity} (Figure \ref{fig:classifier_ent}), merely based on the samples stored in the entity dataframe, containing (seven) entity-related features that can be directly extracted from the blockchain. This classifier was evaluated via a cross-validation process (see Section \ref{evaluation}). Results from cross-validation were considered as our baseline classification. The simple classifier was implemented in three versions applying Adaboost, Random Forest and Gradient Boosting models as those previously yielded good classification results for Bitcoin data \cite{ranshous2017exchange}. 
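The seven features of the entity dataframe feeding \textit{C\_entity} can be derived directly from entity-level transaction records. The sketch below is a simplified illustration, assuming one sending and one receiving entity/address per record; the field names and toy data are hypothetical:

```python
from collections import defaultdict

def entity_features(txs):
    """Derive the seven entity features: BTC received/sent, balance, number
    of transactions as receiver/sender, and number of distinct receiving/
    sending addresses per entity."""
    acc = defaultdict(lambda: {"received": 0.0, "sent": 0.0, "n_tx_in": 0,
                               "n_tx_out": 0, "addr_in": set(), "addr_out": set()})
    for t in txs:
        acc[t["sender"]]["sent"] += t["value"]
        acc[t["sender"]]["n_tx_out"] += 1
        acc[t["sender"]]["addr_out"].add(t["sender_addr"])
        acc[t["receiver"]]["received"] += t["value"]
        acc[t["receiver"]]["n_tx_in"] += 1
        acc[t["receiver"]]["addr_in"].add(t["receiver_addr"])
    return {e: [v["received"], v["sent"], v["received"] - v["sent"],  # balance
                v["n_tx_in"], v["n_tx_out"], len(v["addr_in"]), len(v["addr_out"])]
            for e, v in acc.items()}

# Hypothetical toy transactions between two entities.
feats = entity_features([
    {"sender": "e1", "sender_addr": "a1", "receiver": "e2",
     "receiver_addr": "a2", "value": 1.5},
    {"sender": "e2", "sender_addr": "a3", "receiver": "e1",
     "receiver_addr": "a1", "value": 0.5},
])
```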
\begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{images/classifier_entity.eps} \caption{\textit{First experiment: simple entities classifier}} \label{fig:classifier_ent} \end{figure} In the second experiment, prior to entity classification according to the six classes (Table \ref{tab:walexpl}), we built three separate classifiers, based on the additionally available address, 1\_motif, and 2\_motif dataframes and their respective features ($7 + 9 + 18 = 34$ features). Outgoing information from these classifications was processed, as shown in Figure \ref{fig:enrichment}, in order to create a set of six new features for each classifier, which were then used to enrich (extend) the entity dataframe. Finally, a new classifier \textit{C\_final} was generated to obtain final entity classification based on this enriched entity dataframe and its $25$ features ($7$ belonged to the entity dataframe and $6$x$3$ were generated from the three classifiers \textit{C\_address}, \textit{C\_motif1}, \textit{C\_motif2}). With this cascading approach, new entity-related characteristics were added to the entity dataframe, ultimately improving the classification as demonstrated in the following sections. The first step was to split the address, 1\_motif and 2\_motif dataframes into two parts called A-data set (for training) and B-data set (for testing) with a proportion of $70$/$30$. The A-data set was used to compute cross-validation of the three \textit{C\_address}, \textit{C\_motif1}, \textit{C\_motif2} classifier models (Figure \ref{fig:classifier}). After that, the B-data set was used as input for the trained classifiers \textit{C\_address}, \textit{C\_motif1}, \textit{C\_motif2} in order to obtain classification results based on completely new, unseen data. 
\begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{images/classifier.eps} \caption{\textit{Second experiment: cascading entities classifiers}} \label{fig:classifier} \end{figure} Classification results essentially assign one of the six possible output classes to each entry in the input dataframe. As each entry has its original (ground truth) label obtained from the WalletExplorer, we can join input label and computed output class and perform a group-by and count operation as illustrated in Figure \ref{fig:enrichment}: we count how many times a sample belonging to a particular entity has been detected in each of the considered classes. This value is then normalized as indicated in the following formula: $$\forall \xi \in E \qquad \frac{\parallel P\textsubscript{$\xi\vert$ j}\parallel}{\sum_{i=1}^{N}\parallel P\textsubscript{$\xi\vert$ i}\parallel }*100 \qquad \textrm{with} \quad j\in \{1,...,N\}$$ where $E$ is the entities set and $N$ represents the number of considered classes ($N=6$ in this study). The term $\parallel P\textsubscript{$\xi\vert$ j}\parallel$ represents how many times a sample originally labelled with entity $\xi$ generates a prediction belonging to class $j$, while the term $\sum_{i=1}^{N}\parallel P\textsubscript{$\xi\vert$ i}\parallel$ counts all the predictions generated from samples with labelled input belonging to entity $\xi$. These normalized values form a dataframe containing $311$ samples (one for each known entity as in the entity dataframe) and six new features, representing the percentage of being classified as belonging to one of the six classes. These features were added to the entity dataframe for data enrichment, constituting our cascading machine learning system. The elements of the enriched entity dataframe were used to implement and evaluate the final classifier, called \textit{C\_final}, and a cross-validation process (Section \ref{evaluation}) was applied to compute its performance.
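The group-by, count and normalisation step that produces the six new features can be sketched in plain Python. Class abbreviations follow Table \ref{tab:walexpl}; the prediction data below is hypothetical:

```python
from collections import Counter, defaultdict

CLASSES = ["Ex", "Gmb", "Mrk", "Pool", "Mxr", "Serv"]

def enrichment_features(entity_labels, predictions):
    """For each entity, count how often its samples were predicted as each
    class and normalise to percentages, yielding the six new features that
    extend the entity dataframe."""
    counts = defaultdict(Counter)
    for entity, pred in zip(entity_labels, predictions):
        counts[entity][pred] += 1
    return {e: [100.0 * c[cls] / sum(c.values()) for cls in CLASSES]
            for e, c in counts.items()}

# Hypothetical predictions for four samples of one entity: three classified
# as Exchange and one as Mixer.
enriched = enrichment_features(["e1"] * 4, ["Ex", "Ex", "Mxr", "Ex"])
```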
\begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{images/draw.eps} \caption{\textit{Steps to create the enriched entity dataframe applied to an example address dataframe}} \label{fig:enrichment} \end{figure} To allow for better comparison between experiments, we implemented all classifier models \textit{C\_address}, \textit{C\_motif1}, \textit{C\_motif2} and \textit{C\_final} with Adaboost, Random Forest and Gradient Boosting models. Specifically, all Adaboost classifiers were generated with the number of estimators set to $50$ and the learning rate set to $1$. All Random Forest models were implemented with the number of estimators set to $10$, a Gini function to measure the quality of the split and without a maximum depth of the tree. All Gradient Boosting models were implemented with the number of estimators set to $100$, the learning rate set to $0.1$ and the maximum depth for limiting the number of nodes set to $3$. \subsection{Evaluation Metrics}\label{evaluation} All classification models were evaluated by extracting and comparing classification metrics via a cross-validation process. The goal of cross-validation is to analyze the prediction capabilities of the model in order to detect problems such as over-fitting or selection bias \cite{cawley2010over}. Here, we used stratified K-fold cross-validation, with a value of K equal to $5$. This method involves dividing the whole data set into K equal partitions or folds. Each fold is composed of data ensuring a good representative sample of the whole population by keeping the same proportion of classes present in the original data set (stratification). Then, K-1 folds are used to train the model and the one left-out fold is used to evaluate the predictions obtained by the trained model. The entire process is repeated K times, until each fold has been left out once, testing all possible combinations. 
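The stratified fold construction described above can be sketched as follows; a round-robin assignment per class is one simple way to keep the original class proportions in every fold (scikit-learn's `StratifiedKFold` provides an equivalent, production-ready implementation):

```python
from collections import defaultdict

def stratified_kfold(labels, k=5):
    """Split sample indices into k folds, distributing each class round-robin
    so that every fold keeps roughly the original class proportions."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for pos, idx in enumerate(indices):
            folds[pos % k].append(idx)
    return folds

# Hypothetical unbalanced labels: 10 samples of "Ex", 5 of "Mxr".
labels = ["Ex"] * 10 + ["Mxr"] * 5
folds = stratified_kfold(labels, k=5)
```

Each fold then serves once as the left-out evaluation set while the other $K-1$ folds train the model.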
During this process, the following metrics were computed: \begin{itemize} \item \textit{Accuracy} or \textit{Score} is defined as the number of correct predictions divided by the total number of predictions and is given as a percentage \item \textit{Precision} is the number of true positive predictions divided by the total number of positive predictions. It represents a measure of a classifier's exactness, given as a value between $0$ and $1$, with $1$ relating to high precision \item \textit{Recall} is the number of true positive predictions divided by the total number of actual positive instances. It represents a measure of a classifier's completeness, given as a value between $0$ and $1$ \item \textit{F\textsubscript{1}-score} is the harmonic mean of Precision and Recall. It takes values between $0$ and $1$, with $1$ relating to perfect Precision and Recall \[F_1 = 2\cdot \frac{Precision \cdot Recall}{Precision+Recall}\] \item \textit{Matthews Correlation Coefficient (MCC)} is a metric yielding easy comparison with respect to a random baseline, suitable for unbalanced classes. It takes values between $-1$ and $+1$. A coefficient of $+1$ represents a perfect prediction, $0$ an average random prediction and $-1$ an inverse prediction. As shown in \cite{gorodkin2004comparing}, let $K$ be the number of classes and $C$ be a confusion matrix of dimensions $K\times K$; then the $MCC$ can be calculated as: \[MCC\_part1 ={\sqrt{\sum_{k}\Bigl(\sum_{l}C_{kl}\Bigr)\Bigl(\sum_{f,g\vert f\neq{k}}C_{gf}\Bigr)}}\] \[MCC\_part2 ={\sqrt{\sum_{k}\Bigl(\sum_{l}C_{lk}\Bigr) \Bigl(\sum_{f,g\vert f\neq{k}}C_{fg}\Bigr)}}\] \[MCC = \frac{\sum_{k}\sum_{l}\sum_{m}C_{kk}C_{lm}-C_{kl}C_{mk}}{MCC\_part1\cdot MCC\_part2}\] \end{itemize} In Section \ref{sec:result}, results for the baseline model (\textit{C\_entity}) and for the final model obtained after cross-validation using the enriched dataframe (\textit{C\_final}) are presented and compared.
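The multi-class MCC can be computed directly from a confusion matrix; the sketch below uses the covariance form, which is algebraically equivalent to the triple-sum formula of Gorodkin:

```python
def multiclass_mcc(C):
    """Matthews Correlation Coefficient for a K-class confusion matrix C
    (C[k][l] = samples of true class k predicted as class l), computed
    in the covariance form equivalent to the Gorodkin triple-sum
    formula."""
    K = len(C)
    n = sum(sum(row) for row in C)                 # total samples
    true = [sum(C[k]) for k in range(K)]           # row sums
    pred = [sum(C[l][k] for l in range(K)) for k in range(K)]  # col sums
    correct = sum(C[k][k] for k in range(K))
    cov_xy = correct * n - sum(t * p for t, p in zip(true, pred))
    cov_xx = n * n - sum(t * t for t in true)
    cov_yy = n * n - sum(p * p for p in pred)
    denominator = (cov_xx * cov_yy) ** 0.5
    return cov_xy / denominator if denominator else 0.0
```

A perfect diagonal matrix yields $+1$, an entirely off-diagonal one $-1$, and a uniform matrix $0$, matching the interpretation given above.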
We report global metric values for Accuracy/Score and \textit{MCC}, averaged over the K=$5$ cross-validation runs, and per-class values for Precision, Recall and F1-score when evaluating the final models. \subsection{Hardware and Software Configuration}\label{configuration} All analyses were run on a cluster of three virtual machines, each with $16$ Intel(R) Xeon(R) Silver $4114$ CPU cores @ $2.20$ GHz, $64$ GB of DDR4 RAM at $2,666$ MHz, and $500$ GB of SATA hard disk. Apache Spark\footnote{https://spark.apache.org/} v$2.4.0$, set up in cluster mode, was used to manage data stored using Apache Hadoop\footnote{https://hadoop.apache.org/}. The various classifier models were implemented and evaluated using Python's Scikit-learn\footnote{https://scikit-learn.org/} library. All scripts were executed within the Jupyter-notebook\footnote{https://jupyter.org/} environment. \section{Results}\label{sec:result} Considering the simple classifier \textit{C\_entity} from the first experiment, the Gradient Boosting model yielded a better average score ($61.90\%$ accuracy) and \textit{MCC} ($0.44$) than the Random Forest and Adaboost classifiers, as shown in Table \ref{tab:cross-val} (upper section). However, with overall low \textit{MCC} values for all classifiers (between $0.22$ and $0.44$), these scores were not sufficient to achieve reliable entity characterization. This led us to introduce our cascading machine learning approach, enriching the initial entity dataframe with information gathered from prior classifications in the second experiment.
\begin{table}[!htbp] \centering \begin{tabular}{lccccc} \cline{1-5} \multicolumn{1}{c}{Model} & Classifier & Score \% & Std \% & MCC \\ \cline{1-5} \textit{Adaboost} & \textit{C\_entity} & 45.63 & 6.34 & 0.22 \\ \textit{Random Forest} & \textit{C\_entity} & 59.71 & 1.82 & 0.41 \\ \textit{Gradient Boosting} & \textit{C\_entity} & 61.90 & 1.36 & 0.44 \\ \cline{1-5} \textit{Adaboost} & \textit{C\_final} & 78.84 & 1.76 & 0.76 \\ \textit{Random Forest} & \textit{C\_final} & 98.04 & 1.22 & 0.97 \\ \textit{Gradient Boosting} & \textit{C\_final} & 99.68 & 0.63 & 0.99 \\ \cline{1-5} \end{tabular} \caption{\textit{Average performance of classifiers over five cross-validation repetitions for the simple \textit{C\_entity} model (above) and for the final model after data enrichment via cascading machine learning \textit{C\_final} (below)}} \label{tab:cross-val} \end{table} Analyzing the \textit{C\_address}, \textit{C\_motif1} and \textit{C\_motif2} classifiers separately for entity characterization, Table \ref{tab:model_score} shows that outgoing information from the Random Forest classifiers proved more accurate than information from the Gradient Boosting and Adaboost classifiers (accuracy scores \textgreater$90\%$ for Random Forest). Notably, using only information from the address dataframe, the Random Forest classifier \textit{C\_address} could already achieve an average global accuracy of $\sim96\%$. Due to these results, we only used results obtained from Random Forest classifiers for the subsequent entity dataframe enrichment. Random Forest classifiers not only proved to be the best in terms of accuracy, but were also the fastest among the considered classification models.
\begin{table}[!htbp] \centering \begin{tabular}{lcccc} \cline{1-4} \multicolumn{1}{c}{Model} & \textit{C\_address} \% & \textit{C\_motif1} \% & \textit{C\_motif2} \% \\ \cline{1-4} \textit{Adaboost} & 61.54 & 72.69 & 78.27 \\ \textit{Random Forest} & 95.73 & 94.14 & 90.88 \\ \textit{Gradient Boosting} & 83.23 & 83.52 & 83.54 \\ \cline{1-4} \end{tabular} \caption{\textit{Average global C\_address, C\_motif1 and C\_motif2 classifier accuracy calculated via 5-fold cross-validation}} \label{tab:model_score} \end{table} The final classifiers \textit{C\_final} were fed with the enriched entity dataframe, which comprised the original features from the entity dataframe together with the class predictions obtained from \textit{C\_address}, \textit{C\_motif1} and \textit{C\_motif2} for the respective test data sets as new features. Table \ref{tab:cross-val} (lower section) shows that the average score improved significantly by exploiting the information obtained via our cascading approach. Random Forest and Gradient Boosting classifiers again performed better than the Adaboost model, reaching scores of more than $98\%$ (respectively $\sim39$ and $\sim38$ percentage points higher than the baseline accuracy of \textit{C\_entity}). Furthermore, classification results were more stable during cross-validation, generating low standard deviations between $0.63\%$ and $1.76\%$, and the \textit{MCC} reached values close to $1.0$, indicating close-to-perfect class prediction.
\begin{table*} \centering \begin{tabular}{lcccc|ccc} \cline{3-8} & \multicolumn{1}{l}{} & \multicolumn{3}{c|}{\textit{C\_entity} model} & \multicolumn{3}{c}{\textit{C\_final} model} \\ \hline \multicolumn{1}{c}{Class} & Model & Precision & Recall & F1-score & Precision & \multicolumn{1}{l}{Recall} & F1-score \\ \hline \textit{Exchange} & Adaboost & 0.51 & 0.68 & 0.57 & 0.77 & 0.78 & 0.77 \\ \textit{Gambling} & Adaboost & 0.22 & 0.14 & 0.17 & 0.75 & 1.00 & 0.85 \\ \textit{Market} & Adaboost & 0.05 & 0.15 & 0.08 & 0.40 & 0.30 & 0.33 \\ \textit{Mining Pool} & Adaboost & 0.20 & 0.16 & 0.17 & 0.11 & 0.20 & 0.14 \\ \textit{Mixer} & Adaboost & 0.69 & 0.78 & 0.71 & 1.00 & 0.98 & 0.99 \\ \textit{Service} & Adaboost & 0.20 & 0.10 & 0.13 & 0.95 & 0.95 & 0.95 \\ \hline \textit{Exchange} & Random Forest & 0.60 & 0.77 & 0.67 & 0.96 & 1.00 & 0.98 \\ \textit{Gambling} & Random Forest & 0.54 & 0.50 & 0.51 & 1.00 & 1.00 & 1.00 \\ \textit{Market} & Random Forest & 0 & 0 & 0 & 1.00 & 0.85 & 0.91 \\ \textit{Mining Pool} & Random Forest & 0.68 & 0.50 & 0.56 & 1.00 & 0.92 & 0.96 \\ \textit{Mixer} & Random Forest & 0.89 & 0.78 & 0.82 & 1.00 & 1.00 & 1.00 \\ \textit{Service} & Random Forest & 0 & 0 & 0 & 1.00 & 0.93 & 0.96 \\ \hline \textit{Exchange} & Gradient Boosting & 0.61 & 0.80 & 0.69 & 1.00 & 1.00 & 1.00 \\ \textit{Gambling} & Gradient Boosting & 0.59 & 0.53 & 0.55 & 0.99 & 1.00 & 0.99 \\ \textit{Market} & Gradient Boosting & 0.10 & 0.05 & 0.06 & 1.00 & 1.00 & 1.00 \\ \textit{Mining Pool} & Gradient Boosting & 0.38 & 0.40 & 0.38 & 1.00 & 1.00 & 1.00 \\ \textit{Mixer} & Gradient Boosting & 0.92 & 0.84 & 0.87 & 1.00 & 1.00 & 1.00 \\ \textit{Service} & Gradient Boosting & 0 & 0 & 0 & 1.00 & 0.93 & 0.96 \\ \hline \end{tabular} \caption{\textit{Average Precision, Recall and F\textsubscript{1}-score calculated in each model implementation for each class}} \label{tab:test-metrics} \end{table*} In Table \ref{tab:test-metrics}, we present per-class Precision, Recall and
F\textsubscript{1}-scores calculated for the \textit{C\_entity} (baseline) and \textit{C\_final} (enriched) classifiers for each classification model. Results demonstrate that the simple classifier \textit{C\_entity}, independently of the classification model used, had problems detecting \textit{Service} and \textit{Market} entities (calculated metrics are $0$ or have very low values). It should be noted, however, that these two classes are the least represented in terms of distinct entities in the original data set. Random Forest and Gradient Boosting classifiers showed overall good performance in detecting \textit{Mixer} entities with the \textit{C\_entity} approach (F\textsubscript{1}-scores \textgreater$0.8$). By exploiting the cascading machine learning implementation, however, all classifiers improved their classification performance for each class, with most values being close to $1.0$. Only the Adaboost model continued to have problems with the classification of \textit{Mining Pool} and \textit{Market} entities. The Random Forest and Gradient Boosting models instead yielded excellent values for Precision, Recall and F\textsubscript{1}-score for each class. Overall best classification scores were achieved by the \textit{C\_final} implementation with Gradient Boosting models. Data enrichment through prior classification and cascading thus clearly had a highly beneficial impact on the classification ability of Gradient Boosting, motivating a further analysis of the importance of individual features of the enriched entity dataframe. In a next step, we therefore calculated a feature \textit{importance score} for the enriched entity dataframe. Generally, the feature importance score indicates how useful or valuable each feature was in the construction of the model. The more often an attribute is used to make key decisions, the greater its relative importance score.
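As a model-agnostic illustration of this idea (the study itself relied on Scikit-learn's built-in impurity-based importances, not on this method), a permutation-importance sketch shuffles one feature at a time and records the resulting accuracy drop:

```python
import random

def permutation_importance(predict, X, y, seed=0):
    """Shuffle each feature column in turn and measure the accuracy drop
    against the unshuffled baseline; features the model relies on for
    its key decisions show a large drop, unused features show none."""
    rng = random.Random(seed)
    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)
    baseline = accuracy(X)
    drops = []
    for j in range(len(X[0])):
        column = [row[j] for row in X]
        rng.shuffle(column)
        shuffled = [row[:j] + [v] + row[j + 1:]
                    for row, v in zip(X, column)]
        drops.append(baseline - accuracy(shuffled))
    return drops
```

Either way, the resulting scores allow the features of the enriched entity dataframe to be ranked against each other.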
Importance was explicitly calculated through Python's Scikit-learn library for each attribute in the data set, allowing features to be ranked and compared to each other. Figure \ref{fig:features} shows a list of the top fifteen features for the \textit{C\_final} Gradient Boosting classifier. All fifteen top-ranked features were created during the prior classifications involving \textit{C\_address}, \textit{C\_motif1} and \textit{C\_motif2}. These features represent how address, 1\_motif, or 2\_motif data related to a certain entity were previously classified. This highlights again that the information brought in from prior classifications (the first step of the cascade) clearly contributes to much improved entity characterization. \begin{figure}[] \centering \includegraphics[width=\linewidth]{images/importance_score.eps} \caption{\textit{Top 15 important features from the GB classifier}} \label{fig:features} \end{figure} \section{Conclusion and Future work}\label{sec:conclusions} In this paper, we present a novel approach to attacking Bitcoin anonymity through entity characterization. Specifically, we demonstrate how a cascading machine learning model, combined with an adequate set of input features directly derived from Bitcoin blockchain data (entity and address data) as well as derived via the 1\_motif and 2\_motif concepts introduced by Ranshous et al. \cite{ranshous2017exchange}, can lead to impressive classification performance for a number of relevant Bitcoin entity classes. In fact, we were able to obtain an average global accuracy score of $99.68\%$ with a low standard deviation of $0.63\%$ and a Matthews Correlation Coefficient (\textit{MCC}) of $0.99$ over 5-fold cross-validation for a Gradient Boosting model using our cascading approach.
These final models were indeed able to predict each of the six entity classes used (Exchange, Gambling, Market, Mining Pool, Mixer, Service) with Precision, Recall and F\textsubscript{1}-score values close to $1.0$. Ranshous et al. \cite{ranshous2017exchange} obtained similar results using Random Forest and Adaboost classifiers; however, their study was limited to exchange address classification. Jourdan et al. \cite{jourdan2018characterizing} generally obtained lower values for per-class F\textsubscript{1}-score and Precision, ranging between $0.67$ and $1.0$ using Gradient Boosting, and their approach involved a complex step of model hyper-parameter calibration and required a total of $315$ input features. Our approach applies one more classification step in the "classification cascade", generating a set of new entity-related features used for the final classification, but we do not require extensive parameter tuning. Most importantly, we only use $34$ features for the initial classification step (involving address, 1\_motif and 2\_motif data) and finally $7$ features from the entity data set plus $3 \times 6 = 18$ new features obtained as outgoing information from the initial classification step. The final classification is thus based on only $25$ features, which corresponds to less than $10\%$ of the number of features used by Jourdan et al. Our future work will focus on investigating feature importance in more depth, in order to further reduce the number of relevant features required for obtaining high entity classification performance. This will facilitate the process of attacking Bitcoin anonymity further. One major drawback of our approach is that we were not able to characterize entities behaving as normal users, as this information is not currently available as ground truth data in the WalletExplorer. We had to remove all entities that have not yet been classified in the WalletExplorer from our analysis.
Nevertheless, we were able to detect six classes of key Bitcoin services that have previously been associated with illicit financial operations with very high classification scores. We therefore believe that our study can contribute to improving crime investigation and may form a base for developing effective tools assisting law enforcement agencies in uncovering illegal activities within the Bitcoin network. \section*{ACKNOWLEDGMENT} This work was partially funded by the European Commission through the Horizon 2020 research and innovation program, as part of the ``TITANIUM" project (grant agreement No 740558). \addtolength{\textheight}{-12cm} \bibliographystyle{splncs04} \section{Introduction} Bitcoin was born in 2009 and since then its value and popularity have been rapidly increasing up to its current state, in which it is the most used, traded and highest-priced cryptocurrency of all. Bitcoin is a pure peer-to-peer cryptocurrency \cite{nakamoto2008bitcoin} where all transactions are stored in a public shared ledger called the blockchain, which cannot be manipulated or changed \cite{crosby2016blockchain}. Bitcoin is decentralized, which means that it is not controlled by any financial institution but is regulated by everyone in the Bitcoin network: its blockchain architecture maintains the system without ambiguity \cite{narayanan2016bitcoin}. While transactions within the Bitcoin network are openly available, Bitcoin user identity is non-transparent and protected by anonymity. This circumstance, combined with the unregulated nature of the Bitcoin market, has brought many new actors to the Bitcoin network who use the cryptocurrency for illicit operations. Approximately one-quarter of Bitcoin users and half of all Bitcoin transactions are associated with illegal activity \cite{foley2018sex}, accounting for an annual amount of around \$72 billion (report 2018).
Conventional law-enforcement strategies tackling illegal financial operations such as money laundering or transactions funding criminal operations are typically based on complete knowledge of each actor's identity, while details about financial transactions are controlled by banks and thus unknown \cite{moser2014towards}. Within the Bitcoin network, these circumstances are reversed: incomplete knowledge of identities restricts traceability and transparency of operations, in turn promoting a further increase of illegal activities. This calls for novel methods to attack anonymity within the Bitcoin network, aiming to uncover Bitcoin entity categories. Among the most active categories of entities is the exchange, which represents a digital marketplace where traders can buy and sell cryptocurrencies using different fiat currencies (money made legal tender by a government decree) or other digital currencies. Exchanges thus constitute the "front and exit doors" to the cryptocurrency world and are ideal for hiding illicit operations, as documented in \cite{moore2013beware}. Another category is the darknet market. These markets are e-commerce platforms where users can find drugs, weapons and any kind of goods or services that are illegal in most countries. These cryptomarkets use electronic currencies to facilitate licit and illicit transactions among their users \cite{christin2013traveling}. Further, so-called mixers represent services that allow users to obscure their operations, as presented in \cite{moser2013anonymity}. While mixed transactions increase the privacy of users, they can also be used for money laundering of illegal funds. Being able to classify anonymous Bitcoin entities according to such categories would increase transparency and would facilitate linking blockchain information with real actors to uncover illegal activities.
Current techniques attacking anonymity often try to cluster addresses and apply heuristic assumptions combined with labelled data from external sources like markets, forums or social media in order to determine address owners in the real world \cite{meiklejohn2013fistful}. However, gathering external data and combining it with Bitcoin information is tedious and may be limited due to privacy restrictions. This motivates the implementation of a model able to characterize different behaviours in the Bitcoin network by analyzing pure blockchain information only, extracting transactions and recognizing patterns using machine learning approaches. In this paper, we present a novel approach to decrease Bitcoin anonymity based on a cascading machine learning model, using entity, address and motif data as inputs. We apply a "cascade" of classifiers, performing a first entity classification based on address, 1\_motif, and 2\_motif data, which is then used as input for a second classification step that combines those classification results with entity information from the blockchain. Notably, our approach only requires a few features that can be directly extracted from Bitcoin blockchain data. In order to compare the benefits and limits of the proposed approach, two experiments are presented: firstly, a simple classifier is trained based on pure entity information gathered from the blockchain. In the second experiment, a final classifier is trained using the enriched data set generated by our cascading approach. We aimed to detect six different types of Bitcoin entity behaviours. Overall, three classifier models are tested and compared: Adaboost, Random Forest and Gradient Boosting. The rest of the paper is organized as follows. Section~\ref{sec:related} describes the related work. After that, Section~\ref{sec:graph} presents the graph model used and Section~\ref{sec:data} gives an overview of the used data sets.
Section~\ref{sec:machine} describes the implemented machine learning models and Section~\ref{sec:result} presents the obtained results. Finally, in Section~\ref{sec:conclusions}, we draw conclusions and provide guidelines for future work. \section{Related work}\label{sec:related} User anonymity has probably been the key factor for the success of cryptocurrencies and has promoted illegal activities within the Bitcoin network. Yet, several studies determine that current measures adopted by the Bitcoin protocol are not sufficient to protect the privacy of its users \cite{meiklejohn2015privacy}, \cite{androulaki2013evaluating}, opening up possibilities to attack Bitcoin anonymity. One of the first transaction analyses is documented in \cite{ron2013quantitative}, where typical behaviors of Bitcoin users are detected based on how they spend cryptocurrencies, how they keep the balance in their accounts, and how they move Bitcoins between their various accounts. Herrera-Joancomart{\'\i} \cite{herrera2015research} presents a review on Bitcoin anonymity, concluding that anonymity can be reduced by address clustering or by gathering information from various peer-to-peer networks. This technique is also advocated in \cite{koshy2014analysis}, where conservative constraints (patterns) are applied for address clustering, and in \cite{liao2016behind}, where information gathered from online forums is used to characterize CryptoLocker, a family of ransomware. Similarly, in \cite{fleder2015bitcoin}, information scraped from online forums and social media is decisive for simulating an attacker and for summarizing the activity of both known and unknown Bitcoin users. In \cite{biryukov2014deanonymisation}, a generic method to deanonymize a significant fraction of Bitcoin users by correlating their pseudonyms with public IP addresses is described. Reid et al.
\cite{reid2013analysis} demonstrate how it is possible to associate many public keys with each other, using a map of the topological network and external identifying information in order to investigate a large theft of Bitcoins. Several recent studies have exploited machine learning algorithms for Bitcoin analysis. In \cite{hirshman2013unsupervised}, an unsupervised learning model is presented with the aim to identify atypical transactions related to money laundering. Monamo et al. \cite{monamo2016unsupervised} introduce a k-means classifier for object clustering and fraudulent activity detection in Bitcoin transactions. Another study on the detection of anomalous behavior, suspicious users and transactions is presented in \cite{pham2016anomaly}, where three unsupervised learning methods are applied to two graphs generated by the Bitcoin transaction network. Further, a supervised machine learning algorithm is used by \cite{harlev2018breaking} to attack Bitcoin anonymity using a method for predicting the type of yet-unidentified entities. In \cite{bartoletti2018data}, data mining techniques are used to implement and train a classifier to identify Ponzi schemes in the Bitcoin blockchain, and in \cite{mcnally2018predicting} a Bayesian optimized recurrent neural network (RNN) and a Long Short Term Memory (LSTM) network are implemented to predict the direction of the Bitcoin price in USD. Recently, an interesting approach was presented in \cite{ranshous2017exchange}, where the concept of motifs was introduced to blockchain analysis. The authors performed an analysis of the directed transaction hypergraph in order to identify several distinct statistical properties of exchange addresses. They were able to predict whether an address is owned by an exchange with $>80\%$ accuracy. The introduction of hypergraphs (or dirhypergraphs) proved beneficial due to their significant advantages over the complex graph structure typically derived from Bitcoin networks.
In \cite{jourdan2018characterizing}, the motif concept is further developed and combined with multiple features (entity, address, temporal, centrality) to obtain a comprehensive entity classification into five categories: Exchange, Service, Gambling, Mining Pool and DarkNet marketplace. Using a total of $315$ features, a global accuracy of $0.92$ could be achieved. Inspired by the good classification results presented in \cite{jourdan2018characterizing}, we present here a novel machine-learning-based approach to attack Bitcoin anonymity, making use of motifs as introduced by Ranshous et al. and allowing for multi-class classification of Bitcoin entities as in \cite{jourdan2018characterizing}, yet aiming to provide a straightforward methodology that relies on fewer, well-defined features. To achieve this, we introduce a novel cascading machine learning model for Bitcoin data analysis. The main idea is to implement a cascade of classifiers, so that outgoing classification results can be joined and used to enrich a final classification. \section{Graph Model}\label{sec:graph} \subsection{Blockchain Graph Model}\label{blockchain_model} Bitcoin transactions have a natural graph structure, with a fundamental example being the address-transaction graph (Figure \ref{fig:address}). This graph is directly obtained from the information gathered from the blockchain and provides an estimation of the flow of Bitcoins linking public key addresses over time. The vertices represent the addresses $(a_1,a_2,\dots,a_N)$ and the transactions $(tx_1, tx_2,\dots, tx_M)$. The directed edges (arrows) between addresses and transactions indicate the incoming relations, while directed edges between transactions and addresses correspond to outgoing relations. Each directed edge can also include additional features such as values, time-stamps, etc.
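Building this bipartite edge structure from raw transaction records can be sketched in a few lines (the record layout used here is hypothetical):

```python
def address_tx_graph(transactions):
    """transactions: iterable of (tx_id, input_addresses,
    output_addresses) triples. Returns the two directed edge sets of
    the address-transaction graph: incoming relations
    (address -> transaction) and outgoing relations
    (transaction -> address)."""
    incoming, outgoing = [], []
    for tx_id, inputs, outputs in transactions:
        incoming.extend((addr, tx_id) for addr in inputs)
        outgoing.extend((tx_id, addr) for addr in outputs)
    return incoming, outgoing
```

Edge attributes such as transferred values or time-stamps would be carried along as extra tuple fields.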
\begin{figure}[!htbp] \centering \includegraphics[scale=0.5]{images/address.eps} \caption{\textit{Example of address-transaction graph}} \label{fig:address} \end{figure} To improve anonymity in the network, users are encouraged to generate a new Bitcoin address for each new transaction, which is common advice for the correct usage of Bitcoin\footnote{https://bitcoin.org/en/protect-your-privacy}. Due to this procedure, several addresses belong to the same logical user, so that a simplification is possible by introducing the concept of \textit{entities}. An entity is defined as a person or organization that controls or can control multiple public key addresses. This definition allows us to transform the address-transaction graph into the entity-transaction graph (Figure \ref{fig:entity}). \begin{figure}[!htbp] \centering \includegraphics[scale=0.5]{images/entity.eps} \caption{\textit{Example of entity-transaction graph obtained by address clustering}} \label{fig:entity} \end{figure} The new graph is obtained by grouping addresses belonging to the same user into entities (address clustering). This operation is not trivial; however, several heuristics have already been presented with the aim of supporting the clustering process, for example in \cite{androulaki2013evaluating}, \cite{koshy2014analysis} and \cite{ermilov2017automatic}. In the obtained graph, vertices represent the entities $(e_1,e_2,\dots,e_K)$ and the transactions $(tx_1,tx_2,\dots,tx_M)$. As in the address-transaction graph, directed edges between entities and transactions indicate the incoming relations, while directed edges between transactions and entities correspond to outgoing relations. The entity-transaction graph (Figure \ref{fig:entity}) summarizes the network well and constitutes an easily understandable representation of the money flow within the network.
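One widely used heuristic from the clustering literature cited above is common-input ownership: all input addresses of a transaction are assumed to be controlled by the same entity. A union-find sketch of this merging step (an illustration only, not the exact procedure behind the clusterings used in this study):

```python
def cluster_addresses(tx_inputs):
    """tx_inputs: iterable of input-address lists, one per transaction.
    Merges all inputs of each transaction into one cluster
    (common-input ownership heuristic) using union-find."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for inputs in tx_inputs:
        root = find(inputs[0])
        for addr in inputs[1:]:
            parent[find(addr)] = root
    clusters = {}
    for addr in list(parent):
        clusters.setdefault(find(addr), set()).add(addr)
    return list(clusters.values())
```

Each resulting cluster then becomes one entity vertex of the entity-transaction graph.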
\subsection{Motifs Graph Model}\label{motifs_model} Graph motifs were introduced in \cite{lacroix2006motif} and were motivated by applications in bioinformatics, specifically in metabolic network analysis. As shown in Section \ref{sec:related}, prior studies such as \cite{ranshous2017exchange} have introduced the concept of motifs to Bitcoin analysis. In this paper, the following definition of $N\_motif$ is used, starting from the generalized concept introduced in \cite{jourdan2018characterizing}. \begin{theorem} A $N\_motif$ is a path in the entity-transaction graph of length $2N$ that starts and ends with an entity. Let $e_1,\dots,e_M \in{E}$ be entities and $t_1,\dots,t_N \in{T}$ be transactions, with $M\leq{N+1}$; then: \[N\_motif ={(e_1,t_1,\dots,t_N,e_M)}\] in which at least one output of each transaction must be an input to the next transaction. The term \textit{branch} is used here to refer to a path in the motif graph that begins and ends with an entity, passing through exactly one transaction. If a single branch of the graph has the same entity as input and output ($e_j=e_{j+1}$), the branch is called a Direct Loop, otherwise it is called Direct Distinct. From the motif definition it is clear that all transactions are ordered in time, which means that $\tau(t_1)<\tau(t_2)<\dots<\tau(t_N)$, where $\tau$ represents the transaction time. \end{theorem} Here, we use the $1\_motif$ and $2\_motif$ concepts. The $1\_motif$ represents the relation between two entities (at least one distinct), while the $2\_motif$ is the relation between three entities (at least one distinct) involved in two consecutive transactions.
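Under this definition, extracting 1\_motif and 2\_motif paths from a list of branches can be sketched as follows (each branch is represented here by a hypothetical (sender entity, transaction time, receiver entity) triple; a full extraction would additionally check that an output of the first transaction is an input of the second):

```python
def one_motifs(branches):
    """branches: (sender, tx_time, receiver) triples, one per
    entity -> transaction -> entity branch. Tags each branch as a
    Direct Loop (same entity on both ends) or Direct Distinct."""
    return [(s, r, "Direct Loop" if s == r else "Direct Distinct")
            for s, _, r in branches]

def two_motifs(branches):
    """2_motifs chain two branches e1 -> t1 -> e2 and e2 -> t2 -> e3:
    the middle entity must match and the transactions must be
    time-ordered, tau(t1) < tau(t2)."""
    motifs = []
    for s1, tau1, r1 in branches:
        for s2, tau2, r2 in branches:
            if r1 == s2 and tau1 < tau2:
                motifs.append((s1, r1, r2))
    return motifs
```

The per-branch and per-motif attributes (amounts, fees, loop/distinct flags) then become the rows of the motif dataframes described in the next section.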
\section{Data Overview}\label{sec:data} We considered the whole Bitcoin blockchain created until February $5$th $2019$, $08$:$13$:$31$ AM, corresponding to $561$,$620$ blocks, which contain about $380$,$000$,$000$ transactions and involve more than $1$,$000$,$000$,$000$ addresses. This data was then combined with information available on the WalletExplorer\footnote{https://www.walletexplorer.com/}, a benchmark platform for entity detection, which collects information about the different known entities that have been detected to date. The data set is thus composed of 311 different samples, divided into six classes (see Table \ref{tab:walexpl}): \begin{itemize} \item \textit{Exchange}: entities that allow their customers to trade fiat currencies for Bitcoins (or vice versa) \item \textit{Service}: entities that offer Bitcoin payment methods as solutions to their business (financial services, trading, lending, etc.) \item \textit{Gambling}: entities that offer gambling services (casino, betting, roulette, etc.)
\item \textit{Mining Pool}: entities composed of a group of miners that work together, sharing their resources in order to reduce the volatility of their returns \item \textit{Mixer}: entities that offer a service to obscure the traceability of their clients' transactions \item \textit{Marketplace}: entities that allow users to buy, paying with Bitcoin, any kind of goods or services that are illegal in most countries \end{itemize} \begin{table}[!htbp] \centering \begin{tabular}{lcccc} \hline Class & \ Abbreviation & \# Entities & \# Addresses & \% Addresses \\ \hline \textit{Exchange} &Ex & 137 & 9,943,512 & 61.63 \\ \textit{Gambling} &Gmb & 76 & 3,054,238 & 18.93 \\ \textit{Marketplace} &Mrk & 20 & 2,349,210 & 14.56 \\ \textit{Mining Pool} &Pool & 25 & 76,104 & 0.47 \\ \textit{Mixer} &Mxr & 37 & 475,714 & 2.95 \\ \textit{Service} &Serv & 16 & 235,629 & 1.46 \\\hline \textbf{Total} & & \textbf{311} & \textbf{16,134,407} & \textbf{100} \\\hline \end{tabular} \caption{\textit{Overview of WalletExplorer data used for this study}} \label{tab:walexpl} \end{table} As shown in Table \ref{tab:walexpl}, the \textit{Exchange} is the top class, represented by more than $60\%$ of addresses, while the \textit{Mining Pool} class is the least represented with just $0.47\%$ (even though it has more distinct entities than the \textit{Marketplace} and the \textit{Service}). Cross-references between Bitcoin blockchain data and labelled data from the WalletExplorer allow us to re-size the original data set by removing all unlabelled and unusable data. As such, we focus our analysis on known entities only. From this new data set, four dataframes (2-dimensional labelled data structures, i.e.\ data tables with samples as rows and extracted features as columns) were extracted for the proposed analysis: \begin{itemize} \item \textit{Entity dataframe} contains all features related to an entity that can be directly extracted from the blockchain.
They are: the amount of BTC received/sent, the balance of the entity, the number of transactions in which this entity is the receiver/sender, and the number of addresses belonging to this entity used for receiving/sending money. (This dataframe was composed of $311$ samples and $7$ features) \item \textit{Address dataframe} contains all features related to Bitcoin addresses. Features are: the number of transactions in which a certain address is detected as receiver/sender, the amount of BTC received/sent from/to this address, the balance, uniqueness (whether this address is used in just one transaction) and siblings. (This dataframe was composed of $16$,$134$,$407$ samples and $7$ features) \item \textit{1\_motif dataframe} contains the information directly extracted from the $1\_motif$ graph. In this case, each row contains: the amount received/sent in the transaction, the number of distinct addresses used for receiving/sending money, the number of similar received/sent transactions between the entities in the branch, the fee, and whether the branch realizes a Direct Loop or Direct Distinct path. (This dataframe was composed of $58$,$076$,$963$ samples and $9$ features) \item \textit{2\_motif dataframe} contains information gathered from the $2\_motif$ graph. The features analyzed are: the number of addresses as input/output for the first and second path in the $2\_motif$ graph, the amount received/sent in the first and second branch, the fee of both considered transactions, the number of similar sent transactions between the entities in the first and second branch, Direct Loop or Direct Distinct path for the first and second branch, and Direct Loop or Direct Distinct path considering the whole $2\_motif$ path (see Figure~\ref{fig:2motifs}).
(This dataframe was composed of $83$,$443$,$055$ samples and $18$ features) \end{itemize} \begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{images/2motifs.eps} \caption{\textit{2\_motif representation with extracted features highlighted}} \label{fig:2motifs} \end{figure} \section{Machine learning}\label{sec:machine} \subsection{Classifier Models}\label{classifier} To demonstrate the benefits and limits of our approach, we conducted two different experiments. Firstly, we created a simple classifier, called \textit{C\_entity} (Figure \ref{fig:classifier_ent}), based solely on the samples stored in the entity dataframe, containing the (seven) entity-related features that can be directly extracted from the blockchain. This classifier was evaluated via a cross-validation process (see Section \ref{evaluation}). Results from cross-validation were considered as our baseline classification. The simple classifier was implemented in three versions applying Adaboost, Random Forest and Gradient Boosting models, as these previously yielded good classification results for Bitcoin data \cite{ranshous2017exchange}. \begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{images/classifier_entity.eps} \caption{\textit{First experiment: simple entities classifier}} \label{fig:classifier_ent} \end{figure} In the second experiment, prior to entity classification according to the six classes (Table \ref{tab:walexpl}), we built three separate classifiers, based on the additionally available address, 1\_motif, and 2\_motif dataframes and their respective features ($7 + 9 + 18 = 34$ features). Outgoing information from these classifications was processed, as shown in Figure \ref{fig:enrichment}, in order to create a set of six new features for each classifier, which were then used to enrich (extend) the entity dataframe.
Finally, a new classifier \textit{C\_final} was generated to obtain the final entity classification based on this enriched entity dataframe and its $25$ features ($7$ belonged to the entity dataframe and $6 \times 3$ were generated from the three classifiers \textit{C\_address}, \textit{C\_motif1}, \textit{C\_motif2}). With this cascading approach, new entity-related characteristics were added to the entity dataframe, ultimately improving the classification as demonstrated in the following sections. The first step was to split the address, 1\_motif and 2\_motif dataframes into two parts called A-data set (for training) and B-data set (for testing) with a proportion of $70$/$30$. The A-data set was used to compute cross-validation of the three \textit{C\_address}, \textit{C\_motif1}, \textit{C\_motif2} classifier models (Figure \ref{fig:classifier}). After that, the B-data set was used as input for the trained classifiers \textit{C\_address}, \textit{C\_motif1}, \textit{C\_motif2} in order to obtain classification results based on completely new, unseen data. \begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{images/classifier.eps} \caption{\textit{Second experiment: cascading entities classifiers}} \label{fig:classifier} \end{figure} Classification results essentially assign one of the six possible output classes to each entry in the input dataframe. As each entry has its original (ground truth) label obtained from the WalletExplorer, we can join the input label and the computed output class and perform a group-by and count operation as illustrated in Figure \ref{fig:enrichment}: we count how many times a sample belonging to a particular entity has been detected in each of the considered classes.
This value is then normalized as indicated in the following formula: $$\forall \xi \in E: \qquad \frac{\lVert P_{\xi \vert j}\rVert}{\sum_{i=1}^{N}\lVert P_{\xi \vert i}\rVert} \times 100, \qquad j \in \{1, \ldots, N\}$$ where $E$ is the set of entities and $N$ represents the number of considered classes ($N=6$ in this study). The term $\lVert P_{\xi \vert j}\rVert$ represents how many times a sample originally labelled with entity $\xi$ generates a prediction belonging to class $j$, while the term $\sum_{i=1}^{N}\lVert P_{\xi \vert i}\rVert$ counts all the predictions generated from samples whose labelled input belongs to entity $\xi$. These normalized values form a dataframe containing $311$ samples (one for each known entity, as in the entity dataframe) and six new features, each representing the percentage of samples classified as belonging to one of the six classes. These features were added to the entity dataframe for data enrichment, constituting our cascading machine learning system. The elements of the enriched entity dataframe were used to implement and evaluate the final classifier, called \textit{C\_final}, and a cross-validation process (Section \ref{evaluation}) was applied to compute its performance. \begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{images/draw.eps} \caption{\textit{Steps to create the enriched entity dataframe applied to an example address dataframe}} \label{fig:enrichment} \end{figure} To allow for better comparison between experiments, we implemented all classifier models \textit{C\_address}, \textit{C\_motif1}, \textit{C\_motif2} and \textit{C\_final} with Adaboost, Random Forest and Gradient Boosting models. Specifically, all Adaboost classifiers were generated with the number of estimators set to $50$ and the learning rate set to $1$.
All Random Forest models were implemented with the number of estimators set to $10$, the Gini function to measure the quality of a split, and no maximum tree depth. All Gradient Boosting models were implemented with the number of estimators set to $100$, the learning rate set to $0.1$ and the maximum depth for limiting the number of nodes set to $3$. \subsection{Evaluation Metrics}\label{evaluation} All classification models were evaluated by extracting and comparing classification metrics via a cross-validation process. The goal of cross-validation is to analyze the prediction capabilities of the model in order to detect problems such as over-fitting or selection bias \cite{cawley2010over}. Here, we used stratified K-fold cross-validation with $K=5$. This method involves dividing the whole data set into K equal partitions or folds. Each fold is constructed to be a good representative sample of the whole population by keeping the same proportion of classes present in the original data set (stratification). Then, K-1 folds are used to train the model and the one left-out fold is used to evaluate the predictions obtained by the trained model. The entire process is repeated K times, until each fold has been left out once. During this process, the following metrics were computed: \begin{itemize} \item \textit{Accuracy} or \textit{Score} is defined as the number of correct predictions divided by the total number of predictions and is given as a percentage \item \textit{Precision} is the number of correct positive predictions (true positives) divided by the total number of positive predictions.
It represents a measure of a classifier's exactness, given as a value between $0$ and $1$, with $1$ relating to high precision \item \textit{Recall} is the number of true positives divided by the total number of actual positive instances. It represents a measure of a classifier's completeness, given as a value between $0$ and $1$ \item \textit{F\textsubscript{1}-score} is the harmonic mean of Precision and Recall. It takes values between $0$ and $1$, with $1$ relating to perfect Precision and Recall \[F_1\text{-score} = 2 \cdot \frac{\mathit{Precision} \cdot \mathit{Recall}}{\mathit{Precision} + \mathit{Recall}}\] \item \textit{Matthews Correlation Coefficient (MCC)} is a metric yielding easy comparison with respect to a random baseline, suitable for unbalanced classes. It takes values between $-1$ and $+1$. A coefficient of $+1$ represents a perfect prediction, $0$ an average random prediction and $-1$ an inverse prediction. As shown in \cite{gorodkin2004comparing}, let $K$ be the number of classes and $C$ a $K \times K$ confusion matrix; then the $MCC$ can be calculated as: \[\mathit{MCC} = \frac{\sum_{k}\sum_{l}\sum_{m} C_{kk}C_{lm} - C_{kl}C_{mk}}{\sqrt{\sum_{k}\bigl(\sum_{l}C_{kl}\bigr)\bigl(\sum_{f \neq k}\sum_{g}C_{fg}\bigr)} \, \sqrt{\sum_{k}\bigl(\sum_{l}C_{lk}\bigr)\bigl(\sum_{f}\sum_{g \neq k}C_{fg}\bigr)}}\] \end{itemize} In Section \ref{sec:result}, results for the baseline model (\textit{C\_entity}) and for the final model obtained after cross-validation using the enriched dataframe (\textit{C\_final}) are presented and compared. We report global metric values for Accuracy/Score and \textit{MCC}, averaged over the $K=5$ cross-validation runs, and per-class values for Precision, Recall and F\textsubscript{1}-score when evaluating the final models.
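The evaluation protocol above (stratified 5-fold cross-validation reporting average Score and \textit{MCC}) can be sketched with scikit-learn as follows; the feature matrix and labels here are synthetic stand-ins, not the study's data:

```python
# Sketch of the evaluation protocol: stratified 5-fold cross-validation
# reporting Accuracy/Score and the multi-class Matthews Correlation
# Coefficient. X and y below are synthetic placeholders for the
# (enriched) entity dataframe.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, matthews_corrcoef

rng = np.random.RandomState(0)
X = rng.rand(120, 7)          # 7 entity features (synthetic)
y = np.arange(120) % 6        # 6 balanced entity classes (synthetic)

scores, mccs = [], []
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    clf = GradientBoostingClassifier(n_estimators=100,
                                     learning_rate=0.1, max_depth=3)
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    scores.append(accuracy_score(y[test_idx], pred))
    mccs.append(matthews_corrcoef(y[test_idx], pred))

print(f"Score: {100 * np.mean(scores):.2f}% +/- {100 * np.std(scores):.2f}%, "
      f"MCC: {np.mean(mccs):.2f}")
```

On the synthetic random data above, the metrics naturally hover around chance level; on the actual enriched entity dataframe this same loop produces the Score/Std/MCC values reported in Section \ref{sec:result}.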
\subsection{Hardware and Software Configuration}\label{configuration} All analyses were run on a cluster of three virtual machines, each one with $16$ Intel(R) Xeon(R) Silver $4114$ CPUs @ $2.20$ GHz, $64$ GB of DDR4 RAM at $2,666$ MHz, and a $500$ GB SATA hard disk. Apache Spark\footnote{https://spark.apache.org/} v$2.4.0$, set up in cluster mode, was used to manage data stored using Apache Hadoop\footnote{https://hadoop.apache.org/}. The various classifier models were implemented and evaluated using Python's Scikit-learn\footnote{https://scikit-learn.org/} library. All scripts were executed within the Jupyter-notebook\footnote{https://jupyter.org/} environment. \section{Results}\label{sec:result} Considering the simple classifier \textit{C\_entity} from the first experiment, the Gradient Boosting model yielded a better average score ($61.90\%$ accuracy) and \textit{MCC} ($0.44$) than the Random Forest and Adaboost classifiers, as shown in Table \ref{tab:cross-val} (upper section). However, with overall low \textit{MCC} values for all classifiers (between $0.22$ and $0.44$), these scores were not sufficient to achieve reliable entity characterization. This led us to introduce our cascading machine learning approach, enriching the initial entity dataframe with information gathered from prior classifications in the second experiment.
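The enrichment step described in Section \ref{classifier} (group-by, count, and per-entity normalization of prediction labels) can be sketched with pandas; the entity names and predictions below are purely illustrative:

```python
# Sketch of the dataframe-enrichment step (cf. the normalization formula in
# Section 4.1): for every entity, count how often its samples were predicted
# as each class, normalize row-wise to percentages, and use the six
# resulting columns as new features. All names and values are illustrative.
import pandas as pd

CLASSES = ["Ex", "Gmb", "Mrk", "Pool", "Mxr", "Serv"]

# One row per classified sample: ground-truth entity + predicted class.
preds = pd.DataFrame({
    "entity":    ["BitPay", "BitPay", "BitPay", "Kraken", "Kraken"],
    "predicted": ["Serv",   "Serv",   "Ex",     "Ex",     "Ex"],
})

counts = (preds.groupby(["entity", "predicted"]).size()
               .unstack(fill_value=0)
               .reindex(columns=CLASSES, fill_value=0))
pct = counts.div(counts.sum(axis=1), axis=0) * 100  # row-normalize to %

print(pct)
```

Each row of `pct` (one per entity) then contributes six new percentage features that are concatenated onto the entity dataframe before training \textit{C\_final}.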
\begin{table}[!htbp] \centering \begin{tabular}{lcccc} \cline{1-5} \multicolumn{1}{c}{Model} & Classifier & Score \% & Std \% & MCC \\ \cline{1-5} \textit{Adaboost} & \textit{C\_entity} & 45.63 & 6.34 & 0.22 \\ \textit{Random Forest} & \textit{C\_entity} & 59.71 & 1.82 & 0.41 \\ \textit{Gradient Boosting} & \textit{C\_entity} & 61.90 & 1.36 & 0.44 \\ \cline{1-5} \textit{Adaboost} & \textit{C\_final} & 78.84 & 1.76 & 0.76 \\ \textit{Random Forest} & \textit{C\_final} & 98.04 & 1.22 & 0.97 \\ \textit{Gradient Boosting} & \textit{C\_final} & 99.68 & 0.63 & 0.99 \\ \cline{1-5} \end{tabular} \caption{\textit{Average performance of classifiers over five cross-validation repetitions for the simple \textit{C\_entity} model (above) and for the final model after data enrichment via cascading machine learning, \textit{C\_final} (below)}} \label{tab:cross-val} \end{table} Analyzing the \textit{C\_address}, \textit{C\_motif1} and \textit{C\_motif2} classifiers separately for entity characterization, Table \ref{tab:model_score} shows that outgoing information from the Random Forest classifier proved to be more accurate than information from the Gradient Boosting and Adaboost classifiers (accuracy scores \textgreater$90\%$ for Random Forest). Notably, using only information from the address dataframe, the Random Forest classifier \textit{C\_address} could already achieve an average global accuracy of $\sim96\%$. Based on these results, we only used results obtained from Random Forest classifiers for the subsequent entity dataframe enrichment. Random Forest classifiers not only proved to be the best in terms of accuracy, but also performed with the highest speed among the considered classification models.
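For reference, the three model configurations specified in Section \ref{classifier} correspond to the following scikit-learn instantiations (a sketch; all remaining parameters are left at the library defaults):

```python
# The three ensemble configurations stated in Section 4.1, expressed as
# scikit-learn instantiations. Hyperparameters are those given in the text;
# everything else stays at the library defaults.
from sklearn.ensemble import (AdaBoostClassifier, RandomForestClassifier,
                              GradientBoostingClassifier)

models = {
    "Adaboost": AdaBoostClassifier(n_estimators=50, learning_rate=1.0),
    "Random Forest": RandomForestClassifier(n_estimators=10,
                                            criterion="gini",
                                            max_depth=None),
    "Gradient Boosting": GradientBoostingClassifier(n_estimators=100,
                                                    learning_rate=0.1,
                                                    max_depth=3),
}
```

The same three configurations are reused for \textit{C\_address}, \textit{C\_motif1}, \textit{C\_motif2} and \textit{C\_final}, which keeps the comparison across experiments consistent.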
\begin{table}[!htbp] \centering \begin{tabular}{lccc} \cline{1-4} \multicolumn{1}{c}{Model} & \textit{C\_address} \% & \textit{C\_motif1} \% & \textit{C\_motif2} \% \\ \cline{1-4} \textit{Adaboost} & 61.54 & 72.69 & 78.27 \\ \textit{Random Forest} & 95.73 & 94.14 & 90.88 \\ \textit{Gradient Boosting} & 83.23 & 83.52 & 83.54 \\ \cline{1-4} \end{tabular} \caption{\textit{Average global C\_address, C\_motif1 and C\_motif2 classifier accuracy calculated via 5-fold cross-validation}} \label{tab:model_score} \end{table} The final classifiers \textit{C\_final} were fed with the enriched entity dataframe, which comprised the original features from the entity dataframe and included as new features the class predictions obtained from \textit{C\_address}, \textit{C\_motif1} and \textit{C\_motif2} for the respective test data sets. From Table \ref{tab:cross-val} (lower section) it is evident that the average score improved significantly by exploiting the information obtained via our cascading approach. Random Forest and Gradient Boosting classifiers again performed better than the Adaboost model, reaching scores of more than $98\%$ ($\sim$38 percentage points higher than the corresponding baseline accuracies from \textit{C\_entity}). Furthermore, classification results were more stable during cross-validation, with low standard deviations between $0.63\%$ and $1.76\%$, and the \textit{MCC} reached values close to $1.0$, corresponding to close-to-perfect class prediction.
\begin{table*} \centering \begin{tabular}{lcccc|ccc} \cline{3-8} & \multicolumn{1}{l}{} & \multicolumn{3}{c|}{\textit{C\_entity} model} & \multicolumn{3}{c}{\textit{C\_final} model} \\ \hline \multicolumn{1}{c}{Class} & Model & Precision & Recall & F1-score & Precision & \multicolumn{1}{l}{Recall} & F1-score \\ \hline \textit{Exchange} & Adaboost & 0.51 & 0.68 & 0.57 & 0.77 & 0.78 & 0.77 \\ \textit{Gambling} & Adaboost & 0.22 & 0.14 & 0.17 & 0.75 & 1.00 & 0.85 \\ \textit{Market} & Adaboost & 0.05 & 0.15 & 0.08 & 0.40 & 0.30 & 0.33 \\ \textit{Mining Pool} & Adaboost & 0.20 & 0.16 & 0.17 & 0.11 & 0.2 & 0.14 \\ \textit{Mixer} & Adaboost & 0.69 & 0.78 & 0.71 & 1.00 & 0.98 & 0.99 \\ \textit{Service} & Adaboost & 0.20 & 0.10 & 0.13 & 0.95 & 0.95 & 0.95 \\ \hline \textit{Exchange} & Random Forest & 0.60 & 0.77 & 0.67 & 0.96 & 1.00 & 0.98 \\ \textit{Gambling} & Random Forest & 0.54 & 0.50 & 0.51 & 1.00 & 1.00 & 1.00 \\ \textit{Market} & Random Forest & 0 & 0 & 0 & 1.00 & 0.85 & 0.91 \\ \textit{Mining Pool} & Random Forest & 0.68 & 0.50 & 0.56 & 1.00 & 0.92 & 0.96 \\ \textit{Mixer} & Random Forest & 0.89 & 0.78 & 0.82 & 1.00 & 1.00 & 1.00 \\ \textit{Service} & Random Forest & 0 & 0 & 0 & 1.00 & 0.93 & 0.96 \\ \hline \textit{Exchange} & Gradient Boosting & 0.61 & 0.80 & 0.69 & 1.00 & 1.00 & 1.00 \\ \textit{Gambling} & Gradient Boosting & 0.59 & 0.53 & 0.55 & 0.99 & 1.00 & 0.99 \\ \textit{Market} & Gradient Boosting & 0.10 & 0.05 & 0.06 & 1.00 & 1.00 & 1.00 \\ \textit{Mining Pool} & Gradient Boosting & 0.38 & 0.40 & 0.38 & 1.00 & 1.00 & 1.00 \\ \textit{Mixer} & Gradient Boosting & 0.92 & 0.84 & 0.87 & 1.00 & 1.00 & 1.00 \\ \textit{Service} & Gradient Boosting & 0 & 0 & 0 & 1.00 & 0.93 & 0.96 \\ \hline \end{tabular} \caption{\textit{Average Precision, Recall and F\textsubscript{1}-score calculated in each model implementation for each class}} \label{tab:test-metrics} \end{table*} In Table \ref{tab:test-metrics}, we present per-class Precision, Recall and 
F\textsubscript{1}-scores calculated for \textit{C\_entity} (baseline) and \textit{C\_final} (enriched) classifiers for each classification model. Results demonstrate that the simple classifier \textit{C\_entity}, independently of the classification model used, had problems detecting \textit{Service} and \textit{Market} entities (calculated metrics are $0$ or have very low values). Note, however, that these two classes are the least represented in terms of distinct entities in the original data set. Random Forest and Gradient Boosting classifiers showed overall good performance in detecting \textit{Mixer} entities with the \textit{C\_entity} approach (F\textsubscript{1}-scores \textgreater$0.8$). By exploiting the cascading machine learning implementation, however, all classifiers improved their classification performance for each class, with most values being close to $1.0$. Only the Adaboost model kept having problems with the classification of \textit{Mining Pool} and \textit{Market} entities. The Random Forest and Gradient Boosting models instead yielded excellent values for Precision, Recall and F\textsubscript{1}-score for each class. Overall best classification scores were achieved by the \textit{C\_final} implementation with Gradient Boosting models. Data enrichment through prior classification and cascading thus clearly had a highly beneficial impact on the classification ability of Gradient Boosting, motivating a further analysis of the importance of individual features from the enriched entity dataframe. In a next step, we therefore calculated a feature \textit{importance score} for the enriched entity dataframe. Generally, the feature importance score indicates how useful or valuable each feature was in the construction of the model. The more often an attribute is used to make key decisions, the greater its relative importance score.
Importance was explicitly calculated through Python's Scikit-learn library for each attribute in the data set, allowing features to be ranked and compared to each other. Figure \ref{fig:features} shows a list of the top fifteen features for the \textit{C\_final} Gradient Boosting classifier. All fifteen important features were created during the prior classifications by \textit{C\_address}, \textit{C\_motif1} and \textit{C\_motif2}. These features represent how address, 1\_motif, or 2\_motif data related to a certain entity were previously classified. This highlights again that the information brought in from prior classifications (the first step of the cascade) clearly contributes to much improved entity characterization. \begin{figure}[] \centering \includegraphics[width=\linewidth]{images/importance_score.eps} \caption{\textit{Top 15 important features from the GB classifier}} \label{fig:features} \end{figure} \section{Conclusion and Future work}\label{sec:conclusions} In this paper, we present a novel approach to attacking Bitcoin anonymity through entity characterization. Specifically, we demonstrate how a cascading machine learning model, combined with an adequate set of input features directly derived from Bitcoin blockchain data (entity and address data) as well as derived via the 1\_motif and 2\_motif concepts introduced by Ranshous et al. \cite{ranshous2017exchange}, can lead to impressive classification performance for a number of relevant Bitcoin entity classes. In fact, we were able to obtain an average global accuracy score of $99.68\%$ with a low standard deviation of $0.63\%$ and a Matthews Correlation Coefficient (\textit{MCC}) of $0.99$ over 5-fold cross-validation for a Gradient Boosting model using our cascading approach.
These final models were indeed able to predict each of the six entity classes used (Exchange, Gambling, Market, Mining Pool, Mixer, Service) with Precision, Recall and F\textsubscript{1}-score values close to $1.0$. Ranshous et al. \cite{ranshous2017exchange} obtained similar results using Random Forest and Adaboost classifiers; however, their study was limited to exchange address classification. Jourdan et al. \cite{jourdan2018characterizing} generally obtained lower values for per-class F\textsubscript{1}-score and Precision, ranging between $0.67$ and $1.0$, using Gradient Boosting; moreover, their approach involved a complex step of model hyper-parameter calibration and required a total of $315$ input features. Our approach applies one more classification step in the ``classification cascade'', generating a set of new entity-related features used for the final classification, but we do not require extensive parameter tuning. Most importantly, we only use $34$ features for the initial classification step (involving address, 1\_motif and 2\_motif data) and finally $7$ features from the entities data set plus $3 \times 6 = 18$ new features obtained as outgoing information from the initial classification step. The final classification is thus based on only $25$ features, which amounts to less than $10\%$ of the features used by Jourdan et al. Our future work will focus on investigating feature importance in more depth, in order to further reduce the number of relevant features required for obtaining high entity classification performance. This will facilitate the process of attacking Bitcoin anonymity further. One major drawback of our approach is that we were not able to characterize entities behaving as normal users, as this information is not currently available as ground truth data in the WalletExplorer. We had to remove all entities that have not yet been classified in the WalletExplorer from our analysis.
Nevertheless, we were able to detect, with very high classification scores, six classes of key Bitcoin services that have previously been associated with illicit financial operations. We therefore believe that our study can contribute to improving crime investigation and may form a base for developing effective tools assisting law enforcement agencies in uncovering illegal activities within the Bitcoin network. \section*{ACKNOWLEDGMENT} This work was partially funded by the European Commission through the Horizon 2020 research and innovation program, as part of the ``TITANIUM'' project (grant agreement No 740558). \addtolength{\textheight}{-12cm} \bibliographystyle{splncs04}
\section{Introduction} A core challenge in computational game theory is the problem of learning strategies that approximate Nash equilibrium in very large imperfect-information games such as Starcraft~\citep{alphastar}, dark chess~\citep{zhang2021subgame}, and Stratego~\citep{mcaleer2020pipeline}. Due to the size of these games, tabular game-solving algorithms such as counterfactual regret minimization (CFR) are unable to produce such equilibrium strategies. To sidestep the issue, in the past \emph{stochastic} methods such as \emph{Monte-Carlo CFR (MCCFR)} have been proposed. These methods use computationally inexpensive unbiased estimators of the regret (i.e., utility gradient) of each player, trading off speed for convergence guarantees that hold with high probability rather than in the worst case. Several unbiased estimation techniques of utility gradients are known. Some, such as \emph{external sampling}, produce low-variance gradient estimates that are dense, and therefore are prohibitive in the settings mentioned above. Others, such as \emph{outcome sampling}, produce high-variance estimates that are sparse and can be computed given only the realization of play, and are therefore more appropriate for massive games. However, even outcome-sampling MCCFR is inapplicable in practice. First, since it is a tabular method, it can only update regret on information sets that it has seen during training. In very large games, only a small fraction of all information sets will be seen during training. Therefore, generalization (via neural networks) is necessary. Second, to achieve unbiasedness of the utility gradient estimates, outcome-sampling MCCFR uses importance sampling (specifically, it divides the utility of each terminal state by a reach probability, which is often tiny), leading to estimates with extremely large magnitudes and high variance. 
This drawback is especially problematic when MCCFR is implemented using function approximation, as the high variance of the updates can cause instability of the neural network training. Deep CFR~\citep{deep_cfr} addresses the first shortcoming above by training a neural network to estimate the regrets accumulated by outcome-sampling MCCFR, but is vulnerable to the second shortcoming, which causes the neural network training procedure to be unstable. DREAM~\citep{steinberger2020dream} improves on Deep CFR, \emph{partially} addressing the second shortcoming by using a history-based value function as a baseline~\citep{Schmid19VRMCCFR}. This baseline greatly reduces the variance in the updates and is shown to have much better performance than simply regressing on the MCCFR updates. However, DREAM still uses importance sampling to remain unbiased. So, while DREAM was shown to work in small artificial poker variants, it is still vulnerable to the high variance of the utility gradients; indeed, we demonstrate that in games with long horizons and/or large action spaces, this importance sampling term causes DREAM to fail. In this paper, we introduce \textit{\textbf{E}schewing importance \textbf{S}ampling by \textbf{C}omputing a \textbf{H}istory value function to \textbf{E}stimate \textbf{R}egret (ESCHER)}, a method that is unbiased, low variance, and does not use importance sampling. Similar to DREAM, ESCHER uses a history-dependent value function, but instead of using it as a baseline, ESCHER uses it directly as an estimator of the counterfactual value. To remove the need to weight by the reach probability of the current information state, ESCHER samples actions from a fixed sampling policy that does not change from one iteration to the next. Since this distribution is static, our fixed sampling policy simply weights certain information sets more than others.
When the fixed sampling policy is close to the balanced policy (i.e., one where each leaf is reached with equal probability), these weighting terms minimally affect the overall convergence of ESCHER with high probability. In experiments with a tabular version of ESCHER that uses an oracle value function, we find that ESCHER performs comparably to OS-MCCFR and tabular DREAM with an oracle value function, but has orders-of-magnitude lower estimated regret variance. In experiments with a deep learning version of ESCHER on the large games of phantom tic tac toe and dark hex, we find that ESCHER outperforms NFSP and DREAM, and that the performance difference becomes dramatic as the size of the game increases. \section{Background} We consider extensive-form games with perfect recall~\citep{OsborneRubinstein94,hansen2004dynamic}. An extensive-form game progresses through a sequence of player actions, and has a \emph{world state} $w \in \mathcal{W}$ at each step. In an $N$-player game, $\mathcal{A} = \mathcal{A}_1 \times \cdots \times \mathcal{A}_N$ is the space of joint actions for the players. $\mathcal{A}_i(w) \subseteq \mathcal{A}_i$ denotes the set of legal actions for player $i \in \mathcal{N} = \{1, \ldots, N\}$ at world state $w$ and $a = (a_1, \ldots, a_N) \in \mathcal{A}$ denotes a joint action. At each world state, after the players choose a joint action, a transition function $\mathcal{T}(w, a) \in \Delta^\mathcal{W}$ determines the probability distribution of the next world state $w'$. Upon transition from world state $w$ to $w'$ via joint action $a$, player $i$ makes an \emph{observation} $o_i = \mathcal{O}_i(w,a,w')$. In each world state $w$, player $i$ receives a utility $\mathcal{U}_i(w)$. The game ends when the players reach a terminal world state. In this paper, we consider games that are guaranteed to end in a finite number of actions.
A \emph{history} is a sequence of actions and world states, denoted $h = (w^0, a^0, w^1, a^1, \ldots, w^t)$, where $w^0$ is the known initial world state of the game. $\mathcal{U}_i(h)$ and $\mathcal{A}_i(h)$ are, respectively, the utility and set of legal actions for player $i$ in the last world state of a history $h$. An \emph{information set} for player $i$, denoted by $s_i$, is a sequence of that player's observations and actions up until that time $s_i(h) = (a_i^0, o_i^1, a_i^1, \ldots, o_i^t)$. Define the set of all information sets for player $i$ to be $\mathcal{I}_i$. The set of histories that correspond to an information set $s_i$ is denoted $\mathcal{H}(s_i) = \{ h: s_i(h) = s_i \}$, and it is assumed that they all share the same set of legal actions $\mathcal{A}_i(s_i(h)) = \mathcal{A}_i(h)$. For simplicity we often drop the subscript $i$ for an information set $s$ when the player is implied. A player's \emph{strategy} $\pi_i$ is a function mapping from an information set to a probability distribution over actions. A \emph{strategy profile} $\pi$ is a tuple $(\pi_1, \ldots, \pi_N)$. All players other than $i$ are denoted $-i$, and their strategies are jointly denoted $\pi_{-i}$. A strategy for a history $h$ is denoted $\pi_i(h) = \pi_i(s_i(h))$ and $\pi(h)$ is the corresponding strategy profile. When a strategy $\pi_i$ is learned through RL, we refer to the learned strategy as a \emph{policy}. The \emph{expected value (EV)} $v_i^{\pi}(h)$ for player $i$ is the expected sum of future utilities for player $i$ in history $h$, when all players play strategy profile $\pi$. The EV for an information set $s_i$ is denoted $v_i^{\pi}(s_i)$ and the EV for the entire game is denoted $v_i(\pi)$. A \emph{two-player zero-sum} game has $v_1(\pi) + v_2(\pi) = 0$ for all strategy profiles $\pi$. The EV for an action in an information set is denoted $v_i^{\pi}(s_i,a_i)$. 
A \emph{Nash equilibrium (NE)} is a strategy profile such that, if all players played their NE strategy, no player could achieve higher EV by deviating from it. Formally, $\pi^*$ is a NE if $v_i(\pi^*) = \max_{\pi_i}v_i(\pi_i, \pi^*_{-i})$ for each player $i$. The \emph{exploitability} $e(\pi)$ of a strategy profile $\pi$ is defined as $e(\pi) = \sum_{i \in \mathcal{N}} \max_{\pi'_i}v_i(\pi'_i, \pi_{-i})$. A \emph{best response (BR)} strategy $\mathbb{BR}_i(\pi_{-i})$ for player $i$ to a strategy $\pi_{-i}$ is a strategy that maximally exploits $\pi_{-i}$: $\mathbb{BR}_i(\pi_{-i}) = \arg\max_{\pi_i}v_i(\pi_i, \pi_{-i})$. An \emph{$\bm{\epsilon}$-best response ($\bm{\epsilon}$-BR)} strategy $\mathbb{BR}^\epsilon_i(\pi_{-i})$ for player $i$ to a strategy $\pi_{-i}$ is a strategy that is at most $\epsilon$ worse for player $i$ than the best response: $v_i(\mathbb{BR}^\epsilon_i(\pi_{-i}), \pi_{-i}) \ge v_i(\mathbb{BR}_i(\pi_{-i}), \pi_{-i}) - \epsilon$. An \emph{$\bm{\epsilon}$-Nash equilibrium ($\bm{\epsilon}$-NE)} is a strategy profile $\pi$ in which, for each player $i$, $\pi_i$ is an $\epsilon$-BR to $\pi_{-i}$. \subsection{Counterfactual Regret Minimization (CFR)} In this section we review the \textit{counterfactual regret minimization (CFR)} framework, which we will leverage and extend in the rest of the paper. All superhuman poker AIs have used advanced variants of the framework as part of their architectures~\citep{bowling2015heads,brown2018superhuman,brown2019superhuman}. CFR is also the basis of several reinforcement learning algorithms described in Section~\ref{sec:relwork}. Define $\eta^\pi(h)$ to be the probability of reaching history $h$ under joint policy $\pi$, and let $z$ denote a terminal history. Define $Z(s)$ to be the set of terminal histories $z$ that can be reached from information state $s$ and define $z[s]$ to be the unique history $h \in s$ that is a prefix of $z$.
Define \begin{equation} \label{eq:history-val} v_{i}(\pi, h) = \sum_{z \sqsupset h}\eta^\pi(h, z)u_i(z) \end{equation} to be the expected value under $\pi$ for player $i$ having reached $h$. Note that this value function takes as input the full-information history $h$ and not an information set. Define \begin{equation} \label{eq:cf-val} v_i^c(\pi, s) = \sum_{z \in Z(s)}\eta^\pi_{-i}(z[s])\eta^\pi(z[s], z)u_i(z) = \sum_{h \in s}\eta^\pi_{-i}(h)v_{i}(\pi, h) \end{equation} to be the \emph{counterfactual value} for player $i$ at state $s$ under the joint strategy $\pi$. Define the strategy $\pi_{s \rightarrow a}$ to be a modified version of $\pi$ where $a$ is played at information set $s$, and the counterfactual state-action value $q^c_i(\pi, s, a) = v^c_i(\pi_{s \rightarrow a}, s)$. For any state $s$, strategy $\pi$, and action $a \in \mathcal{A}(s)$, one can define a local \emph{counterfactual regret} for not switching to playing $a$ at $s$ as $r^c(\pi, s, a) = q_i^c(\pi, s, a) - v_i^c(\pi, s)$. Counterfactual regret minimization (CFR)~\citep{cfr} is a strategy iteration algorithm that produces a sequence of policies: $\{ \pi_1, \pi_2, \cdots, \pi_T \}$. Each policy $\pi_{t+1}(s)$ is derived directly from a collection of cumulative regrets $R_t(s, a) = \sum_{t'=1}^t r^c(\pi_{t'}, s, a)$, for all $a \in \mathcal{A}(s)$, using regret-matching~\citep{hart2000simple}. In two-player zero-sum games, the average policy $\bar{\pi}_T$ converges to an approximate Nash equilibrium at a rate of $e(\bar{\pi}_T) \le O(1/\sqrt{T})$. \subsection{Monte Carlo Counterfactual Regret Minimization (MCCFR)} In the standard CFR algorithm, the quantities required to produce new policies in Equations~\ref{eq:history-val} and~\ref{eq:cf-val} require full traversals of the game to compute exactly. Monte Carlo CFR~\citep{lanctot2009monte} is a \emph{stochastic} version of CFR which instead \emph{estimates} these quantities.
In particular, MCCFR uses a sampling approach which specifies a distribution over blocks $Z_j$ of terminal histories such that $\cup_j Z_j = \mathcal{Z}$, the set of terminal histories. Upon sampling a block $j$, a certain \emph{sampled counterfactual value} $\hat{v}^c(\pi, s~|~j)$ (defined in detail later in this section) is computed for all prefix histories that occur in $Z_j$. Then, estimated regrets are accumulated and new policies derived as in CFR. The main result is that $\mathbb{E}[\hat{v}^c(\pi, s~|~j)] = v^c(\pi, s)$, so MCCFR is an unbiased approximation of CFR, and inherits its convergence properties albeit under a probabilistic guarantee. Blocks are sampled via a sampling policy $\tilde{\pi}$, which is commonly a function of the players' joint policy $\pi$. Two sampling variants were defined in the original MCCFR paper: \emph{outcome sampling} (OS-MCCFR) and \emph{external sampling} (ES-MCCFR). External sampling samples only the opponent's (and chance's) choices; hence, it requires a forward model of the game to recursively traverse over all of the subtrees under the player's actions. Outcome sampling is the most extreme sampling variant, where blocks consist of a single terminal history: it is the only model-free variant of MCCFR compliant with the standard reinforcement learning loop, where the agent learns entirely from experience with the environment. The OS-MCCFR counterfactual value estimator, when the opponent samples from their current policy (as is commonly done), is given as follows: \begin{equation} \label{mccfr_eq} \begin{split} \hat{v}_{i}(\pi, s | z) & = \frac{\eta^{\pi_{-i}}(z[s])\eta^\pi(z[s], z)u_i(z)}{\eta^{\tilde{\pi}}(z)} = \frac{1}{\eta^{\tilde{\pi}_i}(z[s])}\frac{\eta^{\pi_i}(z[s], z)}{\eta^{\tilde{\pi}_i}(z[s], z)}u_i(z) \\ \end{split} \end{equation} The importance sampling term that is used to satisfy the unbiasedness of the values can have a significant detrimental effect on the convergence rate~\cite{Gibson12probing}.
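To make the effect of these correction terms concrete, the simplified form of the estimator above can be written directly as a product of reach-probability ratios. The sketch below is ours (argument names are not from the paper) and takes hand-supplied reach probabilities; note how a small sampled reach $\eta^{\tilde{\pi}_i}(z[s])$ scales the estimate up correspondingly:

```python
def os_mccfr_estimate(u_z, reach_sampled_to_s, reach_pi_s_to_z, reach_sampled_s_to_z):
    """Sampled counterfactual value, simplified OS-MCCFR form:
    (1 / eta^{~pi_i}(z[s])) * (eta^{pi_i}(z[s], z) / eta^{~pi_i}(z[s], z)) * u_i(z).
    """
    return (1.0 / reach_sampled_to_s) * (reach_pi_s_to_z / reach_sampled_s_to_z) * u_z

# On-policy sampling below s with full reach to s gives back the raw utility...
print(os_mccfr_estimate(1.0, 1.0, 0.5, 0.5))   # 1.0
# ...but a rarely-sampled infostate inflates the estimate (and hence its variance).
print(os_mccfr_estimate(1.0, 0.01, 0.5, 0.5))  # 100.0
```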
Variance reduction techniques provide some empirical benefit~\cite{Schmid19VRMCCFR,Davis19}, but have not been evaluated on games with long trajectories where the importance corrections have their largest impact. \subsection{Deep Counterfactual Regret Minimization} Deep CFR~\citep{deep_cfr, steinberger2019single, li2019double} is a method that uses neural networks to scale MCCFR to large games. Deep CFR performs external sampling MCCFR and trains a regret network $R_i(s, a | \psi)$ on a replay buffer of information sets and estimated cumulative regrets. The regret network is trained to approximate the cumulative regrets seen so far at that information state. The estimated counterfactual regrets are computed as in MCCFR, using importance sampling or external sampling. \subsection{DREAM} DREAM~\citep{steinberger2020dream} builds on Deep CFR and approximates OS-MCCFR with deep neural networks. Like Deep CFR, it trains a regret network $R_i(s, a | \psi)$ on a replay buffer of information sets and estimated cumulative regrets. Additionally, in order to limit the high variance of OS-MCCFR, DREAM uses a learned history value function $q_i(\pi, h, a | \theta)$ and uses it as a baseline~\citep{Schmid19VRMCCFR}. While the baseline helps remove variance in the estimation of future utility, in order to remain unbiased DREAM must use importance sampling as in OS-MCCFR. We show empirically that the variance of DREAM's counterfactual value estimator, although lower than that of OS-MCCFR, is often quite high, even in small games and with an oracle history value function. Such a high-variance estimator can make neural network training difficult. In contrast, ESCHER has no importance sampling term and instead directly uses the learned history value function $q_i(\pi, h, a | \theta)$ to estimate regrets.
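A toy simulation (ours, not an experiment from the paper) makes the variance gap concrete: estimating a value of $0.3$ at an infostate that the sampler reaches only $5\%$ of the time, the importance-weighted estimator is unbiased but its variance grows on the order of the inverse reach probability, whereas an oracle (or well-trained) value function would return the value directly with no sampling variance:

```python
import numpy as np

rng = np.random.default_rng(0)
true_value, reach = 0.3, 0.05  # toy numbers: value 0.3, infostate sampled 5% of the time

# Importance-weighted estimate: u / reach on trajectories that reach s, else 0.
reached = rng.random(100_000) < reach
is_estimates = np.where(reached, true_value / reach, 0.0)

print(is_estimates.mean())  # close to 0.3: the estimator is unbiased
print(is_estimates.var())   # large (about v^2/reach - v^2): variance blows up as reach shrinks
```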
\section{Tabular ESCHER with Oracle Value Function}\label{sec:tabular} In this section we define a tabular version of our proposed algorithm, \emph{ESCHER}, where we assume oracle access to a value function. While in practice this tabular algorithm will not compare well to existing approaches due to the expense of generating an oracle value function, we show that if we assume access to an oracle value function at no cost, then our tabular method is sound and converges to a Nash equilibrium with high probability. In the next section we introduce our main method, which is a deep version of this tabular method and learns a neural network value function from data. As shown in Equation \ref{mccfr_eq}, the OS-MCCFR estimator can be seen as containing two separate terms. The first $1/\eta^{\tilde{\pi}_i}(z[s])$ term ensures that each information set is updated equally often in expectation. The second $\eta^{\pi_i}(z[s], z)u_i(z)/\eta^{\tilde{\pi}_i}(z[s], z)$ term is an unbiased estimator of the history value $v_i(\pi, z[s])$. The main idea behind ESCHER is to remove the first importance sampling term by ensuring that the sampling distribution for the update player remains fixed across iterations, and to replace the second term with a history value function $\hat{v}_i(\pi, z[s])$. Similar to OS-MCCFR, the tabular version of ESCHER iteratively updates each player's policy by sampling a single trajectory. When updating a player's strategy, we sample from a fixed distribution (for example the uniform distribution) for that player and sample from the opponent's current policy for the other player. As in MCCFR, we update the estimated regret for all actions in each information state reached in the trajectory. However, unlike in MCCFR, we do not use the terminal utility to estimate regret but instead use the oracle history value function.
This reduces variance in the update because the oracle value function at a given history-action pair will always be the same if the policies are the same. ESCHER samples from the opponent's current strategy when it is their turn but samples from a fixed strategy that visits every information set with roughly equal probability when it is the update player's turn. As a result, the expected value of the history value is equal to the counterfactual value scaled by a term that weights certain information sets up or down based on the fixed sampling policy. Formally, define the fixed sampling policy $b_i(s, a)$ to be any policy that remains constant across iterations and puts positive probability on every action. This fixed sampling policy can be one of many distributions, such as one that samples uniformly over available actions at every information set. An interesting open research direction is finding good fixed sampling policies. In this paper, our fixed sampling policy uniformly samples over actions, which is somewhat similar to the robust sampling technique introduced in~\cite{li2019double}. When updating player $i$, we construct a joint fixed sampling policy $\tilde{\pi}^i(s, a)$ to be \begin{equation}\label{sampling_dist} \tilde{\pi}^i(s, a) = \begin{cases} b_i(s, a), & \text{if it's the update player $i$'s turn} \\ \pi_{-i}(s, a), & \text{otherwise} \end{cases} \end{equation} We use a fixed sampling policy because it allows us to remove any importance sampling in our estimator. Unlike OS-MCCFR, which must divide by the current player's reach probability to remain unbiased, our method simply weights the regrets of certain information states more than others, but total average regret is still guaranteed to converge to zero. To remove the importance sampling term that arises from estimating the future expected value from the terminal utility, we substitute this estimate with the oracle value function.
Formally, we define our estimator for the immediate counterfactual regret to simply be the oracle advantage: \begin{algorithm}[t] \caption{Tabular ESCHER with Oracle Value Function} \label{alg:main_alg_tab} \begin{algorithmic} \FOR{$t = 1, ..., T$}{} \FOR{update player $i \in \{0, 1\}$} \STATE Sample trajectory $\tau$ using sampling distribution $\tilde{\pi}^i$ (Equation \ref{sampling_dist}) \FOR{each state $s \in \tau$} \FOR{each action $a$} \STATE Estimate immediate regret vector $\hat{r}(\pi, s, a) = q_i(\pi, h, a) - v_i(\pi, h)$ \STATE Update total estimated regret of action $a$ at infostate $s$: $\hat{R}(s, a) = \hat{R}(s, a) + \hat{r}(\pi, s, a)$ \STATE Update $\pi_i(s, a)$ via regret matching on total estimated regret \ENDFOR \ENDFOR \ENDFOR \ENDFOR \STATE {\bfseries return} average policy $\bar{\pi}$\; \end{algorithmic} \end{algorithm} \begin{equation} \hat{r}_i(\pi, s, a | h) = q_{i}(\pi, h, a) - v_i(\pi, h) \end{equation} If we sample from $\tilde{\pi}^i$ when updating player $i$, then the expected value of our counterfactual regret estimator is: \begin{equation} \label{eq7} \begin{split} \mathbf{E}_{h \sim \tilde{\pi}^i}[\hat{r}_i(\pi, s, a | h)] & = \mathbf{E}_{h \sim \tilde{\pi}^i}[q_{i}(\pi, h, a) - v_{i}(\pi, h)] \\ & = \sum_{h \in s} \eta^{\tilde{\pi}^i}(h)[q_{i}(\pi, h, a) - v_{i}(\pi, h)] \\ & = \eta_{i}^{\tilde{\pi}^i}(s)\sum_{h \in s} \eta_{-i}^{\pi}(h)[q_{i}(\pi, h, a) - v_{i}(\pi, h)] \\ & = w(s)[q^c_i(\pi, s, a) - v^c_i(\pi, s)] \\ & = w(s)\,r^c(\pi, s, a) \end{split} \end{equation} where, for $h, h' \in s$, $\eta_{i}^{\tilde{\pi}^i}(h) = \eta_{i}^{\tilde{\pi}^i}(h') = \eta_{i}^{\tilde{\pi}^i}(s) =: w(s)$ is player $i$'s probability of reaching that infostate under the fixed sampling distribution. Unlike the MCCFR estimator, our estimator has no importance sampling terms, and as a result has much lower variance. Furthermore, our estimator is only biased to the extent that it weights different information sets by different values depending on the fixed distribution.
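A minimal sketch (ours) of the per-state inner update of Algorithm~\ref{alg:main_alg_tab}, assuming the oracle is supplied as a plain table of history-action values keyed by $(h, a)$: the estimated regret at each visited state is built from oracle values only, with no sampling corrections.

```python
import numpy as np

def escher_state_update(h, q_oracle, policy, cumulative_regret):
    """One state's update in tabular ESCHER with an oracle value function.

    h: the sampled history reaching this infostate.
    q_oracle[(h, a)]: oracle history-action value q_i(pi, h, a).
    policy: current policy at the infostate (array over actions).
    cumulative_regret: running totals R_hat(s, .), updated in place.
    Returns the regret-matched next policy and the updated totals.
    """
    q = np.array([q_oracle[(h, a)] for a in range(len(policy))])
    v = policy @ q              # v_i(pi, h) = sum over a of pi(s, a) q_i(pi, h, a)
    cumulative_regret += q - v  # hat r_i(pi, s, a | h): no importance weights
    # Regret matching on the running totals yields the next policy.
    positive = np.maximum(cumulative_regret, 0.0)
    if positive.sum() > 0:
        return positive / positive.sum(), cumulative_regret
    return np.full(len(policy), 1.0 / len(policy)), cumulative_regret

q = {("h0", 0): 1.0, ("h0", 1): 0.0}
pi, R = escher_state_update("h0", q, np.array([0.5, 0.5]), np.zeros(2))
print(pi, R)  # [1. 0.] [ 0.5 -0.5]
```

With $q_i(\pi, h, \cdot) = (1, 0)$ and a uniform policy, the advantage of the first action is $0.5$, so regret matching immediately shifts all mass onto it.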
The correctness of our method is established by the next theorem, whose proof can be found in Appendix \ref{app:proofs}. \begin{restatable}{theorem}{thethm} Assume a fixed sampling policy that puts positive probability on every action. For any $p \in (0,1)$, with probability at least $1-p$, the regret accumulated by each agent learning using the tabular algorithm ESCHER (Algorithm~\ref{alg:main_alg_tab}) is upper bounded by $O(\sqrt{T}\cdot\mathrm{poly\,log}(1/p))$, where the $O(\cdot)$ notation hides game-dependent and sampling-policy-dependent constants. \end{restatable} \section{ESCHER} As shown in the previous section, when using an oracle value function, our method minimizes regret in the single-agent case and converges to a Nash equilibrium in the two-player case. In this section we describe our main method, where we learn a history-dependent value function via a neural network. Similar to Deep CFR and DREAM, we train a regret network $R_i(s, a | \psi)$ over a buffer of information states and targets, where the targets come directly from the learned history value function $q_i(\pi, h, a | \theta)$. Our method is built on Deep CFR. In particular, like Deep CFR we traverse the game tree and add this experience into replay buffers. The first replay buffer stores information states and instantaneous regret estimates and is used to train a regret network $R_\psi(s, a)$ to estimate the cumulative regret at a given information set. Unlike Deep CFR and DREAM, which use the terminal utility and sampling probabilities from the current trajectory to estimate the value, in ESCHER the instantaneous regret estimates are computed using the current history value function alone: \begin{equation} \hat{r}_i(\pi, s, a | h) = q_i(\pi, h, a | \theta) - \sum_{a'} \pi_i(s, a') q_i(\pi, h, a' | \theta) \end{equation} Similar to Deep CFR, each player's current policy $\pi_i$ is given by performing regret matching on the output of the current regret network $R_i(s, a | \psi)$.
The second replay buffer stores histories and terminal utilities and is used to train the value network $Q_\theta$ to estimate the expected utility for both players when both players are at that history and play from their current policies. Lastly, the third replay buffer stores information states and actions taken by the policy $\pi$ and uses that data to train an average policy network $\bar{\pi}_\phi$ that approximates the average policy across all iterations. It is this average policy that has no regret and converges to an approximate Nash equilibrium in self play. As described in the previous section, the only difference between our tabular method and MCCFR (and between our deep method and Deep OS-MCCFR) is the estimator for the immediate regret. While Deep OS-MCCFR uses an importance-weighted estimate of the counterfactual value estimated from the utility of the rollout, we instead simply use the value function to estimate the immediate regret. We describe our algorithm in Algorithm \ref{alg:main_alg}.
\begin{algorithm}[t] \caption{ESCHER} \label{alg:main_alg} \begin{algorithmic} \STATE Initialize history value function $q$ \STATE Initialize policy $\pi_i$ for both players \FOR{$t = 1, ..., T$}{} \STATE Retrain history value function $q$ on value buffer data \STATE Reinitialize regret networks $R_0, R_1$ \FOR{update player $i \in \{0, 1\}$} \FOR{$P$ trajectories}{} \STATE Get trajectory $\tau$ using sampling distribution (Equation~\ref{sampling_dist}) \FOR{each state $s \in \tau$} \FOR{each action $a$} \STATE Estimate immediate cf-regret $\hat{r}(\pi, s, a) = q_i(\pi, h, a | \theta) - \sum_{a'} \pi_{i}(s, a') q_i(\pi, h, a' | \theta)$ \ENDFOR \STATE Add $(s, \hat{r}(\pi, s))$ to cumulative regret buffer \STATE Add $(s, a')$ to average policy buffer where $a'$ is action taken at state $s$ in trajectory $\tau$ \ENDFOR \ENDFOR \STATE Train regret network $R_i$ on cumulative regret buffer \ENDFOR \ENDFOR \STATE Train average policy network $\bar{\pi}_{\phi}$ on average policy buffer \STATE {\bfseries return} average policy network $\bar{\pi}_\phi$\; \end{algorithmic} \end{algorithm} \section{Related Work} \label{sec:relwork} Superhuman performance in two-player games usually involves two components: the first focuses on finding a model-free blueprint strategy, which is the setting we focus on in this paper. The second component improves this blueprint online via model-based subgame solving and search~\citep{burch2014solving, moravcik2016refining, brown2018depth, brown2020combining, brown2017safe, schmid2021player}. This combination of blueprint strategies with subgame solving has led to state-of-the-art performance in Go~\citep{silver2017mastering}, Poker~\citep{brown2017libratus, brown2018superhuman, moravvcik2017deepstack}, Diplomacy~\citep{gray2020human}, and The Resistance: Avalon~\citep{serrino2019finding}.
Methods that only use a blueprint have achieved state-of-the-art performance on Starcraft~\citep{alphastar}, Gran Turismo~\citep{wurman2022outracing}, DouDizhu~\citep{zha2021douzero}, Mahjong~\citep{li2020suphx}, and Barrage Stratego~\citep{mcaleer2020pipeline}. Because ESCHER is a method for finding a blueprint, it can be combined with subgame solving and is complementary to these approaches. In the rest of this section we focus on other model-free methods for finding blueprints. Deep CFR~\citep{deep_cfr, steinberger2019single} is a general method that trains a neural network on a buffer of counterfactual values. However, Deep CFR uses external sampling, which may be impractical for games with a large branching factor, such as Stratego and Barrage Stratego. DREAM~\citep{steinberger2020dream} and ARMAC~\citep{gruslys2020advantage} are model-free regret-based deep learning approaches. ReCFR~\citep{liu2022model} proposes a bootstrap method for estimating cumulative regrets with neural networks that could potentially be combined with our method. Neural Fictitious Self-Play (NFSP)~\citep{nfsp} approximates fictitious play by progressively training a best response against an average of all past opponent policies using reinforcement learning. The average policy converges to an approximate Nash equilibrium in two-player zero-sum games. Policy Space Response Oracles (PSRO)~\citep{psro, muller2019generalized, feng2021discovering, mcaleer2022anytime} is another promising method for approximately solving very large games. PSRO maintains a population of reinforcement learning policies and iteratively trains a best response to a mixture of the opponent's population. PSRO differs fundamentally from the previously described methods: in certain games it can be much faster, but in other games it can take exponentially long in the worst case.
Neural Extensive Form Double Oracle (NXDO)~\citep{mcaleer2021xdo} combines PSRO with extensive-form game solvers, and could potentially be combined with our method. There is an emerging literature connecting reinforcement learning to game theory. QPG~\citep{srinivasan2018actor} shows that state-conditioned $Q$-values are related to counterfactual values by a reach-weighted term summed over all histories in an infostate and proposes an actor-critic algorithm that empirically converges to a NE when the learning rate is annealed. NeuRD~\citep{hennes2020neural} and F-FoReL~\citep{perolat2021poincare} approximate replicator dynamics and follow-the-regularized-leader, respectively, with policy gradients. Robust reinforcement learning~\citep{pinto2017robust} seeks to train an RL policy to be robust against an adversarial environment. In future work we will look to apply ESCHER to this setting. Markov games (or stochastic games) are extensive-form games where the world state information is shared among all players at each timestep, but players take simultaneous actions. Recent literature has shown that reinforcement learning algorithms converge to Nash equilibrium in two-player zero-sum Markov games~\citep{brafman2002r, wei2017online, perolat2018actor, xie2020learning, daskalakis2020independent, jin2021v} and in multi-player general-sum Markov potential games~\citep{leonardos2021global, mguni2021learning, fox2022independent, zhang2021gradient, ding2022independent}.
\begin{figure*}[t] \centering \subfigure[Leduc Exploitability]{\includegraphics[width=45mm]{images/leduc_exploitability.png}} \subfigure[Battleship Exploitability]{\includegraphics[width=45mm]{images/battleship_exploitability.png}} \subfigure[Liar's Dice Exploitability]{\includegraphics[width=45mm]{images/liars_dice_exploitability.png}} \medskip \subfigure[Leduc Variance]{\includegraphics[width=45mm]{images/leduc_variance.png}} \subfigure[Battleship Variance]{\includegraphics[width=45mm]{images/battleship_variance.png}} \subfigure[Liar's Dice Variance]{\includegraphics[width=45mm]{images/liars_dice_variance.png}} \caption{The tabular version of ESCHER with an oracle value function is competitive with the tabular version of DREAM with an oracle value function and with OS-MCCFR in terms of exploitability (top row). The regret estimator in ESCHER has orders of magnitude lower variance than those of DREAM and OS-MCCFR (bottom row).} \label{fig:results} \end{figure*} Actor Critic Hedge (ACH)~\citep{ach} is a similar method that does not use importance sampling but instead uses a value function to estimate regrets. Their method differs from ours in three important ways. First, ACH uses an information-set-based value function. As a result, the expected value of the estimator weights certain information sets with a weight that changes from one iteration to the next. Because of this, the regret bound of the tabular method can fail to improve if the variation in this weighting term remains high across iterations. Second, ACH uses Hedge as its policy update instead of regret matching. Hedge is very unstable when combined with MCCFR and is difficult to tune. Third, ACH trains a policy on a buffer of data collected from the most recent policy. This choice removes the downside of training on a large replay buffer of all past experience but might introduce extra instability.
This design choice could be combined with our approach as an alternative way of training the policy, but we did not run an ablation study on this. We attempted to replicate a tabular version of ACH but were not able to match the results in their paper. \section{Results} \subsection{Tabular Results} We compare a tabular version of ESCHER with oracle value functions to a tabular version of DREAM with oracle value functions and with OS-MCCFR. We run experiments on Leduc poker, Battleship, and Liar's dice, and use the implementations from OpenSpiel~\citep{lanctot2019openspiel}. We see in Figure \ref{fig:results} on the top row that ESCHER remains competitive with DREAM and OS-MCCFR on these games. On the bottom row we plot the average variance of the regret estimators over all information sets visited over an iteration window for each of these algorithms. While DREAM does improve upon OS-MCCFR, it still has orders of magnitude higher variance than ESCHER. Although this does not matter much in tabular experiments, we conjecture that high regret estimator variance makes neural network training unstable without prohibitively large buffer sizes. \begin{figure*}[t] \centering \subfigure[Phantom Tic Tac Toe Head to Head]{\includegraphics[width=45mm]{images/pttt_new.png}} \subfigure[Dark Hex 4 Head to Head]{\includegraphics[width=45mm]{images/hex4_new.png}} \subfigure[Dark Hex 5 Head to Head]{\includegraphics[width=45mm]{images/hex5_new.png}} \medskip \subfigure[Phantom Tic Tac Toe Against Random]{\includegraphics[width=45mm]{images/pttt_rand.png}} \subfigure[Dark Hex 4 Against Random]{\includegraphics[width=45mm]{images/hex4rand.png}} \subfigure[Dark Hex 5 Against Random]{\includegraphics[width=45mm]{images/hex5rand.png}} \caption{ESCHER is competitive with NFSP and DREAM in Phantom Tic Tac Toe. 
But as the size of the game increases, ESCHER performs increasingly better than both DREAM and NFSP.} \label{fig:deep_results} \end{figure*} \subsection{Deep Results} In this section we compare our method to DREAM and NFSP, the most popular baselines that are also open-source, on the medium-sized games of Phantom Tic Tac Toe (TTT) and Dark Hex. Phantom TTT and Dark Hex are similar in that they are both imperfect-information versions of perfect-information games played on square boards. Phantom TTT is played on a $3\times 3$ board, while Dark Hex 4 is played on a $4\times 4$ board and Dark Hex 5 on a $5\times 5$ board. Because these games are large, we are not able to compare exact exploitability, so instead we compare performance through head-to-head evaluation. Results are shown in Figure \ref{fig:deep_results}, where the x-axis tracks the number of information sets visited during training. We see that our method is competitive with DREAM and NFSP on Phantom TTT. On the larger game of Dark Hex 4, ESCHER beats DREAM and NFSP head to head and also scores higher against a random opponent. Moving to the largest game of Dark Hex 5, we see that ESCHER beats DREAM and NFSP head to head and also is able to beat a random opponent while DREAM and NFSP are no better than random. We do not compare to other methods such as NeuRD~\citep{neurd}, F-FoReL~\citep{perolat2021poincare}, and ARMAC~\citep{gruslys2020advantage} because we could not find open-source implementations of their RL forms.
Unlike RL algorithms such as DQN~\citep{mnih2015human}, these replay buffers must record all data ever seen in order to learn an average. This can be a problem when the amount of data required is much larger than the replay buffer memory. Third, we do not use various implementation details that help performance in Deep CFR, such as weighting by the sum of the reach probabilities over all iterations. Fourth, our method still has variance that comes from the different values associated with different histories inside the same information set. Perhaps a combination of our method with baselines, as in DREAM, could further reduce variance. Finally, our method uses separate data to train the value function. Our method could be made much more efficient by also using the data generated for training the policy to train the value function. We could also use the reward information discovered when training the regret network, which we currently do not use. One direction of future research is finding optimal sampling distributions. In our method we use the uniform distribution over actions as our fixed sampling distribution, but this can be far from optimal. In principle, any distribution that remains fixed will guarantee that the method converges with high probability. One possible direction would be to try to estimate the theoretically optimal balanced distribution. Another possible method would be to take the frozen policy at a certain iteration and use that as the sampling distribution for the remaining iterations. Other, less principled, methods such as using the average policy might work well in practice as well~\citep{burch2012efficient}. Another direction is in connecting this work with the reinforcement learning literature. Similar to reinforcement learning, we learn a $Q$-value and a policy, and there are many techniques from reinforcement learning that are promising to try in this setting.
For example, although we learned the value function simply through Monte-Carlo rollouts, one could use bootstrapping-based methods such as TD-$\lambda$~\citep{sutton1988learning} and expected SARSA~\citep{rummery1994line}. The policy might be able to be learned via some sort of policy gradient, similar to QPG~\citep{srinivasan2018actor}, NeuRD~\citep{hennes2020neural}, and F-FoReL~\citep{perolat2021poincare}. \section{Acknowledgements} This paper is based on work supported by the National Science Foundation under grants IIS-1901403 and CCF-1733556, and ARO award W911NF2010081. This paper is also based on work supported by grant number 2030859 to the Computing Research Association for the CIFellows Project.
\section{Introduction} Deep learning has produced remarkable results across the full breadth of machine learning research. For the most part this has been achieved through the reapplication of the two main architectures, the \textsc{cnn} and \textsc{rnn}, adapted to two Euclidean cases -- omnidirectional (image-like) and unidirectional (series) -- respectively. As such there is great interest in extending the general techniques to non-Euclidean cases and graph-structured data problems in particular. These efforts are mostly inspired by the \textsc{cnn} and attempt to find suitable analogs to its core components, the convolutional and pooling operators. Early work set out to develop convolution-like graph operators. The focus has now turned to developing pooling operations, often referred to as coarsening in the context of graphs. Besides static methods \cite{Luzhnica2019CliqueClassification}, differentiable pooling frameworks have been developed. DiffPool achieved state-of-the-art (\textsc{s}o\textsc{ta}) performance across many benchmark tasks \cite{Ying2018HierarchicalPooling}; however, a dense representation, quadratic in memory, is required. The Graph U-Net introduces a sparse method based on pruning nodes ($\topk$) \cite{gao2019graph}. Cangea et al. \yrcite{Cangea2018TowardsClassifiers} apply the method in graph classification by incorporating $\topk$ pools in a \textsc{gcn} model, achieving performance competitive with the \textsc{s}o\textsc{ta} with scalable memory requirements. In this work we show that, under standard initialisation \cite{Glorot2010UnderstandingNetworks,He2015DelvingClassification}, using the \textsc{gcn} and $\topk$ operator together results in vanishing gradients beyond the first layers. In addition, we show that it is possible to attain good performance on smaller benchmark tasks simply using a global-pool\footnote{A simple mean or sum over the features of all nodes.} followed by an \textsc{mlp}.
Furthermore, to achieve results on a par with Graph U-Net in \textit{all} benchmarks a single-layer \textsc{gcn} with a jumping-knowledge (\textsc{jk}) connection \cite{Xu2018RepresentationNetworks} from the input graph followed by an \textsc{mlp} is sufficient, whether the weights of the \textsc{gcn} are trained or not. Considering the implications of these results, we primarily argue for the importance of including strong, simple baselines in evaluation. We also define an initialisation scheme that remedies the vanishing gradient issue by design though we find that this does not consistently improve performance. \paragraph{Motivation} This work was motivated by studies of network activations and gradient flow in deeper \textsc{gnn}s with \textsc{jk} structures and $\topk$ pooling. We found that, at initialisation, activations into the network rapidly vanish and that throughout training the gradients flowed mostly into earlier layers. These findings prompt two questions: firstly, are deeper networks only trainable thanks to \textsc{jk} structures bypassing later layers? and secondly, how important are the later layers to performance anyway? \section{Preliminaries} We use the standard notation: a graph $\mathcal{G}$ of $N$ nodes with $F$ features per node is represented by the pair $(\mathbf{A}, \mathbf{X})$ with adjacency matrix, $\mathbf{A} \in \mathbb{R}^{N \times N}$, and node feature matrix, $\mathbf{X} \in \mathbb{R}^{N \times F}$. \paragraph{Graph Convolution} \textsc{R}e\textsc{lu} activations and the improved \textsc{gcn} \cite{gao2019graph} are used throughout. This differs from the standard \textsc{gcn} in that $\mathbf{\hat{A}}=\mathbf{A}+2\mathbf{I}$ is used i.e. self-loops have a weight of 2. \paragraph{Pooling} $\topk$ pooling is used \cite{gao2019graph}. The pooling operator drops $N - \ceil{kN}$ nodes, where $k \in [0,1)$ is a fixed hyperparameter. In all experiments this was set to $0.8$. 
Nodes are dropped based on the ranked projection of features on a learnable vector, $\Vec{p}$, as \begin{align} y_i &= \frac{\textup{X}_i\cdot\Vec{p}} {\| \Vec{p} \|} &\Vec{i} = \topk(\Vec{y}, k) \nonumber \\ \mathbf{X'} &= \textup{X}_{\Vec{i}} \odot \tanh(\Vec{y}_{\Vec{i}}) &\mathbf{A'} = \mathbf{A}_{\Vec{i};\Vec{i}} \nonumber \end{align} where $\Vec{y}$ is the vector of scores for each node (rows in $\mathbf{X}$) and $\Vec{i}$ are the indices of the top-$k$ nodes based on their scores. \paragraph{Jumping Knowledge Networks} In node aggregating schemes, the range of nodes\footnote{Analogous to the receptive field in \textsc{cnn}s.} that a node's representation draws from is strongly dependent on the neighbourhood structure \cite{Xu2018RepresentationNetworks}. \textsc{Jk}-structures were introduced to allow some flexibility over the degree of aggregation and thus even out the ``range'' by introducing layer-skipping connections. For a node, $v$, this takes the form \begin{align} h_v^{1} &= f_1(X_v) \quad ; \quad h_v^{i} = f_i(h_v^{i-1})\nonumber \\ h_v^{JK} &= \textup{Agg.}(h_v^1,\dots,h_v^L) \nonumber \end{align} where the aggregation function is typically concatenation, summation or an elementwise max, the result being passed to a classifier. \section{Removing JK \& Initialisation} \label{remove-jk} Whilst \textsc{jk}-connections were introduced to tackle the problem of node-specific range, in deeper networks they are acting as bypasses of later layers and a hierarchy of representations is not actually being produced. Clearly it runs counter to the core concept of allowing the range to vary over nodes if the higher ranges are not used. To test this we expose the gradient flow and activations in a net of four blocks of \textsc{gcn}+$\topk$ with the final representation aggregated with a global mean and entered into an \textsc{mlp}. Re\textsc{lu} activations are used in the \textsc{gcn}.
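The $\topk$ pooling step used in these blocks can be sketched in a few lines; the following is a NumPy toy version of the pooling equations above (ours, not the reference implementation):

```python
import numpy as np

def topk_pool(X, A, p, k=0.8):
    """Top-k pooling: keep ceil(k*N) nodes ranked by their projection onto p."""
    y = X @ p / np.linalg.norm(p)                 # per-node scores
    keep = int(np.ceil(k * X.shape[0]))
    idx = np.argsort(y)[-keep:]                   # indices of the top-k nodes
    X_pooled = X[idx] * np.tanh(y[idx])[:, None]  # gate kept features by score
    A_pooled = A[np.ix_(idx, idx)]                # induced subgraph adjacency
    return X_pooled, A_pooled

X = np.random.default_rng(0).normal(size=(10, 4))
A = np.eye(10)
Xp, Ap = topk_pool(X, A, p=np.ones(4))
print(Xp.shape, Ap.shape)  # (8, 4) (8, 8)
```

With $k=0.8$ and $N=10$, the pool keeps $\lceil 0.8 \cdot 10 \rceil = 8$ nodes and the induced adjacency matrix shrinks accordingly.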
The \textsc{gcn} weights are initialised using Kaiming \cite{He2015DelvingClassification}, while the pools are initialised using Glorot \cite{Glorot2010UnderstandingNetworks}\footnote{The authors note the mixed naming conventions here but this seems to be what the community has settled on.}. We refer to this combination as the `standard initialisation'. Under standard initialisation, layer activations decay into the network, gradients are vanishingly small and the latter part of the network is effectively static under backpropagation. \subsection{\textsc{ReInit}} To remedy this problem we propose a data-driven approach similar to \textsc{lsuv}-initialisation \cite{Mishkin2015AllInit} to maintain variance across layers. The idea is simply to initialise under some scheme and then pass the entire batch through each block in turn, scaling the layer weights by $\sigma^{-1}$ to maintain variance, a process we refer to as \textsc{ReInit}. This is implemented as scaling factors that are set progressively \begin{align} \mathbf{X'} &= \frac{1}{c_1} \textup{GCN}(\mathbf{X},\mathbf{A}) \quad ; \quad c_1 = \sigma\big(\textup{GCN}(\mathbf{X},\mathbf{A})\big) \nonumber \\ \mathbf{X''} &= \frac{1}{c_2} \textup{X}'_{\Vec{i}} \odot \tanh(\Vec{y}_{\Vec{i}}) \quad ; \quad c_2 = \sigma\big(\textup{X}'_{\Vec{i}} \odot \tanh(\Vec{y}_{\Vec{i}})\big) \nonumber \end{align} with the result that $\sigma(\mathbf{X}')=\sigma(\mathbf{X}'')=1$. We deviate from \textsc{lsuv} in not ortho-normalising, as there is no analogue that could be applied to the $\topk$ layers, so simple rescaling has a more consistent meaning over the network. We have also found that deriving a semi-analytic solution, in the footsteps of Glorot \& Bengio \yrcite{Glorot2010UnderstandingNetworks}, is not possible for the \textsc{gcn} due to the structural asymmetries in neighbourhood aggregation.
In essence, the expected variance is sensitive to the number and similarity of neighbours to such a degree that properly accounting for these variations would require specific node-level information. This also means \textsc{ReInit} can be applied on top of any initialisation scheme, so the `shape' of the initial weight distribution is not fixed in that sense. \section{Shallower, Simpler Networks} \label{shallow-simple} To see how much the later \textsc{gcn} layers contribute to performance we tested three shallower networks on standard benchmarks. The models can be thought of as extreme ablations. \paragraph{Structure-blind MLP} A three-layer \textsc{mlp}. The adjacency matrix is discarded; the features are globally pooled and passed as input. Three weight matrices with biases and \textsc{R}e\textsc{lu} activations. This model cannot see even the number of nodes, let alone their individual features or structural relationships. \paragraph{Single-layer JK GCN+MLP} A single-layer \textsc{gcn} with a \textsc{jk}-skip preceding the \textsc{mlp} described above. We test this setup both with the weights of the \textsc{gcn} fixed at their random initialisation values, denoted \textsc{(r)}, and free to update. The fixed variant is intended to provide a minimal structural addition to the plain \textsc{mlp}. \section{Experiments \& Results} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{noreinit_test_4activations.pdf} \includegraphics[width=\columnwidth]{reinit_test_4activations.pdf} \caption{Outputs of each layer during training with the standard initialization (top) and ours (bottom). Note the scale difference.
The standard initialization quickly converges to zero for all layers, while with \textsc{ReInit} the values vary widely.} \label{linear-activations} \end{center} \vskip -0.5in \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{noreinit_gradients_4gradients_all.pdf} \includegraphics[width=\columnwidth]{reinit_test_4gradients_all.pdf} \caption{Gradients flowing into the weights of all layers with the standard initialization (top) and \textsc{ReInit} (bottom). The gradients of all layers apart from the last \textsc{mlp} layer are almost 0 for the standard initialization. The reinitialized network manages to train the other layers, although noticeably smaller gradients flow into the latter layers, possibly a learned preference rather than a failure of the network.} \label{linear-gradients} \end{center} \vskip -0.4in \end{figure} \begin{figure} \begin{center} \centerline{\includegraphics[scale=0.3]{merged_training_loss.png}} \caption{Training loss for the standard initialization and \textsc{ReInit}. The loss does not change with the standard initialization, while with \textsc{ReInit} the network is successfully trained.} \label{linear-training-loss} \end{center} \vskip -0.4in \end{figure} \begin{figure}[ht!] \begin{center} \centerline{\includegraphics[width=\columnwidth]{no_reinit_test_weight_decayactivations.pdf}} \includegraphics[width=\columnwidth]{no_reinit_test_no_weight_decayactivations.pdf} \centerline{\includegraphics[width=\columnwidth]{reinit_test_no_weight_decayactivations.pdf}} \centerline{\includegraphics[scale=0.5]{epoch_accuracy_curve.pdf}} \caption{Output values under different training and initialisation routines when training for 300 epochs on the \textsc{dd} dataset. The first plot shows pre-activations vanishing in a simple \textsc{jk}-net under standard initialization, trained with Adam with weight decay. The second shows the same network trained without weight decay. The third has no weight decay and is initialized with \textsc{ReInit}.
The last figure shows the performance of the three setups on the \textsc{dd} dataset (over 10 folds) as we vary the number of epochs.} \label{activations-jk} \end{center} \vskip -0.2in \end{figure} \paragraph{No JK} We first present the comparison of activations, gradient flow and training dynamics for a 4-block \textsc{gnn} (as described in Section \ref{remove-jk}) in Figures \ref{linear-activations}, \ref{linear-gradients} \& \ref{linear-training-loss}, respectively. Detailed analysis of these plots is given in the captions; the overall picture is that training is able to occur under \textsc{ReInit}, whilst under standard initialisation it is not. \subsection{Shallow baselines} We conduct several experiments with the networks described in Section \ref{shallow-simple}: a simple \textsc{mlp}; a randomly initialised \textsc{gcn} which is not updated during training, denoted \textsc{gcn(r)-mlp}; and a \textsc{gcn} that is free to update (\textsc{gcn-mlp}). We find that these models surpass most previous methods, in some cases even the recent differentiable pooling methods. We note that the performance of the random \textsc{gcn} should not come as a surprise given its connection to the Weisfeiler-Lehman (\textsc{wl}) test \cite{Kipf2016Semi-SupervisedNetworks}: it has very little power in the featural domain but adds structural information comparable to 1-\textsc{wl}. These initial results (presented in Table \ref{table:results}) show that there is room for advancement in graph classification and that these simple models should be considered strong baselines. These networks, particularly the \textsc{mlp}, are simple and appear as subnetworks in many methods. As such, it is of paramount importance to undertake thorough ablation studies to show the benefit of adding complexity to networks.
For instance, we can add additional components that improve upon other approaches but do so by relying heavily on these simpler subnetworks. We explore this idea below. \subsection{Bloated networks} We use the following architecture in the next few experiments: \textsc{gcn-pool-gcn-pool-gcn-pool-mlp}, with the global max and sum of each layer passed to the \textsc{mlp} through \textsc{jk}-structures. Due to the initialisation problem, if weight decay is used\footnote{Here we use $\lambda = 5\times10^{-3}$ with a learning rate of $5\times10^{-4}$, but smaller values achieve similar results.} the network is unable to recover from a bad initialisation and as such it cannot learn in the deeper layers (see Figure \ref{activations-jk}). This method (\textsc{jk-sum-decay}) is competitive with most results, performing close to the simple sub-network it contains: \textsc{gcn-mlp}. \begin{table} \vskip 0.15in \centering \caption{Classification accuracy percentages. The results of other networks are taken from \citealt{Cangea2018TowardsClassifiers}, with which we share 10-fold splits for benchmarking our methods.
Bold indicates top performance, \textcolor{table_colour}{blue} indicates weaker performance than the \textsc{mlp}.} \begin{small} \begin{sc} \begin{tabular}{lcccr} \toprule & \multicolumn{4}{c}{Datasets} \\ \cmidrule{2-5} Model & Reddit \footnote{Reddit-Multi-12K} & DD & Collab & Prot.\\ \midrule PatchySAN & 41.32 & \textcolor{table_colour}{76.27}& \textcolor{table_colour}{72.60} & \textcolor{table_colour}{75.00}\\ GraphSAGE & 42.24 & \textcolor{table_colour}{75.42}& \textcolor{table_colour}{68.25} & \textcolor{table_colour}{70.48}\\ ECC & 41.73 & \textcolor{table_colour}{74.10}& \textcolor{table_colour}{67.79} & \textcolor{table_colour}{72.65}\\ Set2Set & 43.49 & \textcolor{table_colour}{78.12}& \textcolor{table_colour}{71.75} & \textcolor{table_colour}{74.29} \\ SortPool & 41.82 & \textcolor{table_colour}{79.37}& \textcolor{table_colour}{73.76} & \textcolor{table_colour}{75.54}\\ DiffPool-Det & 46.18 & \textcolor{table_colour}{75.47}& \textbf{82.13} & \textcolor{table_colour}{75.62}\\ DiffPool-NoLP & 46.65 & \textcolor{table_colour}{79.98}& 75.63 & 77.42\\ DiffPool & 47.08 & \textbf{81.15}& 75.50 & \textbf{78.10}\\ GU-Net/SHGC & - & \textcolor{table_colour}{78.59}& 74.54 & \textcolor{table_colour}{75.46}\\ \midrule MLP & 40.96 & 80.22 & 74.00 & 75.74\\ GCN(R)-MLP & \textcolor{table_colour}{36.15}& \textcolor{table_colour}{78.61}& 75.38 & 76.28\\ GCN-MLP & 45.01 & \textcolor{table_colour}{79.29}& 76.50 & \textcolor{table_colour}{75.64}\\ \midrule JK-Sum & \textbf{47.16} & \textcolor{table_colour}{79.02}& 77.00 & 75.82\\ JK-Sum-Decay & 43.87 & \textcolor{table_colour}{79.11}& 74.14 & 75.82\\ JK-Sum-ReInit & 46.77& \textcolor{table_colour}{75.97}& 77.20 & \textcolor{table_colour}{75.46}\\ \bottomrule \end{tabular} \end{sc} \end{small} \label{table:results} \end{table} Even if we do not use any weight decay, the network will only be able to recover the deeper layers after a significant number of epochs.
For instance, for DD the network only starts to recover the deeper layers after epoch $100$, as shown in Figure \ref{activations-jk}. However, to fully recover the layers (similarly to the network with \textsc{ReInit}) we found that the network needs to be trained for more than $800$ epochs and, if early stopping ends training at an earlier epoch, we would still be using only the first two layers (\textsc{gcn}+\textsc{pool}). In fact, the optimal number of epochs to train the network for was $100$, which is what we report in the results in Table \ref{table:results} (\textsc{jk-sum}). However, the network behaves very differently when initialised using \textsc{ReInit}, as the method does not need to recover the layers one by one, changing the dynamics and ultimately how and what the network learns. The same figure shows that with \textsc{ReInit} all the layers are trainable from the beginning. In that case, we notice that the performance rises sharply in the very first few epochs for DD (fewer than 10; see the last plot of Figure \ref{activations-jk}) and then drops and converges to roughly the same level as the recovered network with standard initialisation (without weight decay). While for small datasets (\textsc{dd}, \textsc{proteins}) unleashing the power of the deeper network from the beginning is not beneficial, since it can cause over-fitting (a single-layer \textsc{gcn} already performs well), for \textsc{collab} we see that this differs. In fact, for these small datasets the method with \textsc{ReInit} achieved its highest accuracy in fewer than 50 epochs, while for \textsc{collab} it required 300. The same network without \textsc{ReInit} had its best performance when training for 100 epochs, but resulted in a lower-quality model. This hints that for this bigger dataset all three layers are needed, while for smaller problems the network is likely over-parameterised, and this is exposed by \textsc{ReInit}.
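As a concrete illustration of the \textsc{ReInit} rescaling used throughout these experiments, the following sketch (illustrative Python with a hypothetical block structure; in our networks the blocks are \textsc{gcn} and $\topk$ layers) divides each block's weights by the empirical standard deviation of its output, so activations retain unit variance with depth:

```python
import numpy as np

def reinit(blocks, X):
    """Sketch of ReInit: after any standard initialisation, push a
    batch through each block in turn and rescale its weights by
    1/sigma of the output, mirroring the factors c_i above.
    Each block is a (forward_fn, weight_array) pair; rescaling the
    weights in place is valid for blocks whose output scales
    linearly with the weights (e.g. a GCN, with or without ReLU,
    which is positively homogeneous)."""
    for forward, W in blocks:
        out = forward(X)
        c = out.std()   # empirical sigma of the block output
        W /= c          # scale weights so a re-run would give sigma = 1
        X = out / c     # equivalent to re-running the rescaled block
    return X
```

After `reinit`, each intermediate representation has unit standard deviation on the calibration batch, which is the property the activation plots above probe.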
\paragraph{Closing remarks} We have demonstrated that some very simple models are competitive with the state of the art and that \textsc{jk}-structures may permit models to perform well through these simple subnetworks. We hope that the community will adopt these baselines and take a greater interest in ablation studies. \clearpage \nocite{Bronstein2017GeometricData} \nocite{pmlr-v15-glorot11a,zhang2018end,gilmer2017neural,niepert2016learning,Simonovsky_2017,hamilton2017inductive,He_2016,Fey/Lenssen/2019}
\section{Introduction}\label{vvv:intro} The VISTA Variables in the Via Lactea \citep[VVV,][]{2010Minniti} survey has mapped a 560 deg$^{2}$ area containing $\sim 3 \times 10^{8}$ point sources with multi-epoch near-infrared photometry. The surveyed area includes the Milky Way bulge and an adjacent section of the mid-plane. The survey has already produced a deep near-infrared Atlas in 5 bandpasses ($Z$, $Y$, $J$, $H$, $K_{\rm s}$), and the final product will include a 2nd epoch of the multi-filter data and a catalogue of more than $10^{6}$ variable sources. One of the main scientific goals expected to arise from the final product of VVV is the discovery of rare variable sources such as Cataclysmic Variables and RS CVn stars, among others \citep[see][for a discussion of the classes of near-infrared variable stars that are being studied with VVV]{2013Catelan}. One of the most important outcomes is the possibility of finding eruptive variable Young Stellar Objects (YSOs) undergoing unstable accretion. Such objects are usually assigned to one of two sub-classes: FUors, named after FU Orionis, have long-duration outbursts (from tens to hundreds of years); EXors, named for EX Lupi, have outbursts of much shorter duration (from a few weeks to several months). Both categories were optically defined in the first instance and fewer than 20 are known in total \citep[see e.g.,][]{2010Reipurth,2013Scholz,2014Audard}, very likely because YSOs with high accretion rates tend to suffer high optical extinction by circumstellar matter. For thorough reviews of the theory and observations in this subject see \citet{1996Hartmann,2010Reipurth,2014Audard}. Given that VVV is the first near-infrared time domain survey of a large portion of the Galaxy, it is reasonable to hope for a substantial yield of new eruptive variable YSOs in the dataset.
In particular, we would expect the survey to probe for high amplitude variability that occurs on typical timescales of up to a few years, which corresponds more to EXors (or their younger, more obscured counterparts) than to FUors. Eruptive variable YSOs are important because it is thought that highly variable accretion may be common amongst protostars, though rarely observed owing to a duty cycle consisting of long periods of slow accretion and much shorter periods of unstable accretion at a much higher rate. If this is true, it might explain both the observed under-luminosity of low-mass, class I YSOs (the ``Luminosity problem'' \citep[see e.g.][]{1990Kenyon,2009Evans,2012Caratti}) and the wide scatter seen in the Hertzsprung-Russell (HR) diagrams of pre-main-sequence (PMS) clusters \citep*{2009Baraffe,2012Baraffe}. In the search for this rare class of eruptive variable stars, \citet{2014Contreras} studied near-infrared high-amplitude variability in the Galactic plane using the two epochs of UKIDSS Galactic Plane Survey (UGPS) K band data \citep{2007Lawrence,2008Lucas}. \citeauthor{2014Contreras} found that $\sim 66\%$ of high-amplitude variable stars selected from UGPS data releases DR5 and DR7 are located in star forming regions (SFRs) and have characteristics of YSOs. They concluded that YSOs likely dominate the Galactic disc population of high-amplitude variable stars in the near-infrared. Spectroscopic follow-up confirmed four new additions to the eruptive variable class. These objects showed a mixture of the characteristics of the optically-defined EXor and FUor subclasses. Two of them were deeply embedded sources with very steep 1 to 5~$\mu$m spectral energy distributions (SEDs), though showing ``flat spectrum'' SEDs at longer wavelengths.
Such deeply embedded eruptive variables are regarded as a potentially distinct additional sub-class, though only a few had been detected previously: OO Ser, V2775 Ori, HOPS 383 and GM Cha \citep[see][]{1996Hodapp, 2007Kospal, 2007Persi, 2011Caratti, 2015Safron}. With the aims of determining the incidence of eruptive variability among YSOs and characterising the phenomenon, we have undertaken a search of the multi-epoch VVV dataset. In contrast to UGPS, the ongoing VVV survey offers several dozen epochs of K$_{\rm s}$ data over a time baseline of a few years. We expect that the VVV survey will also be used to identify YSOs by the common low-amplitude variability seen in nearly all such objects \citep[e.g.][]{2012Rice}. This will complement studies in nearby star formation regions and in external galaxies, such as the {\it Spitzer} YSOVAR programme \citep[e.g. ][]{2015Wolk} and a 2-epoch study of the LMC with {\it Spitzer} SAGE survey data \citep{2009Vijh}. We have divided the results of this work into two publications. In this first study we present the method of the search and a general discussion of the photometric characteristics of the whole sample of high-amplitude variables in the near-infrared. We present the follow-up and spectroscopic characteristics of a large sub-sample of candidate eruptive variable stars in a companion publication (hereinafter referred to as Paper II). In Sect. 2 of this work we describe the VVV survey, the data and the method used to select high-amplitude infrared variables. Section 3 describes the make-up and general properties of the sample, the evidence for clustering and the apparent association with SFRs. In this section we also classify the light curves of variables found outside SFRs and use this information to estimate the contamination of our high-amplitude YSO sample by other types of variable star.
We then estimate the high-amplitude YSO source density from our sample and compare the average space density with those of other high-amplitude infrared variables. In Sect. 4 we discuss the physical mechanisms that drive variability in YSOs and classify our YSOs via light curve morphology. This yields some ideas concerning which of the known mechanisms might be responsible for the observed variability. We test these mechanisms using two-epoch $JHK_{\rm s}$ data. Then we discuss the trends in the likely YSOs as a function of evolutionary status based on their spectral energy distribution. Finally we discuss the large sample of likely eruptive variables. Section 6 presents a summary of our results. \section{VVV} The regions covered by the VVV survey comprise the Bulge region within $-10^{\circ} < l < +10^{\circ} $ and $-10^{\circ} < b < +5^{\circ}$ and the disc region in $295^{\circ} < l < 350^{\circ} $ and $-2^{\circ} < b < +2^{\circ}$ \citep[see e.g.,][]{2010Minniti}. The data are collected by the Visible and Infrared Survey Telescope for Astronomy (VISTA). The 4m telescope is located at Cerro Paranal Observatory in Chile and is equipped with a near-infrared camera (VIRCAM) consisting of an array of sixteen 2048$\times$2048 pix detectors, with a typical pixel scale of 0.339\arcsec, with each detector covering 694$\times$694 arcsec$^{2}$. The detectors are set in a 4$\times$4 array and have large spacing along the X and Y axes. Therefore a single pointing, called a ``pawprint'', gives partial coverage of a particular field of view. Continuous coverage of a particular field is achieved by combining six single pointings with appropriate offsets. This combined image is called a tile. The VVV survey uses the five broad-band filters available in VIRCAM, $Z(\lambda_{eff}=0.87\mu$m), $Y(\lambda_{eff}=1.02\mu$m), $J(\lambda_{eff}=1.25\mu$m), $H(\lambda_{eff}=1.64\mu$m) and $K_{\rm s}(\lambda_{eff}=2.14\mu$m).
The VVV survey area is comprised of 348 tiles, 196 in the bulge and 152 in the disc area. Each tile was observed in a single near-contemporaneous multi-filter ($ZYJHK_{\rm s}$) epoch at the beginning of the campaign, with an exposure time of 80 s per filter. A second epoch of contemporaneous $JHK_{\rm s}$ was observed in 2015. The variability monitoring was performed only in $K_{\rm s}$, with an exposure time of 16 s. The images are combined and processed at the Cambridge Astronomical Survey Unit (CASU). The tile catalogues are produced from the image resulting from combining six pawprints. The catalogues provide parameters such as positions and fluxes from different aperture sizes. A flag indicating the most probable morphological classification is also provided, with ``-1'' indicating stellar sources, ``-2'' borderline stellar, ``1'' non-stellar, ``0'' noise, ``-7'' indicating sources containing bad pixels and ``-9'' indicating saturation \citep[for more details on all of the above, see][]{2012Saito}. Quality control (QC) grades are also given by the European Southern Observatory (ESO) according to requirements provided by the observer. The constraints for VVV $K_{\rm s}$ variability data are: seeing $<$2 arcsec and sky transparency defined as ``thin cirrus'' or better. The ``master epoch'' of multi-filter data, taken for each tile in a contemporaneous $JHK_{\rm s}$ observing block and a separate $ZY$ observing block, has more stringent constraints: seeing $<$1.0, 1.0, 0.9, 0.9, 0.8 arcsec in $Z$, $Y$, $J$, $H$, $K_{\rm s}$ respectively, and sky transparency of ``clear'' or better. Depending on whether observations fulfil the constraints established by the observer, they are classified as fully satisfied (QC A), almost satisfied, where for example some of the parameters are outside the specified constraints by $<10\%$ (QC B), or not satisfied (QC C).
\subsection{Selection method}\label{sec:vvvselec} In order to search for variable stars we used the multi-epoch database of VVV comprising the observations of disc tiles with $|b| \leq 1^{\circ}$ taken between 2010 and 2012. We added the 2013, 2014 and 2015\footnote{We included a single $K_{\rm s}$ datapoint from 2015 observations, corresponding to the epoch with contemporaneous JHK$_{\rm s}$ photometry. Note that our analysis of the light curve morphologies and periods is based on the 2010-2014 data only (Sect. \ref{vvv:sec_lcmorp}). The 2015 data became available only after that was complete but they were used in the colour variability analysis (see Sect. \ref{vvv:sec_nirchange}).} data later to assist our analysis, but they were not used in the selection. The catalogues were requested and downloaded from CASU. We used catalogues of observations with QC grades A, B or C. Catalogues with QC grade C were still considered in order to increase the number of epochs, and some of them proved useful for our purposes. However, a small number of catalogues presented issues (e.g. zero-point errors, bad seeing) that made them unusable, and these were eliminated from the analysis. The number of catalogues in each tile varied from 14 to 23 epochs, with a median of 17 epochs per tile. When the 2013-15 data were added, the number of epochs available for the light curves rose to between 44 and 59. $K_{\rm s}$ photometry is derived from {\it apermag3} aperture fluxes (2\arcsec\ diameter aperture). For each tile, the individual catalogues are merged into a single master catalogue. The catalogue with the highest number of sources was selected as the reference. In every case this corresponded to the catalogue from the deep $K_{\rm s}$ observation (80~s on source), which was taken contemporaneously with the $J$ and $H$ band data (in 2010). For all other epochs the time on source was 16~s.
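The merging of epoch catalogues into light curves can be illustrated with a brute-force positional cross-match. The sketch below is purely illustrative Python: the 1\arcsec\ match radius, the flat-sky separation formula and the data layout are our assumptions, not the actual pipeline criteria.

```python
import numpy as np

def merge_epochs(ref, epochs, radius_arcsec=1.0):
    """Sketch of building light curves: match each epoch catalogue
    to the reference catalogue by position (nearest neighbour within
    an assumed radius) and collect the Ks magnitudes. Catalogues are
    dicts of arrays with keys 'ra', 'dec' (degrees) and 'ks' (mag).
    Unmatched entries stay NaN (no detection at that epoch)."""
    n_ref = ref["ra"].size
    lc = np.full((n_ref, len(epochs)), np.nan)
    for j, ep in enumerate(epochs):
        for i in range(n_ref):
            # small-angle separation in arcsec (flat-sky approximation)
            dra = (ep["ra"] - ref["ra"][i]) * np.cos(np.radians(ref["dec"][i]))
            ddec = ep["dec"] - ref["dec"][i]
            sep = np.hypot(dra, ddec) * 3600.0
            k = np.argmin(sep)
            if sep[k] < radius_arcsec:
                lc[i, j] = ep["ks"][k]
    return lc
```

A production implementation would use a spatial index (e.g. a k-d tree) rather than this $O(N^2)$ loop, but the matching logic is the same.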
\begin{figure} \centering \resizebox{\columnwidth}{!}{\includegraphics{lc_nonvars_v2.pdf}} \caption{2010-2012 light curves of ``non-variable'' VVV objects (i.e. not classified as high-amplitude variables in our analysis). These are presented to show the typical scatter in magnitude across the analysed magnitude range. We note that photometry for the brightest star is the standard CASU pipeline photometry.} \label{selec:vvv3} \end{figure} Figure \ref{selec:vvv3} shows the typical scatter shown by stars at different magnitudes across the analysed range. Here we can see considerable scatter at bright magnitudes, due to the effects of saturation, and at the faint end of the distribution, which is dominated by photon noise. High-amplitude variable star candidates are selected from the master catalogue as stars which fulfil the following criteria in the 2010-12 data: \begin{enumerate} \item Detection with a stellar morphological classification (class$=-1$) in every available epoch. \item Ellipticity with ell$<0.3$ in every epoch. \item The absolute difference ($\Delta K_{\rm s}$) between the brightest ($K_{s,max}$) and faintest point ($K_{s,min}$) in the light curve of the source to be larger than 1 magnitude \citep[similar to the analysis in][]{2014Contreras}. \begin{figure*} \centering \resizebox{0.6\textwidth}{!}{\includegraphics{vvvsearchcut2.pdf}} \caption{$K_{\rm s}$ vs $\Delta K_{\rm s}$ for one of the VVV tiles studied in this work, showing stars with class$=-1$ and ellipticity $<0.3$ in every epoch (black circles). Variable star candidates which fulfil the condition $\Delta K_{\rm s} > 1$~magnitude are shown as blue circles. The red solid line marks the additional 3$\sigma$ cut applied to the objects as explained in the text.
Stars above this line are selected for subsequent visual inspection.} \label{selec:vvv1} \end{figure*} The requirement for a detection at every available epoch in the 2010-12 interval was designed to exclude most transient objects, such as novae, as well as to reduce the number of false positives. This was the initial classification scheme. However, we observed that for each tile we were selecting a large number of sources as variable star candidates. Figure \ref{selec:vvv1} shows the average $K_{\rm s}$ magnitude vs $\Delta K_{\rm s}$ for variable stars in one of the VVV tiles. The figure shows that the majority of stars selected in the original classification scheme are located at the bright and faint ends of the distribution. The latter arise due to unreliable photometry in this faintest part. The VISTA detectors also become increasingly non-linear when approaching the saturation level. This non-linearity is corrected for in the creation of the catalogues, but differences between the magnitudes of the same object can still be observed, even for objects classified as stellar sources \citep[][]{2012Saito}. Figure 10 in \citet{2012Saito} shows that, when comparing the $K_{\rm s}$ magnitudes of stellar sources found in overlapping regions of adjacent disc tiles, stars found at the brighter end show an increasing difference in magnitude \citep[an effect also observed in][]{2011Cioni, 2011Gonzalez}. This effect would explain the large differences observed at the brighter end of Fig. \ref{selec:vvv1}. This part of the distribution also shows marked ``finger-like'' sequences. Each of the sequences can be explained by the fact that the VISTA detectors have different saturation levels. In order to minimize these effects we applied an additional cut. \item We separated the average $K_{\rm s}$ distribution of Fig. \ref{selec:vvv1} into bins of 0.5 magnitudes and derived the mean and standard deviation, $\sigma$, of $\Delta K_{\rm s}$ for each bin.
In order to select an object as a candidate variable star we required its $\Delta K_{\rm s}$ to be 3$\sigma$ above the mean $\Delta K_{\rm s}$ at the corresponding magnitude level. This 3$\sigma$ line is shown in red in Fig. \ref{selec:vvv1}, where we can see that it is able to account for the non-linearity effects at the bright end of the distribution. \begin{figure*} \centering \resizebox{0.7\textwidth}{!}{\includegraphics{vvvsearchim_v2.pdf}} \caption{Example of the images used to visually inspect variable star candidates. In this case we show the images, taken between 2010 and 2012, of variable star VVVv322. Each image has a size of 1\arcmin$\times$1\arcmin. The star gets brighter towards the end of the sequence.} \label{selec:vvv2} \end{figure*} This additional constraint reduced the number of variable star candidates by a large factor. The initial requirements yielded 158478 stars; the additional cut reduced this to 5085 stars. After the catalogue-based selection, we constructed 1\arcmin$\times$1\arcmin\ cut-out images around each candidate for every available epoch. Variable stars were confirmed as real through visual inspection of the individual images (an example is shown in Fig. \ref{selec:vvv2}). In some cases we performed manual photometry with IRAF in order to confirm the variability of the star. The most common causes of false positives were bad pixels in the images, saturation of bright sources, diffraction spikes and stars found on the edge of tiles. If saturation was present, it was usually quite evident in the individual images. In most cases saturation was observed in every single epoch, thus the variability observed in the light curve plots was not real, and the source was marked as such. This selection method yielded a total of 840 real variable stars. However, 25 of them are found twice as they are covered by adjacent tiles in VVV.
The final list of VVV high-amplitude infrared variables consists of 816 stars. This includes one variable star, VVVv815, that showed large variability in 2010 but did not meet all the selection criteria (see below). The average magnitude for objects in the selected sample was found in the range $10.3<K_{\rm s}<16.9$~mag. \end{enumerate} Our requirement for a high-quality detection at every epoch between 2010 and 2012 (see items (i) and (ii)) is bound to cause us to miss some real variables, very likely including some of the faintest or highest amplitude variables if they dropped below $K_{\rm s} \sim 16$ during that time, or if they became saturated and were therefore no longer classified as point sources. A significant fraction of all VVV sources are blended with adjacent stars and can consequently fail our cuts on morphological class and ellipticity at one or more epochs. The same can be true for YSOs with extended reflection nebulae or strong H$_{2}$ jets, as they might have slightly extended morphologies and fail to be classified as point sources \citep[see e.g.][]{2009Chen,2016Ward}. However, sources that pass these quality cuts are likely to be unblended and therefore to have reliable photometry \citep[photometry from the VISTA pipeline is not always reliable for faint stars in crowded fields, see e.g.][]{2012Saito}. In order to check the reliability of the pipeline photometry, especially for faint sources with $K_{\rm s}=15-16$ mag, we obtained point spread function (PSF) fitting photometry of all stars in tile d069 with {\sc DoPHOT} \citep{1993Schechter}. The results confirmed that the variables found by our selection have reliable pipeline photometry. This is illustrated in Fig. \ref{selec:dophotcom} for variable star VVVv316, where the comparison of {\sc DoPHOT} and VISTA pipeline photometry shows close agreement.
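For reference, the binned 3$\sigma$ cut of item (iv), combined with the $\Delta K_{\rm s}>1$~mag amplitude requirement of item (iii), can be sketched as follows. This is illustrative Python, not the code actually used for the survey selection.

```python
import numpy as np

def select_candidates(mean_ks, delta_ks, bin_width=0.5, nsigma=3.0):
    """Sketch of the binned selection: split the mean-magnitude axis
    into 0.5 mag bins, compute the mean and standard deviation of
    Delta Ks per bin, and flag stars lying more than nsigma above
    the bin mean; finally require the 1 mag amplitude cut."""
    mean_ks = np.asarray(mean_ks)
    delta_ks = np.asarray(delta_ks)
    edges = np.arange(mean_ks.min(), mean_ks.max() + bin_width, bin_width)
    which = np.digitize(mean_ks, edges) - 1   # bin index per star
    selected = np.zeros(mean_ks.size, dtype=bool)
    for b in np.unique(which):
        in_bin = which == b
        mu = delta_ks[in_bin].mean()
        sigma = delta_ks[in_bin].std()
        selected[in_bin] = delta_ks[in_bin] > mu + nsigma * sigma
    # criterion (iii): amplitude above 1 magnitude
    return selected & (delta_ks > 1.0)
```

Because the mean and $\sigma$ are computed per magnitude bin, the threshold automatically rises at the bright and faint ends where the photometric scatter is larger, which is the behaviour of the red line in Fig. \ref{selec:vvv1}.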
We investigated the incompleteness of our selection by examining two widely separated VVV disc tiles (d064 and d083), in which we removed our class and ellipticity cuts and required a minimum of only one detection in each year from 2010 to 2012 (with a stellar profile classification). This continues to select against transients and perhaps the most extreme variable YSOs, but it allows us to assess incompleteness due to blending, which can cause sources to be absent or to have different profile classifications at different epochs. We found that this more relaxed selection added over 400 additional candidates in the two tiles down to (mean) $K_{\rm s}$=15.5, an increase of more than a factor of 10. Following visual inspection (see below), we found that the number of real high-amplitude variables was increased by a factor of $\sim$2, up to a limit of $K_{\rm s}$= 15.5. At fainter mean magnitudes the completeness of our selection with criteria (i) to (iv) falls more steeply, because most high-amplitude variables will not satisfy the quality cuts at every epoch as the sensitivity limit is approached. A case of this selection effect is variable star VVVv815, mentioned above. It showed a large variation ($\Delta K_{\rm s} >$ 1 magnitude) in the analysis of an early release of 2010 data. However, the star does not show up as a variable star candidate in the analysis described above. Inspection of the master catalogue for the respective tile shows that the star has a classification different from stellar in 3 out of 18 epochs available for tile d090 in the 2010-2012 period. This star is included in our final list of VVV high-amplitude variables because it is also part of the sample that has follow-up spectroscopic observations. The number of stars in the analysed VVV area that fulfil criteria (i) and (ii) above is 12 789 000.
Considering the number of real variable stars, we see that high-amplitude infrared variability is observed in approximately 1 out of 15\,000 stars in the Galactic mid-plane at $295^{\circ} < l < 350^{\circ}$. \begin{figure} \centering \resizebox{\columnwidth}{!}{\includegraphics{dophot_d069_v4.pdf}} \caption{PSF (red triangles) vs aperture photometry (open circles) of star VVVv316. PSF photometry is shown only for data points classified as ``good'' or ``good but faint'' by {\sc DoPHOT}.} \label{selec:dophotcom} \end{figure} \subsection{Issues with saturation} The aforementioned problems of saturation still affect a small number of stars in our sample. This effect can become important when individual epochs of stars in our sample are brighter than $K_{\rm s}=12$~mag. Saturation reduces the flux in the inner core of the stellar profile, so a magnitude derived using an aperture smaller than the default 2\arcsec\ diameter will be fainter than the magnitude obtained from the default aperture. To check whether a star is saturated, we first obtain magnitudes from aperture photometry using the fluxes measured in apertures of five different diameters. These diameters are 1\arcsec\ (Apermag1), 1.41\arcsec\ (Apermag2), 2\arcsec\ (Apermag3), 2.82\arcsec\ (Apermag4) and 4\arcsec\ (Apermag5). We find that saturated stars show relatively large differences between the magnitudes from the first three apertures, and we set a threshold for saturation as stars having both Apermag1-Apermag3$>$0.05 mag and Apermag2-Apermag3$>$0.02 mag. Thus any individual epoch of a star with Apermag3$<12$~mag (in the $K_{\rm s}$ passband) and having these differences is flagged as saturated. To correct for saturation, we follow \citet{2009Irwin} and define a ring outside the saturated core to obtain a new flux estimate.
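The per-epoch saturation flag described above can be sketched as follows (a minimal illustration, not the pipeline code; the thresholds are those quoted in the text and the example values are invented):

```python
def is_saturated(apermag1, apermag2, apermag3):
    """Flag an epoch as saturated when the core flux is depressed.

    A saturated core makes the small-aperture magnitudes (Apermag1,
    Apermag2) fainter than the default 2 arcsec aperture (Apermag3),
    so an epoch is flagged when the star is bright and both magnitude
    differences exceed the thresholds quoted in the text.
    """
    return (apermag3 < 12.0
            and apermag1 - apermag3 > 0.05
            and apermag2 - apermag3 > 0.02)
```

Epochs flagged in this way are then re-measured with the ring aperture described in the following paragraph.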
We then determine an aperture correction for the ring from a set of bright, unsaturated stars found within 5\arcmin\ of our object of interest. In our analysis we derived new fluxes using the ring defined by Apermag2 and Apermag4. Comparison with 2MASS $K_{\rm s}$ photometry indicates that this choice of apertures extends the dynamic range by 2.5 magnitudes relative to the pipeline photometry, and that while the approach yields correct magnitudes, the uncertainties are large, typically 0.2 magnitudes. In Table \ref{table:vvvfullphot} we present the 2010-2015 photometry for the 816 high-amplitude variable stars from VVV. \begin{table} \begin{center} \begin{tabular}{@{}l@{\hspace{0.3cm}}c@{\hspace{0.25cm}}c@{\hspace{0.2cm}}c@{\hspace{0.2cm}}} \hline ID & MJD-55200 & $K_{\rm s}$ & $K_{\rm s,err}$ \\ & (days) & (mag) & (mag) \\ \hline VVVv1 & 69.1171875 & 14.400 & 0.016\\ VVVv1 & 70.1484375 & 14.436 & 0.010\\ VVVv1 & 71.1445312 & 14.381 & 0.016\\ VVVv1 & 72.1406250 & 14.225 & 0.014\\ VVVv1 & 81.0781250 & 14.606 & 0.022\\ VVVv1 & 82.1015625 & 14.680 & 0.024\\ VVVv1 & 501.9882812 & 14.297 & 0.017\\ VVVv1 & 533.9648438 & 15.271 & 0.044\\ VVVv1 & 535.9726562 & 15.133 & 0.033\\ VVVv1 & 554.9726562 & 14.853 & 0.029\\ VVVv1 & 564.0117188 & 14.223 & 0.019\\ \hline \end{tabular} \caption{$K_{\rm s}$ photometry of the 816 high-amplitude variable stars from VVV.
The full version of the table is available online.}\label{table:vvvfullphot} \end{center} \end{table} \section{High Amplitude Infrared Variables from VVV} \subsection{General characteristics}\label{sec:vvvsearch} \begin{landscape} \clearpage \pagestyle{empty} \setlength\LTleft{0pt} \setlength\LTright{0pt} \setlength\topmargin{-30pt} \setlength\textwidth{702pt} \setlength\textheight{38pc} \begin{table*} \begin{flushleft} \begin{tabular}{@{}l@{\hspace{0.15cm}}c@{\hspace{0.15cm}}c@{\hspace{0.15cm}}c@{\hspace{0.15cm}}c@{\hspace{0.15cm}}c@{\hspace{0.15cm}}c@{\hspace{0.15cm}}c@{\hspace{0.15cm}}c@{\hspace{0.15cm}}c@{\hspace{0.15cm}}c@{\hspace{0.15cm}}c@{\hspace{0.15cm}}c@{\hspace{0.15cm}}c@{\hspace{0.15cm}}c@{\hspace{0.15cm}}c@{\hspace{0.15cm}}c@{\hspace{0.15cm}}c@{\hspace{0.15cm}}c@{\hspace{0.15cm}}l@{\hspace{0.15cm}}c@{}} \hline Object ID & VVV Designation & $\alpha$ & $\delta$ & $l$ & $b$ & $Z$ & $Z_{err}$ & $Y$ & $Y_{err}$ & $J$ & $J_{err}$ & $H$ & $H_{err}$ & $K_{\rm s}$ & $K_{\rm s,err}$ & $\Delta K_{\rm s}$ & $\alpha_{class}$ & SFR & Class$^{\mathrm{a}}$ & Period $^{\mathrm{b}}$ \\ & & (J2000) & (J2000) & (degrees) & (degrees) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & & & & (days) \\ \hline VVVv1 & VVV J114135.16-622055.51 & 11:41:35.16 & -62:20:55.51 & 294.92603 & -0.56770 & -- & -- & -- & -- & 17.99 & 0.06 & 15.95 & 0.02 & 14.44 & 0.01 & 1.34 & -0.29 & y & STV & 72\\ VVVv2 & VVV J114412.94-623449.09 & 11:44:12.94 & -62:34:49.09 & 295.28005 & -0.71146 & -- & -- & -- & -- & -- & -- & 18.78 & 0.23 & 15.71 & 0.03 & 2.52 & 1.22 & y & STV & --\\ VVVv3 & VVV J115113.03-623729.29 & 11:51:13.03 & -62:37:29.29 & 296.07199 & -0.55784 & 13.17 & 0.01 & 12.93 & 0.01 & 12.90 & 0.01 & 12.70 & 0.01 & 12.24 & 0.01 & 2.21 & -- & n & Known & --\\ VVVv4 & VVV J115808.69-630708.60 & 11:58:08.69 & -63:07:08.60 & 296.95057 & -0.86785 & -- & -- & -- & -- & 18.23 & 0.08 & 16.62 & 0.04 & 15.32 & 0.02 & 1.10 & -0.24 & y & STV & --\\ VVVv5 & 
VVV J115959.68-622613.20 & 11:59:59.68 & -62:26:13.20 & 297.02026 & -0.15716 & 17.69 & 0.02 & 16.62 & 0.01 & 15.87 & 0.01 & 15.25 & 0.01 & 13.53 & 0.01 & 1.30 & -- & n & LPV & --\\ VVVv6 & VVV J115937.81-631109.77 & 11:59:37.81 & -63:11:09.77 & 297.12836 & -0.89953 & 19.02 & 0.05 & 18.08 & 0.04 & 16.80 & 0.02 & 15.95 & 0.02 & 15.50 & 0.02 & 1.02 & -- & n & EB & 1.64\\ VVVv7 & VVV J120202.67-623615.60 & 12:02:02.67 & -62:36:15.60 & 297.28472 & -0.27538 & -- & -- & -- & -- & -- & -- & -- & -- & 17.22 & 0.12 & 1.60 & 2.39 & y & Eruptive & --\\ VVVv8 & VVV J120059.11-631636.18 & 12:00:59.11 & -63:16:36.18 & 297.29582 & -0.95838 & -- & -- & -- & -- & -- & -- & -- & -- & 16.86 & 0.09 & 1.38 & 0.64 & y & Eruptive & --\\ VVVv9 & VVV J120217.23-623647.83 & 12:02:17.23 & -62:36:47.83 & 297.31381 & -0.27888 & -- & -- & -- & -- & 18.29 & 0.08 & 16.33 & 0.03 & 14.64 & 0.01 & 2.78 & -0.39 & y & Dipper & --\\ VVVv10 & VVV J120250.85-622437.62 & 12:02:50.85 & -62:24:37.62 & 297.33912 & -0.06749 & 18.57 & 0.04 & 18.23 & 0.05 & 16.97 & 0.03 & 16.35 & 0.03 & 16.07 & 0.04 & 1.19 & -- & n & STV & --\\ VVVv11 & VVV J120436.62-625704.60 & 12:04:36.62 & -62:57:04.60 & 297.63741 & -0.56188 & 20.20 & 0.19 & 19.01 & 0.13 & 17.87 & 0.07 & 16.79 & 0.05 & 16.08 & 0.08 & 1.10 & -- & n & STV & --\\ VVVv12 & VVV J121033.19-630755.71 & 12:10:33.19 & -63:07:55.71 & 298.33185 & -0.62611 & -- & -- & -- & -- & -- & -- & 16.30 & 0.03 & 15.04 & 0.03 & 1.74 & 0.28 & y & Fader & --\\ VVVv13 & VVV J121216.83-624838.32 & 12:12:16.83 & -62:48:38.32 & 298.47603 & -0.27814 & -- & -- & -- & -- & -- & -- & -- & -- & 16.72 & 0.14 & 1.40 & 1.39 & y & STV & --\\ VVVv14 & VVV J121218.13-624904.48 & 12:12:18.13 & -62:49:04.48 & 298.47958 & -0.28495 & 19.48 & 0.10 & 18.88 & 0.12 & 17.84 & 0.06 & 16.74 & 0.05 & 15.56 & 0.05 & 1.29 & 0.88 & y & LPV-YSO & 124\\ VVVv15 & VVV J121226.09-624416.97 & 12:12:26.09 & -62:44:16.97 & 298.48252 & -0.20371 & 19.00 & 0.07 & 17.72 & 0.04 & 16.57 & 0.02 & 15.60 & 0.02 & 15.04 & 0.03 & 
1.09 & -- & y & EB & 2.27\\ VVVv16 & VVV J121329.76-624107.74 & 12:13:29.76 & -62:41:07.74 & 298.59498 & -0.13364 & 18.01 & 0.03 & 17.13 & 0.02 & 16.06 & 0.01 & 14.72 & 0.01 & 13.66 & 0.01 & 1.29 & 0.92 & y & Eruptive & --\\ VVVv17 & VVV J121352.08-625549.90 & 12:13:52.08 & -62:55:49.90 & 298.67278 & -0.36986 & -- & -- & -- & -- & -- & -- & 17.79 & 0.12 & 16.41 & 0.10 & 1.22 & 0.44 & y & STV & --\\ VVVv18 & VVV J121950.31-632142.24 & 12:19:50.31 & -63:21:42.24 & 299.39868 & -0.70695 & 17.82 & 0.02 & 17.29 & 0.02 & 16.20 & 0.01 & 15.52 & 0.01 & 15.32 & 0.02 & 1.04 & -- & n & STV & --\\ VVVv19 & VVV J122255.30-632352.56 & 12:22:55.30 & -63:23:52.56 & 299.74594 & -0.70270 & 19.56 & 0.08 & 18.67 & 0.06 & 17.55 & 0.04 & 16.40 & 0.03 & 15.61 & 0.03 & 1.82 & -- & n & EB & 16.88\\ VVVv20 & VVV J122827.97-625713.97 & 12:28:27.97 & -62:57:13.97 & 300.32402 & -0.19849 & -- & -- & -- & -- & 17.38 & 0.04 & 14.10 & 0.01 & 11.70 & 0.01 & 1.71 & 0.60 & y & Eruptive & --\\ VVVv21 & VVV J122902.24-625234.10 & 12:29:02.24 & -62:52:34.10 & 300.38193 & -0.11533 & -- & -- & -- & -- & -- & -- & 17.12 & 0.05 & 15.80 & 0.03 & 1.79 & 0.86 & y & LPV-YSO & 603\\ VVVv22 & VVV J123105.60-624457.34 & 12:31:05.60 & -62:44:57.34 & 300.60547 & 0.03057 & -- & -- & -- & -- & 18.81 & 0.14 & 16.94 & 0.05 & 15.55 & 0.03 & 1.73 & -0.34 & y & STV & --\\ VVVv23 & VVV J123128.53-624433.10 & 12:31:28.53 & -62:44:33.10 & 300.64855 & 0.04070 & 19.44 & 0.07 & 18.37 & 0.05 & 17.14 & 0.03 & 15.57 & 0.02 & 14.40 & 0.01 & 1.51 & -0.20 & y & Fader & --\\ VVVv24 & VVV J123235.68-634319.61 & 12:32:35.68 & -63:43:19.61 & 300.84794 & -0.92662 & 17.17 & 0.01 & 16.13 & 0.01 & 14.05 & 0.01 & 12.95 & 0.01 & 12.23 & 0.01 & 1.20 & -- & n & LPV & 430\\ VVVv25 & VVV J123514.37-624715.63 & 12:35:14.37 & -62:47:15.63 & 301.08129 & 0.02587 & -- & -- & -- & -- & -- & -- & 16.01 & 0.02 & 12.34 & 0.01 & 1.68 & 0.22 & y & Eruptive & --\\ VVVv26 & VVV J123845.66-631136.03 & 12:38:45.66 & -63:11:36.03 & 301.50320 & -0.35674 & -- & -- & 
-- & -- & 19.67 & 0.29 & 16.70 & 0.04 & 14.71 & 0.01 & 2.45 & 1.07 & y & Eruptive & --\\ VVVv27 & VVV J123848.33-633939.15 & 12:38:48.33 & -63:39:39.15 & 301.53114 & -0.82347 & -- & -- & -- & -- & -- & -- & -- & -- & 12.04 & 0.01 & 2.73 & -- & n & LPV & 1329\\ VVVv28 & VVV J123911.54-630524.76 & 12:39:11.54 & -63:05:24.76 & 301.54688 & -0.25138 & -- & -- & -- & -- & -- & -- & 18.91 & 0.32 & 16.78 & 0.09 & 1.39 & 0.54 & y & STV & 3.68\\ VVVv29 & VVV J123931.48-630720.38 & 12:39:31.48 & -63:07:20.38 & 301.58593 & -0.28170 & -- & -- & -- & -- & -- & -- & -- & -- & 16.94 & 0.10 & 2.13 & 1.32 & y & Eruptive & --\\ VVVv30 & VVV J124140.56-635033.57 & 12:41:40.56 & -63:50:33.57 & 301.85616 & -0.99128 & 17.92 & 0.02 & 17.22 & 0.02 & 16.40 & 0.02 & 15.66 & 0.02 & 15.23 & 0.02 & 1.22 & -- & n & EB & 1.91\\ VVVv31 & VVV J124140.15-635918.05 & 12:41:40.15 & -63:59:18.05 & 301.86093 & -1.13689 & 13.91 & 0.01 & 13.27 & 0.01 & 12.46 & 0.01 & 12.08 & 0.01 & 11.69 & 0.01 & 1.15 & -- & n & LPV & 760\\ VVVv32 & VVV J124357.15-625445.09 & 12:43:57.15 & -62:54:45.09 & 302.07991 & -0.05314 & 19.58 & 0.07 & 18.08 & 0.04 & 16.07 & 0.01 & 14.07 & 0.01 & 12.45 & 0.01 & 2.49 & 0.33 & y & LPV-YSO & --\\ VVVv33 & VVV J124425.05-631355.76 & 12:44:25.05 & -63:13:55.76 & 302.14153 & -0.37116 & -- & -- & -- & -- & -- & -- & 18.10 & 0.16 & 16.43 & 0.07 & 1.37 & -- & n & STV & --\\ VVVv34 & VVV J125029.87-625124.93 & 12:50:29.87 & -62:51:24.93 & 302.82470 & 0.01465 & 19.22 & 0.05 & 18.32 & 0.05 & 17.86 & 0.06 & 16.70 & 0.04 & 15.71 & 0.03 & 1.30 & -0.34 & y & STV & --\\ VVVv35 & VVV J125206.52-635711.52 & 12:52:06.52 & -63:57:11.52 & 303.00557 & -1.08152 & 17.62 & 0.01 & 16.85 & 0.01 & 17.13 & 0.03 & 16.21 & 0.03 & 15.97 & 0.04 & 1.24 & -- & n & EB & 1.12\\ VVVv36 & VVV J125917.72-633008.44 & 12:59:17.72 & -63:30:08.44 & 303.80825 & -0.64394 & 14.38 & 0.01 & 13.84 & 0.01 & 13.27 & 0.01 & 12.56 & 0.01 & 11.81 & 0.01 & 1.03 & -0.11 & y & Eruptive & --\\ VVVv37 & VVV J130243.05-631130.00 & 13:02:43.05 
& -63:11:30.00 & 304.20331 & -0.34774 & 18.80 & 0.03 & 17.75 & 0.03 & 16.74 & 0.02 & 15.89 & 0.02 & 15.38 & 0.03 & 1.34 & -- & n & STV & --\\ VVVv38 & VVV J130311.38-631439.09 & 13:03:11.38 & -63:14:39.09 & 304.25411 & -0.40259 & -- & -- & -- & -- & -- & -- & 17.79 & 0.13 & 16.32 & 0.07 & 1.41 & 1.20 & y & STV & 48.96\\ VVVv39 & VVV J130440.98-635313.45 & 13:04:40.98 & -63:53:13.45 & 304.38893 & -1.05277 & 18.67 & 0.03 & 17.89 & 0.03 & 17.57 & 0.05 & 16.38 & 0.04 & 15.49 & 0.03 & 1.51 & -- & n & STV & --\\ VVVv40 & VVV J130600.43-630144.40 & 13:06:00.43 & -63:01:44.40 & 304.58298 & -0.20394 & 20.71 & 0.20 & 19.49 & 0.14 & 16.52 & 0.02 & 15.42 & 0.02 & 14.69 & 0.01 & 1.24 & -- & n & EB & 10.29\\ \hline \multicolumn{21}{l}{a. LPV: long-period variable; LPV-Mira: long-period variable found in an SFR but with Mira-like characteristics; LPV-YSO: long-period variable found in an SFR with YSO-like characteristics.}\\ \multicolumn{21}{l}{ STV: short-term variable; EB: eclipsing binary.}\\ \multicolumn{21}{l}{b. We note that in some objects classified as long-period variables (LPV, LPV-Mira and LPV-YSO), the light curve shows a clear periodic behaviour over long timescales ($>100$~days).}\\ \multicolumn{21}{l}{ However, we are unable to measure an exact period. In those cases this column is left blank.}\\ \hline \end{tabular} \caption{Parameters of the high-amplitude variables from VVV. For the description of the columns see Sect. \ref{sec:vvvsearch}. Here we show the first 40 sources in the list. The complete list is available online.}\label{table:vvvpar} \end{flushleft} \end{table*} \end{landscape} The selection method of Sect.~\ref{sec:vvvselec} yields 816 high-amplitude ($\Delta K_{\rm s} > 1$~mag) infrared variables. In order to study the properties of these stars, we searched for additional information in available public databases.
This search can be summarized as follows: \begin{itemize} \item {\bf SIMBAD} We queried the SIMBAD database \citep{2000Wenger} for astronomical objects within a radius of 5\arcmin\ centred on the VVV object. \item {\bf Vizier} Additional information was provided by the Vizier database \citep{2000Ochsenbein}. Here we checked whether the VVV object was found within 2\arcsec\ of entries in astronomical catalogues that are not available in SIMBAD. \item {\bf The NASA/IPAC Infrared Science Archive (IRSA)} Here we queried several near- and mid-infrared public surveys, which include 2MASS \citep{2006Skrutskie}, DENIS \citep{1994Epchtein}, the {\it Spitzer}/GLIMPSE surveys \citep[see e.g.,][]{2003Benjamin}, WISE \citep{2010Wright}, Akari \citep{2007Murakami} and MSX6C \citep{2001Price}. The search was done automatically using the IDL scripts provided at the IRSA website. The catalogues were queried for objects found within a 10\arcsec\ radius of the VVV object. In most cases several objects are found within this distance, in which case we selected only the nearest object to our star. To confirm whether these detections correspond to our variable star, 1\arcmin$\times$1\arcmin\ VVV images around the star were visually inspected. In addition, we used the WISE image service within IRSA to inspect multi-colour images of the areas around our variable stars, in order to establish a possible association of the VVV object with an SFR. \end{itemize} The general properties of the sample can be found in Table \ref{table:vvvpar}. Column $1$ presents the original designation given to the sources. Column $2$ corresponds to the full VVV designation for the source. Coordinates for the objects are given in columns $3$ and $4$, with columns $5$ and $6$ presenting the Galactic coordinates of the sources. In columns $7$, $8$, $9$, $10$ and $11$ we present the nearly contemporaneous $Z$, $Y$, $J$, $H$, $K_{\rm s}$ photometry from VVV.
Column $12$ gives $\Delta K_{\rm s}$, the absolute value of the peak-to-trough difference from the 2010-2014 light curves from VVV. Column $13$ presents $\alpha_{class}$, the 2 to 23~$\mu$m spectral index parameter that relates to the evolutionary class of sources that appear to be associated with SFRs (the method and data used to estimate this parameter are explained in Sect. \ref{vvv:sec_alphaclass}). Column $14$ indicates whether the object is likely associated with an SFR, whilst column $15$ presents the classification of the object from its light curve. The latter is discussed throughout the text. Finally, column $16$ presents the approximate period for the variable stars where we are able to measure this parameter. Most of the variable stars ($\sim$98 per cent) were previously unknown, based on our searches in SIMBAD and Vizier. Among the known variables there are 2 novae, Nova Cen 2005 and Nova Cen 2008 \citep{2010Hughes,2013Saito}, 2 eclipsing binaries (EBs), EROS2-GSA J132758-631449 and PT Cen \citep{2002Derue, 2004Budding}, 1 high-mass X-ray binary, BR Cir \citep[see e.g.][]{2008Tudose}, and 9 OH/IR stars. Among the objects not previously classified as variable stars, 159 are found in the Spitzer/GLIMPSE catalogue of intrinsically red sources of \citet{2008Robitaille}, with the majority being classified as likely YSOs from their mid-IR colours and brightness. \subsection{YSO population} We note that most of the variables are listed as spatially associated with SFRs ($\sim$65\%, falling to $\sim$54\% after allowing for chance projection by non-YSOs, see Sects \ref{vvv:sec_sfrass}, \ref{sec:cont} and \ref{vvv:sec_physmec}) and that these stars have spectral indices indicating a class I or flat-spectrum evolutionary status; they are therefore likely to be in an early evolutionary stage.
They are usually sufficiently red to be undetected ($i > 21$~mag) in sensitive panoramic optical surveys such as VPHAS$+$ \citep{2014Drew}, unlike most of the known FUors and EXors. The spectral indices of the YSOs are discussed later in Sect.~\ref{vvv:sec_alphaclass}, following classification of the light curves and an attempt to decontaminate chance projections of other variables in SFRs. \begin{figure*} \centering \resizebox{0.7\textwidth}{!}{\includegraphics{Graf_lb_a1.pdf}} \resizebox{0.7\textwidth}{!}{\includegraphics{Graf_lb_a2.pdf}} \resizebox{0.7\textwidth}{!}{\includegraphics{Graf_lb_a3.pdf}} \caption{(top) Galactic distribution of high-amplitude variable stars selected from VVV. These are divided into objects likely associated with SFRs (black filled circles) and those that are found outside these regions (black open circles). The bottom graph shows the same distribution for the 816 high-amplitude variables (black circles), but this time including the areas of star formation from the \citet{2002Avedisova} catalogue (red diamonds).} \label{vvv:lb_dist} \end{figure*} \begin{figure*} \centering \resizebox{0.47\textwidth}{!}{\includegraphics{Graf_cor_v2.pdf}} \resizebox{0.47\textwidth}{!}{\includegraphics{Graf_neig_v2.pdf}} \caption{(left) Two-point angular correlation function of the sample of VVV high-amplitude variables. (right) Nearest neighbour distribution for the same sample. The smooth curve represents the expectation for a random (Poisson) distribution.} \label{vvv:neighist} \end{figure*} \subsection{Association with SFRs}\label{vvv:sec_sfrass} Figure \ref{vvv:lb_dist} shows the distribution of the 816 VVV variables across the Galactic midplane. It can be seen that our objects appear to be highly clustered, with their distribution following that of the SFRs from the \citet{2002Avedisova} catalogue (red diamonds in the bottom plot of Fig. \ref{vvv:lb_dist}).
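Clustering of this kind can be quantified with the mean surface density of companions (MSDC) and the two-point angular correlation function, following the \citet{1998Bate} estimator used in this subsection. A minimal numerical sketch (not the survey analysis code; positions, survey area and separation bins are invented inputs, and flat-sky separations are assumed):

```python
import numpy as np

def correlation_function(x, y, area, bins):
    """MSDC and w(theta) for a set of angular positions.

    MSDC(theta) = N_p / (2*pi*theta*dtheta*N_star), and
    w(theta) = MSDC * area / N_star - 1, so w = 0 for a random field.
    Positions (x, y) and the survey area share the same angular units;
    the annulus area 2*pi*theta*dtheta is the thin-annulus approximation.
    """
    n = len(x)
    # Pairwise flat-sky separations; each unordered pair is counted twice,
    # which equals the companion count summed over all stars.
    dx = x[:, None] - x[None, :]
    dy = y[:, None] - y[None, :]
    sep = np.hypot(dx, dy)[~np.eye(n, dtype=bool)]
    counts, edges = np.histogram(sep, bins=bins)
    theta = 0.5 * (edges[:-1] + edges[1:])
    dtheta = np.diff(edges)
    msdc = counts / (2.0 * np.pi * theta * dtheta * n)
    w = msdc * area / n - 1.0
    return theta, w
```

For a uniform random field this estimator returns $w(\theta)\approx 0$, while clustered positions give $w(\theta)>0$ at small separations, as observed for the VVV sample.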
To study the apparent clustering, we derive the two-point angular correlation function, $w(\theta)$, and the nearest neighbour distribution of the sample of high-amplitude variables. To derive $w(\theta)$, we follow \citet{1998Bate} and first estimate the mean surface density of companions (MSDC). For each star, we compute the angular separation, $\theta$, to all other stars, and bin the separations into annuli spanning $\theta$~to $\theta+\delta \theta$. The MSDC results from dividing the number of pairs found, $N_{p}$, at a given separation by the area of the annulus, and dividing by the total number of stars, $N_{\ast}$, or \begin{eqnarray*} MSDC=\frac{N_{p}}{2\pi\theta\delta \theta N_{\ast}} \end{eqnarray*} The latter relates to $w(\theta)$ as \begin{eqnarray*} w(\theta) = MSDC \times \frac{A}{N_{\ast}} - 1 \end{eqnarray*} \noindent where $A$ is the area covered by the survey \citep[][and references therein]{1998Bate}. This correlation function is valid as long as the separations $\theta$ are smaller than the angular size of the sample. We show the two-point correlation function in Fig. \ref{vvv:neighist}. We do not find any pairs at separations $\theta < 20\arcsec$, hence $w(\theta)=-1$ there. For separations between 20\arcsec\ and 100\arcsec, $w(\theta)$ is larger than the value expected for random pairings ($w(\theta)=0$), and it remains somewhat above zero for separations up to a few hundred arcseconds. The nearest neighbour distribution of Fig. \ref{vvv:neighist} also shows an excess of close neighbours at distances $R<200\arcsec$, compared to the number expected from a random (Poisson) distribution. Thus we are confident that we are tracing clustering in the VVV sample, on a spatial scale similar to that of distant Galactic clusters and star formation regions. As an illustration of how variable stars in VVV are preferentially located in areas of star formation, Fig.
\ref{vvv:d065} shows the $K_{\rm s}$ image of the area covered in tile d065. Twenty-five highly variable stars are found in this tile, and it is clear that they are not evenly distributed across the area covered by d065, but are instead clustered around an area of star formation, which is better appreciated in the cut-out image from WISE \citep{2010Wright}. \begin{figure} \centering \resizebox{\columnwidth}{!}{\includegraphics{d065_vvv.pdf}}\\ \resizebox{\columnwidth}{!}{\includegraphics{d065_wise2.pdf}} \caption{The top graph shows the $K_{\rm s}$ image of tile d065 along with the high-amplitude variable stars found in this region. The clustering of the variable stars is already apparent in this image. The bottom graph shows the WISE false colour image of the same region (blue=3.5 $\mu$m, green=4.6 $\mu$m, red=12 $\mu$m), in which the preferential location of the variable stars around areas of star formation is more readily apparent.} \label{vvv:d065} \end{figure} To establish a likely association with an SFR we used the criteria established in the UGPS search \citep[see][]{2014Contreras}, which were based on entries in the SIMBAD database and the Avedisova catalogue within a 5\arcmin\ radius of each high-amplitude variable. In addition, we also checked WISE images for evidence of star formation in the area of the object, e.g. bright extended 12 $\mu$m emission near the location of the object, or several stars with red W1-W2 colours (sources appearing green, yellow or red in WISE colour images) around the VVV object. We find that 530 of our variable stars are spatially associated with SFRs, which represents 65$\%$ of the sample, remarkably similar to the observed association for UGPS objects \citep{2014Contreras}. \subsection{Contamination by non-YSOs}\label{sec:cont} In \citet{2014Contreras}, we estimated that about 10$\%$ of objects are probable chance associations with SFRs.
This number is likely to be larger in our current analysis given that: (i) we are sampling mid-plane sightlines across the Galactic disc; and (ii) the higher extinction in the Galactic midplane and the brighter saturation limit of VVV compared with UGPS allow a larger number of bright evolved variable stars to show up in our results. To determine the percentage of objects that might be catalogued as likely associated with SFRs by chance, we used the following method: \begin{itemize} \item[] a) Create a master catalogue of objects in the 76 tiles that were classified as stars in each of the epochs of the 2010-2012 analysis. \item[] b) Select 816 stars at random from this catalogue and query SIMBAD for objects found within a 5\arcmin\ radius. \item[] c) Count the number of objects within this radius that were classified in any of the categories that could relate to star formation. These categories include T Tauri and Herbig Ae/Be stars, HII regions, dark clouds, dense cores, mm and submm sources, and FU Orionis stars, among others. \item[] d) Repeat steps b and c 40 times. \end{itemize} \begin{figure} \centering \resizebox{\columnwidth}{!}{\includegraphics{Graf_cont_v2.pdf}} \caption{Percentage of objects flagged as likely associated with SFRs as a function of the number of objects classified in categories that could relate to star formation, found within a 5\arcmin\ radius query in SIMBAD.} \label{vvv:simbcont} \end{figure} Figure \ref{vvv:simbcont} shows, for the randomly selected stars, the number of SIMBAD objects found within 5\arcmin\ that were classified in the categories above, N$_{\rm simbad}$, versus the percentage of VVV objects with this number. It is already apparent that the percentage of chance associations will be higher than that estimated for UGPS.
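The Monte Carlo procedure in steps (a)-(d) can be sketched as follows. This is a minimal illustration, not the analysis code: the real SIMBAD query requires network access, so it is replaced here by a caller-supplied stand-in function, and the catalogue is an arbitrary sequence of star records.

```python
import random

def chance_association_fraction(count_sfr_neighbours, catalogue,
                                n_draw=816, n_trials=40, threshold=4):
    """Monte Carlo estimate of the chance SFR-association fraction.

    count_sfr_neighbours(star) must return the number of SIMBAD objects
    in star formation related categories within 5 arcmin of the star
    (here a stand-in for the real query).  A star is flagged as
    SFR-associated when at least `threshold` such objects are found,
    matching the criterion used in the text.
    """
    fractions = []
    for _ in range(n_trials):
        sample = random.sample(catalogue, n_draw)
        flagged = sum(1 for s in sample
                      if count_sfr_neighbours(s) >= threshold)
        fractions.append(flagged / n_draw)
    # Average the flagged fraction over the independent trials.
    return sum(fractions) / n_trials
```

Averaging over 40 random draws of 816 stars reduces the scatter of the estimated chance-association fraction well below the percent level.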
However, we note that for an object to be flagged as associated with an SFR in Table \ref{table:vvvpar}, we required at least 4 SIMBAD objects within the 5\arcmin\ radius, giving us an estimate that $\sim$30$\%$ of the non-YSO population is spatially associated with an SFR by chance. Inspection of WISE images of 100 randomly selected sources yields a similar fraction of chance associations, but most of these were also identified as SFR-associated from the SIMBAD results, so the WISE data only slightly increase the chance association fraction. The Avedisova catalogue adds an even smaller fraction of chance associations not indicated by the SIMBAD and WISE data, so the final chance association probability for non-YSOs with star formation regions is 35\%. The number of non-YSOs in the SFR-associated sample is less than 35\% because non-YSOs do not dominate the full high-amplitude sample but constitute about half of it. We found 286/816 (35\%) of the variables outside SFRs, i.e. in the 65\% of the area away from SFRs, suggesting that 54\% (0.35/0.65) of the sample is composed of objects other than YSOs; this, however, neglects the fact that some YSOs will be members of SFRs that are neither known in the literature nor visible in WISE images (see Sect. \ref{vvv:sec_varsnonsfrs}). Consequently, random addition of 35\% of half of the total sample of 816 variables to the SFR-associated sample would be expected to cause only 27\% contamination of the SFR-associated subsample by non-YSOs. This conclusion, that the SFR-associated population of variables is dominated by bona fide YSOs, is supported by the two-colour diagrams (Figs \ref{vvv:nonsfrs_prop} and \ref{vvv:gc}) and light curves of the population (see Sects \ref{vvv:sec_varsnonsfrs} and \ref{vvv:sec_lcmorp}), which differ from those outside SFRs.
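The bookkeeping behind these percentages can be checked directly (a sketch using the counts quoted in this section; the 530 SFR-associated stars come from the association analysis above):

```python
# Bookkeeping check of the contamination estimate.
n_total = 816
n_outside = 286                 # variables found outside SFRs
n_in_sfr = n_total - n_outside  # 530 SFR-associated variables
area_outside = 0.65             # fraction of survey area away from SFRs

frac_outside = n_outside / n_total          # ~0.35 of the sample
# If non-YSOs are spread uniformly, only 65 per cent of them fall
# outside SFRs, so the total non-YSO fraction is larger:
frac_non_yso = frac_outside / area_outside  # ~0.54 of the sample
# The text takes non-YSOs to be about half the sample, of which
# 35 per cent are chance-projected onto SFRs:
n_chance = 0.35 * (n_total / 2)
contamination = n_chance / n_in_sfr         # ~0.27 of the SFR subsample
```

These reproduce the 35, 54 and 27 per cent figures quoted above.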
We note that the results of spectroscopic follow-up of a subsample of VVV objects associated with SFRs (Paper II) show a figure of 25$\%$, consistent with this estimate despite some additional selection effects in that subsample. \subsection{Properties of variables outside SFRs}\label{vvv:sec_varsnonsfrs} To establish the nature of the objects that could be contaminating our sample of likely YSOs, and that may also be interesting variable stars in their own right, we study the properties of objects found outside areas of star formation. Many of these are listed in SIMBAD as IR sources (from the IRAS and MSX6C catalogues) and associated with OH masers, as well as being catalogued as evolved stars in previous surveys. Visual inspection of their light curves also shows that a large percentage of objects have periodic variability with $P > 100$~days, whilst the remainder of the sample shows variability over shorter timescales, of the order of $P < 20$~days. We use the phase dispersion minimization \citep[{\sc pdm},][]{1978Stellingwerf} algorithm found in the {\sc noao} package in {\sc iraf} to search for a period in the light curves of these objects. This allows us to derive the periods, or at least the approximate timescale of the variability, of objects found outside areas of star formation. To provide a comparison with the {\sc pdm} results we also used the {\sc gatspy LombScargleFast} implementation of the Lomb--Scargle algorithm, which benefits from automatic optimisation of the frequency grid so that significant periods are not missed. We found that {\sc pdm} was generally better for the purpose of this initial investigation. The Lomb--Scargle algorithm is designed to detect sinusoidal variations, whereas {\sc pdm} makes no assumptions about the form of the light curve and is therefore much more sensitive to the periods of eclipsing binaries, for example. The Lomb--Scargle method did help with the classification of a small number of long-period variables (LPVs).
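The idea behind phase dispersion minimization can be sketched as follows (a minimal illustration of the \citet{1978Stellingwerf} statistic, not the {\sc iraf} task; the binning details are simplified and the test data are synthetic):

```python
import numpy as np

def pdm_theta(t, mag, period, n_bins=10):
    """Phase dispersion statistic for one trial period.

    Fold the light curve at the trial period, bin in phase, and compare
    the pooled within-bin variance to the total variance.  A good period
    groups similar magnitudes into the same phase bin, driving the
    statistic well below 1; wrong periods give values near 1.
    """
    phase = (t / period) % 1.0
    idx = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    total_var = np.var(mag, ddof=1)
    num, den = 0.0, 0
    for b in range(n_bins):
        m = mag[idx == b]
        if len(m) > 1:
            num += (len(m) - 1) * np.var(m, ddof=1)
            den += len(m) - 1
    return np.inf if den == 0 else (num / den) / total_var

def best_period(t, mag, trial_periods, n_bins=10):
    """Return the trial period minimizing the PDM statistic."""
    thetas = [pdm_theta(t, mag, p, n_bins) for p in trial_periods]
    return trial_periods[int(np.argmin(thetas))]
```

Because no functional form is assumed for the folded curve, this statistic works equally well for eclipsing binaries and non-sinusoidal pulsators, which is the advantage of {\sc pdm} noted above.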
Out of the 286 stars in this subsample, 5 correspond to known objects from the literature (novae, EBs and a high-mass X-ray binary); 45$\%$ of them are LPVs and 17$\%$ are EBs for which we are able to measure a period. In addition, 30$\%$ of the sample is comprised of objects in which variability seems to occur on short timescales. The light curves of many objects in the latter group resemble those of EBs with measured periods, but with only 1 or 2 dips sampled in the dataset. We suspect that most of these could also be EBs, but we are not able to establish their periods. Finally, we also find 18 objects (6$\%$) that do not appear to belong to any of the former classes. In Fig. \ref{vvv:nonSFRs_examples} we show two examples of objects belonging to these different subclasses for which a period could be derived. The LPV VVVv215 is typical of many of the dusty Mira variables in the dataset that show long-term trends caused by changes in the extinction of the circumstellar dust shell. These trends are superimposed on the pulsational, approximately sinusoidal variations, with the result that the $K_{\rm s}$ magnitude at a given point in the phase curve can differ between cycles. \begin{figure} \centering \resizebox{\columnwidth}{!}{\includegraphics{example_mira_v2.pdf}}\\ \resizebox{\columnwidth}{!}{\includegraphics{example_eb_v2.pdf}} \caption{Examples of $K_{\rm s}$ light curves for objects not associated with SFRs. (top) The long-period variable VVVv215. (bottom) The eclipsing binary VVVv203.} \label{vvv:nonSFRs_examples} \end{figure} The objects belonging to different classes show very different properties. Figure \ref{vvv:nonsfrs_prop} shows the $K_{\rm s}$ distribution for these objects, where it can be seen that the LPVs dominate the bright end of the distribution, with a peak at $K_{\rm s} \sim 11.8$~mag and a sharp drop at brighter magnitudes, probably due to the effects of saturation. EBs and the other classes are usually found at fainter magnitudes.
The near-infrared colours of these groups (bottom plot of Fig. \ref{vvv:nonsfrs_prop}) show that LPVs tend to be highly reddened objects, with larger $(H-K_{\rm s})$ colours than the EBs and other classes, which usually have the colours of lightly reddened main-sequence and giant stars. We will see in Sect. \ref{vvv:sec_physmec} that this low reddening and lack of K-band excess (in most cases) distinguishes the EBs and other shorter-period variables from the sample spatially associated with SFRs, so contamination of the SFR sample by these shorter-period variables should not be very significant. The colour-colour diagram of Fig. \ref{vvv:nonsfrs_prop} also supports the idea that this sample might contain some bona fide YSOs that are not revealed by our searches of SIMBAD and the WISE images, as mentioned in our discussion of contamination. In the figure we observe objects (red circles) that show $(H-K_{\rm s})$ colour excesses and are neither known variables nor classified as LPVs or EBs. By simply selecting red circles located to the right of the reddening vector passing through the reddest main-sequence stars, we estimate that 44 objects have colours consistent with a YSO nature. This would represent 15$\%$ of the objects outside SFRs and 5\% of the full sample of 816 variables. A more detailed study would of course be needed to confirm their YSO nature. We also note that the lower-left part of the classical T Tauri star (CTTS) locus plotted in the figure extends into the region occupied by lightly reddened main-sequence stars, so some of these individual red circles could also be YSOs.
\begin{figure} \centering \resizebox{\columnwidth}{!}{\includegraphics{nonSFRs_kdist_v3.pdf}}\\ \resizebox{\columnwidth}{!}{\includegraphics{nonSFRs_colours_v3.pdf}} \caption{(top) Overall $K_{\rm s}$ distribution (from 2010 data) of objects not associated with SFRs (dotted line), and the same distribution separated into LPVs (solid blue line), EBs and known objects (solid orange line), and other classes of variable stars (solid red line). (bottom) $(J-H)$, $(H-K_{\rm s})$ colour-colour diagram for LPVs (blue circles), EBs and known objects (orange circles), and other classes of variable stars (red circles). In the diagram, arrows mark lower limits. The solid curve at the lower left indicates the colours of main-sequence stars. The short-dashed line is the CTTS locus of \citet{1997Meyer} and the long-dashed lines are reddening vectors.} \label{vvv:nonsfrs_prop} \end{figure} The bright LPVs are very likely pulsating asymptotic giant branch (AGB) stars. These stars are usually divided into Mira variables, which are characterized by variability of $\Delta K > 0.4$~mag and periods in the range $100 <$~P~$< 400$~d, and dust-enshrouded AGB stars, which are heavily obscured in the optical owing to the thick circumstellar envelopes (CSE) developed through heavy mass loss ($\sim 10^{-4}$~M$_{\odot}$~yr$^{-1}$). The latter group, comprising carbon-rich and oxygen-rich stars (the latter often referred to as OH/IR stars if they display OH maser emission), shows larger amplitudes in the K band (up to 4 magnitudes) and periods in the range $400 <$~P~$< 2000$~d \citep[for the above, see e.g.,][]{2006Jimenez2,2008Whitelock}. AGB stars are bright objects and should be saturated at the magnitudes covered by VVV. However, owing to the large extinction towards the Galactic mid-plane, we are more likely to observe such objects in VVV than in our previous UKIDSS study.
We can estimate the apparent magnitude of Mira variables at the different Galactic longitudes covered in VVV by assuming that these objects are located at the Galactic disc edge \citep[R$_{GC}=14$~kpc,][]{2011Minniti}, and then considering other Galactocentric radii. At a given longitude, $l$, we derive A$_{V}$ as the mean value of the interstellar extinction found at latitudes $b$ between $-1^{\circ}$ and $1^{\circ}$. The interstellar extinction is taken from the \citet{1998Schlegel} reddening maps, and corrected following \citet{2011Schlafly}, i.e. $E(B-V)=0.86E(B-V)_{Schlegel}$. We then assume that extinction increases linearly with distance, at a rate A$_{V}/D_{edge}$ (mag kpc$^{-1}$), with $D_{edge}$ the distance to the Galactic disc edge at the corresponding $l$. We finally take the absolute magnitude as M$_{K}=-7.25$ \citep{2008Whitelock}. \begin{figure} \centering \resizebox{\columnwidth}{!}{\includegraphics{Miras_apmag_v2.pdf}}\\ \resizebox{\columnwidth}{!}{\includegraphics{Miras_extinct_v2.pdf}} \caption{(top) Apparent $K_{\rm s}$ magnitude, derived as explained in the text, for a Mira variable located at the Galactic disc edge (solid red line). The same value is shown for a Mira located at lower Galactic radii R$_{GC}=10$~kpc (blue line) and R$_{GC}=7$~kpc (black line). The magnitude where the number of detections for LPVs drops ($K_{\rm s}=11.5$ magnitudes) is marked by a dotted line. (bottom) K-band Galactic extinction column as a function of Galactic longitude.} \label{vvv:miras_apmag} \end{figure} Figure \ref{vvv:miras_apmag} shows the estimated apparent magnitudes of Mira variables at different $l$ and at varying R$_{GC}$. In the figure we also show the magnitude, $K_{\rm s}=11.5$, that marks the drop in the number of detections of these objects, as observed in the histograms of the $K_{\rm s}$ distribution (Fig. \ref{vvv:nonsfrs_prop}).
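The calculation above can be sketched as follows. The $A_{K}/A_{V}$ conversion factor is an assumed value, roughly appropriate for standard extinction laws, and is not quoted in the text.

```python
import math

def mira_apparent_k(d_kpc, av_total, d_edge_kpc,
                    m_abs=-7.25, ak_over_av=0.112):
    """Apparent K magnitude of a Mira at distance d_kpc along a sightline
    where A_V accumulates linearly with distance, reaching av_total at the
    disc edge distance d_edge_kpc.

    ak_over_av is an assumed A_K/A_V conversion, not specified in the
    text; m_abs = -7.25 is the Whitelock et al. absolute magnitude."""
    a_v = av_total * d_kpc / d_edge_kpc                  # linear growth of A_V
    a_k = ak_over_av * a_v                               # convert to K band
    dist_mod = 5.0 * math.log10(d_kpc * 1000.0 / 10.0)   # 5 log10(d / 10 pc)
    return m_abs + dist_mod + a_k
```

Evaluating this along each sightline, with $d$ set by the adopted Galactocentric radius, reproduces the kind of curves shown in the top panel of Fig. \ref{vvv:miras_apmag}.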
We can see that, as we move away from the Galactic center, a Mira variable would most likely saturate in VVV, especially at $l < 310^{\circ}$. This occurs because extinction is relatively larger towards the Galactic center (see bottom plot of Fig.~\ref{vvv:miras_apmag}) and because a star at R$_{GC}=14$~kpc is located farther from the observer as $l$ approaches $l=0^{\circ}$. We note that \citet{2011Ishihara} find that most AGB stars are found at $R_{GC}< 10$ kpc, so if we place a Mira variable at smaller Galactic radii (R$_{GC}=$7, 10 kpc), we see that it is less likely for such a star to show up in our sample. However, variable dust-enshrouded AGB stars, which undergo heavy mass loss, suffer heavy extinction due to their thick circumstellar envelopes, and thus are fainter than Mira variables \citep[AGB stars with optically thick envelopes are found to be $\sim$5 $K_{\rm s}$ magnitudes fainter than objects with optically thin envelopes in the work of][]{2006Jimenez} and less likely to saturate in VVV, even at large distances. Hence most AGB stars in our sample are probably dust-enshrouded objects. \begin{figure} \centering \resizebox{\columnwidth}{!}{\includegraphics{nonSFRs_longdist_v4.pdf}}\\ \resizebox{\columnwidth}{!}{\includegraphics{Miras_period_amp_v2.pdf}} \caption{(top) Overall Galactic longitude distribution of objects outside SFRs (black line). We also show the same distribution divided into LPVs (blue line), EBs and known objects (orange line), and other classes of variable stars (red line). (bottom) Period vs $K_{\rm s}$ amplitude for LPVs with a measured period.} \label{vvv:miras_periodamp} \end{figure} The observations confirm the trend expected from the analysis above. Figure \ref{vvv:miras_periodamp} shows that the number of LPVs increases as we approach the Galactic center.
In addition, when taking into account AGB stars with measured periods, we confirm that the majority of AGB stars show periods longer than 400 days and large amplitudes (see lower panel in Fig. \ref{vvv:miras_periodamp}), as expected for heavily obscured AGB stars. It is interesting to see in the same figure that variable objects with periods longer than 1500 days show lower amplitudes than expected for their long periods. This is similar to the trend observed in the variable OH/IR stars of \citet{2006Jimenez2}. According to the authors, these objects correspond to stars at the end of the AGB. We note that the apparent lack of high amplitude objects at longer periods could relate to the fact that more luminous (longer period) objects have smaller amplitudes expressed in magnitudes \citep[red supergiants often display $\Delta K<1$~mag, see e.g.][]{2008VanLoon}. This population of bright pulsating AGB stars can also explain the observed bimodality of the $K_{\rm s}$ distribution for the full sample of VVV high amplitude variable stars (see Fig. \ref{vvv:k_dist}). The peaks of the distribution occur at $K_{\rm s}\sim 11.8$ and $K_{\rm s}\sim 15.8$. The peak at the bright end is at the same magnitude as the peak for LPVs. \begin{figure} \centering \resizebox{\columnwidth}{!}{\includegraphics{vvv_kdist_v4.pdf}}\\ \caption{$K_{\rm s}$ distribution (from 2010 data) of the 816 VVV selected variable stars (solid black line). We show the same distribution for the overall sample of stars found to be likely associated with areas of star formation (blue line) and for the same sample but removing objects that show Mira-like light curves in Sect. \ref{vvv:sec_lcmorp} (red line). } \label{vvv:k_dist} \end{figure} When we only plot objects which are found to be likely associated with areas of star formation, the peak at the bright end becomes less evident (see blue histogram in Fig. \ref{vvv:k_dist}).
When we plot only SFR-associated sources that do not have LPV-like light curves (see Sect. \ref{vvv:sec_lcmorp}) the bimodality almost disappears, as shown by the red histogram in Fig. \ref{vvv:k_dist}. AGB stars are probably the main source of contamination in our search for eruptive YSOs in SFRs, especially dust-enshrouded AGB stars, which can have infrared colours resembling those of YSOs. Hence it is fortunate that we can remove most of this contamination by selecting against LPV-like light curves. \subsection{YSO source density} Our finding in Sect. \ref{vvv:sec_sfrass} that YSOs constitute about half of the detected population of high amplitude variables in VVV disc fields indicates that they represent the largest single population of high-amplitude infrared variables in the Galactic mid-plane, at least in the range $11 < K < 17$. We note that extragalactic studies of high-amplitude stellar variables are dominated by the more luminous AGB star population \citep[see e.g.][]{2015Javadi}. Our analysis only considered the VVV disc tiles with $\left|b\right|<1^{\circ}$; this amounts to 76 tiles, each covering 1.636 deg$^{2}$ of the sky. After allowing for the small overlaps between adjacent tiles, the total area covered in this part of the survey is 119.21 deg$^{2}$. Adopting a 50\% YSO fraction for the full sample of 816 variables implies a source density of 3.4 deg$^{-2}$. As noted in Sect. \ref{sec:vvvselec}, the stringent data quality cuts in our selection procedure excluded $\sim 50\%$ of high amplitude variables down to $K_{\rm s}=15.5$, and a high fraction at fainter magnitudes where completeness falls (see Fig. \ref{vvv:k_dist}). The corrected source density is therefore $\sim$7 deg$^{-2}$.
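The source-density arithmetic above can be verified directly, using only the numbers quoted in the text:

```python
n_variables = 816      # full VVV high-amplitude variable sample
yso_fraction = 0.5     # adopted YSO fraction (about half are in SFRs)
area_deg2 = 119.21     # 76 tiles, after removal of tile overlaps

# observed YSO surface density
raw_density = yso_fraction * n_variables / area_deg2   # ~3.4 deg^-2

# the quality cuts removed ~50% of high amplitude variables down to
# Ks = 15.5, so divide by the ~0.5 completeness to correct
corrected_density = raw_density / 0.5                  # ~7 deg^-2
```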
When considering the source density of high amplitude infrared variables in the UGPS, \citet{2014Contreras} argued that the observed source density under-estimates the actual source density due to three main effects: (1) with only two epochs of K-band data most high amplitude variables will be missed; (2) the source density rises towards the magnitude cut of K$<16$, indicating that many low luminosity PMS variables that are detected at distances of 1.4 to 2 kpc would be missed at larger distances; and (3) the dataset used in the UGPS search of \citeauthor{2014Contreras} excludes the mid-plane and is therefore strongly biased against SFRs. The UGPS YSO source density is estimated to reach 12.7 deg$^{-2}$ when correcting for these three factors. In the case of VVV, given the higher number of epochs obtained from this survey, and that this analysis is not biased against areas of star formation, the source density is only likely to be affected by item (2) of the UGPS analysis. Figure \ref{vvv:k_dist} shows the magnitude distribution of the VVV variables associated with SFRs, where we can see a similar behaviour to the UGPS results, with the density of sources rising steeply towards faint magnitudes. Contrary to the UGPS search, we do not have a hard magnitude cut in the VVV sample, which includes sources as faint as $K_{\rm s} \sim 17$. However, the number of sources decreases at $K_{\rm s} > 16$, so we estimate an effective magnitude detection limit of $K_{\rm s}=16.25$~magnitudes. This implies that if typical sources from VVV have similar characteristics to UGPS objects in Cygnus and Serpens ($K=14.8$, $d=1.4-2$~kpc), then we would not detect them at distances $d>3.32$ kpc. The complete sample of star forming complexes from \citet{2003Russeil} shows that $83\%$ of them are located beyond this distance.
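The distance limit and the resulting correction can be reproduced from the numbers quoted in the text; the 1.7 kpc typical distance adopted below is an assumed midpoint of the quoted 1.4-2 kpc range.

```python
import math

k_typical = 14.8      # typical UGPS variable in Cygnus and Serpens
d_typical_kpc = 1.7   # assumed midpoint of the quoted 1.4-2 kpc range
k_limit = 16.25       # effective VVV detection limit

# pure inverse-square dimming (extinction neglected):
# m2 - m1 = 5 log10(d2 / d1)
d_max_kpc = d_typical_kpc * 10 ** ((k_limit - k_typical) / 5.0)  # ~3.3 kpc

# 83% of the Russeil (2003) star forming complexes lie beyond this
# distance, so the completeness-corrected density of ~7 deg^-2 scales up
true_density = 7.0 / (1.0 - 0.83)                                # ~41 deg^-2
```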
Correcting for this factor we then estimate a true source density of 41 deg$^{-2}$, though this figure does not include YSOs with low mass and luminosity that are too faint to be sampled by VVV due to the absence of nearby SFRs in the survey area. This figure of 41 deg$^{-2}$ is three times larger than the one estimated from the UGPS analysis of \citet{2014Contreras} (12.7 deg$^{-2}$). Two effects can account for the larger source density in VVV than in UGPS. (1) High luminosity YSOs are less common, but they can be observed at larger distances. The UGPS study would not find such objects at large distances because the available dataset did not cover the mid-plane of the Galactic disc, in which all distant SFRs are located due to their small scale height. Since VVV does cover the mid-plane we are able to detect these rare higher luminosity YSOs. This seems to be supported by the slightly larger distances established for members of the spectroscopic subsample in Paper II. (2) In the UGPS study, most (23/29) of the variables in SFRs were located in just 2 large SFRs: the Serpens OB2 association and portions of Cygnus X. The much smaller size of the UGPS sample (in number of SFRs and number of variables) meant that there was considerable statistical uncertainty in the area-averaged source density. Moreover, the incidence of high amplitude variability is greater at the earlier stages of YSO evolution (see Sect. \ref{vvv:sec_alphaclass}) so the numbers in the UGPS study may have been reduced by a relative lack of YSOs at these stages in the two large SFRs surveyed. The estimated highly variable YSO source density remains much larger than that estimated for Mira variables in \citet{2014Contreras}, indicating a higher average space density for the variable YSOs. The observed variables in SFRs also outnumber the EBs and unclassified variables in the magnitude range of this study.
However, we are likely to miss a large part of the population of high amplitude EBs due to the sparse time sampling of VVV. In Appendix \ref{appen:EBs} we attempt to calculate the source density and space density of high amplitude EBs from the OGLE-III Galactic disc sample of \citet{2013Pietrukowicz}. In this we are aided by a recent analysis of the physical properties of the large sample of {\it Kepler} eclipsing binaries \citep{2014Armstrong}, which indicates that EBs with high amplitudes in the VVV $K_{\rm s}$ and OGLE $I$ passbands are dominated by systems with F to G-type primaries. We use simple calculations to show that while EBs can have very high amplitudes at optical wavelengths, the eclipse depth should not exceed 1.6 mag in $K_{\rm s}$. Similarly, we find that eclipse depths should not exceed 3 mag in $I$. These results are supported by the VVV and OGLE-III datasets \citep{2013Pietrukowicz}, in which the distribution of EB amplitudes falls to zero by these limits. YSOs with $\Delta K_{\rm s} > 1.6$~mag are very numerous in our sample so we conclude that high amplitude YSOs greatly outnumber EBs above this limit. Below $\Delta K_{\rm s} = 1.6$~mag it is harder to reach a firm conclusion (see Appendix \ref{appen:EBs}). The space densities of EBs and those YSOs massive enough to be sampled by VVV may be comparable at $1< \Delta K_{\rm s} < 1.6$~mag. High amplitude YSOs are likely to be more numerous if the variability extends down to the peak of the Initial Mass Function at low masses, given that high amplitude EB systems contain a giant with mass of order 1~M$_{\odot}$. \section{Analysis of variables in star formation regions}\label{vvv:sec_physmec} The results presented here concern YSO variability in the $K_{\rm s}$ bandpass.
At these wavelengths, variability in typical YSOs is produced by physical mechanisms (or a combination of them) affecting the stellar photosphere, the star-disc interface, the inner edge of the dust disc as well as spatial scales beyond 1 au \citep[see e.g.][]{2015Rice}. These mechanisms include cold or hot spots on the stellar photosphere \citep[e.g.][]{2012Scholz}, changes in disc parameters such as the location of the inner disc boundary, variable disc inclination and changes in the accretion rate \citep[as shown by][]{1997Meyer}. Variable extinction along the line of sight can also be responsible for the observed changes in the brightness of YSOs. Dust clumps that screen the stellar light have been invoked to explain the variability observed in Herbig Ae/Be stars and early-type Classical T Tauri stars \citep[a group also known as UX Ori stars, see e.g.][]{1999Herbst,2002Eiroa}. In other scenarios variable extinction can be produced by a warped inner disc, dust that is being uplifted at larger radii by a centrifugally driven wind, azimuthal disc asymmetry produced by the interaction of a planetary mass companion embedded within the disc or by occultations in a binary system with a circumbinary disc \citep[see e.g.,][and references therein]{2013Romanova,2012Bans,2013Bouvier, 2014Windemuth}. Finally, sudden increases in the accretion rate (of up to 3 orders of magnitude) explain the large changes observed in eruptive variable YSOs. The variability in these systems traces processes occurring at the inner disc \citep[in EXors, see][]{2012Loren} or at larger spatial scales beyond 1 au, such as instabilities leading to outburst events \citep[in FUors, see e.g.][]{2014Audard}. The amplitude of the variability induced by most of these mechanisms is not expected to be larger than $\Delta K\sim1$~magnitude. Table 6 of \citet{2013Wolk} shows the expected amplitude of the $K$-band variability that would be produced by these different mechanisms.
Cold and hot spots, and changes in the size of the inner disc hole are not expected to show $\Delta K$ larger than 0.75 magnitudes. We do note that the variability produced by hot spots from accretion depends on the temperature of the spot and the percentage of the photosphere that is covered by such spots, thus sufficiently hot spots can produce larger changes in the magnitude of the system. The range in $\Delta$K from variable extinction is effectively limitless as it depends on the amount of dust that obscures the star. Large changes ($\Delta$K $>$~1 mag) have been observed from variable extinction in YSOs, e.g. AA Tau, V582 Mon \citep[][]{2013Bouvier, 2014Windemuth}. However, variable extinction can be diagnosed from colour variability (see e.g. Sect. \ref{vvv:sec_nirchange}). \citet{2013Wolk} also estimate that a change in the accretion rate of a class II object of $\log (\dot{\mathrm{M}}/{\rm M}_{\odot}~{\rm yr}^{-1})$ from $-8.5$ to $-7$ yields $\Delta$K$\sim$0.75 magnitudes. Thus, larger changes, as observed in eruptive variables, will produce larger amplitudes. Given all of the above, it is reasonable to expect variability in our YSO sample to be dominated by accretion-related variability and/or events of obscuration by circumstellar dust. \subsection{Light curve morphologies}\label{vvv:sec_lcmorp} We have visually inspected the light curves of our 530 SFR-associated variables in order to gain insight into the physical mechanism causing the brightness variations. In addition, we used {\sc pdm} in {\sc IRAF} and {\sc LombScargleFast} in {\sc GATSPY} to search for periodicity in the light curves of our objects. We stress that this is a simple and preliminary classification that is highly influenced by the sparse sampling of VVV. A more detailed study is planned for the future, with improved precision by applying the differential photometry method of \citet{2014Huckvale} to the VVV images. We have divided the morphologies into the following classifications.
\begin{figure} \centering \resizebox{\columnwidth}{!}{\includegraphics{YSOslpv_period_amp_v4.pdf}}\\ \resizebox{\columnwidth}{!}{\includegraphics{YSOslpv_period_mag_v5.pdf}} \caption{(top) $K_{\rm s}$ amplitude vs period for stars in SFRs with long-term periodic variability. LPVs that show Mira-like light curves are shown as red plus signs while other sources are shown as blue circles. (bottom) $K_{\rm s}$ magnitude (2010) vs period for the same sample of stars.} \label{vvv:ysos_lpv_periodamp} \end{figure} \begin{itemize} \item{{\bf Long-term Periodic Variables.}} Defined as objects showing periodic variability with $P>100$~days. This limit is adopted for consistency with the limit used in the analysis of objects outside of SFRs, with the benefit that contamination by long period AGB stars will be confined to this group. We measure periods for most of these objects, albeit with some difficulty in phase-folding the data for many of them. In this subsample we find 154 stars, representing 29\% of objects spatially associated with SFRs. In Sect. \ref{sec:cont} we contended that field high-amplitude infrared variables with periodic light curves ($P>100$~days) are very likely dust-enshrouded AGB stars, these being identifiable by their smooth, approximately sinusoidal light curves. We estimated that $\sim$27\% of the 530 SFR-associated variables would be non-YSOs and that up to 45\% of these would be LPVs, implying that this subsample may contain $\sim$64 dusty Mira variables. Visual inspection of the 154 light curves indicates that while some have a smooth sinusoidal morphology (after allowing for long term trends due to variable extinction in the expanding circumstellar dust shell), others display short timescale scatter superimposed on the high amplitude long term periodicity. In Fig. \ref{vvv:lpvmorph}, we show the examples of objects VVVv309 and VVVv411.
The short timescale variability in their light curves is clearly not consistent with the typical light curves of Mira variables, and their periods of 143.95 and 190.6 days, respectively, are shorter than those of the dusty Miras detected outside SFRs (see Fig. \ref{vvv:miras_periodamp}). This short timescale scatter is typical of that observed in normal YSOs due to a combination of hot spots, cold spots, and small variations in accretion rate or extinction, so it is reasonable to think that most of the long period variables with short timescale scatter are in fact YSOs. \begin{figure*} \centering \resizebox{0.44\textwidth}{!}{\includegraphics{Example_lpvs_a.pdf}} \resizebox{0.45\textwidth}{!}{\includegraphics{Example_lpvs_b.pdf}}\\ \resizebox{0.45\textwidth}{!}{\includegraphics{d068sfr_v2.pdf}} \resizebox{0.45\textwidth}{!}{\includegraphics{d074sfr.pdf}} \caption{(top left) Examples of $K_{\rm s}$ light curves of the long-period variables VVVv309 and VVVv411, which are found in areas of star formation. (top right) Phased light curves for the same objects. (bottom) 10\arcmin$\times$10\arcmin\ WISE false colour images (blue=3.5 $\mu$m, green=4.6 $\mu$m, red=12 $\mu$m) centred on VVVv309 (left) and VVVv411 (right). In both images the location of the variable star is marked by a ring. VVVv309 is 114\arcsec\ from the HII region GRS G337.90$-$00.50 \citep[see e.g.][]{2011Culverhouse}. The 12$\mu$m WISE image of VVVv309 saturates at the centre of the HII region creating the blue/green ``inset'' in the false colour image. VVVv411 is located 104\arcsec\ from the infrared bubble [CPA2006] S10 \citep{2012Simpson} as well as other indicators of ongoing star formation.} \label{vvv:lpvmorph} \end{figure*} To support this interpretation, Fig. \ref{vvv:ysos_lpv_periodamp} shows the period vs $\Delta K_{\rm s}$ and period vs $K_{\rm s}$ distributions, for objects where we are able to measure a period.
The period vs $\Delta K_{\rm s}$ distribution is similar to that observed in LPVs outside SFRs (Fig. \ref{vvv:miras_periodamp}) except that there is a larger number of ``long-term'' periodic variables found with periods, $100<P< 350$ days. The 65 blue points are the objects with short timescale scatter and the red points are the remaining 89, categorised by careful inspection of the light curves of the 154 long-term periodic variables in SFRs. The blue points clearly dominate the group with $P <350$~days and they also have a distinctly fainter distribution of $K_{\rm s}$ magnitudes, similar to that shown in the red histogram in Fig.~\ref{vvv:k_dist}, which represents all objects in SFRs except those with Mira-like light curves. As expected, the red points with Mira-like light curves typically have $K_{\rm s}\sim 12$, similar to the LPV distribution plotted in Fig. \ref{vvv:nonsfrs_prop}. We conclude that inspection of the light curves can separate the evolved star population of LPVs from the YSOs in SFRs with fair success, though we caution that this is an imperfect and somewhat subjective process that can be influenced by outlying data points and our limited knowledge of the time domain behaviour of circumstellar extinction in dusty Mira systems. The limitations are demonstrated by the presence of a number of blue points with $K_{\rm s}\sim 12$ and $P>350$~days in the lower panel of Fig. \ref{vvv:ysos_lpv_periodamp} and a hint of bimodality even in the ``decontaminated'' magnitude distribution in Fig. \ref{vvv:k_dist} (red histogram). In the subsequent discussion of YSOs from our sample we only include the 65 objects with short timescale scatter (called LPV-yso) and assume the other 89 sources are dusty AGB stars or other types of evolved star (or LPV-Mira). This decontamination of AGB stars reduces the SFR-associated sample to 441 objects. The long term periodic YSOs represent 15\% of this sample.
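The bookkeeping for this decontamination step can be checked directly; all of the numbers are those quoted above.

```python
sfr_sample = 530          # variables spatially associated with SFRs
long_term_periodic = 154  # P > 100 d subsample
lpv_mira = 89             # Mira-like light curves (evolved stars, removed)

lpv_yso = long_term_periodic - lpv_mira       # 65 retained as likely YSOs
decontaminated = sfr_sample - lpv_mira        # 441 objects remain
lpv_yso_fraction = lpv_yso / decontaminated   # ~15% of the clean sample
```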
Periods $P>15$~days are longer than the stellar rotation period of YSOs, or the orbital period of their inner discs \citep{2015Rice}. Some YSOs have been observed to show variability with periods even longer than 100 days. WL~4 in $\rho$ Oph shows periodic variability with $P=130.87$ days \citep{2008Plavchan}, which can be explained by obscuration of the components of a binary system by a circumbinary disc. The $K$-band amplitude of the variability in that system is somewhat less than 1 magnitude. Nevertheless, a similar mechanism might be responsible for the variations in some of our objects. \citet{2012Hodapp} show that variable star V371 Ser, a class I object driving an H$_{2}$ outflow, has a periodic light curve with $P=543$~days. The authors suggest that variability arises from variable accretion modulated by a binary companion. In view of this, the variability in some of the long term periodic variables might be driven by accretion and we discuss this in Paper II, based on spectroscopic evidence for a sub-sample of them. \begin{figure} \centering \resizebox{\columnwidth}{!}{\includegraphics{YSOsspv_period_amp_v2.pdf}} \caption{$K_{\rm s}$ amplitude vs period for stars with short-term variability and with a measured period.} \label{vvv:ysos_spv_periodamp} \end{figure} \item{{\bf Short-term Variability.}} This group comprises objects that either have periodic variability and measured periods, $P<100$ days, (75 objects) or else have light curves that appear to vary continuously over short timescales ($t<100$~days) but not with an apparent period (87 objects). Their light curves do not resemble those of detached EBs because they vary continuously, and they cannot be contact binaries (W UMa variables) because their periods are typically longer than 1 day.
For objects in this classification that have measured periods, we observe a broad distribution from 1 to 100 days and the amplitudes are in the range $\Delta K_{\rm s}$=1 to 2 magnitudes (see Fig. \ref{vvv:ysos_spv_periodamp}). If we join together the long-term periodic variables and the short-term variables (STVs) with measured periods, we find that sources with periods $P>100$~days show higher amplitudes, on average, and sources with $P>600$~days have redder SEDs (larger values of the spectral index $\alpha$). There are no clear gaps in the period distribution, so the 100~day division between the two groups that we adopted to aid decontamination is arbitrary. We find 162 stars in the STV group, which represents 37\% of the decontaminated SFR-associated sample. High-amplitude periodic variability has been observed in YSOs over a wide range of periods. RWA1 and RWA26 in Cygnus OB7 \citep{2013Wolk} vary with periods of 9.11 and 5.8 days respectively. The variability has been explained as arising from extinction and inner disc changes. As mentioned before, variability with $P>15$ days is not expected to arise from the stellar photosphere or changes in the inner disc of YSOs. Instead, this could be related to obscuration events from a circumbinary disc, such as in V582 Mon \citep{2014Windemuth}, and YSOs ONCvar 149 and 479 in \citet{2015Rice}. Variable accretion has been invoked to explain the observed periodic variability ($P\sim30$~d) of L1634 IRS7 \citep{2015Hodapp}. The shorter periods within this group may indicate rotational modulation by spots in objects with amplitudes not far above 1 magnitude (see below). \begin{figure} \centering \resizebox{0.45\textwidth}{!}{\includegraphics{Example_spvs_v3.pdf}}\\ \resizebox{0.45\textwidth}{!}{\includegraphics{Example_aper_v4.pdf}} \caption{Examples of $K_{\rm s}$ light curves for the different classifications as explained in the text.
(top) Phased light curves of the short-term variable star VVVv683 (with a measured period) and the eclipsing binary VVVv350. (bottom) Light curves of the short-term variable star VVVv169 (without a measured period), the fader VVVv243 and the dipper VVVv504.} \label{vvv:lpvmorph2} \end{figure} \begin{figure} \centering \resizebox{0.48\textwidth}{!}{\includegraphics{Example_lcerup_v4.pdf}} \caption{Examples of $K_{\rm s}$ light curves for different objects in the eruptive classification as explained in the text. From top to bottom we show objects VVVv118, VVVv815, VVVv322 and VVVv270.} \label{vvv:erupmorph} \end{figure} \item{{\bf Aperiodic Long-term variability.}} This category can be divided into three different subclasses: a) Faders. Here the light curves show a continuous decline in brightness or show a constant magnitude for the first epochs followed by a sudden drop in brightness that lasts for a long time ($\geq 1$~year), continuing until the end of the time series in 2014. This type of object might be related to either stars going back to quiescent states after an outburst or objects dominated by long-term extinction events similar to the long-lasting fading event in AA Tau \citep{2013Bouvier}, or some of the faders in \citet{2013Findeisen}. b) Objects that show long-lasting fading events and then return to their normal brightness (such as VVVv504 in Fig. \ref{vvv:lpvmorph2}), which we refer to as dippers. These might also be related to extinction events. Examples of objects in groups (a) and (b) can be seen in Fig. \ref{vvv:lpvmorph2}. Group (c) contains sources with outbursts, typically of long duration ($\geq 1$~yr). In a very small number of objects the outburst duration appears to be much shorter, on the order of weeks. The increases in brightness also occur only once or, at most, twice during the light curve of the object, thus not resembling the light curves of objects in the STV category.
An exception is VVVv118, which shows four brief rises on timescales of weeks. The light curves in this category typically have a monotonic rise of 1 magnitude or more, though sometimes a lower level scatter is present atop the rising trend. In a small number of cases the rise in the light curve is poorly sampled, starting at or before the beginning of the time series, but the subsequent drop exceeds 1 magnitude. Figure \ref{vvv:erupmorph} shows four examples of objects falling in the eruptive classification. The examples have been selected in order to illustrate the different temporal behaviour observed in objects belonging to this class. As we have already mentioned, VVVv118 shows multiple short, high-amplitude rises. In general objects show outbursts which last between 1 and 4 years (see VVVv815 and VVVv322). We also detect a few cases where the outburst duration cannot be measured as it extends beyond the end of the 2014 data (e.g. VVVv270). When comparing with the behaviour of known classes of eruptive variables, VVVv118 resembles the EXors and VVVv270 could potentially be an FUor object (based only on photometric data). However, most of the objects have outburst durations that are in between the expected durations for EXors and FUors. Considering the outburst duration of the known subclasses of young eruptive variables, we are likely to miss detection of FUor outbursts if they went into outburst prior to 2010. In the case of EXors, which have outbursts that last from a few weeks to several months, we would expect to detect more of these objects given the time baseline of VVV. However, our results show a lack of classical EXors, which could be a real feature or could be related to the sparse VVV sampling. Thus, we need to test our sensitivity to short, EXor-like eruptions. We simulate outbursts with timescales from $\sim$2 months to $\sim$3 yr. First we generate a very rough approximation of an eruptive light curve with outburst duration, $T_{o}$.
The light curve consists of: 1) A quiescent phase of constant magnitude that lasts until the beginning of the outburst, which is set randomly at a point within the 2010-2012 period (between 0 and 1000 d). 2) A rise which is set arbitrarily to have a rate of 0.15 mag/day, lasting $T_{rise}=10$~d until reaching an outburst amplitude of 1.5 mag, which is a little below the median for the VVV eruptive variable candidates. 3) A plateau phase with a constant magnitude set to the peak of the outburst. This phase lasts for $T_{o}-T_{rise}-T_{decline}=T_{o}-20$~d. 4) The decline, which lasts 10 days and has a rate of 0.15 mag/day. Finally, 5) A second quiescent phase. To every point in the light curve we add a randomly generated scatter of $\pm$0.2~mag. Once the light curve is generated, we measure the magnitude of the synthetic object at the observation dates of a particular VVV tile. If the synthetic object shows $\Delta K_{\rm s}\geq1$~mag then it is marked as a detection. This procedure is repeated 1000 times for each outburst duration (which is set to be between 30 and 900 days). We also repeat this procedure for four different VVV tiles. The simulation shows that the number of detections is very similar ($\sim 80\%$) for $T_{o}>7$~months and declines slowly as $T_{o}$ is reduced, falling by a factor of 2 for $T_{o}=2$ months. However, this decline is not enough to explain the apparent lack of eruptive variables with EXor-like outbursts in our sample. We conclude that the longer (1-4 yr) durations that we observe are typical values for infrared eruptive variables, rather than a sampling effect. The characteristics of our eruptive sample (see Paper II) agree with recent discoveries of eruptive variables that show a mixture of characteristics between the known subclasses of eruptive variables \citep[see e.g.][]{2009Aspin}.
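The simulation described above can be sketched roughly as follows. The quiescent magnitude and the illustrative epoch grid are assumptions; a real test would use the observation dates of a particular VVV tile.

```python
import random

def synthetic_mag(t, t_start, t_out, amp=1.5, rate=0.15, quiescent=14.0):
    """Magnitude at time t (days) for the rough outburst model in the
    text: quiescence, a rise at `rate` mag/day up to amplitude `amp`,
    a plateau, a symmetric decline, then quiescence again.
    `quiescent` = 14.0 mag is an assumed baseline, not from the paper."""
    t_rise = amp / rate                       # 10 d for amp=1.5, rate=0.15
    if t < t_start:                           # first quiescent phase
        m = quiescent
    elif t < t_start + t_rise:                # rising branch
        m = quiescent - rate * (t - t_start)
    elif t < t_start + t_out - t_rise:        # plateau at peak brightness
        m = quiescent - amp
    elif t < t_start + t_out:                 # declining branch
        m = quiescent - amp + rate * (t - (t_start + t_out - t_rise))
    else:                                     # second quiescent phase
        m = quiescent
    return m + random.uniform(-0.2, 0.2)      # per-point scatter

def detection_fraction(epochs, t_out, n_trials=1000):
    """Fraction of trials in which a synthetic outburst of duration
    t_out days yields Delta Ks >= 1 mag at the given epochs."""
    hits = 0
    for _ in range(n_trials):
        t_start = random.uniform(0.0, 1000.0)  # onset within 2010-2012
        mags = [synthetic_mag(t, t_start, t_out) for t in epochs]
        if max(mags) - min(mags) >= 1.0:
            hits += 1
    return hits / n_trials
```

Running `detection_fraction` over a grid of $T_{o}$ values, with `epochs` replaced by the real observation dates of each tile, gives the recovery rates quoted in the text.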
We note that classification of our sample into the known subclasses becomes even more problematic when taking spectroscopic characteristics into account: for example, VVVv270, the potential FUor from its light curve, shows an emission-line spectrum, while VVVv322 shows a classical FUor near-infrared spectrum. In Paper II we propose a new class of eruptive variable to describe these intermediate eruptive YSOs. Sources classified as eruptive are very likely to be eruptive variables, in which the changes are explained by an increase of the accretion rate onto the star due to instabilities in the discs of YSOs \citep[see e.g.][]{2014Audard}. We find 39 objects in subgroup (a), 45 in (b) and 106 in (c). The whole class of faders/bursts represents 43$\%$ of the likely YSO sample. \item{{\bf Eclipsing Binaries.}} We find 24 objects with this light curve morphology, representing 5$\%$ of the sample. We are able to measure a possible period in 15 of them. The remaining 9 objects are left with this classification given the resemblance of their light curves to the objects with measured periods. We expect that a number of them will be field EBs contaminating our YSO sample. However, inspection of the 2 to 23~$\mu$m spectral index for each object, $\alpha$ (see Fig. \ref{vvv:gc2}), indicates that 12 objects are classified as either class II or flat-spectrum sources. If these are in fact YSOs, they would represent a significant discovery, as YSO EBs are invaluable anchors for stellar evolutionary models, which generally lack empirical data on stellar radii. Figure \ref{vvv:ebyso} shows the light curve and location of one candidate YSO EB, VVVv317, with P$=6.85$~d and $\alpha=-1.57$. The spectral index places it at the edge of the classification of class II YSOs.
\begin{figure} \centering \resizebox{0.95\columnwidth}{!}{\includegraphics{ebyso_glimpse_v2.pdf}}\\ \resizebox{\columnwidth}{!}{\includegraphics{ebyso_lc_v2.pdf}} \caption{(top) False colour image (blue=3.6 $\mu$m, green=4.5 $\mu$m and red=8 $\mu$m) from GLIMPSE, showing the location of candidate YSO EB VVVv317 (blue ring). (bottom) Phased light curve of the object.} \label{vvv:ebyso} \end{figure} \end{itemize} In Fig. \ref{vvv:classlc_dkdist} we compare the amplitude distributions ($K_{\rm s,max}-K_{\rm s,min}$) of the different categories of variable YSO. We can see that the EBs and STVs typically have the smallest amplitudes (means of 1.18 mag and 1.33 mag, respectively), whereas the faders and eruptive variables have the highest amplitudes (means of 1.95 mag and 1.72 mag, and medians of 1.75 mag and 1.61 mag respectively). The dippers and long-term periodic variables have mean amplitudes of 1.64 mag and 1.57 mag respectively, which are similar to the mean amplitude of 1.56 mag for the full sample. The substantial number of STVs with amplitudes only a little over 1 magnitude is consistent with our suggestion that, in the shorter-period variables in this category, variability may be explained by rotational modulation of dark or bright spots on the photosphere. The relatively high amplitudes of the eruptive variables are not unexpected, whilst the high amplitudes of the faders could be explained if some of these objects are eruptive variables returning to quiescent states. \begin{figure} \centering \resizebox{0.95\columnwidth}{!}{\includegraphics{dkdist_lpv_spv_eb_v2.pdf}}\\ \resizebox{0.95\columnwidth}{!}{\includegraphics{dkdist_aper_v2.pdf}} \caption{$\Delta K_{\rm s}$ distribution for the different light curve morphology YSO classes. (top) Distribution for LPVs (black line), STVs (red line) and EBs (blue line).
(bottom) Distribution for aperiodic variables, faders (black line), dippers (blue line) and eruptive objects (red line).} \label{vvv:classlc_dkdist} \end{figure} \subsection{Near infrared colour variability}\label{vvv:sec_nirchange} Considering the various classes of light curve defined above for SFR-associated variables, it is reasonable to expect that extinction causes the variability in dippers and perhaps some of the STVs. Extinction variability can also be observed in eruptive objects, although we do not expect it to be the main cause of variability in this class. It is less clear what to expect for faders and long-term periodic variables. We can test this by looking at colour variability data. The VVV survey was initially designed with only 1 epoch of contemporaneous JHK$_{\rm s}$ colour data, but a second such epoch was added to the programme for observation in 2015, both to benefit the YSO variability science and to help to understand VVV variables of unknown nature. We note that many objects in the SFR sample are not detected in the J and H bands (see Sect. \ref{vvv:sec_alphaclass}), making a colour comparison impossible. Also, in many objects the two epochs do not span a large fraction of the full range of magnitudes in the light curve. For example, in many eruptive objects we do not have a direct comparison between quiescent and outburst states. Nevertheless, comparison of the change in colour vs magnitude still provides some valuable information on the mechanisms driving variability, particularly for those sources in which source magnitudes differ substantially at the two epochs. Following a similar procedure to \cite{2012Loren} and \citet{2014Antoniucci} we compare the change in colour, $(H-K_{\rm s})$, vs magnitude, $H$, between 2010 and 2015 for the different classes of YSOs (see Fig. \ref{vvv:ysoext}). In the figure we see that in most cases the changes in both colour and magnitude are small, thus objects cluster around the origin.
This is especially true of EBs. The overall distribution of eruptive objects, dippers, LPVs and STVs, on the other hand, seems to be elongated along an axis passing through the ``bluer when brighter'' and ``redder when fainter'' quadrants. This agrees with the behaviour expected from changes due to accretion or extinction in YSOs and resembles the near-infrared variability observed in EXors \citep{2009Loren,2012Loren}, the classical T Tauri sample of \citet{2012Loren}, and the mid-infrared variability of candidate EXors from \citet{2014Antoniucci}. It is interesting to see that many objects classified as faders fall in the ``bluer when fading'' quadrant. This behaviour is still consistent with YSO variability due to changes in disc parameters described by \citet{1997Meyer} and observed in some Cygnus OB7 YSOs \citep[see e.g.][]{2013Wolk}. The different behaviour might also be caused by the different geometry (inclination) of the system with respect to the observer. Scattering of light by the circumstellar disc or envelope may also be contributing to the sources that are bluer when fainter. Figure \ref{vvv:ysoext} also shows the observed change in both $(J-H)$ and $(H-K_{\rm s})$ colours for YSOs detected in the three filters in both epochs. There we also plot the expected change if the variability occurs parallel to the reddening line (independent of the direction of the change) as well as a linear fit to the different YSO classes. We would expect that variability similar to that observed in EXors \citep[see e.g. fig. 1 in][]{2012Loren} would show a behaviour that is not consistent with reddening. It is hard to say much about EBs as they do not show much variability. The overall change in STVs, faders and dippers appears to be different from the reddening path, although it appears to depend on the selection of objects from those samples.
The path followed by LPVs is also different from the reddening line, but it does suggest a behaviour more similar to extinction than seen in the other classes. Eruptive variables appear to follow a very different path from reddening. We would expect such behaviour if the variability is similar to that observed in EXors \citep[see e.g. fig. 1 in][]{2012Loren}. Given that variable extinction and eruptive variability are the only known mechanisms to produce variability well in excess of 1~mag, the fact that colour variability disfavours the former in eruptive systems suggests that the latter is more likely. We have checked the individual $(J-H)$ vs $H$, $(H-K_{\rm s})$ vs $K_{\rm s}$ and $(J-H)$ vs $(H-K_{\rm s})$ colour-magnitude (CMD) and colour-colour diagrams for the 15 eruptive variables that showed $\Delta K_{\rm s}>0.75$ mag between the two multi-wavelength epochs (this representing a significant fraction of the total amplitude in most systems). Of the 15 objects, 10 do not show changes consistent with extinction (e.g. they are bluer when fainter or show negligible colour change) and 5 were found to show variability approximately following the reddening vector. However, the colour behaviour in these 5 objects does not contradict the idea that accretion is the mechanism driving variability because: 1) we are not directly comparing quiescent vs outburst states, as the two near-infrared epochs cover random points in the light curve; and 2) as previously mentioned, extinction does play a role in outburst variability. For example, the near-infrared colour variation of V1647 Ori follows the reddening path in fig. 13 of \citet{2008Aspin}. Extinction might also be involved in the observed variability of the recent eruptive object V899 Mon \citep{2015Ninan}.
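The reddening-consistency check described above can be sketched as a comparison between the direction of a two-epoch colour change and the interstellar reddening vector. The colour-excess ratio $E(J-H)/E(H-K_{\rm s})\approx1.9$ and the 20-degree tolerance used here are illustrative assumptions (the exact slope depends on the adopted extinction law), and the function names are hypothetical.

```python
import math

def reddening_angle_deg(d_jh, d_hk, slope=1.9):
    """Angle (degrees) between the observed two-epoch colour-change
    vector (d(H-Ks), d(J-H)) and the reddening vector (1, slope).
    `slope` = E(J-H)/E(H-Ks), assumed ~1.9 (extinction-law dependent).
    Direction is ignored: reddening and de-reddening count the same."""
    red_hk, red_jh = 1.0, slope
    norm = math.hypot(red_hk, red_jh) * math.hypot(d_hk, d_jh)
    if norm == 0.0:
        return 0.0  # no measurable colour change
    cosang = abs(red_hk * d_hk + red_jh * d_jh) / norm
    return math.degrees(math.acos(min(1.0, cosang)))

def consistent_with_extinction(d_jh, d_hk, tol_deg=20.0):
    """Crude test: is the colour change within tol_deg of the
    reddening line?"""
    return reddening_angle_deg(d_jh, d_hk) <= tol_deg
```

A change of $(\Delta(H-K_{\rm s}),\Delta(J-H))=(0.10,0.19)$ lies along the assumed vector and would pass the test, whereas a "bluer when fainter" change such as $(0.2,-0.2)$ would not.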
The same analysis of individual CMD and colour-colour diagrams for the remaining classes shows that 3/7 LPV-YSOs, 5/9 STVs, 1/4 dippers and 3/11 faders have colour changes consistent with variable extinction (again considering only systems with $\Delta K_{\rm s}>0.75$ between the two multi-colour epochs). No results can be derived for EBs as none of them show changes larger than 0.75 magnitudes between the two epochs. It is interesting to see that the changes in the majority of faders are not consistent with extinction. This supports the idea that variability in many of the objects in this class could be related to accretion changes. In Appendix \ref{apenn:mid-ir} we briefly summarise the colour and magnitude changes detected by the multi-epoch photometry from the WISE satellite. We note that this adds little to the preceding discussion of near-infrared colour changes, though large mid-infrared variability is observed in a minority of sources where the satellite happened to sample both a peak and a trough in the light curve. \begin{figure*} \centering \resizebox{0.8\textwidth}{!}{\includegraphics{Graf_hk_ysoext_v3.pdf}} \resizebox{0.8\textwidth}{!}{\includegraphics{Graf_jhk_ysoext_v3.pdf}} \caption{$\Delta (H-K_{\rm s})$ vs $\Delta H$ (top) and $\Delta (J-H)$ vs $\Delta (H-K_{\rm s})$ (bottom) for YSOs with an available second $JHK_{\rm s}$ epoch from VVV. In the plots we mark the different classes from light curve morphology. In the bottom plot we mark the expected changes which occur parallel to the reddening vector (dotted line) as well as the best fit to the observed change (dot-dashed line). In the bottom right of the upper panel we mark four distinct regions as explained in the text.} \label{vvv:ysoext} \end{figure*} \subsection{Variability trends with SED class}\label{vvv:sec_alphaclass} In order to study the possible evolutionary stage of the variable stars in SFRs, we use the slope of the SEDs of the stars between $2 < \lambda < 24~\mu$m.
Following \citet{1987Lada}, we define the parameter $\alpha$ as $\alpha=d(\log (\lambda F_{\lambda}))/d(\log (\lambda))$. The value of $\alpha$ is determined from a linear fit to SED points between $2 < \lambda < 24~\mu$m. Objects are then classified according to their value of $\alpha$ following \citet{1994Greene}, also shown in Table \ref{table:vvvclass}. We note that this class definition might not necessarily relate to the actual evolutionary stage of the object. As stated in e.g. \citet{2006Robitaille}, parameters such as inclination or stellar temperature can affect the shape of the SED at the wavelengths used to classify YSOs. We have removed numerous objects whose light curves showed Mira-like characteristics, but there may still be some contamination by non-YSOs amongst the remaining objects projected close to an SFR. We derive $\alpha$ using the photometry arising from VVV and WISE, given that these were taken in the same year (2010). When the objects are not detected in WISE, we use {\it Spitzer}/GLIMPSE photometry. The use of the latter is more likely to cause errors in the estimation of $\alpha$ due to the time difference between {\it Spitzer}/GLIMPSE and VVV measurements, but {\it Spitzer}/GLIMPSE benefits from higher spatial resolution. We find that if we use {\it Spitzer} instead of WISE, the value of $\alpha$ differs by a random offset of 0.1-0.2 for the majority of the sample detected in both surveys. The numbers of objects belonging to the different classes are shown in Table \ref{table:vvvclass}. We find that the majority (67\%) of objects in our sample are either class I or flat-spectrum sources (45\% and 22\%, respectively). Objects belonging to different classes show some differences in their global properties. Figure \ref{vvv:gc} shows the near-infrared colours of objects in SFRs. As expected, the vast majority of objects show colours consistent with them being YSOs.
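As a minimal sketch of the spectral-index computation just described, assuming fluxes supplied as $F_{\nu}$ (so that $\lambda F_{\lambda}=\nu F_{\nu}\propto F_{\nu}/\lambda$) and using the class boundaries listed in the classification table; the function names and units are illustrative assumptions:

```python
import numpy as np

def spectral_index(wavelengths_um, fluxes_jy):
    """alpha = d log(lambda F_lambda) / d log(lambda), from a linear
    least-squares fit in log-log space over the 2-24 micron SED points
    (per Lada 1987). With F_nu inputs, lambda F_lambda ~ F_nu / lambda."""
    lam = np.asarray(wavelengths_um, dtype=float)
    fnu = np.asarray(fluxes_jy, dtype=float)
    x = np.log10(lam)
    y = np.log10(fnu / lam)          # proportional to log(lambda F_lambda)
    alpha, _ = np.polyfit(x, y, 1)   # slope of the fit is alpha
    return alpha

def greene_class(alpha):
    """Class boundaries of Greene et al., as tabulated in the text."""
    if alpha > 0.3:
        return "class I"
    if alpha >= -0.3:
        return "flat"
    if alpha > -1.6:
        return "class II"
    return "class III"
```

As a sanity check, a source with $F_{\nu}\propto\lambda$ has flat $\lambda F_{\lambda}$ and hence $\alpha=0$ (flat spectrum), while constant $F_{\nu}$ gives $\alpha=-1$ (class II).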
The $H-K_{\rm s}$ colour tends to be redder for stars belonging to younger evolutionary stages. The fraction of objects detected in the J and H bands also decreases for objects belonging to younger stages, as would be expected for typical deeply embedded class I objects. Table \ref{table:vvvclass} shows that 66$\%$ of class I objects are not detected in the J band, whilst $23\%$ of them are not detected in either the J or H band. These near-IR colour trends confirm that, as with typical YSOs, highly variable YSOs whose spectral index indicates an earlier evolutionary stage also have higher extinction by circumstellar matter, along with more infrared emission from circumstellar matter. It might be thought that the relatively high reddening of most of the YSOs in Fig. \ref{vvv:gc} is due to foreground extinction, given that in Paper II we derive typical distances of a few kpc for these sources. We measure foreground extinction for a sub-sample of VVV objects (the 28 variable YSOs in Paper II) by estimating the extinction of red clump giants found at distances similar to those of our objects. The red clump giants are identified in the local $K_{\rm s}$ vs $(J-K_{\rm s})$ colour-magnitude diagrams of the VVV objects (6\arcmin$\times$6\arcmin~fields), and distances are estimated from their observed $K_{\rm s}$ magnitudes and mean $(J-K_{\rm s})$ colours using equation 1 in \citet{2011Minniti}. The excess of the mean $(J-K_{\rm s})$ with respect to the intrinsic colour of red clump giants \citep[$(J-K_{\rm s})_{0}=0.70$~mag,][]{2011Minniti} gives us a measure of the foreground extinction to the front of the molecular cloud containing each YSO. In most cases the YSO itself is redder than the red giant branch stars, due to extinction by matter within the cloud and by circumstellar matter. This method yields values of A$_{K_{\rm s}}\sim0.8$ to 1.4 mag (i.e.
A$_{V} \sim$7 to 12 mag), which is higher than the typically very low diffuse interstellar extinction between the sun and nearby molecular clouds. We infer that the optical faintness and steeply rising near-infrared SEDs of these objects are due not only to their early evolutionary stage but also to foreground extinction. Correcting our values of $\alpha$ to account for A$_{V}=11$~mag produces changes of 0.2-0.4 in this parameter. These changes would alter the classification of 1/3 of the sample where we can estimate $\alpha$, mostly in objects where $\alpha$ has a value that is close to the limits set by \citet{1994Greene}. Despite this correction, flat-spectrum and class I sources still dominate the sample of YSOs with a measured value of $\alpha$: with no correction these represent 76$\%$ of the sample, whereas with a correction to $\alpha$ of 0.3 the proportion remains high, at 59$\%$. We choose not to apply a correction in Table \ref{table:vvvclass} and our subsequent analysis because the extinctions and distances are uncertain (e.g. the red giant branch is not well defined in the CMDs for about a third of sources) and we cannot be sure this method is entirely correct. Our derived extinctions are a factor of $\sim$2 higher than indicated by the 3D extinction map of \citet{2006Marshall}, which was also based on red clump giants but used 2MASS data on a coarser angular scale. Moreover, in Paper II we compare the ratio of 70~$\mu$m flux to 24~$\mu$m flux in the spectroscopic subsample with that found in a nearby sample of YSOs. At these far-infrared wavelengths, where extinction is very low, we find similar flux ratios for class I systems in both datasets, indicating that most VVV class I YSOs have been correctly classified. In the following discussion, we directly compare embedded (class I and flat spectrum) YSOs with sources in nearby SFRs.
We simply ask the reader to note firstly that $\alpha$ may be slightly inflated by interstellar extinction and secondly that this more distant sample will be biased towards more luminous YSOs of intermediate mass (see Paper II). \begin{table} \begin{center} \begin{tabular}{@{}l@{\hspace{0.3cm}}c@{\hspace{0.25cm}}c@{\hspace{0.2cm}}c@{\hspace{0.2cm}}c@{\hspace{0.2cm}}c@{\hspace{0.2cm}}} \hline Class & $\alpha$ & N & N$_{Jdrop}$ & N$_{JHdrop}$ & N$_{\Delta K_{\rm s} \geq 2}$\\ \hline class I & $\alpha>0.3$ & 198 & 130 & 45 & 49\\ flat & $-0.3 \leq \alpha \leq 0.3$ & 95 & 35 & 1 & 12\\ class II & $-1.6 < \alpha < -0.3$ & 83 & 19 & 1 & 6\\ class III & $ \alpha \leq -1.6 $ & 12 & 0 & 0 & 0\\ Undefined & n/a & 53 & 10 & 2 & 3\\ \hline \end{tabular} \caption{Number of VVV variable stars belonging to the different evolutionary classes of YSOs, as determined from their SEDs.}\label{table:vvvclass} \end{center} \end{table} \begin{figure*} \centering \resizebox{\columnwidth}{!}{\includegraphics{kysodist_v4.pdf}}\\ \resizebox{\columnwidth}{!}{\includegraphics{vvv_nir_23_v3.pdf}} \resizebox{\columnwidth}{!}{\includegraphics{vvv_nir_1f_v3.pdf}} \caption{(top) $K_{\rm s}$ distribution (from 2010 data) of class I (red), flat-spectrum (orange), class II (blue) and class III (black) YSOs. (bottom left) Colour-colour diagram for class II (blue filled circles) and class III YSOs (black open circles) from VVV. In the figure, lower limits in colour are marked by arrows. The classical T Tauri locus of \citet{1997Meyer} is presented (long-dashed line) along with intrinsic colours of dwarfs and giants (solid lines) from \citet{1988Bessell}. Reddening vectors of $A_{V} = 20$~mag are shown as dotted lines. (bottom right) Colour-colour diagram for class I (red) and flat-spectrum (orange) YSOs. } \label{vvv:gc} \end{figure*} The distribution of $\Delta K_{\rm s}$ (Fig. \ref{vvv:gc2}, upper left panel) shows that the peak of the distribution is found at larger $\Delta K_{\rm s}$ for younger objects.
We also observe a higher fraction of objects with $\Delta K_{\rm s} > 2$~mag for class I objects ($25\%$) than for flat-spectrum ($13\%$) and class II ($7\%$) objects. The comparison of $\alpha_{class}$ vs $\Delta K_{\rm s}$ in the bottom panel of the same figure further illustrates the increase in amplitude at younger evolutionary stages, as well as the higher incidence of $\Delta K_{\rm s}>1$ variability at the younger stages. The class I and flat spectrum YSOs constitute 87\% of the $\Delta K_{\rm s} > 2$ subsample, dominating it even more than the full SFR-associated sample. We note that the 70 objects with $\Delta K_{\rm s} > 2$~mag also have redder near-infrared colours than the full sample: the proportions of $J$ band non-detections and $JH$ non-detections rise from 44\% and 11\% in the full SFR-associated sample to 56\% and 24\% in the $\Delta K_{\rm s} > 2$ subsample. This further highlights the simple fact that efficient detection of the majority of YSOs with the most extreme variations requires observation at wavelengths $\lambda \ge 2~\mu$m. Most importantly, this further emphasises that younger objects have larger accretion variations. Eruptive variables are the largest component of the $\Delta K_{\rm s}>2$ sample, comprising 30/70 objects. \begin{figure*} \resizebox{\columnwidth}{!}{\includegraphics{dkysodist_v4.pdf}} \resizebox{\columnwidth}{!}{\includegraphics{rms_cor_v4.pdf}}\\ \resizebox{\columnwidth}{!}{\includegraphics{alphadist_aper_v2.pdf}} \resizebox{\columnwidth}{!}{\includegraphics{alphadist_lpv_spv_eb_v2.pdf}}\\ \resizebox{\columnwidth}{!}{\includegraphics{Graf_alpha_amp2_v3.pdf}} \caption{(top, left) $\Delta K_{\rm s}$ distribution of the different YSO classes. (top, right) Mean r.m.s. variability for the different YSO classes in intervals up to 50 days (calculated with time bins of 1 day), and for intervals up to the full 1600 day baseline of the dataset (using time bins of 30 days).
The colour coding in the top (left and right) figures is the same as in Fig. \ref{vvv:gc}. (middle, left) $\alpha$ distribution for fader (black line), dipper (blue line) and eruptive (red line) objects. (middle, right) Same distribution but for long-term periodic objects (black line), stars with short-term variability (red line) and EBs (blue line). (bottom) $\alpha$ vs $\Delta K_{\rm s}$ for objects associated with SFRs. The limits for the different YSO classes are marked by dashed lines. Objects in this figure are divided according to the morphological light curve classification (Sect. \ref{vvv:sec_lcmorp}). The different classifications are marked in the plot.} \label{vvv:gc2} \end{figure*} In Fig. \ref{vvv:gc2} we also show the mean variability of YSOs belonging to different evolutionary classes as a function of time baseline. This is calculated by averaging the values of r.m.s. variability vs time interval computed using every possible pairing of two points within the light curve of each star. In the figure we show the variability over intervals up to 50 days (calculated with time bins of 1 day), and for intervals up to the full 1600 day baseline of the dataset (using time bins of 30 days). The r.m.s. variability over short timescales appears to be larger for more evolved objects than for flat-spectrum and class I sources. The variability increases with time for every YSO class and becomes flat at $t\sim250-350$ days, although this is less clear for class III sources because of noise due to the lower number of objects in this class. Class I and flat sources have higher r.m.s. variability on these longer timescales. The higher r.m.s. variability in class II and III systems on timescales $<$25 days can be explained by the fact that in these more evolved YSOs the stellar photosphere contributes a greater proportion of the K-band luminosity of the system, whereas in less evolved YSOs the luminosity is more dominated by the accretion disc.
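The pairwise r.m.s. computation described above, effectively a structure function over every possible pairing of two epochs, can be sketched as follows; the function name and binning details are illustrative assumptions:

```python
import numpy as np

def rms_vs_interval(times, mags, bin_days):
    """Mean pairwise variability as a function of time interval:
    for every pair of epochs (i, j) compute dt and (dm)^2, bin the
    pairs by dt, and take the root mean square in each bin."""
    t = np.asarray(times, dtype=float)
    m = np.asarray(mags, dtype=float)
    i, j = np.triu_indices(len(t), k=1)      # every possible pairing
    dt = np.abs(t[j] - t[i])
    dm2 = (m[j] - m[i]) ** 2
    nbins = int(np.ceil(dt.max() / bin_days))
    edges = np.arange(nbins + 1) * bin_days
    # Clip so a pair at the exact upper edge falls in the last bin.
    idx = np.clip(np.digitize(dt, edges) - 1, 0, nbins - 1)
    rms = np.full(nbins, np.nan)             # NaN marks empty bins
    for b in range(nbins):
        sel = idx == b
        if sel.any():
            rms[b] = np.sqrt(dm2[sel].mean())
    centres = edges[:-1] + bin_days / 2.0
    return centres, rms
```

Averaging such curves over all stars of a given SED class, with 1 d bins on short baselines and 30 d bins over the full 1600 d, reproduces the kind of statistic plotted in the figure.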
Consequently we may expect a greater contribution to variability from cold and hot spots in the photosphere of class II and class III YSOs, which manifests itself on the timescale of stellar rotation. We note that variability on rotational timescales of a few days also contributes to the measured mean r.m.s. variability shown in Fig. \ref{vvv:gc2} on all longer timescales, which is why the variation increases rapidly on baselines from zero to 3 days \citep[a typical rotation timescale in YSOs, e.g.][]{2010Alencar} and then increases more slowly thereafter. It is also very interesting to see that the 250-350 day timescale at which the maximum of mean r.m.s. variability is reached in all classes of YSO corresponds to variability on spatial scales of 1-2 au, assuming the timescale is determined by Keplerian rotation about low to intermediate mass YSOs. \citet{2014Connelley} found from a spectroscopic variability study that mass accretion tracers in their sample of class I YSOs, such as Br$\gamma$ and CO emission, are highly variable over timescales of 1-3 yr, and they proposed the above explanation of the timescale. Studies of the optical, near-infrared and mid-infrared temporal behaviour of YSOs have shown that the great majority are variable at these wavelengths, with their light curves showing a diversity of amplitudes, timescales and morphologies \citep[see e.g.][]{2013Findeisen, 2014Cody, 2014Rebull, 2015Rice, 2015Wolk}. These studies, and the earlier large-scale study of \citet{2012Megeath}, showed that the amplitude of the variability increases for younger embedded objects, though these works contained few if any YSOs with $\Delta K_{\rm s}>1$.
\citet{2012Rice} found indications that amplitudes $\Delta K > 1$~mag are more common amongst class I systems ($13\pm7\%$, based on 2 high amplitude objects in a sample of 30 in the Braid Nebula within Cygnus OB7), whereas such high amplitudes are found to be less common in more evolved YSOs \citep[see e.g.][]{2001Carpenter}. \subsection{Eruptive variability}\label{vvv:sec_erupvars} We have found from the morphological classification that 106 of our SFR-associated high amplitude variables have light curves that show sudden and large increases in brightness. Our near-infrared colour variability data, though limited, appear to verify that the variability does not arise from changes in the extinction along the line of sight in most cases. As we have discussed previously, large magnitude changes in our sample are more likely explained by either changes in the accretion rate or in the extinction along the line of sight. Since the latter effect appears to be ruled out in our eruptive variables, we infer that large changes in the accretion rate are the most likely explanation for the observed variability in these objects. In Fig. \ref{vvv:gc2} (middle panels) we show histograms of the spectral index of the YSOs of each light curve type. We also see that the YSOs classified as eruptive have larger values of $\alpha$ (i.e. redder SEDs) than the other categories of YSO variables. The redder SEDs of the eruptive objects support the idea that fluctuations in the accretion rates are larger and much more common at early stages of PMS evolution than in the class II stage. This is especially true given that class II YSOs typically outnumber class I YSOs in nearby SFRs by a factor of $\sim 3.7$-4.8 \citep[see e.g. table 1 in][]{2014Dunham} due to the greater duration of the class II stage. We have 70 eruptive variables classified as class I YSOs and 5 classified as class II YSOs (the remainder being flat spectrum or class III systems).
If we assume that the YSOs classified as eruptive are mainly genuine eruptive variables (which is supported by our spectroscopic follow up in Paper II), this tells us that the incidence of eruptive variability is $\sim$50-70 times higher in class I YSOs than class II YSOs. If we consider the possible correction to $\alpha$ due to foreground extinction (see Sect. \ref{vvv:sec_alphaclass}), then we have 52 class I YSOs and 15 class II YSOs in the eruptive category, so eruptive variability is still 13 to 17 times more common in class I YSOs. We conclude that the difference is at least an order of magnitude. It is interesting to see that some of the theoretical models that explain the outbursts observed in young stars predict that luminosity bursts are more common in the class I stage compared to later stages of stellar evolution. These models mainly involve gravitational instability (GI) in the outer disc, which is most likely to occur during the embedded phase. GI can cause the outer disc to fragment, forming bound fragments that later migrate into the inner disc. The infall of these fragments can lead to mass accretion bursts. GI can also produce a persistent spiral structure which efficiently transfers mass to small radii. The continuous pile-up of mass at small radii can trigger magneto-rotational instabilities (MRI), which lead to sudden disc outbursts \citep[see e.g.,][and references therein]{2009Zhu,2014Audard, 2015Vorobyov}. Thus invoking these mechanisms might explain the higher occurrence of eruptive variables at younger stages. However, we note that most mechanisms that explain eruptive variability, such as bursts due to binary interaction \citep{1992Bonnell}, MRI activated by layered accretion \citep{2009Zhu} or thermal instabilities \citep{1996Hartmann}, predict outbursts during the class I through class II stages of YSO evolution.
The classification as eruptive variables only comes from the $K_{\rm s}$ light curves, and spectroscopic follow up is needed to confirm their YSO nature. However, we note that potentially adding 106 more objects to the YSO eruptive variable class would increase the known members by a factor of five. Moreover, our survey covers just a portion of the Galactic plane. Therefore eruptive variability in YSOs might be more common than previously thought, especially for the most embedded and young objects. Spectroscopic follow up of a subsample of the objects shows that a large fraction of them are indeed eruptive YSOs (see Paper II). \section{Incidence of eruptive variability} To determine whether episodic accretion plays an important role in the assembly of stars it is essential to have an estimate of the incidence of the phenomenon. \citet{1996Hartmann} estimate that stars must spend $\sim 5\%$ of their lifetime in high states of accretion to gain their final mass during the infall phase. \citet{2009Enoch} and \citet{2009Evans} find that 5-7$\%$ of class I stars in their sample are at high accretion states. In this section we attempt to derive an initial estimate of this number using our sample of eruptive variables from VVV and the intrinsically red {\it Spitzer} sources of \citet{2008Robitaille}. From the latter catalogue we took specifically those objects classified by the authors as likely YSOs. The use of GLIMPSE and MIPSGAL \citep{2009Carey} photometry allows us to study the SEDs of this sample, and we find 2059 class I YSOs in the Galactic disc area studied in our work. Of these objects, 51 are found in our list of high amplitude variables in SFRs, 26 of them having the eruptive variable classification. These numbers imply that approximately 2.5$\%$ of class I YSOs show large amplitude variability, and that eruptive variability is observed in 1.3$\%$ of YSOs at this evolutionary stage.
Once again we need to take into account the completeness of our sample due to our selection criteria. In Sect. \ref{sec:vvvselec} we showed that our strict selection criteria caused our sample of high amplitude variables to be only $\sim50\%$ complete down to $K_{\rm s}=15.5$~mag, and we noted that at fainter magnitudes the completeness of our selection falls more steeply. Thus the incompleteness factor in the \citeauthor{2008Robitaille} sample is likely to be higher. We studied the completeness of the \citeauthor{2008Robitaille} sample in four widely separated VVV tiles (d052, d065, d072 and d109) using 2010-2014 $K_{\rm s}$ data, where we found 138 counterparts in the VVV catalogues. If we only consider 2010-2012 data, 22 objects show high amplitude variability\footnote{The true variability of these objects was confirmed via visual inspection of 1\arcmin$\times$1\arcmin~images.}, are located in SFRs, and have light curves that do not resemble those of AGB stars. Of these, 8 are part of the list of high amplitude variables presented in this work, which implies an incompleteness factor of 22/8, or 2.75, caused by the quality cuts that were required in Sect. \ref{sec:vvvselec} to reduce the number of false positives to a manageable level. This figure is higher than the factor of $\sim$2 (50\% completeness) quoted above because many of the class I YSO variables from the \citet{2008Robitaille} list have mean $K_{\rm s}>15.5$ magnitudes. If we consider the whole period of 2010-2014, then the number of high amplitude variables increases to 29, raising the incidence by a further factor of 1.32. We should also consider whether there is a bias towards erupting systems in the \citet{2008Robitaille} sample, which could in general be detected at greater distances, or lower down the mass function, than quiescent systems.
Such a bias would only affect our calculation if erupting YSOs detected by GLIMPSE in $\sim$2003 also erupted a second time in 2010-2014. We tested for this by comparing the GLIMPSE $I2$ (4.5~$\mu$m) and WISE $W2$ (4.6~$\mu$m) magnitudes of the 26 systems from the \citet{2008Robitaille} sample that are included in our list of eruptive YSOs. $W2$ and $I2$ magnitudes for YSOs are typically similar \citep[e.g.][]{2014Antoniucci}. Considering the 20 of these 26 systems with $W2$ detections in the WISE AllSky catalogue (data from 2010, typically before the eruption), the median $I2-W2$ is 0.02 mag. If we transform the {\it Spitzer} $I2$ magnitudes to the $W2$ passband using the equation given in \citet{2014Antoniucci} then the median difference between the {\it Spitzer} and WISE data is $-$0.14 mag. This small number indicates that any luminosity bias toward eruptive systems can be neglected in this initial estimate of their incidence. After correcting for incompleteness, the incidence of high amplitude variability among class I YSOs rises to 6.8$\%$, including only stars that varied in the 2010-2012 time period. This rises to 9\% if we extend the time period to 2010-2014. More importantly, the incidence of eruptive variables amongst class I YSOs reaches 3.4$\%$ (or 4.6$\%$ in the 2010-2014 data). The 4.6$\%$ figure has a statistical uncertainty of 40$\%$, not including any biases arising from our use of the sample of \citet{2008Robitaille}, so we should express the incidence of eruptive variability as about 3 to 6$\%$ over a 4 year timescale. These figures happen to agree with the incidence of outbursts inferred by \citet{2009Enoch} and \citet{2009Evans} from the observation of class I YSOs with high bolometric luminosities. However, their observations are perhaps more likely to trace long-duration, FUor-like outbursts than those detected by VVV.
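The completeness-corrected figures above follow from simple arithmetic. A minimal sketch using only the counts quoted in the text (the variable names and the explicit chaining of the two correction factors are our own presentation):

```python
# Completeness-corrected incidence arithmetic, using only counts quoted in the text.
n_class1 = 2059      # class I YSOs from the Robitaille et al. (2008) sample in the VVV area
n_highamp = 51       # of these, high amplitude variables in SFRs
n_eruptive = 26      # of these, classified as eruptive

raw_highamp = n_highamp / n_class1    # ~2.5 per cent before corrections
raw_eruptive = n_eruptive / n_class1  # ~1.3 per cent before corrections

f_selection = 22 / 8  # incompleteness factor from the four-tile test (2010-2012 data)
f_extended = 29 / 22  # further factor ~1.32 when extending to 2010-2014

highamp_2012 = 100 * raw_highamp * f_selection                 # ~6.8 per cent
highamp_2014 = 100 * raw_highamp * f_selection * f_extended    # ~9 per cent
eruptive_2012 = 100 * raw_eruptive * f_selection               # ~3.4-3.5 per cent
eruptive_2014 = 100 * raw_eruptive * f_selection * f_extended  # ~4.6 per cent
print(highamp_2012, highamp_2014, eruptive_2012, eruptive_2014)
```

Chaining the 29/22 factor onto the 2010-2012 figures reproduces the 2010-2014 incidences quoted in the text.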
If we assume that all class I YSOs go through episodes of enhanced accretion at the same average rate, then our estimate that $\sim4\%$ of them burst over a 4 year period would imply that each source suffers a burst roughly once every 100 yr. \citet{2016Froebrich} study the distribution of the separation between large H$_{2}$ knots in jets. The knots likely trace the accretion burst history of a particular source. \citeauthor{2016Froebrich} find that the bright knots have typical separations that correspond to about 1000 yr, which they conclude is too large to be EXor driven and too small to be FUor driven. This interval is also longer than the $\sim$100 yr burst interval inferred above for the VVV sample. However, further study is needed to obtain a more robust estimate, and the assumption of similar behaviour for all class I YSOs is of course questionable. \section{Summary and Conclusions} We have searched for high amplitude infrared variables in a 119 deg$^2$ area of the Galactic midplane covered by the VVV survey, using a method that tends to exclude transients, as well as eruptive variables that saturated during outburst or were very faint in quiescence, owing to our requirement for a high quality detection at every epoch. We discovered 816 bona fide variables in the 2010-2012 data with $\Delta K_{\rm s}>1$~mag in that time interval. Nearly all of these were previously unknown as variable stars, though a significant minority had been identified as embedded YSO candidates in the \citet{2008Robitaille} catalogue of stars with very red [4.5]-[8.0] {\it Spitzer}/GLIMPSE colours. We have presented a fairly simple analysis of the sample using the 2010-2014 VVV light curves, supplemented by a recently obtained second epoch of multi-filter $JHK_{\rm s}$ data and photometry from WISE and {\it Spitzer}. Our main conclusions are as follows.
\begin{itemize} \item In agreement with the previous results from searches in the UKIDSS GPS \citep{2014Contreras}, we observe a strong concentration of high-amplitude infrared variables towards areas of star formation. The two-point correlation function and nearest neighbour distribution of VVV objects show evidence for clustering on angular scales typical of distant Galactic clusters and SFRs. The variable stars found in SFRs are characterized by having near-infrared colours and SEDs of YSOs. \item The most common types of variable outside SFRs are LPVs (typically dust-obscured Mira variables because other types are saturated in VVV) and EBs. By visual inspection of the light curves of the variables in SFRs we were able to identify and remove most of these contaminating systems and provide a reasonably clean sample of high amplitude YSOs. The YSOs make up about half of the full sample of 816 variables. \item We analysed the light curves of the variables in SFRs, after removal of likely Mira-types, and we classified them as 106 eruptive variables, 39 faders, 45 dippers, 162 short term variables, 65 long-term periodic variables ($P>100$~days) and 24 eclipsing binaries (EBs) and EB candidates. Individual YSOs may display more than 1 type of variability and the low amplitude variation on short timescales seen in normal YSOs is common in every category. \item Spectroscopic follow-up of a substantial subset of the variables with eruptive light curves is presented in the companion paper (Paper II), confirming that the great majority of systems with eruptive light curves are indeed eruptive variables with signatures of strong accretion similar to those seen in EXors or FUors, or a mixture of the two. The 2 epochs of VVV $JHK_{\rm s}$ multi-colour data indicate that extinction is not the main cause of the variability in systems with eruptive light curves, though there is a trend for the sources to become bluer when brighter, similar to EXors.
The faders show a wider range of colour behaviour, with more examples of ``bluer when fainter'' than the other categories. \item Unsurprisingly, very few of the EBs showed significant magnitude or colour changes between the two multi-colour epochs. Amongst the STVs and dippers, we observe large colour and magnitude changes that appear to differ from the reddening vector, although this conclusion seems to depend on the selection of objects from these classes. We observed changes consistent with the reddening vector in a few systems. It is possible that the flux changes in some of the STVs are caused by extinction, e.g. as has been observed in some binary YSOs with a circumbinary disc \citep{2014Windemuth, 2015Rice}. \item The STVs typically have lower amplitudes than any other category except the EBs and EB candidates. Moreover, the STVs have a bluer distribution of spectral indices than the full YSO sample, including a high proportion of class II YSOs with relatively low extinction. It is likely that in many of the STVs with periods $P<15$~days and $K_{\rm s}$ amplitudes not far above 1 magnitude the light curve is rotationally modulated by unusually prominent bright or dark spots on the photosphere. \item Variables in the eruptive and fader categories tend to have higher amplitudes than the full YSO sample over the 4 year period, with mean $\Delta K_{\rm s}$ = 1.72 and 1.95~mag, respectively, compared to 1.56~mag for the full YSO sample. \item It is reasonable to suppose that variable accretion is the cause of photometric variability in a proportion of the faders and some of the long-term periodic variables and STVs, although spectroscopic confirmation is limited in these categories as yet. In the periodic variables the accretion rate would presumably be modulated by a companion body \citep{2012Hodapp}.
Adding these together with the $\sim$100 YSOs with eruptive light curves suggests that the sample contains between 100 and 200 YSOs in which the variability is caused by large changes in accretion. If we take the lower end of this range, this increases the number of probable eruptive variable YSOs available for study by a factor of 5. Some of these systems have $K_{\rm s}$ amplitudes not far above the 1 mag level, below which variability due to spots and smaller changes in accretion rate and extinction becomes much more common. However, we see no clear argument for a higher threshold given that the amplitudes have a continuous distribution that can be influenced in individual YSOs by the mass and luminosity of the central protostar as much as the change in accretion rate \citep{1991Calvet}. \item As a whole, the high amplitude variables are a very red sample, dominated by embedded YSOs (i.e. systems with class I or flat spectrum SEDs that are typically not observable at optical wavelengths). This contrasts with the optical selection of the classical FUors and EXors, the majority of which have a class II or flat spectrum classification. While FUors are often discussed as systems with a remnant envelope, only 3 or 4 deeply embedded eruptive variables were known prior to this study and our recent UGPS 2-epoch study. \item Variables with eruptive light curves tend to have the reddest SEDs. The spectral indices in the eruptive category indicate that this type of variability is at least one order of magnitude more common among class I YSOs than class II YSOs. This demonstrates that eruptive variability is either much more common or recurs much more frequently amongst YSOs at earlier stages of pre-MS evolution, when average accretion rates are higher. We hope this result will inform ongoing efforts to develop a theoretical framework for the phenomenon. \item For the full sample of YSOs, the r.m.s.
variability is higher at earlier evolutionary (SED) classes for time intervals longer than 25~days, and reaches a maximum at 250-350 days for all SED classes. The full duration of the outbursts in eruptive systems is typically 1 to 4 years. Some variables with eruptive light curves show more than 1 outburst. If some of the faders are eruptive variables in decline then a similar or slightly longer duration would apply. \item At time intervals shorter than 25 days the evolutionary dependence is reversed, with class II YSOs showing higher amplitudes than flat spectrum and class I YSOs. This suggests that the shorter timescale variations are dominated by rotational modulation by spots on the photosphere (which are more readily observed in class II systems than embedded YSOs) whereas accretion variations usually take place on longer timescales. \item The 1 to 4 year duration of the eruptions is between that of FUors ($>10$ years) and EXors (weeks to months). A small but growing number of eruptive YSOs with these intermediate durations have been found in recent years, some of which have a mixture of the spectroscopic characteristics of FUors and EXors. This has led to the recent concept that FUors and EXors may simply be part of a range of different eruptive behaviours on different timescales, all driven by large variations in accretion rate. Until now it was unclear whether these recent discoveries were rare exceptions, but it now seems clear that they are not. In fact we find that YSOs with intermediate outburst durations outnumber those with short EXor-like outbursts and are now the majority of known eruptive systems. A much longer duration survey would be required to determine the incidence of FUor-like outbursts amongst embedded systems.
In Paper II we propose a new class of eruptive variable to describe YSOs with eruptive outbursts of intermediate duration, which are usually optically obscured class I or flat spectrum YSOs and display a variety of the EXor-like and/or FUor-like spectroscopic signatures of strong accretion. \item We investigated the intrinsically red {\it Spitzer} sources of \citet{2008Robitaille}, specifically those objects classified by the authors as likely YSOs. We find 2059 such objects in the area covered by VVV. From these, 51 are found in the list of high amplitude variables in SFRs from this work, with 26 of them being classified as eruptive variables. After correcting for incompleteness imposed by the strict selection criteria in the VVV sample, we estimate that high amplitude variability is observed in 6.8$\%$ (when considering 2010-2012 data) to 9$\%$ (when considering 2010-2014 data) of class I YSOs. More importantly, the incidence of eruptive variability amongst class I YSOs rises to 3.4--4.6$\%$. This estimate agrees with those inferred from observations of class I YSOs by \citet{2009Evans} and \citet{2009Enoch}. However, the agreement might be a coincidence considering that their observations are likely tracing long-duration, FUor-like outbursts, rather than those detected in our study. \item YSOs are the commonest type of high amplitude infrared variable detected by the VVV survey. We estimate a completeness-corrected source density of 7 deg$^{-2}$ in the mid-plane of quadrant 4, in the approximate mean magnitude range $11<K_{\rm s}<16$. These YSOs are detected at typical distances of a few kpc, there being no very nearby SFRs in the area surveyed; they are therefore likely to be intermediate-mass YSOs. If we were able to detect such objects out to the far edge of the Galactic disc the source density would rise to perhaps $\sim 40$ deg$^{-2}$.
This confirms our previous suggestion in \citet{2014Contreras} that high amplitude YSO variables have a higher source density and average space density than Mira variables. EBs are very common at low amplitudes and they may have a comparable space density to YSOs at $K_{\rm s}$ amplitudes of 1 to 1.6 mag (an approximate upper limit for EBs in {\it Kepler}). YSOs are more numerous at higher amplitudes and may well be more numerous for all amplitudes over 1~mag if the eruptive phenomenon extends to the lower part of the stellar Initial Mass Function. \end{itemize} \section*{Acknowledgments} This work was supported by the UK's Science and Technology Facilities Council, grant numbers ST/J001333/1, ST/M001008/1 and ST/L001403/1. We gratefully acknowledge the use of data from the ESO Public Survey program 179.B-2002 taken with the VISTA 4.1m telescope and data products from the Cambridge Astronomical Survey Unit. Support for DM and CC is provided by the Ministry of Economy, Development, and Tourism’s Millennium Science Initiative through grant IC120009, awarded to the Millennium Institute of Astrophysics, MAS. DM is also supported by the Center for Astrophysics and Associated Technologies PFB-06, and Fondecyt Project No. 1130196. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France, and of the SAO/NASA Astrophysics Data System (ADS). A.C.G. was supported by the Science Foundation of Ireland, grant 13/ERC/I2907. We also acknowledge the support of CONICYT REDES project No. 140042 ``Young variables and proper motion in the Galactic plane. Valparaiso-Hertfordshire collaboration''. C. Contreras Pe\~{n}a was supported by a University of Hertfordshire PhD studentship in the earlier stages of this research. We thank Janet Drew for her helpful comments on the structure of the paper. \bibliographystyle{mn2e}
\baselineskip 24pt \setlength {\textwidth} {16 true cm} \setlength {\textheight} {23 true cm} \setlength {\oddsidemargin} {0 mm} \setlength {\evensidemargin} {0 mm} \setlength {\topmargin} {-5 mm} \setlength {\headheight} {15 pt} \setlength {\headsep} {0 pt} \textfloatsep 10 mm \begin{document} \begin{center} {\LARGE The QLBS Q-Learner Goes NuQLear: \vskip0.5cm Fitted Q Iteration, Inverse RL, and Option Portfolios } \vskip1.0cm {\Large Igor Halperin} \\ \vskip0.5cm NYU Tandon School of Engineering \\ \vskip0.5cm {\small e-mail: [email protected]} \vskip0.5cm \today \\ \vskip1.0cm {\Large Abstract:\\} \end{center} \parbox[t]{\textwidth}{ The QLBS model is a discrete-time option hedging and pricing model that is based on Dynamic Programming (DP) and Reinforcement Learning (RL). It combines the famous Q-Learning method for RL with the Black-Scholes (-Merton) model's idea of reducing the problem of option pricing and hedging to the problem of optimal rebalancing of a dynamic replicating portfolio for the option, which is made of a stock and cash. Here we expand on several NuQLear (Numerical Q-Learning) topics with the QLBS model. First, we investigate the performance of Fitted Q Iteration for an RL (data-driven) solution to the model, and benchmark it versus a DP (model-based) solution, as well as versus the BSM model. Second, we develop an Inverse Reinforcement Learning (IRL) setting for the model, where we only observe prices and actions (re-hedges) taken by a trader, but not rewards.
Third, we outline how the QLBS model can be used for pricing portfolios of options, rather than a single option in isolation, thus providing its own, data-driven and model-independent solution to the (in)famous volatility smile problem of the Black-Scholes model. } \newcounter{helpfootnote} \setcounter{helpfootnote}{\thefootnote} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \setcounter{footnote}{0} \footnotetext{ I would like to thank Eric Berger and Vivek Kapoor for stimulating discussions. I thank Bohui Xi, Tianrui Zhao, and Yuhan Liu for an initial implementation of a DP solution of the QLBS model. } \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{\thehelpfootnote} \newpage \section{Introduction} In Ref.~\cite{IH_2017}, we presented the QLBS model -- a discrete-time option hedging and pricing model rooted in Dynamic Programming (DP) and Reinforcement Learning (RL). It combines the famous Q-Learning method for RL \cite{Watkins_1989, Watkins} with the Black-Scholes (-Merton) model's idea of reducing the problem of option pricing and hedging to the problem of optimal rebalancing of a dynamic replicating portfolio for an option, which is made of a stock and cash \cite{BS,Merton}. In a nutshell, the celebrated Black-Scholes-Merton (BSM) model, also known as the Black-Scholes (BS) model \cite{BS,Merton}, shows that even though the option price can (and will) change in the future because it depends on a future stock price which is also unknown, a {\it unique} fair option price can be found by using the principle of one price for identical goods, along with the method of pricing by replication. This assumes continuous re-hedging and a special (lognormal) choice of stock price dynamics. However, such apparent uniqueness of option prices also means that, under these assumptions, options are completely {\it redundant}, as they can always be perfectly replicated by a simple portfolio made of a stock and cash.
As argued in more detail in \cite{IH_2017}, an apparent redundancy of options in the BSM model is due to the fact that the latter model is formulated in the {\it continuous time} limit $ \Delta t \rightarrow 0 $, where hedges are rebalanced continuously, and at zero cost. In such an academic limit, an option becomes risk-free, and hence completely redundant, as it is just {\it equal}, at any time $ t $, to a dynamic portfolio of a stock and cash. In any other case, i.e. when a time step $ \Delta t > 0 $, risk in an option position cannot be completely eliminated, but at best can be {\it minimized} by a proper choice of an offsetting position in a stock that underlies the option, i.e. by an optimal hedge. But in real life, re-balancing of option hedges {\it always} happens with some finite frequency $ \Delta t > 0 $, e.g. daily, monthly, etc. Therefore, keeping a time-step $ \Delta t $ {\it finite} while controlling {\it risk} in an option position is {\it critical} for keeping realism in {\it any} option pricing model. While the classical BSM model gives rise to elegant closed-form expressions for option prices and hedges in the {\it mathematical} limit $ \Delta t \rightarrow 0 $, it makes its theoretical {\it "risk-neutral"} option prices and hedges quite problematic in practice, even as a "zero-order" approximation to the real world. Indeed, as financial markets are precisely in the business of {\it trading risk}, any meaningful "zero-order" approximation should account for risk inherently present in financial options and other derivative instruments. One could argue that using an equilibrium "risk-neutral" framework for option pricing and hedging in a risky option trading business is akin to explaining a biological system starting with equilibrium thermodynamics.
While it would be absurd to describe life as a "correction" to non-life (which is the only possible state with equilibrium thermodynamics), various volatility smile models developed in continuous-time Mathematical Finance do essentially the same thing for financial risk in option pricing\footnote{"Economics ended up with the theory of rational expectations, which maintains that there is a single optimum view of the future, that which corresponds to it, and eventually all the market participants will converge around that view. This postulate is absurd, but it is needed in order to allow economic theory to model itself on Newtonian Physics." (G.~Soros). I thank Vivek Kapoor for this reference.}. Indeed, to adjust model-based {\it "risk-neutral"} option prices to market prices of {\it risky} options, traditional local and/or stochastic volatility models (see e.g. \cite{Wilmott}) come to the altar of Athena to ask her to breathe life into a clay {\it volatility surface} that was just {\it designed} to be flat (dead) in the original BSM model! This is because the latter model rests on {\it two} critical assumptions: 1) continuous re-hedging is possible, which produces an equilibrium "risk-neutral" option price, and 2) the world is log-normal with a {\it fixed} volatility which means a {\it flat} volatility surface as a function of option strike and maturity. Because {\it both} these assumptions are violated in practice, the original BSM model contradicts data, which leaves it somewhere between a pure mathematical model and a technical tool to quote market option prices as BS implied volatilities, and risk-manage options using their sensitivities with respect to the stock volatility ("vega"-sensitivity), and other BS sensitivity parameters ("the Greeks"). A mismatch with the market data is "fixed" by switching to local or stochastic volatility models that "match the market" much better than the original BSM model.
But this smacks of a "scientific" Cargo cult, with PDEs and GPUs replacing straw airplanes and wooden rifles. No matter how well stochastic volatility models fit market prices, they entirely miss the {\it first} question that needs an answer for trading, namely the question of expected {\it risk} in any given option contract. Their straight-face answer to such a basic question would be "Right now, you have no risk in this option, sir!" Needless to say, in physics a quantum model that tweaked the Planck constant $ \hbar $ to achieve consistency with data would be deemed nonsensical, as the Planck constant is a {\it constant} that cannot change, thus any "sensitivity with respect to $ \hbar $" would be meaningless (but see \cite{Scherrer_2009}). Yet, a likely questionable adjustment to the original BSM model via promoting a {\it model constant} (volatility) to a {\it variable} (local or stochastic volatility), to reconcile the model with market data, has become a market standard since 1974. The main reason for this is a common belief that advantages of analytical tractability of the classical BSM model in the continuous-time limit $ \Delta t \rightarrow 0 $ outweigh its main drawbacks such as inconsistency with data, thus calling for "fixes" in the original model, such as introduction of non-constant volatilities. However, this only brings a theoretical (and practical!) nightmare on the modeling side, when the financial {\it risk}, unceremoniously thrown away in the classical BSM model and other continuous-time models of Mathematical Finance but present in the market data, tries to make its way back into the game, via mismatches between the model and market behavior. This results in what was colorfully described as "Greek tragedies" for practitioners by Satyajit Das \cite{Das}.
The main issue with these Mathematical Finance models is that they lump together two {\it different} problems with the original BSM model: (i) the absence of risk in the limit $ \Delta t \rightarrow 0 $, and (ii) differences between real-world stock price dynamics and lognormal dynamics assumed in the BSM model. On the contrary, the QLBS model tackles these two problems sequentially. It starts with a discrete-time version of the BSM model, and re-states the problem of optimal option hedging and pricing as a problem of {\it risk minimization by hedging} in a sequential Markov Decision Process (MDP). When transition probabilities and a reward function are {\it known}, such a model can be solved by means of DP. This produces a semi-analytical solution for the option price and hedge, which only involves matrix linear algebra for a numerical implementation \cite{IH_2017}. On the other hand, we might know only the general {\it structure} of an MDP model, but {\it not} its specifications such as transition probability and reward function. In this case, we should solve a Bellman optimality equation for such an MDP model relying only on {\it samples} of data. This is a setting of {\it Reinforcement Learning}, see e.g. the book by Sutton and Barto \cite{SB}. It turns out that in such a {\it data-driven} and {\it model-free} setting, the QLBS model can be solved (also semi-analytically) by the celebrated {\it Q-Learning} method of Watkins \cite{Watkins_1989, Watkins}. In recognition of the fact that Q-Learning produces both the optimal price and optimal hedge in such a time-discretized (and distribution-free) version of the BS model, we called the model developed in Ref.~\cite{IH_2017} the QLBS model. While Ref.~\cite{IH_2017} focused on Mathematical Q-Learning ("MaQLear") for the QLBS model, here we expand on several topics with a Numerical Q-Learning ("NuQLear") analysis of the model.
First, we investigate the performance of Fitted Q Iteration (FQI) for an RL (data-driven) solution to the model, and benchmark it versus a DP (model-based) solution, as well as versus the BSM model. Second, we extend the model to a setting of Inverse Reinforcement Learning (IRL), where we only observe prices and actions (re-hedges) taken by a trader, but not rewards. Third, we outline how the QLBS model can be used for pricing portfolios of options, rather than a single option in isolation. This requires mutual consistency of pricing of different options in a portfolio. We show how the QLBS model addresses this problem, i.e. solves the (in)famous volatility smile problem of the Black-Scholes model. The paper is organized as follows. In Sect.~\ref{sect:QLBS_model}, we give a summary of the QLBS model, and present both DP-based and RL-based solutions for the model. An IRL formulation for the model is developed in Sect.~\ref{sect:IRL}. "NuQLear" experiments are presented in Sect.~\ref{sect:Experiments}. Sect.~\ref{sect:Option_portfolios} outlines option hedging and pricing in the QLBS model in a multi-asset (portfolio) setting. Finally, we conclude in Sect.~\ref{sect:Summary}. \section{The QLBS model} \label{sect:QLBS_model} The QLBS model starts with a discrete-time version of the BSM model, where we take the view of a seller of a European option (e.g. a put option) with maturity $ T $ and a terminal payoff of $ H_T(S_T) $ at maturity, which depends on a final stock price $ S_T $ at that time. To hedge the option, the seller uses the proceeds of the sale to set up a replicating (hedge) portfolio $ \Pi_t $ made of the stock $ S_t $ and a risk-free bank deposit $ B_t $. The value of the hedge portfolio at any time $ t \leq T $ is \begin{equation} \label{Pi_t} \Pi_t = a_t S_t + B_t \end{equation} where $ a_t $ is a position in the stock at time $ t $, taken to hedge risk in the option.
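For concreteness, the replicating portfolio of Eq.~(\ref{Pi_t}) can be simulated directly. The sketch below is illustrative only: the drift, volatility, constant hedge position $a_t$ and bank deposit are placeholder choices of ours, not outputs of the QLBS model (which determines $a_t$ optimally).

```python
import numpy as np

# Illustrative Monte Carlo sketch of the hedge portfolio Pi_t = a_t S_t + B_t.
# All parameter values below are placeholders chosen for the example.
np.random.seed(0)
mu, sigma, r = 0.05, 0.15, 0.03   # drift, volatility, risk-free rate
dt, T_steps = 1.0 / 12, 24        # monthly rebalancing over two years
S0, n_paths = 100.0, 10000

# Lognormal (geometric Brownian motion) stock paths S_t.
Z = np.random.randn(n_paths, T_steps)
log_ret = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z
S = np.hstack([np.full((n_paths, 1), S0),
               S0 * np.exp(np.cumsum(log_ret, axis=1))])

# Portfolio value along each path for a constant hedge a_t and a bank
# deposit growing at the risk-free rate (both placeholders).
a_t, B0 = 0.5, 40.0
B = B0 * np.exp(r * dt * np.arange(T_steps + 1))
Pi = a_t * S + B
print(Pi.shape)  # one portfolio value per path and time step
```

In the model itself the hedge is rebalanced at each step, so $a_t$ varies with time and state; the constant hedge here only illustrates the bookkeeping of Eq.~(\ref{Pi_t}).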
As at $ t = T $ the option position should be closed, we set $ a_T = 0 $, which produces a terminal condition at $ t = T $: \begin{equation} \label{B_T} \Pi_T = B_T = H_T(S_T) \end{equation} Instead of the (non-stationary) stock price $ S_t $, we prefer to use time-homogeneous variables $ X_t $ as state variables in the model, where $ X_t $ and $ S_t $ are related as follows: \begin{equation} \label{X_t} X_t = - \left( \mu - \frac{\sigma^2}{2} \right) t + \log S_t \; \; \Leftrightarrow \; \; S_t = e^{ X_t + \left( \mu - \frac{\sigma^2}{2} \right) t } \end{equation} \subsection{Optimal value function} As was shown in \cite{IH_2017}, the problem of optimal option hedging and pricing in such a discrete-time setting can be formulated as a problem of Stochastic Optimal Control (SOC) where a value function to be {\it maximized} is given by the following expression: \begin{equation} \label{Value_Function_port} V_t^{\pi} ( X_t) = \mathbb{E}_t \left[ \left. - \Pi_{t} - \lambda \, \sum_{t'=t}^{T} e^{-r (t'-t)} Var \left[ \left.
\Pi_{t'} \right| \mathcal{F}_{t'} \right] \right| \mathcal{F}_t \right] \end{equation} where $ \lambda $ is a Markowitz-like risk aversion parameter \cite{Markowitz}, $ \mathcal{F}_t $ means an information set of all Monte Carlo (or real) paths of the stock at time $ t $, and the superscript $ \pi $ stands for a {\it policy} $ \pi \left(t, X_t \right) $ that maps the time $ t $ and the current state $ X_t = x_t $ into an action $ a_t \in \mathcal{A} $: \begin{equation} \label{deterministic_policy} a_t = \pi(t, x_t) \end{equation} As shown in \cite{IH_2017}, the value function (\ref{Value_Function_port}) satisfies the following Bellman equation: \begin{equation} \label{MDP_BSM} V_t^{\pi}(X_t) = \mathbb{E}_{t}^{\pi} \left[ R_t(X_t, a_t, X_{t+1}) + \gamma V_{t+1}^{\pi} \left( X_{t+1} \right) \right] \end{equation} where the one-step time-dependent random reward is defined as follows: \begin{eqnarray} \label{one-step-reward} R_t(X_t, a_t, X_{t+1}) &=& \gamma a_t \Delta S_t \left(X_t, X_{t+1} \right) - \lambda \, Var \left[ \left. \Pi_t \right| \mathcal{F}_t \right] \nonumber \\ & = & \gamma a_t \Delta S_t \left(X_t, X_{t+1} \right) - \lambda \gamma^2 \mathbb{E}_t \left[ \hat{\Pi}_{t+1}^2 - 2 a_t \Delta {\hat S}_t \hat{\Pi}_{t+1} + a_t^2 \left( \Delta \hat{S}_t \right)^2 \right] \end{eqnarray} where $ \hat{\Pi}_{t+1} \equiv \Pi_{t+1} - \bar{\Pi}_{t+1} $, where $ \bar{\Pi}_{t+1} $ is the sample mean of all values of $ \Pi_{t+1} $, and similarly for $ \Delta {\hat S}_t $. For $ t = T $, we have $ R_T = - \lambda \, Var \left[ \Pi_T \right] $ where $ \Pi_T $ is determined by the terminal condition (\ref{B_T}).
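As a numerical illustration of the one-step reward (\ref{one-step-reward}), the expectations can be replaced by cross-sectional averages over Monte Carlo paths. In the sketch below the path arrays for $\Pi_{t+1}$ and $\Delta S_t$, the hedge $a_t$, and the risk aversion $\lambda$ are all placeholder inputs of ours, not quantities produced by the model:

```python
import numpy as np

# One-step QLBS reward computed cross-sectionally over Monte Carlo paths.
# Pi_next, dS, a_t and lam are illustrative placeholders.
np.random.seed(1)
n_paths = 5000
gamma = np.exp(-0.03 / 12)                  # one-period discount factor
lam = 0.001                                 # risk aversion parameter lambda
Pi_next = 10.0 + np.random.randn(n_paths)   # Pi_{t+1} per path
dS = 0.5 * np.random.randn(n_paths)         # Delta S_t per path
a_t = 0.5                                   # candidate hedge

# De-meaned quantities hat{Pi}_{t+1} and Delta hat{S}_t (sample means over paths).
Pi_hat = Pi_next - Pi_next.mean()
dS_hat = dS - dS.mean()

# R_t = gamma a_t Delta S_t
#       - lam gamma^2 E_t[ hat{Pi}^2 - 2 a_t Delta hat{S} hat{Pi} + a_t^2 Delta hat{S}^2 ]
risk_term = np.mean(Pi_hat**2 - 2 * a_t * dS_hat * Pi_hat + (a_t * dS_hat)**2)
R_t = gamma * a_t * dS - lam * gamma**2 * risk_term
print(R_t.shape)  # one reward sample per path; the variance penalty is common to all
```

Note that the quadratic in $a_t$ inside the expectation is the complete square $(\hat{\Pi}_{t+1} - a_t \Delta \hat{S}_t)^2$, which is what makes the later optimization over $a_t$ analytically tractable.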
An {\it optimal policy} $ \pi_t^{\star} (\cdot | X_t) $ is determined as a policy that maximizes the value function $ V_t^{\pi} \left( X_t \right) $: \begin{equation} \label{Value_maximization_pi} \pi_t^{\star}(X_t) = arg \, \max_{ \pi} \, V_t^{\pi} ( X_t) \end{equation} The optimal value function $ V_t^{\star}(X_t) $ corresponding to the optimal policy satisfies the Bellman optimality equation \begin{equation} \label{Bellman_V_star} V_t^{\star}(X_t) = \mathbb{E}_{t}^{\pi^{\star}} \left[ R_t(X_t, a_t = \pi_t^{\star}(X_t), X_{t+1}) + \gamma V_{t+1}^{\star} \left( X_{t+1} \right) \right] \end{equation} Once it is solved, the (ask) option price is minus the optimal value function: $ C_t^{(ask)} = - V_t^{\star} ( X_t) $. If the system dynamics are known, the Bellman optimality equation can be solved using methods of Dynamic Programming such as Value Iteration. If, on the other hand, dynamics are unknown and the optimal policy should be computed using {\it samples}, which is a setting of Reinforcement Learning, then a formalism based on an action-value function, to be presented next, provides a better framework for Value Iteration methods. \subsection{Action-value function} The action-value function, or Q-function, is defined by an expectation of the same expression as in the definition of the value function (\ref{Value_Function_port}), but conditioned on both the current state $ X_t $ {\it and} the initial action $ a = a_t $, while following a policy $ \pi $ afterwards: \begin{eqnarray} \label{Value_maximization} & Q_t^{\pi} (x,a) = \mathbb{E}_t \left[ \left. - \Pi_{t}(X_t) \right| X_t = x, a_t = a \right] \\ &- \lambda \, \mathbb{E}_t^{\pi} \left[ \left. \sum_{t'=t}^{T} e^{-r (t'-t)} Var \left[ \left. \Pi_{t'}(X_{t'} ) \right| \mathcal{F}_{t'} \right] \right| X_t = x, a_t = a \right] \nonumber \end{eqnarray} The Bellman equation for the Q-function reads \cite{IH_2017} \begin{equation} \label{Bellman_Q_1} Q_t^{\pi}(x, a) = \mathbb{E}_{t} \left[ \left.
R_t(X_t, a_t, X_{t+1}) \right| X_t = x, a_t = a \right] + \gamma \mathbb{E}_{t}^{\pi} \left[ \left. V_{t+1}^{\pi} \left( X_{t+1} \right) \right| X_t = x \right] \end{equation} An optimal action-value function $ Q_t^{\star} (x,a) $ is obtained when (\ref{Value_maximization}) is evaluated with an optimal policy $ \pi_t^{\star} $: \begin{equation} \label{opt_policy_Q} \pi_t^{\star} = \arg \max_{\pi} Q_t^{\pi} (x,a) \end{equation} The optimal value function and action-value function are connected by the following equations \begin{eqnarray} \label{V_A} & & V_t^{\star}(x) = \max_{a} Q_t^{\star} (x,a) \\ & & Q_t^{\star} (x,a) = \mathbb{E}_t \left[ R_t(x, a, X_{t+1}) \right] + \gamma \mathbb{E} \left[ \left. V_{t+1}^{\star}(X_{t+1}) \right| X_t = x \right] \nonumber \end{eqnarray} The Bellman Optimality equation for the action-value function is obtained by substituting the first of Eqs.(\ref{V_A}) into the second one: \begin{equation} \label{Bellman_Q} Q_t^{\star} (x,a) = \mathbb{E}_t \left[ R_t \left(X_t, a_t, X_{t+1} \right) + \gamma \max_{a_{t+1} \in \mathcal{A}} \left. Q_{t+1}^{\star} \left( X_{t+1}, a_{t+1} \right) \right| X_t = x, a_t = a \right] \, , \; \; t = 0, \ldots, T-1 \end{equation} with a terminal condition at $ t = T $ given by \begin{equation} \label{Q_T} Q_T^{\star}(X_T, a_T = 0) = - \Pi_T \left(X_T \right) - \lambda \, Var \left[ \Pi_T \left( X_T \right) \right] \end{equation} where $ \Pi_T $ is determined by the terminal condition (\ref{B_T}).
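In a Monte Carlo setting, the terminal condition (\ref{Q_T}) can be evaluated directly on the simulated paths, with the cross-sectional sample variance standing in for $ Var \left[ \Pi_T \right] $. A minimal sketch (the function name is ours):

```python
import numpy as np

def terminal_q(Pi_T, lam):
    """Terminal condition of Eq. (Q_T): Q_T(X_T, a_T = 0) per path,
    with Var[Pi_T] estimated by the cross-sectional sample variance
    over all Monte Carlo paths."""
    return -Pi_T - lam * np.var(Pi_T)
```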
A "greedy" policy $ \pi^{\star} $ that is used in the QLBS model always seeks an action that maximizes the action-value function in the current state: \begin{equation} \label{pi_greedy} \pi_t^{\star} (X_t) = \arg \max_{a_t \in \mathcal{A}} Q_t^{\star} (X_t,a_t) \end{equation} \subsection{DP solution for the optimal Q-function} \label{sect:DP} If the transition probabilities needed to compute the expectation on the right-hand side of the Bellman optimality equation (\ref{Bellman_Q}) are {\it known}, then the Bellman equation (\ref{Bellman_Q}) can be solved, jointly with the optimal policy (\ref{pi_greedy}), using backward recursion starting from $ t = T-1 $ and the terminal condition (\ref{Q_T}). This can be used for benchmarking in our test environment, where we {\it do} know these probabilities, and know the reward function (\ref{one-step-reward}). Substituting the one-step reward (\ref{one-step-reward}) into the Bellman optimality equation (\ref{Bellman_Q}), we find that $ Q_t^{\star} \left( X_t, a_t \right) $ is {\it quadratic} in the action variable $ a_t $: \begin{eqnarray} \label{Bellman_Q_a_t} Q_{t}^{\star}(X_{t},a_{t}) &=& \gamma \mathbb{E}_{t} \left[ Q_{t+1}^{\star} \left( X_{t+1}, a_{t+1}^{\star} \right) + a_t \Delta S_t \right] \nonumber \\ &- & \lambda \gamma^2 \, \mathbb{E}_t \left[ \hat{\Pi}_{t+1}^{2} - 2 a_t \hat{\Pi}_{t+1} \Delta \hat{S}_t + a_t^2 \left( \Delta \hat{S}_t \right)^2 \right] \, , \; \; t = 0, \ldots, T-1 \end{eqnarray} As $ Q_t^{\star} \left( X_t, a_t \right) $ is a quadratic function of $ a_t $, the optimal action (i.e.
the hedge) $ a_t^{\star} (X_t) $ that maximizes $ Q_{t}^{\star}(X_{t},a_{t}) $ is computed analytically: \begin{equation} \label{a_star_t} a_{t}^{\star} \left( X_t \right) = \frac{\mathbb{E}_{t} \left[ \Delta \hat{S}_{t} \hat{\Pi}_{t+1} + \frac{1}{2 \gamma \lambda} \Delta S_{t} \right]}{ \mathbb{E}_{t} \left[ \left( \Delta \hat{S}_{t} \right)^2 \right]} \end{equation} Plugging Eq.(\ref{a_star_t}) back into Eq.(\ref{Bellman_Q_a_t}), we obtain an explicit recursive formula for the {\it optimal} action-value function: \begin{equation} \label{Q_star_rec} Q_{t}^{\star}(X_{t},a_{t}^{\star}) = \gamma \mathbb{E}_{t} \left[ Q_{t+1}^{\star}(X_{t+1},a_{t+1}^{\star}) - \lambda \gamma \hat{\Pi}_{t+1}^2 + \lambda \gamma \left( a_{t}^{\star} \left(X_t \right) \right)^2 \left( \Delta \hat{S}_{t} \right)^2 \right] \, , \; \; t = 0, \ldots, T-1 \end{equation} where $ a_{t}^{\star} \left( X_t \right) $ is defined in Eq.(\ref{a_star_t}). In practice, the backward recursion expressed by Eqs.(\ref{Q_star_rec}) and (\ref{a_star_t}) is solved in a Monte Carlo setting, where we assume access to $ N_{MC} $ simulated (or real) paths for the state variable $ X_t $ \cite{IH_2017}. In addition, we assume that we have chosen a set of basis functions $ \{ \Phi_n(x) \} $. We can then expand the optimal action (hedge) $ a_t^{\star} \left( X_t \right) $ and the optimal Q-function $ Q_t^{\star} \left(X_t, a_t^{\star} \right) $ in basis functions, with time-dependent coefficients: \begin{equation} \label{Q_basis_exp} a_t^{\star} \left( X_t \right) = \sum_{n=1}^{M} \phi_{nt} \Phi_n \left( X_t \right) \, , \; \; Q_t^{\star} \left(X_t, a_t^{\star} \right) = \sum_{n=1}^{M} \omega_{nt} \Phi_n \left( X_t \right) \end{equation} Coefficients $ \phi_{nt} $ and $ \omega_{nt} $ are computed recursively backward in time for $ t = T-1, \ldots, 0 $.
The results are given by the following expressions: \begin{equation} \label{phi_nt_vec} {\bf \phi}_t^{\star} = {\bf A}_t^{-1} {\bf B}_t \end{equation} where \begin{eqnarray} \label{AB} A_{nm}^{(t)} &=& \sum_{k=1}^{N_{MC}} \Phi_n \left( X_t^k \right) \Phi_m \left( X_t^k \right) \left( \Delta \hat{S}_t^k \right)^2 \nonumber \\ B_{n}^{(t)} &=& \sum_{k=1}^{N_{MC}} \Phi_n \left( X_t^k \right) \left[ \hat{\Pi}_{t+1}^k \Delta \hat{S}_t^k + \frac{1}{2 \gamma \lambda} \Delta S_t^k \right] \end{eqnarray} and \begin{equation} \label{omega_nt_vec} \omega_{t}^{\star} = {\bf C}_t^{-1} {\bf D}_t \end{equation} where \begin{eqnarray} \label{CD} C_{nm}^{(t)} &=& \sum_{k=1}^{N_{MC}} \Phi_n \left( X_t^k \right) \Phi_m \left( X_t^k \right) \nonumber \\ D_{n}^{(t)} &=& \sum_{k=1}^{N_{MC}} \Phi_n \left( X_t^k \right) \left( R_t \left(X_t^k, a_t^{k \star}, X_{t+1}^k \right) + \gamma \max_{a_{t+1} \in \mathcal{A}} Q_{t+1}^{\star} \left( X_{t+1}^k, a_{t+1} \right) \right) \end{eqnarray} Equations (\ref{phi_nt_vec}) and (\ref{omega_nt_vec}), computed jointly and recursively for $ t = T-1, \ldots, 0 $, provide a practical implementation of the DP-based solution to the QLBS model using expansions in basis functions. This approach can be used to find the optimal price and hedge when the dynamics are {\it known}. For more details, see Ref.~\cite{IH_2017}. \subsection{RL solution for QLBS: Fitted Q Iteration} \label{sect:FQI} Reinforcement Learning (RL) solves the same problem as Dynamic Programming (DP), i.e. it finds an optimal policy. But unlike DP, RL does {\it not} assume that transition probabilities and the reward function are known. Instead, it relies on {\it samples} to find an optimal policy. Our setting assumes {\it batch-mode} learning, where we only have access to some historically collected data.
The data available is given by a set of $ N_{MC} $ trajectories for the underlying stock $ S_t $ (expressed as a function of $ X_t $ using Eq.(\ref{X_t})), hedge position $ a_t $, instantaneous reward $ R_t $, and the next-time value $ X_{t+1} $: \begin{equation} \label{F_t_RL} \mathcal{F}_t^{(n)} = \left\{ \left( X_t^{(n)}, a_t^{(n)}, R_t^{(n)}, X_{t+1}^{(n)} \right) \right\}_{t=0}^{T-1} \, , \; \; n = 1, \ldots, N_{MC} \end{equation} We assume that such a dataset is available either as simulated data, or as real historical stock price data combined with real trading data or artificial data that would track the performance of a hypothetical stock-and-cash replicating portfolio for a given option. We use a popular batch-mode Q-Learning method called Fitted Q Iteration (FQI) \cite{Ernst, Murphy}. The starting point in this method is the choice of a parametric family of models for the quantities of interest, namely the optimal action and optimal action-value function. We use linear architectures, where the functions sought are {\it linear} in adjustable parameters, which are then optimized to find the optimal action and action-value function. We use the same set of basis functions $ \{ \Phi_n(x) \} $ as we used above in Sect.~\ref{sect:DP}.
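For concreteness, one possible construction of such a basis (matching the cubic B-splines on the observed range of $ X_t $ used in the experiments below) can be sketched with scipy; the function name and the clamped-knot construction are our illustrative choices:

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, n_basis=12, degree=3):
    """Evaluate n_basis cubic B-spline basis functions {Phi_n(x)} on a
    clamped knot vector spanning the observed range of x.
    Returns an array of shape (len(x), n_basis)."""
    lo, hi = x.min(), x.max()
    n_inner = n_basis - degree + 1               # number of breakpoints
    breakpoints = np.linspace(lo, hi, n_inner)
    # clamped knots: end knots repeated (degree + 1) times in total
    knots = np.concatenate([[lo] * degree, breakpoints, [hi] * degree])
    basis = np.empty((len(x), n_basis))
    for n in range(n_basis):
        coeffs = np.zeros(n_basis)
        coeffs[n] = 1.0                          # pick out one basis function
        basis[:, n] = BSpline(knots, coeffs, degree, extrapolate=False)(x)
    return np.nan_to_num(basis)                  # zero outside the base interval
```

On the interior of the observed range the clamped basis forms a partition of unity, which is a convenient numerical check.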
As the optimal Q-function $ Q_t^{\star} \left(X_t, a_t \right) $ is a quadratic function of $ a_t $, we can represent it as an expansion in basis functions, with time-dependent coefficients parametrized by a matrix $ {\bf W}_t $: \begin{eqnarray} \label{Q_any_a} Q_t^{\star} \left(X_t, a_t \right) &=& \left( 1, a_t, \frac{1}{2} a_t^2 \right) \, \left( \begin{array}{cccc} W_{11}(t) & W_{12}(t) & \cdots & W_{1M}(t) \\ W_{21}(t) & W_{22}(t) & \cdots & W_{2M}(t) \\ W_{31} (t) & W_{32}(t) & \cdots & W_{3M} (t) \\ \end{array} \right) \left( \begin{array}{c} \Phi_1(X_t) \\ \vdots \\ \Phi_M(X_t) \\ \end{array} \right) \nonumber \\ &\equiv & {\bf A}_t^T {\bf W}_t {\bf \Phi}(X_t) \equiv {\bf A}_t^T \, {\bf U}_{W} (t,X_t) \end{eqnarray} Eq.(\ref{Q_any_a}) is further re-arranged to convert it into a product of a parameter vector and a vector that depends on both the state and the action: \begin{eqnarray} \label{rearrange} Q_t^{\star} \left(X_t, a_t \right) &=& {\bf A}_t^T {\bf W}_t {\bf \Phi}(X) = \sum_{i=1}^{3} \sum_{j=1}^{M} \left( {\bf W}_t \odot \left( {\bf A}_t \otimes {\bf \Phi}^T(X) \right) \right)_{ij} \nonumber \\ &=& \vec{{\bf W}}_t \cdot vec \left( {\bf A}_t \otimes {\bf \Phi}^T(X) \right) \equiv \vec{{\bf W}}_t \vec{{\bf \Psi}} \left(X_t,a_t \right) \end{eqnarray} Here $ \odot $ stands for an element-wise (Hadamard) product of two matrices. The vector of time-dependent parameters $ \vec{{\bf W}_t} $ is obtained by concatenating columns of matrix $ \bf{W}_t $, and similarly, $ \vec{{\bf \Psi}} \left(X_t,a_t \right) = vec \left( {\bf A}_t \otimes {\bf \Phi}^T(X) \right) $ stands for a vector obtained by concatenating columns of the outer product of vectors $ {\bf A}_t $ and $ {\bf \Phi}(X) $. 
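The re-arrangement (\ref{rearrange}) amounts to flattening an outer product. A minimal sketch (with our own naming, using row-major flattening so that it pairs with a row-major flattening of $ {\bf W}_t $):

```python
import numpy as np

def psi_vector(a_t, phi_x):
    """State-action features of Eq. (rearrange):
    Psi(X_t, a_t) = vec( A_t (outer) Phi(X_t) ), with A_t = (1, a_t, a_t^2/2).
    Pairing the result with W_t.ravel() (same row-major ordering)
    reproduces A_t^T W_t Phi(X_t) of Eq. (Q_any_a)."""
    A_t = np.array([1.0, a_t, 0.5 * a_t**2])
    return np.outer(A_t, phi_x).ravel()
```

The identity $ Q_t^{\star} = \vec{{\bf W}}_t \cdot \vec{{\bf \Psi}}(X_t, a_t) = {\bf A}_t^T {\bf W}_t {\bf \Phi}(X_t) $ then holds term by term, as long as the same flattening convention is used on both sides.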
Coefficients $ \vec{\bf{W}}_t $ can then be computed recursively backward in time for $ t = T-1, \ldots, 0 $ \cite{IH_2017}: \begin{equation} \label{W_opt_vec} \vec{\bf{W}}_{t}^{\star} = {\bf S}_t^{-1} {\bf M}_t \end{equation} where \begin{eqnarray} \label{SM} S_{nm}^{(t)} &=& \sum_{k=1}^{N_{MC}} \Psi_n \left( X_t^k, a_t^k \right) \Psi_m \left( X_t^k, a_t^k \right) \nonumber \\ M_{n}^{(t)} &=& \sum_{k=1}^{N_{MC}} \Psi_n \left( X_t^k, a_t^k \right) \left( R_t \left(X_t^k, a_t^k, X_{t+1}^k \right) + \gamma \max_{a_{t+1} \in \mathcal{A}} Q_{t+1}^{\star} \left( X_{t+1}^k, a_{t+1} \right) \right) \end{eqnarray} To perform the maximization step in the second equation in (\ref{SM}) analytically, note that because coefficients ${\bf W}_{t+1} $ and hence vectors $ {\bf U}_{W} (t+1, X_{t+1}) \equiv {\bf W}_{t+1} {\bf \Phi}(X_{t+1}) $ (see Eq.(\ref{Q_any_a})) are known from the previous step, we have \begin{equation} \label{max_Q} Q_{t+1}^{\star} \left( X_{t+1}, a_{t+1}^{\star} \right) = \mathbf U_W^{\left(0\right)} \left(t+1,X_{t+1} \right) + a_{t+1}^{\star} \mathbf U_W^{\left(1\right)} \left(t+1,X_{t+1} \right) + \frac{\left( a_{t+1}^{\star} \right)^2 }{2} \mathbf U_W^{\left(2\right)} \left(t+1,X_{t+1} \right) \end{equation} It is important to stress here that while this is a quadratic expression in $ a_{t+1}^{\star} $, it would be completely {\it wrong} to maximize it over $ a_{t+1}^{\star} $ and substitute the resulting maximizer as the optimal value in Eq.(\ref{max_Q}). This would amount to using the same dataset to estimate both the optimal action and the optimal Q-function, leading to an overestimation of $ Q_{t+1}^{\star} \left( X_{t+1}, a_{t+1}^{\star} \right) $ in Eq.(\ref{SM}), due to Jensen's inequality and convexity of the $ \max(\cdot ) $ function. The correct way to use Eq.(\ref{max_Q}) is to plug in a value of $ a_{t+1}^{\star} $ computed using the analytical solution Eq.(\ref{a_star_t}), applied at the previous time step.
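A single backward step of Eqs.(\ref{W_opt_vec})-(\ref{SM}) is then an ordinary least-squares regression of the Bellman target on the state-action features. A sketch with our own naming, and with the same unit-matrix regularization as used in the experiments below:

```python
import numpy as np

def fqi_backward_step(Psi, rewards, q_next_max, gamma, reg=1e-3):
    """One backward step of Fitted Q Iteration, Eqs. (W_opt_vec)-(SM).

    Psi:        (N_MC, 3M) state-action features Psi(X_t^k, a_t^k)
    rewards:    (N_MC,) one-step rewards R_t^k
    q_next_max: (N_MC,) max_a Q*_{t+1}(X_{t+1}^k, a), evaluated at the
                analytical optimal action from the previous step
    Returns vec(W_t) = S_t^{-1} M_t, with unit-matrix regularization.
    """
    S = Psi.T @ Psi + reg * np.eye(Psi.shape[1])
    M_vec = Psi.T @ (rewards + gamma * q_next_max)
    return np.linalg.solve(S, M_vec)
```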
Due to the availability of the analytical optimal action (\ref{a_star_t}), a potential overestimation problem, a classical problem of Q-Learning that is sometimes addressed using such methods as Double Q-Learning \cite{Double-Q}, is avoided in the QLBS model, leading to numerically stable results. Equation (\ref{W_opt_vec}) gives the solution for the QLBS model in a {\it model-free} and {\it off-policy} setting, via its reliance on Fitted Q Iteration, which {\it is} a model-free and off-policy algorithm \cite{Ernst, Murphy}. \section{Inverse Reinforcement Learning in QLBS} \label{sect:IRL} Inverse Reinforcement Learning (IRL) provides a very interesting and useful extension of the (direct) RL paradigm. In the context of batch-mode learning used in this paper, the setting of IRL is nearly identical to the setting of RL (see Eq.(\ref{F_t_RL})), except that there is no information about rewards: \begin{equation} \label{F_t_IRL} \mathcal{F}_t^{(n)} = \left\{ \left( X_t^{(n)}, a_t^{(n)}, X_{t+1}^{(n)} \right) \right\}_{t=0}^{T-1} \, , \; \; n = 1, \ldots, N \end{equation} The objective of IRL is typically two-fold: (i) find rewards $ R_t^{(n)} $ that would be most consistent with observed states and actions, and (ii) (the same as in RL) find the optimal policy and action-value function. One can distinguish between {\it on-policy} IRL and {\it off-policy} IRL. In the former case, we know that observed actions were {\it optimal} actions. In the latter case, observed actions may {\it not} necessarily follow an optimal policy, and can be sub-optimal or noisy. In general, IRL is a harder problem than RL. Indeed, not only do we have to find the optimal policy from data, which is the same task as in RL, but we also have to do it without observing rewards. Furthermore, the other task of IRL is to find a (the?) reward function corresponding to an observed sequence of states and actions.
Note that situations with missing reward information are probably encountered more frequently in potential applications of RL/IRL than situations with observable rewards. In particular, this is typically the case when RL methods are applied to study human behavior, see e.g. \cite{Krishnan}. IRL is also widely used in robotics as a useful alternative to direct RL methods via training robots by demonstrations, see e.g. \cite{Kober}. IRL offers a very attractive approach, at least conceptually, for many financial applications that consider rational agents involved in a sequential decision process, where no information about rewards received by an agent is available to a researcher. Some examples of such (semi- ?) rational agents would be loan or mortgage borrowers, deposit or saving account holders, credit card holders, consumers of utilities such as cloud computing, mobile data, electricity, etc. In the context of trading applications, such an IRL setting may arise when a trader wants to learn the strategy of a counterparty. She observes the counterparty's actions in their bilateral trades, but not the counterparty's rewards. Clearly, if she reverse-engineered the counterparty's most likely rewards from observed actions to find the counterparty's objective (strategy), she could use it to design her own strategy. This is a typical IRL problem. While typically IRL is a harder problem than RL, and {\it both} are computationally {\it hard}, in the QLBS model both are about equally {\it easy}, due to the quadratic form of both the reward function (\ref{one-step-reward}) and the action-value function (\ref{Bellman_Q_a_t}). Moreover, the general IRL setting, where only states and actions, but not rewards, are observed in a dataset, is exactly in between our two previous settings: a DP setting where we only observe states, and an RL setting where we observe states, actions, and rewards.
The main difference is that in the DP setting we know the model dynamics, including in particular the risk aversion parameter $ \lambda $, while in the setting of RL or IRL $ \lambda $ is unknown. Therefore, we will first assume that $ \lambda $ is {\it known}, and outline how IRL should proceed with the QLBS model, and then we will discuss ways to estimate $ \lambda $ from data. In the IRL setting, once we have observed states $ X_t $ and actions $ a_t $, rewards $ R_t $ corresponding to these actions can be obtained, if $ \lambda $ is known, in the same way they were computed in Eq.(\ref{one-step-reward}). The only difference is that while in the DP solution of Sect.~\ref{sect:DP} we computed rewards (\ref{one-step-reward}) for {\it optimal} actions (\ref{a_star_t}), in the IRL setting we plug the {\it observed} actions $ a_t $ into Eq.(\ref{one-step-reward}) to compute the corresponding rewards. After that, the algorithm proceeds in the same way as the FQI solution of Sect.~\ref{sect:FQI}, using these computed rewards instead of observed rewards in Eq.(\ref{SM}). Clearly, this produces {\it identical} RL and IRL solutions of the QLBS model, as long as the $ \lambda $ implied by the observed rewards $ R_t $ in the RL case is the same $ \lambda $ used in Eq.(\ref{one-step-reward}) by the IRL solution. This means that the first problem of IRL, i.e. finding a reward function, amounts, for the QLBS model, to finding just {\it one} parameter $ \lambda $ using Eq.(\ref{one-step-reward}). This can be done using an approach that we present next. \subsection{Maximum Entropy IRL} A simple method to estimate the one-step reward function (\ref{one-step-reward}) by estimating its parameter $ \lambda $ is based on a highly tractable version of the popular Maximum Entropy (MaxEnt) IRL method \cite{Ziebart_2008} that was developed in \cite{IRL_marketing} in a different context.
We start by writing the expected reward corresponding to Eq.(\ref{one-step-reward}) as follows \begin{equation} \label{R_t} \bar{R}_t(X_t, a_t) \equiv \mathbb{E}_t \left[ R_t(X_t, a_t, X_{t+1}) \right] = c_0 (\lambda) + a_t c_1 (\lambda) - \frac{1}{2} a_t^2 c_2 (\lambda) \end{equation} where, omitting for brevity the dependence on $ X_t $, we defined \begin{equation} \label{reward_coeffs} c_0 (\lambda) = - \lambda \gamma^2 \mathbb{E}_t \left[ \hat{\Pi}_{t+1}^2 \right] \, , \; c_1 (\lambda) = \gamma \mathbb{E}_t \left[ \Delta S_t + 2 \lambda \gamma \Delta {\hat S}_t \hat{\Pi}_{t+1} \right] \, , \; c_2(\lambda) = 2 \lambda \gamma^2 \mathbb{E}_t \left[ \left( \Delta \hat{S}_t \right)^2 \right] \end{equation} The MaxEnt method of \cite{IRL_marketing} assumes that one-step probabilities of observing different actions $ a_t $ in data are described by an exponential model \begin{equation} \label{MaxEnt} p_{\lambda} \left( \left. a_t \right| X_t \right) = \frac{1}{Z_{\lambda}} e^{ \bar{R}_t(X_t, a_t) } = \sqrt{ \frac{c_2(\lambda)}{ 2 \pi} } \exp \left[ - \frac{c_2(\lambda)}{2} \left( a_t - \frac{c_1 (\lambda)}{c_2 (\lambda)} \right)^2 \right] \end{equation} where $ Z_\lambda $ is a normalization factor. Thus, by combining an exponential distribution of the MaxEnt method with the quadratic expected reward (\ref{R_t}), we end up with a Gaussian action distribution (\ref{MaxEnt}) for IRL in QLBS. Clearly, this is very good news given the tractability of Gaussian distributions. Using Eq.(\ref{MaxEnt}), the log-likelihood of observing data $ \left\{ X_t^{(k)}, a_t^{(k)} \right\}_{k=1}^{N} $ is (omitting a constant factor $ - \frac{1}{2} \log \left( 2 \pi \right) $ in the second expression) \begin{equation} \label{ll} LL (\lambda) = \log \prod_{k=1}^{N} p_{\lambda} \left( \left.
a_t^{(k)} \right| X_t^{(k)} \right) = \sum_{k=1}^{N} \left( \frac{1}{2} \log c_2^{(k)}(\lambda) - \frac{c_2^{(k)}(\lambda)}{2} \left( a_t^{(k)} - \frac{c_1^{(k)}(\lambda)}{ c_2^{(k)}(\lambda)} \right)^2 \right) \end{equation} where $ c_i^{(k)}(\lambda) $ with $ i = 1,2 $ stands for expressions (\ref{reward_coeffs}) evaluated on the $k$-th path. As this is a concave function of $ \lambda $, its unique maximum can easily be found numerically using standard optimization packages. Note that the optimization in Eq.(\ref{ll}) refers to one particular value of $ t $; therefore, this calculation can be repeated independently for different times $ t $, producing a curve $ \lambda_{impl}(t) $ that could be viewed as a term structure of the implied risk-aversion parameter. It can also be noticed that while Eq.(\ref{MaxEnt}) describes a {\it probabilistic} Gaussian policy (action probability), in Sect.~\ref{sect:FQI} we used the {\it deterministic} "greedy" policy (\ref{pi_greedy}). Therefore, using a value of $ \lambda $ estimated with Eq.(\ref{ll}) in the IRL algorithm described above may not produce the same result as the RL approach of Sect.~\ref{sect:FQI}. Policy assumptions can be made more consistent between the RL and IRL approaches if, instead of Q-Learning (in the form of Fitted Q Iteration) as used in Sect.~\ref{sect:FQI}, we switch to G-Learning \cite{G-Learning}, which replaces the "greedy max" term in Eq.(\ref{SM}) with a "soft-greedy max" term for a G-function: \begin{equation} \label{G-learning} \max_{a_{t+1} \in \mathcal{A}} Q_{t+1}^{\star} \left( X_{t+1}, a_{t+1} \right) \rightarrow - \frac{1}{\beta} \log \left( \int p \left(a | X_{t+1} \right) e^{- \beta G_{t+1} \left(X_{t+1}, a \right)} da \right) \end{equation} where $ \beta $ is an "inverse temperature" parameter of G-Learning \cite{G-Learning}. We leave G-Learning in the QLBS model for future research.
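To illustrate, the maximization of the log-likelihood (\ref{ll}) over $ \lambda $ can be carried out with any one-dimensional optimizer. In the sketch below (our own naming), the conditional expectations in Eq.(\ref{reward_coeffs}) are replaced by their per-path realizations as a crude stand-in; in practice, they would be estimated by regressions on the basis functions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_lik(lam, a, dS, dS_hat, Pi_hat, gamma):
    """Negative MaxEnt log-likelihood of Eq. (ll) for one time slice.
    Per-path c1, c2 follow Eq. (reward_coeffs), with conditional
    expectations replaced by per-path realizations (a simplification)."""
    c1 = gamma * (dS + 2.0 * lam * gamma * dS_hat * Pi_hat)
    c2 = 2.0 * lam * gamma**2 * dS_hat**2
    ll = 0.5 * np.log(c2) - 0.5 * c2 * (a - c1 / c2) ** 2
    return -np.sum(ll)

def implied_lambda(a, dS, dS_hat, Pi_hat, gamma, bounds=(1e-6, 1.0)):
    """Maximize the concave log-likelihood over lambda for one value of t."""
    res = minimize_scalar(neg_log_lik, bounds=bounds, method="bounded",
                          args=(a, dS, dS_hat, Pi_hat, gamma))
    return res.x
```

Repeating the call for each $ t $ produces the term-structure curve $ \lambda_{impl}(t) $ mentioned above.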
\section{NuQLear experiments} \label{sect:Experiments} We illustrate the numerical performance of the model in different settings (DP, RL, IRL) using simulated stock price histories $ S_t $ with the initial stock price $ S_0 = 100 $, stock drift $ \mu = 0.05 $, and volatility $ \sigma = 0.15 $. Option maturity is $ T = 1 $ year, and the risk-free rate is $ r = 0.03 $. We consider an ATM ("at-the-money") European put option with strike $ K = 100 $. Re-hedges are done bi-weekly (i.e. $ \Delta t = 1/24 $). We use $ N_{MC} = 50,000 $ Monte Carlo scenarios for the stock path, and report results obtained with two MC runs, where the error reported is equal to one standard deviation calculated from these runs. In our experiments, we use pure risk-based hedges, i.e. omit the second term in Eq.(\ref{a_star_t}), to facilitate comparison with the BSM model. We use 12 basis functions chosen to be cubic B-splines on a range of values of $ X_t $ between the smallest and largest values observed in the dataset. \subsection{DP solution} In our experiments below, we pick the Markowitz risk aversion parameter $ \lambda = 0.001 $. This provides a visible difference between QLBS and BS prices, while remaining not too far from the latter. The dependence of the ATM option price on $ \lambda $ is shown in Fig.~\ref{fig:price_vs_lambda}. \begin{figure}[ht] \begin{center} \includegraphics[ width=90mm, height=70mm]{Opt_price_vs_lambda_Markowitz.png} \caption{The ATM put option price vs risk aversion parameter. The horizontal red line corresponds to the BS model price. Error bars correspond to one standard deviation of two MC runs.} \label{fig:price_vs_lambda} \end{center} \end{figure} Simulated paths and the solutions for optimal hedges, portfolio values, and Q-function values corresponding to the DP solution of Sect.~\ref{sect:DP} are illustrated in Fig.~\ref{fig:QLBS_DP_summary_graphs_ATM_option_mu_r}.
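The simulated test environment described above can be reproduced in a few lines; the sketch below (our own naming) generates lognormal stock paths with the stated parameters:

```python
import numpy as np

def simulate_gbm_paths(n_paths, n_steps=24, S0=100.0, mu=0.05,
                       sigma=0.15, T=1.0, seed=42):
    """Simulate GBM stock paths with the parameters of this section
    (S0 = 100, mu = 0.05, sigma = 0.15, T = 1 year, dt = 1/24 for
    bi-weekly re-hedges). Returns an array (n_paths, n_steps + 1)."""
    dt = T / n_steps
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_paths, n_steps))
    increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    log_paths = np.concatenate(
        [np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1)
    return S0 * np.exp(log_paths)
```

A quick sanity check is that the sample mean of the terminal price is close to $ S_0 e^{\mu T} $.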
In the numerical implementation of matrix inversion in Eqs.(\ref{phi_nt_vec}) and (\ref{omega_nt_vec}), we regularized by adding a unit matrix multiplied by a regularization parameter of $ 10^{-3} $. \begin{figure}[ht] \begin{center} \includegraphics[ width=140mm, height=120mm]{QLBS_DP_summary_graphs_ATM_option_mu_r.png} \caption{DP solution for the ATM put option on a sub-set of MC paths} \label{fig:QLBS_DP_summary_graphs_ATM_option_mu_r} \end{center} \end{figure} The resulting QLBS ATM put option price is $ 4.90 \pm 0.12 $ (based on two MC runs), while the BS price is 4.53. \subsection{On-policy RL/IRL solutions} We first report results obtained with {\it on-policy} learning. In this case, optimal actions and rewards computed as part of the DP solution are used as inputs to the Fitted Q Iteration algorithm of Sect.~\ref{sect:FQI} and the IRL method of Sect.~\ref{sect:IRL}, in addition to the paths of the underlying stock. Results of two MC runs with the Fitted Q Iteration algorithm of Sect.~\ref{sect:FQI} are shown in Fig.~\ref{fig:QLBS_FQI_summary_graphs_ATM_option_mu_r}. Similarly to the DP solution, we add a unit matrix multiplied by a regularization parameter of $ 10^{-3} $ to invert the matrix $ {\bf S}_t $ in Eq.(\ref{W_opt_vec}). Note that because here we deal with {\it on-policy} learning, the resulting optimal Q-function $ Q_t^{\star} \left(X_t, a_t \right) $ and its optimal value $ Q_t^{\star} \left(X_t, a_t^{\star} \right) $ are virtually identical in the graph. \begin{figure}[ht] \begin{center} \includegraphics[ width=140mm, height=120mm]{QLBS_FQI_summary_graphs_ATM_option_mu_r.png} \caption{RL solution (Fitted Q Iteration) for {\it on-policy} learning for the ATM put option on a sub-set of MC paths for two MC runs.} \label{fig:QLBS_FQI_summary_graphs_ATM_option_mu_r} \end{center} \end{figure} The resulting QLBS RL put price is $ 4.90 \pm 0.12 $, which is identical to the DP value. As expected, the IRL method of Sect.~\ref{sect:IRL} produces the same result.
\subsection{Off-policy RL solution} In the next set of experiments we deal with {\it off-policy} learning. To generate off-policy data, we multiply, at each time step, optimal hedges computed by the DP solution of the model by a random uniform number in the interval $ [1- \eta, 1 + \eta] $, where $ 0 < \eta < 1 $ is a parameter controlling the noise level in the data. We will consider the values of $ \eta = [0.15, 0.25, 0.35, 0.5] $ to test the noise tolerance of our algorithms. Rewards corresponding to these sub-optimal actions are obtained using Eq.(\ref{one-step-reward}). In Fig.~\ref{fig:Option_price_vs_noise_level} we show results obtained for off-policy learning with 5 different scenarios of sub-optimal actions. While some non-monotonicity in these graphs is due to the low number of scenarios, the impact of sub-optimality of actions in the recorded data is rather mild, at least for a moderate level of noise in actions. This is as expected, because Fitted Q Iteration is an {\it off-policy} algorithm. This implies that when the dataset is large enough, the QLBS model can learn even from data with purely random actions, and, in particular, learn the BSM model itself, if the world is lognormal \cite{IH_2017}. \begin{figure}[ht] \begin{center} \includegraphics[ width=120mm, height=90mm]{Option_price_vs_noise_level.png} \caption{Means and standard deviations of option prices obtained with {\it off-policy} FQI learning with data obtained by randomization of DP optimal actions by multiplying each optimal action by a uniform random variable in the interval $ [1- \eta, 1 + \eta] $ for $ \eta = [0.15, 0.25, 0.35, 0.5] $, with 5 scenarios for each value, and 2 MC runs.
Horizontal red lines show values obtained with {\it on-policy} learning corresponding to $ \eta = 0 $.} \label{fig:Option_price_vs_noise_level} \end{center} \end{figure} Results of two MC runs for off-policy learning with the noise parameter $ \eta = 0.5 $ with the Fitted Q Iteration algorithm are shown in Fig.~\ref{fig:QLBS_FQI_off_policy_summary_ATM_eta_50}. \begin{figure}[ht] \begin{center} \includegraphics[ width=140mm, height=120mm]{QLBS_FQI_off_policy_summary_ATM_eta_50.png} \caption{RL solution (Fitted Q Iteration) for {\it off-policy} learning with noise parameter $ \eta = 0.5 $ for the ATM put option on a sub-set of MC paths for two MC runs. } \label{fig:QLBS_FQI_off_policy_summary_ATM_eta_50} \end{center} \end{figure} \section{Option portfolios} \label{sect:Option_portfolios} While above and in \cite{IH_2017} we looked at the problem of hedging and pricing of a single European option by an option seller who {\it does not} have a pre-existing option portfolio, here we outline a simple generalization to the case when the option seller {\it does} have such a pre-existing option portfolio, or alternatively if she wants to sell a few options simultaneously\footnote{The content of this section was previously presented in a separate note "Relative Option Pricing in the QLBS Model" (https://papers.ssrn.com/sol3/papers.cfm?abstract\_id=3090608).}. In this case, she needs to worry about {\it consistency} of pricing and hedging of {\it all} options in her new portfolio. In other words, she has to solve the dreaded {\it volatility smile problem} for her particular portfolio. Here we will show how she can do it in a {\it worry-free} way using the QLBS model. Assume the option seller has a pre-existing portfolio of $ K $ options with market prices $ C_1, \ldots, C_K $.
All these options reference an underlying state vector (market) $ {\bf X}_t $, which can be high-dimensional, such that each particular option $ C_i $ with $ i = 1, \ldots, K $ references only one or a few components of the market state $ {\bf X}_t $. Alternatively, we can add vanilla option prices as components of the market state $ {\bf X}_t $. In this case, our dynamic replicating portfolio would include vanilla options, along with underlying stocks. Such a hedging portfolio would provide a dynamic generalization of the static option hedging for exotics introduced by Carr {\it et al.} \cite{Carr}. We assume that we have a historical dataset $ \mathcal{ F }$ that includes $ N $ observations of trajectories of tuples of vector-valued market factors, actions (hedges), and rewards (compare with Eq.(\ref{F_t_RL})): \begin{equation} \label{F_n} \mathcal{F}_t^{(n)} = \left\{ \left( {\bf X}_t^{(n)}, {\bf a}_t^{(n)}, {\bf R}_t^{(n)}, {\bf X}_{t+1}^{(n)} \right) \right\}_{t=0}^{T-1} \, , \; \; n = 1, \ldots, N \end{equation} Now assume the option seller wants to add to this pre-existing portfolio another (exotic) option $ C_e $ (or alternatively, she wants to sell a portfolio of options $ C_1, \ldots, C_K, C_e $). Depending on whether the exotic option $ C_e $ was traded before in the market or not, there are two possible scenarios. We will look at them one by one. In the first case, the exotic option $ C_e $ was previously traded in the market (by the seller herself, or by someone else). As long as its deltas and related P\&L impacts marked by a trading desk are available, we can simply extend the vectors of actions $ {\bf a}_t^{(n)} $ and rewards $ {\bf R}_t^{(n)} $ in Eq.(\ref{F_n}), and then proceed with the FQI algorithm of Sect.~\ref{sect:FQI} (or with the IRL algorithm of Sect.~\ref{sect:IRL}, if rewards are not available). The outputs of the algorithm will be the optimal price $ P_t $ of the whole option portfolio, plus optimal hedges for all options in the portfolio.
Note that because FQI is an {\it off-policy} algorithm, it is quite forgiving of human or model errors: deltas in the data need not even be perfectly mutually consistent (see the single-option examples in the previous section). But of course, the more consistency in the data, the less data is needed to learn an optimal portfolio price $ P_t $. Once the optimal time-zero value $ P_0 $ of the total portfolio $ C_1, \ldots, C_K, C_e $ is computed, a market-consistent price for the exotic option is simply given by a subtraction: \begin{equation} \label{subtraction} C_e = P_0 - \sum_{i=1}^{K} C_i \end{equation} Note that by construction, the price $ C_e $ is consistent with {\it all} option prices $ C_1, \ldots, C_K $ and {\it all} their hedges, to the extent they are consistent between themselves (again, this is because Q-Learning is an off-policy algorithm). Now consider a different case, when the exotic option $ C_e $ was {\it not} previously traded in the market, and therefore there are no available historical hedges for this option. This can be handled by the QLBS model in essentially the same way as in the previous case. Again, because Q-Learning is an {\it off-policy} algorithm, the delta and reward of a {\it proxy} option $ C_e^{'} $ for $ C_e $ (one that {\it was} traded before) could be used in the scheme just described in lieu of their actual values for option $ C_e $. Consistent with common sense, this just slows down the learning, so that more data would be needed to compute the optimal price and hedge for the exotic $ C_e $. On the other hand, the closer the traded proxy $ C_e^{'} $ is to the actual exotic $ C_e $ the option seller wants to hedge and price, the more it helps the algorithm on the data-demand side. Finally, when rewards for $ C_e^{'} $ are not available, we can use the IRL methods of Sect.~\ref{sect:IRL}.
\section{Summary} \label{sect:Summary} In this paper, we have provided further extensions of the QLBS model developed in \cite{IH_2017} for RL-based, data-driven and model-independent option pricing, including some topics for ``NuQLear'' (Numerical Q-Learning) experimentation with the model. In particular, we have checked the convergence of the DP and RL solutions of the model to the BSM results in the limit $ \lambda \rightarrow 0 $. We have looked into both {\it on-policy} and {\it off-policy} RL for option pricing, and showed that Fitted Q Iteration (FQI) provides a reasonable level of noise tolerance with respect to possible sub-optimality of observed actions in our model, in agreement with the general property of Q-Learning of being an {\it off-policy} algorithm. This makes the QLBS model capable of learning to hedge and price even when traders' actions (re-hedges) are sub-optimal or not mutually consistent across different time steps, or, in a portfolio context, across different options. We formulated an Inverse Reinforcement Learning (IRL) approach for the QLBS model, and showed that when the Markowitz risk aversion parameter $ \lambda $ is {\it known}, the IRL and RL algorithms produce identical results, by construction. On the other hand, when $ \lambda $ is {\it unknown}, it can be separately estimated using Maximum Entropy (MaxEnt) IRL \cite{Ziebart_2008} applied to one-step transitions as in \cite{IRL_marketing}. While this does {\it not} guarantee identical results between the RL and IRL solutions of the QLBS model, such agreement can be assured by using G-Learning \cite{G-Learning} instead of Q-Learning in the RL solution of the model. Finally, we outlined how the QLBS model can be used in the context of option portfolios. By relying on Q-Learning and Fitted Q Iteration, which are {\it model-free} methods, the QLBS model provides its own data-driven and model-independent solution to the (in)famous volatility smile problem of the Black-Scholes model.
While fitting the volatility smile and pricing options consistently with the smile is the {\it main objective} of Mathematical Finance option pricing models, it is just a {\it by-product} for the QLBS model. This is because the latter is distribution-free, and is therefore capable of adjusting to {\it any} smile (a set of market quotes for vanilla options). As was emphasized in the Introduction and in \cite{IH_2017}, the {\it main} difference between all continuous-time option pricing models of Mathematical Finance (including the BSM model and its various local and stochastic volatility extensions, jump-diffusion models, etc.) and the QLBS model is that the former try to ``match the market'' while remaining clueless about the expected risk in option positions, whereas the QLBS model makes the {\it risk-return} analysis of option replicating portfolios the {\it main focus} of option hedging and pricing, similarly to how such analysis is applied to stocks in the classical Markowitz portfolio theory \cite{Markowitz}.
\section{Introduction} Tremendous efforts have been made to understand symmetry-protected gapped topological phases since the discovery of topological insulators \cite{TIrev,SPTrev}. Following this progress, various theoretical and experimental studies have begun to explore the gapless analogs of symmetry-protected topological phases, such as the Dirac \cite{neupane14,xu15,liu14} and Weyl semi-metals \cite{wan11,xu15-2,zhang15,xiong15,krempa12}, where low energy excitations possess Dirac-like spectra. Recently, three-dimensional materials with symmetry-protected Fermi line nodes have also been theoretically proposed and experimentally synthesized \cite{burkov11,kee12,yang_2, yige15,schaffer15,rhim15,chen_2,kane15,cava15,mullen15,weng14,yu15,weng15,zeng15}. These systems have nodal rings in momentum space protected by various combinations of time reversal invariance, inversion, chiral and other lattice symmetries. These non-trivial systems are predicted to host topologically protected surface states. However, so far no efforts have been made to study the effects of interactions. In this study, we investigate the effects of the long-range Coulomb interaction in nodal ring semi-metals. The role of the long-range interaction is well understood in various other fermion systems. In the best-studied case, the Fermi liquid metal, the long-range $1/r$ interaction is marginal, but the Fermi liquid survives due to strong Thomas-Fermi screening, which makes the Coulomb interaction effectively short-ranged. This is because metals have an extended Fermi surface and a constant density of states at the Fermi level. Results are also known in the opposite limit, where the energy gap vanishes only at isolated points of the Brillouin zone. Graphene (in two dimensions), Weyl semi-metals (in three dimensions) and double Weyl semi-metals receive logarithmic corrections from the Coulomb interaction, which remains marginal \cite{gonzalez96,neto12,isobe12,lai15,jian15}.
In the quadratic band touching case, a non-Fermi liquid phase was found \cite{yang10,moon13}. For anisotropic Weyl fermions, the Coulomb interaction becomes anisotropic and irrelevant \cite{yang14}. Nodal ring semi-metals lie in between these two well-studied limits. The energy gap closes on a one-dimensional line node, on which the density of states vanishes. Because of this, short-range interactions were found to be irrelevant \cite{senthil09,lee14}. Screening of the Coulomb interaction is expected to be much weaker than in the Fermi liquid metal, because fewer states are available to participate. Nonetheless, we show below that the Coulomb interaction is relevant at the non-interacting fixed point. Through renormalization group (RG) analysis and large-$N_f$ computations, we identify a non-trivial interacting fixed point where the partially-screened Coulomb interaction becomes irrelevant, making the fermions asymptotically free in the low energy limit. This allows us to treat the partially-screened Coulomb interaction as a perturbation and calculate the lifetime of the quasiparticles. It is found that the quasiparticle scattering rate vanishes as $E^2$ at low energies even though the partially-screened Coulomb interaction is still long-ranged. \section{Model} We start with a non-interacting effective Hamiltonian for the nodal ring semi-metal. This can be written as \begin{equation} \mathcal{H}_0 = \frac{ k_x^2 +k_y^2 -k_F^2 }{2m} \sigma^x+ \gamma k_z \sigma^y \equiv \epsilon_{a}(k) \sigma^a, \quad a=x,y \;, \end{equation} where the Pauli matrices $\sigma^x$ and $\sigma^y$ describe the orbital or pseudo-spin degrees of freedom. This Hamiltonian is similar to that of Ref.~\onlinecite{kane15}. This system has a nodal Fermi ring of radius $k_F$ in the $k_x$-$k_y$ plane, and a linear dispersion in the $k_z$-direction.
Its energy spectrum is \begin{equation} E_{\pm}(k)=\pm \sqrt{ \left( \frac{ k_x^2 +k_y^2 -k_F^2}{2m} \right) ^2 + (\gamma k_z)^2}\;, \label{eq:dispersion} \end{equation} for the empty $(+)$ and filled $(-)$ bands. In order to describe the effects of the Coulomb interaction, we use the Euclidean path integral formalism for the action in $3+1$ dimensions. \begin{equation} \mathcal{S} = \int d\tau d^3 x~\psi^\dagger \left[ \partial_\tau - i e \phi + \mathcal{H}_0 \right] \psi + \frac{1}{2} \int d\tau d^3 x~ (\partial_i \phi)^2 \label{eq:action1} \end{equation} The bosonic field $\phi$ represents the instantaneous Coulomb interaction introduced by the Hubbard-Stratonovich transformation. To study how important the interaction is at low energies, we start by finding the engineering dimension of the coupling constant. The non-trivial Fermi surface (ring) in the system affects the scaling dimensions of both fermionic and bosonic fields. Here we use an RG scheme where a momentum cutoff is applied in the directions around the Fermi ring. We scale the fermion momentum towards the Fermi ring \cite{fitzpatrick13,senthil09}; $k_F$ is fixed and scaling is done only in the Dirac dimensions in which there are linear dispersions. Using the definitions $k_r = \sqrt{k_x^2+k_y^2}$ and $\tilde{k_r} \equiv k_r - k_F$, both $\tilde{k_r}$ and $k_z$ are scaled. However, there is no scaling in the angular ($\phi \equiv \cos^{-1}(k_x/k_r)$) direction since this represents the gapless degree of freedom. Because of this anisotropy, it is easier to calculate the scaling dimensions from an action written in momentum space rather than in the form given in Eq.~\ref{eq:action1}. Here we generalize the expression to $d$ spatial dimensions and write the Coulomb interaction as a 4-fermion term.
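Eq.~\ref{eq:dispersion} is straightforward to evaluate numerically. The sketch below (with illustrative values for $k_F$, $m$ and $\gamma$) checks that the energy vanishes on the ring $k_x^2+k_y^2=k_F^2$, $k_z=0$, and grows linearly in $k_z$ on top of the ring.

```python
import math

def nodal_ring_energy(kx, ky, kz, kF=1.0, m=0.5, gamma=1.0, band=+1):
    """E_pm(k) of Eq. (dispersion); parameter values here are illustrative."""
    eps_x = (kx**2 + ky**2 - kF**2) / (2.0 * m)   # in-plane part, sigma^x
    eps_y = gamma * kz                            # out-of-plane part, sigma^y
    return band * math.hypot(eps_x, eps_y)        # +/- sqrt(eps_x^2 + eps_y^2)
```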
\begin{align} \mathcal{S} & \sim \int_{\omega,{\bm k}}\psi^\dagger (-i\omega + \mathcal{H}_0) \psi \nonumber \\ &~~~+ e^2 \int_{\omega_1,\omega_2,\omega_3,{\bm k},{\bm k}^\prime, {\bm q}} \frac{1}{q^2} \psi^\dagger(k+q) \psi (k) \psi^\dagger (k^\prime-q) \psi (k^\prime) \label{eq:action2} \end{align} We have used the notation $\int_\omega = \int d\omega$, $\int_{\bm k} = k_F \int d^{d-1}k \int d\phi$ and $\int_{\bm q} = \int d^{d}q$. The constants that have no scaling dimensions, such as $k_F$ and $\pi$, have been dropped for clarity. Note that while $k$ and $k^\prime$ are scaled only in the Dirac directions with $d-1$ dimensions, $q$ is scaled in all $d$ dimensions. This is because the important contribution arises when the momentum carried by the Coulomb interaction is small and the fermions are close to the Fermi ring. The scaling dimensions are found to be $[\tilde{k_r}] = 1$, $[k_z] = 1$, $[\omega] = 1$, $[q_i]=1$, $[\psi] = -(d+1)/2$, and $[e^2] = 3-d$. Therefore the critical dimension is the physical dimension $d=3$. From this we would conclude that the Coulomb interaction is marginal. \section{RG Analysis} The energy scales of this problem are the Coulomb energy $E_c = e^2 m v_F$, the kinetic energy $E_k = m v_F^2$, and the energy cutoff $E_\Lambda = v_F \Lambda$. We also define a velocity anisotropy parameter $\eta = \gamma / v_F$, where $v_F = k_F/m$ is the fermion radial velocity in the $k_z=0$ plane. The following dimensionless ratios determine the scaling behaviors. \begin{align} \alpha = \frac{E_c}{E_k} = \frac{e^2}{v_F} \quad,\quad \beta = \frac{E_c}{E_\Lambda} = \frac{e^2 k_F}{v_F \Lambda} \quad,\quad \eta = \frac{\gamma}{v_F} \end{align} To allow for an anisotropic Coulomb interaction, we use as the action for the boson \begin{align} \mathcal{S}_\phi = \frac{1}{2}\int d\tau d^3 x \left[ a \left( (\partial_x \phi)^2 + (\partial_y \phi)^2 \right) + \frac{1}{a} (\partial_z \phi)^2 \right] \;.
\end{align} \begin{figure}[] \includegraphics[width=80mm]{diags1.pdf} \caption{Diagrammatic representations of (a) the boson self energy and (b) the fermion self energy. Straight arrowed lines represent the fermion propagators and the wiggly lines the boson propagators. } \label{fig:diags1} \end{figure} We perform a 1-loop momentum shell RG around the Fermi ring by calculating the boson and fermion self energies to find the RG flow of the various parameters. The Feynman diagrams for these self energies are shown in Fig.~\ref{fig:diags1}. The boson self energy is \begin{eqnarray} \Pi(q,i\omega) = -e^2 \int_{\bm k} {\rm Tr}[ G_0(k+q, \Omega+\omega) G_0(k, \Omega) ]\;, \label{eq:Pi} \end{eqnarray} where $G_0(k,i\Omega) = (-i \Omega + \mathcal{H}_0)^{-1}$ is the bare Green's function of the fermions. For $\omega =0$, this gives \begin{align} \Pi({\bm q}, 0) &= e^2 \int_{{\bm k}}^\Lambda \frac{2}{4} \Big(1- \frac{\epsilon_a(k+q/2) \epsilon_a(k-q/2)}{E_{k+q/2} E_{k-q/2}} \Big) \nonumber \\ &~~~~~\times \frac{-2}{ E_{k+q/2} + E_{k-q/2}}\;, \label{eq:Pi2} \end{align} where $E_k=E_+(k)$ is defined by the dispersion relation shown in Eq.~\ref{eq:dispersion}. We define the momentum shell integral as $\int_{\bm k}^\Lambda = \frac{1}{(2\pi)^3} \int_0^{2\pi} d\phi ( \int_\mu ^\Lambda + \int_{-\Lambda}^{-\mu} ) k_F d \tilde{k_r} \int_{-\infty}^{\infty}dk_z$ with $\mu = \Lambda e^{-d \ell}$. The resulting integral can be done after expanding the integrand to second order in $q_r$ and $q_z$. We find \begin{align} \Pi(q_r,q_z) &\approx - \frac{e^2}{(2\pi)^2}\left( q_z^2 \frac{ 2 m^2 \gamma}{3 k_F } + q_r^2 \frac{k_F }{ 6\gamma} \right) \frac{d \ell}{\Lambda_r} \nonumber \\ &= -\beta^\prime \left( a q_r^2 \frac{1}{2 a \eta} + \frac{1}{a}q_z^2 2 a \eta \right) d\ell \;, \label{eq:bosonRG} \end{align} where $\beta^\prime = \beta \frac{1}{3(2\pi)^3}$. This is infrared (IR) divergent as $\Lambda_r \rightarrow 0$, and the Coulomb interaction is strongly renormalized.
Similarly, the fermion self energy is calculated by setting the external momentum to ${\bm p} = (k_F + p_x, 0, p_z)$. \begin{align} \Sigma_f({\bm p}) &= -e^2 \int_{\bm q}^\Lambda \frac{\mathcal{H}_0(p+q)}{E_{p+q}} \frac{1}{ a(q_x^2+q_y^2) + q_z^2/a}\nonumber \\ &\equiv -\frac{\alpha}{(2\pi)^2} \left( \sigma_x v_F p_x F_1(a\eta) + \sigma_y \gamma p_z F_2(a\eta) \right) d\ell \end{align} The momentum shell integral is defined as $\int_{\bm q}^\Lambda = \frac{1}{(2\pi)^3} (\int_\mu ^\Lambda + \int_{-\Lambda}^{-\mu} ) d q_x \int_{-\infty}^{\infty}dq_y dq_z$. The detailed calculation and the expressions for $F_1$ and $F_2$ are given in Appendix A. This scaling of the fermion self energy is consistent with the marginal engineering dimension of the bare Coulomb interaction. The final RG flow equations for $\alpha$, $\beta^\prime$, and $a \eta$ are \begin{align} \frac{d \alpha}{d \ell} &= \alpha \left[ -\frac{1}{2} \beta^\prime \left( \frac{1}{2a\eta} + 2a\eta \right) - \frac{\alpha}{(2\pi)^2} F_1(a\eta) \right] \nonumber \\ \frac{d\beta^\prime}{d \ell} &= \beta^\prime + \beta^\prime \left[ -\frac{1}{2} \beta^\prime \left( \frac{1}{2a\eta} + 2a\eta \right) -\frac{\alpha}{(2\pi)^2} F_1(a\eta) \right] \nonumber \\ \frac{d(a\eta)}{d \ell} &= a \eta \Bigg[ \frac{1}{2} \beta^\prime \left( \frac{1}{2a\eta} - 2a\eta \right) \nonumber \\ &~~~~~~~~~+ \frac{\alpha}{(2\pi)^2} \left( F_2(a\eta) - F_1 (a\eta) \right) \Bigg] \end{align} There are two fixed points: the non-interacting fixed point at $\alpha =0, \beta^\prime = 0$ ($a\eta$ is arbitrary) is unstable, and the interacting one at $\alpha = 0, \beta^\prime = 1, a\eta = 1/2$ is stable. At the non-interacting fixed point, $\alpha$ is marginally irrelevant and $\beta^\prime$ is relevant. The non-zero value of $\beta^\prime$ at the non-trivial interacting fixed point shows a strong renormalization of the Coulomb interaction, while $\alpha=0$ shows that the renormalized Coulomb interaction is irrelevant to the fermions.
After a step of eliminating high energy degrees of freedom, the boson propagator $D(q)$ can be written as \begin{align} &D^{-1}(q) \nonumber \\ &~~= a \left( 1+ \beta^\prime \frac{1}{2a\eta} d\ell \right) (q_x^2+ q_y^2) + \frac{1}{a} \left( 1+ \beta^\prime 2a\eta d\ell \right) q_z^2\;. \end{align} Therefore the anomalous dimension is $1$, which arises from the existence of the $k_F$ scale. The renormalized propagator at the new interacting fixed point satisfies \begin{align} D^{-1}(q)&\sim q_r^{2-1} + |q_z|^{2-1} = q_r + |q_z| \;. \label{eq:ScreenedCoulombRG} \end{align} This will be confirmed by a direct calculation below. \section{Large $N_f$ calculation} The screened Coulomb interaction in $d=3$ can also be directly calculated using the random phase approximation. This can be viewed as a large-$N_f$ calculation, where $N_f$ is the number of fermion flavors. The physical case is $N_f=2$ for the spin states. After introducing a sum over fermion flavors and modifying the coupling constant to $\frac{e}{\sqrt{N_f}}$, the same Eqs.~\ref{eq:Pi} and \ref{eq:Pi2} are calculated without the ${\bm q}$ expansions or the ${\bm k}$ cutoffs. The result is \begin{equation} \Pi(q_r,q_z,\omega=0) = -\frac{e^2}{(2\pi)^3} \left( \frac{k_F q_r}{\gamma} C_1 + 2m |q_z| C_2 \right)\;, \end{equation} where $C_1 = 6.86$ and $C_2 = 7.28$ are calculated numerically. (Details of the calculation are presented in the Supplemental Material.) Therefore, for small $|q|$, the screened Coulomb potential is \begin{equation} V_s(q) \sim \frac{1}{\frac{C_1 k_F }{\gamma} q_r+ 2m C_2 |q_z| }\;. \end{equation} Notice that the screened Coulomb interaction still has the algebraic momentum dependence $1/|q|$, in sharp contrast to that of Fermi liquids. The presence of the $k_F$ scale in the nodal ring spectrum is not enough to make the Coulomb interaction short-ranged. Furthermore, the directional dependence is qualitatively the same even though the nodal ring spectrum is strongly anisotropic.
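The $1/|q|$ behavior of the screened potential can be made explicit numerically. The sketch below uses the constants $C_1$, $C_2$ quoted above; the overall prefactor is dropped (as in the $\sim$ relation) and the values of $k_F$, $m$, $\gamma$ are illustrative.

```python
C1, C2 = 6.86, 7.28  # numerical constants from the large-N_f polarization

def screened_coulomb(q_r, q_z, kF=1.0, m=0.5, gamma=1.0):
    """Small-|q| screened potential V_s(q) ~ 1 / (C1 kF q_r / gamma + 2 m C2 |q_z|).
    Overall prefactors are dropped; parameter values are illustrative."""
    return 1.0 / (C1 * kF * q_r / gamma + 2.0 * m * C2 * abs(q_z))
```

Because the denominator is linear in momentum, doubling $(q_r, q_z)$ halves the potential, in contrast to the exponentially screened (Yukawa-like) potential of a Fermi liquid metal.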
It is important to note that this result is independent of the choice of RG scheme, since no cutoff has been imposed. The RG calculation is a weak coupling analysis, whereas this is a strong coupling analysis with $1/N_f$ as a control parameter. However, this result is still consistent with the RG result presented in Eq.~\ref{eq:ScreenedCoulombRG}, which lends validity to both. The imaginary part of the bosonic self energy determines the damping. This can be calculated by performing a Wick rotation. This gives the results \begin{eqnarray} {\rm Im} \Pi(q_r=0,q_z,\omega+i0^+) &\sim& \theta (\frac{\omega}{\gamma q_z}-1) \nonumber \\ {\rm Im} \Pi(q_r,q_z=0,\omega+i0^+) &\sim& \frac{\omega^2}{k_F q_r}\;. \end{eqnarray} (Full expressions are presented in the Supplemental Material.) Therefore, there is no damping in the direction perpendicular to the ring, while a boson with in-plane momentum shows damping weaker than that of the Fermi liquid. Landau damping, for comparison, gives ${\rm Im} \Pi(q,\omega) \sim \omega/q$. The vertex correction vanishes at the one loop level. This can be easily checked by setting all the external momenta and frequency to $0$. This is as required by the Ward identity, because the fermion self energy (Fig.~\ref{fig:diags1}(b)) has no frequency dependence. \section{Fate of the quasiparticles} \begin{figure}[] \includegraphics[width=40mm]{diags2.pdf} \caption{ The straight line represents the fermion propagator as in Fig.~\ref{fig:diags1} and the double wiggly line represents the renormalized boson propagator. } \label{fig:diags2} \end{figure} We have seen above that the bosons are strongly renormalized. However, since the screened Coulomb interaction is still long-ranged, we must check whether the interaction destroys the Fermi liquid or not. The fate of the quasiparticles can be determined from the self energy of the fermions.
Using the renormalized boson propagator as shown in Fig.~\ref{fig:diags2}, the self energy is \begin{align} \Sigma_f(p, i \omega_n) &= \int_{{\bm q}, {\bm q}_n} \frac{1}{-i \omega_n - i q_n +\mathcal{H}_0(p+q)} \nonumber \\ &~~~~~~~\times \frac{-e^2}{q^2- \Pi(q, i q_n)} \;. \end{align} We find that this is both UV and IR convergent, and therefore the screened Coulomb interaction is an irrelevant perturbation to the fermions. Therefore the fermions remain valid quasiparticles of the system and are effectively decoupled. The result is again consistent with the RG analysis presented earlier. The lifetime of these fermions can be found from the imaginary part of the self energy after analytic continuation, via the relation $1/\tau = -2{\rm Im}\Sigma_f$. The channel that has the largest contribution is the one that satisfies Fermi's golden rule. Focusing on this channel, for a fermion with initial momentum ${\bm p}$ close to the line node and energy $E_p$, we have \begin{align} \frac{1}{\tau} &\sim 2 e^4 \int_{{\bm k},{\bm q}} \frac{1}{\Pi(q,0)^2} \left( 1- \frac{\epsilon_{a}(k) \epsilon_{a}(k+q)}{E_k E_{k+q}}\right) \nonumber \\ &~~~~~~\times \delta( E_{k}+E_{k+q}+E_{p+q}-E_p) \;. \label{eq:lifetime} \end{align} Leading order contributions only come from the region where the intermediate wave vector ${\bm k}$ is very close to the nodal ring. In fact, ${\bm k}$ needs to be closer to the ring than ${\bm q}$ is to the origin. We find that $\frac{1}{\tau} \sim \frac{m}{k_F^2}E_p^2 C(\chi_p) $, where $\chi_p$ controls the in-plane component versus the out-of-plane component of ${\bm p}$. $C(\chi_p)$ is a factor that can be computed numerically for any $\chi_p$. (Details of this calculation can be found in the Appendix.) It is identically zero when the in-plane component disappears. This is because there is no decay channel that satisfies energy-momentum conservation. This can be seen above from ${\rm Im} \Pi(q_r=0,q_z,\omega) = 0$ when $q_z$ is small.
Overall, $1/\tau \sim E_p^2$ and therefore the quasiparticles are long-lived. Interestingly, it has the same energy dependence as the Fermi liquid case. While the density of states of this system vanishes at the Fermi level ($\sim \omega$), this is compensated by the partially screened Coulomb potential. \section{Experimental signatures} Similar to surface acoustic wave propagation experiments in two-dimensional materials \cite{huston62,halperin93,simon96}, a bulk sound wave propagation measurement in a periodic potential with wavelength $\lambda \sim 1/q$ can be used to probe the momentum dependence of the dielectric function. The sound velocity shift and attenuation are found to be \begin{eqnarray} \frac{\Delta v_s}{v_s} = \frac{\alpha^2}{2} \frac{1}{1+ f(q)^2}\;, \quad \kappa = \frac{\alpha^2 q}{2} \frac{ f(q) }{ 1+ f(q)^2} \end{eqnarray} where $f(q) = C \left( \frac{v_s}{v_F} \right)^2 \frac{k_F}{q}$, $C=\frac{\pi}{16}\frac{e^2}{\gamma\epsilon}$, and $\alpha$ is the coupling constant between the piezoelectric and the medium, which depends on the geometry. In contrast, in the Fermi liquid metal, $f(q) = C^{\prime} \frac{v_s}{v_F} \left( \frac{k_F}{q} \right)^2$, with $C^\prime = \frac{2e^2}{\epsilon v_F}$. A related physical observable is the DC conductivity. Using the Kubo formula and taking the limits $q \rightarrow 0$, then $\omega \rightarrow 0$, we find the DC conductivity in the clean limit to be finite (due to the underlying particle-hole symmetry) and anisotropic: $\sigma_{xx} = \sigma_{yy} = \frac{e^2}{\hbar}\frac{k_F v_F}{64 \gamma}$ and $\sigma_{zz} = \frac{e^2}{\hbar}\frac{k_F \gamma} {32 v_F}$ with restored units. These results are consistent with a previous computation for a similar system \cite{mullen15}. \footnote{In a Weyl semimetal this value approaches 0 linearly as the frequency approaches 0, and in Fermi liquids it diverges. 
Another system where a constant value is seen is two-dimensional graphene.} The characteristic screening of the Coulomb interaction also affects the phonon dispersion. The longitudinal acoustic phonon dispersion follows $\omega(q)^2 = \Omega_p^2/\epsilon(q)$, where $\Omega_p$ is the plasma frequency of the ions. This shows an unusual $\omega(q) \sim \sqrt{q}$ dependence for small $q$. \section{Conclusion} It is shown that the long-range Coulomb interaction in nodal ring semi-metals leads to a non-trivial fixed point where the screened Coulomb interaction acquires an anomalous dimension. At the same time, the screened Coulomb interaction becomes irrelevant at the interacting fixed point while remaining long-ranged. Hence the quasi-particles are asymptotically free, and physical properties can be computed using perturbation theory. We show that the quasi-particles have a long lifetime, even though the screening of a charged impurity potential would follow an unusual power-law form due to the anomalous dimension. Sound wave propagation and the acoustic phonon dispersion show unique momentum dependences. An anisotropic DC conductivity is found and is proportional to the size of the nodal ring. These properties could be tested in future experiments. Interesting future directions include studies of the coupling to critical bosonic modes and impurity/disorder effects. \acknowledgments We thank Jun-Won Rhim, Hae-Young Kee, and Yige Chen for helpful discussions. This work was supported by the NSERC of Canada, the Canadian Institute for Advanced Research, and the Center for Quantum Materials at the University of Toronto. \begin{widetext}
\section{INTRODUCTION} Grasp verification is a necessity for autonomous robots to determine the state of the grasp while performing object manipulation. Normally, robots need to perform a series of operations that depend on each other. If a task is not done correctly, then the robot should re-build the operation sequence based on the failure that occurred. Thus, the robot needs to verify its action to update its knowledge about the current state. Grasp verification in robots is generally done using sensors in the gripper, but this becomes challenging with the new flexible grippers. For example, on the KUKA youBot (see Fig.~\ref{arm_cams}) we have added parallel adaptive gripper fingers by Festo. The adaptive nature of the gripper fingers makes it difficult to place traditional sensors and to robustly determine the state of the grasp. In this paper, we propose a \textit{machine vision camera} sensor based grasp verification that works by capturing images of the gripper and verifying whether the grasp is successful using deep learning inference. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{arm_cam_1.png} \caption{youBot gripper with the machine vision camera JeVois A33, along with the camera view.} \label{arm_cams} \end{figure} A machine vision camera consists of an image sensor, a processor and a neural processing unit, which makes it possible to perform edge computing on the captured image. Performing inference near the data source reduces the load of transferring and processing images on a centralized server. This helps in reducing the latency of processing the data and improves the reliability of the overall system. Recently, deep neural networks have brought great improvements in solving image classification problems. This was achieved through the advent of convolutional neural networks (CNNs). Successful CNN approaches are able to solve image classification problems with near-human accuracy \cite{canziani2017evaluation}.
However, CNNs need considerable computation and memory resources. The idea of executing a deep learning algorithm on an embedded device has been discussed widely, and solutions such as compression have been provided \cite{alizadeh2018empirical, howard2017mobilenets}. In this paper, we formulate the grasp verification problem as a deep learning based image classification task. However, the two main challenges with performing deep learning inference on machine vision cameras are (i) the lack of literature on the performance of these cameras and (ii) establishing a deep learning architecture that is capable of fulfilling the visual classification task with satisfactory accuracy given the limitations of the embedded device. In this paper, we benchmark the performance of machine vision cameras by creating a parameterized model generator which generates CNN models with varying parameters, executing them, and recording performance metrics. Based on the hardware and software limitations of the machine vision cameras, the benchmarking results, and the grasp verification task, we select corresponding deep learning models. Finally, the whole grasp verification system is integrated, which includes dataset collection, training, and deployment of deep learning models compatible with the machine vision camera. To summarize, the main contributions of the paper are the \textbf{performance evaluation} of low-cost machine vision cameras and the integration of a machine vision camera with a real world robot for \textbf{grasp verification} tasks. We hope that the performance evaluation will drive the robotics community to apply machine vision cameras in other robotics applications. \section{RELATED WORK} Several researchers have studied autonomous grasping and vision based grasping. They all have pointed out the importance of grasp verification in terms of the demand for autonomous capabilities and for adapting manipulation in dynamic environments.
The method presented in \cite{stansfield1991robotic} is a general-purpose robotic grasping system that is designed to work in unstructured environments. Motions of the arm and fingers are automatically generated and validated by extracting a set of features from the environment. Visual information is obtained by scanning the target object from different views. The obtained visual perceptions are processed to generate and execute the grasping task. The visual processing is also used to detect a valid grasp, which is done passively. Machine learning and deep learning methods have also been explored. The application of deep learning methods to grasp detection is widely covered in \cite{caldera2018review}. However, deploying visual learning approaches specifically for grasp verification is not considered. A grasp system for under-actuated robotic hands is presented in \cite{yao2009analysis}. The method outputs a grasp strategy for each object, based on analyses of human knowledge. Valid grasps are detected and used to control the robotic hand. This examination is done using a well-trained neural network. The attribute parameters of the object are extracted and applied as inputs to the neural network. The result of the network is then compared with the grasp strategy decision. The approach presented in \cite{seredynski2015grasp} is a grasp planning procedure that extracts a sequence of desired poses of the object as well as expected external forces applied to the object during task execution. The control system presented in this work contains a grasp verification step that checks the stability of the grasp after the task execution. The approach presented in \cite{kulkarni2019low} uses proximity sensors for grasp detection, where the gripper is equipped with flexible fingers. Proximity sensors measure the deformation of the flexible fingers due to external forces, which is used to detect a grasp.
\section{MACHINE VISION CAMERAS} The machine vision cameras studied in this work are the Sipeed Maix Bit, the JeVois A33, and the OpenMV H7. Hardware aspects define capabilities in terms of computation power, response speed, and communication; software aspects define which types of neural network layers and activation functions the cameras can accept. \begin{figure}[t] \centering \includegraphics[scale=0.06]{cams.jpg} \caption{Machine vision cameras. From left to right: Sipeed Maix Bit, JeVois A33, OpenMV H7} \label{cams} \end{figure} \subsection{JeVois A33} The JeVois runs a Linux OS that is flashed onto the micro SD memory. The firmware contains libraries such as OpenCV 4.0, TensorFlow Lite, Darknet deep neural networks, DLib, etc. The JeVois can run code written in C++ and Python. Running the 1.3 GHz CPU requires a large current flow in a small footprint, which causes overheating issues and, consequently, the need for a cooling fan. Thus, this camera consumes more power than the others. The supported file format for CNN models is \textit{tflite}, and all layers and activation types of TensorFlow Lite are supported. \subsection{Sipeed Maix Bit} The Sipeed Maix Bit supports the MaixPy programming language, which is the MicroPython language ported to the K210 processor. Frequently used standard libraries, plus some custom libraries, are available in MaixPy. It executes a specific model file format called \textit{kmodel}. A neural network compiler called \textit{nncase}\footnote{https://github.com/kendryte/nncase} converts TFLite and Caffe models to the corresponding kmodel format. The accelerator currently only supports a restricted group of CNN layers and activation functions; in addition, the size of the \textit{kmodel} is limited to 5.9 MiB \footnote{https://maixpy.sipeed.com/en/libs/Maix/kpu.html}. \subsection{OpenMV H7} The OpenMV camera supports MicroPython, a compact implementation of Python 3.4. OpenMV supports only a subset of Python functions and class libraries.
This camera accepts only model files with a special file format (\textit{.network}). This type of file can be created by converting CNN models created with the \textit{Caffe} framework. The main drawback of this device arises when trying to execute custom deep learning models: after modifying the CNN architecture, the conversion procedure fails, as it is currently not supported. \subsection{Comparison} Even though all these cameras have the capability to perform visual computing and execute deep learning models, they also have differences such as processor speed, memory, power consumption, and, most importantly, the neural accelerators and their capability to do deep learning inference. Table \ref{table_cameras} displays the general specifications of the studied machine vision cameras. \begin{table}[h] \caption{Hardware specifications of the cameras} \label{table_cameras} \begin{center} \begin{tabular}{|l|c|c|c|} \hline Parameter & JeVois & Sipeed & OpenMV \\ \hline \hline Processor & ARM Cortex A7 & Kendryte K210 & Arm Cortex-M7 \\ & 4$\times$1.3 GHz & 400 MHz & 400 MHz \\ & 32 Bit & 64 Bit & 32 Bit \\ \hline Options & GPU$^1$ & KPU$^2$ & FPU$^3$ \\ \hline RAM & 256MB & 8MB & 1MB \\ \hline Comm. 
& Serial over USB & USB to TTL & USB Streaming \\ & Micro Serial & JTAG, ISP & SPI, I2C, UART \\ \hline Size [mm] & 40 x 32 x 21 & 54 x 26 x 13 & 45 x 36 x 30 \\ \hline Power & 5V, $>$800 mA & 5V, $>$600 mA & 3.3V, 170 mA \\ \hline Image Size & $1280\times1024$ & $1632\times1232$ & $640\times480$ \\ \hline \end{tabular} \end{center} \footnotesize{$^1$Dual core Mali-400, $^2$Kendryte Processing Unit (NN Processing Unit), $^3$Floating-point Processing Unit} \end{table} \begin{table}[h] \caption{Software specifications of the cameras} \label{table_cameras_software} \begin{center} \begin{tabular}{|l|c|c|c|} \hline Parameter & JeVois & Sipeed & OpenMV \\ \hline \hline Firmware & Linux & MicroPython & MicroPython \\ \hline Language & Python, C++ & MaixPy & MicroPython \\ \hline File format & .tflite & .kmodel & .network \\ \hline Accept CNNs & $\surd$ & $\surd$ & $\surd$ \\ \hline Accept custom CNNs & $\surd$ & $\surd$ & $\times$ \\ \hline Large model size & $\surd$ & $\times$ & $\times$ \\ \hline \end{tabular} \end{center} \end{table} Table \ref{table_cameras_software} compares parameters that are essential for the task of CNN model inference. Based on the hardware and software comparison, the JeVois and Sipeed cameras were selected for performance analysis. The OpenMV camera was dropped because of its software limitation in converting the custom deep learning models required for the performance analysis. \section{PERFORMANCE ANALYSIS} We benchmark the cameras by executing different custom CNN models and standardized off-the-shelf deep learning architectures. In the first benchmark, we use a CNN model generator which generates a range of CNN models with a varying number of parameters, while in the second benchmark we use two standardized deep learning models, MobileNet\cite{howard2017mobilenets} and YOLO\cite{redmon2017yolo9000}. For each experiment, the models perform inference on-board and two different performance metrics are recorded: \textit{latency} and \textit{throughput}.
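These metrics follow the EEMBC MLMark definitions. As a minimal pure-Python sketch (the exact benchmarking scripts are not reproduced here, and the nearest-rank percentile method is an assumption on our part), both can be computed from a list of recorded per-inference timing windows:

```python
# Sketch: the two benchmark metrics, from per-inference timing windows
# (in seconds). The nearest-rank percentile method is an assumption; the
# benchmarking follows the EEMBC MLMark metric definitions.

def latency_ms(timings, percentile=95):
    """Nearest-rank percentile of single-inference times, in milliseconds."""
    ordered = sorted(timings)
    rank = max(1, -(-percentile * len(ordered) // 100))  # ceiling division
    return ordered[rank - 1] * 1000.0

def throughput_fps(timings):
    """Average number of inferences per second over all timing windows."""
    return len(timings) / sum(timings)
```

For example, four windows of 0.05, 0.1, 0.2, and 0.25 s give a 95th-percentile latency of 250 ms and a throughput of about 6.7 fps.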
Fig \ref{fig_bench_sizes} shows the distribution of all individual experiments with respect to the number of parameters. \textit{Latency} is computed as the 95th percentile of the inference timing windows, i.e., the time taken for one inference to complete, and \textit{throughput} is the average inference performance, computed as the total number of inferences performed divided by the sum of all timing windows; in our case, the number of inferences performed per second\footnote{https://github.com/eembc/mlmark}. \begin{figure}[t] \includegraphics[width=\linewidth]{bench_sizes.png} \caption{Distribution of number of parameters of the benchmarked models} \label{fig_bench_sizes} \end{figure} \subsection{CNN Model Generator} The CNN model generator generates a range of CNN models by iterating through specified ranges of hyper-parameters. The resulting models vary in size, number of trainable parameters, and attributes. Two groups of CNN models are created for this experiment: one with \textit{Conv2D} layers and the other with \textit{DepthwiseConv2D} layers. Each model is described by the number of blocks, number of filters (only for Conv2D models), number of inputs (image size), and number of outputs. Different combinations of these hyper-parameters result in a range of models that are created and used for the experiment. Tables \ref{table_conv2dparam} and \ref{table_depconv2dparam} show the ranges of hyper-parameters used.
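A hedged pure-Python sketch of the generator's sweep over the Conv2D ranges (the actual generator code is not reproduced here; only combinations satisfying the Sipeed minimum output-size constraint $i/2^{b}\geq 4$ of equation \ref{eq1} are kept):

```python
from itertools import product

# Conv2D hyper-parameter ranges (start, end, increment), end inclusive.
BLOCKS  = range(2, 7)         # 2..6
FILTERS = range(34, 43, 4)    # 34, 38, 42
IMAGES  = range(16, 225, 52)  # 16, 68, 120, 172, 224
OUTPUTS = range(2, 11, 4)     # 2, 6, 10

def valid_conv2d_configs():
    """(blocks, filters, image_size, outputs) combinations that keep the
    smallest feature map at least 4x4, since each block halves the input."""
    return [(b, f, i, o)
            for b, f, i, o in product(BLOCKS, FILTERS, IMAGES, OUTPUTS)
            if i / 2 ** b >= 4]
```

Of the 225 raw combinations, 135 survive the constraint; for example, all six-block models are excluded because even a $224\times224$ input leaves a feature map smaller than $4\times4$.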
\begin{table}[h] \caption{Hyper-parameters for Conv2D models} \label{table_conv2dparam} \begin{center} \begin{tabular}{|l|c|c|c|} \hline Parameter & Start & End & Increment \\ \hline \hline Blocks & 2 & 6 & 1 \\ \hline Filters & 34 & 42 & 4 \\ \hline Images & 16 & 224 & 52 \\ \hline Outputs & 2 & 10 & 4 \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[h] \caption{Hyper-parameters for DepthwiseConv2D models} \label{table_depconv2dparam} \begin{center} \begin{tabular}{|l|c|c|c|} \hline Parameter & Start & End & Increment \\ \hline \hline Blocks & 1 & 5 & 1 \\ \hline Images & 64 & 224 & 32 \\ \hline Outputs & 2 & 82 & 16 \\ \hline \end{tabular} \end{center} \end{table} To choose these values, the constraints of the devices are taken into consideration. In addition, the ranges are selected such that the resulting models cover a complete set with various file sizes and specifications. The first constraint is the limit on the resulting file size imposed by the Sipeed camera. The second constraint is that Sipeed cannot accept layer outputs smaller than $4\times4$. Since the output size decreases as layers are added to a CNN architecture, equation \ref{eq1} defines a relation between the hyper-parameters of input size and number of blocks. \begin{equation} \frac{i}{2^{b}} \geq 4 \label{eq1} \end{equation} where $i$ is the input size (images) and $b$ is the number of blocks. Another constraint is the maximum input image size, again imposed by Sipeed. Based on the experiments, it can only handle models with an input image size of $224\times224$ pixels or less. In order to achieve more comparable values, experiments on the JeVois camera are done at the two processor frequencies of 1344 MHz and 408 MHz. Fig \ref{fig_conv2d_comp_r} compares the latency with respect to different hyper-parameters. Image size is directly correlated with latency for both cameras, while the number of blocks is inversely correlated with latency for the Sipeed camera.
It is therefore better to use large blocks for best performance. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{lat_all_comb.png} \caption{Latency comparison of two cameras, running Conv2D models (first row) and DepthwiseConv2D models (second row).} \label{fig_conv2d_comp_r} \end{figure} Fig \ref{fig_conv2d_comp_f} and Fig \ref{fig_depth_comp_f} compare throughput values with respect to the number of parameters. Throughput is inversely related to the number of parameters for the Sipeed camera. In the case of the JeVois camera, even though an inverse relation exists, some smaller models perform worse than larger models. \begin{figure}[h!] \centering \includegraphics[scale=0.26]{conv2d_comp_f.png} \caption{Throughput comparison of two cameras, running Conv2D models} \label{fig_conv2d_comp_f} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.26]{depth_comp_f.png} \caption{Throughput comparison of two cameras, running DepthwiseConv2D models} \label{fig_depth_comp_f} \end{figure} \subsection{Standard Architectures} For the second set of benchmarks, we selected two popular embedded deep learning architectures, MobileNet and YOLO, for performance analysis. MobileNet \cite{howard2017mobilenets} is a CNN architecture designed for mobile and embedded vision applications where computing power is limited. It provides high accuracy while the size of the network is relatively small. This architecture is based on depthwise separable convolutions, making it suitable for building small models matched to mobile and embedded vision applications. YOLO is a real-time object detection architecture designed for real-time processing. It uses a multi-scale training method to detect objects. It divides the image into a grid and predicts bounding boxes, which are drawn around detected objects. Predicted probabilities for each region are then calculated based on the weights associated with the probabilities\cite{redmon2017yolo9000}.
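As a schematic illustration only (a simplification for exposition, not the paper's or YOLO's actual implementation), the grid idea can be pictured as mapping each predicted box center to the grid cell responsible for it:

```python
def responsible_cell(cx, cy, img_w, img_h, grid=7):
    """Return (row, col) of the grid cell containing a predicted box
    center (cx, cy), with pixel coordinates in [0, img_w) x [0, img_h).
    Schematic only; the grid size 7 is an illustrative default."""
    col = cx * grid // img_w
    row = cy * grid // img_h
    return row, col
```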
Table \ref{table_standard_architectures} reports the latency and throughput values for both cameras when running MobileNet and YOLO. The deployed MobileNet is version 1 with a 0.75x channel count, $224\times224$ input image size, and quantized weights, while YOLO consists of a 20-class detector with a $320\times240$ pixel input image size and quantized weights. As the results show, Sipeed performs better than JeVois for both models with respect to latency. When comparing throughput, Sipeed has higher throughput for the MobileNet architecture, but JeVois has higher throughput for the YOLO architecture. This occurs because the last layer of YOLO is a convolution layer, which increases the time taken to process the results from the final layer. \begin{table}[h] \caption{Results of standard architectures experiment} \label{table_standard_architectures} \begin{center} \begin{tabular}{|l|l|c|c|} \hline Architecture & Camera & Latency [ms] & Throughput [fps] \\ \hline \hline \multirow{3}{*}{MobileNet} & JeVois (408MHz) & 397 & 3.3 \\ & JeVois (1344MHz) & 125 & 7.6 \\ & Sipeed & 38 & 26.2 \\ \hline \multirow{3}{*}{YOLO} & JeVois (408MHz) & 3275 & 113 \\ & JeVois (1344MHz) & 1320 & 140 \\ & Sipeed & 24 & 21 \\ \hline \end{tabular} \end{center} \end{table} \subsection{Analysis} Based on the performance analysis, we can conclude the following: (i) in both cameras, latency increases and throughput decreases as the number of parameters grows; (ii) Sipeed performs better than JeVois in terms of latency and throughput because of its dedicated CNN accelerator, but can only support limited architectures; (iii) JeVois, on the contrary, can execute a wide range of deep learning models with comparable performance. \section{GRASP VERIFICATION} The grasp verification task is formulated as an image classification problem in which the camera perceives the gripper and has to classify between two states, ``grasped'' and ``not grasped''.
For this purpose, the JeVois camera is mounted on the youBot gripper (see Fig \ref{fig_gripper}), a custom dataset is collected, image classification is performed using different CNN architectures, and the results are evaluated. The dataset contains more than 4000 images and sufficiently generalizes various grasping conditions. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{gripper_annotated.png} \caption{youBot gripper with mounted JeVois camera} \label{fig_gripper} \end{figure} \subsection{System Design} The limitations enforced by the hardware lead to a size limit on the deep learning models, which in turn limits their learning capacity. This limitation of the model is compensated for in the integration of the machine vision camera. The factors considered during the integration are (i) the field of view of the camera, (ii) the force applied to the object being grasped, and (iii) the lighting conditions of the environment. The field of view (FOV) of the camera has a substantial impact on the complexity of the vision problem. A larger FOV causes information overload, making the task difficult to learn, while a smaller FOV causes a loss of information required for the task. Since most of the grasp information is at the gripper, the camera is placed in such a way that the gripper fingers are always visible during the grasp action. The force applied by the gripper has an impact on the bending of the gripper and is a good source of visual confirmation of a tight grasp; therefore, during the experiments the objects are grasped with the required force. Lighting conditions also affect the vision capabilities; therefore, normal industrial lighting conditions, with illuminance ranging from 50 to 500 lux, are assumed for our experiments. \subsection{Training and Evaluation} The architectures selected for evaluation are MobileNet, a single-block ResNet, and a custom CNN architecture based on the insights from the benchmarking experiment.
TensorFlow was used to model the architectures and the training was performed offline on an Intel CPU. \subsubsection{MobileNet} A fine-tuned MobileNet model is created by modifying a MobileNet model with a 0.5x channel count that is pre-trained on the ImageNet dataset. The last 6 layers of the model are removed, and then the last 12 layers are trained on the collected dataset. The final model contains 830,562 parameters, of which 401,922 are trainable. \subsubsection{Single Block ResNet} The ResNet architecture follows a framework called deep residual learning, designed to solve the degradation problem. To turn a plain network into a residual network, shortcut connections are inserted between groups of layers \cite{he2016deep}. For this experiment, the model is reduced to a single-block ResNet architecture. The final model contains a total of 87,170 parameters, of which 85,762 are trainable. \subsubsection{Custom CNN Models} Here, the two CNN models introduced in the benchmarking are trained. These models are constructed from simple CNN blocks and are small compared with the MobileNet or ResNet architectures. The model with Conv2D layers contains 66,194 parameters, and the model with DepthwiseConv2D layers contains 248 parameters, all trainable. \begin{figure}[b] \centering \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{sal_map_closed.jpg} \caption{Grasped} \end{subfigure} \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{sal_map_notclosed.jpg} \caption{Not Grasped} \end{subfigure} \caption{Saliency map of the trained MobileNet model; darker pixels contribute more towards the prediction.} \label{fig:saliency_map} \end{figure} \begin{table}[h!]
\caption{Summary of model training} \label{table_train_all} \begin{center} \begin{tabular}{|l|l|c|c|} \hline Architecture & Parameter & Train & Validate \\ \hline \hline \multirow{2}{*}{MobileNet} & Accuracy & 0.98 & 0.97 \\ & Loss & 0.03 & 0.12 \\ \hline \multirow{2}{*}{ResNet} & Accuracy & 0.94 & 0.97 \\ & Loss & 0.13 & 0.08 \\ \hline \multirow{2}{*}{Conv2D} & Accuracy & 0.98 & 1.00 \\ & Loss & 0.03 & 0.01 \\ \hline \multirow{2}{*}{DepthwiseConv2D} & Accuracy & 0.83 & 0.95 \\ & Loss & 0.39 & 0.21 \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[h!] \caption{Summary of model performance} \label{table_thr_lat_youBot} \begin{center} \begin{tabular}{|l|c|c|c|} \hline Architecture & Latency [ms] & Throughput [fps] & Parameters \\ \hline \hline MobileNet & 75 & 12 & 830,562 \\ \hline ResNet & 140 & 6.9 & 87,170 \\ \hline Conv2D & 160 & 5.8 & 66,194 \\ \hline DepthwiseConv2D & 17 & 43 & 248 \\ \hline \end{tabular} \end{center} \end{table} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{youbot_steps_small.jpg} \caption{youBot performing pick action, along with the inferred grasp verification status. First row: from the observer's view; second row: from the camera's view.} \label{fig_youBot_steps} \end{figure*} MobileNet achieves 98\% accuracy in training and 97\% accuracy in validation. Even with a larger number of parameters, it has the lowest latency and highest throughput compared to ResNet or Conv2D. On analysis of the saliency map, as shown in Fig \ref{fig:saliency_map}, we observe that the gripper curvature and gripper tip are the most important features contributing towards the decision. \subsection{Evaluation of Integrated System} To investigate the performance of the trained model in the overall integrated setup, we selected several objects with varying shape, weight, and texture.
For each object, three distinct grasp variations were tested: (i) grasping with different forces, (ii) grasping with different backgrounds, and (iii) grasping in varying lighting conditions. In each of these variations, the robot picks each object from different picking positions, lifts it, and then verifies whether the grasp was successful. Fig \ref{fig_youBot_steps} provides an overview of a single run of the integrated system experiment. Based on the training accuracy, we selected the MobileNet architecture to run inference on the JeVois camera. The integrated system was able to verify the grasp for all the objects with 97\% per-frame accuracy and 100\% per-run accuracy. \section{CONCLUSIONS AND FUTURE WORK} This paper provides a vision-based deep learning solution for grasp verification using machine vision cameras. We comprehensively benchmarked machine vision cameras capable of deep learning inference. Based on the benchmarking results, the JeVois camera was integrated with the KUKA youBot gripper for grasp verification. The dataset collected from the integrated setup was then used to evaluate the performance of four deep learning architectures. Finally, the trained MobileNet-based grasp detection model was deployed and evaluated with different test objects, achieving 97\% per-frame accuracy and 100\% per-run accuracy. The dataset, generated models, and other benchmarking results are openly available\footnote{https://github.com/amirhpd/grasp\_verification}. One direction for future work is to increase the capability of the network to detect other semantic information, such as grasp quality and slippage, by redefining the model architecture in light of the camera benchmarking results. \addtolength{\textheight}{-12cm} \section*{ACKNOWLEDGMENT} \small The authors gratefully acknowledge the on-going support of the Bonn-Aachen International Center for Information Technology. \bibliographystyle{./IEEEtran}
\section{Introduction} Giant gravitons\cite{McGreevy:2000cw} provide an ideal laboratory in which non-perturbative effects in string theory can be studied. The operators dual to giant gravitons moving in the AdS$_5\times$S$^5$ background are known\cite{Balasubramanian:2001nh},\cite{Corley:2001zk},\cite{Berenstein:2004kk}; further, they enjoy non-renormalization properties, so that certain computations done at weak coupling can reliably be extrapolated to strong coupling. This is important since we would like to compare field theory results (which we obtain at {\it weak coupling} where we can actually do calculations) with results from the dual quantum gravity defined on a large space with small curvature (which should reproduce the {\it strong coupling} dynamics of the quantum field theory)\cite{Maldacena:1997re}. The giant gravitons we consider in this article correspond to ${1\over 2}$ BPS operators in the ${\cal N}=4$ super Yang-Mills theory. These giant gravitons can be excited by attaching open strings to them. Operators dual to the open string plus giant graviton system were proposed in \cite{Balasubramanian:2004nb}\footnote{See \cite{Balasubramanian:2002sa},\cite{Strings},\cite{CuntzChain},\cite{deMelloKoch:2005jg},\cite{Berenstein:2006qk},\cite{adsgiant},\cite{strings}, \cite{Shanin} for further studies of non-BPS excitations. Some of these excitations have been interpreted as open strings attached to giant gravitons.}. Since the worldvolume of a giant graviton is a compact space, the Gauss law imposes strict constraints on the allowed open string excitations. It is a nontrivial piece of evidence for the proposal of \cite{Balasubramanian:2004nb}, that the operators dual to the open string plus giant system are perfectly consistent with these constraints. Recently, a graphical notation for these operators together with the technology to compute free field theory correlation functions has been developed in \cite{jelena}. 
Our goal in this article is to obtain the one loop matrix of anomalous dimensions of these operators. In a remarkable paper, Minahan and Zarembo\cite{Minahan:2002ve} showed that the spectrum of one loop anomalous dimensions of operators dual to closed string states, in a subsector of the theory, gives rise to an integrable $SO(6)$ spin chain. This result was generalized to include the full set of local operators of the theory\cite{Beisert:2003yb}. The full planar one loop spectrum of anomalous dimensions gives an integrable spin chain model that can be solved by Bethe-Ansatz techniques\cite{Beisert:2003yb}. A similar approach for operators dual to open strings is frustrated by the fact that, since the open string and giant can exchange momentum, the number of sites of the open string lattice becomes a dynamical variable\footnote{An exception to this is the case of an open string attached to a maximal giant graviton\cite{Strings}.}. This was circumvented in \cite{CuntzChain} by introducing a Cuntz oscillator chain. Restricting to the $SU(2)$ sector, the spin chain is obtained by mapping one of the matrices, say $Z$, into a spin up and the other, say $Y$, into a spin down. In contrast to this, the Cuntz chain uses the $Y$s to set up a lattice which is populated by the $Z$s. Thus the number of sites in the Cuntz chain is fixed; the fact that the open string can exchange momentum with the giant is reflected in the fact that there are sources and sinks (at the endpoints of the string) for the particles on the chain. The precise structure of these boundary interactions is rather complicated; indeed since the brane can exchange momentum with the string, the brane will in general be deformed by these boundary interactions. The goal of this article is to determine this Cuntz chain Hamiltonian for a single string attached to an arbitrary system of giant gravitons. In particular, this entails accounting for back reaction on the giant graviton. 
In section 2 we start by recalling the definition of the operators dual to a giant graviton with a single string attached. This allows us to introduce the notation we use for Cuntz chain states. We also recall the bulk terms in the Cuntz oscillator chain Hamiltonian that are independent of the brane system that the open string is attached to. In section 3 we describe how to obtain the boundary interaction terms in the Hamiltonian for an arbitrary open string/brane bound state system. Section 4 discusses the numerically tractable toy model obtained by considering a string with a single site. We come to the disappointing conclusion that our Hamiltonian does not accurately describe the open string dynamics for this toy model. In section 5 we obtain sigma models that describe the continuum limit of our Cuntz chains. Our results suggest that the AdS giant gravitons are unstable. Finally, in section 6, we present our conclusions. \section{Attaching Open Strings to Giant Gravitons} In this section we will introduce the operators in ${\cal N}=4$ super Yang-Mills theory that are dual to an open string plus giant graviton system. These operators were originally introduced in \cite{Balasubramanian:2004nb}. Our goal in this article is to obtain the one loop matrix of anomalous dimensions of these operators. We will do this by mapping the spectrum of anomalous dimensions into a Cuntz oscillator chain model\cite{CuntzChain}. The dynamics of the Cuntz chain has two contributions, one coming from the bulk of the string and one from the end points. The bulk terms, which are independent of the details of the brane the open string is attached to, are known\cite{CuntzChain}. These bulk terms are briefly reviewed in this section. The end point interactions describe how the open string interacts with the giant it is attached to, and consequently, depends sensitively on the details of the brane state. 
One of the main results of this article is the computation of these end point interactions. This is dealt with in the next section. We study the Lorentzian ${\cal N}=4$ super Yang-Mills theory on $R\times S^3$. The $1/2$-BPS (and systematically small deformations of these) states of the theory on $R\times S^3$ can be described in the s-wave reduction of the Yang-Mills theory, i.e. in a matrix quantum mechanics \cite{Berenstein:2004kk}. According to the state-operator correspondence of conformal field theory, the generator associated to dilatations on $R^4$ becomes the Hamiltonian for the theory on $R\times S^3$. The action of ${\cal N}=4$ super Yang-Mills theory on $R\times S^3$ is $$ S={N\over 4\pi \lambda}\int dt \int_{S^3} {d\Omega_3\over 2\pi^2} \left( {1\over 2}(D\phi^i)(D\phi^i)+{1\over 4}\big(\big[\phi^i,\phi^j\big]\big)^2 -{1\over 2}\phi^i\phi^i +\dots \right),$$ where $\lambda =g_{YM}^2N$ is the 't Hooft coupling, $i,j=1,...,6$ and $\dots$ are the fermion and the gauge kinetic terms in the action which we will not need here. The mass term arises from conformal coupling to the metric of $S^3$. Group the six real scalars into three complex fields $$ Z=\phi^1+i\phi^2,\qquad Y=\phi^3+i\phi^4,\qquad X=\phi^5+i\phi^6 .$$ In what follows we use these complex combinations. \subsection{Operators dual to Excited Giants} The dual of a giant graviton is a Schur polynomial\cite{Corley:2001zk}\footnote{In this paper we study the theory with gauge group $U(N)$. For the extension to gauge group $SU(N)$, one needs to account for the fact that the $Z$s in this case are traceless. 
See \cite{deMelloKoch:2004ws} for further details.} \begin{equation} \chi_R (Z)={1\over n!}\sum_{\sigma\in S_n}\chi_R (\sigma )\mbox{Tr\,} (\sigma Z^{\otimes n}), \label{Schur} \end{equation} $$\mbox{Tr\,} (\sigma Z^{\otimes n}) =Z^{i_1}_{i_{\sigma (1)}}Z^{i_2}_{i_{\sigma (2)}}\cdots Z^{i_{n-1}}_{i_{\sigma (n-1)}}Z^{i_n}_{i_{\sigma (n)}}.$$ Schur polynomials are labeled by Young diagrams, denoted $R$ above. A Schur polynomial labeled by a Young diagram with a single column of length $O(N)$ is dual to a sphere giant\cite{Balasubramanian:2001nh}; a Schur polynomial labeled by a Young diagram with a single row of length $O(N)$ is dual to an AdS giant\cite{Corley:2001zk},\cite{adsgiant}. It is natural to guess that a Schur polynomial labeled by a Young diagram with $O(1)$ columns and $O(N)$ rows is dual to a bound state of sphere giants and that a Schur polynomial labeled by a Young diagram with $O(N)$ columns and $O(1)$ rows is dual to a bound state of AdS giants. One can excite giant gravitons by attaching open strings to them. Each open string is described by a word, $W$, with $O(\sqrt{N})$ letters. These letters can in principle be fermions, Higgs fields or covariant derivatives of these fields. We will consider open strings moving with a large angular momentum on the $S^5$, in the direction corresponding to $Y$. The number of $Y$ fields tells us the spacetime angular momentum of the string state. To describe strings moving with a large angular momentum on the $S^5$, take words with $O(\sqrt{N})$ $Y$ letters in the word. To describe different string states, insert letters into this word. The remaining letters can be put into a correspondence with oscillators of the string worldsheet theory\cite{Balasubramanian:2002sa}. In this article we will consider only the open string states obtained by inserting $Z$ Higgs fields so that the open strings can have a component of angular momentum in the direction of the giant. 
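As a minimal concrete illustration of the definition (\ref{Schur}), consider $n=2$. The sum runs over $S_2=\{1,(12)\}$, with $\mbox{Tr\,}(1\cdot Z^{\otimes 2})=(\mbox{Tr\,} Z)^2$ and $\mbox{Tr\,}((12)\cdot Z^{\otimes 2})=\mbox{Tr\,} Z^2$. The two irreducible characters of $S_2$ both equal $1$ on the identity and equal $\pm 1$ on $(12)$, giving
$$\chi_{(2)}(Z)={1\over 2}\left((\mbox{Tr\,} Z)^2+\mbox{Tr\,} Z^2\right),\qquad \chi_{(1,1)}(Z)={1\over 2}\left((\mbox{Tr\,} Z)^2-\mbox{Tr\,} Z^2\right),$$
for the single-row and single-column Young diagrams respectively.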
Our labeling for the open string words is the following (there are $L+1$ $Y$s in $W$) $$ (W(\{n_1,n_2,\cdots , n_L\}))^i_j= (YZ^{n_1}YZ^{n_2} Y\cdots YZ^{n_L}Y)^i_j.$$ Geometrically we are thinking of the $Y$'s as forming a lattice that is populated with $Z$'s. The numbers $n_i$ give the occupation number representation of the $Z$s in this lattice. The BMN loops\cite{Berenstein:2002jq} are given by moving to momentum space on this lattice. The endpoints of the open string are given by the first and $L+1$th $Y$ of the above word. The proposal of \cite{Balasubramanian:2004nb} for the operators dual to excited giant gravitons inserts the words $(W^{(a)})^j_i$ describing the open strings (one word for each open string) into the operator describing the system of giant gravitons \begin{equation} \chi_{R,R_1}^{(k)}(Z,W^{(1)},...,W^{(k)})={1\over (n-k)!} \sum_{\sigma\in S_n}\mbox{Tr\,}_{R_1}(\Gamma_R(\sigma))\mbox{Tr\,}(\sigma Z^{\otimes n-k}W^{(1)}\cdots W^{(k)}), \label{restrictedschur} \end{equation} $$\mbox{Tr\,} (\sigma Z^{\otimes n-k}W^{(1)}\cdots W^{(k)})= Z^{i_1}_{i_{\sigma (1)}}Z^{i_2}_{i_{\sigma (2)}}\cdots Z^{i_{n-k}}_{i_{\sigma (n-k)}}(W^{(1)})^{i_{n-k+1}}_{i_{\sigma (n-k+1)}}\cdots (W^{(k)})^{i_{n}}_{i_{\sigma (n)}}.$$ The label $R$ of the giant graviton system is a Young diagram with $n$ boxes, i.e. it also labels a representation of the symmetric group $S_n$. $\Gamma_R(\sigma )$ is the matrix representing $\sigma$ in irreducible representation $R$ of $S_n$. The representation $R_1$ is a Young Diagram with $n-k$ boxes, i.e. it labels a representation of $S_{n-k}$. By taking an $S_{n-k}$ subgroup of $S_n$ (there are many different ways to get this subgroup - see \cite{Balasubramanian:2004nb},\cite{jelena}), $R_1$ will be one of the representations subduced. $\mbox{Tr\,}_{R_1} (\cdot)$ is an instruction to trace only over the subduced $R_1$ subspace. 
In \cite{jelena} this operator was called a restricted Schur polynomial of representation $R$ with $R_1$ the representation of the restriction. The number of boxes in $R_1$ gives the number of $Z$'s in the giant system. Further details of the construction of this operator are not needed in this article. We refer the interested reader to \cite{jelena} for additional details. In this article we consider the case of a single string, that is, $k=1$. \begin{figure}[t]\label{fig:cgraph1} \begin{center} \includegraphics[height=9cm,width=12cm]{fig8} \caption{The Young diagram shown labels an excited bound state of sphere giants. There is a single open string attached to column $i$.} \end{center} \end{figure} We will use $K$ to denote the total number of $Z$ fields in the operator $\chi_{R,R_1}^{(1)}(Z,W)$ and $J$ to denote the number of $Z$ fields in $W$. Thus, $R_1$ has a total of $K-J$ boxes. It is only when $J+L$ is $O(\sqrt{N})$ and $K$ is $O(N)$ that we can interpret $\chi_{R,R_1}^{(1)}(Z,W)$ as dual to a string plus brane system. Since $R_1$ is obtained from $R$ by removing a single box, we have specified the operator dual to an excited giant graviton if we have given $R$, the open string word and have stated which box is to be removed to obtain $R_1$. We will use the graphical notation of \cite{jelena} in which the operator is labeled by the Young diagram $R$ itself, and the box to be removed is indicated by writing the open string word $W$ in it. In figure 1 we have shown the label for a bound state of sphere giants with a single string attached. Later by employing the state operator correspondence of the conformal field theory, we will obtain a Cuntz oscillator chain. Instead of drawing this label, we will denote the state that corresponds to this operator by $|b_0,b_1,...,b_{n-1};W;i\rangle$. The case $n=1$ has been studied in detail\cite{Balasubramanian:2004nb},\cite{Balasubramanian:2002sa},\cite{Strings},\cite{CuntzChain},\cite{Berenstein:2006qk}. 
We will extend the analysis to $n>1$. In figure 2 we have shown the label for a bound state of AdS giants with a single string attached. After employing the state operator map to obtain the Cuntz oscillator chain, we will replace this operator by a corresponding state. We denote the state by $|a_0,a_1,...,a_{n-1};W;i\rangle$ instead of drawing the label. The case $n=1$ was considered in \cite{adsgiant}. However, even for $n=1$, the analysis we perform here is different. For our analysis, the open string word is a lattice made using the $Y$'s; we then populate this lattice with $Z$'s. In \cite{adsgiant}, the open string word is a lattice built using covariant derivatives; again this lattice is populated with $Z$'s. Physically, our open strings have a large momentum on an $S^3$ contained in the S$^5$ while the strings of \cite{adsgiant} have a large momentum on the $S^3$ contained in the AdS$_5$ space. \begin{figure}[t]\label{fig:cgraph2} \begin{center} \includegraphics[height=9cm,width=12cm]{fig9} \caption{The Young diagram shown labels an excited bound state of AdS giants. There is a single open string attached to row $i$.} \end{center} \end{figure} \subsection{Parameter Scaling} We are interested in determining the mixing matrix of anomalous dimensions for the operators dual to excited giant gravitons with a single string attached. To obtain operators dual to giant gravitons, we take $b_0$ to be $O(N)$ and $b_i$ $i=1,...,n-1$ to be $O(1)$ for the sphere giants and $a_0$ to be $O(N)$ and $a_i$ $i=1,...,n-1$ to be $O(1)$ for the AdS giants. We want to compute this mixing matrix to one loop and at large $N$. This is a hard problem: since the number of fields in the giant graviton is $O(N)$, the planar approximation fails. To get an accurate result, we need to contract all of the fields in the two giant gravitons exactly. The number of fields in each word $W$ is $J+L\approx L$ in the case that $J\ll L$ which we assume.
If the $W$ is to be dual to an open string, we need to take $L\sim O(\sqrt{N})$. We will not contract the open strings words exactly - only the planar diagrams are summed. To suppress the non-planar contributions, we need to take ${L^2\over N}\ll 1$. Concretely, we have a double scaling limit in mind, in which the first limit takes $N\to\infty$ holding ${L^2\over N}$ fixed and then the second limit takes the effective genus counting parameter ${L^2\over N}$ to zero. In the dual string theory, taking the limits in this way corresponds to taking the string coupling to zero, in the string theory constructed in a fixed giant graviton background. Finally, we will drop contributions coming from contractions between $Z$s in the open string $W$ and $Z$s associated to the brane system. When computing two point functions in free field theory, as long as the number of boxes in the representation $R$ is less than $O(N^2)$ and the numbers of $Z$'s in the open string is $O(1)$, the contractions between any $Z$s in the open string and the rest of the operator are suppressed in the large $N$ limit\cite{recent}. To ensure that the number of boxes in the representation $R$ is less than $O(N^2)$, we also assume that $n$ is $O(1)$. Other interesting parameters to consider are $N-b_0$ and $N+a_0$. The parameter $N-b_0$ can scale as $O(N)$, $O(\sqrt{N})$ or $O(1)$. We will see, from the results of section 3, that when $N-b_0$ is $O(1)$ the sphere giant boundary interaction is $O({1\over N})$, when $N-b_0$ is $O(\sqrt{N})$ the boundary interaction is $O({1\over\sqrt{N}})$ and when $N-b_0$ is $O(N)$, the boundary interaction is $O(1)$. Since we are interested in the dynamics arising from the boundary interaction, we will assume that $N-b_0$ is $O(N)$. The boundary interaction is always $O(1)$ for the AdS giants because $a_0+N$ is always $O(N)$. Our analysis is only valid if $J$ is $O(1)$. 
Cases in which $J$ becomes large correspond to the situation in which a lot of momentum is transferred from the giant to the open string, presumably signaling an instability. The value of $J$ is not a parameter that we can choose; it is determined by the dynamics of the problem. In what follows, we solve for the value of $J$. In cases where it turns out to be $O(1)$, it can be dropped and back reaction on the giant is not important. In cases where $J$ is large, back reaction is important and the approximations we are employing are no longer valid. The assumption that we can drop non-planar contributions when contracting the open string words breaks down, essentially because as more and more $Z$s hop onto the open string, it starts to grow into a state best described as a giant graviton. One can also no longer neglect the contractions between any $Z$s in the open string and the rest of the operator, presumably because the composite system no longer looks like a string plus giant (which can be separated nicely) but rather like one large deformed membrane. The process in which the word $W$ ``fragments,'' thereby allowing $Y$s to populate more than a single box in $R$, corresponds to a splitting of the original string into smaller strings, which are still attached to the giant. This process was considered in \cite{jelena}; it does not contribute in the large $N$ limit. Finally, there is also a process in which the open string detaches from the brane system and is emitted as a closed string state, so that it no longer occupies any box in $R$. This process also does not contribute in the large $N$ limit\cite{jelena}. In what follows we use the results of \cite{jelena} to contract the fields in the two giant gravitons exactly, and we contract the open string words planarly, ignoring contractions between $Z$s in the open string and the rest of the operator.
\subsection{Cuntz Chain Model} As usual, we can decompose the potential for the scalars into D terms and F terms. The advantage of this decomposition is that for the operators we study here, it is known that at one loop, the D term contributions cancel with the gauge boson exchange and the scalar self energies\cite{Constable:2002hw}. Consequently, we will only consider the planar interactions arising from the F term. For any conformal field theory, we can trade our (local) operators for a set of states. Concretely, this involves quantizing with respect to radial time. Considering a fixed ``radial time'' slice, we obtain a round sphere. In this process, we trade the conformal dimension of our operator for the energy of the corresponding state. As discussed above, we interpret the $Y$ fields in the operator as a ``lattice'' which can be populated by inserting impurities (in this case $Z$'s) into the lattice (between the $Y$'s). The F term interaction preserves the number of $Y$'s (the lattice is not dynamical) and allows impurities to hop between neighboring sites. This interpretation thus maps the problem of determining the anomalous dimensions of operators in the super Yang-Mills theory into the dynamics of a Cuntz oscillator chain. The bulk interactions are described by the Hamiltonian \begin{equation} H_{bulk} = 2\lambda\sum_{l=1}^L \hat{a}_l^\dagger \hat{a}_l -\lambda\sum_{l=1}^{L-1}(\hat{a}_l^\dagger \hat{a}_{l+1} +\hat{a}_l \hat{a}^\dagger_{l+1}), \label{bulk} \end{equation} where $$ \hat{a}_i \hat{a}_i^\dagger = I,\qquad \hat{a}^\dagger_i \hat{a}_i = I-|0\rangle\langle 0|.$$ We will not rederive this Hamiltonian. The interested reader is referred to \cite{Berenstein:2006qk} for the details of this derivation. The first term in the Hamiltonian tells us that each occupied site contributes $2\lambda$ to the energy.
Notice that this contribution is independent of the number of impurities occupying the site, which is a direct consequence of the fact that we only sum planar contractions. This is accounted for by assigning Cuntz oscillators to the impurities, rather than standard bosonic oscillators. The next two terms are hopping terms allowing the impurities to move between sites. Evidently, delocalized impurities lower the energy. To obtain the full Hamiltonian, we need to include the boundary interactions arising from the string/brane system interaction. This interaction, which introduces sources and sinks for the impurities at the boundaries of the lattice, is derived in the next section. \section{Boundary Interactions} One of the interactions we can consider allows a $Z$ to hop from the first or last site of the string onto the giant, or from the giant into the first or last site of the string. In the process, the string exchanges momentum with the giant graviton. In addition to these momentum exchanging processes, there is also a boundary interaction in which a $Z$ belonging to the giant ``kisses'' the first (or last) $Y$ in the open string word so that no momentum is exchanged. Using the formula derived in appendix A, we will be able to derive the term in the Hamiltonian describing the ``hop off'' process, in which a $Z$ hops off the string and onto the giant. Since the Hamiltonian must be Hermitian, we can obtain the ``hop on'' term by daggering the ``hop off'' term. We obtain the momentum conserving boundary interaction by expressing the kiss as a hop on followed by a hop off. We end this section by summarizing our result for the Cuntz oscillator chain Hamiltonian. \subsection{Hop Off Rules} We will start by deriving the hop off interaction, for the case that the open string is attached to a single sphere giant or a single AdS giant. This will serve both to illustrate our method and to show that we recover the known boundary interaction in this case.
We will then generalize to bound states of giant gravitons. This allows us to determine the general structure of the hop off interaction. \subsubsection{Single Giant Graviton} The hopping interaction allows impurities to hop off the string and onto the giant. Concretely, this hop takes $$ W(\{n_1,n_2,\cdots,n_L\})\to ZW(\{n_1-1,n_2,\cdots,n_L\})\quad {\rm or}$$ $$ W(\{n_1,n_2,\cdots,n_L\})\to W(\{n_1,n_2,\cdots,n_L-1\})Z .$$ To determine the corresponding term in the interaction Hamiltonian, we need to be able to express objects like $\chi_{R,R_1}^{(1)}(Z,ZW)$ in terms of $\chi_{S,S_1}^{(1)}(Z,W)$. Using the formulas in appendix A, we have $$\chi_{\young({\,},{\,},{\,},{\,},w)}-\chi_{\young({\,},{\,},{\,},{\,})}\mbox{Tr\,} (w)= -\chi_{\young({\,},{\,},{\,},v)},\qquad v=Zw ,$$ $$\chi_{\young({\,}{\,}{\,}{\,}w)}-\chi_{\young({\,}{\,}{\,}{\,})}\mbox{Tr\,} (w)= \chi_{\young({\,}{\,}{\,}v)},\qquad v=Zw .$$ Using $(1^{b_0})$ to denote the Young diagram with a single column of $b_0$ boxes\footnote{Thus, $(1^0)\ne (1^1)$. $(1^0)$ is the diagram with no boxes; $(1^1)$ has one box.} and $(a_0)$ to denote the Young diagram with a single row of $a_0$ boxes, the above relations can be rewritten, in general, as \begin{equation} \label{sphereidentity} \chi_{(1^{b_0+1}),(1^{b_0})}^{(1)}(Z,W)-\chi_{(1^{b_0})}(Z)\mbox{Tr\,} (W)=-\chi_{(1^{b_0}),(1^{b_0-1})}^{(1)}(Z,ZW), \end{equation} \begin{equation} \label{adsidentity} \chi_{(a_0+1),(a_0)}^{(1)}(Z,W)-\chi_{(a_0)}(Z)\mbox{Tr\,} (W)= \chi_{(a_0),(a_0-1)}^{(1)}(Z,ZW) . \end{equation} We would like to rewrite these statements in terms of the states of the Cuntz oscillator chain. It is convenient to normalize the states of the Cuntz oscillator chain. Normalized states correspond to operators whose two point function is normalized. Using the technology of \cite{jelena} it is a simple task to compute the equal time correlator. 
Using the propagators $$ \langle Z^\dagger_{ij}(t)Z_{kl}(t)\rangle = {4\pi\lambda\over N}\delta_{il}\delta_{jk} = \langle Y^\dagger_{ij}(t)Y_{kl}(t)\rangle ,$$ we obtain $$\langle (\chi_{(1^{b_0+1}),(1^{b_0})}^{(1)}(Z,W))^\dagger \chi_{(1^{b_0'+1}),(1^{b_0'})}^{(1)}(Z,W')\rangle =$$ \begin{equation} \left({4\pi\lambda\over N}\right)^{b_0+h} \delta_{b_0 b_0'}\delta_{WW'}N^{h-1}(b_0+1){N!\over (N-b_0-1)!}, \label{giantoverlap} \end{equation} where we have used $h=J+L$ to denote the number of fields in $W$. This is not the exact result for the two point function. In the language of \cite{jelena}, the $F_1$ open string contraction has been dropped. Relative to the leading term, the dropped term is of order (this is an upper bound for the dropped term, obtained by assuming that the word is made only out of one type of field; if there are both $Z$s and $Y$s in $W$, this number is typically reduced by a factor of $h$) $$ {h(N-b_0)\over N(b_0+1)}. $$ Further, we have only summed the planar diagrams when contracting $W$ and $W'$. If any of the $h$ fields in $W$ are $Z$s, we can have contractions between these fields and the $Z$ fields appearing in the giant. These contractions have been dropped. When computing two point functions in free field theory, as long as the number of boxes in the representation $R$ is less than $O(N^2)$ and the number of $Z$'s in the open string is $O(1)$, the contractions between any $Z$s in the open string and the rest of the operator are suppressed in the large $N$ limit\cite{recent}. The delta function $\delta_{WW'}$ is one if the sets of occupation numbers of the two open strings are equal and is zero otherwise. Next, consider (this is again an upper bound obtained by assuming that the word $W$ is made only out of one type of field) \begin{equation} \langle (\chi_{(1^{b_0})}(Z)\mbox{Tr\,}(W))^\dagger \chi_{(1^{b_0'})}(Z)\mbox{Tr\,}(W')\rangle =\left({4\pi\lambda\over N}\right)^{b_0+h} \delta_{b_0 b_0'}\delta_{WW'}h N^{h}{N!\over (N-b_0)!}.
\label{closedgiant} \end{equation} Compare (\ref{closedgiant}) to (\ref{giantoverlap}): \begin{equation} {h N^{h}{N!\over (N-b_0)!}\over N^{h-1}(b_0+1){N!\over (N-b_0-1)!}}= {N h\over (b_0+1)(N-b_0)}. \label{sublead} \end{equation} This is clearly subleading in our case where $b_0\sim O(N)$, $N-b_0\sim O(N)$ and $h\sim O(\sqrt{N})$. In this regime of parameters, the subleading term is naturally interpreted as a state containing a giant graviton and a closed string. The fact that these closed string contributions are subleading and hence do not contribute in the leading order is a general conclusion valid in all of the situations we consider in this article. The correspondence between operators and (normalized) states of the Cuntz oscillator chain is $$ \chi_{(1^{b_0+1}),(1^{b_0})}^{(1)}(Z,W)\leftrightarrow \sqrt{\left({4\pi\lambda\over N}\right)^{b_0+h}N^{h-1}(b_0+1){N!\over (N-b_0-1)!}}|b_0+1; W; 1\rangle . $$ The hop off interaction acts as $$ H|b_0+1; W(\{n_1,n_2,\cdots,n_L\});1\rangle \to |b_0+1; ZW(\{n_1-1,n_2,\cdots,n_L\});1\rangle .$$ After dropping the closed string contributions, writing things in terms of the states of the Cuntz oscillator chain, and employing (\ref{sphereidentity}) we obtain (we want to consider the hop off process for a giant with momentum $b_0$ and hence we start with a single column containing $b_0+1$ boxes; this is {\it not} the complete hop off interaction - we have only shown the term obtained when a $Z$ hops out of the first site) \begin{eqnarray} \nonumber H|b_0+1; W(\{n_1,n_2,\cdots,n_L\});1\rangle &=& -\sqrt{1-{b_0\over N}}|b_0+1; ZW(\{n_1-1,n_2,\cdots,n_L\});1\rangle\\ \label{boundaryrule} &=&-\sqrt{1-{b_0\over N}}|b_0+2;W(\{n_1-1,n_2,\cdots,n_L\});1\rangle.
\end{eqnarray} If we introduce the operator $\hat{A}^\dagger$ that increases the number of $Z$s in the giant by 1, the boundary hop off interaction can be written as (this term in the Hamiltonian is positive because the piece of the F term that generates this interaction is negative and we have a $-$ sign in our rule (\ref{boundaryrule})) \begin{equation} H=\lambda\sqrt{1-{b_0\over N}} \hat{A}^\dagger \hat{a}_1 . \label{hopoffsphere} \end{equation} This interaction Hamiltonian vanishes for the maximal giant\cite{Strings} and is highly suppressed for giants which are close to maximal\cite{CuntzChain}. Notice that the interaction is not proportional to the number of $Z$'s in the giant. For this reason, we choose the oscillator $\hat{A}^\dagger$ to be a Cuntz oscillator $$ \hat{A}\hat{A}^\dagger = I,\qquad \hat{A}^\dagger \hat{A} = I-|0\rangle\langle 0|.$$ Since the total number of $Z$s in the operator is conserved, $b_0$ is the difference between the total number of $Z$s ($=K$) and the number of $Z$s on the string. 
This gives the expression $$ b_0=K-\sum_{n=1}^{\infty}\sum_{l=1}^L (\hat{a}_l^\dagger)^n (\hat{a}_l)^n .$$ Finally, since the impurity can either hop out of the first or the last site, we can write the complete hop off interaction, for a string attached to a single sphere giant, as $$H=\lambda \sqrt{1-{K-\sum_{n=1}^{\infty}\sum_{l=1}^L (\hat{a}_l^\dagger)^n (\hat{a}_l)^n-1\over N}} (\hat{A}^\dagger \hat{a}_1 + \hat{A}^\dagger \hat{a}_L) .$$ For the AdS giants, the relevant two point functions are $$\langle (\chi_{(a_0+1),(a_0)}^{(1)}(Z,W))^\dagger \chi_{(a'_0+1),(a'_0)}^{(1)}(Z,W')\rangle =\delta_{a_0 a_0'}\delta_{WW'}N^{h-1}(a_0+1){(N+a_0)!\over N!}\left( {4\pi\lambda\over N}\right)^{a_0+h},$$ $$\langle (\chi_{(a_0)}(Z)\mbox{Tr\,} (W))^\dagger \chi_{(a'_0)}(Z)\mbox{Tr\,} (W')\rangle =\delta_{a_0 a_0'}\delta_{WW'} h N^h {(N+a_0-1)!\over N!}\left( {4\pi\lambda\over N}\right)^{a_0+h}.$$ In the first correlator, we have again dropped the $F_1$ open string contraction; relative to the leading term it is of order (this is again an upper bound for the dropped term, obtained by assuming that the open string word is made only out of one type of field) $$ {h(N+a_0)\over N(a_0+1)},$$ which is subleading for $h\sim O(\sqrt{N})$ and $a_0\sim O(N)$. Looking at the second correlator, we again conclude that the closed string contribution is subleading.
Writing (\ref{adsidentity}) in terms of Cuntz oscillator chain states, we obtain (we want to consider the hop off process for a giant with momentum $a_0$ and hence start with a single row containing $a_0+1$ boxes; this term is obtained if $Z$ hops out of the first site) $$ H|a_0+1; W(\{n_1+1,n_2,\cdots,n_L\}); 1\rangle =\sqrt{1+{a_0\over N}}|a_0+2;W(\{n_1,n_2,\cdots,n_L\}); 1\rangle .$$ Again, it is a simple matter to modify the above argument to prove that (this term is obtained if $Z$ hops out of the last site) $$ H|a_0+1; W(\{n_1,n_2,\cdots,n_L+1\}) ; 1\rangle =\sqrt{1+{a_0\over N}}|a_0+2;W(\{n_1,n_2,\cdots,n_L\}); 1\rangle .$$ Using these identities, we find that the hop off interaction, for a string attached to a single AdS giant, is \begin{equation} H= -\lambda\sqrt{1+{K-\sum_{n=1}^{\infty}\sum_{l=1}^L (\hat{a}_l^\dagger)^n (\hat{a}_l)^n-1\over N}} (\hat{A}^\dagger \hat{a}_1 + \hat{A}^\dagger \hat{a}_L) . \label{hopoffads} \end{equation} Notice that now the interaction is enhanced as the momentum of the giant grows, in contrast to the sphere giant. This structure of the boundary interaction was also obtained in \cite{adsgiant}, where the $Z$s hop on a lattice made from covariant derivatives. The relative sign difference between (\ref{hopoffsphere}) and (\ref{hopoffads}) is not meaningful; it can be eliminated, for example, by redefining the phases of the sphere giant states. \subsubsection{Boundstate of Giants} The first boundstate we will consider is a boundstate of two sphere giants. A Young diagram with $b_0+b_1$ boxes in the first column and $b_0$ boxes in the second column will be denoted as $(2^{b_0}1^{b_1})$.
Then, using the formula derived in appendix A, a little algebra shows that $$\chi^{(1)}_{(2^{b_0}1^{b_1}),(2^{b_0}1^{b_1-1})}(Z,ZW) = -{b_1(b_1+2)\over (b_1+1)^2}\left[ \chi^{(1)}_{(2^{b_0}1^{b_1+1}),(2^{b_0}1^{b_1})}(Z,W) -\chi_{(2^{b_0}1^{b_1})}(Z)\mbox{Tr\,}(W)\right]$$ \begin{equation} \label{twobound} +{b_1\over (b_1+1)^2}\left[ \chi^{(1)}_{(2^{b_0+1}1^{b_1-1}),(2^{b_0}1^{b_1})}(Z,W) -\chi_{(2^{b_0}1^{b_1})}(Z)\mbox{Tr\,}(W)\right], \end{equation} $$\chi^{(1)}_{(2^{b_0}1^{b_1}),(2^{b_0-1}1^{b_1+1})}(Z,ZW) = -{b_1+2\over (b_1+1)^2}\left[ \chi^{(1)}_{(2^{b_0}1^{b_1+1}),(2^{b_0}1^{b_1})}(Z,W) -\chi_{(2^{b_0}1^{b_1})}(Z)\mbox{Tr\,}(W)\right]$$ \begin{equation} \label{twoboundagain} -{b_1(b_1+2)\over (b_1+1)^2}\left[ \chi^{(1)}_{(2^{b_0+1}1^{b_1-1}),(2^{b_0}1^{b_1})}(Z,W) -\chi_{(2^{b_0}1^{b_1})}(Z)\mbox{Tr\,}(W)\right]. \end{equation} For the limit that we consider, $(N-b_0-b_1)=O(N)$, $b_0=O(N)$ and $h=O(\sqrt{N})$, where we again use $h$ to denote the total number of fields in $W$. In this case, we again find that the closed string contributions are not important in the leading order and can be dropped. To interpret these formulas it is useful to rewrite them, for some particular values, employing our graphical notation. If $b_1=0$, the string can only be attached to the second column. In this case, the right hand side of (\ref{twobound}) vanishes. This is as expected, since the left hand side would correspond to the case that $b_1=0$ and we attached the string to the first column, which is not an allowed state\footnote{It's not allowed because if you remove the open string you are not left with a valid Young diagram, i.e. for this state $R_1$ in (\ref{restrictedschur}) is not a valid label.}.
Looking at (\ref{twoboundagain}), we see that the only surviving term on the right hand side corresponds to the case that the open string is attached to the first column: $$\chi_{\young({\,}{\,},{\,}{\,},{\,}{\,},{\,}{x})}\to \chi_{\young({\,}{\,},{\,}{\,},{\,}{\,},{\,}{\,},{w})},\qquad x=Zw.$$ There is a subleading term (suppressed because $b_0$ is $O(N)$) which has been dropped. It has the form \begin{equation} \chi_{\young({\,}{\,},{\,}{\,},{\,}{\,},{\,}{x})}\to \chi_{\young({\,}{\,}{w},{\,}{\,},{\,}{\,},{\,}{\,})},\qquad x=Zw. \label{dropped} \end{equation} The free fermion state corresponding to a Young diagram with the above shape, in the case that the number of rows is $O(N)$ and the number of columns is $O(1)$, would contain one fermion just above the Fermi surface and two holes deep in the Fermi sea. Thus, the interpretation of the right hand side of (\ref{dropped}) is in terms of a bound state of sphere giants together with a closed string (graviton) excitation\cite{Berenstein:2004kk},\cite{Lin:2004nb}. The fact that it is $O({1\over b_0})=O({1\over N})$ is expected because the closed string coupling constant is ${1\over N}$. In view of this example, we find a natural interpretation for the coefficients $$C^1_{b_1} = {b_1 (b_1+2)\over (b_1+1)^2},$$ appearing in (\ref{twobound}) and (\ref{twoboundagain}). These coefficients switch the interactions off gracefully. Indeed, $C^1_{b_1}$ vanishes when $b_1$ vanishes, but very rapidly approaches 1 as $b_1$ is increased. These coefficients multiply the terms for which the open string remains attached to the same giant graviton. As $b_1$ is increased, the remaining coefficients in (\ref{twobound}) and (\ref{twoboundagain}) rapidly approach ${1\over 1+b_1}$. For these terms, the string swaps from one giant to the other. Interpret the number of boxes separating the box that the string starts in from the box that the string ends up in, as we move along the right hand side of the Young diagram, as a distance.
This distance is $r=1+b_1$, so that the term in which the string swaps the giant it is attached to is essentially a ${1\over r}$ interaction. The brane worldvolume theory describing the dynamics of the open strings attached to these giants is expected to be a $3+1$ dimensional emergent Yang-Mills theory\cite{Balasubramanian:2001dx},\cite{Balasubramanian:2004nb}. The ${1\over r}$ potential, which would arise from the exchange of massless particles in $3+1$ dimensions, thus looks rather natural. In this article, we will call the limit in which we see these nice simplifications the effective field theory limit. This distance $r$ is related to the radial coordinate of the two dimensional $y=0$ plane on which the LLM boundary conditions are specified\cite{Lin:2004nb}. Both this distance and the ${1\over r}$ interaction were already visible in \cite{jelena}. Finally, note that in the $b_1\to 0$ limit, the two sphere giants carry exactly the same momentum. Since their momenta determine their radii, in this limit the two brane worldvolumes become coincident. Thus, the $C^1_{b_1}$ coefficient switches off a short distance membrane interaction. We now want to write the boundary interaction term that acts on the Cuntz chain states corresponding to normalized operators.
The two point functions we need to evaluate are $$\langle (\chi^{(1)}_{(2^{b_0}1^{b_1}),(2^{b_0}1^{b_1-1})}(Z,W))^\dagger \chi^{(1)}_{(2^{b'_0}1^{b'_1}),(2^{b'_0}1^{b'_1-1})}(Z,W')\rangle =$$ $$ \delta_{b_0 b'_0}\delta_{b_1 b'_1}\delta_{WW'}N^{h-1}{b_1 b_0\over b_1+1} {N!(N+1)!\over (N-b_0-b_1)!(N-b_0+1)!} \left({4\pi\lambda\over N}\right)^{2b_0+b_1+h-1},$$ $$\langle (\chi^{(1)}_{(2^{b_0}1^{b_1}),(2^{b_0-1}1^{b_1+1})}(Z,W))^\dagger \chi^{(1)}_{(2^{b'_0}1^{b'_1}),(2^{b'_0-1}1^{b'_1+1})}(Z,W')\rangle =$$ $$\delta_{b_0 b'_0}\delta_{b_1 b'_1}\delta_{WW'}N^{h-1}{(b_1+2) b_0\over b_1+1} {N!(N+1)!\over (N-b_0-b_1)!(N-b_0+1)!} \left({4\pi\lambda\over N}\right)^{2b_0+b_1+h-1},$$ $$\langle (\chi^{(1)}_{(2^{b_0}1^{b_1}),(2^{b_0}1^{b_1-1})}(Z,W))^\dagger \chi^{(1)}_{(2^{b'_0}1^{b'_1}),(2^{b'_0-1}1^{b'_1+1})}(Z,W')\rangle = 0.$$ The $F_1$ contraction which has again been dropped, is subleading; to verify this, recall that in the limit we consider $b_0=O(N)$, $b_1=O(1)$, $N-b_0-b_1=O(N)$ and $h=O(\sqrt{N})$. We can now write down the action of the hop off interaction on the Cuntz chain states. To write this interaction, again introduce the Cuntz oscillators $\hat{a}_l$ and $\hat{a}_l^\dagger$ for impurities on the string. It is tempting to introduce a pair of Cuntz oscillators, one for each giant graviton. We have not employed this description. To motivate why we have used a different approach, let $\hat{A}_i$ denote the operator that will remove a box from the $i$th column and $\hat{A}_i^\dagger$ the operator that will insert a box into the $i$th column. Thus, for example, we have $$ \hat{A}_1\,\,{}_{\yng(2,2,2,1,1)}={}_{\yng(2,2,2,1)},\qquad \hat{A}_2^\dagger\,\, {}_{\yng(2,2,2,1,1)}={}_{\yng(2,2,2,2,1)}.$$ When these giant oscillators act on a Young diagram, they must produce another Young diagram. 
This requirement implies that, for example, $$ \hat{A}_1\,\,{}_{\yng(2,2,2)}=0,\qquad \hat{A}_2^\dagger\,\, {}_{\yng(2,2,2)}=0.$$ Relations like these can be used to show that the oscillators for the two giants {\it do not commute}. Indeed, to see that $\hat{A}_1^\dagger$ and $\hat{A}_2^\dagger$ can't commute, note that $$ \hat{A}_2^\dagger \hat{A}_1^\dagger \,\, {}_{\yng(2,2)}= {}_{\yng(2,2,2)},\,\,\,\,\,\qquad {\rm but}\,\,\,\,\,\qquad \hat{A}_1^\dagger \hat{A}_2^\dagger \,\, {}_{\yng(2,2)}= 0.$$ Due to these complications, we have pursued an alternative description of the giants. Our alternative description involves associating a one dimensional lattice to each Young diagram. Our lattice has a total of $N$ sites; each site can be occupied by an arbitrary number of particles. If the Young diagram has $O(1)$ columns that each have $O(N)$ rows (a bound state of sphere giants), the number of particles in the lattice is equal to the number of sphere giants in the boundstate. We will refer to this lattice as the giant lattice to distinguish it from the string lattice. The translation between the Young diagram and the giant lattice is given by setting the occupation number of lattice site $i$ to $$ n_i = r_i - r_{i+1},\qquad i=1,\dots ,N,$$ where $r_i$ is the number of boxes in the $i$th row of the Young diagram and we set $r_{N+1}=0$. There is one marked site - the site that the open string occupies. The marked site is indicated by writing a bar above the occupation number. Two examples illustrate the lattice notation: $$ \young({\,}{\,}{\,},{\,}{w},{\,})\leftrightarrow \{ n_1= 1, n_2=\bar{1}, n_3=1\}\qquad \young({\,}{\,}{\,}{\,}{\,},{\,}{w})\leftrightarrow \{ n_1= 3, n_2=\bar{2} \} . $$ We will label kets of the giant lattice by their occupation numbers. Occupation numbers that are equal to zero are not displayed.
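The row-to-lattice translation just given is easy to check mechanically. The short sketch below (the helper name `giant_lattice` is ours, not from the text; the marked string site is not tracked) computes $n_i=r_i-r_{i+1}$ from a list of row lengths:

```python
# Sketch: translate a Young diagram, given as a weakly decreasing list of row
# lengths r_1 >= r_2 >= ..., into giant-lattice occupation numbers
# n_i = r_i - r_{i+1} (with r_{N+1} = 0), as described in the text.

def giant_lattice(rows, N):
    """Return {site: occupation} for the non-empty sites of the giant lattice."""
    r = list(rows) + [0] * (N + 1 - len(rows))  # pad so that r_{N+1} = 0
    occ = {}
    for i in range(1, N + 1):
        n = r[i - 1] - r[i]
        if n:
            occ[i] = n
    return occ

# The two examples from the text (ignoring the bar on the marked site):
print(giant_lattice([3, 2, 1], N=10))   # {1: 1, 2: 1, 3: 1}
print(giant_lattice([5, 2], N=10))      # {1: 3, 2: 2}
```

Applied to a single column of $b_0$ boxes (`[1]*b_0`) it returns a lone particle at site $b_0$, matching the single sphere giant description below.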
The giant lattice notation is convenient because adding and subtracting boxes from the diagram has a very natural interpretation: adding or subtracting boxes in the first row adds or subtracts particles from the lattice. Adding or subtracting boxes in any other row does not change the particle number - it is described by particles hopping on the lattice. When we add a box, particles hop from the $i$th to the $i+1$th site; when we remove a box, particles hop from the $i$th to the $i-1$th site. To describe the giant lattice, we can again introduce Cuntz oscillators - $\hat{A}_i$ and $\hat{A}^\dagger_i$, $i=1,...,N$ - one for each site of the giant lattice. Our original description of single sphere giants and AdS giants is easily translated into this new language: the dynamics for the AdS giant is essentially single site dynamics - only the first site participates; it has occupation number $n_1=a_0$. The dynamics is single particle dynamics for the sphere giant - the particle occupies the $b_0$th site. Apart from the Cuntz oscillators of the giant and string lattices, we will introduce a Cuntz oscillator for the open string itself, denoted $\hat{W}_i$ and $\hat{W}_i^\dagger$. This extra oscillator is needed to keep track of the position of the string. In terms of these oscillators, we obtain an alternative representation of the above states. For example, $$ \young({\,}{\,}{\,},{\,}{w},{\,})\leftrightarrow \{ n_1= 1, n_2=\bar{1}, n_3=1\} \leftrightarrow \hat{A}_1^\dagger \hat{W}_2^\dagger \hat{A}_3^\dagger |0\rangle .$$ Notice that the occupied sites coincide with the positions of the corners of the Young diagram. We could also have constructed a giant lattice using the number of boxes in each column. The reason why we have chosen to use the rows instead is simply that the number of rows of the Young diagram is bounded by $N$.
If we use the columns, there is no such bound; further, if we try to use only the ``occupied columns'' we obtain a description with a dynamical lattice. The number of particles hopping on this dynamical lattice is equal to the number of AdS giants. From this point of view, the dynamical lattice appears to be the natural description for AdS giants. See appendix C for a description of the AdS giants using the dynamical lattice formulation. Notice that the total number of particles hopping on this dynamical lattice is constrained to be less than or equal to $N$. It is perhaps useful to comment on why the giant lattice provides a good description. The difficulty with introducing a pair of Cuntz oscillators - one for each column - stems from the fact that we need to impose a constraint forcing the number of particles created by the first oscillator to be greater than or equal to the number of particles created by the second oscillator. Indeed, occupation number states that don't satisfy this constraint would correspond to diagrams with more boxes in the second column than in the first column - this is not a legal Young diagram. With the new giant lattice description, {\it any} occupation number assignment leads to a valid Young diagram. The action of the hop off interaction Hamiltonian can now be written as ($W^{(1)}=W(\{n_1,n_2,...,n_L\})$; $W^{(2)}=W(\{n_1-1,n_2,...,n_L\})$ or $W^{(2)}=W(\{n_1,n_2,...,n_L-1\})$ if we hop off the first or last site respectively) \footnote{In writing this contribution to the Hamiltonian, we have dropped ${b_1\over b_0}$ corrections, which are $O({1\over N})$ in the limit we consider.
The factors ${b_0+b_1\over N}$ can be replaced by ${b_0\over N}$; we did not make this replacement since by keeping $b_1$ it is clear that these parameters are the momenta of the two giants.} $$ H|\{ n_{b_0}=\bar{1},n_{b_0+b_1}=1\};W^{(1)}\rangle = \lambda\left[\sqrt{1-{b_0\over N}} \sqrt{C_{b_1}^1}|\{ n_{b_0+1}=\bar{1},n_{b_0+b_1}=1\};W^{(2)}\rangle\right.$$ $$+\left. \sqrt{1-{b_0+b_1\over N}}{1\over b_1+1}|\{ n_{b_0}=1,n_{b_0+b_1+1}=\bar{1}\};W^{(2)}\rangle\right],$$ $$ H|\{n_{b_0}=1,n_{b_0+b_1}=\bar{1}\};W^{(1)} \rangle =\lambda\left[\sqrt{1-{b_0+b_1\over N}} \sqrt{C^1_{b_1}}|\{n_{b_0}=1,n_{b_0+b_1+1}=\bar{1}\}; W^{(2)} \rangle\right.$$ $$-\left. \sqrt{1-{b_0\over N}}{1\over b_1+1}|\{n_{b_0+1}=\bar{1},n_{b_0+b_1}=1\};W^{(2)} \rangle\right]. $$ This result is also obtained if we consider the different limit, in which $b_1$ scales as $\sqrt{N}$. This limit allows us to consider the situation in which the two branes are well separated in spacetime, where we expect that they stop interacting with each other. In this limit, we have $$ H|\{ n_{b_0}=\bar{1},n_{b_0+b_1}=1\};W^{(1)}\rangle \approx \lambda\sqrt{1-{b_0\over N}}|\{ n_{b_0+1}=\bar{1},n_{b_0+b_1}=1\};W^{(2)}\rangle,$$ $$ H|\{n_{b_0}=1,n_{b_0+b_1}=\bar{1}\};W^{(1)} \rangle \approx \lambda\sqrt{1-{b_0+b_1\over N}} |\{n_{b_0}=1,n_{b_0+b_1+1}=\bar{1}\}; W^{(2)} \rangle ,$$ which is just two copies of the hop off interaction we found for the single giant case. In terms of the Cuntz oscillators, the hop off interaction Hamiltonian can be written as \begin{equation} H=\lambda (\hat{a}_1+\hat{a}_L)\left[ \sum_{l=1}^{N-1}\sqrt{1-{l\over N}}\hat{W}_{l+1}^\dagger\hat{W}_l\sqrt{C^1_{\hat{b}_1}}+ \sum_{l=1}^{N}\sum_{k=1}^{N-1}\sqrt{1-{k\over N}}{\epsilon (k-l)\over |k-l|+1}\hat{W}_{k+1}^\dagger \hat{A}_k\hat{A}_l^\dagger \hat{W}_l \right. \label{twospheregiants} \end{equation} $$\left.
+\sum_{l=1}^{N-1}\sqrt{1-{l\over N}}\hat{W}_{l+1}^\dagger\hat{A}_l^\dagger\hat{A}_l\hat{W}_l\right],$$ where $$ \hat{b}_1=\sum_{k=1}^{N}\sum_{l=1}^{N} |k-l|\hat{A}_k^\dagger\hat{A}_k\hat{W}_l^\dagger\hat{W}_l ,$$ and $$ \epsilon (k)=\left\{ \matrix{-1 &{\rm if} &k<0\cr 0 &{\rm if} &k=0\cr +1 &{\rm if} &k>0}\right. .$$ This hop off interaction acts on the subspace of states of the form $$ |\Psi\rangle = \sum_{k,l}\alpha_{kl}\hat{A}_k^\dagger\hat{W}_l^\dagger |0\rangle .$$ The hop off interaction (\ref{twospheregiants}) allows the open string to hop between rows. Note, however, that the coefficients of these hopping terms vanish for the hopping process which would allow the open string to hop into the $N+1$th row, i.e. acting on a state which corresponds to a valid Young diagram, the Hamiltonian will produce another state which corresponds to a valid Young diagram. Carrying out the same steps for a boundstate of two AdS giants, we find $$ H|\{ n_{1}=\overline{a_0+a_1},n_{2}=a_0\};W^{(1)}\rangle = -\lambda\left[\sqrt{1+{a_0+a_1\over N}} \sqrt{C_{a_1}^1}|\{ n_{1}=\overline{a_0+a_1+1},n_{2}=a_0 \};W^{(2)}\rangle\right.$$ $$+\left. \sqrt{1+{a_0\over N}}{1\over a_1+1}|\{ n_{1}=a_0+a_1,n_{2}=\overline{a_0+1}\};W^{(2)}\rangle\right],$$ $$ H|\{n_{1}=a_0+a_1,n_{2}=\overline{a_0} \};W^{(1)} \rangle =-\lambda\left[\sqrt{1+{a_0\over N}} \sqrt{C^1_{a_1}}|\{n_{1}=a_0+a_1,n_{2}=\overline{a_0+1}\}; W^{(2)} \rangle\right.$$ $$-\left. \sqrt{1+{a_0+a_1\over N}}{1\over a_1+1}|\{n_{1}= \overline{a_0+a_1+1},n_{2}=a_0\};W^{(2)} \rangle\right], $$ for the hop off interaction.
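The structure of the sphere giant hop off rules above can be made concrete with a small numerical sketch (the function names are ours; the open string word and the overall state normalization are not tracked). For the boundstate of two sphere giants, each hop off channel carries either the graceful switch-off factor $\sqrt{C^1_{b_1}}$ (string stays on its giant) or the ${1\over b_1+1}$ factor (string swaps giants):

```python
import math

def C1(b1):
    """Graceful switch-off factor C^1_{b1} = b1(b1+2)/(b1+1)^2 from the text."""
    return b1 * (b1 + 2) / (b1 + 1) ** 2

def hop_off_amplitudes(b0, b1, string_on_inner, N, lam=1.0):
    """Hop-off amplitudes for a boundstate of two sphere giants.

    string_on_inner=True means the marked string excitation sits at giant-lattice
    site b0; each returned pair is (amplitude, site the marked excitation moves to).
    A sketch only: signs and normalization follow the displayed rules in the text."""
    stay = math.sqrt(1 - b0 / N)          # sqrt(1 - b0/N) factor
    swap = math.sqrt(1 - (b0 + b1) / N)   # sqrt(1 - (b0+b1)/N) factor
    if string_on_inner:
        return [(lam * stay * math.sqrt(C1(b1)), b0 + 1),        # string stays put
                (lam * swap / (b1 + 1), b0 + b1 + 1)]            # string swaps giants
    else:
        return [(lam * swap * math.sqrt(C1(b1)), b0 + b1 + 1),   # string stays put
                (-lam * stay / (b1 + 1), b0 + 1)]                # string swaps giants
```

For $b_1=0$ the stay amplitudes vanish, and for $b_1\gg 1$ the swap amplitudes fall off like ${1\over b_1}$, leaving two copies of the single giant hop off interaction, as discussed in the text.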
Many of the features found for a bound state of two sphere giants are present in this result: (i) the factors $C^1_{a_1}$ gracefully turn off certain interactions as $a_1\to 0$ and this limit again corresponds to coincident membranes, (ii) the off diagonal terms display a ${1\over r}$ dependence in the effective field theory limit and (iii) in the large $a_1$ limit, this contribution to the Hamiltonian reduces to two copies of the hop off interaction for the single giant case. In appendix B we give the results for boundstates of three and four giants. From these results, it is clear that there is a general structure that can be used to write down the hop off interaction for a boundstate of an arbitrary number of giants. Further, the features just discussed for the boundstate of two giants hold for the general giant boundstate. If one considers the effective field theory limit in which $b_0=O(N)$ and the $b_i$, $i=1,2,...,n-1$, are $O(N^0)$ and $\gg 1$, then the hop off term in the Hamiltonian for $n=O(N^0)$ sphere giants takes the particularly simple form $$ H=\lambda (\hat{a}_1+\hat{a}_L)\left[ \sum_{l=1}^{N-1}\sqrt{1-{l\over N}}\hat{W}_{l+1}^\dagger\hat{W}_l+ \sum_{l=1}^{N}\sum_{k=1}^{N-1}\sqrt{1-{k\over N}}{\epsilon (k-l)\over |k-l|}\hat{W}_{k+1}^\dagger \hat{A}_k\hat{A}_l^\dagger \hat{W}_l \right].$$ This Hamiltonian acts on the subspace of states that have a single open string $\hat{W}^\dagger_i$ excitation and $n-1$ $\hat{A}^\dagger_j$ excitations. \subsection{Hop On Rules} We know that the anomalous dimensions are real. Consequently, the energies from our Cuntz chain Hamiltonian must be real, implying that the Hamiltonian is Hermitian. Thus, we can obtain the hop on term in the Hamiltonian by taking the Hermitian conjugate of the hop off term.
As an example, the hop on term for a string attached to a single sphere giant is given by \begin{eqnarray} \nonumber \lambda\sqrt{1-{b_0\over N}}\left[ \hat{A}^\dagger\hat{a}_1 + \hat{A}^\dagger\hat{a}_L\right]^\dagger &=& \lambda\left[ \left(\hat{A}^\dagger\hat{a}_1 + \hat{A}^\dagger\hat{a}_L\right) \sqrt{1-{\hat{b}_0\over N}}\right]^\dagger\\ \nonumber &=& \lambda\sqrt{1-{\hat{b}_0\over N}}\left[ \hat{A}\hat{a}_1^\dagger + \hat{A}\hat{a}_L^\dagger\right]\\ \nonumber &=& \lambda\left[ \hat{A}\hat{a}_1^\dagger + \hat{A}\hat{a}_L^\dagger\right] \sqrt{1-{\hat{b}_0-1\over N}}\\ \nonumber &=& \lambda\sqrt{1-{b_0-1\over N}}\left[ \hat{A}\hat{a}_1^\dagger + \hat{A}\hat{a}_L^\dagger\right]. \end{eqnarray} These calculations obviously assume we are working in a basis of states that have the momentum of the giant as a good quantum number. For our second example, we consider a bound state of two sphere giants. A useful identity is $$\hat{b}_1\left( \sum_{l=1}^{N-1}\sqrt{1-{l\over N}}\hat{W}_l^\dagger \hat{W}_{l+1}\right)= \left( \sum_{l=1}^{N-1}\sqrt{1-{l\over N}}\hat{W}_l^\dagger \hat{W}_{l+1}\right)(\hat{b}_1-\hat{\epsilon}),$$ where $$\hat{\epsilon}\equiv\sum_{l=1}^{N}\sum_{k=1}^{N}\epsilon (k-l)\hat{W}_k^\dagger\hat{W}_k\hat{A}_l^\dagger\hat{A}_l -\sum_{k=1}^{N} \hat{W}_k^\dagger\hat{W}_k\hat{A}_k^\dagger\hat{A}_k .$$ It is now a simple matter to verify that the hop on interaction is $$\lambda\left[ \sum_{l=1}^{N-1}\sqrt{1-{l\over N}}\hat{W}_l^\dagger \hat{W}_{l+1}\sqrt{C^1_{\hat{b}_1-\hat{\epsilon}}} +\sum_{l=1}^{N}\sum_{k=1}^{N-1} \sqrt{1-{k\over N}}{\epsilon (k-l)\over |k-l|+1}\hat{W}_l^\dagger\hat{A}_l \hat{A}^\dagger_k\hat{W}_{k+1} \right] (\hat{a}_1^\dagger +\hat{a}_L^\dagger ) .$$ \begin{figure}[t]\label{fig:cgraph2} \begin{center} \includegraphics[height=6cm,width=12cm]{fig10} \caption{In the Feynman diagram shown, we have an example of the kissing interaction. The white ribbons are $Z$ fields, the black ribbons are $Y$ fields. 
The interacting black ribbon shown marks the beginning of the string; there are 3 $Z$s in the first site of the string.} \end{center} \end{figure} \subsection{Hop On and then Off for a Kiss} The terms in our Cuntz chain Hamiltonian generate the Feynman diagrams obtained by allowing a single $F$ term vertex. The kissing interaction corresponds to the Feynman diagram shown in figure 3. The number of $Z$ fields in the giant is unchanged by this process. Since the number of $Z$ fields in the giant determines the momentum of the giant, the string and brane do not exchange momentum by this process. As far as the combinatorics goes, we can model the kissing interaction as a hop on (the string) followed by a hop off. Since we know both the hop on and hop off terms, the kissing interaction follows. Note that a hop on interaction followed by a hop off interaction will leave the number of $Z$ fields in the giant unchanged. See figure 4. Although we have shown the diagrams using the first site of the string for illustration, it is clear that the argument goes through for the last site as well. \subsection{Complete Hamiltonian} \begin{figure}[t]\label{fig:cgraph2} \begin{center} \includegraphics[height=8cm,width=12cm]{fig11} \caption{The Feynman diagram shown has a hop on interaction followed by a hop off interaction. If you shrink the composite hop on/hop off interaction to a point, you recover the kissing interaction.} \end{center} \end{figure} We are now in a position to assemble the complete Hamiltonian, by summing the bulk terms and the complete set of boundary interactions. In this section we will quote the Hamiltonians we have obtained. The complete Hamiltonian is $$ H=H_{bulk}+H_{boundary},$$ where $H_{bulk}$ is given in (\ref{bulk}). 
For a single sphere giant we have ($b_0$ is the momentum of the giant) $$ H_{boundary}=\lambda \sqrt{1-{b_0\over N}} (\hat{A}^\dagger \hat{a}_1 + \hat{A}^\dagger \hat{a}_L) +\lambda \sqrt{1-{b_0-1\over N}} (\hat{A} \hat{a}_1^\dagger + \hat{A} \hat{a}_L^\dagger ) +2\lambda\left(1-{b_0-1\over N}\right)\hat{A}^\dagger\hat{A}. $$ For a single AdS giant ($a_0$ is the momentum of the giant) $$ H_{boundary}=-\lambda \sqrt{1+{a_0\over N}} (\hat{A}^\dagger \hat{a}_1 + \hat{A}^\dagger \hat{a}_L) -\lambda \sqrt{1+{a_0-1\over N}} (\hat{A} \hat{a}_1^\dagger + \hat{A} \hat{a}_L^\dagger ) +2\lambda\left(1+{a_0-1\over N}\right)\hat{A}^\dagger\hat{A}.$$ We stress that these Hamiltonians have been written down assuming that we work in a basis for which the momentum of the giant is a good quantum number. To obtain the Hamiltonian in a general basis, one can write all factors involving the giant momenta to the right of the giant creation and annihilation operators, and then replace the momenta $b_i$ and $a_i$ by the corresponding number operators. For a bound state of two sphere giants (the first column of the Young diagram has $b_0+b_1$ boxes; the second column of the Young diagram has $b_0$ boxes) $$ H_{boundary}= \lambda (\hat{a}_1+\hat{a}_L)\left[ \sum_{l=1}^{N-1}\sqrt{1-{l\over N}}\sqrt{C^1_{\hat{b}_1-\hat{\epsilon}}}\hat{W}_{l+1}^\dagger\hat{W}_l+ \sum_{l=1}^{N}\sum_{k=1}^{N-1}\sqrt{1-{k\over N}}{\epsilon (k-l)\over |k-l|+1}\hat{W}_{k+1}^\dagger \hat{A}_k\hat{A}_l^\dagger \hat{W}_l \right. $$ $$\left. 
+\sum_{l=1}^{N-1}\sqrt{1-{l\over N}}\hat{W}_{l+1}^\dagger\hat{A}_l^\dagger\hat{A}_l\hat{W}_l\right]+ 2\lambda\sum_{l=1}^{N-1}\left(1-{l\over N}\right)\hat{W}_{l+1}^\dagger \hat{W}_{l+1}$$ $$+\lambda (\hat{a}_1^\dagger +\hat{a}_L^\dagger )\left[ \sum_{l=1}^{N-1}\sqrt{1-{l\over N}}\hat{W}_{l}^\dagger\hat{W}_{l+1}\sqrt{C^1_{\hat{b}_1-\hat{\epsilon}}}+ \sum_{l=1}^{N}\sum_{k=1}^{N-1}\sqrt{1-{k\over N}}{\epsilon (k-l)\over |k-l|+1}\hat{W}_{l}^\dagger \hat{A}_l \hat{A}_k^\dagger\hat{W}_{k+1} \right. $$ $$\left. +\sum_{l=1}^{N-1}\sqrt{1-{l\over N}}\hat{W}_{l}^\dagger\hat{A}_l^\dagger\hat{A}_l\hat{W}_{l+1}\right].$$ Note that this Hamiltonian preserves both the number of open strings attached to the bound state and the number of columns (= number of sphere giants). This is not the case at higher orders in ${1\over N}$ - as discussed in section 3.1.2, there is a subleading term which allows the open string to occupy the first box in the third column (the open string moves to occupy the first site in the giant lattice). The effective field theory limit ($1\ll b_1 $) of this Hamiltonian is $$ H_{boundary}= \lambda (\hat{a}_1+\hat{a}_L)\left[ \sum_{l=1}^{N-1}\sqrt{1-{l\over N}}\hat{W}_{l+1}^\dagger\hat{W}_l+ \sum_{l=1,\, l\ne k}^{N}\sum_{k=1}^{N-1}\sqrt{1-{k\over N}}{1\over k-l}\hat{W}_{k+1}^\dagger \hat{A}_k\hat{A}_l^\dagger \hat{W}_l \right] $$ $$+ 2\lambda\sum_{l=1}^{N-1}\left(1-{l\over N}\right)\hat{W}_{l+1}^\dagger \hat{W}_{l+1}$$ $$+\lambda (\hat{a}_1^\dagger +\hat{a}_L^\dagger )\left[ \sum_{l=1}^{N-1}\sqrt{1-{l\over N}}\hat{W}_{l}^\dagger\hat{W}_{l+1}\right.$$ \begin{equation} \left. + \sum_{l=1,\, l\ne k}^{N}\sum_{k=1}^{N-1}\sqrt{1-{k\over N}}{1\over k-l}\hat{W}_{l}^\dagger \hat{A}_l \hat{A}_k^\dagger\hat{W}_{k+1} \right]. \label{last} \end{equation} The giant lattice we have been employing is not dynamical. Further, at leading order, the number of sphere giants is conserved so that the Hamiltonians we have written are rather simple.
In contrast to this, even at leading order, if we use this giant lattice to describe the dynamics of AdS giants, the number of particles on the giant lattice is not fixed. Using the dual lattice developed in appendix C, we find that at leading order the number of particles on the dual lattice is fixed and the lattice is not dynamical. Thus, at leading order, the description of AdS giants using the language of appendix C seems to be the simplest. For this reason, we will not pursue the AdS giant dynamics using our present description. Finally, in the effective field theory limit, for $n$ sphere giants in the boundstate and a single open string attached to the boundstate, it is simple to check that the dynamics is described by (\ref{last}). To obtain this result, we have assumed that $n=O(1)$ and $n\ll b_i$, $i=0,1,...,n-1.$ See appendix B for further details on the effective field theory limit of boundstates of three or four sphere or AdS giants. \section{Toy Model} To get some insight into the Hamiltonians we have obtained above, we will study a simple toy model problem in this section: the case that the string has a single site, i.e. the open string word only has 2 $Y$s in it. Toy models of this type were introduced in \cite{Berenstein:2006qk} to study single excited sphere giants and used in \cite{adsgiant} to study single excited AdS giants. Even with a single site, we are not able to solve the energy eigenvalue problem for a single excited sphere or AdS giant analytically. For a single site, the numerical computation of the energy eigenvalues and eigenkets is straightforward. One of the conclusions we reach, based on our numerical results, is that backreaction on the membrane cannot be neglected. The results \cite{Berenstein:2006qk},\cite{adsgiant} in the large $N$ limit, after neglecting back reaction, suggest a continuum of states separated from the ground state by a gap.
Since back reaction is neglected in these studies, the energy eigenvalue problem amounts to diagonalizing an infinite dimensional matrix. In our approach, the number of $Z$s is finite so that there are only a finite number of possible states for the giant/string system. Thus, our energy eigenvalue problem entails diagonalizing a finite dimensional matrix. If in the boundary interaction terms we hold the value of $\alpha =\sqrt{1-{b_0\over N}}$ (in the case of the sphere giant) or $\alpha =\sqrt{1+{a_0\over N}}$ (in the case of the AdS giant) fixed, we are neglecting the change in the giant's momentum, i.e. we are neglecting back reaction. In this case, even though we still diagonalize a finite dimensional matrix, we find good agreement with the results of \cite{Berenstein:2006qk},\cite{adsgiant}. Once back reaction is included, the gap disappears, so that including back reaction seems to imply both a quantitative and a qualitative change of the result obtained ignoring back reaction. Although our results are suggestive on this point, things are not completely clear: indeed, we compute the expectation value of the number operator for these energy eigenstates and find that the planar approximation assumed when computing the open string word contractions is not accurate. This implies that the single site results obtained from our Hamiltonian cannot be trusted. \subsection{Single Sphere Giant} Our numerical analysis entails diagonalizing the matrix representation of the Hamiltonian. The system we consider has a total of $K$ $Z$s in the string/giant system. The ket with zeros everywhere except the $i$th entry is the state with $K-i+1$ $Z$s on the giant and $J=i-1$ $Z$s on the string.
The matrix representation of the hop off interaction is given by $$ -\hat{A}^\dagger \hat{a}\sqrt{1-{K-\hat{J}\over N}}=\left[\matrix{ 0 &\sqrt{1-{K-1\over N}} &0 &\dots &0 &0\cr 0 &0 &\sqrt{1-{K-2\over N}} &\dots &0 &0\cr 0 &0 &0 &\dots &0 &0\cr : &: &: &: &: &:\cr 0 &0 &0 &\dots &0 &\sqrt{1-{K-K\over N}}\cr 0 &0 &0 &\dots &0 &0 }\right].$$ This is a $(K+1)\times (K+1)$ matrix. It is now straightforward to obtain the matrix representation of the Hamiltonian describing a single excited sphere giant with a single string attached $$H=\lambda\left[\matrix{ 2\left( 1-{K-1\over N}\right) &2\sqrt{1-{K-1\over N}} &0 &\dots &0 &0\cr 2\sqrt{1-{K-1\over N}} &2+2\left( 1-{K-2\over N}\right) &2\sqrt{1-{K-2\over N}} &\dots &0 &0\cr 0 &2\sqrt{1-{K-2\over N}} &2+2\left( 1-{K-3\over N}\right) &\dots &0 &0\cr : &: &: &: &: &:\cr 0 &0 &0 &\dots &4 &2\cr 0 &0 &0 &\dots &2 &2 }\right].$$ \begin{figure}[t]\label{fig:cgraph2} \begin{center} \includegraphics[height=6cm,width=9cm]{sphereenergies} \caption{The energy spectra for a single string attached to a single sphere giant. The plot shows $E_n$ versus $n$. The energy is measured in units of $\lambda$. There are a total of 95 $Z$s in the string/brane system and $N=100$. The solid curve shows the result obtained after backreaction is included. The result obtained ignoring back reaction is plotted as a dashed line and the analytic formula of \cite{Berenstein:2006qk} is plotted as a series of dots. The dashed curve is barely visible under the dots, indicating superb agreement between our numerical result and the result of \cite{Berenstein:2006qk}.} \end{center} \end{figure} One of the things we would like to establish is the importance of back reaction. Towards this end, we have also constructed a Hamiltonian that ignores the effects of back reaction. To ignore the effects of back reaction, we have kept the number of $Z$s on the giant fixed, equal to $K$.
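As a concrete illustration, the matrix above can be assembled and diagonalized numerically. The following is a minimal sketch (our own illustration, not code from this paper; $\lambda$ is set to $1$ and the function name is ours) of the back-reacting toy-model Hamiltonian, in the basis where the index $J=0,\dots,K$ counts the $Z$s on the string:

```python
import numpy as np

# Sketch (our own illustration, not from the paper): the (K+1)x(K+1)
# single-site toy-model Hamiltonian for a sphere giant, with back reaction.
# Basis index J = 0..K is the number of Z's on the string; the giant then
# carries b0 = K - J of the Z's. Energies are in units of lambda.
def sphere_hamiltonian(K, N):
    H = np.zeros((K + 1, K + 1))
    for J in range(K + 1):
        b0 = K - J
        if J >= 1:
            H[J, J] += 2.0                            # bulk term 2 a^dagger a
        if b0 >= 1:
            H[J, J] += 2.0 * (1.0 - (b0 - 1) / N)     # kissing term
        if J < K:                                     # hop off/on: b0 <-> b0 - 1
            H[J, J + 1] = H[J + 1, J] = 2.0 * np.sqrt(1.0 - (b0 - 1) / N)
    return H

H = sphere_hamiltonian(K=95, N=100)
E, V = np.linalg.eigh(H)                # spectrum with back reaction included
Jexp = V.T**2 @ np.arange(H.shape[0])   # <J> in each energy eigenstate
```

The vector `Jexp` is the expectation value of the string occupation in each energy eigenstate, which is the quantity plotted in figure 6 below.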
Thus, ignoring back reaction, our hop off interaction, for example, is given by $$ -\hat{A}^\dagger \hat{a}\sqrt{1-{K\over N}}=\left[\matrix{ 0 &\sqrt{1-{K\over N}} &0 &\dots &0 &0\cr 0 &0 &\sqrt{1-{K\over N}} &\dots &0 &0\cr 0 &0 &0 &\dots &0 &0\cr : &: &: &: &: &:\cr 0 &0 &0 &\dots &0 &\sqrt{1-{K\over N}}\cr 0 &0 &0 &\dots &0 &0 }\right].$$ In the case that we ignore backreaction, we should be able to compare to the results of \cite{Berenstein:2006qk}. An important difference between our work and that of \cite{Berenstein:2006qk} is that in \cite{Berenstein:2006qk} the matrix representation of the Hamiltonian is an infinite dimensional matrix. This is simply because an infinite number of $Z$s can hop off the giant and onto the string. Our matrix representation for the Hamiltonian is a $(K+1)\times (K+1)$ matrix, corresponding to the fact that a maximum of $K$ $Z$ fields can hop onto the string. Despite this difference, we find convincing agreement between our numerical results ignoring back reaction and the analytic formula of \cite{Berenstein:2006qk} $$ E(k)=2\lambda (1+2\alpha\cos (k)+\alpha^2),\qquad 0\le k\le \pi .$$ In figure 5 we have shown the energy spectra for $K=95$ and $N=100$. Our undeformed result is in perfect agreement with the analytic result of \cite{Berenstein:2006qk}. Note that we are comparing normalizable states (the dots in figure 5) to states from the continuum (the dashed line in figure 5). Since our system is described by a finite Hilbert space, our states are always normalizable. In support of our assumption that this is a sensible comparison, note that the portion of the dashed curve describing continuum states is hidden by the dots. The result obtained when backreaction is taken into account is noticeably different from the result obtained when back reaction is ignored. In particular, note that the mass gap obtained when backreaction is ignored disappears when back reaction is included.
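The agreement with the analytic formula can be made plausible directly: away from its first and last rows, the no-backreaction matrix is (up to $1/N$ corrections) a constant tridiagonal matrix with diagonal $2\lambda(1+\alpha^2)$ and off-diagonal $2\lambda\alpha$, whose exact eigenvalues are the dispersion relation sampled at $k_m=m\pi/(n+1)$. A small numerical check of this idealization (our own sketch; the boundary rows of the true matrix differ slightly, and $\lambda$ is set to $1$):

```python
import numpy as np

# Sketch: the bulk of the no-backreaction matrix is tridiagonal with constant
# diagonal 2*(1 + alpha**2) and off-diagonal 2*alpha (in units of lambda).
# Its exact eigenvalues reproduce E(k) = 2*(1 + 2*alpha*cos(k) + alpha**2)
# sampled at k_m = m*pi/(n + 1), m = 1..n.
alpha = np.sqrt(1 - 95 / 100)        # alpha for K = 95, N = 100
n = 96                               # matrix size, K + 1
H0 = (2 * (1 + alpha**2) * np.eye(n)
      + 2 * alpha * (np.eye(n, k=1) + np.eye(n, k=-1)))
E_num = np.linalg.eigvalsh(H0)       # ascending numerical eigenvalues
k = np.arange(1, n + 1) * np.pi / (n + 1)
E_analytic = np.sort(2 * (1 + 2 * alpha * np.cos(k) + alpha**2))
print(np.allclose(E_num, E_analytic))   # True
```

In this idealization the smallest eigenvalue approaches $2\lambda(1-\alpha)^2>0$ as $n\to\infty$: this is the gap that, as noted above, survives only when back reaction is ignored.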
\begin{figure}[t]\label{fig:cgraph2} \begin{center} \includegraphics[height=6cm,width=9cm]{spherenumbers} \caption{The expectation value $\langle \hat{J}\rangle$ versus $n$, for a single string attached to a single sphere giant, is shown. There are a total of 95 $Z$s in the string/brane system and $N=100$. The solid curve shows the result obtained taking backreaction into account. The result obtained ignoring backreaction is plotted as a dashed line.} \end{center} \end{figure} Our Hamiltonian is not exact. One approximation we have made is to assume that we need only sum planar diagrams when contracting the open string words. This amounts to assuming that the number of $Z$s on the open string ($=J$) is very much less than $\sqrt{N}$, so that $J^2/N\ll 1$. We can easily compute $\langle \hat{J}\rangle$ numerically and see if the planar approximation is indeed accurate. In figure 6 we have plotted $\langle \hat{J}\rangle$. Whether or not backreaction is included, $\langle \hat{J}\rangle$ is never much below 40 which is well outside the domain of validity of our Hamiltonian. Our Hamiltonian simply does not provide a valid description of the single site toy model, except for the ground state. Further, for these values of the parameters $K,N$, the interpretation of our system as an open string attached to a brane is not valid. An interesting question to ask is what is the time scale of the instability: Starting with a state corresponding to a string with a finite number of $Z$s between the $Y$s, how long would it take before the dynamics is no longer captured by our Hamiltonian? If this time scale is long enough, one might be able to ignore non-planar effects for small time measurements\footnote{We thank David Berenstein for explaining this to us.}. 
To estimate this time scale, recall the quantum brachistochrone problem: Given an initial quantum state $|\psi_I\rangle$ and a final quantum state $|\psi_F\rangle$, how does one achieve the transformation $|\psi_I\rangle\to |\psi_F\rangle =e^{-iHt/\hbar}|\psi_I\rangle$ in the shortest possible time? The optimization is with respect to the Hamiltonian, subject to the constraint that the difference between the smallest and largest eigenvalues of $H$ is held fixed. In Hermitian quantum mechanics, such a transformation always requires a non-zero amount of time. The optimal time is \cite{brach} $$ t_{\rm instability}={2\over \Delta E}\arccos |\langle\psi_F|\psi_I\rangle | ,$$ where $\Delta E$ is the difference between the smallest and largest eigenvalues. \begin{figure}[t]\label{fig:Ndep} \begin{center} \includegraphics[height=6cm,width=9cm]{Ndep} \caption{The largest eigenvalue of the energy spectra for a single string attached to a single sphere giant, as a function of $N$.} \end{center} \end{figure} Figure 7 shows the numerical result for the largest eigenvalue as a function of $N$. From our numerical results, we read off $\Delta E\approx 7.9 \lambda$. The final state has many $Z$'s; the initial state has very few $Z$'s. It is thus natural to approximate $|\langle\psi_F|\psi_I\rangle |\approx 0$ and hence $$ t_{\rm instability}={2\over \Delta E}\arccos |\langle\psi_F|\psi_I\rangle | \approx {2\over 7.9\times\lambda}{\pi\over 2}\approx {0.4\over\lambda} .$$ The interpretation of this result is straightforward: increasing $\lambda$ corresponds to increasing the string tension. In this case, the string has a greater mass and thus offers increased resistance when the membrane tries to drag it in non-geodesic motion. \subsection{Single AdS Giant} \begin{figure}[t]\label{fig:cgraph2} \begin{center} \includegraphics[height=6cm,width=9cm]{adsenergy} \caption{The energy spectra for a single string attached to a single AdS giant. The plot shows $E_n$ versus $n$.
The energy is measured in units of $\lambda$. There are a total of 95 $Z$s in the string/brane system and $N=100$. The solid curve shows the result obtained after backreaction is included. The result obtained ignoring back reaction is plotted as a dashed line and the analytic formula of \cite{adsgiant} is plotted as a series of dots. There is clearly superb agreement between our numerical result and the result of \cite{adsgiant}.} \end{center} \end{figure} In this subsection we will determine the energy spectrum for a single string attached to an AdS giant, again by numerical diagonalization of the matrix representation of the Hamiltonian. If the string plus AdS giant system has a total of $K$ $Z$ fields, the Hamiltonian is again a $(K+1)\times (K+1)$ matrix. The difference between the AdS and sphere giant problems is due to the fact that the boundary interactions are different. For the AdS giant, the hop off interaction is given by $$ \hat{A}^\dagger \hat{a}\sqrt{1+{K-\hat{J}\over N}}=\left[\matrix{ 0 &\sqrt{1+{K-1\over N}} &0 &\dots &0 &0\cr 0 &0 &\sqrt{1+{K-2\over N}} &\dots &0 &0\cr 0 &0 &0 &\dots &0 &0\cr : &: &: &: &: &:\cr 0 &0 &0 &\dots &0 &\sqrt{1+{K-K\over N}}\cr 0 &0 &0 &\dots &0 &0 }\right].$$ It is now straightforward to determine the matrix representation of the Hamiltonian. We are again interested in determining the importance of including the effects of backreaction on the AdS giant. To ignore the effects of back reaction, we again keep the number of $Z$s on the brane fixed, leading to the hop off interaction $$ \hat{A}^\dagger \hat{a}\sqrt{1+{K\over N}}=\left[\matrix{ 0 &\sqrt{1+{K\over N}} &0 &\dots &0 &0\cr 0 &0 &\sqrt{1+{K\over N}} &\dots &0 &0\cr 0 &0 &0 &\dots &0 &0\cr : &: &: &: &: &:\cr 0 &0 &0 &\dots &0 &\sqrt{1+{K\over N}}\cr 0 &0 &0 &\dots &0 &0 }\right].$$ For the case that backreaction is ignored, we can compare to the results of \cite{adsgiant}.
Just as for the case of an open string attached to a sphere giant, an important difference between our work and that of \cite{adsgiant} is that in \cite{adsgiant} the matrix representation of the Hamiltonian is an infinite dimensional matrix. Again, this is simply because an infinite number of $Z$s can hop off the giant and onto the string. Our matrix representation for the Hamiltonian is a $(K+1)\times (K+1)$ matrix, corresponding to the fact that a maximum of $K$ $Z$ fields can hop onto the string. The analytic result of \cite{adsgiant} for the spectrum is $$ E(k)=2\lambda (1-2\alpha\cos (k)+\alpha^2),\qquad 0\le k\le \pi .$$ \begin{figure}[t]\label{fig:cgraph2} \begin{center} \includegraphics[height=6cm,width=9cm]{adsnumber} \caption{The expectation value $\langle \hat{J}\rangle$ versus $n$, for a single string attached to a single AdS giant, is shown. There are a total of 95 $Z$s in the string/brane system and $N=100$. The solid curve shows the result obtained taking backreaction into account. The result obtained ignoring backreaction is plotted as a dashed line.} \end{center} \end{figure} In figure 8 we have shown the spectra for $K=95$ and $N=100$. The agreement is again excellent\footnote{We thank Diego Correa for correcting an error in $\alpha$ in the previous version of this paper.}. Just as for the results we obtained for the excited sphere giants, the gap in the spectrum present when backreaction is ignored is removed when the effects of back reaction are included. We can again check if our Hamiltonian is providing an accurate description of the physics. In figure 9 we have plotted $\langle \hat{J}\rangle$ versus $n$. It is clear that the planar approximation has broken down for all but the few highest energy states. Again, for these values of the parameters $K,N$, the interpretation of our system as an open string attached to a brane is not valid.
We are forced to conclude that our Hamiltonian does not provide an accurate description of a string with a single site attached to an AdS giant. For the case of a single AdS giant, we obtain $$ t_{\rm instability}={2\over \Delta E}\arccos |\langle\psi_F|\psi_I\rangle | \approx {2\over 11.6\times\lambda}{\pi\over 2}\approx {0.27\over\lambda} .$$ A few comments are in order. How are we to interpret the fact that our approximation breaks down? We have set up our description by assuming that the operator we study is dual to a membrane with an open string attached. This implies that our operator can be decomposed into a ``membrane piece'' and a ``string piece''. These two pieces are treated very differently: when contracting the membrane piece, all contractions are summed; when contracting the string piece, only planar contractions are summed. Contractions between the two pieces are dropped. We have seen above that a large number of $Z$s hop between the two $Y$s: our operator is simply not dual to a state that looks like a membrane with an open string attached and our approximations are not valid. We are not claiming that this operator does not have a planar limit - it should still be possible to study this operator using a systematic $1/N$ expansion. Also, if one considered the same numerical study, but with $L\sim 10=\sqrt{N}$ $Y$s, we would expect our Hamiltonian to provide a suitable description. This problem appears to be too numerically expensive to perform in practice. \section{The Semiclassical Limit} In the previous section we have argued that our Hamiltonian does not accurately describe the dynamics of a single string attached to a giant graviton, when the string has a single site. In this section we will consider the opposite limit in which we take $L\to\infty$.
This limit has been considered, for a single sphere giant in AdS$_5\times$S$^5$, in \cite{CuntzChain},\cite{Berenstein:2006qk}, for a single sphere giant in a $\gamma$ deformed background in \cite{deMelloKoch:2005jg} and for a single AdS giant in AdS$_5\times$S$^5$ in \cite{adsgiant}. In the limit, the dynamics of the Cuntz chain is governed by a semiclassical sigma model. This semiclassical sigma model coincides with the Polyakov action describing an open string attached to the giant \cite{CuntzChain},\cite{deMelloKoch:2005jg},\cite{Berenstein:2006qk},\cite{adsgiant}. This strongly suggests that the Cuntz chain Hamiltonian is relevant for the description of this $L\to\infty$ limit. In this section we will study the semiclassical sigma models arising from the semiclassical limit of our Cuntz chain Hamiltonians. To warm up, we will consider the case of a single sphere giant or a single AdS giant. We will employ the description developed in subsection 3.1.1 as this is, by far, the simplest description. For the semiclassical limit, we take $L\to\infty$ and $\lambda\to\infty$ holding $\lambda/L^2$ fixed and small. In addition, we put each site of the lattice into a coherent state of a Cuntz oscillator $$|z\rangle =\sqrt{1-|z|^2}\sum_{n=0}^\infty z^n |n\rangle ,\qquad |z|<1.$$ The parameter for the coherent state of the $l$th lattice site is $z_l =r_l (t)e^{i\phi_l (t)}$. In this article, we also allow the brane to be dynamical. To obtain a semiclassical limit, we also put the brane in a coherent state, with parameter $Z=R(t)e^{i\Phi (t)}$. The resulting action is given by \cite{Zhang:1990fy} $$S=\int dt\left( i\langle z_1,...,z_L;Z|{d\over dt}|z_1,...,z_L;Z\rangle - \langle z_1,...,z_L;Z|H|z_1,...,z_L;Z\rangle \right).$$ The first term in the action and the bulk terms in the Cuntz chain Hamiltonian are the same for any brane system that the open string is attached to and hence may be read from the results of \cite{CuntzChain}.
The first term in the action becomes $$i\langle z_1,...,z_L;Z|{d\over dt}|z_1,...,z_L;Z\rangle = -\sum_{l=1}^L {r_l^2\dot{\phi}_l\over 1-r_l^2}-{R^2\dot{\Phi}\over 1-R^2}$$ $$=-L\int_0^1 {\dot{\phi}(\sigma )r^2(\sigma )\over 1-r^2(\sigma)}d\sigma -{R^2\dot{\Phi}\over 1-R^2}.$$ The bulk terms of the Cuntz chain are $$-\langle z_1,...,z_L;Z|\left[ 2\lambda\sum_{l=1}^L a_l^\dagger a_l -\lambda\sum_{l=1}^{L-1}(a_l^\dagger a_{l+1}+a_la^\dagger_{l+1}) \right]|z_1,...,z_L;Z\rangle $$ $$= -2\lambda\sum_{l=1}^L \bar{z}_l z_l +\lambda\sum_{l=1}^{L-1}(\bar{z}_l z_{l+1}+z_l \bar{z}_{l+1})$$ $$=-L{\lambda\over L^2}\int_0^1\left[\left({\partial r\over\partial\sigma}\right)^2+r^2\left( {\partial\phi\over\partial\sigma}\right)^2\right]d\sigma -\lambda\left[ r^2(1)+r^2(0)\right].$$ To obtain the above integral representations, we have made use of the Euler-Maclaurin formula. There are corrections to the integrals we have written above, expressed in terms of derivatives of the function evaluated at the endpoints. These corrections will need to be taken into account when the first ${1\over L}$ corrections are computed. The remaining boundary interactions in the sigma model are dependent on the details of the specific brane system we study. In the next two subsections we will consider the interactions relevant for a single AdS or a single sphere giant. In the third subsection, we argue that the AdS giant is unstable. Finally, in the last subsection we consider the semiclassical limit of a boundstate of giants. A final comment is in order. Since our Hamiltonian preserves the number of $Z$ fields in the giant plus string system (denoted by $K$) we will look for solutions that minimize the energy and have a sharp classical value for $K$. Concretely, we do this by setting the coherent state expectation value of $\hat{K}$ equal to $K$. 
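The Cuntz coherent states used above can be checked directly. The sketch below (our own illustration, with a truncated Fock space) verifies that $|z\rangle$ is an eigenstate of the Cuntz annihilation operator, $\hat{a}|z\rangle =z|z\rangle$, and that the expectation value of the $Z$-number operator $\sum_{n\geq 1}(\hat{a}^\dagger)^n\hat{a}^n$ is $|z|^2/(1-|z|^2)$, the combination that appears in the constraint of the next subsection:

```python
import numpy as np

# Sketch: check Cuntz coherent-state identities in a truncated Fock space.
# Cuntz oscillator: a|n> = |n-1>, a|0> = 0, a a^dagger = 1.
D = 400                                     # Fock-space truncation
z = 0.6                                     # real coherent-state parameter, |z| < 1
psi = np.sqrt(1 - z**2) * z**np.arange(D)   # amplitudes <n|z>
a = np.eye(D, k=1)                          # annihilation: (a psi)[n] = psi[n+1]

# a|z> = z|z>, exact up to the truncation error of order z^D
print(np.allclose(a @ psi, z * psi, atol=1e-12))   # True

# <z| sum_{m>=1} (a^dag)^m a^m |z> = z^2 / (1 - z^2)
v, total = psi.copy(), 0.0
for m in range(1, D):
    v = a @ v                               # v = a^m |z>
    total += v @ v                          # adds |z|^(2m)
print(np.isclose(total, z**2 / (1 - z**2)))        # True
```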
\subsection{Single Sphere Giant} The coherent state expectation value of the boundary interaction Hamiltonian given in section 3.4 gives the following contribution to the action $$-2\lambda Z\bar{Z}\left[1-{K\over N}+{1\over N}\sum_{l=1}^L {\bar{z}_l z_l\over 1-\bar{z}_l z_l}\right] -\lambda\left[\bar{Z}(z_1+z_L)+Z(\bar{z}_1+\bar{z}_L)\right] \sqrt{1-{K\over N}+{1\over N}\sum_{l=1}^L {\bar{z}_l z_l\over 1-\bar{z}_l z_l}}$$ $$=-2\lambda R^2\left[ 1-{K\over N}+{L\over N}\int_0^1 {r^2 (\sigma)\over 1-r^2(\sigma )}d\sigma\right]$$ $$-\lambda\left[ \bar{Z}(z(0)+z(1))+Z(\bar{z}(0)+\bar{z}(1))\right] \sqrt{1-{K\over N}+{L\over N}\int_0^1 {r^2 (\sigma)\over 1-r^2(\sigma )}d\sigma} .$$ This result is not exact. The number operator $\hat{b}_0$ appears in the Hamiltonian; we have replaced it by its coherent state expectation value $$\langle\hat{b}_0\rangle =K-\sum_{l=1}^L {\bar{z}_l z_l\over 1-\bar{z}_l z_l}.$$ The semi-classical sigma model action describing a single string attached to a sphere giant graviton is $$ S=\int L_\sigma dt $$ $$ L_\sigma =-L\int_0^1 {\dot{\phi}r^2\over 1-r^2}d\sigma -{R^2\dot{\Phi}\over 1-R^2} -L{\lambda\over L^2}\int_0^1\left[\left({\partial r\over\partial\sigma}\right)^2+r^2\left( {\partial\phi\over\partial\sigma}\right)^2\right]d\sigma -\lambda\left[ r^2(1)+r^2(0)\right]$$ $$-\lambda\left[ \bar{Z}(z(0)+z(1))+Z(\bar{z}(0)+\bar{z}(1))\right] \sqrt{1-{K\over N}+{L\over N}\int_0^1 {r^2\over 1-r^2}d\sigma} $$ $$-2\lambda R^2\left[ 1-{K\over N}+{L\over N}\int_0^1 {r^2\over 1-r^2}d\sigma\right].$$ In the above action, $Z$ and $z_l$ are not independent - they are coupled by the constraint $$K=\sum_{n=1}^\infty (\hat{A}^\dagger)^n \hat{A}^n +\sum_{l=1}^L \sum_{n=1}^\infty (\hat{a}_l^\dagger)^n \hat{a}_l^n $$ which says that the total number of $Z$s is equal to the number of $Z$s on the giant plus the number of $Z$s on the string.
The coherent state expectation value of the constraint is $$ K={\bar{Z}Z\over 1-\bar{Z}Z}+L\int_0^1 {r^2\over 1-r^2}d\sigma ={\bar{Z}Z\over 1-\bar{Z}Z}+J ,$$ where we have introduced the coherent state expectation value of the number of $Z$s on the string, $J\equiv\langle\hat{J}\rangle$. This is easily solved to eliminate $|Z|$ $$\bar{Z}Z=R^2=1-{1\over K+1-L\int_0^1 {r^2\over 1-r^2}d\sigma }.$$ Using this constraint, we obtain $$ L_\sigma =-L\int_0^1 {\dot{\phi}r^2\over 1-r^2}d\sigma -{R^2\dot{\Phi}\over 1-R^2} -L{\lambda\over L^2}\int_0^1\left[\left({\partial r\over\partial\sigma}\right)^2+r^2\left( {\partial\phi\over\partial\sigma}\right)^2\right]d\sigma -\lambda\left[ r^2(1)+r^2(0)\right]$$ $$-2\lambda\left[ r(0)\cos (\phi (0)-\Phi)+r(1)\cos (\phi (1)-\Phi)\right]\sqrt{1-{K\over N}+{J\over N}} \sqrt{K-J\over 1+K-J}$$ $$-{2\lambda\over 1+K-J} \left[ K-J-{(K-J)^2\over N}\right].$$ If we now shift $\phi (\sigma )\to \phi(\sigma )+\Phi $, the giant and string dynamics decouple so that we finally obtain a sigma model expressed only in terms of $r(\sigma )$ and $\phi (\sigma )$\footnote{We have dropped the term $-{R^2\dot{\Phi}\over 1-R^2}$ from the Lagrangian, as it is not needed to obtain the string dynamics.} $$ L_\sigma =-L\int_0^1 {\dot{\phi}r^2\over 1-r^2}d\sigma -L{\lambda\over L^2}\int_0^1\left[\left({\partial r\over\partial\sigma}\right)^2+r^2\left( {\partial\phi\over\partial\sigma}\right)^2\right]d\sigma -\lambda\left[ r^2(1)+r^2(0)\right]$$ $$-2\lambda\left[ r(0)\cos (\phi (0))+r(1)\cos (\phi (1))\right]\sqrt{1-{K\over N}+{J\over N}} \sqrt{K-J\over 1+K-J}$$ \begin{equation} -{2\lambda\over 1+K-J} \left[ K-J-{(K-J)^2\over N}\right]. \label{finalmodel} \end{equation} In the limit we study, $K=O(N)$, $K\gg J$ and $\alpha =\sqrt{1-{K\over N}}=O(1)$, our system can be interpreted as a string attached to a brane and further, we expect that back reaction will be a subleading effect. $K$ is a fixed parameter which we may therefore choose to be $O(N)$.
$J$ is determined by the dynamics, and thus the issue of how large it is compared to $K$ is a dynamical question. It is natural to expect that $K\gg J$, since the energy contribution from the boundary terms is minimized for small values of $J$. In this limit (\ref{finalmodel}) becomes $$ L_\sigma =-L\int_0^1 {\dot{\phi}r^2\over 1-r^2}d\sigma -L{\lambda\over L^2}\int_0^1\left[\left({\partial r\over\partial\sigma}\right)^2+r^2\left( {\partial\phi\over\partial\sigma}\right)^2\right]d\sigma -\lambda\left[ r^2(1)+r^2(0)\right]$$ $$-2\lambda\left[ r(0)\cos (\phi (0))+r(1)\cos (\phi (1))\right]\alpha - 2\lambda \alpha^2 ,$$ which is in perfect agreement with the sigma model action obtained in \cite{CuntzChain}. This action can be obtained directly as a limit of the Polyakov action in a certain gauge, as demonstrated in \cite{CuntzChain}. This suggests that our Hamiltonian does provide a reliable description of this semiclassical limit. The corrections to $L_\sigma$ due to back reaction are $O({J\over K})$. We know that, at most we can tolerate $J\sim\sqrt{N}$ - beyond this our description breaks down. To correct it we would have to go beyond the planar approximation employed when contracting the open string words. Further, our giant has $K=O(N)$. Thus, when our description is valid $O({J\over K})=O({1\over L})$, so that the ${1\over L}$ corrections to our sigma model action are the same size as the corrections due to back reaction and the corrections coming from the Euler-Maclaurin formula. All of these ${1\over L}$ corrections need to be included when the effects of back reaction are studied. 
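The elimination of $|Z|$ used above is elementary but worth verifying, since the same constraint is reused for the AdS giant in the next subsection. A sketch of the check (our own, using sympy), with the integral term abbreviated to $J$:

```python
import sympy as sp

# Sketch: verify that R^2 = 1 - 1/(K + 1 - J) solves the coherent-state
# constraint K = R^2/(1 - R^2) + J, used for both the sphere and AdS giants.
K, J = sp.symbols('K J', positive=True)
R2 = 1 - 1 / (K + 1 - J)
print(sp.simplify(R2 / (1 - R2) + J - K))   # 0
print(sp.simplify(R2 - (K - J) / (K - J + 1)))   # 0, an equivalent form of R^2
```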
\subsection{Single AdS Giant} For a single AdS giant, the coherent state expectation value of the boundary interaction Hamiltonian given in section 3.4 gives the following contribution to the action $$-2\lambda Z\bar{Z}\left[1+{K\over N}-{1\over N}\sum_{l=1}^L {\bar{z}_l z_l\over 1-\bar{z}_l z_l}\right] +\lambda\left[\bar{Z}(z_1+z_L)+Z(\bar{z}_1+\bar{z}_L)\right] \sqrt{1+{K\over N}-{1\over N}\sum_{l=1}^L {\bar{z}_l z_l\over 1-\bar{z}_l z_l}}$$ $$=-2\lambda R^2\left[ 1+{K\over N}-{J\over N}\right] +\lambda\left[ \bar{Z}(z(0)+z(1))+Z(\bar{z}(0)+\bar{z}(1))\right] \sqrt{1+{K\over N}-{J\over N}} .$$ This is again not an exact result - we have replaced $\hat{a}_0$ by its coherent state expectation value. Thus, the semi-classical sigma model action describing a single string attached to an AdS giant graviton is $$ S=\int L_\sigma dt $$ $$ L_\sigma =-L\int_0^1 {\dot{\phi}(\sigma )r^2(\sigma )\over 1-r^2(\sigma)}d\sigma -{R^2\dot{\Phi}\over 1-R^2} -L{\lambda\over L^2}\int_0^1\left[\left({\partial r\over\partial\sigma}\right)^2+r^2\left( {\partial\phi\over\partial\sigma}\right)^2\right]d\sigma -\lambda\left[ r^2(1)+r^2(0)\right]$$ $$+\lambda\left[ \bar{Z}(z(0)+z(1))+Z(\bar{z}(0)+\bar{z}(1))\right] \sqrt{1+{K\over N}-{J\over N}} -2\lambda R^2\left[ 1+{K\over N}-{J\over N}\right].$$ In the above action, $Z$ and $z_l$ are again not independent - they are coupled by the same constraint that we obtained for the sphere giant, and hence we may again set $$\bar{Z}Z=R^2=1-{1\over K+1-J}.$$ After employing the constraint to eliminate $R$, and shifting $\phi(\sigma )\to\phi(\sigma )+\Phi$, which again decouples the string and the brane dynamics, we obtain $$ L_\sigma =-L\int_0^1 {\dot{\phi}r^2\over 1-r^2}d\sigma -L{\lambda\over L^2}\int_0^1\left[\left({\partial r\over\partial\sigma}\right)^2+r^2\left( {\partial\phi\over\partial\sigma}\right)^2\right]d\sigma -\lambda\left[ r^2(1)+r^2(0)\right]$$ $$+2\lambda\left[ r(0)\cos (\phi (0))+r(1)\cos (\phi (1))\right]\sqrt{1+{K\over N}-{J\over 
N}} \sqrt{K-J\over 1+K-J}$$ \begin{equation} -{2\lambda\over 1+K-J} \left[ K-J+{(K-J)^2\over N}\right]. \label{finalAdSmodel} \end{equation} We only expect this Lagrangian to be an accurate description of the dynamics in the limit that $K=O(N)$, $K\gg J$ and $\alpha =\sqrt{1+{K\over N}}=O(1)$, which is the limit we are considering. We will see below that the assumption $K\gg J$ is in fact not valid here. As discussed above for the sphere giant, the size of $J$ is a dynamical question. In contrast to what we found for the sphere giant, the boundary terms in this Lagrangian are minimized for large values of $J$. Continuing anyway with the above assumption, our system can be interpreted as a string attached to a brane and further, back reaction will be a subleading effect. In this limit (\ref{finalAdSmodel}) becomes $$ L_\sigma =-L\int_0^1 {\dot{\phi}r^2\over 1-r^2}d\sigma -L{\lambda\over L^2}\int_0^1\left[\left({\partial r\over\partial\sigma}\right)^2+r^2\left( {\partial\phi\over\partial\sigma}\right)^2\right]d\sigma -\lambda\left[ r^2(1)+r^2(0)\right]$$ $$+2\lambda\left[ r(0)\cos (\phi (0))+r(1)\cos (\phi (1))\right]\alpha - 2\lambda\alpha^2 . $$ If we now shift $\phi(\sigma )\to\phi(\sigma) +\pi$ then this action becomes identical in form to the action describing the single string attached to a sphere giant. Of course, one very important difference is that here $\alpha\ge 1$; for the string attached to a sphere giant, $\alpha\le 1$. \subsection{Interpretation of the Single Giant Results} In this section we will study solutions to the sigma models of sections 5.1 and 5.2, which correspond to point-like strings for which $\dot{r}=r'=0$ and $\dot{\phi}=\phi'=0$. The bulk equations of motion (which are the same for the two types of giants) $$ {\lambda\over L}r''={L\dot{\phi}r\over (1-r^2)^2}+{\lambda r(\phi')^2\over L},$$ $$ {r\dot{r}\over (1-r^2)^2}+\partial_\sigma\left({\lambda\over L^2}r^2 \phi'\right)=0,$$ are clearly satisfied. 
The boundary conditions for the sphere and the AdS giants are different. Consider the case of the sphere giant first. The boundary terms in (\ref{finalmodel}) are minimized if we apply the boundary conditions \begin{equation} \phi (0)=\phi (1)=\pi,\qquad r(0)=r(1)= \sqrt{1-{K\over N}+{J\over N}} \sqrt{K-J\over 1+K-J}. \label{sphereboundary} \end{equation} If we ignore back reaction (set $J=0$ in the above equation) and ${1\over K}$ corrections, we find \begin{equation} r(0)=r(1)=\sqrt{1-{K\over N}}, \label{sphereb} \end{equation} with $K$ now equal to the momentum of the giant. For the AdS giant, the boundary terms in (\ref{finalAdSmodel}) vanish if we require \begin{equation} \phi(0)=\phi(1)=0,\qquad r(0)=r(1)=\sqrt{1+{K\over N}-{J\over N}} \sqrt{K-J\over 1+K-J}. \label{adsboundary} \end{equation} Ignoring back reaction we find \begin{equation} r(0)=r(1)=\sqrt{1+{K\over N}}, \label{adsb} \end{equation} with $K$ now equal to the momentum of the giant. To interpret these boundary conditions, recall how the AdS$_5\times$S$^5$ solution is recovered from the LLM description. The AdS$_5\times$S$^5$ geometry corresponds to a circular droplet boundary condition on the $y=0$ plane, parameterized by $(x_1,x_2)$ (see section 2.3 of \cite{Lin:2004nb}). Introduce radial coordinates $(r,\phi)$ on this plane. The $r$ and $y$ coordinates are related to $\rho$ (the radial variable of AdS$_5$ in global coordinates) and $\theta$ (one of the angles of the S$^5$) by $y=r_0\sinh\rho\sin\theta$ and $r=r_0\cosh\rho\cos\theta$, where $r_0=R_{{\rm AdS}_5}^2=R_{{\rm S}^5}^2$. The sphere giants are located at $\rho=0$ and $\cos\theta =\sqrt{1-{K\over N}}$ so that $y=0$ and $r=\sqrt{1-{K\over N}}$. The AdS giants are located at $\theta=0$ and $\cosh\rho = \sqrt{1+{K\over N}}$ so that $y=0$ and $r=\sqrt{1+{K\over N}}$. This matches beautifully with (\ref{sphereb}) and (\ref{adsb}). 
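The coordinate map quoted above is easy to verify directly. In units with $r_0=1$, the sketch below (our own check, not part of the derivation) confirms that the sphere giant at $\rho=0$, $\cos\theta=\sqrt{1-K/N}$ and the AdS giant at $\theta=0$, $\cosh\rho=\sqrt{1+K/N}$ both sit at $y=0$, at the radii (\ref{sphereb}) and (\ref{adsb}):

```python
import math

def llm_coords(rho, theta):
    """(y, r) on the LLM plane, in units r0 = 1:
    y = sinh(rho) sin(theta), r = cosh(rho) cos(theta)."""
    return math.sinh(rho) * math.sin(theta), math.cosh(rho) * math.cos(theta)

N, K = 10_000, 5_000

# Sphere giant: rho = 0, cos(theta) = sqrt(1 - K/N)
y_s, r_s = llm_coords(0.0, math.acos(math.sqrt(1 - K/N)))
assert abs(y_s) < 1e-12 and abs(r_s - math.sqrt(1 - K/N)) < 1e-12

# AdS giant: theta = 0, cosh(rho) = sqrt(1 + K/N)
y_a, r_a = llm_coords(math.acosh(math.sqrt(1 + K/N)), 0.0)
assert abs(y_a) < 1e-12 and abs(r_a - math.sqrt(1 + K/N)) < 1e-9

# Sphere giants sit inside the unit droplet, AdS giants outside
assert r_s < 1.0 < r_a
```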
We thus obtain a clear geometrical interpretation of our coherent state parameter $z=re^{i\phi}$ - the $r$ in our coherent state parameter is the radial direction on the $y=0$ LLM plane. With this identification, our strings are localized on the $y=0$ plane, which is colored black or white. The sphere giant sits in a black region; the AdS giant in a white region. In a black region, the $S^3$ in AdS$_5$ has shrunk to zero size; in a white region, the $S^3$ in the $S^5$ has shrunk to zero. This implies that in a white region (for our AdS giant) we can't have a string with angular momentum on the $S^3$ contained in the $S^5$. If this interpretation is correct, our description (\ref{finalAdSmodel}) must fail. In \cite{Berenstein:2006qk} a potential source of a D-brane instability was discovered. The giant graviton couples to the background RR flux $F_5$. This coupling produces a Lorentz force acting on the brane and consequently, the giant does not undergo free motion. The string, which does not couple to $F_5$ and hence would undergo geodesic motion, thus feels a force from the brane as the brane drags it along. If this force is enough to overcome the tension of the string, the string will be stretched to large lengths, allowing smaller loops to pinch off. In this way, the brane would decay into gravitational radiation. We conjecture that the AdS giants are unstable against this decay, which is the source of the failure of our description (\ref{finalAdSmodel}). \begin{figure}[t] \begin{center} \includegraphics[height=6cm,width=9cm]{spheren} \caption{The expectation value $\langle \hat{J}\rangle$ versus $K$, for a single string attached to a single sphere giant. There are a total of $K$ $Z$s in the string/brane system; the string has $L=80$ sites and $N=10000$.}\label{fig:cgraph2} \end{center} \end{figure} To provide some evidence for this interpretation, return to (\ref{sphereboundary}) and (\ref{adsboundary}). 
Our point string ansatz for the sphere giant is $$ r=a,\qquad \phi=\pi .$$ With this ansatz, \begin{equation} J={La^2\over 1-a^2}. \label{numop} \end{equation} It is now possible to solve (\ref{sphereboundary}) and (\ref{numop}) simultaneously to determine $J$. If back reaction effects are negligible, we expect that $J\ll\sqrt{N}$. From figure 10, it is clear that back reaction is indeed negligible. For the case of the AdS giant, we determine $J$ by solving (\ref{adsboundary}) and (\ref{numop}) simultaneously. If the AdS giant is unstable, we would expect the effects of back reaction to be large, and hence $J$ should be large. Of course, this implies that the dynamics is no longer described by our Hamiltonian. From figure 11 it is clear that the AdS giant suffers from significant back reaction, supporting our conjecture that the AdS giants are unstable. We are not able to verify this conjecture by a detailed study of this instability; this is outside the validity of our description, which fails as soon as ${J^2\over N}\sim 1$. Besides the fact that $J$ is so large that our sigma model description cannot be trusted, we see that $J>K$, which indicates that this is not a valid solution. \begin{figure}[t] \begin{center} \includegraphics[height=6cm,width=9cm]{adsn} \caption{The expectation value $\langle \hat{J}\rangle$ versus $K$, for a single string attached to a single AdS giant. There are a total of $K$ $Z$s in the string/brane system; the string has $L=80$ sites and $N=10000$.}\label{fig:cgraph3} \end{center} \end{figure} \subsection{Bound State of Sphere Giants} In this section we consider an open string attached to a bound state of two sphere giants. In the case that a single open string attaches to a bound state of giant gravitons, the Gauss law forces both endpoints of the string to attach to the same brane. To stretch strings between two giants, we need at least two open strings attached to the bound state. 
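Before proceeding, we note that the simultaneous solution of (\ref{sphereboundary}) or (\ref{adsboundary}) with (\ref{numop}), used above for figures 10 and 11, is easily sketched numerically. The parameter values below are those of the figure captions; the fixed-point iteration is our own choice of solution method:

```python
import math

def r0_sq(N, K, J, ads=False):
    """r(0)^2 from the boundary conditions (sphereboundary)/(adsboundary)."""
    pref = 1 + K/N - J/N if ads else 1 - K/N + J/N
    return pref * (K - J) / (1 + K - J)

def solve_J_sphere(N, K, L, iters=200):
    """Fixed-point iteration of J = L a^2/(1 - a^2), a^2 = r(0)^2."""
    J = 0.0
    for _ in range(iters):
        a2 = r0_sq(N, K, J, ads=False)
        J = L * a2 / (1 - a2)
    return J

N, K, L = 10_000, 5_000, 80     # values from the figure captions
J_sphere = solve_J_sphere(N, K, L)
# Back reaction on the sphere giant is small: J << sqrt(N)
assert 0 < J_sphere < math.sqrt(N)

# For the AdS giant at J = 0 the required r(0)^2 already exceeds 1,
# so J = L a^2/(1 - a^2) admits no small solution: back reaction is large.
assert r0_sq(N, K, 0.0, ads=True) > 1
```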
In the case that open strings stretch between the giants, we would expect to see a force between the branes. For a single string attached to the bound state, we should be able to verify that we can recover the physics of a single string attached to a single brane, when the two branes are well separated. Our strategy is again to consider point like string solutions $\dot{r}=r'=0$, $\dot{\phi}=\phi'=0$ to the sigma model. As in previous sections, the bulk equations of motion are clearly satisfied, so that we need only focus on the boundary terms in the Hamiltonian. The wave function $|\Psi\rangle$ for this excited bound state will be a direct product of a ket describing the open string with a ket describing the giant. If we denote the point string state by $|z=re^{i\phi}\rangle_{\rm string}$, we can write $$ |\Psi\rangle =|z=re^{i\phi}\rangle_{\rm string}\otimes \sum_{l,p=1}^{N-1} c_{lp}\hat{W}^\dagger_l \hat{A}_p^\dagger |0\rangle ,$$ where the constants $c_{lp}$ need to be determined. To simplify our analysis, we will assume that $b_1\gg 1$. With this assumption, the boundary Hamiltonian\footnote{We have added $2\lambda z\bar{z}$ - which comes from the (string) bulk Hamiltonian (\ref{bulk}) - to this boundary Hamiltonian.}, when acting on the state $|\Psi\rangle$, is $$ H_{boundary}= 2\lambda z\left[ \sum_{l=1}^{N-1}\sqrt{1-{l\over N}}\hat{W}_{l+1}^\dagger\hat{W}_l+ \sum_{l=1,\, l\ne k}^{N}\sum_{k=1}^{N-1}\sqrt{1-{k\over N}}{1\over k-l}\hat{W}_{k+1}^\dagger \hat{A}_k\hat{A}_l^\dagger \hat{W}_l \right] $$ $$+2\lambda\sum_{l=1}^{N-1}\left(1-{l\over N}\right)\hat{W}_{l+1}^\dagger \hat{W}_{l+1} +2\lambda z\bar{z} $$ $$+2\lambda \bar{z}\left[ \sum_{l=1}^{N-1}\sqrt{1-{l\over N}}\hat{W}_{l}^\dagger\hat{W}_{l+1}+ \sum_{l=1,\, l\ne k}^{N}\sum_{k=1}^{N-1}\sqrt{1-{k\over N}}{1\over k-l}\hat{W}_{l}^\dagger \hat{A}_l \hat{A}_k^\dagger\hat{W}_{k+1} \right]. 
$$ For the case of a single string attached to a giant, we put the giant into a coherent state and fixed the coherent state parameter so that the boundary contribution to the energy vanished. In this section we will show that essentially the same approach works for a bound state of two giants. Using our giant lattice notation, a coherent state for a single sphere giant can be written as $$ |Z\rangle =\sum_l Z^l\hat{W}_l^\dagger |0\rangle .$$ Motivated by this observation, we have studied the state $$ |\Psi\rangle = \sum_{l,p=1}^{N-1} c_{lp}\hat{W}^\dagger_l \hat{A}_p^\dagger |0\rangle ={\cal N} \sum_{l_1=1}^{N-1}\sum_{l_2=1}^{N-1}(Z_1)^{l_1}(Z_2)^{l_2}\hat{W}^\dagger_{l_1}\hat{A}^\dagger_{l_2}|0\rangle ,$$ where $$ {\cal N}^{-2}={Z_1\bar{Z}_1-(Z_1\bar{Z}_1)^{N}\over 1-Z_1\bar{Z}_1}{Z_2\bar{Z}_2-(Z_2\bar{Z}_2)^{N}\over 1-Z_2\bar{Z}_2} .$$ To compute the coherent state expectation value of the terms in the Hamiltonian that do not depend on the $\hat{A}_p$ oscillator $$ H_{W}= 2\lambda z \sum_{l=1}^{N-1}\sqrt{1-{l\over N}}\hat{W}_{l+1}^\dagger\hat{W}_l + 2\lambda\sum_{l=1}^{N-1}\left(1-{l\over N}\right)\hat{W}_{l+1}^\dagger \hat{W}_{l+1} +2\lambda z\bar{z}$$ $$+2\lambda \bar{z} \sum_{l=1}^{N-1}\sqrt{1-{l\over N}}\hat{W}_{l}^\dagger\hat{W}_{l+1}, $$ we will replace $l$ by its coherent state expectation value (this is the same approximation employed in sections 5.1 and 5.2) $$\langle\hat{l}\rangle ={1\over 1-Z_1 \bar{Z}_1}.$$ For a giant graviton, we are interested in the case that $N(1-Z_1 \bar{Z}_1)$ is $O(1)$ so that $Z_1\bar{Z}_1=1-\alpha N^{-1}+O(N^{-2})$. This sets the radius of our giant graviton $$ R^2= R_{S^5}^2{1\over N(1-Z_1 \bar{Z}_1)}=R_{S^5}^2{1\over \alpha }.$$ It is now straightforward to verify that, to leading order at large $N$, we have \begin{equation} \langle\Psi |H_W |\Psi\rangle = 2\lambda (z\bar{Z}_1+ \bar{z}Z_1)\sqrt{1\over N(1-Z_1 \bar{Z}_1)} +2\lambda Z_1\bar{Z}_1\left[{1\over N(1-Z_1 \bar{Z}_1)}\right] +2\lambda z\bar{z}. 
\label{Benergy} \end{equation} We can set (\ref{Benergy}) to zero by choosing \begin{equation} Z_1=R_1 e^{i\Phi_1},\quad \Phi_1=\phi+\pi, \label{twosolved} \end{equation} $$ r=R_1 \sqrt{1-{1\over N(1-R_1^2)}}=\sqrt{1-{1\over N(1-R_1^2)}},$$ with the last equality holding at leading order in $N$. This is a very natural result: the string is located precisely on the radius of the orbit of the giant in spacetime. Next, consider $$ \langle H_4\rangle =\lambda \langle\Psi |\sum_{l=1,\, l\ne k}^{N}\sum_{k=1}^{N-1}\sqrt{1-{k\over N}}{1\over k-l}\hat{W}_{k+1}^\dagger \hat{A}_k\hat{A}_l^\dagger \hat{W}_l |\Psi\rangle ,$$ $$ \approx\lambda \sqrt{1-{1\over N (1-Z_2\bar{Z}_2)}}\sum_{l=1,\, l\ne k}^{N}\sum_{k=1}^{N-1}{1\over k-l}\langle\Psi | \hat{W}_{k+1}^\dagger \hat{A}_k\hat{A}_l^\dagger \hat{W}_l |\Psi\rangle ,$$ $$ = {\cal N}^2 \lambda \bar{Z}_1\sqrt{1-{1\over N (1-Z_2\bar{Z}_2)}} \sum_{k,l=1,\, k\ne l}^{N-1} (\bar{Z}_1 Z_2)^k (\bar{Z}_2 Z_1)^l {1\over k-l}.$$ If we choose $Z_2=R_2e^{i\Phi_1}$, the above expectation value vanishes. It is now easy to see that, with this choice for $Z_2$ and the choice (\ref{twosolved}), we have $$ \langle\Psi |H_{boundary} |\Psi\rangle = 0,$$ so that the contribution to the energy coming from $H_{boundary}$ is minimized. The only loose end is to fix the sum of the number of $Z$s in the giant bound state plus the number of $Z$s in the string to $K$. After taking the coherent state expectation value of the constraint $$ K=\sum_{l=1}^{N-1}l\hat{W}_l^\dagger\hat{W}_l+\sum_{k=1}^{N-1}k\hat{A}_k^\dagger\hat{A}_k +\hat{J},$$ we obtain (recall that $K$ and $L$ are given quantum numbers of the operator - they are not determined by the dynamics) $$K={1\over 1-Z_1\bar{Z}_1}+{1\over 1-Z_2\bar{Z}_2}+{Lr^2\over 1-r^2}.$$ This is a single equation for the two parameters $R_2$ and $r$, indicating that our solution has a single free parameter. This is expected - it specifies how we share the momentum between the two giants in the bound state. 
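The single free parameter can be made explicit numerically. Writing $k_1=1/(1-Z_1\bar{Z}_1)$ and $k_2=1/(1-Z_2\bar{Z}_2)$ for the momenta carried by the two giants, and using $r^2=1-1/(N(1-R_1^2))$ from (\ref{twosolved}) with $k_1=1/(1-R_1^2)$, the constraint fixes $k_2$ once $k_1$ is chosen. A sketch (the parameter values and function names are our own):

```python
def solve_bound_state(N, L, K, k1):
    """Given the momentum k1 = 1/(1 - Z1*Z1bar) of the first giant, return
    (r^2, J, k2) satisfying K = k1 + k2 + L r^2/(1 - r^2)."""
    r2 = 1 - k1 / N               # string endpoint radius from (twosolved)
    J = L * r2 / (1 - r2)         # Z-momentum carried by the string
    k2 = K - k1 - J               # the remainder goes to the second giant
    return r2, J, k2

N, L, K = 10_000, 80, 12_000
for k1 in [7_000, 8_000, 9_000]:  # the free parameter: a one-parameter family
    r2, J, k2 = solve_bound_state(N, L, K, k1)
    assert abs(k1 + k2 + L * r2 / (1 - r2) - K) < 1e-6
    assert 0 < r2 < 1 and k2 > 0
```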
\section{Conclusions} In this article, we have given methods that determine the Cuntz chain Hamiltonians describing the dynamics of open strings attached to giant gravitons. These Hamiltonians are accurate to first order in $g_{YM}^2$. The bulk term of the Hamiltonian has been obtained previously. The contribution of the present article is to obtain an explicit expression for the boundary interactions. There are boundary interactions which allow the string and the membrane to exchange momentum. We have managed to obtain an explicit expression for the back reaction on the membrane as a result of these interactions. Although the interactions are rather complicated, we have found a natural interpretation for the coefficients which appear. For example, there are coefficients that are responsible for gracefully switching off certain interactions as the branes become coincident. Further, we have found an ``effective field theory limit'' in which the Hamiltonians simplify considerably. The operators we consider are labeled by Young diagrams; open strings are denoted by filling the boxes of the Young diagram with the label of the open string. We have only considered attaching a single string to each system of giants in this article. One interesting feature of our results is that the Young diagrams labeling the operators have a clear geometrical interpretation. Indeed, one of the processes allowed by the boundary interactions involves a string detaching from the brane to which it is attached and reattaching to a second brane in the system. In terms of the labels for the operators, the open string hops from one box in the Young diagram to a different box, and in the process it changes both the row and the column it is in. We have found clear signals that we should interpret the number of boxes separating the box that the string starts in from the box that the string lands up in, as we move on the right hand side of the Young diagram, as a distance. 
This distance is related to the radial coordinate of the two dimensional $y=0$ plane on which the LLM boundary conditions are specified\cite{Lin:2004nb,Balasubramanian:2005mg}. The interaction displays an inverse dependence on this distance. The effective theory describing these open strings should be a Yang-Mills theory, local not on the space on which the original field theory is defined, but rather on the $3+1$ dimensional worldvolume of the brane we are describing\cite{Balasubramanian:2004nb}. This new space should emerge from the matrix degrees of freedom of the ${\cal N}=4$ super Yang-Mills theory. The ${1\over r}$ potential which would arise from the exchange of massless particles in three spatial dimensions thus looks rather natural. The Polyakov action for both closed\cite{Kr} and open strings\cite{CuntzChain} emerges as a semi-classical limit from a spin chain (or equivalently for us, from a Cuntz chain) that can be derived directly from the gauge theory. In this work we have managed to provide a complete account of the back reaction of the string on the membrane. In particular, we have introduced a Cuntz oscillator chain (which is equivalent to a spin chain) for the giant graviton itself. This Cuntz oscillator chain keeps track of the ``motion of the corners'' of the Young diagram, which describes the giant bound state. It is natural to expect that the semi-classical limit of this Cuntz chain will make contact with membrane dynamics in the dual gravitational description. We have tried to build a toy model in which we consider a string with a single site. The advantage of the toy model is that it is numerically tractable. For this ``short string'' toy model the planar approximation used in computing the contractions between open string words is not valid. Thus, our Hamiltonians do not accurately describe this ``short string limit'' and the numerical results are not to be trusted. 
Finally, we have considered the opposite limit in which the number of sites in the open string, $L$, is taken to be large, $L\sim O(\sqrt{N})$. In this semi-classical limit, the dynamics of the Cuntz chain is governed by a sigma model. We have argued that the description of strings attached to sphere giants is reliably captured by the sigma model dynamics. This was argued by showing that back reaction on the giant is a small effect. In contrast to this, the back reaction on an AdS giant is so large that the use of the sigma model to describe the open string dynamics is not valid. Based on this result, we conjecture that the AdS giant is unstable against gravitational decay, although a detailed study of this question is not within reach of our sigma model description. Finally, we studied an open string attached to a bound state of two sphere giants. In the case that a single open string attaches to a bound state of giant gravitons, the Gauss law forces both endpoints of the string to attach to the same brane. We have recovered the physics of a single string attached to a single brane, when the two branes are well separated, as expected. There are a number of directions in which our results can be extended. It would be interesting to extend our results to include the case that two strings are attached to the system of giants. When we have two (or more) strings attached to the giant graviton bound state, we can have strings stretching between different branes. Studying this system would allow us to compute the force between two branes. We have initiated a study of the dynamics of our Cuntz chain Hamiltonians. A natural question to ask is whether this dynamics is integrable. Following the discussion of \cite{Berenstein:2006qk}, if this is the case, integrability might not be realized by a Bethe Ansatz. See \cite{Sam} for a recent discussion of this question. 
Recently an extremely interesting proposal for determining the metric of LLM geometries from closed string sigma models constructed as the semi-classical limit of Cuntz chain dynamics was given in \cite{SamMetric}. Can this be extended to open string dynamics for open strings attached to giants probing the general ${1\over 2}$ BPS (LLM) geometry? Given the technology we have developed for operators dual to excited giant gravitons, it may be possible to systematically construct the string field theory describing open strings attached to giant gravitons. A powerful framework that has already given impressive results for closed string field theory\cite{friends} exploits the methods of collective field theory\cite{coll}. Finally, we have restricted ourselves to the $SU(2)$ sector in this article. This corresponds to studying strings with two angular momenta on the sphere. One could generalize our analysis to the full $SO(6)$ excitations on the sphere and further, to include spin in the AdS space. One could also consider attaching strings to giants that preserve less supersymmetry\cite{aristos}. {\vskip 0.4truecm} \noindent {\it Acknowledgements:} We would like to thank Michael Abbott, Simon Connell, Sera Cremonini, Aristomenis Donos, Antal Jevicki, Charles Kasl, Jeff Murugan, Sanjaye Ramgoolam, Joao Rodrigues and especially David Berenstein, for pleasant discussions and/or helpful correspondence. This work is based upon research supported by the South African Research Chairs Initiative of the Department of Science and Technology and National Research Foundation. Any opinion, findings and conclusions or recommendations expressed in this material are those of the authors and therefore the NRF and DST do not accept any liability with regard thereto. \vfill\eject
\section{Introduction}\label{sec_intro} It is commonly held among the wider physics community that the topic of classical measurement is essentially trivial. I don't mean the modeling in physical detail of any one laboratory setup, which of course can get very complicated, but just the examination of ``measurement'' as a bare-bones physical process, idealized away from as many complications as possible; a theoretical physicist's model of measurement. One way of stating the widely held intuition is that there is in principle no obstruction in classical physics to measuring any observable of a system with arbitrary precision while disturbing the system arbitrarily little. This intuition is in sharp contrast to the situation in quantum physics, where the Heisenberg uncertainty principle (specifically the Ozawa inequality~\cite{ozawa2003universally}) asserts just such a limit. Surely influenced by this attitude, there is a correspondingly sharp contrast between the little attention ever paid to the measurement process in classical physics, and the large attention paid over the decades (deservedly) to that same process in quantum physics. To the best of my knowledge, only a handful of examples can be associated with the first category: Heisenberg's own thought experiments in the late 1920s~\cite{heisenberg1949physical} (particularly Heisenberg's microscope); although they served as the motivation for his quantum uncertainty principle, they were essentially classical arguments, augmented only by Einstein's theory of the photon. In 1996 Lamb and Fearn~\cite{lamb1996classical} set up the problem of a classical point particle (the system) in interaction with a second point particle (the ``apparatus'') subject to noise. They stopped short of a thorough analysis; their primary interest being the quantum case. 
Recently Morgan~\cite{morgan2020algebraic} and Katagiri~\cite{katagiri2020measurement} made use of KvN formalism in separate attempts to use quantum measurement theory to examine measurement in classical mechanics. The only long-lasting foray into classical measurement seems to be the body of work surrounding Maxwell's demon and the foundations of thermodynamics. The demon was first conceptualized by Maxwell in 1867~\cite{collier1990two} as a ``very observant and neat-fingered being'' capable of monitoring the molecules of a gas, and, by opening and closing a small door without exerting any work, of sorting the high-energy molecules from the low, thus creating a temperature gradient. This amplifier of fluctuations, if it existed, could then be used to run a perpetual motion machine of the second kind, violating the second law. Writing in 1929, Szil\'ard~\cite{szilard1929entropieverminderung} realized that, to save the second law, somewhere in the measurement process entropy had to be produced. Soon afterwards von Neumann~\cite{von1932mathematical}, in his reading of Szil\'ard, pointed to information acquisition as the key step incurring entropy cost---a claim that would later be developed by Brillouin~\cite{brillouin1951maxwell, brillouin1953negentropy}. Building on work by Landauer~\cite{landauer1961irreversibility}, Bennett in 1982~\cite{bennett1982thermodynamics, bennett1987demons} argued against Brillouin, pointing instead to erasure of the measurement record as the key step incurring entropy cost. This 150-year-long inquiry seems to have finally neared a close in recent years, with the realization that the entropy cost of measurement can in fact be traded between the acquisition and erasure steps, as reviewed in~\cite{sagawa2012thermodynamics}. Crucially, in the absence of a mature theory of classical measurement, the latter rigorous analyses had to rely on quantum measurement theory to settle the problem. 
Earlier attempts had it worse: with neither a mature classical nor quantum theory of measurement available, most of them proceeded by contradiction; requiring that measurement (whatever its mechanism may be) not be incompatible with the laws of thermodynamics. The above illustrates three points which I would like to contend: (i) despite the widely held intuition, measurement in classical physics is far from trivial; (ii) it is a woefully underdeveloped subject; and (iii) unacknowledged, it is a subject whose absence has long held back progress in physics. To address the issue, a reasonable aim would be a theory of measurement in the context of Hamiltonian mechanics, which is the mathematical framework at the foundation of classical physics. The research program I'm suggesting can be summarized as: to bring Bayesian probability to bear on an ontology governed by Hamiltonian mechanics, with the full strength, and no more, that is permitted by the geometro-algebraic structure of the ontology. The present paper aims to kickstart this program. We begin by noting that the assumption of perfect information regarding the initial state of the measuring apparatus is unrealistic. In fact it is ruled out as a matter of principle by the third law of thermodynamics; initial uncertainty must be present if for nothing other than finite-temperature thermal noise. Next we posit a model of the measurement as a physical process. While some assumptions are made concerning the systems that can be used as measuring apparatuses, no restrictions are placed on the system under measurement. This model enjoys substantial generality while at the same time lending itself to Bayesian analysis. 
We then show that, in the process of measurement, the uncertainty in the state of the apparatus propagates into two uncertainties regarding the object-system: one is the precision of the measurement; and the other an uncertainty in the magnitude of the disturbance caused upon the system---that is, an observer effect. And we find that these two are bound by a Heisenberg-like precision-disturbance relation. In particular, while we find no obstacle in principle to making a measurement arbitrarily precise, we do find an obstruction to realizing such a measurement without disturbance. Thus our findings are at odds with the widely held intuition. We argue that our measurement model is maximally efficient, in the sense that it saturates the precision-disturbance relation, while deviations from it decrease efficiency. Next, we derive a novel pair of Liouville-like master equations describing the dynamics of (a rational agent's knowledge of) a system under continuous measurement, according to whether the measurement record is read or discarded. And we show that the fine-grained Shannon entropy is a Lyapunov function (i.e.~$\dot S\geq0$) of the dynamics when the record is discarded. Finally, we suggest that the master equation for the case of discarded record doubles as a description of an open thermodynamic system. In this case the above result constitutes a novel H-theorem detailing entropy increase in non-equilibrium thermodynamics. I hope our topic will be of interest to several fields of physics, particularly to (non-equilibrium) statistical physics and to the foundations of quantum mechanics. The rest of the paper is organized as follows. We begin in Section~\ref{sec_HamMec} by reminding the reader of the basic concepts and equations of Hamiltonian mechanics. Section~\ref{sec_measurement} does the conceptual heavy lifting; there we construct our measurement model and obtain the basic results on which the rest of the paper is based. 
In Section~\ref{sec_precDistRel} we arrive at the precision-disturbance relation. Section~\ref{sec_continuousMeasurement} considers the problem of continuous weak measurement over time. The method of analysis there is drawn directly from the field of continuous quantum measurement. In Section~\ref{sec_discussion} we discuss a number of relevant topics in the new light of our results: the similarities, and likely coexistence in the real world, of the classical and quantum uncertainty relations; the approach to thermal equilibrium in statistical physics; the subtle interplay between ontology and epistemology in a theory of measurement; and the epistemic limitations inherent to classical Hamiltonian ontology. The paper ends by contemplating some of the many possibilities ahead; both the concrete and the speculative. \section{Brief recap of Hamiltonian mechanics}\label{sec_HamMec} Hamiltonian mechanics is a confluence of differential, algebraic and symplectic geometry, Lie algebras and Lie groups. A wonderful resource for the topic is~\cite{arnol1982mathematical}. We consider a continuous-time dynamical system over a $2n$-dimensional symplectic manifold, called \emph{phase space}. The \emph{observables} of the system (e.g.~position, momentum, angular momentum, etc.) are the smooth, single-valued, real-valued functions defined globally over phase space. By convention we take observables to not depend explicitly on time. (With this convention, any explicit time-dependence is regarded as specifying a different observable at each moment in time.) The points in phase space can be expressed in local \emph{canonical coordinates} $(q,p)=(q_1,\dots,q_n,p_1,\dots,p_n)$ (Darboux's theorem). 
The state of the system evolves over time according to \emph{Hamilton's equations}, \begin{equation}\label{eq_HamsEqs} \dot{q}(t)=\frac{\partial H}{\partial p}(q(t),p(t);t),\ \ \dot{p}(t)=-\frac{\partial H}{\partial q}(q(t),p(t);t), \end{equation} where at each moment the system's \emph{Hamiltonian}, $H$, is an observable. Notice that ``Hamiltonian" and ``$H$" are indexical terms; they don't specify any concrete function over phase space, but refer to whichever observable happens to serve as the generator of time-evolution (as in~\eqref{eq_HamsEqs}) for a given system at a given time. At each moment Hamilton's equations describe a \emph{flow} $\Phi^H_\tau$ on phase space. Along the integral curves of this flow the value of any observable $A(q,p)$ changes as \begin{equation}\label{eq_eom} \dot A=\{A,H\}, \end{equation} where $\{A,H\}$ denotes the \emph{Poisson bracket}, \begin{equation}\label{eq_bracket} \{A,H\}\triangleq\sum_{j=1}^n\left(\frac{\partial A}{\partial q_j}\frac{\partial H}{\partial p_j}-\frac{\partial A}{\partial p_j}\frac{\partial H}{\partial q_j}\right). \end{equation} (Note that~\eqref{eq_eom} follows from~\eqref{eq_HamsEqs} after application of the chain rule to $\frac{d}{dt}A(q,p)$; but also contains~\eqref{eq_HamsEqs} as special cases when $A$ equals one of the canonical coordinates.) Two observables $A,B$ for which $\{A,B\}$ is identically zero are said to be in \emph{involution} with each other. In this case, by~\eqref{eq_eom}, the value of $A$ remains constant along the integral curves of the flow $\Phi^B_t$ (and vice versa). It follows that any observable in involution with the Hamiltonian is a \emph{constant of the motion}. In particular, if $H$ is not explicitly time-dependent then it is itself a constant of the motion (conservation of energy). Including itself, a given observable can be in involution with as few as one and as many as $2n-1$ independent observables, but only as many as $n$ can be all in involution with one another. 
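As a quick numerical illustration (not part of the paper's development), the bracket~\eqref{eq_bracket} and the equation of motion~\eqref{eq_eom} can be checked for the 1D simple harmonic oscillator using finite-difference derivatives; the point $(q_0,p_0)$ is arbitrary:

```python
def poisson(A, H, q, p, h=1e-6):
    """Finite-difference Poisson bracket {A,H} at (q,p), one degree of freedom."""
    dA_dq = (A(q + h, p) - A(q - h, p)) / (2 * h)
    dA_dp = (A(q, p + h) - A(q, p - h)) / (2 * h)
    dH_dq = (H(q + h, p) - H(q - h, p)) / (2 * h)
    dH_dp = (H(q, p + h) - H(q, p - h)) / (2 * h)
    return dA_dq * dH_dp - dA_dp * dH_dq

H = lambda q, p: 0.5 * (q**2 + p**2)   # sho Hamiltonian, omega = 1
q0, p0 = 1.3, -0.7                     # arbitrary phase-space point

# eq_eom: qdot = {q,H} = p and pdot = {p,H} = -q
assert abs(poisson(lambda q, p: q, H, q0, p0) - p0) < 1e-6
assert abs(poisson(lambda q, p: p, H, q0, p0) + q0) < 1e-6
# H is in involution with itself: energy is a constant of the motion
assert abs(poisson(H, H, q0, p0)) < 1e-6
```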
On the other hand, if $\{A,B\}=1$ identically then $A,B$ are said to be \emph{conjugate} to each other. In this case $B$ is also said to be ``the" \emph{generator of translations} in $A$ (and vice versa); because, by~\eqref{eq_eom}, the value of $A$ changes monotonically at unit rate along the integral curves of the flow $\Phi^B_t$. A given observable, $A$, may fail to have a conjugate observable. In this case it is still possible to speak of a locally-defined conjugate ``quantity", $B$, which satisfies $\{A,B\}=1$ but fails to satisfy the stringent definition of a bona fide observable. This is illustrated on the 2D phase space by the observable $I=\frac12(q^2+p^2)$ (the Hamiltonian for the simple harmonic oscillator); whose conjugate quantity $\phi=\arg(q+ip)$ (the phase of oscillation for the sho) either fails to be globally continuous, or else fails to be single-valued (depending on one's choice of definition). Notice that the components of $(q,p)$ satisfy the \emph{canonical relations} \begin{equation}\label{eq_ccr} \{q_i,q_j\}=\{p_i,p_j\}=0, \quad \{q_i,p_j\}=\delta_{ij}, \end{equation} so each canonical coordinate is in involution with all other coordinates but one, to which it is conjugate. A diffeomorphism of phase space, $(q,p)\mapsto(q',p')$, such that $(q',p')$ again satisfy these canonical relations is said to be a \emph{canonical transformation}. Canonical transformations have Jacobian determinant equal to 1, so they preserve the \emph{Liouville measure} of phase space volume, $d^nqd^np=d^nq'd^np'$. For any flow parameter, $\tau$, the Hamiltonian flow $\Phi^H_\tau$ is an example of an (active) canonical transformation; in particular, Hamiltonian flow preserves the Liouville measure (Liouville's theorem). 
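The sho example just given can be checked numerically. In this illustrative sketch (not from the paper), a finite-difference Poisson bracket confirms the conjugacy $\{I,\phi\}=1$ at any point away from the branch cut of $\phi$:

```python
import numpy as np

def poisson(A, B, q, p, h=1e-6):
    """Finite-difference Poisson bracket {A,B} at (q,p), one degree of freedom."""
    dA_dq = (A(q + h, p) - A(q - h, p)) / (2 * h)
    dA_dp = (A(q, p + h) - A(q, p - h)) / (2 * h)
    dB_dq = (B(q + h, p) - B(q - h, p)) / (2 * h)
    dB_dp = (B(q, p + h) - B(q, p - h)) / (2 * h)
    return dA_dq * dB_dp - dA_dp * dB_dq

I = lambda q, p: 0.5 * (q**2 + p**2)     # sho action (energy at omega = 1)
phi = lambda q, p: np.arctan2(p, q)      # phase of oscillation, arg(q + i p)

# {I, phi} = 1 wherever phi is smooth, i.e. away from its branch cut
assert abs(poisson(I, phi, 1.3, -0.7) - 1.0) < 1e-5
```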
Changes of coordinates implemented by (passive) canonical transformations are particularly convenient since they preserve the simple form of the Liouville measure, the equations of motion~(\ref{eq_HamsEqs},~\ref{eq_eom}), and the Poisson bracket~\eqref{eq_bracket}. \section{A model of measurement in a Hamiltonian world}\label{sec_measurement} Suppose we wished to measure an observable $A(q,p)$ of the system~\eqref{eq_HamsEqs} at time $t_0$. In the world of Hamiltonian mechanics this can only be done by coupling the system to a measuring apparatus, where the joint system ($=$~object-system $+$ apparatus) is itself a Hamiltonian system, with \begin{align} H_\text{joint}(q,p,x,y;t)&=H(q,p;t)+H_\text{app}(x,y;t)\notag\\ &\quad+H_\text{int}(q,p,x,y;t).\label{eq_jointHam} \end{align} Here $(x,y)$ are canonical coordinates on the $2m$-dimensional phase space of the apparatus; $H_\text{app}$ is the apparatus' Hamiltonian; and $H_\text{int}$ is the interaction between system and apparatus, which we will assume to be switched on only briefly around $t=t_0$. We now stipulate a model for the measurement. \subsection{System-apparatus coupling}\label{sec_coupling} Consider the ``gauge", or ``pointer display", of the apparatus; by which I mean the observable of the apparatus which, after interaction with the system, we want to reflect the sought-after value of $A$ at time $t_0$. Denote this observable of the apparatus by $Q(x,y)$. Suppose $Q$ has a conjugate observable, $P(x,y)$. For the interaction to imprint the value of $A$ on $Q$, the interaction Hamiltonian must involve $A$ and the conjugate quantity to $Q$, namely $P$; because this is the generator of translations in $Q$~\cite{fn_CJL}. 
The simplest interaction of this form is the product \begin{equation}\label{eq_interactionH} H_\text{int}(q,p,x,y;t)=\sqrt{k}\delta(t-t_0)A(q,p)P(x,y), \end{equation} where the constant $k>0$ represents the \emph{strength} of the measurement (see Section~\ref{sec_integratingHam}), and $\delta(t-t_0)$ is the Dirac delta function indicating that the measurement is idealized as taking place instantaneously at $t_0$. Without loss of generality we will assume that $P^2$ has physical dimensions of energy. (Otherwise this can be achieved by scaling $(Q,P)\mapsto(aQ, P/a)$ for some suitable choice of constant $a$.) The physical dimensions of $k$ are then $[k]=\text{energy}\cdot\text{time}^2/[A]^2$. \subsection{Readying the apparatus} Let us take a step back to consider how to initialize the apparatus into its ``ready state" prior to interaction at $t_0$. Being, as we are, in the process of defining what we mean by ``measurement", on pain of circularity we shouldn't appeal to measurement to assess the state of the apparatus, as might be needed to actively manipulate it into a state ready for measurement of the system. This difficulty can be circumvented by letting low-temperature thermalization take care of confining the state of the apparatus to a narrow region of its phase space. The region in question can be specified experimentally by setting up a deep energetic well there---a ``trap". This trap could be due to a confining gravitational or electrostatic potential; a combination of near-field electric and magnetic fields; a light field; atomic chemical bonds; etc. We write \begin{equation} H_\text{app}(x,y;t)=H_\text{app}^\text{own}(x,y)+\Pi(t)H_\text{trap}(x,y), \end{equation} where $H_\text{app}^\text{own}$ is the apparatus' own, or internal, Hamiltonian, which we take to be time-independent; and $\Pi(t)$ is a rectangular step-function taking only the values $1/0$, describing the on/off switch of the trap. 
The trap will be switched off for all $t> t_0$; it is only switched on in the time leading up to $t_0$, to help bring the apparatus into its ready state, as we will now describe. The trap consists of a deep energetic well which, when switched on ($\Pi(t)=1$), sets the ground state of the apparatus at some point $(x^*,y^*)$. Without loss of generality we may set our coordinates such that $(x^*,y^*)=(0,0)$, and we may assume that the corresponding energy is $H_\text{app}(x^*,y^*)|_\text{trap on}=0$. (Otherwise these conditions can be met by shifted redefinitions of $x,y, H_\text{app}$.) We Taylor-expand $H_\text{app}(x,y)|_\text{trap on}$ around the ground state, obtaining a positive-definite quadratic form: \begin{align} H_\text{app}(x,y)\Big|_\text{trap on}&=H_\text{app}^\text{own}(x,y)+H_\text{trap}(x,y)\notag\\ &=\frac{1}{2} \begin{pmatrix} x &y \end{pmatrix} \hat M \begin{pmatrix} x\\ y \end{pmatrix} +\mathcal{O}(3),\label{eq_HappWithTrap1} \end{align} where $\hat M$ is a symmetric positive-definite $2m$-by-$2m$ matrix of coefficients, and $\mathcal{O}(3)$ denotes all higher-degree terms in the series. As shown by Whittaker~\cite{whittaker1959treatise} (see also theorem by Williamson~\cite{williamson1936algebraic}, explained in~\cite[appendix~6]{arnol1982mathematical}), there exists a local linear canonical transformation $(x,y)\mapsto (z,w)$ which reduces~\eqref{eq_HappWithTrap1} to the normal form \begin{align} H_\text{app}(z,w)\Big|_\text{trap on}&=\frac{1}{2}\sum_{i=1}^{m}(b_i^2z_i^2+w_i^2)+\mathcal{O}(3).\label{eq_HappWithTrap2} \end{align} Here $b_1\geq b_2\geq\dots\geq b_m>0$ are constants with physical dimensions of angular frequency; they are the natural frequencies of oscillation of the apparatus around its trapped ground state. Now to ready the apparatus: while the trap is on, the apparatus is brought into contact with a thermal bath at some temperature $T=1/\beta k_B$, allowed to equilibrate, and then isolated again~\cite{fn_noSpoil}. 
After this our knowledge about the state of the apparatus is given by the Boltzmann probability distribution \begin{equation}\label{eq_canonicalDist1} \rho(z,w)\,d^mzd^mw\propto e^{-\beta H_\text{app}(z,w)|_\text{trap on}}\,d^mzd^mw. \end{equation} Note that in the time between isolation from the bath and measurement at $t_0$ the evolution of the apparatus will preserve this distribution, as opposed to spoiling the preparation, since $H_\text{app}|_\text{trap on}$ is constant under the phase-space flow generated by itself and such flow preserves the Liouville measure $d^mzd^mw$. At this point we make three requirements that constrain the apparatuses, traps, and temperatures allowed by our model. (i) We require that the trap be harmonic enough, or the temperature be low enough, that in the Boltzmann distribution~\eqref{eq_canonicalDist1} the higher-degree terms in~\eqref{eq_HappWithTrap2} can be neglected. (ii) We require that at least one of the coordinates $z_i$ be in involution with $H_\text{app}^\text{own}$. Let $i=i^*$ be the index of this special coordinate. (If given a choice, we want the associated frequency $b_{i^*}$ to be as large as possible, for a reason to be seen in Section~\ref{sec_precDistRel}.) The condition means that $z_{i^*}$ will be a constant of the motion of the apparatus when the trap is switched off---a desirable property for the pointer $Q$ (introduced in Section~\ref{sec_coupling}); so that the measurement record is stable after the interaction has passed. We thus identify the pointer $Q\triangleq z_{i^*}$ and its conjugate $P\triangleq w_{i^*}$. We denote the corresponding frequency by $\Omega\triangleq b_{i^*}$. Note the physical interpretation of $\Omega$ as the natural frequency of oscillation of the pointer around its trapped state. 
Since $Q,P$ are observables, in making these identifications we're implicitly assuming that (iii) the pair of conjugate local quantities $(z_{i^*},w_{i^*})$ are globally extendable to smooth single-valued functions on phase space. From now on $Q, P$ are the only observables of the apparatus with which we will be concerned. With the above requirements met, we can easily marginalize over all other variables in~\eqref{eq_canonicalDist1} to find the probability distribution over the pointer and its conjugate: \begin{equation}\label{eq_canonicalDist} \rho(Q,P)\,dQ\,dP=\frac{\beta\Omega}{2\pi}e^{-\frac{\beta\Omega^2}{2}Q^2-\frac{\beta}{2}P^2}dQ\,dP. \end{equation} This is the apparatus ready state. It describes a preparation in which the pointer and its conjugate have been set independently to zero, but there remains some uncertainty on their exact values. \subsection{Integrating Hamilton's equations}\label{sec_integratingHam} Integrating Hamilton's equations for the joint system, the effect of the interaction~\eqref{eq_interactionH} is to instantaneously change the state of both object-system and apparatus as~\cite{fn_integratingHam} \begin{subequations}\label{eq_IntegrateHamsEqs} \begin{align} \begin{pmatrix} q\\ p \end{pmatrix}_{t_0^+} &= \Phi^A_{\sqrt{k}P} \begin{pmatrix} q\\ p \end{pmatrix}_{t_0^-}\label{eq_flow}\\ \begin{pmatrix} Q\\ P \end{pmatrix}_{t_0^+} &= \begin{pmatrix} Q+\sqrt{k}A(q,p)\\ P \end{pmatrix}_{t_0^-}\label{eq_record} \end{align} \end{subequations} where $\Phi^A_{\tau}$ is the transformation on the system's phase space that implements flowing for a ``time" $\tau$ under the Hamiltonian flow generated by $A$. 
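The kick maps~\eqref{eq_IntegrateHamsEqs} are simple enough to simulate directly. Here is a minimal sketch (not from the paper; all parameter values illustrative) for a 1-dof system and a position measurement $A(q,p)=q$, for which the flow $\Phi^A_\tau$ leaves $q$ fixed and shifts $p\mapsto p-\tau$:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, k, Omega = 2.0, 0.5, 3.0            # illustrative settings

# Apparatus ready state (eq_canonicalDist): independent Gaussians
Q = rng.normal(0.0, 1.0 / np.sqrt(beta * Omega**2))
P = rng.normal(0.0, 1.0 / np.sqrt(beta))

# System state just before t0; measured observable A(q,p) = q
q, p = 0.8, -0.4

# eq_flow with "flow time" sqrt(k)*P, and eq_record for the pointer
q_after, p_after = q, p - np.sqrt(k) * P
Q_after, P_after = Q + np.sqrt(k) * q, P

A_star = Q_after / np.sqrt(k)             # eq_recordA: scaled pointer reading
```

Note that $q$ itself emerges undisturbed: for this choice of $A$ the observer effect lands entirely on the conjugate coordinate $p$.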
Having initialized the apparatus to its ready state~\eqref{eq_canonicalDist} prior to the interaction, then, in view of~\eqref{eq_record}, after the interaction our state of knowledge of the apparatus, conditional on a given state of the system at the time of measurement, is \begin{align} \rho(Q,P|q,p)dQdP&=\frac{\beta\Omega}{2\pi}e^{-\frac{\beta\Omega^2}{2}\left(Q-\sqrt{k}A(q,p)\right)^2-\frac{\beta}{2}P^2}dQdP.\label{eq_postInt} \end{align} Note that the dependence on $(q,p)$ is only through $A(q,p)$. The trap on the apparatus is released at the moment of measurement ($\Pi(t)=0$ for $t>t_0$), so that the apparatus Hamiltonian returns to its internal setting $H_\text{app}^\text{own}$. By construction the pointer $Q$ is in involution with this Hamiltonian, so it constitutes a stable record of the measurement. At this time (i.e.~any time after $t_0$) we read the pointer on the apparatus, yielding some definite value $Q^*$, or equivalently \begin{equation}\label{eq_recordA} A^*\triangleq\frac{Q^*}{\sqrt{k}}. \end{equation} ($A^*$ is just the reading on the pointer with the scale set appropriately.) Note that this does not mean that the value of $A$ at the time of measurement is $A^*$! Rather, given this datum, the likelihood function for the value of $A$ at the time of measurement is, from~\eqref{eq_postInt}, \begin{align}\label{eq_likelihood} \rho(A^*|A)dA^*=\sqrt{\frac{\beta k\Omega^2}{2\pi}}e^{-\frac{\beta k\Omega^2}{2}\left(A^*-A\right)^2}dA^*. \end{align} This completes our model of measurement. The \emph{measurement record} $A^*$, or equivalently the likelihood function~\eqref{eq_likelihood} (with $A^*$ specified), constitutes the outcome of the measurement. \subsection{Consuming the measurement} There are two operations that one, as a recipient, should perform to consume the information of the measurement. 
The first is triggered by the information that the observable $A$ of the system was measured at time $t_0$ by the stipulated procedure, with specified settings $(\beta,k,\Omega)$. As seen in~\eqref{eq_flow}, the interaction involved in this measurement affects the state of the system by causing it to move along the flow generated by $A$ for some unknown ``time" $\sqrt{k}P$. If one knew the value of $P$ then one should change their probability distribution about the state of the system at time $t_0$ according to \[ \rho(q,p;t_0^-)\mapsto\rho(q,p;t_0^+)=\left[\left(\Phi^{A}_{\sqrt{k}P}\right)_*\rho\right](q,p;t_0^-), \] where $(\Phi^{A}_{\tau})_*$ denotes the push-forward of the transformation $\Phi^{A}_{\tau}$, defined as $[(\Phi^{A}_\tau)_*\rho](q,p)\triangleq\rho(\Phi^{A}_{-\tau}(q,p))$; and the $+/-$ superscripts on $t_0$ are meant as a reminder that this update reflects a physical transition of the system that took place in a short time interval around $t_0$. But one does not know the value of $P$; all that is known about it is expressed by the probability distribution~\eqref{eq_canonicalDist}. One folds this in by marginalizing over $P$: \begin{equation}\label{eq_marginalize} \rho(q,p;t_0^+)=\sqrt{\frac{\beta}{2\pi}}\int dP\,e^{-\frac{\beta}{2}P^2}\left[\left(\Phi^{A}_{\sqrt{k}P}\right)_*\rho\right](q,p;t_0^-). \end{equation} The second operation is triggered by the information of the measurement outcome~\eqref{eq_likelihood}. One assimilates this by performing the Bayesian update $\rho_\text{prior}(q,p;t_0)\mapsto\rho_\text{posterior}(q,p;t_0)$, with \begin{align} \rho&_\text{posterior}(q,p;t_0)\propto\rho_\text{prior}(q,p;t_0)\rho(A^*|A(q,p))\notag\\ &\qquad\qquad\quad \propto \rho_\text{prior}(q,p;t_0)e^{-\frac{\beta k\Omega^2}{2}\left(A^*-A(q,p)\right)^2},\label{eq_BayesianUpdate} \end{align} where the omitted factor of proportionality is just the normalization, obtained by integrating the expression shown over the system's phase space ($\int d^nqd^np$). 
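For a concrete instance of~\eqref{eq_marginalize} (a sketch under illustrative assumptions, not from the paper), take $A=q$ on a 1-dof system: the push-forward then shifts $p$, so marginalizing over $P$ amounts to convolving the $p$-marginal of $\rho$ with a Gaussian of standard deviation $\sqrt{k/\beta}$:

```python
import numpy as np

beta, k = 2.0, 0.5                  # illustrative settings
eta = np.sqrt(k / beta)             # spread of the unknown shift sqrt(k)*P

p_grid = np.linspace(-6, 6, 481)
dp = p_grid[1] - p_grid[0]

# Prior p-marginal of rho (narrow Gaussian, purely illustrative)
rho = np.exp(-0.5 * (p_grid / 0.3)**2)
rho /= rho.sum() * dp

# eq_marginalize for A = q: Gaussian smoothing of the p-marginal
kernel = np.exp(-0.5 * (p_grid / eta)**2)
kernel /= kernel.sum() * dp
rho_post = np.convolve(rho, kernel, mode="same") * dp

var_prior = (p_grid**2 * rho).sum() * dp
var_post = (p_grid**2 * rho_post).sum() * dp
# Variances add under convolution: the unmonitored kick broadens our
# knowledge of p by eta^2
assert abs(var_post - var_prior - eta**2) < 1e-3
```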
Since multiplication by a function of $A$ commutes with the push-forward $\left(\Phi^{A}_{\tau}\right)_*$, operations~(\ref{eq_marginalize},~\ref{eq_BayesianUpdate}) can be performed in either order to the same effect. If~\eqref{eq_BayesianUpdate} is performed first, it corresponds to updating one's knowledge about the state the system was in before the measurement was made (i.e.~at $t_0^-)$; if second, about the state the system was left in by the measurement. Notice that if only the fact of the measurement is revealed but not the outcome (in this case we say the outcome was \emph{discarded}), then one should only perform operation~\eqref{eq_marginalize}, not~\eqref{eq_BayesianUpdate}. Finally, if a single number is desired as an objective quantification of the measured observable (i.e.~not biased by anyone's prior), the maximum-likelihood estimate can be given, from~\eqref{eq_likelihood}: \begin{gather} A\Big|_{t_0}=A^*\pm\frac{1}{\sqrt{\beta k\Omega^2}}\label{eq_outcome} \end{gather} (mean $\pm$ standard deviation)~\cite{fn_MLE}. We will refer to \begin{equation}\label{eq_precision} \epsilon_A\triangleq\frac{1}{\sqrt{\beta k\Omega^2}} \end{equation} as the \emph{precision} of the measurement, or of the measurement's outcome. (But notice that to translate this to an uncertainty in a given agent's knowledge of $A$ we must first combine the likelihood function with the agent's prior, as in~\eqref{eq_BayesianUpdate}.) At this point it is clear why $k$ represents the strength of the measurement: the larger $k$ the higher the measurement's precision~\cite{nt_AstarOK}. 
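The likelihood~\eqref{eq_likelihood} can also be checked by brute force: repeating the readout over an ensemble of freshly thermalized apparatuses, with the value of $A$ held fixed, should give pointer readings that are unbiased with spread $\epsilon_A$. A Monte Carlo sketch (illustrative settings; \texttt{A\_true} is a hypothetical fixed value of $A$ at $t_0$):

```python
import numpy as np

rng = np.random.default_rng(2)
beta, k, Omega = 2.0, 0.5, 3.0    # illustrative settings
A_true = 1.1                      # hypothetical value of A at t0

# Many repetitions of the readout with freshly thermalized apparatuses:
# the pointer starts at Q0 ~ N(0, 1/(beta*Omega^2)) and picks up sqrt(k)*A
Q0 = rng.normal(0.0, 1.0 / np.sqrt(beta * Omega**2), size=200_000)
A_star = (Q0 + np.sqrt(k) * A_true) / np.sqrt(k)   # eq_record then eq_recordA

eps = 1.0 / np.sqrt(beta * k * Omega**2)           # predicted precision
assert abs(A_star.mean() - A_true) < 5e-3          # unbiased
assert abs(A_star.std() - eps) < 5e-3              # spread matches eq_likelihood
```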
\section{A Heisenberg-like precision-disturbance relation in Hamiltonian mechanics}\label{sec_precDistRel} The measurement model we've just stipulated is characterized by the triad $(\beta,k,\Omega)$; respectively the (inverse) temperature of the thermal bath ($=$~temperature of apparatus), the measurement strength, and the frequency of oscillation of the apparatus' pointer around its trapped ground state. Notice, from~\eqref{eq_precision}, that for a given bath and trap (i.e.~fixed $\beta,\Omega$), we can make our measurement of $A$ more precise by cranking up the strength, $k$, which we might think of as a knob on our experimental setup. However, notice that the more precisely $A$ is measured (i.e.~the larger $k$), the more the system is affected by the measurement (i.e.~the longer the ``flow time" $\sqrt{k}P$ in~\eqref{eq_flow}), and more importantly, the more uncertain we are about the magnitude of said observer effect (since the uncertainty in $P$, $1/\sqrt{\beta}$ from~\eqref{eq_canonicalDist}, translates into uncertainty $\sqrt{k/\beta}$ in the ``flow time"). It's worth emphasizing that this disturbance of the system is not arbitrary, but has the form of time-evolution along the Hamiltonian flow generated by the measured observable, $A$; the only thing uncertain is how much ``time" the system flowed. We can see that this disturbance will affect some observables of the system more than others: in particular, any observable $B$ in involution with $A$ will emerge undisturbed in the immediate aftermath of the measurement; although subsequent time-evolution under the system's own dynamics will cause the initial disturbance to ``leak into" such a $B$, unless $B$ is also in involution with $H$. 
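The tradeoff just described is easy to check numerically: the precision~\eqref{eq_precision} scales as $1/\sqrt{k}$ while the flow-time uncertainty $\sqrt{k/\beta}$ scales as $\sqrt{k}$, so their product is independent of the strength. A trivial sketch (illustrative values):

```python
import numpy as np

beta, Omega = 2.0, 3.0                         # illustrative bath and trap settings
for k in [0.01, 1.0, 100.0]:
    eps = 1.0 / np.sqrt(beta * k * Omega**2)   # precision, eq_precision
    eta = np.sqrt(k / beta)                    # flow-time uncertainty
    assert np.isclose(eps * eta, 1.0 / (beta * Omega))   # independent of k
```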
Concretely, we find that the precision of a measurement~\eqref{eq_precision}, and the \emph{disturbance} caused by the measurement upon the system, \begin{equation}\label{eq_disturbance} \eta_A\triangleq \sqrt{k/\beta}, \end{equation} obey the inverse relation \begin{equation} \epsilon_A\eta_A=\frac{1}{\beta\Omega},\label{eq_precDistRel} \end{equation} which is fixed for a given bath and trap; independent of the identity of the system measured, of that of the system used as measuring apparatus, of the measurement strength and of the choice of observable measured. The product on the left-hand side can easily be made larger (see discussion in Section~\ref{sec_quantAndClass}) but not smaller, as far as I can tell. This Heisenberg-like precision-disturbance relation (or ``uncertainty relation" for short) suggests an obstruction to how close we can come in a world governed by Hamiltonian mechanics to the idealization of measurement without disturbance. To the extent that our model of measurement has a claim to generality, relation~\eqref{eq_precDistRel} will be a general principle. Note that this relation is softer than the Heisenberg uncertainty principle of quantum mechanics: for any given bath and trap one will have a finite obstruction on the right-hand side of~\eqref{eq_precDistRel}, but one can always endeavor to make the obstruction smaller by cooling the apparatus further or tightening the trap. Instead this obstruction is of a kind with the third law, to which it is clearly related: it suggests that it is impossible by any procedure, no matter how idealized, to reduce the observer effect of measurement to zero in a finite number of operations. \section{Continuous measurement over time: new master equations and H-theorem}\label{sec_continuousMeasurement} Extracting information about the system by measurement increases our knowledge about some aspect of it. 
However, we've seen that any such measurement according to our model will disturb the system to an extent that we cannot monitor; and this decreases our knowledge about some other aspect of the system. For a single measurement this tradeoff is expressed by the precision-disturbance relation~\eqref{eq_precDistRel}, or in more detail by the updates~(\ref{eq_marginalize},~\ref{eq_BayesianUpdate}). In this section we explore the compound effect of this tradeoff due to multiple measurements; specifically, a continuous succession of vanishingly-weak measurements. The method of analysis we follow is drawn from the field of continuous quantum measurement, which addresses the corresponding problem in that setting. (See for example~\cite{jacobs2006straightforward}.) Subdivide a finite interval of time $[0,T]$ into $N$ equal subintervals demarcated by $t_0=0<t_1<t_2<\dots<t_N=T$, with $t_j=j\Delta t$. For each $j\in\{1,\dots,N\}$, select an observable $A_j=A_j(q,p)$ of the system, and prepare for it a measurement $(\beta_j,k_j\Delta t,\Omega_j)$ to be carried out at time $t_j$. Notice that we've scaled the strength according to the size of the subintervals; smaller $\Delta t$ means each individual measurement is weaker, but a greater number of them fit into $[0,T]$. We will see that this is the right scaling for the effects to converge when we take the limit of smaller and smaller $\Delta t$. (Note that this changes the physical dimensions of $k_j$; they are now $[k_j]=\text{energy}\cdot\text{time}/[A_j]^2$.) The resulting tuple of pointer readings $\mathbf{A}^*\triangleq(A^*_1,A^*_2,\dots,A^*_N)$ constitutes the measurement record for the entire succession of measurements. 
To assimilate the $j$-th measurement we perform the two operations~(\ref{eq_marginalize},~\ref{eq_BayesianUpdate}), resulting in the update \begin{align} \rho&(q,p;t_{j+1})\propto e^{-\frac{\beta_j(k_j\Delta t)\Omega_j^2}{2}\left(A_j^*-A_j(q,p)\right)^2}\notag\\ &\quad\cdot\sqrt{\frac{\beta_j}{2\pi}}\int dP_j\,e^{-\frac{\beta_j}{2}P_j^2}\left[\left(\Phi^{A_j}_{\sqrt{k_j\Delta t}P_j}\right)_*\rho\right](q,p;t_j).\label{eq_longEq} \end{align} For small $\tau$, the push-forward \begin{equation} \left[\left(\Phi^{A}_{\tau}\right)_*\rho\right](q_0,p_0)=\rho\left(\Phi^{A}_{-\tau}(q_0,p_0)\right)=\rho(q(-\tau),p(-\tau)) \end{equation} can be calculated by Taylor-expanding the function $\tau\mapsto \rho(q(-\tau),p(-\tau))$ around $\tau=0$; using the chain rule to pass all time-derivatives onto $q,p$, and calculating the latter from Hamilton's equations with Hamiltonian $A$. The result is \begin{equation} \left(\Phi^{A}_{\tau}\right)_*\rho=\rho+\tau\{A,\rho\}+\frac{\tau^2}{2}\{A,\{A,\rho\}\}+\mathcal{O}(\tau^3). \end{equation} Putting this into~\eqref{eq_longEq}, the integral over $P_j$ can then be done order-by-order. 
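The order-by-order integration rests only on the Gaussian moments of the ready state: odd moments vanish, and $\langle P^2\rangle=1/\beta$ supplies the coefficient of the surviving second-order term. A quick numerical check (illustrative $\beta$):

```python
import numpy as np

beta = 2.0                                   # illustrative
P = np.linspace(-10, 10, 20001)
dP = P[1] - P[0]
w = np.sqrt(beta / (2 * np.pi)) * np.exp(-beta * P**2 / 2)  # ready-state weight

# Odd moments vanish, killing the odd-order terms of the expansion;
# <P^2> = 1/beta sources the k*dt/(2*beta) diffusion coefficient
assert abs((w * P).sum() * dP) < 1e-9
assert abs((w * P**3).sum() * dP) < 1e-9
assert np.isclose((w * P**2).sum() * dP, 1 / beta)
```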
The odd-order terms all vanish by symmetry, leaving us with \begin{align} \rho&(q,p;t_{j+1})\propto e^{-\frac{\beta_jk_j\Omega_j^2\Delta t}{2}\left(A_j^*-A_j\right)^2}\notag\\ &\qquad\cdot\left(\rho+\frac{k_j\Delta t}{2\beta_j}\{A_j,\{A_j,\rho\}\}+\mathcal{O}(\Delta t^2)\right)\Bigg|_{(q,p;t_j)}.\label{eq_longEq2} \end{align} \subsection{Case of discarded measurement record}\label{sec_discardedRec} \begin{figure*} \centering \includegraphics[width=\linewidth]{final_figures/sho_master_equations.png} \caption{\textbf{Master equation dynamics in various measurement regimes.} Evolution of the state of knowledge $\rho(q,p;t)$ of a rational agent under master equation~\eqref{eq_ModdedStochasticLiouville} is illustrated in a simple example: the system under measurement is a 1D simple harmonic oscillator (sho); the measurement is characterized by constant $\beta,k,\Omega$ and fixed $A$; the measured observable is the energy $A=H\triangleq\frac{1}{2}(\omega^2q^2+p^2)$; and the initial distribution over phase space is unimodal. Although not proven here, three timescales are involved: that of internal dynamics, $\tau_\text{dyn}\sim1/\omega$; that of diffusion due to observer effect, $\tau_\text{dif}\sim\beta/k\omega^2$; and that of collapse due to Bayesian update on the measurement record, $\tau_\text{col}\sim1/\beta k\Omega^2\Delta E^2$, where $\Delta E$ is the target certainty on $H$ (i.e.~$\tau_\text{col}$ is the characteristic timescale for the variance of $\rho(H;t)$ to fall below $\Delta E^2$). \textbf{(a)} Phase portrait showing level sets of the sho Hamiltonian. \textbf{(b--e)} From left to right, snapshots of $\rho(q,p;t)$ at successive times, indicated at top in units of the sho period, for four different measurement regimes (rows). For ease of visualization the color scheme (left) is normalized anew for each plot. 
\textbf{(b)} Regime $\tau_\text{dyn}\ll\tau_\text{dif}, \tau_\text{col}$; describes an isolated system;~\eqref{eq_ModdedStochasticLiouville} reduces to the Liouville equation $\partial\rho/\partial t=\{H,\rho\}$. \textbf{(c)} Regime $\tau_\text{dyn}\sim\tau_\text{dif}\ll\tau_\text{col}$; describes case of discarded measurement record;~\eqref{eq_ModdedStochasticLiouville} reduces to~\eqref{eq_ModdedLiouville}. Notice entropy increase, in accordance with~\eqref{eq_ModdedLiouville}, due to diffusion along the flow generated by $A$. \textbf{(d)} Regime $\tau_\text{dyn}\sim\tau_\text{col}\ll\tau_\text{dif}$; describes an approximation to ideal classical measurement with minimal disturbance. Notice the trend of decreasing entropy, in accordance with~\eqref{eq_antiHtheorem}, due to collapse towards the measurement outcome. \textbf{(e)} Regime $\tau_\text{dyn}\sim\tau_\text{col}\sim\tau_\text{dif}$; describes the three processes (dynamics, diffusion and collapse) happening together. Notice the tradeoff between information about $A$ and information about the conjugate quantity (sho phase). \textbf{(f)} Evolution of the first four cumulants of $\rho(A;t)$ in regime \textbf{d} (equivalently regime \textbf{e}). For ease of visualization each cumulant is rescaled to 1 at $t=0$. Note qualitative agreement with~\eqref{eq_hierarchyOfCumulants}.} \label{fig_sho_master_equations} \end{figure*} Let's pause to consider the case in which the measurement record $\mathbf A^*$ is discarded. In this case we should skip update~\eqref{eq_BayesianUpdate}, which amounts to dropping the exponential factor and the omitted proportionality factor in~\eqref{eq_longEq2}. 
Taking then the limit $\Delta t\to dt$ describing a continuous succession of vanishingly-weak measurements, we arrive in this case (discarded measurement record) at \begin{align} \frac{\partial\rho}{\partial t}&= \underbrace{\{H,\rho\}}_{\substack{\text{internal dynamics}\\ \text{Hamiltonian flow}\\ \text{info preserved}}} +\underbrace{\frac{k}{2\beta}\{A,\{A,\rho\}\}}_{\substack{\text{observer effect}\\ \text{diffusion along flow $\Phi^A_t$}\\ \text{info about $A$ preserved}\\ \text{other info lost}}}\label{eq_ModdedLiouville} \end{align} where we've introduced the well-known Liouville term $\{H,\rho\}$ accounting for the internal dynamics of the system under $H$~\cite{kardar2007statistical}, which we had been ignoring until now; and all quantities shown may be explicit functions of time. This is a Liouville-like master equation, with an additional second-order term due to the observer effect of measurement. We can get some sense for the effect of this new term as follows. Let $B(q,p;t)$ denote any function over phase space, possibly explicitly time-dependent. Here and throughout let's use $\langle\,\cdot\,\rangle$ to denote the phase-space average: \begin{equation} \langle B\rangle\triangleq\int d^nqd^np\,\rho(q,p;t)B(q,p;t). \end{equation} In Appendix~\ref{app_evolutionOfMean} we prove that under master equation~\eqref{eq_ModdedLiouville} any such phase-space average evolves as \begin{align} \frac{d}{dt}\langle B\rangle&=\langle\lbrace B,H\rbrace\rangle+\left\langle\frac{\partial B}{\partial t}\right\rangle-\frac{k}{2\beta}\left\langle\lbrace A,\log\rho\rbrace\lbrace A,B\rbrace\right\rangle.\label{eq_evolutionOfMean} \end{align} The first term on the right-hand side of this equation is due to the Liouville term in~\eqref{eq_ModdedLiouville}; the second term is due to any explicit time-dependence of $B$; and the third term is due to the second-order term in~\eqref{eq_ModdedLiouville}. 
As a special application of this equation consider $B=-\log\rho$, in which case the phase-space average is the Shannon entropy: \begin{equation}\label{eq_Shannon} S(t)\triangleq\langle-\log\rho\rangle. \end{equation} It is not hard to show that the first two terms on the right-hand side of~\eqref{eq_evolutionOfMean} vanish in this case~\cite{fn_proofEntropy}. Thus we find that under dynamics~\eqref{eq_ModdedLiouville}, \begin{align} \dot S&=\frac{k}{2\beta}\left\langle\{A,\log\rho\}^2\right\rangle\geq0.\label{eq_Htheorem} \end{align} It is a well-known result that $S(t)$ remains constant ($\dot S=0$) under the Liouville equation $\partial\rho/\partial t=\{H,\rho\}$ (see example in Figure~\ref{fig_sho_master_equations}b). In breaking with that, we have just found that entropy generally increases over time under~\eqref{eq_ModdedLiouville} on account of the new term. Thus the Liouville term preserves information, while the second-order term causes information loss. Indeed, in accordance with our discussion in Section~\ref{sec_precDistRel} concerning the nature of the observer effect, this term describes diffusion along the flow lines generated by the instantaneous observable $A(q,p;t)$ (see example in Figure~\ref{fig_sho_master_equations}c). This diffusion preserves instant-to-instant information pertaining to that instant's observable $A(q,p;t)$, while it erases information pertaining to observables not in involution with it. We note here, in passing, that~\eqref{eq_Htheorem} may alternatively be construed as an H-theorem detailing entropy increase in an open thermodynamic system in contact with a thermal bath (described here as the collection of apparatuses). In this light~(\ref{eq_ModdedLiouville},~\ref{eq_evolutionOfMean},~\ref{eq_Htheorem}) may help further our understanding of non-equilibrium statistical physics and the approach to thermal equilibrium. We discuss this topic further in Section~\ref{sec_thermo}. 
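For the special case $A=q$ (with the internal-dynamics term dropped for clarity), the new term reduces to a heat equation in $p$: $\{q,\rho\}=\partial\rho/\partial p$, hence $\{q,\{q,\rho\}\}=\partial^2\rho/\partial p^2$ and $\partial\rho/\partial t=(k/2\beta)\,\partial^2\rho/\partial p^2$. A finite-difference sketch (all parameters illustrative) confirms the monotone entropy growth of~\eqref{eq_Htheorem}:

```python
import numpy as np

k, beta = 0.5, 2.0
D = k / (2 * beta)               # diffusion coefficient in p when A = q

p = np.linspace(-8, 8, 801)
dp = p[1] - p[0]
rho = np.exp(-0.5 * (p / 0.5)**2)
rho /= rho.sum() * dp            # initial p-marginal (illustrative Gaussian)

def entropy(r):
    return -(r * np.log(np.clip(r, 1e-300, None))).sum() * dp

dt = 0.4 * dp**2 / (2 * D)       # stable explicit time step
S = [entropy(rho)]
for _ in range(200):
    lap = (np.roll(rho, 1) - 2 * rho + np.roll(rho, -1)) / dp**2
    rho = rho + D * dt * lap     # heat equation: drho/dt = D d^2 rho / dp^2
    S.append(entropy(rho))
S = np.array(S)
assert np.all(np.diff(S) > 0)    # entropy increases monotonically
```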
\subsection{Case of simulated measurement record} Returning now to~\eqref{eq_longEq2}, suppose instead that the measurement record is not discarded but that we have only yet read up to the $(j-1)$-th entry; i.e.~$A^*_1$ through $A^*_{j-1}$ are known while $A^*_j$ onward are not. We would like to simulate ahead of time (say, on a computer) how our state of knowledge will evolve as we continue to read more of the record. However, without the benefit of hindsight the upcoming record entries appear to us as random variables. The language for this kind of simulation is stochastic calculus. (See tutorial on stochastic calculus in~\cite{jacobs2006straightforward}.) Let us first ask: what should be our probability distribution for the $j$-th outcome, $A_j^*$? Making use of the likelihood function~\eqref{eq_likelihood}, this question can be answered in terms of our current knowledge of the value of $A_j$: \begin{align} \rho(A_j^*;t_j)&=\int dA_j\,\rho(A_j;t_j)\rho(A_j^*|A_j)\notag\\ &\propto\int dA_j\,\rho(A_j;t_j)e^{-\frac{\beta_j(k_j\Delta t)\Omega_j^2}{2}\left(A_j^*-A_j\right)^2}. \end{align} As $\Delta t\to dt$, the Gaussian in this expression becomes very wide and spread out as a function of $A_j$. The distribution $\rho(A_j;t_j)$ becomes very narrow by comparison, and can be replaced by a Dirac delta, which must be centered at $\langle A_j\rangle$ for the means to match. Using the delta to do the integral over $A_j$ we have, up to a normalization factor, \begin{align} \rho(A_j^*;t_j)&\underset{\Delta t\to dt}{\longrightarrow}e^{-\frac{\beta_jk_j\Omega_j^2\Delta t}{2}\left(A_j^*-\langle A_j\rangle\right)^2}.\label{eq_PDFoverPj} \end{align} By a simple change of variables we introduce $\Delta W_j$, our probability distribution of which is a zero-mean Gaussian with variance $\Delta t$, and in terms of which \begin{equation}\label{eq_finiteStochProc} A_j^*=\langle A_j\rangle+\frac{1}{\sqrt{\beta_jk_j\Omega_j^2}}\frac{\Delta W_j}{\Delta t}. 
\end{equation} The value of expressing~\eqref{eq_PDFoverPj} this way is two-fold. From a simulation standpoint, we can use a random number generator to sample $\Delta W_j$ from its Gaussian distribution, and~\eqref{eq_finiteStochProc} then tells us how to convert this into a sample of $A_j^*$. And from an analysis standpoint, this expression enables a very convenient form of calculation: in the limit $\Delta t\to dt$ we write \begin{equation} A^*=\langle A\rangle+\frac{1}{\sqrt{\beta k\Omega^2}}\frac{dW}{dt}, \end{equation} where $W(t)\triangleq\int_0^tdW$ is a standard Wiener process, with $dW$ obeying the basic rule of It\^o calculus $dW^2=dt$. Notice also that $\Delta W_j$ is statistically-independent from $A_j$. Using $\llangle\,\cdot\,\rrangle$ to denote averaging over the Wiener process we thus have, for any function $f(A)$: \begin{equation}\label{eq_averageOverWiener} \llangle f(A)dW\rrangle=f(A)\llangle dW\rrangle=0. \end{equation} Taking stock: given $\rho(q,p;t_k)$ for a given time $t_k$ we can use it to calculate $\langle A_k\rangle$, and combine this with the output of a random number generator as in~\eqref{eq_finiteStochProc} to simulate the upcoming entry of the measurement record $A_k^*$. We can then use~\eqref{eq_longEq2} to calculate what our updated state of knowledge $\rho(q,p;t_{k+1})$ would be upon reading that entry, and iterate the process. Analytically we proceed as follows. 
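(As an aside from the analytics, the iteration just described can be sketched in a few lines of code. Here $A=q$ with trivial $H$, so the observer-effect diffusion acts only on $p$ and leaves the $q$-marginal untouched; all parameter values are illustrative. At each step we compute $\langle A\rangle$, draw $\Delta W_j$, form $A^*_j$ as in~\eqref{eq_finiteStochProc}, and fold the outcome in with the Gaussian likelihood~\eqref{eq_likelihood}.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: measured observable A = q, trivial Hamiltonian,
# measurement parameters (beta, k, Omega), finite time step dt.
beta, k, Omega = 1.0, 5.0, 1.0
dt, steps = 1e-3, 4000

q = np.linspace(-8.0, 8.0, 1601)
dq = q[1] - q[0]
rho = np.exp(-(q - 1.3)**2 / (2 * 1.0**2))   # broad Gaussian prior
rho /= rho.sum() * dq

for _ in range(steps):
    mean_A = np.sum(q * rho) * dq
    dW = rng.normal(0.0, np.sqrt(dt))
    # Simulated record entry, eq. (finiteStochProc):
    A_star = mean_A + dW / (np.sqrt(beta * k * Omega**2) * dt)
    # Bayes update with the Gaussian likelihood of precision beta*k*Omega^2*dt:
    rho *= np.exp(-0.5 * beta * k * Omega**2 * dt * (A_star - q)**2)
    rho /= rho.sum() * dq

var_A = np.sum(q**2 * rho) * dq - (np.sum(q * rho) * dq)**2
print(var_A)   # shrinks as the record accumulates
```

For a Gaussian prior the posterior stays Gaussian, its precision growing by $\beta k\Omega^2\,dt$ per step, so the printed variance lands close to $1/(1+\beta k\Omega^2 t)$ while the mean performs the random walk described below~\eqref{eq_hierarchyOfCumulants}.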
Substitute~\eqref{eq_finiteStochProc} into~\eqref{eq_longEq2}; expand the square in the exponent, discarding the overall factor $\exp(-\Delta W_j^2/2\Delta t)$ which is independent of $(q,p)$; and Taylor-expand the exponential, keeping in mind that powers of $\Delta W_j$ count for ``half an order'', to obtain \begin{align} \rho(q,p&;t_{j+1})\propto\notag\\ &\mspace{-55mu}\Bigg(1-\frac{\beta_jk_j\Omega_j^2}{2}(A_j-\langle A_j\rangle)^2\Delta t+\sqrt{\beta_jk_j\Omega_j^2}(A_j-\langle A_j\rangle)\Delta W_j\notag\\ &+\frac{\beta_jk_j\Omega_j^2}{2}(A_j-\langle A_j\rangle)^2\Delta W_j^2+\mathcal{O}(\Delta t\,\Delta W_j)\Bigg)\notag\\ &\cdot\Bigg(\rho+\frac{k_j\Delta t}{2\beta_j}\{A_j,\{A_j,\rho\}\}+\mathcal{O}(\Delta t^2)\Bigg)\Bigg|_{(q,p;t_j)}. \end{align} In the limit of continuous measurement $\Delta t\to dt,\Delta W_j\to dW,\Delta W_j^2\to dt$ this reduces to \begin{align} \rho(q,p;t+dt)&\propto\rho+\frac{k}{2\beta}\{A,\{A,\rho\}\}dt\notag\\ &\quad+\sqrt{\beta k\Omega^2}(A-\langle A\rangle)\rho\,dW\Bigg|_{(q,p;t)}, \end{align} where again all quantities shown may be explicit functions of time. One can check that the right-hand side is already normalized, so the omitted factor of proportionality is $1$.
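The normalization claim is quick to verify: the double-bracket term is a total derivative along the flow of $A$, so it integrates to zero, while $\int d^nqd^np\,(A-\langle A\rangle)\rho=0$ by the definition of $\langle A\rangle$. A grid check for $A=q$ (state and parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
beta, k, Omega, dt = 1.0, 3.0, 1.0, 1e-3   # illustrative values

q = np.linspace(-6.0, 6.0, 241)
p = np.linspace(-6.0, 6.0, 241)
dq = dp = q[1] - q[0]
Q, P = np.meshgrid(q, p, indexing="ij")

# Some normalized, correlated state of knowledge:
rho = np.exp(-((Q - 0.7)**2 + (P + 0.4)**2 + 0.5 * Q * P))
rho /= rho.sum() * dq * dp

mean_A = np.sum(Q * rho) * dq * dp
dW = rng.normal(0.0, np.sqrt(dt))

# For A = q: {A,{A,rho}} = d^2 rho / dp^2, diffusion along the flow of q.
lap_p = np.zeros_like(rho)
lap_p[:, 1:-1] = (rho[:, 2:] - 2.0 * rho[:, 1:-1] + rho[:, :-2]) / dp**2

drho = (k / (2.0 * beta)) * lap_p * dt \
     + np.sqrt(beta * k * Omega**2) * (Q - mean_A) * rho * dW

leak = np.sum(drho) * dq * dp
print(leak)   # ~ 0: both correction terms integrate to zero
```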
We arrive in this case (simulated measurement record) at \begin{align} \frac{\partial\rho}{\partial t}&= \underbrace{\{H,\rho\}}_{\substack{\text{internal dynamics}\\ \text{Hamiltonian flow}\\ \text{info preserved}}} +\underbrace{\frac{k}{2\beta}\{A,\{A,\rho\}\}}_{\substack{\text{observer effect}\\ \text{diffusion along flow $\Phi^A_t$}\\ \text{info about $A$ preserved}\\ \text{other info lost}}}\notag\\ &\quad+\underbrace{\sqrt{\beta k\Omega^2}(A-\langle A\rangle)\rho\,\frac{dW}{dt}}_{\substack{\text{Bayesian update} \\\text{collapse towards measurement outcome}\\ \text{non-linear \& non-local}\\ \llangle\Delta\text{info}\rrangle\geq0}},\label{eq_ModdedStochasticLiouville} \end{align} where again we've re-introduced the Liouville term $\{H,\rho\}$ accounting for the internal dynamics of the system. Compared to~\eqref{eq_ModdedLiouville} we now have a new stochastic term appearing, which is due to assimilation of the simulated measurement record via Bayesian update. To get some sense for the effect of this new term, in Appendix~\ref{app_antiHtheorem} we prove that under master equation~\eqref{eq_ModdedStochasticLiouville} the Shannon entropy~\eqref{eq_Shannon} evolves as \begin{align} \dot S&= \underbrace{\frac{k}{2\beta}\left\langle\{A,\log\rho\}^2\right\rangle}_{\substack{\text{observer effect}\\ \Delta\text{entropy}\geq0}}\notag\\ &\quad\underbrace{-\frac{\beta k\Omega^2\sigma_A^2}{2}-\sqrt{\beta k\Omega^2}\left\langle(A-\langle A\rangle)\log\rho\right\rangle \frac{dW}{dt}}_{\substack{\text{Bayesian update}\\ \text{can be positive or negative}}},\label{eq_antiHtheorem1} \end{align} where \begin{equation} \sigma_A^2\triangleq\langle(A-\langle A\rangle)^2\rangle \end{equation} is the variance in our knowledge of $A$. The first term on the r.h.s.~of~\eqref{eq_antiHtheorem1} is familiar from~\eqref{eq_Htheorem}; it describes increasing entropy due to the observer effect of measurement.
The remaining two terms are due to the stochastic term in~\eqref{eq_ModdedStochasticLiouville}; these two together may be positive for particular measurement outcomes, but they are non-positive on average, as can be seen by invoking~\eqref{eq_averageOverWiener}: \begin{equation}\label{eq_antiHtheorem} \llangle \dot S\rrangle=\underbrace{\frac{k}{2\beta}\left\langle\{A,\log\rho\}^2\right\rangle}_{\substack{\text{observer effect}\\ \Delta\text{entropy}\geq0}} \underbrace{-\ \ \frac{\beta k\Omega^2\sigma_A^2}{2}}_{\substack{\text{Bayesian update}\\ \llangle \Delta\text{entropy}\rrangle\leq0}}. \end{equation} Thus the stochastic term in~\eqref{eq_ModdedStochasticLiouville} leads, on average, to increasing information (see example in Figure~\ref{fig_sho_master_equations}d,e). It is also interesting to note that this term is both non-linear and non-local in $\rho$, since $\langle A\rangle$ depends on the value of $\rho$ everywhere on phase space. To gain further insight into the effects of this term, suppose the measured observable is fixed $A=A(q,p)$, and consider our PDF over this observable, $\rho(A;t)$, which is just the marginal \begin{equation} \rho(A';t)\triangleq\int d^nqd^np\, \delta(A(q,p)-A')\rho(q,p;t). \end{equation} Let $\kappa_i$ denote the $i$-th cumulant of this distribution. 
In Appendix~\ref{app_hierarchyOfCumulants} we prove the following hierarchy of equations describing the contribution of the stochastic term in~\eqref{eq_ModdedStochasticLiouville} to the evolution of these cumulants: \begin{subequations}\label{eq_hierarchyOfCumulants} \begin{align} d\kappa_1&=\sqrt{\beta k\Omega^2}\,\kappa_2\, dW,\\ d\kappa_2&=\sqrt{\beta k\Omega^2}\,\kappa_3\, dW-\frac{\beta k\Omega^2}{2}(2\kappa_2^2)dt,\\ d\kappa_3&=\sqrt{\beta k\Omega^2}\,\kappa_4\, dW-\frac{\beta k\Omega^2}{2}(6\kappa_2\kappa_3)dt,\\ d\kappa_4&=\sqrt{\beta k\Omega^2}\,\kappa_5\, dW-\frac{\beta k\Omega^2}{2}(8\kappa_2\kappa_4+6\kappa_3^2)dt,\\ &\dots\notag \end{align} \end{subequations} Notice in particular the trends $\llangle\dot\kappa_1\rrangle=0$, $\llangle\dot\kappa_2\rrangle\sim-\llangle\kappa_2\rrangle^2$, $\llangle\dot\kappa_3\rrangle\sim-\llangle\kappa_3\rrangle$, $\llangle\dot\kappa_4\rrangle\sim-\llangle\kappa_4\rrangle$, \dots. These trends tell us that (supposing $A$ is not explicitly time-dependent and the Liouville term does not intervene too strongly) the stochastic term in~\eqref{eq_ModdedStochasticLiouville} causes all cumulants of $\rho(A;t)$ higher than second to vanish exponentially fast, leaving $\rho(A;t)$ a Gaussian; it then causes the variance to vanish like $\sim 1/t$, while the mean jiggles around in a random walk of zero drift and volatility decaying with the variance. In the limit in which the measurement process is complete, $\rho(A;t)$ converges to a delta distribution centered at the simulation's putative true value of $A$. (See example in Figure~\ref{fig_sho_master_equations}d--f.) \subsection{Simultaneous measurements}\label{sec_simultMeas} Simultaneous weak measurement of multiple observables $A_1(q,p),\dots,A_s(q,p)$, whether these are in involution or not, can be handled by letting $A(q,p;t)$ switch between these observables on a fast time scale. 
By averaging~\eqref{eq_ModdedStochasticLiouville} over this fast time scale this equation then becomes \begin{align} \frac{\partial\rho}{\partial t}&=\{H,\rho\} +\sum_{j=1}^s\frac{k_j}{2\beta_j}\{A_j,\{A_j,\rho\}\}\notag\\ &\qquad+\sum_{j=1}^s\sqrt{\beta_j k_j\Omega_j^2}(A_j-\langle A_j\rangle)\rho\,\frac{dW_j}{dt},\label{eq_ModdedStochasticLiouvilleMultObs} \end{align} where $(\beta_j,k_j,\Omega_j)$ describes the measurement setup for the $j$-th observable, and the $W_j(t)\triangleq\int_0^tdW_j$ are Wiener processes, mutually independent for $j\neq j'$ ($\llangle dW_jdW_{j'}\rrangle=\delta_{jj'}dt$). The analogues of~\eqref{eq_antiHtheorem1} and~\eqref{eq_antiHtheorem} for this equation are \begin{align} &\dot S=\sum_{j=1}^s\frac{k_j}{2\beta_j}\left\langle\{A_j,\log\rho\}^2\right\rangle\notag\\ &-\sum_{j=1}^s\left(\frac{\beta_j k_j\Omega_j^2\sigma_{A_j}^2}{2}+\sqrt{\beta_j k_j\Omega_j^2}\left\langle(A_j-\langle A_j\rangle)\log\rho\right\rangle \frac{dW_j}{dt}\right),\label{eq_antiHtheorem1MultObs} \end{align} and \begin{equation}\label{eq_antiHtheoremMultObs} \llangle \dot S\rrangle=\sum_{j=1}^s\frac{k_j}{2\beta_j}\left\langle\{A_j,\log\rho\}^2\right\rangle -\sum_{j=1}^s\frac{\beta_j k_j\Omega_j^2\sigma_{A_j}^2}{2}. \end{equation} If some of the measurement outcomes are discarded the corresponding terms should be dropped from the right-most sums in~(\ref{eq_ModdedStochasticLiouvilleMultObs}--\ref{eq_antiHtheoremMultObs}).
If all the outcomes are discarded we are left with \begin{align} \frac{\partial\rho}{\partial t}&=\{H,\rho\} +\sum_{j=1}^s\frac{k_j}{2\beta_j}\{A_j,\{A_j,\rho\}\},\label{eq_ModdedLiouvilleMultObs} \end{align} which is linear, local, and deterministic; and \begin{align} \dot S&=\sum_{j=1}^s\frac{k_j}{2\beta_j}\left\langle\{A_j,\log\rho\}^2\right\rangle\geq0.\label{eq_HtheoremMultObs} \end{align} \section{Discussion}\label{sec_discussion} \subsection{Comparing the quantum and classical uncertainty relations}\label{sec_quantAndClass} How does our classical precision-disturbance relation~\eqref{eq_precDistRel} compare to the Heisenberg uncertainty principle of quantum mechanics? The latter can be stated in a few different forms. We will consider the Kennard-Weyl-Robertson form in Section~\ref{sec_epistLim}, where we discuss the epistemology of Hamiltonian ontology. Here we consider the ``joint measurement form''~\cite{arthurs1965bstj, arthurs1988quantum, ozawa1991quantum, ishikawa1991uncertainty}, pertaining to simultaneous measurement of two observables, $A$ and $B$. When $A,B$ are conjugate to each other this reads: \begin{equation}\label{eq_Heisenberg} \epsilon_{A}\epsilon_{B}\geq\frac{\hbar}{2}, \end{equation} where $\epsilon_{A}$ and $\epsilon_{B}$ denote the precisions in the measurement of $A$ and $B$, respectively~\cite{fn_realistQM}; and $\hbar$ is the reduced Planck constant. One superficial difference between~\eqref{eq_precDistRel} and~\eqref{eq_Heisenberg} is that one is an equality while the other is an \emph{in}equality. However, this difference is illusory. The product on the left-hand side of~\eqref{eq_precDistRel} can easily be made larger than the right-hand side, so that for a more general class of measurement models I indeed expect us to have \begin{equation}\label{eq_precDistRel2} \epsilon_A\eta_A\geq\frac{1}{\beta\Omega}.
\end{equation} Let us call a measurement that saturates this bound \emph{maximally efficient}, while one failing to saturate it, \emph{inefficient} (or not as efficient). In these terms any measurement following our model is maximally efficient, as we've seen. The most obvious modification to our model that leads to inefficiency is if the measurement outcome is discarded ($\epsilon_A\to\infty; \eta_A$ unchanged). Another way to reduce efficiency is if the apparatus' pointer fails to be in involution with $H_\text{app}^\text{own}$, so that some amount of ``deterioration'' of the measurement record can happen between the time of the system-apparatus interaction and whenever the record is read. In the opposite direction, one might ask: could not an efficiency higher than~\eqref{eq_precDistRel} allows be reached, say, by using a pointer and coupling, $(Q,P)$, that are correlated in the apparatus ready state? To achieve the latter, one would need $(Q,P)$ to \emph{not} diagonalize the quadratic form~\eqref{eq_HappWithTrap2}; namely, instead of choosing $(Q,P)=(z_{i^*},w_{i^*})$, one would choose $(Q,P)$ related to $(z_{i^*},w_{i^*})$ by some linear canonical transformation. In fact, although not proven here, I find that this approach leads to the same precision-disturbance relation~\eqref{eq_precDistRel}; the only difference (aside from the Bayesian analysis becoming more involved) is that a systematic component is added to the observer effect. This component can be corrected given the measurement outcome $A^*$; so it doesn't count towards the disturbance $\eta_A$~\cite{fn_meaningOfEta}. In summary, these remarks suggest that, while it is easy to do worse than~\eqref{eq_precDistRel}, it may not be possible to do better; i.e.~they suggest inequality~\eqref{eq_precDistRel2} to be the general principle.
A second difference, which remains between~\eqref{eq_Heisenberg} and~\eqref{eq_precDistRel2}, is that one involves the product of two precisions, while the other the product of a precision with a disturbance. This difference can be bridged as well. Recall that the disturbance in question amounts to flowing along $\Phi^A_\tau$ for an unknown ``time'' $\tau$ whose uncertainty is $\eta_A$. Under this flow the ``rate'' of change of any observable $B$ is as given by~\eqref{eq_eom}: $\frac{d}{d\tau} B=\{B,A\}$. In particular, if $B$ is the conjugate to $A$, so that $\{B,A\}=1$, then $B$ increases monotonically at the steady rate of $1$; and the net effect of the flow on $B$ is simply to displace its value by $\tau$. (This final step can fail if $B$ has a discontinuity somewhere; so it is important that $B$ be a bona fide observable, not just a local quantity such as the phase $\phi$ of an oscillator.) So the uncertainty in the ``flow time'', $\eta_A$, translates directly into a disturbance in the value of the conjugate observable, $B$. This places a bound on the precision, $\epsilon_B$, with which any subsequent measurement can hope to determine the original value of $B$: $\epsilon_B\geq\eta_A$, with equality holding only if the measurement of $B$ is done at full strength. Thus we have \begin{equation}\label{eq_precPrecRel} \epsilon_A\epsilon_B\geq\frac{1}{\beta\Omega}, \end{equation} and the parallel with~\eqref{eq_Heisenberg} becomes apparent. Historically it seems that Heisenberg's own interpretation of the uncertainty principle was as a precision-disturbance relation~\cite{heisenberg1949physical}, not very different in spirit from~\eqref{eq_precDistRel2}.
And in recent years work in quantum mechanics has paid considerable attention to precision-disturbance relations, yielding formulas similar to~\eqref{eq_precDistRel2} but with $\hbar/2$ on the right-hand side~\cite{ozawa2003universally, erhart2012experimental, fujikawa2012universally, baek2013experimental, busch2013proof}. \begin{figure} \centering \includegraphics[width=\linewidth]{final_figures/classical_vs_quantum_uncertainty.png} \caption{\textbf{Coexistence of quantum and classical uncertainty relations in the real world.} One may expect the quantum and classical uncertainty relations to coexist in reality based on the observation that a Hamiltonian world effectively emerges from quantum mechanics at macroscopic scales. The quantum relation dominates at low apparatus temperature and/or tightly-trapped pointer in the apparatus ready state, i.e.~when $1/\beta\Omega<\hbar/2$ (below the dashed diagonal). The classical relation dominates in the other direction. Here $f=\Omega/2\pi$ and $T=1/k_B\beta$. Notice that for the range of $(f,T)$ shown, the obstruction is never larger than $\sim10^{-17}\,$J$\cdot$s.} \label{fig_classical_vs_quantum} \end{figure} The real world is no doubt quantum mechanical, and so the Heisenberg uncertainty principle is fundamental. But as we know, as one ``zooms out'' to larger scales somehow an approximately Hamiltonian world effectively emerges (Bohr's correspondence principle and the quantum-to-classical transition). Hand in hand with the emergence of this effective Hamiltonian world I expect our classical uncertainty relation to gain traction. Figure~\ref{fig_classical_vs_quantum} illustrates how the classical and quantum relations then must coexist. For a tight enough trap and/or cold enough bath (below the dashed diagonal), the obstruction in~\eqref{eq_precPrecRel} is brought below $\hbar/2$ and becomes unreachable; the quantum obstruction acts like rock bottom.
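The dashed diagonal of Figure~\ref{fig_classical_vs_quantum} sits where the classical obstruction $1/\beta\Omega=k_BT/\Omega$ equals $\hbar/2$. Locating it at room temperature takes only the physical constants (a quick check of the scales discussed in the text; no model parameters involved):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
kB = 1.380649e-23        # Boltzmann constant, J / K
c = 2.99792458e8         # speed of light, m / s

T = 300.0                                # room temperature, K
# The classical obstruction 1/(beta*Omega) = kB*T/Omega equals hbar/2 at:
Omega_cross = 2.0 * kB * T / hbar        # rad / s
f_cross = Omega_cross / (2.0 * math.pi)  # Hz
wavelength = c / f_cross                 # m

print(f_cross / 1e12)     # ~ 12.5 THz
print(wavelength * 1e6)   # ~ 24 micrometers
```

At $300\,$K the crossover falls near $12.5\,$THz, i.e.~wavelengths of roughly $24\,\mu$m; below that trap frequency the classical obstruction dominates.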
For less tight traps and/or less cold baths the obstruction in~\eqref{eq_precPrecRel} rises above $\hbar/2$ and begins to dominate. Taken together, one may expect to have in the real world an obstruction that interpolates between these two; something along the lines of \begin{align} \epsilon_A\epsilon_B&\geq\ \frac{\hbar}{2}+\frac{1}{\beta\Omega} \ \ \text{or perhaps}\ \ \frac{\hbar/2}{1-e^{-\frac{1}{2}\beta\hbar\Omega}}; \label{eq_precDistRel4} \end{align} it will take a detailed quantum calculation to work out the precise formula (see Section~\ref{sec_future} for a germ of how this might be done). To gain some perspective for the scales involved, note from Figure~\ref{fig_classical_vs_quantum} that, at room temperature, trap frequencies any lower than about $12\,$THz (corresponding to light wavelengths $\gtrsim25\,\mu$m) are already enough to put us in the classical regime. At the same time, even for the highest temperatures and lowest frequencies shown in the top-left of Figure~\ref{fig_classical_vs_quantum}, the classical obstruction hardly becomes larger than $\sim10^{-17}\,$J$\cdot$s, an extremely small quantity by macroscopic standards. And yet, even in more moderate regimes towards the center of Figure~\ref{fig_classical_vs_quantum}, the classical obstruction may be relevant in the contexts of precision measurement, nanoengineering and molecular machines. \subsection{On the foundations of statistical physics}\label{sec_thermo} As obtained here, master equation~\eqref{eq_ModdedLiouville} describes a rational agent's knowledge of a Hamiltonian system under continuous measurement when the measurement record is discarded. As briefly mentioned in Section~\ref{sec_discardedRec}, this equation can be repurposed to describe (a rational agent's knowledge of) a Hamiltonian system $(q,p)$ in interaction with a thermal bath at (inverse) temperature $\beta$.
Seen in this light, corollary~\eqref{eq_Htheorem} constitutes a novel H-theorem detailing how the Shannon entropy increases over time for an open thermodynamic system. Note that this H-theorem deals directly with the fine-grained entropy, without coarse-graining phase space as in Gibbs' H-theorem~\cite{tolman1979principles}. We should summarize the inputs that go into justifying this application of~\eqref{eq_ModdedLiouville}: (i) equilibrium thermodynamics to give us prior knowledge about the state of the bath, in~\eqref{eq_canonicalDist1}; (ii) Hamiltonian mechanics to describe the system-bath dynamics, in~\eqref{eq_IntegrateHamsEqs}; (iii) Bayesian probability to translate ontic into epistemic dynamics, in~\eqref{eq_marginalize}~\cite{fn_favoredInterp}. A fourth and final ingredient perhaps deserves the most scrutiny in future work: (iv) modeling of the bath as a collection of temporally-uncorrelated systems, each initially in thermal equilibrium, each momentarily ``minimally coupled'' to the object-system, as in~\eqref{eq_interactionH}~\cite{fn_releaseConstraints}. \begin{figure*} \centering \includegraphics[width=\linewidth]{final_figures/ergod_and_therm.png} \caption{\textbf{Thermalization under master equation~\eqref{eq_ModdedLiouville} according to the ergodic program.} Cartoon of the $2n$-dimensional phase space of system $(q,p)$, showing a level set of $A(q,p)$ ($(2n-1)$-dimensional), and a single integral curve of the flow $\Phi^A_t$ on this level set ($1$-dimensional). Note that the apparent self-intersections of the curve are an artifact of our low-dimensional cartoon. For $A$ ergodic, almost all its level sets are densely filled by almost all integral curves on them. Assuming $H(q,p)\triangleq0$, dynamics~\eqref{eq_ModdedLiouville} just describes diffusion along these integral curves. \textbf{(a--c)} The value of $\rho(q,p;t)$ along one integral curve is illustrated at three different times.
For ease of visualization the color scheme (below) is normalized anew for each plot. The depicted process happens in parallel along each integral curve (only one shown). \textbf{(a)} An initial state of knowledge is localized in a small neighborhood on the level set. \textbf{(b)} As time passes probability diffuses out along the integral curve. Because the curve is ergodic, this can lead to rapid spread of probability on the level set. Taking all integral curves into account amplifies the speed of spreading on the level set. \textbf{(c)} At late times the distribution converges to a steady state which is uniform all along the curve. Taking all integral curves into account, the distribution is now uniform on the level set.} \label{fig_ergodAndTherm} \end{figure*} There is ongoing debate among different programs of statistical mechanics for understanding the approach to thermal equilibrium~\cite{frigg2008field}. Equation~\eqref{eq_ModdedLiouville}, and its corollaries~(\ref{eq_evolutionOfMean},~\ref{eq_Htheorem}), seem ideally poised for informing this debate. For instance, consider the ergodic program~\cite{frigg2008field}, which posits that thermal equilibrium corresponds to a steady state of $\rho(q,p;t)$. A criticism levied against this program is that, even if the ergodic hypothesis is granted---according to which the integral curves of the Hamiltonian flow $\Phi^H_t$ will typically fill the level sets of $H$ densely---it still remains the case that under the Liouville equation it is impossible for a non-steady state to evolve into a steady state (because this equation implies $\dot S=0$); which seems to lead to the absurd conclusion that thermal equilibrium could never follow from non-equilibrium. I think this objection can be entirely resolved by the use of master equation~\eqref{eq_ModdedLiouville} in place of Liouville's equation. 
To see how this comes about consider the special case in which $A=A(q,p)$ is not explicitly time-dependent, and the system Hamiltonian is trivial, $H(q,p)\triangleq0$. In this case~\eqref{eq_ModdedLiouville} simply describes diffusion along the integral curves of the flow $\Phi^A_t$. Clearly, from any given initial condition a steady state will be reached in which probability is distributed uniformly along each integral curve. If we suppose further, as in the ergodic hypothesis, that such integral curves typically fill the level sets of $A$, then we see that an initially-localized distribution (Figure~\ref{fig_ergodAndTherm}a) will converge as time passes to a distribution uniform on the level sets of $A$ (Figure~\ref{fig_ergodAndTherm}c). In the general case, the features of the steady state (including whether or not there is one) will depend on the interplay between $H$ and $A(t)$ (alternatively, between $H, A_1,\dots,A_s$ in~\eqref{eq_ModdedLiouvilleMultObs}). I expect that settings can be found corresponding to the various ensembles of statistical mechanics. For example, Figure~\ref{fig_sho_master_equations}c illustrates a situation in which the coupling to the bath does not disturb the energy of the system. As shown, the steady state is uniformly distributed on the energy level sets. If the value of the energy is precisely known (Figure~\ref{fig_sho_master_equations}e, late time), this is just the micro-canonical ensemble. We should note that, despite the time-irreversibility present in~\eqref{eq_ModdedLiouville}, it would be mistaken to suggest this equation as an answer to Loschmidt's reversibility objection~\cite{frigg2008field} and the issue of the arrow of time. Rather, I think the arrow of time has been baked into~\eqref{eq_ModdedLiouville} by our model of the thermal bath; according to which in each infinitesimal interval of time the system comes into contact with a fresh uncorrelated thermal system and leaves it a little bit correlated.
This is not unlike the effect that Boltzmann's molecular chaos hypothesis has for his H-theorem~\cite{tolman1979principles, kardar2007statistical}. Personally I'm not too troubled by this, since I consider the ``past hypothesis''~\cite{frigg2008field} a better avenue for pursuing this issue. Finally, I would emphasize that the equations we have derived are valid for systems of any size, not just for the large-size limit; they are well suited for studying fluctuations from the mean in finite-size thermodynamic systems, both in and out of equilibrium. \subsection{Reading the measurement record; is it turtles all the way down?}\label{sec_turtles} We return to a complication that was tactically overlooked in Section~\ref{sec_measurement}. In our measurement model, after the apparatus had interacted with the system---let's call that step ``pre-measurement''---, we stipulated that the pointer on the apparatus should be read, which would yield some definite value $Q^*$. But what could it mean to ``read $Q$'' if not to measure this observable of the apparatus? This seems to lead us down an infinite regress in which the system is pre-measured by an apparatus, which must then be pre-measured by an apparatus, which must then be\dots the passage from ``systems interacting'' to ``agent being informed'' never quite taking place. In a sense this predicament is similar to the quantum measurement problem, particularly as articulated by the Wigner's friend thought experiment~\cite{wigner1961remarks}. In both settings, the paradoxical step is the passage from what seems to be best regarded as an ontic-level description (Hamiltonian or unitary dynamics) to what seems to be best regarded as an epistemic-level operation (Bayesian updating or collapse of the wave function). At the epistemic level we speak freely of agents, observers, measurements, observations, reading the measurement record, information, probability, Bayesian updating, collapse of the wave function.
But at the ontic level all of these are complicated phenomena, resisting precise characterization. Until such characterizations are available~\cite{fn_shiftySplit} we are stuck with the shifty split---to use a term coined by Bell~\cite{bell1990against}. In our model, the shifty split was introduced at one degree of separation from the system under study: we described the system-apparatus interaction at the ontic level; then we described the apparatus-agent interaction at the epistemic level. Such a once-removed approach can be very useful, as demonstrated by the example of the theory of general quantum measurement~\cite{nielsen2002quantum}. However, there is a pair of consistency tests against which any such theory should be checked. Notice that the once-removed theory contains the twice-removed and higher theories. To see this, simply let what we have been calling the object-system instead be used as an apparatus of some kind to (pre-)measure another system. (Now we have an object-system which is pre-measured by an apparatus, which is pre-measured by a second apparatus, whose pointer is ``read'' by an agent.) This maneuver uses only the rules of the ontology, yet succeeds in shifting away the split by one degree. In light of this feature of the theory, for a first test (T1) we ask: is there any such way to shift away the split that increases the efficiency of our measurement (i.e.~decreases the obstruction in~\eqref{eq_precDistRel2})? If the answer is yes then test (T1) is failed; it is a sign that the way we are bridging the shifty split does not fully exploit the possibilities allowed by the ontology, and we should strengthen it. (This strengthening is necessary. So long as test (T1) is not passed, bounds such as~\eqref{eq_precDistRel2} derived from once-removed measurement schemes cannot be taken seriously, since they can be circumvented by better use of the operations allowed by the ontology.)
On the other hand, for a second test (T2) we ask: is it the case that when we shift away the split, no matter how we do it, we find that it always decreases the efficiency of our measurement (i.e.~increases the obstruction in~\eqref{eq_precDistRel2})? If the answer is yes then test (T2) is failed; it is a sign that the way we are bridging the shifty split requires an operation that is not allowed by the ontology, and must be revised. How does our model fare on these tests? Consider test (T2) first. We have bridged the shifty split by stipulating: (i) ``read'' the pointer on the apparatus, yielding some definite value $Q^*$. (This is after the apparatus has pre-measured the object-system.) Here's a way to shift away the split without reducing efficiency. Instead of (i) do: (ii) use a second apparatus, operated according to our same model with parameters $(\beta^{(2)},k^{(2)},\Omega^{(2)})$, to pre-measure at full strength $(k^{(2)}\to\infty)$ the value of $Q$, recording it on its own pointer $Q^{(2)}$ which we then ``read'', yielding some definite value $Q^{(2)*}$. To see that procedure (ii) is just as efficient as (i) notice, first, that neither of the two involves further disturbance of the object-system, so $\eta_A$ is the same for both. Second, since the final measurement in procedure (ii) is at full strength, we have $\epsilon^{(2)}_Q\propto 1/\sqrt{k^{(2)}}\to0$, so this measurement reveals the exact value of $Q$; $Q^{(2)*}=Q^*$. Thus procedure (ii) leads to the same likelihood function~\eqref{eq_likelihood} as (i), and hence to the same $\epsilon_A$. This gives us proof-by-example that it is possible to shift away the split without reducing our model's efficiency, so test (T2) is passed. It is harder to prove that test (T1) is passed since this requires proving that efficiency remains unimproved \emph{for all} ways of shifting away the split. I don't know how to do that, but I conjecture that our model passes this test too.
In support of this conjecture, note one way in which efficiency might seem to be increased by shifting away the split, but is not. Turn again to procedure (ii) just discussed. Imagine that, after having measured the pointer of the first apparatus ($Q$) as described, it were possible to do another measurement on this apparatus to determine $P$. If we could gain even a little bit of information about the value that $P$ had at the time of interaction with the object-system ($t_0$), we could combine it with what we know of $P(t_0)$ from the thermal distribution~\eqref{eq_canonicalDist} to reduce our uncertainty $\sigma_P$, and hence reduce $\eta_A=\sqrt{k}\sigma_P$. If this were possible our model would fail test (T1). That it is impossible follows from the fact that the measurement of $Q$ in procedure (ii) had to be done at full strength $(k^{(2)}\to\infty)$, which leads to an infinite disturbance of the first apparatus ($\eta^{(2)}_Q\propto1/\epsilon^{(2)}_Q\to\infty$); and this infinitely disturbs the value of $P$ (since $\{Q,P\}=1$). So we see that in the course of carrying out procedure (ii) all information about $P(t_0)$ is lost beyond recovery. \subsection{On the epistemology of Hamiltonian ontology}\label{sec_epistLim} Consider the Kennard-Weyl-Robertson (KWR) form of the Heisenberg uncertainty principle of quantum mechanics~\cite{griffiths2005introduction}. For a pair of conjugate observables, $A$ and $B$, it reads: \begin{equation}\label{eq_Heisenberg1} \sigma_{A}\sigma_{B}\geq\frac{\hbar}{2}, \end{equation} where $\sigma_{A}$ and $\sigma_{B}$ denote the standard deviations at a given time in our knowledge of $A$ and $B$, respectively~\cite{fn_realistQM}. This form of the uncertainty principle speaks directly to the limits of what can be known about the state of a quantum system; that is, to the epistemology of quantum ontology.
In this section we ask whether the present developments allow us to establish an analogous result about the epistemology of Hamiltonian ontology. Notice, first of all, the sense in which we must understand such a question. Unlike in the quantum formalism, there is nothing in our classical formalism that rules out the possibility of \emph{starting} with perfect information about conjugate observables: \begin{equation}\label{eq_perfectInfo} \rho(A',B';t)=\delta(A(q(t),p(t))-A')\delta(B(q(t),p(t))-B'), \end{equation} where \begin{align} \rho(A',B';t)&\triangleq\int d^nqd^np\,\delta(A(q,p)-A')\notag\\ &\qquad\qquad\cdot\delta(B(q,p)-B')\rho(q,p;t). \end{align} Rather, the question is whether it is at all possible to \emph{arrive} at such a state of perfect information from a state of less information. In particular: suppose we were handed a Hamiltonian system of which we knew nothing at all, so that $\rho$ were initially uniform on phase space. Does there exist a sequence of measurements on the system that would take $\rho$ into the perfect-information state~\eqref{eq_perfectInfo}? Consider the direct approach of performing simultaneous measurement of $A$ and $B$, with respective measurement settings $(\beta_A,k_A,\Omega_A)$ and $(\beta_B,k_B,\Omega_B)$. The evolution of $\rho$ is as given by master equation~\eqref{eq_ModdedStochasticLiouvilleMultObs}. Suppose that the measurements are strong enough that they come to completion on a much faster timescale than that of the system's dynamics, so that the Liouville term in~\eqref{eq_ModdedStochasticLiouvilleMultObs} can be neglected. 
It is a bit tricky, because one must be mindful of the rules of It\^o calculus, but one can check that, starting from an uncorrelated Gaussian distribution in $A$ and $B$ (of which the uniform distribution is the special case of infinite variances), the general solution to~\eqref{eq_ModdedStochasticLiouvilleMultObs} is \begin{equation} \rho(A,B;t)=\frac{1}{2\pi\sigma_{A}\sigma_{B}}e^{-\frac{(A-\mu_{A})^2}{2\sigma_{A}^2}-\frac{(B-\mu_{B})^2}{2\sigma_{B}^2}}, \end{equation} where the means $\mu_{A},\mu_{B}$ are stochastic functions of time evolving as \begin{subequations} \begin{align} d\mu_{A}&=\sqrt{\beta_{A}k_{A}\Omega_{A}^2}\sigma_{A}(t)^2dW_{A},\\ d\mu_{B}&=\sqrt{\beta_{B}k_{B}\Omega_{B}^2}\sigma_{B}(t)^2dW_{B}; \end{align} \end{subequations} while the variances $\sigma_{A}^2,\sigma_{B}^2$ are the deterministic functions of time \begin{subequations}\label{eq_solSigma} \begin{align} \sigma_{A}(t)^2&=\frac{\left[\coth{\sqrt{(\beta_A/\beta_B)k_Ak_B\Omega_A^2}(t-t_0)}\right]^{l_A}}{\sqrt{\beta_{A}\beta_{B}(k_{A}/k_{B})\Omega_{A}^2}},\label{eq_solSigmaA}\\ \sigma_{B}(t)^2&=\frac{\left[\coth{\sqrt{(\beta_B/\beta_A)k_Ak_B\Omega_B^2}(t-t_0)}\right]^{l_B}}{\sqrt{\beta_{A}\beta_{B}(k_{B}/k_{A})\Omega_{B}^2}},\label{eq_solSigmaB} \end{align} \end{subequations} where $t_0<t$ is a constant of integration, and $l_A,l_B\in\{+1,-1\}$. We see that, under simultaneous measurement of conjugate observables, an initially-Gaussian PDF remains Gaussian for all time. Also, much like we saw in~\eqref{eq_hierarchyOfCumulants}, the mean of the distribution executes a random walk (this time in two dimensions) of volatilities proportional to the variances. 
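As a sanity check on these solutions, the random walk of the mean can be simulated directly. The sketch below (with arbitrary parameter values of our own choosing) integrates $d\mu_A$ by the Euler--Maruyama method, taking $\sigma_A(t)^2$ from~\eqref{eq_solSigmaA} in the branch $l_A=+1$:

```python
import math
import random

random.seed(0)

# Hypothetical parameter values (our own choice, for illustration only).
bA = bB = 1.0          # inverse temperatures beta_A, beta_B
kA = kB = 4.0          # measurement strengths k_A, k_B
OA = 1.0               # trap frequency Omega_A
t0, dt, steps = 0.0, 1e-3, 5000

def coth(x):
    return math.cosh(x) / math.sinh(x)

def sigA2(t):  # eq. (solSigmaA), branch l_A = +1
    return coth(math.sqrt((bA / bB) * kA * kB * OA**2) * (t - t0)) \
        / math.sqrt(bA * bB * (kA / kB) * OA**2)

# Euler-Maruyama integration of  d mu_A = sqrt(bA kA OA^2) sigA2(t) dW_A.
mu, path = 0.0, []
for n in range(1, steps + 1):
    t = n * dt
    dW = random.gauss(0.0, math.sqrt(dt))
    mu += math.sqrt(bA * kA * OA**2) * sigA2(t) * dW
    path.append(mu)

# The volatility of the walk is proportional to sigma_A^2, which diverges
# at t0 and decays toward its limiting value; so early increments of mu_A
# are large and late ones are small.
early = max(abs(path[i + 1] - path[i]) for i in range(100))
late = max(abs(path[i + 1] - path[i]) for i in range(steps - 101, steps - 1))
```

This is only a numerical illustration of the solution just displayed, not an independent integration of the master equation itself.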
However, unlike in~\eqref{eq_hierarchyOfCumulants}, now the variances converge to non-zero values as the measurement runs to completion: \begin{subequations}\label{eq_completeJointMeasurement} \begin{align} \sigma_{A}^2&\to\frac{1}{\sqrt{\beta_{A}\beta_{B}(k_{A}/k_{B})\Omega_{A}^2}},\\ \sigma_{B}^2&\to\frac{1}{\sqrt{\beta_{A}\beta_{B}(k_{B}/k_{A})\Omega_{B}^2}}. \end{align} \end{subequations} This comes about because the measurement of $A$ causes collapse ``along the $A$-direction" (along the integral curves of $\Phi^B_\tau$) and diffusion ``perpendicular to the $A$-direction" (along the integral curves of $\Phi^A_\tau$); while the simultaneous measurement of $B$ causes the converse; and at completion of the measurement the effects precisely cancel out. Notice that~\eqref{eq_completeJointMeasurement} gives \begin{align} \sigma_{A}\sigma_{B}&\to\frac{1}{\sqrt{\beta_{A}\Omega_{A}}\sqrt{\beta_{B}\Omega_{B}}}, \end{align} which begins to resemble~\eqref{eq_Heisenberg1}. Is it the case that the product $\sigma_{A}(t)\sigma_{B}(t)$ remains above this limit at all times? That depends on the exponents $l_A,l_B$. The case $l_A=+1$ gives $\sigma_A(t_0^+)\to\infty$; it describes complete ignorance about $A$ at some past time $t_0$. On the other hand, the case $l_A=-1$ gives $\sigma_A(t_0^+)=0$; it describes perfect information about $A$ at the past time $t_0$. Likewise for $l_B$. Since we are interested in beginning from a state of ignorance, the relevant solution for us has $l_A=l_B=+1$. It then follows from~\eqref{eq_solSigma} that the inequality \begin{align}\label{eq_uncertUncertRel} \sigma_{A}\sigma_{B}\geq\frac{1}{\sqrt{\beta_{A}\Omega_{A}}\sqrt{\beta_{B}\Omega_{B}}} \end{align} holds for all times. If both measurements are characterized by the same (inverse) temperature $\beta$ and trap frequency $\Omega$, this further reduces to \begin{align}\label{eq_uncertUncertRel2} \sigma_{A}\sigma_{B}\geq\frac{1}{\beta\Omega}. 
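Since $\coth x\geq1$ for $x>0$, the branch $l_A=l_B=+1$ of~\eqref{eq_solSigma} keeps the product $\sigma_A\sigma_B$ above its limiting value at all times. A quick numerical check (with arbitrary positive parameter values, our own choice) is:

```python
import math

# Arbitrary positive parameter values (our own choice; any would do).
bA, bB = 2.0, 3.0      # inverse temperatures beta_A, beta_B
kA, kB = 5.0, 7.0      # measurement strengths k_A, k_B
OA, OB = 1.5, 0.8      # trap frequencies Omega_A, Omega_B
t0 = 0.0

def coth(x):
    return math.cosh(x) / math.sinh(x)

def sigmaA2(t):  # eq. (solSigmaA), branch l_A = +1
    return coth(math.sqrt((bA / bB) * kA * kB * OA**2) * (t - t0)) \
        / math.sqrt(bA * bB * (kA / kB) * OA**2)

def sigmaB2(t):  # eq. (solSigmaB), branch l_B = +1
    return coth(math.sqrt((bB / bA) * kA * kB * OB**2) * (t - t0)) \
        / math.sqrt(bA * bB * (kB / kA) * OB**2)

# Limiting value of sigma_A * sigma_B as the measurement runs to completion.
bound = 1.0 / (math.sqrt(bA * OA) * math.sqrt(bB * OB))

# Since coth(x) >= 1 for x > 0, the product stays above the bound at all times
# and approaches it at late times.
for t in [0.01, 0.1, 1.0, 10.0, 50.0]:
    assert math.sqrt(sigmaA2(t) * sigmaB2(t)) >= bound - 1e-12
```

The inequality~\eqref{eq_uncertUncertRel} is here just the statement that the two $\coth$ factors never drop below one.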
\end{align} We have derived this uncertainty-uncertainty relation by considering simultaneous measurement of the pair of conjugate observables $A,B$. Could this be a general epistemic obstruction, or is there some different sequence of measurements that fares better? We leave the question open for future investigation. \subsection{Future directions}\label{sec_future} In closing we look to some of the many questions and possibilities ahead. We have argued that our model of measurement is maximally efficient; i.e.~that it is impossible to do better than~\eqref{eq_precDistRel2} in the way of measuring without disturbing. It would be of fundamental interest to have a proof of this claim at the level of rigor of mathematical physics. Complementing this, it would be valuable to have experimental tests of~(\ref{eq_precDistRel2},~\ref{eq_precPrecRel},~\ref{eq_uncertUncertRel2}) and of the bigger picture outlined in Figure~\ref{fig_classical_vs_quantum}. Moving forward it will be worth honing our intuition about the range of possible dynamics of a Hamiltonian system under measurement (or more accurately, of the epistemic state of a rational agent about a Hamiltonian system under measurement). For this it would be good to see numerical studies of equations~(\ref{eq_ModdedLiouville},~\ref{eq_ModdedStochasticLiouville}) in more interesting scenarios than the one-dimensional simple harmonic oscillator explored in Figure~\ref{fig_sho_master_equations}. For this task it might be useful to reproduce in the classical setting the steps, already worked out in the quantum setting, for transforming a non-linear stochastic master equation like~\eqref{eq_ModdedStochasticLiouville} into an equivalent linear equation~\cite{wiseman1996quantum,jacobs2006straightforward}. 
Another calculation that I think should be attempted is the derivation of the precise version of~\eqref{eq_precDistRel4}, which I suspect can be obtained by reproducing our measurement model from Section~\ref{sec_measurement} in the quantum setting. As discussed in Section~\ref{sec_thermo}, equations~(\ref{eq_ModdedLiouville},~\ref{eq_evolutionOfMean},~\ref{eq_Htheorem}) seem like a promising basis for a better understanding of equilibrium and non-equilibrium statistical physics. I think this research program deserves much attention. To name just two topics of inquiry in this direction: (i) Can the various ensembles of equilibrium statistical mechanics indeed be obtained as the equilibrium solutions to~\eqref{eq_ModdedLiouville} under suitable choices of the coupling to the bath, $A$? And if so, what does this teach us about the approach to thermal equilibrium in each case? (ii) How do the present results relate to the body of work on Maxwell's demon reviewed in Section~\ref{sec_intro}? Can the old debate now be resolved within classical physics, without recourse to quantum physics? In connection with the quantum measurement problem and the interpretation of quantum mechanics, there is a program dating back to Einstein~\cite{einstein1935can, harrigan2010einstein} of attempting to identify and unmix a possible epistemic component of quantum theory from its ontic content. In recent times this program has made promising progress at the hands of Caves, Fuchs, and others~\cite{fuchs2002quantum, caves2002unknown, harrigan2010einstein}. In particular Spekkens~\cite{spekkens2007evidence, spekkens2016quasi}, and Bartlett, Rudolph, and Spekkens~\cite{bartlett2012reconstruction}, have illustrated how an uncircumventable epistemic limitation in an otherwise classical world, much like what is suggested by our discussion in Section~\ref{sec_epistLim}, can lead to several of the phenomena usually regarded as characteristic of quantum mechanics. 
It will be interesting to see what these two programs can contribute to each other. Finally I would like to venture the following speculative suggestions. (i) As we know from general relativity, gravity couples directly to energy. Perhaps a system subject to a strong external gravitational field can, in certain cases, be reasonably modeled by~\eqref{eq_ModdedLiouville} with $A=H$. If so, could this tell us something interesting about the entropy of a system falling onto the event horizon of a black hole? Could this be a useful tool for studying black hole thermodynamics? (A quantum version of this master equation (cf.~earlier comments in connection with~\eqref{eq_precDistRel4}) might be an even better tool.) (ii) To the best of my knowledge, theoretical computer science grounds its notions of computability and complexity in concrete (if highly abstracted and idealized) physical models. Does the existence of an obstruction to ideal measurement without disturbance in Hamiltonian mechanics have some bearing on those notions of computer science grounded in the world of classical physics~\cite{fn_reversibleComp}? (iii) Hamilton's equations and their underlying geometro-algebraic structure are not unique to physics; they emerge wherever the equations of a theory can be obtained from a variational principle~\cite{arnold1990symplectic}. Indeed, in classical physics they emerge in just this way from Hamilton's principle of stationary action. In particular, optimal control theory uses essentially the same equations under the name of Pontryagin's minimum principle~\cite{kirk1970optimal}. Could the present results have consequences for aspects of optimal control under partial information and, by extension, for artificial intelligence? At the least, these musings illustrate the breadth of potential implications of our subject. 
\begin{acknowledgments} It is a pleasure to thank Omar Eulogio L\'opez, Kurt Jacobs, and Pavel Chvykov for reading versions of the typescript and providing many useful suggestions. I'm particularly grateful to Matthew A.~Wilson, for believing in me when I most needed it and for his continued mentorship and patience. While conducting this research I was supported by the Picower Neurological Disorder Research Fund. \end{acknowledgments}
\section{Background} \label{sec:background} The prequential framework for evaluating probability forecasters was introduced by A.~P.~Dawid in \cite{dawid:1984} and \cite{dawid:1985}. Suppose two players, Forecaster and Reality, interact according to the following protocol. \bigskip \noindent \textsc{Binary prequential protocol}\nopagebreak \smallskip \parshape=4 \IndentI \WidthI \IndentII \WidthII \IndentII \WidthII \IndentI \WidthI \noindent FOR $n=1,2,\dots$:\\ Forecaster announces $p_n\in[0,1]$.\\ Reality announces $y_n\in\{0,1\}$.\\ END FOR. \bigskip \noindent The interpretation is that $p_n$ is Forecaster's subjective probability that $y_n=1$ after having observed $y_1,\ldots,y_{n-1}$ and taking account of all other relevant information available at the time of issuing the forecast. We will refer to $p_n$ as \emph{forecasts} and to $y_n$ as \emph{outcomes}. More generally, the outcomes take values in an arbitrary measurable space and the forecasts are probability distributions on that measurable space, but in this article we will restrict our attention to binary outcomes (as in \cite{dawid:1985}); this will be further discussed at the end of Section \ref{sec:application}. In general, the two players possess perfect information about each other's moves: Forecaster chooses $p_1$, Reality observes $p_1$ and chooses $y_1$, Forecaster observes $y_1$ and chooses $p_2$, etc. We might, however, be interested in ``oblivious'' strategies for a player, especially for Reality, who may generate her moves randomly according to a probability measure on $\{0,1\}^{\infty}$ chosen in advance. On the other hand, the players may also react to events outside the protocol. Dawid's \emph{prequential principle} (see, e.g., \cite{dawid:1984,dawid:1985,dawid/vovk:1999}) says that when testing the adequacy of the forecaster in light of the outcomes $y_n$ we should only use the forecasts $p_n$, not the forecasting strategy (if any) that Forecaster used to produce $p_n$. 
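As a minimal illustration of the protocol (the two strategies below are our own toy examples, not part of the framework), the interaction can be sketched as a loop:

```python
import random

def run_protocol(forecaster, reality, n_rounds):
    """Play the binary prequential protocol: Forecaster announces p_n,
    then Reality announces y_n, each seeing all earlier moves."""
    history = []  # list of (p_n, y_n) pairs
    for _ in range(n_rounds):
        p = forecaster(history)          # p_n in [0,1]
        y = reality(history, p)          # y_n in {0,1}
        history.append((p, y))
    return history

# Toy players: Forecaster predicts the empirical frequency of 1s so far;
# Reality plays an "oblivious" strategy, flipping a fair coin.
def freq_forecaster(history):
    return 0.5 if not history else sum(y for _, y in history) / len(history)

random.seed(0)
seq = run_protocol(freq_forecaster, lambda h, p: random.randint(0, 1), 1000)

# For this pair of players the forecasts track the outcomes on average:
# the mean of (y_i - p_i) over the rounds is small.
bias = sum(y - p for p, y in seq) / len(seq)
```

Testing procedures respecting the prequential principle may look only at `seq`, never at the code of `freq_forecaster`.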
In this article we will only be interested in testing procedures that respect the prequential principle. In other words, we will be interested in testing the sequence \begin{equation}\label{eq:sequence} (p_1,y_1,p_2,y_2,\ldots) \end{equation} of forecast/outcome pairs $(p_n,y_n)$ for agreement. This sequence may be infinite or finite. There are two main ways to test sequences (\ref{eq:sequence}) for agreement, which we will call game-theoretic and measure-theoretic. For concreteness, suppose the sequence (\ref{eq:sequence}) does not satisfy \begin{equation}\label{eq:calibration} \lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^n (y_i-p_i) = 0 \end{equation} (i.e., the sequence is not ``unbiased in the large''; see, e.g., \cite{dawid/vovk:1999} for numerous other ways of testing probability forecasts). What do we mean when we say that violation of (\ref{eq:calibration}) evidences lack of agreement? Two ways to answer this question correspond to two different approaches to the foundations of probability theory. One version of the game-theoretic answer is that we can gamble against the forecasts in such a way that, risking only one monetary unit, we can become infinitely rich when (\ref{eq:calibration}) is violated. The measure-theoretic answer is that, no matter what strategy Forecaster is using, the probability of (\ref{eq:calibration}) is one; therefore, if (\ref{eq:calibration}) is violated, an \emph{a priori} specified event of probability zero (given by the negation of the formula (\ref{eq:calibration})) has occurred. Both becoming infinitely rich and the occurrence of a pre-specified event of probability zero can be interpreted as lack of agreement between the forecasts and outcomes. In fact, even the first answer can be expressed in terms of probability. The game-theoretic approach to the foundations of probability is as old as the standard measure-theoretic approach based on Kolmogorov's axioms (\cite{kolmogorov:1933}; see \cite{GTP4} for the historical background). 
An imperfect version of the game-theoretic approach was championed by von Mises \cite{mises:1919} and formalized, in different ways, by Wald \cite{wald:1937} and Church \cite{church:1940}. Ville \cite{ville:1939} gave an example demonstrating that von Mises's notion of a gambling strategy was too restrictive, and introduced a more general class of gambling strategies and a closely related notion of a martingale. However, the formal notion of game-theoretic probability was introduced only recently (see, e.g., \cite{vovk:1993logic}, \cite{dawid/vovk:1999}, or, for a much fuller treatment, \cite{shafer/vovk:2001}). In particular, an event has zero game-theoretic probability if and only if there is a gambling strategy that, risking at most one monetary unit, makes the player infinitely rich when the event happens. The notion of game-theoretic probability makes the game-theoretic and measure-theoretic justifications of the testing procedure based on (\ref{eq:calibration}) look very similar: we just say that the probability (either game-theoretic or measure-theoretic) of (\ref{eq:calibration}) being violated is zero. The main result of this article says that the two notions of probability coincide on the analytic sets, and so the two approaches to testing probability forecasts are equivalent, in the prequential framework. The restriction to the analytic, and even Borel, sets is not a limitation in any practically interesting case. For testing procedures based on events of probability zero (basically, on strong laws of probability theory, such as (\ref{eq:calibration})), a special case of our result is sufficient: it is sufficient to know that a Borel set has zero game-theoretic probability if and only if it has zero measure-theoretic probability. Our full result is also applicable to events of merely low, not zero, probability. 
For example, we could reject the hypothesis of agreement if \begin{equation}\label{eq:finite-calibration} \frac1n \left| \sum_{i=1}^n (y_i-p_i) \right| \ge \frac{C}{\sqrt{n}} \end{equation} for prespecified large numbers $C$ and $n$. Our result shows that this and similar procedures have equally strong game-theoretic and measure-theoretic justifications. Notice that in the case of (\ref{eq:finite-calibration}) our decision to reject the hypothesis of agreement can be made after observing a finite sequence, $(p_1,y_1,\ldots,p_n,y_n)$. \section{This article} In the following two sections, \ref{sec:game} and \ref{sec:measure}, we formally introduce in the prequential framework the two notions of probability discussed in the previous section. The main result of this article is Theorem \ref{thm:main} in Section \ref{sec:result}, asserting the coincidence of the two kinds of probability on the analytic sets. This result has several predecessors. In the situation where Forecaster's strategy is fixed, Ville (\cite{ville:1939}, Theorems 1 and 2 in Chapter IV) showed that a set $E$ has game-theoretic probability zero if and only if it has measure-theoretic probability zero. (Ville stated this result in slightly different terms, without explicit use of game-theoretic probability.) This was generalized in \cite{shafer/vovk:2001} (Proposition 8.13) to the statement that game-theoretic and measure-theoretic probability coincide on the Borel sets. In the case of a finite-horizon protocol, a statement analogous to Theorem \ref{thm:main} was proved by Shafer in \cite{shafer:1996art} (Proposition 12.7.4). The special case of Theorem \ref{thm:main} asserting the coincidence of game-theoretic and measure-theoretic probability on the open sets was first proved in \cite{vovk/shen:2008journal} (Theorem 2). In the same Section \ref{sec:result} we also prove that measure-theoretic probability never exceeds game-theoretic probability. This simple statement is true for all sets, not just analytic. 
The proof of the opposite inequality is given in Section \ref{sec:proof}. It relies on two fundamental results: Choquet's capacitability theorem \cite{choquet:1954} and L\'evy's zero-one law in its game-theoretic version recently found in \cite{GTP27}. \subsection*{Some notation and definitions} The set of all natural (i.e., positive integer) numbers is denoted $\bbbn$, $\bbbn:=\{1,2,\ldots\}$. As always, $\bbbr$ is the set of all real numbers. Let $\Omega:=\{0,1\}^{\infty}$ be the set of all infinite binary sequences and $\Omega^{\diamond}:=\cup_{n=0}^{\infty}\{0,1\}^n$ be the set of all finite binary sequences. Set $\Pi:=([0,1]\times\{0,1\})^{\infty}$ and $\Pi^{\diamond}:=\cup_{n=0}^{\infty}([0,1]\times\{0,1\})^n$. The empty element (sequence of length zero) of both $\Omega^{\diamond}$ and $\Pi^{\diamond}$ will be denoted $\Lambda$. In our applications, the elements of $\Omega$ and $\Omega^{\diamond}$ will be sequences of outcomes (infinite or finite), and the elements of $\Pi$ and $\Pi^{\diamond}$ will be sequences of forecasts and outcomes (infinite or finite). The set $\Pi$ will sometimes be referred to as the \emph{prequential space}. For $x\in\Omega^{\diamond}$, let $\Gamma(x)\subseteq\Omega$ be the set of all infinite extensions of $x$ that belong to $\Omega$. Similarly, for $x\in\Pi^{\diamond}$, $\Gamma(x)\subseteq\Pi$ is the set of all infinite extensions of $x$ that belong to $\Pi$. For each $\omega=(y_1,y_2,\ldots)\in\Omega$ and $n\in\bbbn\cup\{0\}$, set $\omega^n:=(y_1,\ldots,y_n)$. Similarly, for each $\pi=(p_1,y_1,p_2,y_2,\ldots)\in\Pi$ and $n\in\bbbn\cup\{0\}$, set $\pi^n:=(p_1,y_1,\ldots,p_n,y_n)$. 
In some proofs and remarks we will be using the following notation, for $n\in\bbbn\cup\{0\}$: $\Omega^n:=\{0,1\}^n$ is the set of all finite binary sequences of length $n$; $\Omega^{\ge n}:=\cup_{i=n}^{\infty}\Omega^i$ is the set of all finite binary sequences of length at least $n$; $\Pi^n:=([0,1]\times\{0,1\})^n$; $\Pi^{\ge n}:=\cup_{i=n}^{\infty}([0,1]\times\{0,1\})^i$. \section{Game-theoretic prequential probability} \label{sec:game} A \emph{farthingale} is a function $V:\Pi^{\diamond}\to(-\infty,\infty]$ satisfying \begin{multline}\label{eq:farthingale} V(p_1,y_1,\ldots,p_{n-1},y_{n-1})\\ = (1-p_n) V(p_1,y_1,\ldots,p_{n-1},y_{n-1},p_n,0)\\ + p_n V(p_1,y_1,\ldots,p_{n-1},y_{n-1},p_n,1) \end{multline} for all $n\in\bbbn$ and all $(p_1,y_1,p_2,y_2,\ldots)\in\Pi$; the product $0\infty$ is defined to be $0$. If we replace ``$=$'' by ``$\ge$'' in (\ref{eq:farthingale}), we get the definition of a \emph{superfarthingale}. These are prequential versions of the standard notions of martingale and supermartingale. We will be interested mainly in non-negative farthingales and superfarthingales. The value of a farthingale can be interpreted as the capital of a gambler betting according to the odds announced by Forecaster. In the case of superfarthingales, the gambler is allowed to throw away part of his capital. Game-theoretic probability can be introduced as either upper or lower probability; in this article the former is more convenient (and was used in the informal discussion of Section \ref{sec:background}). A \emph{prequential event} is a subset of $\Pi$. The \emph{upper game-theoretic probability} of a prequential event $E$ is \begin{equation}\label{eq:upper-game} \UpProb(E) := \inf \left\{ a \mathrel{|} \exists V: V(\Lambda)=a \text{ and } \forall\pi\in E: \limsup_n V(\pi^n)\ge1 \right\}, \end{equation} where $V$ ranges over the non-negative farthingales. 
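A standard concrete example of a non-negative farthingale (our illustration, not from the text) is the capital process $V(p_1,y_1,\ldots,p_n,y_n)=\prod_{i=1}^n\bigl(1+\epsilon(y_i-p_i)\bigr)$ for a fixed stake $\epsilon\in[-1,1]$; the defining identity~\eqref{eq:farthingale} can be verified numerically:

```python
import random

def V(path, eps=0.5):
    """Capital process V(p_1,y_1,...,p_n,y_n) = prod_i (1 + eps*(y_i - p_i)).
    Non-negative whenever |eps| <= 1, since |y_i - p_i| <= 1."""
    v = 1.0
    for p, y in path:
        v *= 1.0 + eps * (y - p)
    return v

# Verify the farthingale identity
#   V(x) = (1-p) V(x,(p,0)) + p V(x,(p,1))
# on randomly generated prefixes x and forecasts p.
random.seed(1)
for _ in range(1000):
    x = [(random.random(), random.randint(0, 1)) for _ in range(5)]
    p = random.random()
    lhs = V(x)
    rhs = (1.0 - p) * V(x + [(p, 0)]) + p * V(x + [(p, 1)])
    assert abs(lhs - rhs) < 1e-9
```

In gambling terms, the gambler stakes the fraction $\epsilon$ of current capital on the outcome $1$ at the odds implied by $p_n$; the identity says exactly that each such bet is fair at those odds.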
It is clear that we will obtain the same notion of upper game-theoretic probability if we replace the $\ge$ in (\ref{eq:upper-game}) by $>$, replace $\limsup$ by $\sup$ or $\liminf$ (we can always stop when 1 is reached), or allow $V$ to range over the non-negative superfarthingales. We will need the following property of countable subadditivity of game-theoretic probability. \begin{lemma}\label{lem:subadditivity} For any sequence $E_1,E_2,\ldots$ of prequential events, \begin{equation*} \UpProb \left( \cup_{i=1}^{\infty} E_i \right) \le \sum_{i=1}^{\infty} \UpProb(E_i). \end{equation*} In particular, if $\UpProb(E_i)=0$ for all $i$, then $\UpProb(\cup_{i=1}^{\infty}E_i)=0$. \end{lemma} \begin{proof} It suffices to notice that the sum of a sequence of non-negative farthingales is again a non-negative farthingale. \end{proof} \section{Measure-theoretic prequential probability} \label{sec:measure} A \emph{forecasting system} is a function $\phi:\Omega^{\diamond}\to[0,1]$. Let $\Phi$ be the set of all forecasting systems. For each $\phi\in\Phi$ there exists a unique probability measure $\Prob_{\phi}$ on $\Omega$ (equipped with the Borel $\sigma$-algebra) such that, for each $x\in\Omega^{\diamond}$, $\Prob_{\phi}(\Gamma(x1)) = \phi(x)\Prob_{\phi}(\Gamma(x))$. (In other words, such that $\phi(x)$ is a version of the conditional probability, according to $\Prob_{\phi}$, that $x$ will be followed by $1$.) The notion of a forecasting system is close to that of a probability measure on $\Omega$: the correspondence $\phi\mapsto\Prob_{\phi}$ becomes an isomorphism if we only consider forecasting systems taking values in the open interval $(0,1)$ and probability measures taking positive values on the sets $\Gamma(x)$, $x\in\Omega^{\diamond}$. For each sequence $(y_1,\ldots,y_n)\in\Omega^{\diamond}$ and each forecasting system $\phi\in\Phi$, let \begin{equation*} (y_1,\ldots,y_n)^{\phi} := (\phi(\Lambda),y_1,\phi(y_1),y_2,\ldots,\phi(y_1,\ldots,y_{n-1}),y_n) \in \Pi^{\diamond}. 
\end{equation*} Similarly, for each $(y_1,y_2,\ldots)\in\Omega$ and each $\phi\in\Phi$, \begin{equation*} (y_1,y_2,\ldots)^{\phi} := (\phi(\Lambda),y_1,\phi(y_1),y_2,\phi(y_1,y_2),y_3,\ldots) \in \Pi. \end{equation*} We can apply the idea of measure-theoretic probability to prequential events as follows, in the spirit of \cite{huber:1981}, Section 10.2. For each forecasting system $\phi$ and prequential event $E\subseteq\Pi$, define \begin{equation* \Prob^{\phi}(E) := \Prob_{\phi} \left\{ \omega\in\Omega \mathrel{|} \omega^{\phi}\in E \right\} = \Prob_{\phi}(E^{\phi}), \end{equation*} where $ E^{\phi} := \left\{ \omega\in\Omega \mathrel{|} \omega^{\phi}\in E \right\} $ and $\Prob_{\phi}(A)$ is understood, in general, as the outer measure of $A$, i.e., as $\inf_{B}\Prob_{\phi}(B)$, $B$ ranging over the Borel sets containing $A$. The convention about using the outer measure is important only for our proofs, not for the statement of the main result: according to Luzin's theorem (see, e.g., \cite{kechris:1995}, Theorem 21.10), every analytic set is universally measurable, and $E^{\phi}$ is analytic whenever $E$ is. Now we define the \emph{upper measure-theoretic probability} of $E$ as \begin{equation}\label{eq:upper-measure} \UpProbMeas(E) := \sup_{\phi}\Prob^{\phi}(E). \end{equation} \begin{remark}\label{rem:naive} Our definition (\ref{eq:upper-measure}) is not fully adequate from the intuitive point of view: even if we are willing to assume that Forecaster follows some forecasting strategy (which is a non-trivial assumption: cf.\ the discussion in \cite{dawid:1985}, pp.~1255--1256), why should this forecasting strategy depend only on the past outcomes? For example, a meteorologist forecasting rain might have data about temperatures, winds, etc. (See \cite{dawid:1985}, Section 9, for further discussion.) 
A more satisfactory definition would involve a supremum over all probability spaces equipped with a filtration and, for each such probability space, a further supremum over all forecasting systems adapted to the corresponding filtration (with a natural, more general definition of a forecasting system). Our definition (\ref{eq:upper-measure}) is the simplest one mathematically and leads to the strongest inequality $\UpProb(E)\le\UpProbMeas(E)$ (for the analytic sets), which is the non-trivial part of our main result, Theorem \ref{thm:main}. \end{remark} \section{Main result} \label{sec:result} Now we have all the ingredients needed to state our main result. \begin{theorem}\label{thm:main} For all analytic sets $E\subseteq\Pi$, $\UpProb(E)=\UpProbMeas(E)$. \end{theorem} Intuitively, this theorem establishes the equivalence between the purely prequential and Bayesian viewpoints in the framework of probability forecasting. The definition of measure-theoretic probability is Bayesian, in that Forecaster is modeled as a coherent subjectivist Bayesian having a joint probability distribution over the sequences of outcomes (cf.\ \cite{dawid:1982}, Section 1); we represent this joint probability distribution as a forecasting system. Rejecting his forecasts is the same as rejecting all forecasting systems that could have produced those forecasts: cf.\ the $\sup_{\phi}$ in (\ref{eq:upper-measure}). The definition of game-theoretic probability is purely prequential, in that it does not postulate any joint probability distribution behind the forecasts; the latter are used for testing directly. \ifFULL\begingroup\color{blue} In an important sense, game-theoretic probability is dual to measure-theoretic probability; in particular, we have $\sup$ in the definition of the latter and $\inf$ in the definition of the former. 
\endgroup\fi \begin{remark} As discussed in the previous section (Remark \ref{rem:naive}), our Bayesian forecaster is somewhat naive: he conditions only on the observed outcomes. It would be easy (but would complicate the exposition) to allow Reality to issue a \emph{signal} $s_n$, taking one of a finite number of values, before Forecaster chooses his forecast $p_n$. Allowing both farthingales and forecasting systems to depend on the signals, one could still prove that $\UpProb(E)=\UpProbMeas(E)$ for all analytic $E\subseteq\Pi$ following the proof of Theorem \ref{thm:main}. \end{remark} In this section we will only prove the inequality $\ge$ in Theorem \ref{thm:main}. It turns out that this inequality holds for all sets $E$, not necessarily analytic. \begin{theorem}\label{thm:easy-way} For all sets $E\subseteq\Pi$, $\UpProb(E)\ge\UpProbMeas(E)$. \end{theorem} The simple proof of Theorem \ref{thm:easy-way} will follow from Ville's inequality (\cite{ville:1939}, p.~100; in modern probability textbooks this result is often included among ``Doob's inequalities'': see, e.g., \cite{shiryaev:1996}, Theorem VII.3.1.III). Let $\phi$ be a forecasting system. A \emph{martingale} w.r.\ to $\phi$ is a function $V:\Omega^{\diamond}\to(-\infty,\infty]$ satisfying \begin{equation* V(x) = (1-\phi(x)) V(x,0) + \phi(x) V(x,1) \end{equation*} for all $x\in\Omega^{\diamond}$ (with the same convention $0\infty:=0$). \begin{proposition}[\cite{ville:1939}]\label{prop:Ville} If $\phi$ is a forecasting system, $V$ is a non-negative martingale w.r.\ to $\phi$, and $C>0$, \begin{equation*} \Prob_{\phi} \left\{ \omega\in\Omega \mathrel{|} \sup_n V(\omega^n) \ge C \right\} \le \frac{V(\Lambda)}{C}. \end{equation*} \end{proposition} If $V$ is a farthingale, the function $V^{\phi}:\Omega^{\diamond}\to(-\infty,\infty]$ defined by \begin{equation* V^{\phi}(x) := V\left(x^{\phi}\right), \quad x\in\Omega^{\diamond}, \end{equation*} is a martingale w.r.\ to $\phi$. 
It is important that this statement does not require measurability of the farthingale $V$; even if $V$ is not measurable, $V^{\phi}$ is always measurable, like any other function on $\Omega^{\diamond}$ (which is why there was no need to include the requirement of measurability in our definition of a martingale). \begin{proof}[Proof of Theorem \ref{thm:easy-way}] Let $E\subseteq\Pi$. It suffices to prove that $\Prob_{\phi}(E^{\phi})\le V(\Lambda)$ for any forecasting system $\phi$ and any non-negative farthingale $V$ satisfying $\limsup_n V(\pi^n)\ge1$ for all $\pi \in E$. Fix such $\phi$ and $V$. Then $V^{\phi}$ is a non-negative martingale w.r.\ to $\phi$ satisfying $\limsup_n V^{\phi}(\omega^n)\ge1$ for all $\omega \in E^{\phi}$. Applying Proposition \ref{prop:Ville} to $V^{\phi}$, we can see that indeed $\Prob_{\phi}(E^{\phi})\le V^{\phi}(\Lambda) = V(\Lambda)$. \end{proof} \section{Proof of the inequality $\le$ in Theorem \ref{thm:main}} \label{sec:proof} We start by proving a special case of Theorem \ref{thm:main}. \begin{lemma}\label{lem:compact} If $E\subseteq\Pi$ is a compact set, $\UpProbMeas(E)=\UpProb(E)$. \end{lemma} \begin{proof} Fix a compact prequential event $E\subseteq\Pi$. (Of course, ``compact'' is the same thing as ``closed'' in this context.) Represent $E$ as the intersection $E=\cap_{i=1}^{\infty}E_i$ of a nested sequence $E_1\supseteq E_2\supseteq\cdots$ of closed sets such that \begin{equation}\label{eq:level-i} \forall\pi\in\Pi: \pi\in E_i \Longrightarrow \Gamma(\pi^i)\subseteq E_i \end{equation} is satisfied for all $i$. Informally, $E_i$ is a property of the first $i$ forecasts and outcomes. For each $i=1,2,\ldots$, define a superfarthingale $W_i$ by setting \begin{equation}\label{eq:W-base} W_i(x) := \begin{cases} 1 & \text{if $\Gamma(x)\subseteq E_i$}\\ 0 & \text{otherwise} \end{cases} \end{equation} for all $x\in\Pi^{\ge i}$ and then proceeding inductively as follows. 
If $W_i(x)$ is already defined for $x\in\Pi^n$, $n=i,i-1,\ldots,1$, define $W_i(x)$, for each $x\in\Pi^{n-1}$, by \begin{equation}\label{eq:sup} W_i(x) := \sup_{p\in[0,1]} \bigl( (1-p) W_i(x,p,0) + p W_i(x,p,1) \bigr). \end{equation} It is clear that $W_1\ge W_2\ge\cdots$. Let us check that $W_i(x)$ is upper semicontinuous as a function of $x\in\Pi^{\diamond}$. By (\ref{eq:W-base}) this is true for $x\in\Pi^{\ge i}$. Suppose this is true for $x\in\Pi^n$, $n\in\{i,i-1,\ldots,2\}$, and let us prove that it is true for $x\in\Pi^{n-1}$, using the inductive definition (\ref{eq:sup}). It is clear that $ f(x,p) := (1-p) W_i(x,p,0) + p W_i(x,p,1) $ is upper semicontinuous as a function of $p\in[0,1]$ and $x\in\Pi^{n-1}$. It is well known that $\sup_p f(x,p)$ is upper semicontinuous whenever $f$ is upper semicontinuous and $x$ and $p$ range over compact sets (see, e.g., \cite{dellacherie:1972}, Theorem I.2(d)). \ifnotJOURNAL A simple proof of a slightly more general fact will be given below in Lemma \ref{lem:upper-semicontinuity}. \fi Therefore, $W_i(x)=\sup_{p\in[0,1]}f(x,p)$ is an upper semicontinuous function of $x\in\Pi^{n-1}$. An important implication of the upper semicontinuity of $W_i$ and the compactness of $[0,1]$ is that the supremum in (\ref{eq:sup}) is attained: it is easy to check that an upper semicontinuous function attains its supremum over a compact set (cf.\ \cite{engelking:1989}, Problem 3.12.23(g)). For each $i=1,2,\ldots$, we can now define a forecasting system $\phi_i$ as follows. For each $x\in\Omega^n$, $n=0,1,\ldots,i-1$, choose $\phi_i(x)$ such that \begin{multline*} (1-\phi_i(x)) W_i(x^{\phi_i},\phi_i(x),0) + \phi_i(x) W_i(x^{\phi_i},\phi_i(x),1)\\ = \sup_{p} \bigl( (1-p) W_i(x^{\phi_i},p,0) + p W_i(x^{\phi_i},p,1) \bigr) = W_i(x^{\phi_i}) \end{multline*} (this is an inductive definition; in particular, $x^{\phi_i}$ is already defined at the time of defining $\phi_i(x)$). For $x\in\Omega^{\ge i}$, set, for example, $\phi_i(x):=0$.
The important property of $\phi_i$ is that $W_i^{\phi_i}$ is a martingale w.r.\ to $\phi_i$, and so $\Prob^{\phi_i}(E_i)=W_i(\Lambda)$. Since the set $\Phi$ of all forecasting systems is compact in the product topology, the sequence $\phi_i$ has a convergent subsequence $\phi_{i_k}$, $k=1,2,\ldots$; let $\phi:=\lim_{k\to\infty}\phi_{i_k}$. We assume, without loss of generality, $i_1<i_2<\cdots$. Set \begin{equation*} c := \inf_i W_i(\Lambda) = \lim_{i\to\infty} W_i(\Lambda). \end{equation*} Fix an arbitrarily small $\epsilon>0$. Let us prove that $\Prob_{\phi}(E^{\phi})\ge c-\epsilon$. Let $K\in\bbbn$. The restriction of $\Prob_{\phi_{i_k}}$ to $\Omega^{i_K}$ (more formally, the probability measure assigning weight $\Prob_{\phi_{i_k}}(\Gamma(x))$ to each singleton $\{x\}$, $x\in\Omega^{i_K}$) comes within $\epsilon$ of the restriction of $\Prob_{\phi}$ to $\Omega^{i_K}$ in total variation distance from some $k$ on; let the total variation distance be at most $\epsilon$ for all $k\ge K'\ge K$. Let $k\ge K'$. Since $\Prob_{\phi_{i_k}}(E_{i_k}^{\phi_{i_k}})\ge c$, it is also true that $\Prob_{\phi_{i_k}}(E_{i_K}^{\phi_{i_k}})\ge c$; therefore, it is true that $\Prob_{\phi}(E_{i_{K}}^{\phi_{i_k}})\ge c-\epsilon$. By Fatou's lemma, we now obtain \begin{equation}\label{eq:Fatou} \Prob_{\phi} \left( \limsup_{k} E_{i_{K}}^{\phi_{i_k}} \right) \ge \limsup_{k\to\infty} \Prob_{\phi}(E_{i_{K}}^{\phi_{i_k}}) \ge c-\epsilon. \end{equation} Let us check that \begin{equation}\label{eq:limsup} \limsup_{k} E_{i_{K}}^{\phi_{i_k}} \subseteq E_{i_{K}}^{\phi}. \end{equation} Indeed, let $\omega\notin E_{i_{K}}^{\phi}$, i.e., $\omega^{\phi}\notin E_{i_{K}}$. Since $\phi_{i_k}\to\phi$ in the product topology and the set $E_{i_K}$ is closed, $\omega^{\phi_{i_k}}\notin E_{i_K}$ from some $k$ on. This means that $\omega\in E_{i_K}^{\phi_{i_k}}$ for only finitely many $k$, i.e., $ \omega \notin \limsup_{k} E_{i_{K}}^{\phi_{i_k}} $. 
From (\ref{eq:Fatou}) and (\ref{eq:limsup}) we can see that $\Prob_{\phi}(E_{i_K}^{\phi})\ge c-\epsilon$, for all $K\in\bbbn$. This implies $\Prob_{\phi}(E^{\phi})\ge c-\epsilon$. Since this holds for all $\epsilon$, $\Prob_{\phi}(E^{\phi})\ge c$. The rest of the proof is easy: since \begin{equation*} \UpProb(E) \le c \le \Prob_{\phi}(E^{\phi}) \le \UpProbMeas(E) \le \UpProb(E) \end{equation*} (the last inequality following from Theorem~\ref{thm:easy-way}), we have \begin{equation*} \UpProb(E) = c = \Prob_{\phi}(E^{\phi}) = \UpProbMeas(E). \qedhere \end{equation*} \end{proof} \ifnotJOURNAL In the proof of Lemma \ref{lem:compact} we referred to the following simple result. \begin{lemma}\label{lem:upper-semicontinuity} Suppose $X$ and $Y$ are topological spaces and $Y$ is compact. If a function $f:X\times Y\to\bbbr$ is upper semicontinuous, then the function $x\in X\mapsto g(x):=\sup_{y\in Y} f(x,y)$ is also upper semicontinuous. \end{lemma} \begin{proof} For any $c\in\bbbr$, we are required to show that the set $G:=\{x\mathrel{|}\sup_y f(x,y)<c\}$ is open. Let $x\in G$. For any $y\in Y$ there exists a neighborhood $O'_y$ of $x$ and a neighborhood $O''_y$ of $y$ such that, for some $\epsilon>0$, $f(x',y')<c-\epsilon$ for all $x'\in O'_y$ and all $y'\in O''_y$. By the compactness of $Y$, there is a finite family $O''_{y_1},\ldots,O''_{y_K}$ that covers $Y$. The intersection of $O'_{y_1},\ldots,O'_{y_K}$ will contain $x$ and will be a subset of $G$. Therefore, $G$ is indeed open. The argument in \cite{dellacherie:1972}, proof of Theorem I.2(d), is even simpler, but it assumes that $X$ is compact (which is, however, sufficient for the purpose of Lemma \ref{lem:compact}). \end{proof} \fi The idea of the proof of Theorem \ref{thm:main} is to extend Lemma \ref{lem:compact} to the analytic sets using Choquet's capacitability theorem (stated below). 
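As an aside, the recursion (\ref{eq:sup}) is easy to run mechanically over a finite horizon. The sketch below (illustrative, not part of the proof) computes finite-horizon upper game-theoretic probabilities of finite sets of situations, restricting the supremum over $p$ to a finite candidate list containing all forecasts occurring in the target set, which is exact for such sets; applied to the sets (\ref{eq:A}) and (\ref{eq:B}) from the remark at the end of this section, it reproduces the values $1/2$, $1/2$, $1$, $1/2$ used there.

```python
from fractions import Fraction

def upper_prob(E, horizon, forecasts):
    """Finite-horizon upper game-theoretic probability of a finite set E of
    situations (p1, y1, ..., pn, yn), via the backward induction (eq:sup):
        V(x) = sup_p [ (1-p) V(x, p, 0) + p V(x, p, 1) ].
    Restricting the sup to a finite candidate list is exact here, because
    the terminal indicator vanishes at every forecast not occurring in E."""
    def V(x):
        if len(x) == 2 * horizon:
            return Fraction(int(x in E))
        return max((1 - p) * V(x + (p, 0)) + p * V(x + (p, 1))
                   for p in forecasts)
    return V(())

h = Fraction(1, 2)
A = {(0, 0, h, 0), (h, 0, 0, 0)}   # the set (eq:A)
B = {(0, 0, h, 0), (h, 1, 0, 0)}   # the set (eq:B)
cand = [Fraction(0), h, Fraction(1)]

for S in (A, B, A | B, A & B):
    print(upper_prob(S, 2, cand))   # prints 1/2, 1/2, 1, 1/2
```

The exact rational arithmetic of `Fraction` makes the strict violation of strong subadditivity, $1+\frac12>\frac12+\frac12$, visible without rounding concerns.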
Remember that a function $\gamma$ (such as $\UpProb$ or $\UpProbMeas$) mapping the power set of a topological space $X$ (such as $\Pi$) to $[0,\infty)$ is a \emph{capacity} if: \begin{itemize} \item for any subsets $A$ and $B$ of $X$, \begin{equation}\label{eq:condition-1} A\subseteq B \Longrightarrow \gamma(A)\le\gamma(B); \end{equation} \item for any nested increasing sequence $A_1\subseteq A_2\subseteq\cdots$ of arbitrary subsets of $X$, \begin{equation}\label{eq:condition-2} \gamma \left( \cup_{i=1}^{\infty} A_i \right) = \lim_{i\to\infty} \gamma(A_i); \end{equation} \item for any nested decreasing sequence $K_1\supseteq K_2\supseteq\cdots$ of compact sets in $X$, \begin{equation}\label{eq:condition-3} \gamma \left( \cap_{i=1}^{\infty} K_i \right) = \lim_{i\to\infty} \gamma(K_i). \end{equation} \end{itemize} Condition (\ref{eq:condition-3}) is sometimes replaced by a different condition which is equivalent to (\ref{eq:condition-3}) for compact metrizable spaces $X$: cf.\ \cite{kechris:1995}, Definition 30.1. It turns out that both $\UpProb$ and $\UpProbMeas$ are capacities. We start from $\UpProb$. \begin{theorem}\label{thm:game-capacity} The set function $\UpProb$ is a capacity. \end{theorem} It is obvious that $\UpProb$ satisfies condition (\ref{eq:condition-1}). The following two statements establish conditions (\ref{eq:condition-2}) and (\ref{eq:condition-3}). Condition (\ref{eq:condition-3}) is easier to check: it can be extracted from the proof of Lemma \ref{lem:compact}. \begin{lemma}\label{lem:condition-3} If $K_1\supseteq K_2\supseteq\cdots$ is a nested sequence of compact sets in $\Pi$, \begin{equation}\label{eq:condition-3-UpProb} \UpProb \left( \cap_{i=1}^{\infty} K_i \right) = \lim_{i\to\infty} \UpProb(K_i). \end{equation} \end{lemma} \begin{proof} We will use the equality $ \UpProb(E) = \lim_{i\to\infty} \UpProb(E_i) $, in the notation of the proof of Lemma \ref{lem:compact}. 
This equality follows from \begin{equation*} \UpProb(E) = c = \lim_{i\to\infty} W_i(\Lambda) \ge \lim_{i\to\infty} \UpProb(E_i) \end{equation*} (the opposite inequality is obvious). Represent each $K_n$ in the form $K_n=\cap_{i=1}^{\infty}E_i$ where $E_1\supseteq E_2\supseteq\cdots$ and each $E_i$ satisfies (\ref{eq:level-i}); we will write $K_{n,i}$ in place of $E_i$. Without loss of generality we will assume that $ K_{1,i}\supseteq K_{2,i}\supseteq\cdots $ for all $i$. Then the set $K:=\cap_{i=1}^{\infty} K_i$ can be represented as $K=\cap_{i=1}^{\infty}K_{i,i}$, and so (\ref{eq:condition-3-UpProb}) follows from \begin{multline*} \UpProb(K) = \UpProb \left( \cap_{i=1}^{\infty} K_{i,i} \right) = \lim_{i\to\infty} \UpProb(K_{i,i}) = \lim_{n\to\infty} \lim_{i\to\infty} \UpProb(K_{n,i})\\ = \lim_{n\to\infty} \UpProb \left( \cap_{i=1}^{\infty} K_{n,i} \right) = \lim_{n\to\infty} \UpProb(K_n). \qedhere \end{multline*} \end{proof} To check condition (\ref{eq:condition-2}) for $\UpProb$, we will need the game-theoretic version, proved in \cite{GTP27}, of L\'evy's zero-one law (\cite{levy:1937}, Section 41). For each $x\in\Pi^{\diamond}$, define the \emph{conditional upper game-theoretic probability} of $E\subseteq\Pi$ by \begin{multline*} \UpProb(E\mathrel{|} x) :=\\ \inf \left\{ a \mathrel{|} \exists V: V(x)=a \text{ and } \forall\pi\in E\cap\Gamma(x): \limsup_n V(\pi^n)\ge1 \right\}, \end{multline*} where $V$ ranges over the non-negative (super)farthingales. \begin{proposition}[\cite{GTP27}]\label{prop:levy} Let $E\subseteq\Pi$. For almost all $\pi\in E$, \begin{equation}\label{eq:goal} \UpProb(E\mathrel{|}\pi^n) \to 1 \end{equation} as $n\to\infty$. (In other words, there exists a prequential event $N$ such that $\UpProb(N)=0$ and (\ref{eq:goal}) holds for all $\pi\in E\setminus N$.) 
\end{proposition} \begin{proof} It suffices to construct a non-negative farthingale $V$ starting from 1 that tends to $\infty$ on the sequences $\pi\in E$ for which (\ref{eq:goal}) is not true. Without loss of generality we replace ``for which (\ref{eq:goal}) is not true'' by \begin{equation*} \liminf_{n\to\infty} \UpProb(E\mathrel{|}\pi^n) < a, \end{equation*} where $a\in(0,1)$ is a given rational number (see Lemma \ref{lem:subadditivity}). Let $\pi$ be any sequence in $\Pi$; we will define $V(\pi^n)$ by induction for $n=1,2,\ldots$ (intuitively, we will describe a gambling strategy with capital process $V$). Start with 1 monetary unit: $V(\Lambda):=1$. Keep setting $V(\pi^n):=1$, $n=1,2,\ldots$, until $\UpProb(E\mathrel{|}\pi^n) < a$ (if this never happens, $V(\pi^n)$ will be $1$ for all $n$). Let $N_1$ be the first $n$ when this happens: $\UpProb(E\mathrel{|}\pi^{N_1}) < a$ but $\UpProb(E\mathrel{|}\pi^{n}) \ge a$ for all $n<N_1$. Choose a non-negative farthingale $S_1$ starting at $\pi^{N_1}$ from $1$, $S_1(\pi^{N_1}) = 1$, whose upper limit exceeds $1/a$ on all extensions of $\pi^{N_1}$ in $E$. Keep setting $V(\pi^n):=S_1(\pi^n)$, $n=N_1,N_1+1,\ldots$, until $S_1(\pi^n)$ reaches a value $s_1>1/a$. After that keep setting $V(\pi^n):=V(\pi^{n-1})$ until $\UpProb(E\mathrel{|}\pi^n) < a$. Let $N_2$ be the first $n$ when this happens. Choose a non-negative farthingale $S_2$ starting at $\pi^{N_2}$ from $s_1$, $S_2(\pi^{N_2}) = s_1$, whose upper limit exceeds $s_1/a$ on all extensions of $\pi^{N_2}$ in $E$. Keep setting $V(\pi^n):=S_2(\pi^n)$, $n=N_2,N_2+1,\ldots$, until $S_2(\pi^n)$ reaches a value $s_2>s_1(1/a)>(1/a)^2$. After that keep setting $V(\pi^n):=V(\pi^{n-1})$ until $\UpProb(E\mathrel{|}\pi^n) < a$. Let $N_3$ be the first $n$ when this happens. Choose a non-negative farthingale $S_3$ starting at $\pi^{N_3}$ from $s_2$ whose upper limit exceeds $s_2/a$ on all extensions of $\pi^{N_3}$ in $E$. 
Keep setting $V(\pi^n):=S_3(\pi^n)$, $n=N_3,N_3+1,\ldots$, until $S_3$ reaches a value $s_3>s_2(1/a)>(1/a)^3$. And so on. \end{proof} \begin{lemma} If $A_1\subseteq A_2\subseteq\cdots\subseteq\Pi$ is a nested sequence of prequential events, \begin{equation}\label{eq:condition-2-UpProb} \UpProb \left( \cup_{i=1}^{\infty} A_i \right) = \lim_{i\to\infty} \UpProb(A_i). \end{equation} \end{lemma} \begin{proof} Let $A_1,A_2,\ldots$ be a nested increasing sequence of prequential events. The non-trivial inequality in (\ref{eq:condition-2-UpProb}) is $\le$. For each $A_i$ the process \begin{equation*} S_i(x) := \UpProb(A_i\mathrel{|} x) \end{equation*} is a non-negative superfarthingale (see Lemma \ref{lem:UpProb-superfarthingale} below). By Proposition \ref{prop:levy}, $\limsup_nS_i(\pi^n)\ge1$ for almost all $\pi\in A_i$. The sequence $S_i$ is increasing, $S_1\le S_2\le\cdots$, so the limit $S:=\lim_{i\to\infty}S_i=\sup_{i}S_i$ exists and is a non-negative superfarthingale such that $ S(\Lambda) = \lim_{i\to\infty} \UpProb(A_i) $ and $\limsup_nS(\pi^n)\ge1$ for almost all $\pi\in\cup_iA_i$ (by Lemma \ref{lem:subadditivity}). We can get rid of ``almost'' by adding to $S$ a non-negative farthingale $V$ that starts at $V(\Lambda)<\epsilon$, for an arbitrarily small $\epsilon>0$, and satisfies $\limsup_n V(\pi^n)\ge1$ for all $\pi\in\cup_iA_i$ violating $\limsup_nS(\pi^n)\ge1$. \end{proof} \begin{lemma}\label{lem:UpProb-superfarthingale} For any prequential event $E$, the function $x\in\Pi^{\diamond}\mapsto\UpProb(E\mathrel{|} x)$ is a superfarthingale. \end{lemma} \begin{proof} Suppose there are $x\in\Pi^{\diamond}$ and $p\in[0,1]$ such that \begin{equation*} \UpProb(E\mathrel{|} x) < (1-p) \UpProb(E\mathrel{|} x,p,0) + p \UpProb(E\mathrel{|} x,p,1). 
\end{equation*} Then there exists a non-negative farthingale $V$ with $\limsup_n V(\pi^n)\ge1$ for all $\pi\in E\cap\Gamma(x)$ that satisfies \begin{equation*} V(x) < (1-p) \UpProb(E\mathrel{|} x,p,0) + p \UpProb(E\mathrel{|} x,p,1) \end{equation*} and, therefore, \begin{equation*} (1-p) V(x,p,0) + p V(x,p,1) < (1-p) \UpProb(E\mathrel{|} x,p,0) + p \UpProb(E\mathrel{|} x,p,1). \end{equation*} The last inequality implies that there exists $j\in\{0,1\}$ such that $V(x,p,j)<\UpProb(E\mathrel{|} x,p,j)$, which is impossible. \end{proof} This completes the proof of Theorem \ref{thm:game-capacity}. Let us now check that measure-theoretic probability is also a capacity. \begin{lemma}\label{lem:meas-capacity} The set function $\UpProbMeas$ is a capacity. \end{lemma} \begin{proof} Property (\ref{eq:condition-1}) is obvious for $\UpProbMeas$. Property (\ref{eq:condition-3}) follows from Lemmas \ref{lem:compact} and \ref{lem:condition-3}. \ifFULL\begingroup\color{blue} We will also give an independent proof. Let $K_1\supseteq K_2\supseteq\cdots$ be a decreasing sequence of compact sets, and let $\epsilon>0$. For each $i=1,2,\ldots$ choose a forecasting system $\phi_i$ satisfying \begin{equation*} \Prob^{\phi_i}(K_i) \ge \UpProbMeas(K_i) - \epsilon. \end{equation*} Since the set of all forecasting systems is compact in the product topology, we can choose a subsequence $\phi_{i_k}$, $i_k\uparrow\infty$, that converges in that topology to a forecasting system $\phi$. The probability measures $\Prob_{\phi_{i_k}}$ then converge to $\Prob_{\phi}$ in the weak topology. By the standard properties of weak convergence (see, e.g., \cite{kechris:1995}, Theorem 17.20), for each $K_i$ we have \begin{multline*} \Prob^{\phi}(K_i) \ge \limsup_{k\to\infty} \Prob^{\phi_{i_k}}(K_i) \ge \limsup_{k\to\infty} \Prob^{\phi_{i_k}}(K_{i_k})\\ \ge \limsup_{k\to\infty} \UpProbMeas(K_{i_k}) - \epsilon = \lim_{i\to\infty} \UpProbMeas(K_i) - \epsilon. 
\end{multline*} [Some justification for the first inequality has to be spelled out: e.g., $\Prob^{\phi}(K_i)$ is $\Prob_{\phi}(K_i^{\phi})$, and the presence of $\phi$ in two places complicates things.] This implies \begin{equation*} \Prob^{\phi} \left( \cap_{i=1}^{\infty} K_i \right) \ge \lim_{i\to\infty} \UpProbMeas(K_i) - \epsilon \end{equation*} and, therefore, \begin{equation*} \UpProbMeas \left( \cap_{i=1}^{\infty} K_i \right) \ge \lim_{i\to\infty} \UpProbMeas(K_i) - \epsilon. \end{equation*} Since this is true for any $\epsilon>0$, the proof is complete. \endgroup\fi Let us now check the remaining property (\ref{eq:condition-2}), with $\UpProbMeas$ as $\gamma$. Suppose there exists an increasing sequence $A_1\subseteq A_2\subseteq\cdots\subseteq\Pi$ of prequential events such that \begin{equation*} \UpProbMeas \left( \cup_{i=1}^{\infty} A_i \right) > \lim_{i\to\infty} \UpProbMeas(A_i). \end{equation*} Let $\phi$ be a forecasting system satisfying \begin{equation*} \Prob^{\phi} \left( \cup_{i=1}^{\infty} A_i \right) > \lim_{i\to\infty} \UpProbMeas(A_i). \end{equation*} Then $\phi$ will satisfy $ \Prob^{\phi} \left( \cup_{i=1}^{\infty} A_i \right) > \lim_{i\to\infty} \Prob^{\phi}(A_i) $, which is equivalent to the obviously wrong $ \Prob_{\phi} \left( \cup_{i=1}^{\infty} A_i^{\phi} \right) > \lim_{i\to\infty} \Prob_{\phi}(A_i^{\phi}) $. \ifFULL\begingroup\color{blue} Let us check that for any forecasting system $\phi$ the set function $\Prob^{\phi}$ is a capacity. It is well known that $\Prob_{\phi}$ is a capacity (see, e.g., \cite{kechris:1995}, Exercise 30.3).
The property $ \Prob^{\phi} \left( \cap_{i=1}^{\infty} K_i \right) = \lim_{i\to\infty} \Prob^{\phi}(K_i) $, i.e., $ \Prob_{\phi} \left( \cap_{i=1}^{\infty} K_i^{\phi} \right) = \lim_{i\to\infty} \Prob_{\phi}(K_i^{\phi}) $ for compact $K_1\supseteq K_2\supseteq\cdots$ follows from the corresponding property for $\Prob_{\phi}$ and the fact that each $K_i^{\phi}$ is a compact set as the preimage $\{\omega\in\Omega\mathrel{|}\omega^{\phi}\in K_i\}$ in the compact space $\Omega$ of the closed set $K_i$ under the continuous transformation $\omega\mapsto\omega^{\phi}$. The other two defining properties of capacities are even easier to check for $\Prob^{\phi}$. \endgroup\fi \end{proof} In combination with Choquet's capacitability theorem, Theorem \ref{thm:game-capacity} and Lemma \ref{lem:meas-capacity} allow us to finish the proof of Theorem \ref{thm:main}. \begin{ChoquetTheorem}[\cite{choquet:1954}] If $X$ is a compact metrizable space, $\gamma$ is a capacity on $X$, and $E\subseteq X$ is an analytic set, \begin{equation*} \gamma(E) = \sup \left\{ \gamma(K) \mathrel{|} K \text{ is compact}, K\subseteq E \right\}. \end{equation*} \end{ChoquetTheorem} For a proof of Choquet's theorem, see, e.g., \cite{kechris:1995}, Theorem 30.13. \begin{proof}[Proof of Theorem \ref{thm:main}] Combining Choquet's capacitability theorem (applied to the compact metrizable space $\Pi$), Lemma \ref{lem:compact}, Theorem \ref{thm:game-capacity}, and Lemma \ref{lem:meas-capacity}, we obtain \begin{equation*} \UpProb(E) = \sup_{K\subseteq E}\UpProb(K) = \sup_{K\subseteq E}\UpProbMeas(K) = \UpProbMeas(E), \end{equation*} $K$ ranging over the compact sets. \end{proof} \begin{remark} The fact that game-theoretic probability and measure-theoretic probability are capacities has allowed us to prove their coincidence on the analytic sets, and it might be useful for other purposes as well. 
In general, neither of these capacities is \emph{strongly subadditive}, in the sense of satisfying \begin{equation*} \gamma(A\cup B) + \gamma(A\cap B) \le \gamma(A) + \gamma(B) \end{equation*} for all prequential events $A$ and $B$. To demonstrate this it suffices, in view of Theorem \ref{thm:main}, to find analytic sets $A$ and $B$ that violate \begin{equation}\label{eq:strictly-subadditive} \UpProb(A\cup B) + \UpProb(A\cap B) \le \UpProb(A) + \UpProb(B). \end{equation} We can define $\UpProb(E)$ for subsets of $\Pi^n$ by (\ref{eq:upper-game}) with $\limsup_n$ omitted. Here is an example of subsets $A$ and $B$ of $\Pi^2$ for which (\ref{eq:strictly-subadditive}) is violated: \begin{align} A &= \left\{ \left( 0, 0, \frac12, 0 \right), \left( \frac12, 0, 0, 0 \right) \right\},\label{eq:A}\\ B &= \left\{ \left( 0, 0, \frac12, 0 \right), \left( \frac12, 1, 0, 0 \right) \right\}. \label{eq:B} \end{align} For these subsets we have \begin{equation*} \UpProb(A\cup B) + \UpProb(A\cap B) = 1 + \frac12 > \frac12 + \frac12 = \UpProb(A) + \UpProb(B). \end{equation*} To obtain an example of subsets $A$ and $B$ of the full prequential space $\Pi$ for which (\ref{eq:strictly-subadditive}) is violated, it suffices to add $00\dots$ at the end of each element of the sets $A$ and $B$ defined by (\ref{eq:A}) and (\ref{eq:B}). \end{remark} \section{Application to the limit theorems of probability theory} \label{sec:application} The \emph{lower game-theoretic probability} of a prequential event $E$ is defined to be $1-\UpProb(\Pi\setminus E)$. Similarly, the \emph{lower measure-theoretic probability} of a prequential event $E$ is defined to be $1-\UpProbMeas(\Pi\setminus E)$. The game-theoretic strong law of large numbers (see, e.g., \cite{shafer/vovk:2001}, Section 3.3) implies that (\ref{eq:calibration}) holds with lower game-theoretic probability one.
The standard martingale strong law of large numbers implies that (\ref{eq:calibration}) holds with lower measure-theoretic probability one. Our Theorem \ref{thm:main} establishes the equivalence between these two statements. Similarly, Theorem \ref{thm:main} establishes the equivalence between the game-theoretic law of the iterated logarithm for binary outcomes (a special case of Theorems 5.1 and 5.2 in \cite{shafer/vovk:2001}) and the martingale law of the iterated logarithm for binary outcomes in measure-theoretic probability theory. The transition from game-theoretic to measure-theoretic laws of probability, corresponding to the inequality $\ge$ in Theorem \ref{thm:main}, depends only on Ville's inequality, and so can easily be done for a wide variety of prediction protocols (see, e.g., \cite{shafer/vovk:2001}, Section 8.1). The transition in the opposite direction, corresponding to the inequality $\le$, is more difficult, and its feasibility has been demonstrated only in a very limited number of cases. In an important respect Theorem \ref{thm:main} is only an existence result. For example, in combination with the standard martingale strong law of large numbers in measure-theoretic probability theory it implies the game-theoretic strong law of large numbers for binary outcomes, but the resulting farthingale is very complex. The corresponding strategy for the gambler (or Skeptic, in the terminology of \cite{shafer/vovk:2001}) is also very complex. This contrasts with the simple and efficient gambling strategies designed in game-theoretic probability: see, e.g., \cite{shafer/vovk:2001}, Section 3.2, and \cite{kumon/takemura:2008}. It would be interesting to design efficient general procedures producing simple gambling strategies witnessing that $\UpProb(E)=0$ for natural classes of prequential events satisfying $\UpProbMeas(E)=0$.
For example, such a procedure might be applicable to all prequential events satisfying $\UpProbMeas(E)=0$ and situated at a given low level of the Borel hierarchy. This would allow an automatic procedure of transition from measure-theoretic to constructive game-theoretic laws of probability: e.g., the set of sequences (\ref{eq:sequence}) violating the strong law of large numbers (\ref{eq:calibration}) is in the class $\Sigma^0_3$ of the Borel hierarchy, and the set of sequences violating the law of the iterated logarithm is in $\Delta^0_4$. \ifFULL\begingroup\color{blue} The levels of the Borel hierarchy are: \begin{itemize} \item $\Sigma^0_3$ for the strong law of large numbers, since the event \begin{equation*} \lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^n (y_i-p_i) \ne 0 \end{equation*} is the complement of the $\Pi^0_3$ event \begin{equation*} \forall\epsilon>0 \exists N \forall n\ge N: \frac{1}{n} \left| \sum_{i=1}^n (y_i-p_i) \right| \le \epsilon. \end{equation*} \item $\Delta^0_4$ for the law of the iterated logarithm, since the event \begin{equation}\label{eq:LIL} \lim_{n\to\infty} A_n = \infty \text{ and } \limsup_{n\to\infty} \frac{1}{\sqrt{A_n\ln\ln A_n}} \sum_{i=1}^n (y_i-p_i) \ne 1, \end{equation} $A_n$ standing for $\sum_{i=1}^n p_i(1-p_i)$, is the intersection of the $\Pi^0_3$ event \begin{equation*} \forall C \exists N \forall n\ge N: A_n \ge C, \end{equation*} with the union of the $\Sigma^0_3$ event \begin{equation*} \exists \epsilon>0 \forall N \exists n\ge N: \frac{1}{\sqrt{A_n\ln\ln A_n}} \sum_{i=1}^n (y_i-p_i) > 1 + \epsilon, \end{equation*} and the $\Sigma^0_2$ event \begin{equation*} \exists \epsilon>0 \exists N \forall n\ge N: \frac{1}{\sqrt{A_n\ln\ln A_n}} \sum_{i=1}^n (y_i-p_i) \le 1 - \epsilon \end{equation*} (these last two events corresponding to the possibilities $>$ and $<$, respectively, for the $\ne$ in (\ref{eq:LIL})). \end{itemize} The central limit theorem requires a procedure of transition from $\UpProbMeas(E)<\epsilon$ to $\UpProb(E)<\epsilon$.
\endgroup\fi In this article we have only considered the case where the outcomes $y_n$ are restricted to the binary outcome space $Y:=\{0,1\}$. It is easy to extend our results to the case where $Y$ is any finite set and Forecaster outputs probability measures on $Y$, interpreted as his probability forecasts for $y_n$. It remains an open problem whether it is possible to modify our definitions in a natural way so that the equivalence between game-theoretic and measure-theoretic probability extends to wide classes of outcome spaces and prequential events; this would require imposing suitable measurability or topological conditions on the farthingales (or superfarthingales) used in the definition (\ref{eq:upper-game}) of game-theoretic probability. \ifFULL\begingroup\color{blue} \section{Directions of further research} The natural next steps would be: \begin{itemize} \item Extending some version of Theorem \ref{thm:main} to compact metrizable $Y$, still assuming that Forecaster outputs a probability forecast for $y_n$. The theorem itself is likely to become false since measure-theoretic probability is based on the notion of measurability whereas measurability does not play such a fundamental role in game-theoretic probability (cf.\ the discussion in \cite{shafer/vovk:2001}, pp.~168--169). For the theorem to continue to hold, the definition of game-theoretic probability might need to be modified by imposing suitable measurability or topological conditions on the permitted (super)farthingales. \item A further extension would involve forecast spaces different from the set of probability measures on $Y$; see \cite{shafer/vovk:2001} for numerous examples of such more general forecasting protocols. \item Extension to non-compact outcome and forecast spaces. This would cover, e.g., the unbounded forecasting protocol used to state Kolmogorov's strong law of large numbers (\cite{shafer/vovk:2001}, Chapter 4).
Since the assumption of compactness has been so important in this article, it might be more promising to try and construct a counterexample to the analogue of Theorem \ref{thm:main} for the countable outcome space $\bbbn$ (this space is not compact but does not present any measurability problems). \end{itemize} The main question in this direction is: \begin{quote} What is the class $\FFF$ of prequential events $E$ for which $\UpProb(E)=\UpProbMeas(E)$? \end{quote} Two capacities on $\Pi$ (for concreteness) are said to be \emph{equivalent} if they coincide on the compact sets. Any two equivalent capacities coincide on the universally capacitable sets (and so, by Choquet's theorem, on the analytic sets). Let us say that a set $E$ is \emph{universally$^*$ capacitable} if its capacity is equal to the infimum of the capacities of the Borel sets that contain $E$. Every universally capacitable set is universally$^*$ capacitable (see, e.g., \cite{kechris:1995}, Exercise 30.17, or, for details, \cite{srivastava:1998}, Proposition 4.10.10). It can be shown that the smallest class of sets on which any two equivalent capacities agree coincides with the class of universally$^*$ capacitable sets. (This observation and the term ``universally$^*$ capacitable'' are due to Alexander Kechris.) So $\FFF$ contains all universally$^*$ capacitable sets. Of course, $\FFF$ can be strictly wider than the class of universally$^*$ capacitable sets. It should be easy to extend Theorem \ref{thm:main} to bounded functions, i.e., to prove that $\UpExpect(F)=\UpExpectMeas(F)$ for all bounded analytic functions $F:\Pi\to\bbbr$ and for natural definitions of upper game-expectation $\UpExpect$ (see \cite{shafer/vovk:2001}, p.~12) and upper measure-expectation $\UpExpectMeas$. For the functional version of the Choquet capacitability theorem see, e.g., \cite{dellacherie:1972}, Theorem II.5.
\fi \ifJOURNAL \section*{Acknowledgements} \fi \ifnotJOURNAL \subsection*{Acknowledgements} \fi This article has greatly benefitted from conversations with Philip Dawid, Glenn Shafer, and Alexander Shen. Its main result answers, within the prequential framework, a question that has been asked independently by several people, including Shafer and, more recently, Shen. I am grateful to Alexander Kechris for his advice about capacities. \ifnotJOURNAL This work was supported in part by EPSRC (grant EP/F002998/1). \fi
\subsection*{Acknowledgments}We have profited from helpful discussions with M. Mantoiu, C.-A. Pillet, L. Rey-Bellet, and J. Rougemont. This work was partially supported by the Fonds National Suisse. \mysection{The model} \label{sec:model} We will study the model of a (small) classical $N$-particle Hamiltonian system coupled to $M$ stochastic heat baths proposed in \cite{EPR}. The small system without the heat baths is governed by a Hamiltonian \equ{ H_{\!S} \in \cinf(\R^{2N})\;. } (We stay here with $d=1$ dimensional position space for each particle to simplify notation.) The heat baths are modeled by classical field theories associated to the wave equation. The fields will be called $\phi_i$ and their conjugate momenta $\pi_i$, where the index $i$ ranges from $1$ to $M$. The Hamiltonian for one heat bath is given by \equ{ H_{\!B}(\pi, \phi) = \frac12\int_\R \bigl(|\d\phi|^2 + |\pi|^2\bigr)\,dx\;. } The couplings allowed for the model are linear in the field variables. The total Hamiltonian for our model is then given by \equ[e:totHam]{ H(p,q,\pi,\phi) = \sum_{i=1}^M \Bigl(H_{\!B}(\pi_i,\phi_i) + F_i(p,q)\int_\R \d\phi_i(x) \rho_i(x)\, dx\Bigr) + H_{\!S}(p,q)\;. } We assume the initial conditions describe the heat baths at equilibrium at inverse temperatures $\beta_i$, \ie they are distributed in a sense according to the measure with ``weight'' \equ{ e^{-\beta_i H_{\!B}(\pi_i,\phi_i)}\;. } The paper \cite{EPR} explains in detail how, and under which conditions on the coupling functions $\rho_i$, one can reduce the resulting ``big'' system to a ``small'' system, where the heat baths are described by a finite number of variables. 
The price to pay for that is that we are now dealing with the following system of stochastic differential equations: \begin{alignat}{2}\label{e:sysStoch} dq_j &= \d_{p_j}H_{\!S}\,dt - \sum_{i=1}^M \bigl(\d_{p_j}F_i\bigr) r_i\,dt\;, &\qquad j&=1,\ldots,N\;,\notag\\ dp_j &= -\d_{q_j}H_{\!S}\,dt + \sum_{i=1}^M \bigl(\d_{q_j}F_i\bigr) r_i\,dt\;, &&\\ dr_i &= -\gamma_i r_i\,dt + \lambda_i^2\gamma_i F_i(p,q)\,dt - \lambda_i\sqrt{2\gamma_i T_i}\,dw_i(t)\;, &\qquad i&=1,\ldots,M\notag\;, \end{alignat} where the $w_i$ are independent Wiener processes. The various constants appearing in \eref{e:sysStoch} have the following meaning. $T_i$ is the temperature of the $i^{\text{th}}$ heat bath, $\lambda_i$ is the strength of the coupling between that heat bath and the small system, and $1/\gamma_i$ is the relaxation time of the $i^{\text{th}}$ heat bath. The value of $\gamma_i$ depends on the choice of the coupling function $\rho_i$. If we wanted to be more general, we would have to introduce for each bath a family of auxiliary variables $r_{i,m}$ as is done in \cite{EPR}. This would only cause notational complications and would not change our argument. If we consider a generic $n$-dimensional system of stochastic differential equations with additive noise of the form \equ[e:genStoch]{ dx_i(t) = b_i(x(t))\,dt + \sum_{j=1}^n \sigma_{i j}\,dw_j(t)\;, } we can associate with it the second-order differential operator $\CL$ formally defined by \equ[e:genProc]{ \CL \equiv \frac12 \sum_{i,j=1}^n \d_i (\sigma\sigma^T)_{i j}\d_j + \sum_{i=1}^n b_i(x)\,\d_i\;. } It is a classical result that if the solution of such a system of stochastic differential equations exists, the probability density of the solution satisfies the (Fokker--Planck) partial differential equation, with $\CL^T$ the formal adjoint of $\CL$, \equ{ \d_t p(x,t) = \bigl(\CL^T p\bigr)(x,t)\;.
} In our case, the differential operator $\CL$ is given by \equ[e:genChain]{ \CL = \sum_{i=1}^M \lambda_i^2\gamma_i T_i \d_{r_i}^2 - \sum_{i=1}^M \gamma_i\bigl(r_i - \lambda_i^2 F_i(p,q)\bigr)\d_{r_i} + X^{H_{\!S}} - \sum_{i=1}^M r_i X^{F_i}\;, } where the symbol $X^F$ denotes the Hamiltonian vector field associated to the function $F$. It is convenient to introduce the ``effective energy'' given by \equ[e:defG]{ G(p,q,r) = H_{\!S}(p,q) + \sum_{i=1}^M \Bigl(\frac{r_i^2}{2\lambda_i^2} - F_i(p,q) r_i\Bigr)\;. } At this point, we make the following assumption on the asymptotic behavior of $G$. \begin{assum}{0} There exist constants $\tilde d_i, C > 0$ and $\alpha > 0$, as well as constants $\tilde c_i < 2/(M\lambda_i^2)$ such that \sublabels \equs[1,e:ass0]{ H_{\!S}(p,q) &\ge C(1 + \|p\|^\alpha + \|q\|^\alpha)\;, \sublabel{e:ass01}\\[1mm] F_i^2(p,q) &\le \tilde c_i H_{\!S}(p,q) + \tilde d_i\;. \sublabel{e:ass02} } \end{assum} \begin{remark} This assumption essentially means that the effective energy $G$ grows at infinity at least like $1+\|r\|^2 + \|p\|^\alpha + \|q\|^\alpha$. This implies the stability of the system, as follows easily from the inequality \equ{ |r_i F_i(p,q)| \le s^2 r_i^2 + \frac{F_i^2(p,q)}{s^2}\;, } which holds for every $s > 0$. In particular, this implies that $\exp(-\beta G)$ is integrable for every $\beta > 0$. \end{remark} We also define \equ{ W \equiv \sum_{i=1}^M \gamma_i T_i\;, } which is, in a sense that will become clear in a moment, the maximal power the heat baths can put into the chain. We have the following result. \begin{proposition} Assume {\bf A0} holds. Then the solution $\xi(t;x_0,w)$ of \eref{e:sysStoch} exists and is continuous for all $t>0$ with probability $1$. Moreover, the mean energy of the system satisfies for all values of $t$ and $x_0$ the estimate \equ[e:GrowG]{ \Exp{G(\xi(t;x_0,w))} - G(x_0) \le W t\;, } where $\Exp{\cdot}$ denotes the expectation with respect to the $M$-dimensional Wiener process $w$.
\end{proposition} \begin{remarque} The bound \eref{e:GrowG} allows the energy to grow forever, which would cause the system to ``explode.'' But this is not the case for the systems we consider in this paper. Indeed, we will prove that the process possesses a unique stationary state. This implies, among other features, that the mean time needed to reach any compact region is finite, and so the energy cannot grow forever. \end{remarque} \begin{proof} A classical result (see \eg \cite[Thm~4.1]{Ha}) states the following. Assume that the vector field $b$ of \eref{e:genStoch} is locally Lipschitz and that there exists a confining ${\cal C}^2$ function $G : \R^n \to \R$ and a constant $k$ such that \equ{ (\CL G)(x) \le k \qquad\text{for all}\qquad x\in \R^n\;. } Then there exists a unique stochastic process $\xi(t)$ solving \eref{e:genStoch}. The process $\xi$ is regular (\ie it does not blow up in a finite time) and continuous for all $t>0$. It obeys the statistics of a Markovian diffusion process with generator $\CL$. Moreover, we have the estimate \equ{ \Exp{G(\xi(t;x_0,w))} - G(x_0) \le k t\;. } This result can be applied to our case, if we take for $G$ the effective energy defined in \eref{e:defG}. Indeed, an explicit computation yields \equ[e:dissip]{ \CL G = W - \sum_{i=1}^M \frac{\gamma_i}{\lambda_i^2} \bigl(r_i - \lambda_i^2 F_i(p,q)\bigr)^2\;. } Moreover, $G$ is confining by {\bf A0}. This proves the assertion. \end{proof} \subsection{Definition and simple properties of the semi-group} In this paper, we will mainly be interested in studying under which assumptions on the chain Hamiltonian $H_{\!S}$ it is possible to prove the existence of a \emph{unique invariant measure} for the stochastic process $\xi(t;x_0,w)$ solving \eref{e:sysStoch}. Throughout, we will use the notation $$ \CX \,=\,\R^{2N+M} $$ for the extended phase space $(p,q,r)$.
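The cancellation behind \eref{e:dissip} is easy to check symbolically. The following sketch is our own illustration and plays no role in the argument: it verifies the identity for a single bath ($M=1$), keeping $H_{\!S}$ and $F$ as arbitrary smooth functions. It requires the sympy package, and all variable names are ours.

```python
import sympy as sp

# phase-space variables and bath constants (names are ours)
p, q, r = sp.symbols('p q r', real=True)
lam, gam, T = sp.symbols('lambda gamma T', positive=True)
H = sp.Function('H')(p, q)   # the Hamiltonian H_S, kept fully symbolic
F = sp.Function('F')(p, q)   # the coupling function F_1

def X(Ham, f):
    """Hamiltonian vector field of Ham applied to f."""
    return sp.diff(Ham, p)*sp.diff(f, q) - sp.diff(Ham, q)*sp.diff(f, p)

def L(f):
    """The generator (e:genChain) for a single bath, M = 1."""
    return (lam**2*gam*T*sp.diff(f, r, 2)
            - gam*(r - lam**2*F)*sp.diff(f, r)
            + X(H, f) - r*X(F, f))

G = H + r**2/(2*lam**2) - F*r        # effective energy (e:defG)
W = gam*T                            # maximal injected power

# (e:dissip): L G = W - (gamma/lambda^2) (r - lambda^2 F)^2
residual = sp.expand(L(G) - W + gam/lam**2*(r - lam**2*F)**2)
print(residual)   # prints 0
```

The Hamiltonian part of $\CL$ annihilates $G$ exactly, so only the constant injection term $W$ and the quadratic dissipation term survive.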
This stochastic process defines a semi-group ${\cal T}^t$ on $\cOinf[\CX]$ by \equ[e:defTt]{ {\cal T}^{t} f(x_0) = \Exp{f(\xi(t;x_0,w))}\;. } This semi-group satisfies the following \begin{proposition} \label{prop:Tt} Assume {\bf A0} holds. Then ${\cal T}^t$ extends to a strongly continuous, quasi-bounded semi-group of positivity preserving operators on $\Ltwo(\CX)$. Its generator $L$ is the closure of the operator $\CL$ with domain $\cOinf[\CX]$. The adjoint $L^*$ is the closure of the formal adjoint $\CL^T$ with domain $\cOinf[\CX]$. \end{proposition} \begin{proof} The proof will be given in Appendix~\ref{App:ops}. \end{proof} This in turn defines a dual semi-group $({\cal T}^t)^*$ by \equ{ \int \bigl({\cal T}^t f\bigr)(x)\,\nu(dx) = \int f(x)\,\bigl(({\cal T}^t)^* \nu\bigr)(dx)\;. } The generator of $({\cal T}^t)^*$ is given by the adjoint of $\CL$ in $\Ltwo$ that will be denoted $\CL^T$. It is possible to check that if the heat baths are all at the same temperature $T = 1/\beta$, we have \equ{ \CL^T \mu_0 = 0\;, \qquad\text{where}\qquad \mu_0(p,q,r) = e^{-\beta G(p,q,r)}\;. } Thus, the generalized Gibbs measure \equ{ d\mu_0 = e^{-\beta G(p,q,r)}\,dp\,dq\,dr = \mu_0(p,q,r)\,dp\,dq\,dr\;, } is an invariant measure for the Markov process described by \eref{e:sysStoch}. This confirms our definition of $G$ as the effective energy of our system. We want to consider the more interesting case where the temperatures of the heat baths are not the same. The idea is to work in a Hilbert space that is weighted with a Gibbs measure for some reference temperature. We will therefore study an extension ${\cal T}_0^t$ of ${\cal T}^t$ acting on an auxiliary weighted Hilbert space $\CH_0$, given by \equ{ \CH_0 \equiv \Ltwo\bigl(\CX, Z_0^{-1}e^{-2\beta_0 G(p,q,r)}\,dp\,dq\,dr\bigr)\;, } where $Z_0$ is a normalization constant and $\beta_0$ is a ``reference'' inverse temperature that we choose such that \equ[e:condTRefGen]{ 1/\beta_0 \equiv T_0 > \max\{T_i\;|\; i=1,\ldots,M\}\;. 
} We have the following \begin{proposition} \label{prop:Tt0} Assume {\bf A0} holds. Then the semi-group ${\cal T}^t$ given by \eref{e:defTt} extends to a strongly continuous quasi-bounded semi-group ${\cal T}_0^t$ on $\CH_0$. Moreover, ${\cal T}_0^t 1 = 1$ and ${\cal T}_0^t$ is positivity preserving, \ie \equ{ {\cal T}_0^t f \ge 0 \qquad\text{if}\qquad f \ge 0\;. } Let $L_0$ be the generator of ${\cal T}_0^t$. Then $L_0$ coincides on $\cOinf[\CX]$ with $\CL$ of \eref{e:genChain} and $\cOinf[\CX]$ is a core for both $L_0$ and $L_0^*$. \end{proposition} \begin{proof} The statement can be proven by simply retracing the proof of Lemma~3.1 in \cite{EPR}. There are only three points that have to be checked. We define the vector fields $b$ and $b_0$ respectively by \equs{ b &= - \sum_{i=1}^M \gamma_i\bigl(r_i - \lambda_i^2 F_i(p,q)\bigr)\d_{r_i} + X^{H_{\!S}} - \sum_{i=1}^M r_i X^{F_i}\;, \\ b_0 &= 2\beta_0 \sum_{i=1}^M \lambda_i^2 \gamma_i T_i \bigl(\d_{r_i} G\bigr) \d_{r_i} = 2\beta_0 \sum_{i=1}^M \gamma_i T_i \bigl(r_i - \lambda_i^2 F_i(p,q)\bigr) \d_{r_i}\;. } In order to make the proof of \cite{EPR} work, we have to check that \equ{ \|\!\div b\|_\infty < \infty\;,\quad \|\!\div b_0\|_\infty < \infty\;,\quad \sup_{x\in \CX} \bigl(b + {\textstyle\frac{1}{2}}b_0 \bigr) G(x) < \infty\;, } where $b$ and $b_0$ are considered as first-order differential operators in the last inequality. The divergence of any Hamiltonian vector field vanishes, and so we have \equ{ \|\!\div b\|_\infty = \sum_{i=1}^M \gamma_i < \infty\;. } The term involving the divergence of $b_0$ can easily be computed to give \equ{ \|\!\div b_0\|_\infty = 2\beta_0 \sum_{i=1}^M \gamma_i T_i < \infty\;. } In order to check the last inequality, we compute the expression \equ{ \bigl(b + {\textstyle\frac{1}{2}}b_0 \bigr) G(p,q,r) = \sum_{i=1}^M \frac{\gamma_i}{\lambda_i^2} (\beta_0 T_i - 1) \bigl( r_i - \lambda_i^2 F_i(p,q)\bigr)^2\;.
} We see that condition \eref{e:condTRefGen} on $\beta_0$ obviously implies $\beta_0 T_i - 1 < 0$, and so the desired inequality holds. The domains of $L_0$ and $L_0^*$ are controlled by the techniques of Appendix~\ref{App:ops}. \end{proof} We are mainly interested in the case $M=2$. The Hamiltonian $H_{\!S}$ will describe a chain of $N+1$ strongly anharmonic oscillators coupled to two heat baths at the first and the last particle. In the case in which the Hamiltonian $H_{\!S}$ can be written as a quadratic function plus some bounded terms, the existence and uniqueness of a stationary state for every temperature difference have been proved in \cite{EPR,EPR2}. We will extend this result to the case where the potentials grow faster than quadratically at infinity. Besides some weak conditions on the derivatives of the one- and two-body potentials, we will only require that they grow algebraically and that the two-body potentials grow asymptotically faster than the one-body potentials, \ie at large separation the interaction energy between neighboring particles grows \emph{faster} than the one-particle energy. \subsection{Notations} \label{sec:defs} Throughout, the domain of an operator $A$ will be denoted by $\CD(A)$. Unless otherwise specified, the domain of any operator will always be the closure, in the graph norm, of its restriction to $\cOinf$. For example, if we write $[A,B]$, we mean in fact $\overline{(AB - BA)\upharpoonright \cOinf}$, so that the domain of $[A,B]$ can be larger than that of $A$ or $B$ separately. \mysection{Setting and results} \label{sec:settRes} In order to set up our model, we need to be able to describe precisely the growth rates of the potentials at infinity. This will be achieved with the following function spaces. \begin{definition} \label{def:func} Choose $\alpha \in \R$.
We call ${\cal F}_\alpha$ the set of all $\cinf$ functions from $\R^n$ to $\R$ such that for every multi-index $k$ there exists a constant $C_k\;$ for which \equ{ \| D^k f(x)\| \le C_k (1 + \|x\|^2)^{\alpha/2}\;,\quad\text{for all}\quad x \in \R^n\;. } \end{definition} \begin{definition} \label{def:func2} Choose $\alpha \in \R$ and $i \in \N \cup \{\infty\}$. We call ${\cal F}_\alpha^i$ the set of all $\cinf$ functions from $\R^n$ to $\R$ such that for every multi-index $k$ with $|k| \le i$, we have $D^k f(x) \in \CF_{\alpha - |k|}$. \end{definition} \begin{remarque} For any $\alpha \in \R$, the function \equs[e:defPoly]{ P^\alpha:\R^n &\to \R \\ x\;\, &\mapsto (1 + \|x\|^2)^{\alpha/2} } belongs to $\CF_\alpha^\infty$. Moreover, any polynomial of degree $n$ belongs to $\CF_n^\infty$. \end{remarque} \subsection{The chain} \begin{MHfig}{ChainFig}[-2mm] \vspace{-5mm} \caption{Chain of oscillators} \label{fig} \end{MHfig} We consider the Hamiltonian \equ[e:defHChain]{ H_{\!S}(p,q) = \sum_{i=0}^N \Bigl( \frac{p_i^2}{2} + \V1(q_i)\Bigr) + \sum_{i=1}^{N} \V2(q_{i} - q_{i-1})\;, } describing a chain of particles with nearest-neighbor interaction (Figure~\ref{fig}). We slightly modify the notations used so far. Because there are only two heat baths, we will not use for them the indices $i \in \{1,2\}$, but rather $i\in\{L,R\}$. Concerning the coupling between the chain and the baths, we assume that we can make a dipole approximation, so we set \equ[e:defCouplChain]{ F_L = q_0 \qquad\text{and}\qquad F_R = q_N\;, } in equation \eref{e:totHam}. We will make the assumptions {\bf A1}--{\bf A3} on $\V1$ and $\V2$. \begin{assum}{1} The potential $\V1$ is in $\CF_{2n}^2$ for some $n>1$. Moreover, there are constants $c_i > 0$ such that \sublabels \equs[1,ass1]{ \V1(x) &\ge c_1 P^{2n}(x)\;, \sublabel{e:boundV1}\\ x\V1'(x) &\ge c_2P^{2n}(x) - c_3\;, \sublabel{e:boundxV1} } for all $x \in \R$. \end{assum} \begin{assum}{2} The potential $\V2$ is in $\CF_{2m}^2$ for some $m>n$. 
Moreover, there are constants $c_i' > 0$ such that \sublabels \equs[1,ass2]{ \V2(x) &\ge c_1' P^{2m}(x)\;, \sublabel{e:boundV2}\\ x\V2'(x) &\ge c_2' P^{2m}(x) - c_3'\;, \sublabel{e:boundxV2} } for all $x \in \R$. \end{assum} \begin{assum}{3} The function \equ{ x \mapsto \frac{1}{\V2''(x)} } belongs to $\CF_\ell$ for some $\ell$. \end{assum} \begin{remarque} It is clear that \eref{e:defCouplChain}, together with {\bf A1} and {\bf A2}, immediately implies {\bf A0}. Notice that the assumptions $\V1 \in \CF_{2n}^2$ and $\V2 \in \CF_{2m}^2$ give bounds not only on the asymptotic behavior of $\V1$ and $\V2$, but also on that of their derivatives. The numbers $n$, $m$ and $\ell$ need not be integers. The generalization to a Hamiltonian with $\V1$, $\V2$ depending also on the index of the particle would only create notational complications and is left to the reader. \end{remarque} An example of potentials that satisfy {\bf A1}--{\bf A3} is \equ{ \V1(x) = x^4 - x^2 + 2 \quad\text{and}\quad \V2(x) = (1+x^2)^{5/2} - \cos(x)\;. } The effective energy of the system chain+baths is given by \equ[e:defGChain]{ G(p,q,r) = H_{\!S}(p,q) + \frac{{r_{\!L}}^2}{2\lambda_L^2} + \frac{{r_{\!R}}^2}{2\lambda_R^2} - q_0{r_{\!L}} - q_N {r_{\!R}} + \Gamma\;, } where we choose the constant $\Gamma$ such that $G \ge 1$, which is always possible, because $n>1$. In fact, it is important that the function $\exp(-\beta G)$ be integrable for any $\beta > 0$. This could also be achieved with, for example, only one of the one-body potentials non-vanishing, but that would cause some unimportant notational difficulties. The case $n=1$ is marginal: the stability of the system then depends on the values of the constants $\lambda_i$; it was treated in \cite{EPR}. We will not treat this case, but it would not cause any serious difficulty, as long as $G$ remains confining. In the sequel, we will extensively use the notations \equ{ \tilde q_i \equiv q_i - q_{i-1} \qquad\text{and}\qquad Q \equiv \sum_{i=0}^N q_i \;.
} The system of stochastic differential equations we consider is given by \equs[e:stochChain]{ dq_i &= p_i\,dt\;,\\[1mm] dp_0 &= -\V1'(q_0)\,dt + \V2'(\tilde q_1)\,dt + {r_{\!L}}\,dt\;, \\[1mm] dp_j &= -\V1'(q_j)\,dt - \V2'(\tilde q_{j})\,dt + \V2'(\tilde q_{j+1})\,dt\;, \\[1mm] dp_N &= -\V1'(q_N)\,dt - \V2'(\tilde q_{N})\,dt + {r_{\!R}}\,dt\;, \\[1mm] d{r_{\!L}} &= -\gamma_L {r_{\!L}}\,dt + \lambda_L^2\gamma_L q_0\,dt - \lambda_L\sqrt{2\gamma_L T_L}\,dw_L(t)\;, \\[1mm] d{r_{\!R}} &= -\gamma_R {r_{\!R}}\,dt + \lambda_R^2\gamma_R q_N\,dt - \lambda_R\sqrt{2\gamma_R T_R}\,dw_R(t)\;, } where $i = 1,\ldots,N$ and $j=1,\ldots,N-1$. Since {\bf A0} holds, the results of the preceding section apply. Therefore, there exists for any initial condition $x_0$ a unique stochastic process $\xi(t;x_0,w)$ solving \eref{e:stochChain}. It obeys the statistics of a Markov diffusion process with generator \equs[e:defL]{ \CL =&\; \lambda_L^2\gamma_L T_L \d_{{r_{\!L}}}^2 + \lambda_R^2\gamma_R T_R \d_{{r_{\!R}}}^2 - \gamma_L({r_{\!L}} - \lambda_L^2 q_0)\d_{{r_{\!L}}} - \gamma_R({r_{\!R}} - \lambda_R^2 q_N)\d_{{r_{\!R}}} \\ &+ {r_{\!L}} \d_{p_0} + {r_{\!R}} \d_{p_N} + \sum_{i=0}^N \bigl(p_i\d_{q_i} - \V1'(q_i)\d_{p_i}\bigr) - \sum_{i=1}^{N} \V2'(\tilde q_i)\bigl(\d_{p_i}-\d_{p_{i-1}}\bigr)\;. } We want to prove the existence of a smooth invariant measure with density $\mu(p,q,r)$. It is the solution of $({\cal T}^t)^*\mu = 0$, where $({\cal T}^t)^*$ is the dual semi-group of ${\cal T}^t$. To achieve this, we introduce, as above, the Hilbert space \equ{ \CH_0 \equiv \Ltwo\bigl(\R^{2N+4}, Z_0^{-1}e^{-2\beta_0 G(p,q,r)}\,dp\,dq\,dr\bigr)\;, } where $Z_0$ is a normalization constant and $\beta_0$ is a ``reference'' inverse temperature that we choose such that \equ[e:condTRef]{ 1/\beta_0 \equiv T_0 > \max\{T_L,\,T_R\}\;. } Proposition~\ref{prop:Tt} holds, so the dynamics of our system is described by a semi-group ${\cal T}_0^t$ acting in $\CH_0$ with generator $L_0$, formally given by $\CL$. 
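To get a feeling for the dynamics \eref{e:stochChain}, one can integrate it numerically. The following Euler--Maruyama sketch is our own illustration and plays no role in the analysis; it uses the example potentials $\V1(x) = x^4 - x^2 + 2$ and $\V2(x) = (1+x^2)^{5/2} - \cos(x)$ given above, and all parameter values ($N=4$, unit couplings and relaxation rates, $T_L = 0.5$, $T_R = 1.5$, step size $10^{-3}$) are arbitrary choices.

```python
import numpy as np

def dV1(x):                                  # V1(x) = x^4 - x^2 + 2
    return 4*x**3 - 2*x

def dV2(x):                                  # V2(x) = (1+x^2)^{5/2} - cos(x)
    return 5*x*(1 + x**2)**1.5 + np.sin(x)

def em_step(q, p, rL, rR, dt, rng,
            gL=1.0, gR=1.0, lL=1.0, lR=1.0, TL=0.5, TR=1.5):
    """One Euler-Maruyama step of the chain equations (e:stochChain)."""
    tq = q[1:] - q[:-1]                      # tilde q_i = q_i - q_{i-1}
    f = -dV1(q)                              # one-body forces
    f[:-1] += dV2(tq)                        # + V2'(tilde q_{i+1}) acting on particle i
    f[1:] -= dV2(tq)                         # - V2'(tilde q_i) acting on particle i
    f[0] += rL                               # left bath variable acts on particle 0
    f[-1] += rR                              # right bath variable acts on particle N
    qn = q + dt*p
    pn = p + dt*f
    rLn = rL - dt*gL*(rL - lL**2*q[0]) - lL*np.sqrt(2*gL*TL*dt)*rng.standard_normal()
    rRn = rR - dt*gR*(rR - lR**2*q[-1]) - lR*np.sqrt(2*gR*TR*dt)*rng.standard_normal()
    return qn, pn, rLn, rRn

rng = np.random.default_rng(0)
N = 4                                        # N+1 = 5 oscillators
q, p, rL, rR = np.zeros(N+1), np.zeros(N+1), 0.0, 0.0
for _ in range(20000):                       # integrate up to t = 20
    q, p, rL, rR = em_step(q, p, rL, rR, 1e-3, rng)
print(np.all(np.isfinite(q)), np.all(np.isfinite(p)))
```

With these parameters the trajectory remains in a bounded region, consistent with the existence of an invariant measure proved below; the scheme is meant only as an illustration, and no attempt at numerical accuracy is made.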
The extended phase space of our system will again be denoted by $\CX \equiv \R^{2N+4}$. For convenience, we would like to work in $\CH = \Ltwo(\CX)$, so we define the unitary transformation $U:\CH \to \CH_0$ by \equ{ \bigl(U f\bigr)(x) = e^{\beta_0 G(x)}f(x)\;. } So $L_0$ is unitarily equivalent to the operator $L_{\CH} : \CD(L_{\CH}) \to \CH$ defined by \equ{ L_{\CH} = U^{-1} L_0 U = e^{-\beta_0 G}L_0 e^{\beta_0 G}\;. } An explicit computation shows that $L_{\CH}$ is given by \equ{ L_{\CH} = \alpha - K\;, } where the formal expression for the differential operator $K$ is \equs[e:defKchain]{ K =&\; \alpha_K - c_L^2 \d_{{r_{\!L}}}^2 + a_L^2({r_{\!L}} - \lambda_L^2 q_0)^2 - c_R^2 \d_{{r_{\!R}}}^2 + a_R^2({r_{\!R}} - \lambda_R^2 q_N)^2 \\ &\;- {r_{\!L}}\d_{p_0} + b_L({r_{\!L}} - \lambda_L^2 q_0)\d_{r_{\!L}} - {r_{\!R}}\d_{p_N} + b_R({r_{\!R}} - \lambda_R^2 q_N)\d_{r_{\!R}} \\ &\;- \sum_{i=0}^N \bigl(p_i\d_{q_i} - \V1'(q_i)\d_{p_i}\bigr) + \sum_{i=1}^{N} \V2'(\tilde q_i)\bigl(\d_{p_i}-\d_{p_{i-1}}\bigr)\;. } Since $\cOinf[\CX]$ is invariant under the unitary transformation $U$, it remains a core for both $K$ and $K^*$. The various constants appearing in \eref{e:defKchain} are given by \begin{alignat*}{2} a_i^2 &= \gamma_i(\beta_0 T_i - 1)\;, \quad&\quad &\\ b_i &= \frac{\gamma_i \beta_0}{\lambda_i^2}\bigl(\beta_0 T_i - 1\bigr)\;, \quad&\quad i&\in\{L,R\}\;,\\ c_i &= \lambda_i\sqrt{\gamma_i T_i}\;,\quad&\quad&\\ \alpha_K &= -\frac{b_L}{2} - \frac{b_R}{2}\;,\quad&\quad&\\ \alpha &= \alpha_K + {\beta_0} \sum_{i\in\{L,R\}} \gamma_i T_i\;.\quad&\quad & \end{alignat*} We see that condition \eref{e:condTRef} ensures the positivity of the constants $a_L^2$ and $a_R^2$, which in turn implies that the closure of $\Re K = (K + K^*)/2$ is a strictly positive self-adjoint operator. The first feature we notice about $K$ is that {\bf A3} implies the hypoellipticity of the operators $K$, $K^*$, $\d_t + K$ and $\d_t + K^*$. 
We recall that a differential operator $L$ acting on functions on a finite-dimensional differentiable manifold $\CM$ is called hypoelliptic if \equ{ \text{sing supp } f = \text{sing supp } L f \;,\qquad \text{for all}\quad f \in \CD'(\CM)\;, } where $\CD'(\CM)$ is the space of distributions on $\cOinf[\CM]$. In particular, the eigenfunctions of a hypoelliptic operator are $\CC^\infty$. The hypoellipticity of the above operators is a consequence of a theorem by H\"ormander \cite{H1,Ho}: let \equ{ L = \sum_{i=1}^n L_i^* L_i + L_0 + c\;, } be a second-order differential operator, where $c : \CM \to \C$ is a smooth function and the $L_i$ are smooth vector fields. Then a sufficient condition for $L$ to be hypoelliptic is that the Lie algebra generated by $\{L_i\;|\; i=0,\ldots,n\}$ has maximal rank everywhere. It is not hard to verify that {\bf A3} ensures that this condition is satisfied for $K$, $K^*$, $\d_t + K$ and $\d_t + K^*$. \begin{proposition} If {\bf A0} and {\bf A3} are satisfied, the transition probabilities of the Markov process solving \eref{e:stochChain} have a smooth density \equ{ P(t,x,y) \in \CC^\infty\bigl((0,\infty) \times \CX \times \CX\bigr)\;. } \end{proposition} \begin{proof} This is an immediate consequence of the Kolmogorov equations, which state that \equ{ \d_t P = \CL P\qquad\Rightarrow\qquad (\d_t + K - \alpha)U^{-1}P = 0\;, } so $U^{-1}P$ is a distributional solution of the equation $(\d_t + K - \alpha)u = 0$. Since the operator $\d_t + K - \alpha$ is hypoelliptic, $U^{-1}P$, and hence $P$, is smooth. \end{proof} \subsection{Main results} Our main technical result is \begin{theorem} \label{theo:Chain} If Assumptions {\bf A1}--{\bf A3} are satisfied, then the operator $K$ defined in \eref{e:defKchain} has compact resolvent. \end{theorem} In order to prepare the proof of Theorem \ref{theo:Chain}, we will prove the following two propositions.
\begin{proposition} \label{prop:estG} If Assumptions {\bf A1} and {\bf A2} are satisfied, there exist constants $C$ and $\eps > 0$ such that \sublabels \equs[2,e:estG]{ \|G^{\eps}f\| &\le C(\|K f\| + \|f\|)\;, \quad&\text{for all}\quad f&\in \CD(K)\;,\sublabel{e:estG1}\\ \|G^{\eps}f\| &\le C(\|K^* f\| + \|f\|)\;, \quad&\text{for all}\quad f&\in \CD(K^*)\;.\sublabel{e:estG2} } \end{proposition} \begin{proposition} \label{prop:estDeltaPrime} If Assumptions {\bf A1}--{\bf A3} are satisfied, there exist constants $C$, $\eps > 0$, a positive function $a_0:\CX \to \R$ and a finite number $\bar N$ of smooth vector fields $L_i$ with bounded coefficients such that, for every function $f \in \cOinf[\CX]$, we have \equs[1,e:defDeltaPrime]{ \|\tilde\Delta^{\eps}f\| &\le C(\|K f\| + \|f\|)\;,\\ \intertext{where} \tilde \Delta &= \sum_{i=1}^{\bar N}L_i^* L_i + a_0\;.\notag } Moreover, the $L_i$ span the whole of $\R^{2N+4}$ at every point. \end{proposition} Given Theorem~\ref{theo:Chain}, we can state and prove the main result of this paper, namely the existence and uniqueness of an invariant measure for our Markov process. More precisely, we have the following result. \begin{theorem} \label{theo:exist} If Assumptions {\bf A1}--{\bf A3} are satisfied, then the stochastic process $\xi(t)$ solving \eref{e:sysStoch} possesses a unique and strictly positive invariant measure $\mu$. Its density $h$ is ${\cal C}^\infty$ and satisfies for any $\beta_0 < \min\{\beta_L,\beta_R\}$, \equ{ h(x) = \tilde h(x) e^{-\beta_0 G(x)}\;, } where $\tilde h$ decays at infinity faster than any polynomial. \end{theorem} \begin{MHwrap}{r}{7cm}{SpectrFig}[-5mm] \vspace{-5mm} \caption{Spectrum of $K$.} \label{fig:spec} \end{MHwrap} The above results say that the spectrum of $K$ looks roughly like the one schematically depicted in Figure~\ref{fig:spec}. We see that it is discrete (compactness of the resolvent) and located in the right half of the complex plane ({\it m}-accretivity). 
Moreover, it is symmetric along the real axis, because $K$ is a differential operator with real coefficients. Most of the remainder of this paper is devoted to the proofs of Theorems~\ref{theo:Chain} and \ref{theo:exist}. In the sequel, we will always use the notation \equ{ K = \sum_{i=1}^4 X_i^* X_i + X_0\;, } where we define \sublabels \equs[2,e:defXi]{ X_1 &= c_L \d_{{r_{\!L}}}\;, \quad&\quad X_2 &= a_L({r_{\!L}} - \lambda_L^2 q_0)\;, \sublabel{e:defLeft}\\ X_3 &= c_R \d_{{r_{\!R}}}\;, \quad&\quad X_4 &= a_R({r_{\!R}} - \lambda_R^2 q_N)\;, \sublabel{e:defRight} } \vspace{-5mm} \sublabels* \equs[e:defX0]{ X_0 =&\; - {r_{\!L}}\d_{p_0} + b_L({r_{\!L}} - \lambda_L^2 q_0)\d_{r_{\!L}} - {r_{\!R}}\d_{p_N} + b_R({r_{\!R}} - \lambda_R^2 q_N)\d_{r_{\!R}} \\ &\;- \sum_{i=0}^N \bigl(p_i\d_{q_i} - \V1'(q_i)\d_{p_i}\bigr) + \sum_{i=1}^{N} \V2'(\tilde q_i)\bigl(\d_{p_i}-\d_{p_{i-1}}\bigr) - \alpha_K\;. } The operator $X_0$ is antisymmetric, \ie \equ[e:propX0star]{ X_0^* = -X_0\;. } This implies that \equ[e:defReK]{ \Re K = \sum_{i=1}^4 X_i^* X_i \quad\text{and}\quad X_0 = K-\Re K\;, } and thus $\Re K$ is a positive self-adjoint operator. We have one more estimate that will be extensively used in the sequel. If $f$ is some function in $\cOinf[\CX]$ and $i\in\{1,\ldots,4\}$ we have \equ[e:estXchain]{ \|X_i f\|^2 = \scal{f,X_i^* X_i f} \le \scal{f, \Re K f} = \Re\scal{f,K f} \le \|f\|\|K f\| \le (\|Kf\| + \|f\|)^2\;, } and by a similar argument also \equ[e:estXstarchain]{ \|X_i^* f\|^2 \le (\|Kf\| + \|f\|)^2\;. } \mysection{Proof of the bound in position space (Proposition~\ref{prop:estG})} \label{sec:proof1} First of all, we need a collection of functions belonging to $\CF_0$, as defined in Definition~\ref{def:func}. We have the following result. 
\begin{proposition} \label{prop:Degs} Let $r$, $p$, $q$ and $\tilde q$ designate the vectors \equs[2]{ r &= ({r_{\!L}}, {r_{\!R}})\;,\quad&\quad q &= (q_0, \ldots, q_N)\;,\\ p &= (p_0, \ldots, p_N)\;,\quad&\quad\tilde q &= (\tilde q_1, \ldots, \tilde q_N)\;. } Choose $\alpha \ge 0$ and let $h_k : \R^k \to \R$ be functions in $\CF_\alpha$. Then the functions \equ{ G^{-\alpha/2}h_2(r)\;,\quad G^{-\alpha/2}h_{N+1}(p)\;,\quad G^{-\alpha/(2n)}h_{N+1}(q)\;,\quad\text{and}\quad \;G^{-\alpha/(2m)}h_N(\tilde q) } belong to $\CF_0$. \end{proposition} \begin{proof} We will only sketch the proof of the statement for $G^{-\alpha/(2m)}h_N(\tilde q)$. The other expressions can easily be treated in a similar way. We first notice that $G^{-1}(D^k G)$ is bounded for every multi-index $k$. This is a straightforward consequence of two observations. The first one is that, because of the lower bounds \eref{e:boundV1} and \eref{e:boundV2} of {\bf A1} and {\bf A2} and the expression \eref{e:defGChain} of $G$, there exists a constant $C >0$ for which \equ[e:bound1]{ G(p,q,r) \ge C\bigl(r^2 + p^2 + P^{2n}(q) + P^{2m}(\tilde q)\bigr)\;, } where $P^k$ was defined in \eref{e:defPoly}. The second observation is that, because $\V1 \in \CF_{2n}$ and $\V2 \in \CF_{2m}$, we have for every multi-index $k$ some constant $C_k$ for which \equ[e:bound2]{ |D^k G(p,q,r)| \le C_k \bigl(r^2 + p^2 + P^{2n}(q) + P^{2m}(\tilde q)\bigr)\;. } Notice that $G^{-\alpha/(2m)}D^k h_N(\tilde q)$ is bounded by a similar argument, in particular because $h_N \in \CF_\alpha$. We set $\kappa = -\alpha/(2m)$ and write \equ{ \d_i\bigl(G^\kappa h_N(\tilde q)\bigr) = \kappa\bigl(G^{-1}\d_i G\bigr) G^\kappa h_N(\tilde q) + G^\kappa \d_i h_N(\tilde q)\;. } Both terms are bounded by \eref{e:bound1}, \eref{e:bound2} and the fact that $h_N \in \CF_\alpha$. It is easy to see that all the derivatives can be bounded similarly. The proof of Proposition~\ref{prop:Degs} is complete.
\end{proof} Let us define \equ{ \Lambda_1 \equiv G^{1/2}\;. } The symbol $\Lambda_1$ was chosen in order to emphasize the similarity between the proof of Proposition~\ref{prop:estG} and the proof of the main result of Section~\ref{sec:Hormander}, Theorem~\ref{theo:princ}. Before we start the proof of Proposition~\ref{prop:estG}, we notice two more facts. Let us choose $\alpha$,$\beta \in \R$ with $0 \le \beta \le 1$, and let $A$, $B$ be two operators of multiplication by positive functions $A \le B$. We then have \equ[e:estPosFun]{ \scal{\Lambda_1^\alpha A f, f} \le \scal{\Lambda_1^\alpha B f, f}\;, } as well as the implication \equ[e:estExpos]{ \|\Lambda_1^\alpha A f\| \le C(\|K f\| + \|f\|) \quad\Rightarrow\quad \|\Lambda_1^{\alpha \beta} A^\beta f\| \le C(\|K f\| + \|f\|)\;. } Both inequalities are trivial consequences of the fact that $\Lambda_1$ is an operator of multiplication by a positive function and the estimate $x^s \le 1+x$ if $x \ge 0$ and $s \le 1$. \subsection{The main tool of the proof} The main tool in the proof of Proposition~\ref{prop:estG} is the following lemma. \begin{lemma} \label{lem:estG} Let $\Lambda_1$ and $K$ be defined as above. Let $A$ and $B$ be multiplication operators represented by functions of the form \equ{ h(p,q,r) = c_L {r_{\!L}} + c_R {r_{\!R}} + \tilde h(p,q)\;,\qquad \tilde h \in \cinf(\R^{2N+2})\;. } Assume moreover that there are exponents $\alpha_i$ and $\beta_i$ and positive constants $C_i$ such that the following estimates are true for every $f \in \cOinf[\CX]$. \begin{alignat*}{2} \|\Lambda_1^{-\alpha_1}A f\| &\le C_1(\|K f\| + \|f\|)\;, \qquad&\qquad \|\Lambda_1^{-\beta_1}B f\| &\le C_2(\|K f\| + \|f\|)\;, \\ \|\Lambda_1^{-\alpha_2}A f\| &\le C_3\|f\|\;, \qquad&\qquad \|\Lambda_1^{-\beta_2}B f\| &\le C_4\|f\|\;, \\ \|\Lambda_1^{-\alpha_3}[X_0,A]f\| &\le C_5(\|K f\| + \|f\|)\;, \qquad&\qquad \|\Lambda_1^{-\beta_3}[X_0,B]f\| &\le C_6(\|K f\| + \|f\|)\;. 
\end{alignat*} If $\gamma$ satisfies the conditions \begin{eqnarray} \gamma &\ge& \alpha_3 + \beta_1 \label{e:cond1}\;,\\ \gamma &\ge& \alpha_2 + \frac{\beta_1 + \max\{\beta_2,\beta_3\}}2 \label{e:cond2}\;,\\ \gamma &\ge& \min\{\alpha_1 + \beta_2\,,\, \alpha_2 + \beta_1\}\;, \label{e:cond4} \end{eqnarray} then there exists a constant $C$ such that \equ[e:estWork]{ |\scal{[X_0,B]f,\Lambda_1^{-\gamma}A f}| \le C(\|K f\| + \|f\|)^2\;,\quad\text{for all}\quad f\in \cOinf[\CX]\;. } \end{lemma} \begin{proof} The proof of this lemma involves some of the commutation techniques developed by H\"ormander \cite{Ho}, but it uses the fact that most operators involved are multiplication operators, \ie they commute. An explicit computation, using \eref{e:defGChain} and \eref{e:defXi} yields \sublabels \equs[2,e:comms]{ [X_0, G] &= \sum_{j\in\{R,L\}} \frac{b_j}{\lambda_j^2}\bigl(r_j - \lambda_j^2 F_j\bigr)^2 \;,&\quad [X_1,G] &= c_L\bigl({r_{\!L}}/\lambda_L^2 - q_0\bigr)\;, \sublabel{e:comms1}\\ [X_2,G] &= [X_4,G] = 0\;,&\quad [X_3,G] &= c_R\bigl({r_{\!R}}/\lambda_R^2 - q_N\bigr)\;. \sublabel{e:comms2} } We therefore see that, by Proposition~\ref{prop:Degs}, we have for $i=0,\ldots,4$ \equ[e:boundG]{ G^{-1}[X_i,G] \in \CF_0\;. } Since the $X_i$ are either differentiation operators or multiplicative operators, we have, for any $\alpha \in \R$, the relation \equ{ G^{-\alpha}[X_i, G^\alpha] = \alpha G^{-1}[X_i,G] \in \CF_0\;, } and so, since $\Lambda_1^2 = G$, \equ[e:boundComm]{ \|\Lambda_1^{\alpha}[X_i,\Lambda_1^{-\alpha}]\| < \infty\;. } We can now start to bound \eref{e:estWork}. Since $[X_0, B] = - X_0^* B - B X_0$, we can write \eref{e:estWork} as \equs{ |\scal{[X_0,B]f,\Lambda_1^{-\gamma}A f}| &\le |\scal{B X_0f,\Lambda_1^{-\gamma}A f}| + |\scal{B f, X_0 \Lambda_1^{-\gamma}A f}| \\ & \equiv T_1 + T_2\;. } Both terms will be estimated separately. 
\proclaim{Term $\boldsymbol{T_1}$.} Since we know by \eref{e:defReK} that $X_0 = K - \Re K$, we can write it as \equ{ T_1 \le |\scal{B(\Re K)f, \Lambda_1^{-\gamma}A f}| + |\scal{B K f, \Lambda_1^{-\gamma}A f}| \equiv T_{11} + T_{12}\;. } The term $T_{12}$ can be estimated by using \eref{e:cond4}. We indeed have either $\gamma \ge \alpha_1 + \beta_2$, or $\gamma \ge \alpha_2 + \beta_1$. In the former case, we write \equ{ T_{12} \le \|\Lambda_1^{-\beta_2} B\| \|K f\| \|\Lambda_1^{-\gamma + \beta_2}A f\| \le C(\|K f\| + \|f\|)^2\;. } In the latter case, we use the fact that $A$, $B$ and $\Lambda_1$ commute and are self-adjoint to write similarly \equ{ T_{12} = |\scal{A K f, \Lambda_1^{-\gamma}B f}| \le \|\Lambda_1^{-\alpha_2} A\| \|K f\| \|\Lambda_1^{-\gamma + \alpha_2}B f\| \le C(\|K f\| + \|f\|)^2\;. } Let us now focus on the term $T_{11}$. Using the positivity of $\Re K$, it can be written as \equs{ T_{11} &= \scal{(\Re K)^{1/2}\Lambda_1^{-\gamma_1}B f,(\Re K)^{1/2}\Lambda_1^{-\gamma_2}A f} + \scal{[\Lambda_1^{-\gamma_1}B,\Re K]f, \Lambda_1^{-\gamma_2}A f}\\ &\equiv T_{13} + T_{14}\;, } where \equ{ \gamma_1, \gamma_2 > 0\;,\qquad\gamma_1 + \gamma_2 = \gamma\;, } are to be chosen later. We estimate both terms separately. The commutator in $T_{14}$ can be expanded to give \equ{ T_{14} = \scal{\Lambda_1^{-\gamma_1}[B,\Re K]f,\Lambda_1^{-\gamma_2}A f} + \scal{[\Lambda_1^{-\gamma_1},\Re K]B f,\Lambda_1^{-\gamma_2}A f}\;. } In order to estimate these terms, we recall that $\Re K = \sum_{i=1}^4 X_i^* X_i$. We therefore have \equs{ T_{14} =&\; \sum_{i=1}^4 \Bigl( \scal{\Lambda_1^{-\gamma_1}[B,X_i^*]X_i f,\Lambda_1^{-\gamma_2}A f} + \scal{\Lambda_1^{-\gamma_1}X_i^*[B,X_i] f,\Lambda_1^{-\gamma_2}A f} \\[0mm] &\qquad+\scal{[\Lambda_1^{-\gamma_1},X_i^*]X_i B f,\Lambda_1^{-\gamma_2}A f}+\scal{X_i^*[\Lambda_1^{-\gamma_1},X_i] B f,\Lambda_1^{-\gamma_2}A f}\Bigr) \\[0mm] \equiv&\; \sum_{i=1}^4 \bigl(T_{i}^{(1)} + T_{i}^{(2)}+T_{i}^{(3)}+T_{i}^{(4)}\bigr)\;. 
} Noticing that $[B, X_i^*]$ is a multiple of the identity operator and that $\Lambda_1$ is self-adjoint, we have \equ{ |T_i^{(1)}| \le C|\scal{X_i f, \Lambda_1^{-\gamma} A f}| \le C\|X_i f\| \|\Lambda_1^{-\gamma} A f\| \le C(\|K f\| + \|f\|)^2\;, } where we used \eref{e:estXchain} and the fact that $\gamma \ge \alpha_2$ to get the last inequality. The term $T_{i}^{(2)}$ is bounded by $C(\|K f\| + \|f\|)^2$ in a similar way. The term $T_{i}^{(3)}$ is written as \equs{ |T_{i}^{(3)}| &= |\scal{\Lambda_1^{\gamma_1}[\Lambda_1^{-\gamma_1},X_i^*]X_i f, \Lambda_1^{-\gamma}A B f} + \scal{\Lambda_1^{\gamma_1}[\Lambda_1^{-\gamma_1},X_i^*][X_i,B] f, \Lambda_1^{-\gamma}A f}| \\[1mm] &\le C\|X_i f\|\|\Lambda_1^{-\gamma} A B f\| + C\|f\|\|\Lambda_1^{-\gamma} A f\|\;, } where we used \eref{e:boundComm} and the fact that $[X_i, B]$ is bounded. Now we can bound $T_i^{(3)}$ by $C(\|Kf\| + \|f\|)^2$, using \eref{e:estXchain} to estimate $\|X_i f\|$ and \eref{e:cond4} to estimate $\|\Lambda_1^{-\gamma} A B f\|$ and $\|\Lambda_1^{-\gamma} A f\|$. The term $T_{i}^{(4)}$ can be estimated in a similar way. Let us now focus on the term $T_{13}$. We can write \equ{ |T_{13}| \le |\Re\scal{K \Lambda_1^{-\gamma_1}B f,\Lambda_1^{-\gamma_1}B f}|^{1/2}\Bigl(\sum_{i=1}^4\|X_i\Lambda_1^{-\gamma_2}A f\|^2\Bigr)^{1/2}\;. } If we choose \equ[e:hyp1]{ \gamma_2 = \alpha_2\;, } the terms under the square root are easily estimated by writing them as \equs{ \|X_i\Lambda_1^{-\gamma_2}A f\| &\le \|\Lambda_1^{-\gamma_2}A\| \|X_i f\| + \|[X_i,\Lambda_1^{-\gamma_2}]\Lambda_1^{\gamma_2}\| \|\Lambda_1^{-\gamma_2}A f\| \\ &\quad + \|\Lambda_1^{-\gamma_2}[X_i,A] f\|\;, } and estimating the two commutators by \eref{e:boundComm} and \eref{e:comms} respectively. The term preceding the square root can be written as \equs{ \scal{K \Lambda_1^{-\gamma_1}B f,\Lambda_1^{-\gamma_1}B f} &= \scal{\Lambda_1^{-\gamma_1}B K f,\Lambda_1^{-\gamma_1}B f} + \scal{[K,\Lambda_1^{-\gamma_1}B]f,\Lambda_1^{-\gamma_1}B f} \\ &\equiv T_{15} + T_{16}\;.
} The term $T_{15}$ can be bounded if we choose \equ[e:hyp2]{ 2\gamma_1 \ge \beta_1 + \beta_2\;, } since we then have \equ{ T_{15} \le \|K f\|\|\Lambda_1^{-\beta_2}B\| \|\Lambda_1^{-\beta_1}B f\| \le C(\|K f\| + \|f\|)^2\;. } In order to estimate the term $T_{16}$, we use $K = \Re K + X_0$ to write \equs{ T_{16} &= \scal{[X_0,\Lambda_1^{-\gamma_1}B]f,\Lambda_1^{-\gamma_1}B f} + \scal{[\Re K,\Lambda_1^{-\gamma_1}B]f,\Lambda_1^{-\gamma_1}B f}\\[1mm] &\equiv T_{16}^{(1)} + T_{16}^{(2)}\;. } The term $T_{16}^{(1)}$ can be estimated by writing it as \equ{ T_{16}^{(1)} = \scal{\Lambda_1^{-\gamma_1}[X_0,B]f,\Lambda_1^{-\gamma_1}B f} + \scal{[X_0,\Lambda_1^{-\gamma_1}]\Lambda_1^{\gamma_1} \Lambda_1^{-\gamma_1}B f,\Lambda_1^{-\gamma_1}B f}\;. } The first term can be bounded by $C(\|K f\| + \|f\|)^2$ if we choose \equ[e:hyp3]{ 2\gamma_1 \ge \beta_1 + \beta_3\;. } In order to bound the second term, it suffices to have $\gamma_1 \ge \beta_1$, which is the case because of \eref{e:hyp2} and the fact that $\beta_2 \ge \beta_1$. The term $T_{16}^{(2)}$ can be bounded by $C(\|K f\| + \|f\|)^2$, by treating it in a similar way to the term $T_{14}$. We leave to the reader the verification that no additional conditions on $\gamma_1$ are needed. This completes the estimate of $T_1$, because \eref{e:hyp1}, \eref{e:hyp2} and \eref{e:hyp3} can be satisfied simultaneously by \eref{e:cond2}. \proclaim{Term $\boldsymbol{T_2}$.} We decompose this term as \equs{ T_2 &\le |\scal{B f, \Lambda_1^{-\gamma}A X_0 f}| + |\scal{B f, \Lambda_1^{-\gamma}[X_0,A] f}| + |\scal{B f, [X_0, \Lambda_1^{-\gamma}]A f}| \\[1mm] & \equiv T_{21} + T_{22} + T_{23}\;. } Since $\gamma \ge \alpha_3 + \beta_1$, the term $T_{22}$ is easily estimated by \equ{ T_{22} \le \|\Lambda_1^{-\beta_1} B f\|\|\Lambda_1^{-\alpha_3} [X_0,A] f\| \le C(\|K f\| + \|f\|)^2\;. } Since we can assume $\alpha_1 \le \alpha_2$ and $\beta_1 \le \beta_2$, condition \eref{e:cond4} implies $\gamma \ge \alpha_1 + \beta_1$.
Since $[X_0, \Lambda_1^{-\gamma}]$ is a function, it commutes with $\Lambda_1$, and so $T_{23}$ can be estimated by writing \equs{ T_{23} &\le |\scal{\Lambda_1^{-\beta_1}B f, \Lambda_1^\gamma [X_0, \Lambda_1^{-\gamma}]\Lambda_1^{-\alpha_1}A f}| \\ & \le \|\Lambda_1^{-\beta_1}B f\| \|[X_0, \Lambda_1^{-\gamma}]\Lambda_1^\gamma\|\|\Lambda_1^{-\alpha_1}A f\| \le C(\|Kf\| + \|f\|)^2\;, } where we used \eref{e:boundComm} to get the last bound. We finally bound $T_{21}$. Since $X_0 = K - \Re K$, it can be expanded as \equ{ T_{21} \le |\scal{B f, \Lambda_1^{-\gamma}A K f}| + |\scal{B f, \Lambda_1^{-\gamma}A(\Re K) f}| \equiv T_{21}^{(1)} + T_{21}^{(2)}\;. } The term $T_{21}^{(1)}$ can be estimated by writing \equ{ T_{21}^{(1)} \le \|K f\| \|\Lambda_1^{-\gamma}A B f\|\;, } and using \eref{e:cond4}. The term $T_{21}^{(2)}$ can be written as \equ{ T_{21}^{(2)} = \scal{B f, \Lambda_1^{-\gamma}A(\Re K) f} = T_{13} + \scal{\Lambda_1^{-\gamma_1}B f, [\Lambda_1^{-\gamma_2}A, \Re K] f}\;. } The term $T_{13}$ has already been estimated. The other term can be treated like the term $T_{14}$. We leave to the reader the verification that one can indeed bound it by $C(\|K f\| + \|f\|)^2$ without any further restriction on $\gamma_1$ and $\gamma_2$. This completes the proof of the lemma. \end{proof} \newcommand\expos[7][]{ \begin{alignat*}{2} \alpha_1 &= #2\;,\qquad&\qquad \beta_1 &= #5\;, \\ \alpha_2 &= #3\;,\qquad&\qquad \beta_2 &= #6\;, \\ \alpha_3 &= #4\;,\qquad&\qquad \beta_3 &= #7\;#1 \end{alignat*} } \subsection{The main step of the proof of Proposition~\ref{prop:estG}} By an elementary approximation argument, it is sufficient to prove the inequalities \eref{e:estG} for $f \in \cOinf[\CX]$, since this is a core for both $K$ and $K^*$. Moreover, we will prove only \eref{e:estG1}. The interested reader may verify that the same arguments also apply for \eref{e:estG2}.
We want to show that we can find constants $\eps$ and $C$ such that \equ{ \|\Lambda_1^{\eps}f\| \le C(\|K f\| + \|f\|)\;,\quad\text{for all}\quad f\in \cOinf[\CX]\;. } In order to show this, we notice that there is a constant $C$ such that \equ{ \Lambda_1^2 \le C\Bigl(1 + ({r_{\!L}} - \lambda_L^2 q_0)^2 + ({r_{\!R}} - \lambda_R^2 q_N)^2 + \sum_{i=0}^N p_i^2 + P^{2n}(Q) + \sum_{i=1}^N P^{2m}(\tilde q_i)\Bigr) \equiv \tilde G\;. } The immediate consequence is that \equ{ \|\Lambda_1^{\eps}f\|^2 = \scal{f, \Lambda_1^{2\eps}f} \le \scal{f,\Lambda_1^{2\eps-2}\tilde G f}\;. } It is therefore enough to show that there exists a (small) constant $\eps$ such that the terms \equ{ \|\Lambda_1^{\eps-1}P^n(Q)f\|\,,\;\|\Lambda_1^{\eps-1}p_i f\|\,,\;\|\Lambda_1^{\eps-1}P^m(\tilde q_i)f\|\,,\ldots } are bounded by $C(\|K f\| + \|f\|)$. We are first going to bound the terms involving variables near the boundary of the chain. Then, we will proceed by induction towards the middle of the chain. \proclaim{The term $\boldsymbol{\|\Lambda_1^{\eps-1}({r_{\!L}} - \lambda_L^2 q_0)f\|}$.} We have \equs[e:estR]{ \|({r_{\!L}} - \lambda_L^2 q_0)f\|^2 &= |\scal{({r_{\!L}}-\lambda_L^2 q_0)^2f,f}| \le C|\scal{(\Re K)f,f}| \\ &= C|\Re\scal{K f,f}| \le C\|K f\|\|f\| \le C(\|K f\| + \|f\|)^2\;, } where we used the fact that $a_L \neq 0$ to obtain the first inequality. Since $\Lambda_1 \ge 1$, we thus have the estimate \equ{ \|\Lambda_1^{\eps-1}({r_{\!L}} - \lambda_L^2 q_0)f\| \le C(\|K f\| + \|f\|) } if we take $\eps \le 1$. \proclaim{The term $\boldsymbol{\|\Lambda_1^{\eps-1}p_0 f\|}$.} We will prove the estimate \equ[e:estp1]{ \|\Lambda_1^{\eps_0-1}p_0 f \| \le C(\|K f\| + \|f\|)\;, } for $\eps_0 \le 1/(2m)$. An explicit computation yields the relation \equ[e:estp02]{ [X_0, {r_{\!L}} - \lambda_L^2 q_0] = b_L ({r_{\!L}} - \lambda_L^2 q_0) - \lambda_L^2 p_0\;.
} Solving \eref{e:estp02} for $p_0$, we get \equ{ \|\Lambda_1^{\eps_0-1}p_0 f \|^2 = \scal[b]{\lambda_L^{-2} \bigl(b_L ({r_{\!L}} - \lambda_L^2 q_0) - [X_0,{r_{\!L}} - \lambda_L^2 q_0]\bigr)f, \Lambda_1^{2\eps_0-2}p_0f} \equiv X_0^{(1)} - X_0^{(2)}\;. } The term $X_0^{(1)}$ can be estimated as \equ{ |X_0^{(1)}| \le \lambda_L^{-2} \|b_L ({r_{\!L}} - \lambda_L^2 q_0) f\|\|\Lambda_1^{2\eps_0-2}p_0 f\| \le C(\|K f\| + \|f\|)^2\;, } where the last inequality holds because $\eps_0 \le 1/2$. In order to estimate $X_0^{(2)}$, we apply Lemma~\ref{lem:estG} with $A = p_0$ and $B = {r_{\!L}} - \lambda_L^2 q_0$. An explicit computation yields $[X_0, A] = \V1'(q_0) + \V2'(\tilde q_1) - {r_{\!L}}$. The term $[X_0,B]$ has already been computed in \eref{e:estp02}. Because of Proposition~\ref{prop:Degs} and of \eref{e:estR}, we can choose \expos[.]{1}{1}{2-1/m}{0}{1}{1} The hypotheses of Lemma~\ref{lem:estG} are thus fulfilled if we choose $\gamma = 2-1/m$. We therefore have the estimate \eref{e:estp1} with $\eps_0 = 1/(2m)$. We have a similar estimate for the symmetric term at the other end of the chain. \proclaim{The term $\boldsymbol{\|\Lambda_1^{\eps-1} P^m(\tilde q_1) f\|}$.} We will prove the estimate \equ{ \|\Lambda_1^{\eps_0'-1}P^m(\tilde q_1) f \| \le C(\|K f\| + \|f\|)\;, } for some $\eps_0' < \eps_0$. Because of the bound \eref{e:boundxV2} of {\bf A2}, we can find some constants $c_1$ and $c_2$ such that \equ[e:estq1q0]{ \scal[b]{\Lambda_1^{2\eps_0'-2}P^{2m}(\tilde q_1) f,f} \le c_1\bigl|\scal[b]{\Lambda_1^{2\eps_0'-2} \V2'(\tilde q_1)f, \tilde q_1 f}\bigr| + c_2\bigl|\scal[b]{\Lambda_1^{2\eps_0'-2}f, f}\bigr|\;, } where we also used \eref{e:estPosFun}. The second term is easily estimated because $\Lambda_1^{2\eps_0'-2}$ is bounded if $\eps_0' \le 1$.
We use once again the fact that $[X_0, p_0] = \V1'(q_0) + \V2'(\tilde q_1) - {r_{\!L}}$ to write the first term as \equ{ \bigl|\scal[b]{\Lambda_1^{2\eps_0'-2} \V2'(\tilde q_1)f, \tilde q_1 f}\bigr| = \bigl|\scal[b]{\Lambda_1^{2\eps_0'-2} \bigl([X_0, p_0] - \V1'(q_0) + {r_{\!L}}\bigr)f, \tilde q_1 f}\bigr| \equiv |Y_1^{(1)} + Y_1^{(2)} + Y_1^{(3)}|\;. } The term $Y_1^{(2)}$ can be written as \equ{ |Y_1^{(2)}| = \bigl|\scal[b]{\Lambda_1^{2\eps_0'-2+1/m}\V1'(q_0)f, \Lambda_1^{-1/m}\tilde q_1 f}\bigr| \le \bigl\|\Lambda_1^{2\eps_0'-2+1/m}\V1'(q_0)f\bigr\| \|\Lambda_1^{-1/m}\tilde q_1 f\|\;. } By Proposition~\ref{prop:Degs} and the fact that $\V1' \in \CF_{2n-1}$, this term is bounded by $C\|f\|^2$ if we take $\eps_0'$ so small that \equ[e:eps01]{ 2\eps_0' \le 1/n - 1/m\;. } The term $Y_1^{(3)}$ is bounded similarly by writing \equ{ |Y_1^{(3)}| \le \bigl\|\Lambda_1^{2\eps_0'-2+1/m}{r_{\!L}} f\bigr\| \|\Lambda_1^{-1/m}\tilde q_1 f\|\;, } if we impose \equ[e:eps02]{ 2\eps_0' \le 1 - 1/m\;. } Both conditions can be satisfied because we assumed that $1 < n < m$. In order to estimate $Y_1^{(1)}$, we apply once again Lemma~\ref{lem:estG}. This time we have $A = \tilde q_1$ and $B = p_0$. Using \eref{e:estp1} and Proposition~\ref{prop:Degs}, we see that we can choose \expos[.]{1/m}{1/m}{1}{1-\eps_0}{1}{2-1/m} By using $m > 1$, we see that the hypotheses of Lemma~\ref{lem:estG} are fulfilled if \eref{e:eps01} and \eref{e:eps02} hold, together with $\eps_0' < \eps_0/2$. Once again, we have the same estimate at the other end of the chain. We can now go along the chain by induction. At each step, we go one particle closer towards the middle of the chain. We present here only the terms arising when we go from the left to the right of the chain. \proclaim{The term $\boldsymbol{\|\Lambda_1^{\eps-1}p_i f\|}$.} We already treated the case $i=1$. Let us therefore assume $i>1$. 
We moreover assume that there exist constants $\eps_{i-1}, \eps_{i-1}'>0$ such that we have the estimates \equ[e:indp]{ \bigl\|\Lambda_1^{\eps_{i-1}-1}p_{i-1} f\bigr\| \le C(\|K f\| + \|f\|) \quad\text{and}\quad \bigl\|\Lambda_1^{\eps_{i-1}'-1}P^m(\tilde q_i) f\bigr\| \le C(\|K f\| + \|f\|)\;. } We will show that this implies the existence of a constant $\eps_i > 0$ such that \equ[e:estpi]{ \bigl\|\Lambda_1^{\eps_i-1}p_i f\bigr\| \le C(\|K f\| + \|f\|)\;. } We use $p_i = p_{i-1} + [X_0, \tilde q_i]$ to write \equ{ \bigl\|\Lambda_1^{\eps_i-1}p_i f\bigr\|^2 = \scal{\Lambda_1^{2\eps_i-1}p_{i-1}f, \Lambda_1^{-1}p_i f} + \scal{[X_0, \tilde q_i]f, \Lambda_1^{2\eps_i-2}p_i f} \equiv X_i^{(1)} + X_i^{(2)}\;. } The term $X_i^{(1)}$ is easily bounded if we write \equ{ |X_i^{(1)}| \le \|\Lambda_1^{2\eps_i-1}p_{i-1}f\| \|\Lambda_1^{-1}p_i f\| \le C(\|K f\| + \|f\|)^2\;, } where the last inequality is obtained by using Proposition~\ref{prop:Degs} and \eref{e:indp}. We only have to make the assumption $2\eps_i \le \eps_{i-1}$. In order to estimate the term $X_i^{(2)}$, we apply Lemma~\ref{lem:estG} with $A = p_i$ and $B = \tilde q_i$. Explicit computation yields $[X_0,p_i] = \V1'(q_i) - \V2'(\tilde q_{i+1}) - \V2'(\tilde q_i)$. Using the induction hypothesis \eref{e:indp} and Proposition~\ref{prop:Degs}, we see that we can choose \expos[.]{1}{1}{2-1/m}{(1-\eps_{i-1}')/m}{1/m}{1} If we take $\eps_i \le \eps_{i-1}'/(2m)$, we see that the hypotheses of Lemma~\ref{lem:estG} are satisfied. We thus have the desired bound \eref{e:estpi}. \proclaim{The term $\boldsymbol{\|\Lambda_1^{\eps-1}P^m(\tilde q_{i+1}) f\|}$.} We assume that there exist strictly positive constants $\eps_i$ and $\eps_{i-1}'$ such that \equ{ \bigl\|\Lambda_1^{\eps_i-1}p_i f\bigr\| \le C(\|K f\| + \|f\|) \quad\text{and}\quad \bigl\|\Lambda_1^{\eps_{i-1}'-1}P^m(\tilde q_i) f\bigr\| \le C(\|K f\| + \|f\|)\;. 
} We will show that this implies the existence of a constant $\eps_i' > 0$ for which \equ[e:estqtilde]{ \bigl\|\Lambda_1^{\eps_i'-1}P^m(\tilde q_{i+1}) f\bigr\| \le C(\|K f\| + \|f\|)\;. } Expression \eref{e:estq1q0} still holds with $\tilde q_1$ replaced by $\tilde q_{i+1}$. In order to prove \eref{e:estqtilde}, it therefore suffices to show that \equ{ |\scal{\Lambda_1^{2\eps_i'-2}\V2'(\tilde q_{i+1})f, \tilde q_{i+1} f}| \le C(\|Kf\| + \|f\|)^2\;. } Since, for $i>1$, we have $[X_0, p_i] = \V1'(q_i) - \V2'(\tilde q_{i+1}) - \V2'(\tilde q_i)$, the preceding term can be written as \equ{ \bigl|\scal[b]{\Lambda_1^{2\eps_i'-2}\bigl([X_0, p_i] + \V1'(q_i) + \V2'(\tilde q_i)\bigr)f, \tilde q_{i+1}f}\bigr| \equiv |Y_i^{(1)} + Y_i^{(2)} + Y_i^{(3)}|\;. } We impose $2\eps_i' \le 1/n - 1/m$. The term $Y_i^{(2)}$ is then estimated as \equ{ |Y_i^{(2)}| \le \|\Lambda_1^{-1/m}\tilde q_{i+1}f\|\bigl\|\Lambda_1^{2\eps_i'-2+1/m}\V1'(q_i)f\bigr\| \le C(\|K f\| + \|f\|)^2\,, } where the last step uses Proposition~\ref{prop:Degs} and $\V1' \in \CF_{2n-1}$. In order to estimate the term $Y_i^{(3)}$, we notice that by the Cauchy-Schwarz inequality and assumption {\bf A2}, we have \equs{ |Y_i^{(3)}| &\le C\|\Lambda_1^{-1/m}\tilde q_{i+1} f\| \bigl\|\Lambda_1^{2\eps_i'-2+1/m}P^{2m-1}(\tilde q_i) f\bigr\|\\ &\le C\|f\|\bigl\|\Lambda_1^{1/m-1}P^{m-1}(\tilde q_i)\Lambda_1^{2\eps_i'-1}P^{m}(\tilde q_i) f\bigr\|\\ &\le C\|f\|\bigl\|\Lambda_1^{2\eps_i'-1} P^{m}(\tilde q_i) f\bigr\|\;. } We can choose $2\eps_i' < \eps_{i-1}'$, so this term can be estimated by the induction hypothesis. The term $Y_i^{(1)}$ is once again estimated by using Lemma~\ref{lem:estG}, this time with $A = \tilde q_{i+1}$ and $B=p_i$. Using Proposition~\ref{prop:Degs}, it is easy to verify that one can take \expos[.]{1/m}{1/m}{1}{1-\eps_i}{1}{2-1/m} It then suffices to choose $2\eps_i' < \eps_i$ to satisfy the assumptions of Lemma~\ref{lem:estG} and get the desired estimate.
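Let us record, for concreteness, one admissible bookkeeping of the induction constants; this particular choice is only one possibility, as the proof merely requires the inequalities collected above. Starting from $\eps_0 = 1/(2m)$ and $\eps_0'$ as chosen before, one may take \equ{ \eps_i = \frac{\eps_{i-1}'}{2m}\;,\qquad \eps_i' = \frac13\,\min\Bigl\{\eps_i\,,\;\eps_{i-1}'\,,\;\frac1n - \frac1m\Bigr\}\;. } Indeed, this choice satisfies $2\eps_i \le \eps_{i-1}$ (because $2\eps_{i-1}' < \eps_{i-1}$ and $m>1$), as well as $2\eps_i' < \eps_i$, $2\eps_i' < \eps_{i-1}'$ and $2\eps_i' \le 1/n - 1/m$. The exponents then decay at worst geometrically, like $(6m)^{-i}$, and in particular remain strictly positive after the finitely many steps needed to reach the middle of the chain.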
It is obvious that this induction also works in the other direction, starting from the other end of the chain. It also accommodates slightly more complicated topologies, as long as the chain does not contain any closed loop. In order to complete the proof of the lemma, we have to estimate the last term, corresponding to the motion of the center of mass. \proclaim{The term $\boldsymbol{\|\Lambda_1^{\eps-1}P^n(Q) f\|}$.} Finally, we want to show the estimate \equ[e:estQ]{ \|\Lambda_1^{\eps - 1}P^n(Q) f\| \le C(\|K f\| + \|f\|)\;, } for some $\eps$. We start with a preliminary computation. We write \equ{ (N+1) q_0 = Q + (q_{N-1}-q_N) + 2(q_{N-2}-q_{N-1}) + \ldots + N(q_0 - q_1)\;. } Moreover, we have $q_i = q_0 + (q_1-q_0) + \ldots + (q_i - q_{i-1})$. We can thus write \equ{ \frac{Q}{N+1} - q_i = \sum_{j=1}^N b_{i j} \tilde q_j\;,\quad\text{with}\quad b_{i j} \in \R\;. } This, together with the mean-value theorem, implies the useful relation \equs[e:relSec]{ (N+1)Q\V1'\bigl(Q/(N+1)\bigr) &= Q\sum_{i=0}^N \V1'(q_i) + Q\sum_{i=0}^N \Bigl(\V1'\bigl(Q/(N+1)\bigr) - \V1'(q_i)\Bigr) \\ &= Q\sum_{i=0}^N \V1'(q_i) + Q\sum_{i=0}^N \V1''(\xi_i) \sum_{j=1}^N b_{i j} \tilde q_j \;, } where $\xi_i$ lies somewhere on the segment between $Q/(N+1)$ and $q_i$. In the case of $d$-dimensional particles, the expression corresponding to \eref{e:relSec} is \equs{ |(N+1)Q\V1'\bigl(Q/(N+1)\bigr)| \le&\; |Q|\Bigl|\sum_{i=0}^N \nabla\V1(q_i)\Bigr| \\ &+ |Q|\sum_{i=0}^N \sup_{t \in (0,1)} \bigl|\nabla^2\V1\bigl(tQ/(N+1) + (1-t)q_i\bigr)\bigr| \sum_{j=1}^N |b_{i j}| |\tilde q_j| \;. } The subsequent expressions can be rewritten accordingly.
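As a quick consistency check of the identity for $(N+1)q_0$ above, note that with $Q = \sum_{i=0}^N q_i$ (so that $Q/(N+1)$ is the position of the center of mass) and $N=2$, the right-hand side reads \equ{ Q + (q_1 - q_2) + 2(q_0 - q_1) = (q_0 + q_1 + q_2) + q_1 - q_2 + 2q_0 - 2q_1 = 3q_0\;, } as required; the general case follows from the same telescoping of the differences $q_i - q_{i+1}$.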
We use {\bf A1} and \eref{e:relSec} to write the square of the left-hand side of \eref{e:estQ} as \equs{ \|\Lambda_1^{\eps-1} P^n(Q)f\|^2 &= |\scal{\Lambda_1^{2\eps-2}P^{2n}(Q) f,f}| \le C(N+1)\bigl|\scal[b]{\Lambda_1^{2\eps-2}\V1'\bigl(Q/(N+1)\bigr)f, Q f}\bigr| + C\|f\|^2 \\ & \le C\Bigl|\Bigl\langle\Lambda_1^{2\eps-2}\Bigl(\sum_{i=0}^N \V1'(q_i)\Bigr) f, Q f \Bigr\rangle\Bigr| + C \sum_{i=0}^N\sum_{j=1}^N |b_{i j}||\scal{\Lambda_1^{2\eps-2}\tilde q_j \V1''(\xi_i) f, Q f}| + C\|f\|^2\\ &\equiv Y^{(1)} + Y^{(2)} + C\|f\|^2\;. } The term $Y^{(2)}$ can be bounded because $\V1'' \in \CF_{2n-2}$, and so \equ{ |\V1''(\xi_i)| \le C(1+\xi_i^2)^{n-1} \le C P^{2n-2}(Q) + C P^{2n-2}(q_i) \le C\sum_{k=0}^N P^{2n-2}(q_k) \;. } Thus, $Y^{(2)}$ can be split into terms of the form \equ{ |\scal{\Lambda_1^{2\eps-2}\tilde q_j P^{2n-2}(q_k) f, Q f}| \le \|\Lambda_1^{1/n-2} P^{2n-2}(q_k) Q f\| \|\Lambda_1^{2\eps-1/n} \tilde q_j f\|\;. } The first factor can clearly be bounded by $C\|f\|$ if we notice that $q \mapsto P^{2n-2}(q_k) Q$ belongs to $\CF_{2n-1}$ and then apply Proposition \ref{prop:Degs}. The second factor can also be bounded by $C\|f\|$ if we impose \equ{ 0 < \eps \le \frac{1}{2n} - \frac{1}{2m}\;, } which can be done because we assumed $n < m$. It remains to estimate $Y^{(1)}$. We define $P = \sum_{i=0}^N p_i$. Since it may easily be verified that $[X_0, P] = \sum_{i=0}^N \V1'(q_i) - {r_{\!L}} - {r_{\!R}}$, we can write $Y^{(1)}$ as \equ{ Y^{(1)} = \scal[b]{\Lambda_1^{2\eps-2} \bigl([X_0, P] + {r_{\!L}} + {r_{\!R}}\bigr)f, Q f} \equiv Y^{(3)} + Y^{(4)} + Y^{(5)}\;. } We leave to the reader the verification that the terms $Y^{(4)}$ and $Y^{(5)}$ can be bounded by $C\|f\|^2$ without introducing any stronger condition on $\eps$. The term $Y^{(3)}$ can be estimated by using Lemma~\ref{lem:estG} with $A = Q$ and $B = P$. We have already verified that \eref{e:estpi} holds for every $i$, so we can define \equ{ \eps_P \equiv \min\{\eps_i \;|\; i=0,\ldots,N\}\;.
} This, together with Proposition \ref{prop:Degs}, allows us to choose \expos[,]{1/n}{1/n}{1}{1-\eps_P}{1}{2-1/n} and thus \eref{e:estQ} is fulfilled if we choose $2\eps \le \eps_P$. This completes the proof of the lemma.\endproof \mysection{Generalization of H\"ormander's theorem} \label{sec:Hormander} \defC_{\!{\cal K}}{C_{\!{\cal K}}} In a celebrated paper \cite{H1}, H\"ormander studied second-order differential operators of the form \equ[e:defP]{ P = \sum_{j=1}^r L_j^* L_j + L_0\;, } where the $L_j$ are some smooth vector fields acting in $\R^d$. He showed that a sufficient condition for the operator $P$ to be hypoelliptic is that the Lie algebra generated by $\{L_0,\ldots,L_r\}$ has maximal rank everywhere. The main step in his proof is to show that there exists a constant $\eps>0$ and, for every compact domain $\CK \subset \R^d$, a constant $C_{\!{\cal K}}$ such that \equ[e:resHor]{ \|u\|_{(\eps)} \le C_{\!{\cal K}}(\|P u\| + \|u\|)\;,\quad\forall\;u\in\cOinf[\CK]\;. } In this expression, the norm $\|\cdot\|_{(\eps)}$ is the natural norm associated with the Sobolev space $H^{\eps}(\R^d)$, \ie \equ{ \|u\|_{(\eps)}^2 = \int_{\R^d} |\hat u(k)|^2 (1+k^2)^{\eps}\, d^d \!k\; \equiv \|(1 + \Delta)^{\eps/2} u\|^2\;. } We base our discussion on the proof presented in \cite{Ho}. H\"ormander first defines $Q_1$ as the set of all properly supported symmetric first-order differential operators $q$ such that for every compact domain $\CK$, there exist constants $C_{\!{\cal K}}'$ and $C_{\!{\cal K}}''$ with \equ[e:constQ1]{ \|q u\|^2 \le C_{\!{\cal K}}' \Re \scal{Pu,u} + C_{\!{\cal K}}'' \|u\|^2\;,\quad u\in \cOinf[\CK]\;. } In particular, if we write $L_j^* = -L_j + c_j$, where $c_j$ is some function, $Q_1$ contains all the operators of the form \equ{ (L_j - c_j/2)/i\;,\quad j \ge 1\;, } as well as their linear combinations. It also contains every operator of order $0$.
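To see why these operators indeed satisfy \eref{e:constQ1}, one can argue as follows (a sketch; we write $L_0^* = -L_0 + c_0$ with $c_0$ a smooth function, which holds because $L_0$ is a vector field). Taking real parts in \eref{e:defP} gives \equ{ \Re\scal{Pu,u} = \sum_{j=1}^r \|L_j u\|^2 + \frac12\scal{c_0 u, u}\;, } so that, setting $q = (L_j - c_j/2)/i$, we get for every $u \in \cOinf[\CK]$ \equ{ \|q u\|^2 \le 2\|L_j u\|^2 + \frac12\|c_j u\|^2 \le 2\,\Re\scal{Pu,u} + C_{\!{\cal K}}''\|u\|^2\;, } since the functions $c_0$ and $c_j$ are bounded on the compact set $\CK$.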
H\"ormander then defines $Q_2$ as consisting of the operator $(P-P^*)/i$, as well as all the commutators of the form $[q,q']/i$ with $q,q' \in Q_1$. For $k>2$, he defines $Q_k$ as the set of all commutators $[q,q']/i$ with $q\in Q_{k-1}$ and $q' \in Q_{k-2}$. One feature of this construction is that a finite number of steps suffices to catch every symmetric first-order differential operator. This is a consequence of the maximal rank hypothesis. The main point of H\"ormander's proof is then the following result. \begin{lemma}[H\"ormander] If $q_k\in Q_k$ and $\eps \le 2^{1-k}$, we have for every $\CK \subset \R^d$ \equ[e:estHor]{ \|q_k u\|_{(\eps-1)} \le C(\|Pu\| +\|u\|)\;,\qquad u \in \cOinf[\CK]\;. } \end{lemma} The proof can be found in \cite[p.~355]{Ho}. The result \eref{e:resHor} then follows almost immediately, because the operators $i\d_j$ all belong to some $Q_k$. Thus there exists some $\eps>0$ such that \equ{ \sum_{j=1}^d \|\d_j u\|^2_{(\eps-1)} \le C_{\!{\cal K}}(\|Pu\| +\|u\|)^2\;,\qquad u \in \cOinf[\CK]\;, } which implies \eref{e:resHor}. One of the major problems encountered in this paper is to find a \emph{global} estimate analogous to \eref{e:resHor}, \ie to find constants $C$ and $\eps$ such that \equ{ \|\tilde \Delta^\eps u \| \le C(\|P u\| + \|u\|)\;,\quad\text{for all}\quad u\in \cOinf(\R^d)\;, } where $\tilde \Delta$ is some modified Laplacean. There are two major difficulties: \begin{list}{$\bullet$}{\setlength{\leftmargin}{5mm}} \item If we were to construct the sets $Q_k$ as in \cite{Ho}, they would not necessarily ``close'' in the sense that the successive commutators could blow up, and the whole proof would break down. To avoid this we do not necessarily put $(P-P^*)/i$ into $Q_2$, but rather $g_0(P-P^*)/i$, where $g_0$ is some bounded function. This allows to get decreasing bounds on the successive commutators. 
This problem does \emph{not} appear in \cite{EPR}, where the successive commutators are all first-order differential operators with constant (or bounded) coefficients. On the other hand, the commutator technique is essentially the same as in \cite{EPR}. \item The above construction does not allow us to deal with arbitrary symmetric first-order differential operators. The reason is that if we want a global equivalent of \eref{e:constQ1}, the set $Q_1$ is no longer allowed to contain products of the $L_j$ and unbounded functions. We thus work with fewer operators, which means that we must track much more closely the expressions which appear in the construction. \end{list} \subsection{General setting} Let us consider the Hilbert space $\H = \Ltwo(\R^d, dx)$ for some integer $d \ge 1$. We define the set ${\mathfrak C}(\H)$ as the set of closed operators on $\H$ and the algebra $\B(\H)$ as the algebra of everywhere defined bounded operators on $\H$. We define $\CD \equiv \cOinf[\R^d]$, which is dense in $\CH$. Let us fix some sub-algebra $\F \subset \B(\H)$ that is closed under conjugation and such that $F\CD \subset \CD$ for all $F \in \F$ (typically $\F$ is some algebra of bounded functions). The advantage of considering $\cOinf[\R^d]$ is that \emph{every} differential operator with sufficiently smooth coefficients is closable on it (see \cite{Yo} for a justification). Moreover, every differential operator with smooth coefficients maps $\CD$ into itself. This allows us to perform formal calculations, \ie every relation between operators appearing in this section is understood to hold on $\CD$. The actual operators are then the closures of the operators defined on $\CD$. We define ${\mathfrak L}$ as the set of all formal expressions of the form \equ{ \sum_{|\ell| \le k} a_\ell(x) D^\ell\;,\qquad k \ge 0\;,\quad a_\ell \in \CC^\infty(\R^d)\;, } where $D^\ell$ denotes the $|\ell|^{\text{th}}$ derivative with respect to the multi-index $\ell$.
By the above remark, any element of ${\mathfrak L}$ can naturally be identified with a differential operator in ${\mathfrak C}(\CH)$. Consider a differential operator $K$ that can be written as \equ[e:defK]{ K = \sum_{i=1}^n X_i^* X_i^{} + X_0^{}\;, \qquad X_j \in {\mathfrak L}\;,\qquad j=1,\ldots,n\;, } where $X_0$ is such that \equ[e:propX0]{ X_0^* = -X_0 + \g\;, \qquad \g \in \F\;. } We now introduce a definition that will be very useful in the sequel. \begin{definition} Let $\CS \subset {\mathfrak L}$ be a finite set of differential operators and $i\ge 0$ a natural number. We define the set $\CY_{\F}^i(\CS)$ as the module over $\F$ generated by the terms \equ{ S_1 S_2 \cdots S_i\;, \qquad S_k \in \CS \cup \{1\}\;, \qquad k=1,\ldots,i\;. } The elements of $\CY_{\F}^i(\CS)$ are naturally identified with densely defined closed operators on $\CH$. If $i=0$, we use the convention $\CY_\F^0(\CS) \equiv \F$. \end{definition} The subscript $\F$ will be dropped in the sequel when the algebra $\F$ is clear from the context. We construct the sets \equ[e:defA0]{ \A_{-1} = \{X_1,\ldots,X_n\}\;,\quad \A_0 = \{g_0X_0,X_1,\ldots,X_n\}\;, \qquad g_0 \in \F\;, } where the operator $g_0$ is assumed to be self-adjoint, positive and such that \equ[e:propg0]{ [g_0, X_0] \in \F\;. } Let us now construct recursively, up to a level $R < \infty$, some finite sets $\CB_i, \A_i \subset {\mathfrak L}$ by the following procedure. Assume $\A_{i-1}$ is known. Consider next the set $\CB_i^{(0)}$ of all $A$ of the form \equ[e:defA]{ A = \sum_{B \in \A_{i-1}} \Bigl(f_B B + \sum_{X\in \A_0} f_{XB}[X, B]\Bigr)\;, \qquad f_B,\, f_{XB} \in \F\;. } We then select a finite subset $\CB_i\subset \CB_i^{(0)}$. The set $\A_i$ is then defined as \equ{ \A_i \equiv \A_{i-1} \cup \CB_i\;. } \begin{remark}It is here that our construction differs from similar ones where \emph{all} elements of $\CB_i^{(0)}$ would have been selected.
This makes the set of operators which we study much smaller, but then we of course have to verify that the operators of interest are really covered by our construction. \end{remark} We will make some working hypotheses on the sets $\A_i$. \begin{hypo}{1} The pair $(\A_R , \F)$ satisfies the following. If $A,B \in \A_R$ and $f \in \F$, then \equ{ [A,B] \in \CY^1(\A_R)\;,\quad A^* \in \CY^1(\A_R)\;,\quad [A,f] \in \F\;. } \end{hypo} \begin{hypo}{2} If $A \in \A_i$ with $i \ge -1$, we have $A^* \in \CY^1(\A_i)$. \end{hypo} \begin{remark} Hypothesis \hyp{1} implies that if $X \in \CY^j(\A_R)$ and $Y \in \CY^k(\A_R)$, then $[X,Y] \in \CY^{k+j-1}(\A_R)$. This will be very useful in the sequel. Hypothesis \hyp{2} implies that the classes $\CY^k(\A_i)$ are closed under conjugation. \end{remark} We now define the operator $\Lambda^2$ by \equ[e:defLambda]{ \Lambda^2 = 1 + \sum_{A\in\A_R} A^*A\;. } This is, in a sense that will become clear from Lemma~\ref{lem:power}, the ``biggest'' operator contained in $\CY^2(\A_R)$. The operator $\Lambda^2$ is symmetric, densely defined and positive. We will moreover assume that \begin{hypo}{3} $\Lambda^2$ is essentially self-adjoint on $\CD$. \end{hypo} The powers $\Lambda^\alpha$ thus exist and are also essentially self-adjoint on $\CD$ for $\alpha \le 2$. \subsection{Results and a preliminary lemma} The following theorem is the main result of this section. \begin{theorem} \label{theo:princ} Let $K$ and $\Lambda$ be defined as above and assume \hyp{1}--\hyp{3} are satisfied for some $R$. Then there exist some constants $C, \eps > 0$ such that for every $f \in \CD$, we have \equ[e:estPrinc]{ \|\Lambda^\eps f\| \le C(\|K f\| + \|f\|)\;. } \end{theorem} In the sequel, we will write $\A$ instead of $\A_R$ to simplify the notation. In order to prove Theorem~\ref{theo:princ}, we need the following lemma, which will be extensively used in the sequel.
\begin{lemma} \label{lem:power} Let $\Lambda$, $\F$ and $\A$ be as above and assume \hyp{1} and \hyp{3} hold. If $X \in \CY_\F^j(\A)$, then the operators \equ{ \Lambda^\beta X \Lambda^\gamma \quad\text{with}\quad \beta+\gamma \le -j } are bounded. If $Y \in {\mathfrak L}$ is such that $[Y,\Lambda^2] \in \CY_\F^j(\A)$, then the operators \equ{ \Lambda^\beta[\Lambda^\alpha,Y]\Lambda^\gamma \quad\text{with}\quad \alpha + \beta + \gamma \le 2-j } are bounded. If $X,Y \in {\mathfrak L}$ are such that \equ{ [X,\Lambda^2] \in \CY_\F^j(\A)\;,\quad [Y,\Lambda^2] \in \CY_\F^k(\A)\quad\text{and}\quad\bigl[[\Lambda^2,X],Y\bigr] \in \CY_\F^{j+k-2}(\A)\;, } then the operators \equ{ \Lambda^\beta\bigl[[\Lambda^\alpha,X],Y\bigr]\Lambda^\gamma \quad\text{with}\quad \alpha + \beta + \gamma \le 4-j-k } are bounded. \end{lemma} \begin{proof} The proof of this lemma is postponed to Appendix \ref{App:main}. \end{proof} \begin{remark} Lemma~\ref{lem:power} allows us to count powers in the following sense. Each time we see an operator that is a monomial containing fractional powers of $\Lambda$ and some operators of $\CY^j(\A)$, we know that the operator is bounded if its ``degree'' is less than or equal to $0$. The rule is that if $Y \in \CY^j(\A)$, its degree is $j$ and the degree of $\Lambda^\alpha$ is $\alpha$. Moreover, every time we encounter a commutator, we can lower the degree by one unit. \end{remark} Lemma~\ref{lem:power} also shows that if $f\in \CD$, $A \in\CA$ and $\alpha \le 2$, expressions such as $A\Lambda^\alpha f$ can be well defined by \equ{ A\Lambda^\alpha f \equiv \Lambda^\alpha A f + [A,\Lambda^\alpha]\Lambda^{-2}\Lambda^2 f\;, } where $[A,\Lambda^\alpha]\Lambda^{-2}$ is bounded and can therefore be defined on all of $\H$. Similar formulas show that every expression appearing in this section is well defined. We are now ready to prove the theorem.
\subsection{Proof of Theorem~\ref{theo:princ}} The proof uses the commutation techniques developed by H\"ormander \cite{Ho} and improved by Eckmann, Pillet and Rey-Bellet \cite{EPR}. Large parts of this proof are inspired by this latter work. Before we start the proof itself, let us make a few computations, the results of which will be used repeatedly in the sequel. We first show that we can assume $\Re K$ positive. An explicit computation, using \eref{e:defK} and \eref{e:propX0}, shows that \equ[e:ReK]{ \Re K = \sum_{i=1}^n X_i^* X_i^{} + \frac{\g}{2}\;,\quad\text{and thus also}\quad X_0 = K - \Re K + \g/2\;. } Because $\g \in \F$, we can add a sufficiently large constant to $X_0$ to make $\Re K$ positive. This will change neither the commutation relations, nor the estimate \eref{e:estPrinc}. Another useful equality is \equ[e:g0ReK]{ g_0\Re K = \Re(g_0K + K_1) + K_2\;,\qquad K_1, K_2 \in \CY^1(\A_{-1})\;, } where $K_1$ is a self-adjoint operator such that $\Re(g_0K + K_1)$ is a positive self-adjoint operator. This is a consequence of the following two equalities, which are easily verified by inspection: \equs{ g_0 \Re K &= \sum_{i=1}^n X_i^* g_0 X_i^{} + K_2\;, \quad K_2 \in \CY^1(\A_{-1})\;, \\ \Re (g_0 K) &= \sum_{i=1}^n X_i^* g_0 X_i^{} - K_1\;, \quad K_1 \in \CY^1(\A_{-1})\;. } We therefore have \equ{ \Re(g_0K + K_1) = \sum_{i=1}^n X_i^* g_0 X_i^{}\;. } This proves \eref{e:g0ReK}. Another useful identity is \equs[e:explX0]{ (g_0 X_0)^* &= -X_0 g_0 + \g g_0 = -g_0 X_0 + [g_0, X_0] + \g g_0 \\ &= -g_0 X_0 + g_0'\;, \qquad g_0' \in \F\;, } where the last equality is a consequence of \eref{e:propg0}. We will now verify the estimate \eref{e:estPrinc} for some vector $f\in \CD$. In the sequel, the symbol $C$ will be used to denote some constant depending only on the operator $K$. This constant can change from one line to the next. We will first prove that $A \in \CY^1(\A_i)$ with $0 \le i \le R$ implies \equ[e:est]{ \|\Lambda^{1/4^{i+1} - 1}A f\| \le C(\|K f\| + \|f\|)\;.
} In fact, an immediate consequence of the first part of Lemma~\ref{lem:power} is that we only have to prove this assertion for $A \in \A_i$. The proof will proceed by induction on $i$. \subsubsection{Verification for $\boldsymbol{i=0}$} We want to verify the estimate \equ{ \|\Lambda^{-3/4}A f\| \le C(\|K f\|+\|f\|)\;,\quad\text{for all}\quad A \in \A_0\;. } The cases $A=g_0 X_0$ and $A = X_j$ with $j \neq 0$ will be treated separately. \proclaim{The case $\boldsymbol{A = X_j}$.} We write \equs{ \|\Lambda^{-3/4}X_jf\|^2 \le C\|X_j f \|^2 &\le C\scal{f,X_j^* X_j^{}f} \le C \scal{f,(K + K^* - \g)f} \\ &\le C\Re\scal{f,K f} + C\|f\|^2 \le C\|f\|(\|K f\| + \|f\|)\;. } This implies the desired estimate. Because $X_j^* \in \CY^1(\A_{-1})$ by hypothesis, this computation immediately implies the estimates \sublabels \equs[1,e:estX]{ \|X_j f \| &\le C(\|K f\| + \|f\|)\;, \sublabel{e:myestX} \\ \|X_j^* f \| &\le C(\|K f\| + \|f\|)\;,\sublabel{e:estXstar} } which hold for every $j \ge 1$. \proclaim{The case $\boldsymbol{A = g_0X_0}$.} We write, using expression \eref{e:ReK}, \equs{ \|\Lambda^{-3/4}A f\|^2 &= \scal{g_0X_0f, \Lambda^{-3/2}A f} \\ &= \scal{K f, g_0\Lambda^{-3/2}A f} + \scal{g_0\g f, \Lambda^{-3/2}A f}/2 - \scal{(\Re K)f,g_0 \Lambda^{-3/2}A f} \\ &\equiv S_1 + S_2 - S_3\;. } The terms $S_1$ and $S_2$ are easily bounded by $C(\|K f\| + \|f\|)^2$, using the Cauchy-Schwarz inequality and the first part of Lemma~\ref{lem:power}. Using the positivity of $\Re K$ and the explicit form of $K$, the term $S_3$ can be bounded as \equs{ |S_3| &= |\scal{(\Re K)^{1/2}f,(\Re K)^{1/2}g_0 \Lambda^{-3/2}A f}| \\[2mm] &\le |\Re\scal{K f,f}|^{1/2}|\scal{(\Re K)g_0 \Lambda^{-3/2}A f,g_0 \Lambda^{-3/2}A f}|^{1/2} \\ &\le \sqrt{\|K f\|\|f\|}\,\Bigl|\scal{\g g_0 \Lambda^{-3/2}A f,g_0\Lambda^{-3/2}A f}/2+\sum_{i=1}^n\|X_ig_0 \Lambda^{-3/2}A f\|^2\Bigr|^{1/2} \\ &\equiv \sqrt{\|K f\|\|f\|}\,\sqrt{S_0 + \sum_{i=1}^n S_{0,i}^2}\;.
} The term $S_0$ is estimated by simple power counting (the $\Lambda$'s contribute $-3$ and the $A$'s $2$ to the total degree of the expression, hence $|S_0| \le C\|f\|^2$). The terms $S_{0,i}$ are estimated by writing \equ{ |S_{0,i}| \le \|g_0 \Lambda^{-3/2}AX_if\| + \|[X_i,g_0 \Lambda^{-3/2}A]f\|\;. } The first term is estimated by using \eref{e:estX} and power counting. The second term is estimated by expanding the commutator as \equ{ [X_i,g_0 \Lambda^{-3/2}A] = [X_i,g_0] \Lambda^{-3/2}A + g_0[X_i,\Lambda^{-3/2}]A + g_0 \Lambda^{-3/2}[X_i,A]\;, } and estimating the resulting terms separately. \subsubsection{The induction hypothesis} We shall proceed by induction. Let us fix $j>0$, take $A \in \A_j$ and assume \eref{e:est} holds for $i<j$. Let us moreover define $\eps \equiv 1/4^{j+1}$ in order to simplify the notation. Our assumption is therefore that \equ[e:indHyp]{ \|\Lambda^{4\eps-1}B f\| \le C(\|K f\| + \|f\|)\qquad \forall\; B\in \CY^1(\A_{j-1})\;. } We will now prove that this assumption implies the desired estimate, \ie \equ[e:mainEst]{ \|\Lambda^{\eps-1}A f\| \le C(\|K f\| + \|f\|)\qquad \forall\; A\in \CY^1(\A_{j})\;. } This, together with the preceding paragraph, will imply the estimate \eref{e:est}. \subsubsection{Proof of the main estimate} Because of the induction hypothesis, we only have to check \eref{e:mainEst} for $A \in \A_j \backslash \A_{j-1}$. By \eref{e:defA}, we can write \equ{ A = \sum_{B \in \A_{j-1}} \Bigl(f_B B + f^0_B [g_0X_0,B] + \sum_{i=1}^n f^i_B [X_i,B]\Bigr)\;, } with all the $f$ belonging to $\F$. We have \equs{ \|\Lambda^{\eps - 1}A f\|^2 &= \sum_{B \in \A_{j-1}} \scal[B]{\Bigl(f_B B + f^0_B [g_0X_0,B] + \sum_{i=1}^n f^i_B [X_i,B]\Bigr)f, \Lambda^{2\eps-2} A f} \\ &\equiv \sum_{B \in \A_{j-1}} \Bigl(T_{B} + T^0_B + \sum_{i=1}^n T_{B}^i\Bigr)\;. } We are going to bound each term of this sum separately by $C(\|K f\| + \|f\|)^2$.
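Before turning to the individual terms, let us make explicit the power-counting rule that is used throughout, in the informal way in which it was just applied to $S_0$: we assign degree $s$ to $\Lambda^s$, degree $1$ to every element of $\CY^1(\A)$, and degree $0$ to multiplication by elements of $\F$; by Lemma~\ref{lem:power}, a product of such operators is then bounded as soon as its total degree is at most $0$. For example, the operator \equ{ \Lambda^{1-2\eps}h^*\Lambda^{2\eps-2}A\;,\qquad h \in \F\;,\quad A \in \CY^1(\A)\;, } is bounded, since its total degree is $(1-2\eps) + 0 + (2\eps-2) + 1 = 0$.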
\proclaim{Term $\boldsymbol{T_{B}}$.} We have \equ{ |T_{B}| = |\scal{\Lambda^{2\eps-1} f_B \Lambda^{1-2\eps}\Lambda^{2\eps-1}B f, \Lambda^{-1}A f}|\;. } The operators $\Lambda^{2\eps-1} f_B \Lambda^{1-2\eps}$ and $\Lambda^{-1}A$ are bounded by Lemma~\ref{lem:power}. Using the induction hypothesis \eref{e:indHyp}, we thus get the bound $|T_{B}| \le C(\|K f\|+\|f\|)^2$. \proclaim{Term $\boldsymbol{T_B^i}$ with $\boldsymbol{i \neq 0}$.} We define $h \equiv f^i_B$. The term $T_B^i$ is then written as \equ{ T_B^i = \scal{B f, X_i^*h^*\Lambda^{2\eps-2}A f} - \scal{X_if, h^* B^* \Lambda^{2\eps-2}A f} \equiv Q_1 - Q_2\;. } \proclaim{Term $\boldsymbol{Q_1}$.} It can be estimated by writing \equ{ Q_1 = \scal{B f, h^*\Lambda^{2\eps-2}AX_i^*f} + \scal{B f, [X_i^*,h^*\Lambda^{2\eps-2}A]f}\;. } The first term is estimated by rewriting it as \equs{ |\scal{B f, h^*\Lambda^{2\eps-2}AX_i^*f}| &= |\scal{\Lambda^{2\eps-1}B f, \Lambda^{1-2\eps}h^*\Lambda^{2\eps-2}AX_i^*f}| \\ &\le \|\Lambda^{2\eps-1}B f\|\|\Lambda^{1-2\eps}h^*\Lambda^{2\eps-2}AX_i^*f\| \le C(\|K f\|+\|f\|)^2\;. } The last inequality has been obtained by using the induction hypothesis \eref{e:indHyp}, the estimate \eref{e:estXstar} and the fact that the operator $\Lambda^{1-2\eps}h^*\Lambda^{2\eps-2}A$ is bounded by Lemma~\ref{lem:power}. The second term is estimated as \equs{ |\scal{B f, [X_i^*,h^*\Lambda^{2\eps-2}A]f}| &= |\scal{\Lambda^{2\eps-1}B f,\Lambda^{1-2\eps} [X_i^*,h^*\Lambda^{2\eps-2}A]f}| \\ &\le \|\Lambda^{2\eps-1}B f\| \|\Lambda^{1-2\eps} [X_i^*,h^*\Lambda^{2\eps-2}A]f\|\;. } The term $\|\Lambda^{2\eps-1}B f\|$ is bounded by the induction hypothesis \eref{e:indHyp}. The other term can be estimated by writing the commutator as \equ{ [X_i^*,h^*\Lambda^{2\eps-2}A] = [X_i^*,h^*]\Lambda^{2\eps-2}A + h^*[X_i^*,\Lambda^{2\eps-2}]A + h^*\Lambda^{2\eps-2}[X_i^*,A]\;. } The resulting terms are estimated by power counting, using the fact that $X_i^* \in \CY^1(\A)$. 
\proclaim{Term $\boldsymbol{Q_2}$.} We bound this term as \equs{ |Q_2| &= \bigl| \scal{X_i f, h^* \Lambda^{2\eps-2}AB^*f} + \scal{X_if, h^*[B^*,\Lambda^{2\eps-2}A]f}\bigr| \\[1.5mm] &\le \|X_i f\|\bigl(\|h^* \Lambda^{2\eps-2}AB^*f\| + \| h^*[B^*,\Lambda^{2\eps-2}A]f\|\bigr) \\[1.5mm] &\le \|X_i f\|\bigl(\|h^* \Lambda^{2\eps-2}A\Lambda^{1-2\eps}\|\|\Lambda^{2\eps-1}B^*f\| + \| h^*[B^*,\Lambda^{2\eps-2}A]f\|\bigr)\;. } We leave to the reader the not too hard task of verifying that the bound $|Q_2| \le C(\|K f\| + \|f\|)^2$ can indeed be obtained by estimates similar to those for the term $Q_1$. \proclaim{Term $\boldsymbol{T_{B}^0}$.} We define $h \equiv f^0_B$. The term $T_B^0$ is thus equal to \equ{ T_B^0 = \scal{[g_0 X_0,B]f,h^*\Lambda^{2\eps-2}A f} = \scal{g_0 X_0 B f,h^*\Lambda^{2\eps-2}A f} - \scal{B g_0 X_0 f,h^*\Lambda^{2\eps-2}A f}\;. } We use \eref{e:explX0} to write this as \equs{ T_B^0 =&\, -\scal{B f, h^*\Lambda^{2\eps-2}A g_0 X_0 f} + \scal{B f, g_0'h^*\Lambda^{2\eps-2}A f} \\[1mm] &\, -\scal{B f, [g_0 X_0, h^*\Lambda^{2\eps-2}A]f} - \scal{B g_0 X_0 f,h^*\Lambda^{2\eps-2}A f} \\[1mm] \equiv&\, - U_1 + U_2 - U_3 - U_4\;, } where $g_0' \in \F$. The term $U_2$ can easily be estimated by \equs{ |U_2| &= |\scal{\Lambda^{2\eps-1}B f, \Lambda^{1-2\eps}g_0'h^*\Lambda^{2\eps-2}A f}| \le \|\Lambda^{2\eps-1}B f\|\|\Lambda^{1-2\eps}g_0'h^*\Lambda^{2\eps-2}A f\| \\[1mm] &\le C(\|K f\| + \|f\|)\|f\|\;, } using the induction hypothesis. In order to estimate the term $U_3$, we notice that $g_0 X_0 \in \A$, and thus $[g_0X_0,\Lambda^2] \in \CY^2(\A)$. We can therefore write \equ{ |U_3| = |\scal{\Lambda^{2\eps-1}B f, \Lambda^{1-2\eps}[g_0 X_0, h^*\Lambda^{2\eps-2}A]f}| \le \|\Lambda^{2\eps-1}B f\|\|\Lambda^{1-2\eps}[g_0 X_0, h^*\Lambda^{2\eps-2}A]f\|\;, } expand the commutator and estimate the resulting terms separately by power counting.
We use the equality \equ{ X_0 = K - \Re K + \g/2\;, } to write the terms $U_1$ and $U_4$ as \equs{ U_1 &= \bigl\langle B f, h^*\Lambda^{2\eps-2}A g_0 \bigl(K - (\Re K) + \g/2\bigr) f\bigr\rangle \equiv T_{B,1} - T_{B,2} + T_{B,3}\;,\\ U_4 &= \bigl\langle B g_0 \bigl(K - (\Re K) + \g/2\bigr)f, h^*\Lambda^{2\eps-2}A f\bigr\rangle \equiv T_{B,4} - T_{B,5} + T_{B,6}\;. } Each of these terms will now be estimated separately. \proclaim{Terms $\boldsymbol{T_{B,3}}$ and $\boldsymbol{T_{B,6}}$.} They are easily bounded like the term $U_2$ by power counting and using the induction hypothesis to bound $\|\Lambda^{2\eps-1}B f\|$. In the case of $T_{B,6}$, we first have to commute $B$ with $g_0\g/2$, but this does not cause any problem. \proclaim{Term $\boldsymbol{T_{B,1}}$.} This term can be estimated by \equ{ |T_{B,1}| \le \|K f\|\|g_0^* A^* \Lambda^{2\eps-2}h B f\| \le \|K f\|\|g_0^* A^* \Lambda^{2\eps-2}h\Lambda^{1-2\eps}\|\|\Lambda^{2\eps-1} B f\|\;. } The norm of $g_0^* A^* \Lambda^{2\eps-2}h\Lambda^{1-2\eps}$ is bounded by power counting. Using the induction hypothesis \eref{e:indHyp}, we thus have $|T_{B,1}| \le C(\|K f\| + \|f\|)^2$. \proclaim{Term $\boldsymbol{T_{B,4}}$.} We have the estimate \equ{ |T_{B,4}| = |\scal{K f,g_0^*B^*h^*\Lambda^{2\eps-2}A f}| \le \|K f\|\|g_0^*B^*h^*\Lambda^{2\eps-2}A f\|\;. } The second norm can be estimated by writing \equ{ \|g_0^*B^*h^*\Lambda^{2\eps-2}A f\| \le \|g_0^*h^*\Lambda^{2\eps-2}A\Lambda^{1-2\eps}\|\|\Lambda^{2\eps-1}B^*f\| + \|g_0^*[B^*,h^*\Lambda^{2\eps-2}A]f\|\;. } Here, the first term can be bounded by $C(\|K f\| + \|f\|)$ because, by \hyp{2}, we have $B^* \in \CY^1(\A_{j-1})$ and so we can use the induction hypothesis. The commutator can be expanded and bounded by power counting.
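Let us point out the mechanism behind the last two estimates, since it recurs constantly in the sequel: one inserts the identity in the form $\Lambda^{1-2\eps}\Lambda^{2\eps-1}$ next to $B$ or $B^*$, \equ{ \|S B f\| = \|S \Lambda^{1-2\eps}\,\Lambda^{2\eps-1}B f\| \le \|S \Lambda^{1-2\eps}\|\,\|\Lambda^{2\eps-1}B f\|\;, } so that the second factor is controlled by the induction hypothesis \eref{e:indHyp} (note that $2\eps-1 \le 4\eps-1$ and $\Lambda \ge 1$), while the operator $S \Lambda^{1-2\eps}$ is bounded by power counting whenever $S$ has total degree at most $2\eps-1$.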
\proclaim{Term $\boldsymbol{T_{B,2}}$.} We can write this term as \equs{ T_{B,2} =&\, \scal{\Lambda^{2\eps-1}h B f,(g_0 \Re K)\Lambda^{-1}A f} + \scal{\Lambda^{2\eps-1}h B f,[\Lambda^{-1}A, g_0 \Re K]f} \\ =&\, \scal{\Lambda^{2\eps-1}h B f,K_2 \Lambda^{-1}A f} + \scal{\Lambda^{2\eps-1}h B f,[\Lambda^{-1}A, g_0 \Re K]f} \\ &\,+ \scal{\Lambda^{2\eps-1}h B f,\Re(g_0 K + K_1)\Lambda^{-1}A f} \\ \equiv&\, M_1 + M_2 + M_3\;, } where the second equality has been obtained using \eref{e:g0ReK}. These terms can now be estimated separately. \proclaim{Term $\boldsymbol{M_1}$.} We write this term as \equ{ M_1 = \scal{\Lambda^{2\eps-1}h B f,\Lambda^{-1}A K_2 f} + \scal{\Lambda^{2\eps-1}h B f,[K_2,\Lambda^{-1}A]f}\;. } The first term is estimated by using \equ{ K_2 \in \CY^1(\A_{-1}) \quad\Rightarrow\quad \|K_2f\| \le C(\|K f\| + \|f\|)\;, } where the implication is a straightforward consequence of \eref{e:myestX}. The second term can be estimated by power counting and the induction hypothesis, using the fact that $K_2 \in \CY^1(\A_{-1})$, so that $[K_2,\Lambda^{-1}A]$ is bounded. \proclaim{Term $\boldsymbol{M_2}$.} We use the explicit form of $\Re K$ to write this term as \equs{ M_2 =&\; \scal{\Lambda^{2\eps-1}h B f,[\Lambda^{-1}A, g_0] (\Re K) f} \\ & + \sum_{i=1}^n\bigl(\scal{\Lambda^{2\eps-1}h B f,g_0 X_i^* [\Lambda^{-1}A, X_i]f} + \scal{\Lambda^{2\eps-1}h B f,g_0[\Lambda^{-1}A,X_i^*]X_i^{}f}\bigr) \\ & + \scal{\Lambda^{2\eps-1}h B f,g_0[\Lambda^{-1}A, \g] f}/2\\ \equiv&\; M_{20} + \sum_{i=1}^n (M_{i1}+ M_{i2}) + M_{21}\;. } The term $M_{20}$ is estimated by using the explicit form of $\Re K$ to decompose it into terms of the form \equ{ |\scal{\Lambda^{2\eps-1}h B f,[\Lambda^{-1}A, g_0]X_i^*X_i^{} f}| \le \|[\Lambda^{-1}A, g_0]X_i^*\|\|\Lambda^{2\eps-1}h B f\|\|X_i f\|\;. } The norm $\|[\Lambda^{-1}A, g_0]X_i^*\|$ is finite by Lemma~\ref{lem:power}.
The terms $\|\Lambda^{2\eps-1}h B f\|$ and $\|X_i f\|$ are bounded by $C(\|K f\| + \|f\|)$, using the induction hypothesis \eref{e:indHyp} and the estimate \eref{e:myestX} respectively. The terms $M_{21}$ and $M_{i2}$ are estimated by power counting and the induction hypothesis. In order to estimate the term $M_{i1}$, we have to commute once more to find \equ{ M_{i1} = \scal{\Lambda^{2\eps-1}h B f,g_0 [\Lambda^{-1}A, X_i^{}]X_i^*f} + \bigl\langle{\Lambda^{2\eps-1}h B f,g_0 \bigl[X_i^*,[\Lambda^{-1}A, X_i^{}]\bigr]f}\bigr\rangle\;. } The first term is estimated by using \eref{e:estXstar}. The second term is estimated by expanding the double commutator and power counting. \proclaim{Term $\boldsymbol{M_3}$.} We use the positivity of $\Re (g_0 K + K_1)$ to write \equs{ |M_3| =&\; \bigl|\scal[b]{\bigl(\Re (g_0K + K_1)\bigr)^{1/2}\Lambda^{2\eps-1}h B f,\bigl(\Re (g_0K + K_1)\bigr)^{1/2}\Lambda^{-1}A f}\bigr| \\ \le&\; |\Re\scal{(g_0K + K_1)\Lambda^{2\eps-1}h B f,\Lambda^{2\eps-1}h B f}|^{1/2}|\scal{\Re(g_0K + K_1)\Lambda^{-1}A f,\Lambda^{-1}A f}|^{1/2} \\ \le&\; \sqrt{|\Re M_4| + |\Re M_5|}\sqrt{|M_6|}\;. } We will now estimate $M_4$, $M_5$ and $M_6$ separately. \proclaim{Term $\boldsymbol{M_4}$.} We want to put the operator $g_0K$ to the left of $f$. So we write \equ{ M_4 = \scal{\Lambda^{-1}h Bg_0K f,\Lambda^{4\eps-1}h B f} + \scal{[g_0K,\Lambda^{2\eps-1}h B]f,\Lambda^{2\eps-1}h B f}\equiv M_{41} + M_{42}\;. } The term $M_{41}$ is estimated easily by using the induction hypothesis and the fact that $\Lambda^{-1}h Bg_0$ is bounded.
In order to estimate $M_{42}$, we use the explicit form of $K$ to write \equs{ M_{42} =&\; \scal{\Lambda^{-2\eps}[g_0X_0,\Lambda^{2\eps-1}h B]f,\Lambda^{4\eps-1}h B f} \\ &+ \sum_{i=1}^n \bigl(\scal{g_0X_i^*[X_i^{},\Lambda^{2\eps-1}h B]f,\Lambda^{2\eps-1}h B f} + \scal{g_0[X_i^*,\Lambda^{2\eps-1}h B]X_i f,\Lambda^{2\eps-1}h B f}\bigr) \\ &+ \scal{\Lambda^{-2\eps}[g_0,\Lambda^{2\eps-1}h B]K f,\Lambda^{4\eps-1}h B f} \\ \equiv&\; M_{40} + \sum_{i=1}^n (M_{i3} + M_{i4}) + M_{4K}\;. } The terms $M_{40}$ and $M_{4K}$ are estimated by expanding the commutator and power counting. The term $M_{i4}$ can be written as \equs{ |M_{i4}| &= |\scal{\Lambda^{-2\eps}g_0[X_i^*,\Lambda^{2\eps-1}h B]X_i f,\Lambda^{4\eps-1}h B f}| \\ &\le \|\Lambda^{1-4\eps}h^*\Lambda^{2\eps-1}g_0[X_i^*,\Lambda^{2\eps-1}h B]\|\|X_i f\|\|\Lambda^{4\eps-1}B f\|\;. } It is then estimated by power counting, using moreover the induction hypothesis and the estimate \eref{e:estX}. In order to estimate the term $M_{i3}$, we have to commute once more to write \equ{ M_{i3} = \scal{\Lambda^{-2\eps}g_0[X_i^{},\Lambda^{2\eps-1}h B]X_i^*f,\Lambda^{4\eps-1}h B f} + \bigl\langle\Lambda^{-2\eps}g_0\bigl[X_i^*,[X_i^{},\Lambda^{2\eps-1}h B]\bigr]f,\Lambda^{4\eps-1}h B f\bigr\rangle\;. } The first term is estimated exactly like $M_{i4}$. The second term can then be estimated by expanding the double commutator and power counting. \proclaim{Term $\boldsymbol{M_5}$.} We write this term as \equ{ M_5 = \scal{\Lambda^{-1}h B K_1f, \Lambda^{4\eps-1}h B f} + \scal{\Lambda^{-2\eps}[K_1,\Lambda^{2\eps-1}h B]f, \Lambda^{4\eps-1}h B f}\;. } The first term is estimated using the induction hypothesis and the fact that \eref{e:g0ReK} and \eref{e:estX} imply \equ[e:estK1]{ K_1 \in \CY^1(\A_{-1}) \quad\Rightarrow\quad \|K_1 f\| \le C(\|K f\| + \|f\|)\;. } The other term is estimated by using the fact that $[K_1, \Lambda^2] \in \CY^2(\A)$ and $[K_1, h B] \in \CY^1(\A)$, which follows from $\A_{-1} \subset \A$ and thus $K_1 \in \CY^1(\A)$.
\proclaim{Term $\boldsymbol{M_6}$.} We use the explicit expression for $\Re(g_0 K + K_1)$ to write this term as \equ{ M_6 = \sum_{i=1}^n\|g_0^{1/2}X_i \Lambda^{-1}A f\|^2 \le C\sum_{i=1}^n\|X_i \Lambda^{-1}A f\|^2\;. } These terms are easily estimated by putting the $X_i$ to the left of $f$, using \eref{e:estX} and estimating the commutators. \proclaim{Term $\boldsymbol{T_{B,5}}$.} This is the last term we have to estimate. Using the expression \eref{e:g0ReK} and the positivity of $\Re(g_0 K + K_1)$, it can be written in the form \equs{ T_{B,5} =&\; \bigl\langle\bigl(\Re(g_0 K + K_1)\bigr)^{1/2}f, \bigl(\Re(g_0 K + K_1)\bigr)^{1/2}B^*h^*\Lambda^{2\eps-2}A f\bigr\rangle \\ &+ \scal{K_2f, B^*h^*\Lambda^{2\eps-2}A f} \\ \equiv&\; N_1 + N_2\;. } These terms are now estimated separately. \proclaim{Term $\boldsymbol{N_2}$.} We use the Cauchy-Schwarz inequality to write \equ{ |N_2| \le \|K_2 f\|\|B^*h^*\Lambda^{2\eps-2}A f\| \equiv \|K_2 f\|\|N_3\|\;. } We can estimate $N_3$ by writing \equ{ B^*h^*\Lambda^{2\eps-2}A = h^*\Lambda^{2\eps-2}A\Lambda^{1-2\eps}\Lambda^{2\eps-1} B^* + [B^*,h^*\Lambda^{2\eps-2}A]\;, } and estimating the resulting terms using the induction hypothesis. We already noticed that we have the desired estimate for $\|K_2 f\|$. \proclaim{Term $\boldsymbol{N_1}$.} Using the Cauchy-Schwarz inequality, we write it as \equs{ |N_1| & \le \scal{f,\Re (g_0 K + K_1)f}^{1/2}\scal{\Re(g_0 K + K_1)B^*h^*\Lambda^{2\eps-2}A f,B^*h^* \Lambda^{2\eps-2}A f}^{1/2} \\ & \le C(\|K f\| + \|f\|)|\scal{\Lambda^{-2\eps}(g_0K + K_1)B^*h^*\Lambda^{2\eps-2}A f,\Lambda^{2\eps}B^*h^* \Lambda^{2\eps-2}A f}|^{1/2}\\ & \equiv C(\|K f\| + \|f\|)\sqrt{|\scal{f_1+f_2,f_3}|} \le C(\|K f\| + \|f\|)\sqrt{(\|f_1\|+\|f_2\|)\|f_3\|}\;. } \proclaim{Estimate of $\boldsymbol{\|f_3\|}$.} We write it as \equ{ f_3 = \Lambda^{2\eps}h^*\Lambda^{2\eps-2}A\Lambda^{1-4\eps}\Lambda^{4\eps-1}B^*f + \Lambda^{2\eps}[B^*, h^*\Lambda^{2\eps-2}A]f \;.
} The first term is estimated by using the induction hypothesis and the fact that \hyp{2} implies $B^* \in \CY^1(\A_{j-1})$. The second term is estimated by power counting and by using the fact that $\eps < 1/4$. \proclaim{Estimate of $\boldsymbol{\|f_2\|}$.} We write it as \equ{ f_2 = \Lambda^{-2\eps} B^*h^*\Lambda^{2\eps-2}AK_1f + \Lambda^{-2\eps}[K_1,B^*h^*\Lambda^{2\eps-2}A]f\;. } The first term is estimated using the fact that $\|K_1f\| \le C(\|K f\| + \|f\|)$ and power counting. The second term is simply estimated by power counting, and the fact that $K_1\in \CY^1(\A)$. \proclaim{Estimate of $\boldsymbol{\|f_1\|}$.} We use the explicit form of $K$ to write $f_1$ as \equs{ f_1 =&\; \Lambda^{-2\eps}B^*h^*\Lambda^{2\eps-2}Ag_0K f + \Lambda^{-2\eps}[g_0X_0,B^*h^*\Lambda^{2\eps-2}A]f \\ &+ \sum_{i=1}^n\bigl(\Lambda^{-2\eps}g_0 X_i^*[X_i^{},B^*h^*\Lambda^{2\eps-2}A]f + \Lambda^{-2\eps}[g_0 X_i^{*},B^*h^*\Lambda^{2\eps-2}A]X_i f\bigr) \\ \equiv&\; Q_K + Q_0 + \sum_{i=1}^n(Q_{i,1}+Q_{i,2})\;. } These terms will now be estimated separately. \proclaim{Term $\boldsymbol{Q_K}$.} We notice that the operator \equ{ \Lambda^{-2\eps}B^*h^*\Lambda^{2\eps-2}Ag_0 } is bounded by power counting. This yields the desired estimate. \proclaim{Term $\boldsymbol{Q_0}$.} This term is bounded by $C\|f\|$ by power counting, noticing that $g_0 X_0 \in \A$. \proclaim{Term $\boldsymbol{Q_{i,2}}$.} This term can be estimated by power counting if we expand the commutator and use the estimate \eref{e:estX}. \proclaim{Term $\boldsymbol{Q_{i,1}}$.} We use once more the trick of putting the $X_i^*$ to the left of $f$. We therefore write \equ{ Q_{i,1} = \Lambda^{-2\eps}g_0[X_i^{},B^*h^*\Lambda^{2\eps-2}A]X_i^*f + \Lambda^{-2\eps}g_0\bigl[X_i^*,[X_i^{},B^*h^*\Lambda^{2\eps-2}A]\bigr]f\;. } The first term is estimated by using \eref{e:estXstar} and expanding the commutator. The second term is estimated in a similar way by expanding the double commutator.
We don't write the resulting terms here, because there are too many of them. They are all bounded by simple power counting and by using Lemma~\ref{lem:power}. This completes the proof of estimate \eref{e:mainEst}. It is now straightforward to prove the theorem. Recall that $R$ is the level up to which the $\CA_i$ are defined. We put $\eps = 1/4^{R+1}$, and we write: \equs{ \|\Lambda^\eps f\|^2 &= \scal{f, \Lambda^{2\eps-2}\Lambda^2f} = \sum_{A \in \A} \scal{f, \Lambda^{2\eps-2}A^*A f} \\ &= \sum_{A \in \A} \bigl(\|\Lambda^{\eps-1}A f\|^2 + \scal{f, [A^*, \Lambda^{2\eps-2}]A f}\bigr)\;. } The first term in the sum is bounded by using \eref{e:est}, the second term by simple power counting. This finally completes the proof of Theorem~\ref{theo:princ}.\phantom{a}\nobreak\hfill\qed We next note a consequence of this theorem, namely a simple criterion to decide whether a quadratic differential operator has compact resolvent. It is an easy illustration of the technique that will be used in the sequel to show that $K$ has compact resolvent. \subsection{Quadratic differential operators} \begin{definition} An operator $A : \CD(A) \to \CH$ is called \emph{accretive} if it satisfies \equ{ \Re \scal{f, A f} \ge 0 \;,\quad\text{for all}\quad f \in \CD(A)\;. } An operator $A$ is called \emph{quasi accretive} if there exists $\lambda \in \R$ such that $A + \lambda$ is accretive. It is called \emph{strictly accretive} if there exists $\lambda > 0$ such that $A - \lambda$ is still accretive. \end{definition} If $-A$ is accretive, $A$ is called \emph{dissipative}. An operator $A$ is called \emph{{\it m}-accretive} if it is accretive and if $(A + \lambda)^{-1}$ exists for all $\lambda > 0$ and satisfies $\|(A + \lambda)^{-1}\| \le \lambda^{-1}$. The expressions {\it m}-dissipative, quasi dissipative, etc.~are defined similarly in an obvious way. An equivalent characterization of {\it m}-accretive operators is that they are accretive with no proper accretive extension.
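As a standard illustration of these notions (not needed in the sequel), consider on $\Ltwo(\R)$ the operator $A = \d_x$ with domain $\CD(A) = \{f \in \Ltwo(\R)\;|\; \d_x f \in \Ltwo(\R)\}$. Integration by parts gives \equ{ \Re \scal{f, \d_x f} = \frac{1}{2}\int_{\R} \d_x |f(x)|^2\,dx = 0\;, } so both $A$ and $-A$ are accretive. Moreover, for every $\lambda > 0$ the resolvent is given explicitly by \equ{ \bigl((A + \lambda)^{-1}f\bigr)(x) = \int_{-\infty}^{x} e^{-\lambda(x-y)}f(y)\,dy\;, } and satisfies $\|(A + \lambda)^{-1}\| \le \lambda^{-1}$ by Young's inequality, so $A$ is even {\it m}-accretive: it generates the contraction group of translations.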
It is a classical result (see \eg \cite{Da}) that the quasi {\it m}-dissipative operators are precisely the generators of quasi-bounded semi-groups. An immediate consequence is that if an operator $A$ is (quasi) {\it m}-accretive ({\it m}-dissipative), its adjoint $A^*$ is also (quasi) {\it m}-accretive ({\it m}-dissipative). \begin{proposition} \label{prop:Comp} Let $\CH$ be a Hilbert space and $\CC$ be a dense subset of $\CH$. Let $K : \CD(K) \to \CH$ be a quasi {\it m}-accretive (or quasi {\it m}-dissipative) operator and let $\Lambda^2 : \CD(\Lambda^2) \to \CH$ be a self-adjoint positive operator such that $\CC \subset \CD(\Lambda^2)$. Assume moreover that $\CC$ is a core for $K$, that $\Lambda^2$ has compact resolvent and that there are constants $C>0$ and $0 < \eps < 2$ such that \equ[e:estComp]{ \|\Lambda^\eps f\| \le C(\|K f\| + \|f\|)\;,\quad\text{for all}\quad f \in \CC\;. } Then $K$ has compact resolvent too. \end{proposition} \begin{proof} By assumption, there exists a constant $\lambda > 0$ such that $K + \lambda$ is strictly {\it m}-accretive. Moreover, \eref{e:estComp} with $K$ replaced by $K+\lambda$ holds if we change the constant $C$. Since $\CC$ is a core for $K$, a simple approximation argument shows that $\CD(K) \subset \CD(\Lambda^\eps)$ and that \eref{e:estComp} holds for every $f \in \CD(K)$. The quadratic form $\scal{f,(K+\lambda)^*(K+\lambda)f} = \|(K+\lambda)f\|^2$ therefore dominates a multiple of $\scal{f,\Lambda^{2\eps}f}$; since $\Lambda^{2\eps}$ inherits a compact resolvent from $\Lambda^2$, the min--max principle implies that $(K+\lambda)^*(K+\lambda)$ has compact resolvent. Since $(K+\lambda)$ is strictly {\it m}-accretive, it is invertible and the operator \equ{ \bigl((K+\lambda)^*(K+\lambda)\bigr)^{-1} = (K+\lambda)^{-1}\bigl((K+\lambda)^{-1}\bigr)^{*}\;, } is compact. Moreover, we know that $(K+\lambda)^{-1}$ is closed, so we can make the polar decomposition \equ{ (K+\lambda)^{-1} = P J\;, } with $P$ self-adjoint and $J$ unitary. Thus $P^2$ is compact. By the spectral theorem and the characterization of compact operators, this immediately implies $P$ compact, and thus also $P J$ compact. Thus $K$ has compact resolvent.
\end{proof} We now consider $\H=\Ltwo(\R^d)$ and $\F = \{\lambda I\;|\; \lambda \in \R\}$, where $I$ is the identity operator in $\H$. We define the formal expressions \equs{ x^T &= (x_1,\ldots,x_d)\;,\\ \d_x^T &= (\d_{x_1},\ldots,\d_{x_d})\;. } Let $A:\R^d \to \R^d$ be a linear map and \equ{ \CB = \{b_i \in \R^d \;|\; i=1,\ldots,s\}\;,\quad \CC = \{c_i \in \R^d \;|\; i=1,\ldots,t\}\;, } two families of vectors. Let us consider the differential operator $K$ defined as the closure on $\cOinf[\R^d]$ of \equ[e:defKquad]{ K = -\sum_{i=1}^s \d_x^T b_i b_i^T \d_x + \sum_{j=1}^t x^T c_j c_j^T x + x^T A \d_x\;. } We are interested in giving a geometrical condition on $A$, $\CB$ and $\CC$ that implies the compactness of the resolvent of $K$, and therefore the discreteness of its spectrum. It is possible to prove that $K$ is quasi {\it m}-accretive. Just follow the proof of Proposition~\ref{prop:maccr}, replacing $G(x)$ by $x^T x$. We have the following result. \begin{proposition} \label{theo:quad} A sufficient condition for the resolvent of the operator $K$ defined in \eref{e:defKquad} to be compact is that the vector families \equ[e:condGeo]{ \bigcup_{N\ge 0} (A^T)^N\CB \qquad\hbox{and}\qquad \bigcup_{N\ge 0} A^N \CC } span the whole space $\R^d$. \end{proposition} \begin{remark} The intuitive meaning of this proposition is that we can apply H\"ormander's criterion in both direct and Fourier space to obtain an estimate of the form \equ[e:estHarm]{ \|H^\eps f\|\le C(\|K f\| + \|f\|)\;,\quad H = -\d_x^T \d_x^{} + x^T x\;. } It is well known that $H$ has compact resolvent. By Proposition~\ref{prop:Comp}, \eref{e:estHarm} implies that $K$ has compact resolvent.
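As a purely illustrative aside, the spanning condition \eref{e:condGeo} is nothing but the Kalman rank condition from control theory; by the Cayley--Hamilton theorem it suffices to consider $N \le d-1$. The following sketch (with a hypothetical nilpotent drift matrix chosen only for illustration, not taken from the text) shows how the condition can be checked mechanically:

```python
import numpy as np

def spans(A, vecs, d):
    # Stack A^N v for N = 0, ..., d-1; by Cayley-Hamilton, higher
    # powers of A add nothing new to the span.
    cols = [np.linalg.matrix_power(A, N) @ v for v in vecs for N in range(d)]
    return np.linalg.matrix_rank(np.column_stack(cols)) == d

d = 3
A = np.array([[0., 1., 0.],   # hypothetical drift: a nilpotent shift
              [0., 0., 1.],
              [0., 0., 0.]])
B = [np.array([1., 0., 0.])]  # a single "noise" direction
C = [np.array([0., 0., 1.])]  # a single "confinement" direction

print(spans(A.T, B, d), spans(A, C, d))  # prints: True True
```

Replacing $\CB$ by $\{(0,0,1)^T\}$ makes the first test fail: that direction is annihilated by $A^T$, so the iterated commutators never generate the missing directions.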
\end{remark} \begin{proof} We have the following relations \equs{ [x^T A\d_x, b^T\d_x] &\equiv \sum_{i,j,k}[x_ia_{i j}\d_{x_j}, b_k\d_{x_k}] = \sum_{i,j,k}b_k[x_i, \d_{x_k}]a_{i j}\d_{x_j} \\ &= -\sum_{i,j,k}b_k\delta_{ki}a_{i j}\d_{x_j} = -b^TA\d_x\;,\\ [x^TA\d_x, c^Tx] &\equiv \sum_{i,j,k}[x_ia_{i j}\d_{x_j}, c_k x_k] = \sum_{i,j,k}x_i a_{i j}[\d_{x_j}, x_k]c_k \\ &= \sum_{i,j,k}x_i a_{i j}\delta_{j k}c_k = c^T A^T x\;. } We take $g_0 = 1$, so we have $\A_0 = \A_{-1} \cup \{x^T A \d_x\}$. We construct the remaining $\A_i$ by \equ{ \A_i \equiv \A_{i-1} \cup [x^T A \d_x, \A_{i-1}]\;. } It is very easy to verify \hyp{1} and \hyp{2}, because the assumptions we made on $A$, $\CB$ and $\CC$ imply that $\CY^1(\A)$ contains every operator of the form $b^T\d_x$ or $c^Tx$. We have moreover \equ{ (b^T\d_x)^* = - b^T \d_x \qquad\text{and}\qquad (c^Tx)^* = c^Tx\;. } It is well-known that \hyp{3} concerning the essential self-adjointness of the $\Lambda^2$ constructed in Theorem~\ref{theo:princ} holds. Finally, it is straightforward that $\Lambda^2$ satisfies $\Lambda^2 \ge C H$, where $H$ is the ``harmonic oscillator'' defined in \eref{e:estHarm}. This proves the validity of \eref{e:estHarm}, and hence of the assertion. \end{proof} The interested reader may verify that Proposition~\ref{theo:quad} is quite stable under perturbations. A similar result indeed still holds when the coefficients $b_i$ and $c_i$ are not constants, but functions in $\CF_0$. This is precisely what was proved in \cite{EPR}. \mysection{Proof of the bound in momentum space (Proposition~\ref{prop:estDeltaPrime})} \label{sec:proof2} This proposition is an application of Theorem \ref{theo:princ}. It is just a little bit cumbersome to verify the hypotheses of the theorem. In this section, the symbol $K$ will again denote the operator defined in \eref{e:defKchain}. We choose \equ{ \F \equiv \CF_0 \;, } which is simply the set of bounded smooth functions with all their derivatives bounded.
It is trivial to check that $\F$ is an algebra of closed operators. Moreover, they are all self-adjoint. We also define $\CD \equiv \cOinf[\CX]$. In this section, we will first construct a set $\CA$ according to the rules explained in Section \ref{sec:Hormander}. Then we will check that \hyp{1}--\hyp{3} are indeed satisfied, so we will be able to apply Theorem \ref{theo:princ}. This will prove Proposition~\ref{prop:estDeltaPrime} almost immediately. Before we start this program, we write down once again the definition of $X_0$, as it will be used repeatedly throughout this section: \equs{ X_0 =&\; - {r_{\!L}}\d_{p_0} + b_L({r_{\!L}} - \lambda_L^2 q_0)\d_{r_{\!L}} - {r_{\!R}}\d_{p_N} + b_R({r_{\!R}} - \lambda_R^2 q_N)\d_{r_{\!R}}\\ &\;- \sum_{i=0}^N \bigl(p_i\d_{q_i} - \V1'(q_i)\d_{p_i}\bigr) + \sum_{i=1}^{N} \V2'(\tilde q_i)\bigl(\d_{p_i}-\d_{p_{i-1}}\bigr) - \alpha_K\;. } \subsection{Definition of $\boldsymbol{\CA}$} We choose an exponent $\alpha < -3/2-\ell/(2m)$ and we let $g_0$ be the operator of multiplication by $G^\alpha$. It is clear that $g_0$ is self-adjoint and positive. Moreover, we recall that \equ{ [X_0,G^\alpha] = \alpha G^{\alpha} G^{-1}[X_0,G] \in \CF_0\;, } and so we have $[X_0, g_0] \in \F$. The set $\A_0$ is defined as \equ{ \A_0 = \{ c_L \d_{{r_{\!L}}},\, c_R \d_{{r_{\!R}}},\, G^\alpha X_0\} \cup \bar\A \;,\qquad\text{with}\qquad \bar \A = \bigl\{a_L ({r_{\!L}} - \lambda_L^2 q_0),\, a_R({r_{\!R}} - \lambda_R^2 q_N)\bigr\}\;. } Before we define the sets $\A_i$, we need a few functions. Let $i>0$ be a natural number. The functions $V_L^{(i)}$ and $V_R^{(i)}$ are defined respectively by \equs{ V_L^{(i)}(\tilde q) &= \V2''(\tilde q_{i}) \V2''(\tilde q_{i-1})\cdot\ldots\cdot\V2''(\tilde q_{1}) \;,\\ V_R^{(i)}(\tilde q) &= \V2''(\tilde q_{N+1-i}) \cdot\ldots\cdot \V2''(\tilde q_{N-1}) \V2''(\tilde q_{N})\;. 
} It is useful to notice that \equ[e:derV]{ \d_{q_j}V_L^{(i)}(\tilde q) = \left\{\begin{array}{clr@{\,\,}c@{\,\,}l@{}l} 0\;, &\qquad \text{if} &j &> &i&\;, \\[2mm] \bigl(\V2'''(\tilde q_1)\V2''(\tilde q_1)^{-1}\bigr) V_L^{(i)}(\tilde q) \;, &\qquad \text{if} &j &= &0&\;, \\[2mm] \bigl(\V2'''(\tilde q_i)\V2''(\tilde q_i)^{-1}\bigr) V_L^{(i)}(\tilde q) \;, &\qquad \text{if} &j &= &i&\;, \\[2mm] \bigl(\V2'''(\tilde q_i)\V2''(\tilde q_i)^{-1} - \V2'''(\tilde q_{i+1})\V2''(\tilde q_{i+1})^{-1}\bigr) V_L^{(i)}(\tilde q) \;, &\multicolumn{4}{c}{\qquad \text{otherwise}}&\;. \end{array}\right. } There are symmetric relations for the derivatives of $V_R^{(i)}$. At this point, we use assumption {\bf A3} to write \equ[e:estVi1]{ \d_{q_j}V_L^{(i)}(\tilde q) = f_{ij}(\tilde q) V_L^{(i)}(\tilde q)\;, \qquad f_{ij} \in \CF_{2m-2+\ell}\;. } This implies \equ[e:estVi2]{ [G^\alpha X_0,V_L^{(i)}(\tilde q)] = G^\alpha\sum_{j=0}^N p_j f_{ij} V_L^{(i)}(\tilde q) = f_i V_L^{(i)}(\tilde q)\;,\qquad f_i \in \F\;, } because of Proposition \ref{prop:Degs} and by the choice $\alpha < -3/2 - \ell/(2m)$. Moreover, we notice that \equ{ G^{2i\alpha}V_R^{(i)} \in \F\;, } still because of Proposition \ref{prop:Degs}. One more thing we have to remember is \eref{e:boundG}, which implies for example that there exists a function $f_0 \in \F$ such that \equ{ [G^\alpha X_0, G^\beta] = \beta f_0 G^\beta\;,\quad\text{for any}\quad \beta \in \R\;. } We are now ready to complete the construction of $\A$. \subsubsection{Definition of $\boldsymbol{\CA_1}$ and $\boldsymbol{\CA_2}$} We verify that in the case of our model, we can find functions $f_B$ and $f_{XB}$ in \eref{e:defA} such that \equs{ \A_1 \sauf \A_0 &= \{G^\alpha\d_{p_0},\, G^\alpha \d_{p_N}\}\;, \\ \A_2 \sauf \A_1 &= \{G^{2\alpha}\d_{q_0},\, G^{2\alpha} \d_{q_N}\} \;. 
Considering the elements of $\CA_1$, we see that it is indeed possible to write \equ{ G^\alpha \d_{p_0} = c_L^{-1} [G^\alpha X_0, c_L \d_{{r_{\!L}}}] - G^{-1}(\d_{{r_{\!L}}}G) G^\alpha X_0 + G^\alpha b_L \d_{r_{\!L}}\;, } and a similar relation concerning $G^\alpha \d_{p_N}$. The operators $G^{-1}(\d_{{r_{\!L}}}G)$ and $G^\alpha b_L c_L^{-1}$ belong to $\F$, so we have indeed constructed $\A_1$ according to \eref{e:defA}. Let us now focus on the elements of $\A_2$. We can write \equ{ G^{2\alpha}\d_{q_0} = [G^\alpha X_0, G^\alpha \d_{p_0}] - G^{\alpha-1}(\d_{p_0}G) G^\alpha X_0 - \alpha f_0 G^\alpha \d_{p_0}\;, } and an equivalent expression at the other end of the chain. Since $G^{\alpha-1}(\d_{p_0}G) \in \F$ and $f_0 \in \F$, we have also constructed $\A_2$ according to \eref{e:defA}. \subsubsection{Definition of $\boldsymbol{\CA_{2i-1}}$ and $\boldsymbol{\CA_{2i}}$} For $i \ge 1$, these sets are defined by \equs{ \A_{2i-1} \sauf \A_{2i-2} &= \{G^{(2i+1)\alpha}V_L^{(i)}\d_{p_i},\, G^{(2i+1)\alpha}V_R^{(i)}\d_{p_{N-i}}\}\;, \\ \A_{2i} \sauf \A_{2i-1} &= \{G^{(2i+2)\alpha}V_L^{(i)}\d_{q_{i}},\, G^{(2i+2)\alpha} V_R^{(i)}\d_{q_{N-i}}\} \;. } We repeat this construction until $i=N-1$, \ie we do not stop at the middle of the chain, but we go on until we reach the other end. We want to check that these sets were constructed according to \eref{e:defA}. In fact, we will see that any element $A$ of $\A_j \sauf \A_{j-1}$ with $j \ge 2$ can be written as \equ[e:construct]{ A = [G^\alpha X_0, B] + D\;, \quad B \in \A_{j-1}\;,\quad D \in \CY^1(\A_{j-1})\;. } We will verify this only for $2 \le i \le N-2$. We let the reader verify that \eref{e:construct} is also valid for the remaining sets. Let us first take $j = 2i-1$ and $A = G^{(2i+1)\alpha}V_L^{(i)}\d_{p_i}$.
We choose $B = G^{2i\alpha}V_L^{(i-1)} \d_{q_{i-1}} \in \A_{j-1}$ and write \equs{ [G^\alpha X_0, B] =&\; f_{i-1}G^{2i\alpha}V_L^{(i-1)} \d_{q_{i-1}} - G^{2i\alpha}V_L^{(i-1)} G^{-1}(\d_{q_{i-1}}G) G^{\alpha}X_0 \\ &+ 2i\alpha f_0 B + G^{(2i+1)\alpha}V_L^{(i-1)} [X_0, \d_{q_{i-1}}]\;. } The first three terms belong to $\CY^1(\A_{2i-2})$ and can thus be absorbed into $D$. The last term can be written as \equs{ G^{(2i+1)\alpha}V_L^{(i-1)}[X_0, \d_{q_{i-1}}] =&\; G^{2\alpha}\bigl(\V1''(q_{i-1}) + \V2''(\tilde q_i) + \V2''(\tilde q_{i-1})\bigr) G^{(2i-1)\alpha}V_L^{(i-1)}\d_{p_{i-1}} \\ &+ G^{4\alpha} \V2''(\tilde q_{i-1}) \V2''(\tilde q_{i-1}) G^{(2i-3)\alpha}V_L^{(i-2)}\d_{p_{i-2}} \\ &+ G^{(2i+1)\alpha}V_L^{(i)}\d_{p_i}\;. } The first two terms also belong to $\CY^1(\A_{2i-2})$, so they can be absorbed into $D$ as well. The remaining term is \equ{ G^{(2i+1)\alpha}V_L^{(i)}\d_{p_i} = A\;, } thus we have verified that $A$ can be written as in \eref{e:construct}. The procedure to get the symmetric term from the other end of the chain is similar. We take now $j = 2i$ and $A = G^{(2i+2)\alpha} V_L^{(i)} \d_{q_i}$. We choose $B = G^{(2i+1)\alpha}V_L^{(i)}\d_{p_i} \in \CY^1(\CA_{j-1})$ and write \equs{ [G^\alpha X_0, B] =&\; f_i G^{(2i+1)\alpha}V_L^{(i)}\d_{p_i} + G^{(2i+1)\alpha}V_L^{(i)} G^{-1}(\d_{p_i} G) G^{\alpha} X_0 \\ &+ (2i+1)\alpha f_0 B + G^{(2i+2)\alpha} V_L^{(i)} \d_{q_i}\;. } The first three terms belong to $\CY^1(\A_{2i-1})$ and can be absorbed into $D$, so we have verified that every element of $\A$ can indeed be written as in \eref{e:defA}. \subsection{Verification of the hypotheses and proof} In order to be able to apply Theorem \ref{theo:princ}, we verify the hypotheses \hyp{1}--\hyp{3}. \proclaim{Verification of \hyp{2}.} We want to check that $A\in \CA_j$ implies $A^* \in \CY^1(\CA_j)$. By Proposition \ref{prop:Degs}, we can easily verify that $\A \sauf \bar \CA \subset \CL_0$.
But we know that \equ{ A \in \CL_0 \Rightarrow A^* = -A + g\;, } and so \hyp{2} holds for $\CA \sauf \bar \CA$. The elements of $\bar \CA$ being self-adjoint, \hyp{2} holds trivially. \proclaim{Verification of \hyp{3}.} The operator $\Lambda^2$ can be written as \equ{ \Lambda^2 = -\sum_{i,j} \d_i a_{ij}(x) \d_j + V(x)\;. } It is well-known that if $a_{ij}$ and $V$ are sufficiently nice, such operators are essentially self-adjoint on $\cOinf[\CX]$ (see \eg \cite[Thm.~3.2]{Ag}). \proclaim{Verification of \hyp{1}.} Let us define $\fL_0 \subset \fL$ as the set of first-order differential operators with coefficients in $\CF_0$. We first verify that \equ{ A \in \A\,,\; f \in \F\quad\Rightarrow\quad [A,f] \in \F\;. } This is trivial, noticing that $\A \subset \fL_0 \cup \bar \A$ and $[\fL_0, \F] = [\bar \A, \F] = \{0\}$. We now verify that \equ{ A \in \A \quad\Rightarrow\quad A^* \in \CY^1(\A)\;. } This is also trivial, because $A \in \fL_0 \Rightarrow A^* = -A + g$, with $g \in \CF_0$. Moreover, the elements of $\bar \CA$ are self-adjoint. Finally, we want to verify that \equ{ A,B \in \A \quad\Rightarrow\quad [A,B] \in \CY^1(\A)\;. } This is a little bit longer to verify. Concerning the commutators of the elements of $\bar \A$ with the other elements of $\A$, the statement follows easily from the fact that if $F:\R^n \to \R$ is linear and $A \in \fL_0$, then $[A,F] \in \CF_0 \equiv \F$. Moreover, the commutator between two multiplication operators vanishes. Concerning the commutators between the $\d_r$ and the other elements, we notice that they commute with the functions $V_L^{(i)}(\tilde q)$ and $V_R^{(i)}(\tilde q)$. Moreover, we have for example \equ{ [\d_{r_{\!L}}, G^\gamma] = \gamma \bigl(G^{-1} [\d_{r_{\!L}}, G]\bigr)G^\gamma \;,\quad\text{if}\quad \gamma \in \R\;, } and $G^{-1} [\d_{r_{\!L}}, G]$ belongs to $\F$. It is straightforward to verify that this implies the desired statement.
Concerning the commutators of $G^\alpha X_0$ with the other elements of $\A$, the statement has already been verified by the construction of $\A$ for every operator except those in $\A_{2N-2}\sauf \A_{2N-3}$. These operators are of the form \equ{ A = G^{2N\alpha}V_R^{(N-1)}\d_{q_1}\;, } and a similar term at the other end of the chain. We can make a computation very similar to the one we made when we constructed $\A_{2i-1}$, to show that \equ{ [G^\alpha X_0, A] = G^{(2N+1)\alpha}V_R^{(N)}\d_{p_{0}} + C\;, \qquad C\in \CY^1(\A)\;. } But $G^{2N\alpha}V_R^{(N)} \in \F$, so $[G^\alpha X_0, A] \in \CY^1(\A)$. It therefore remains only to verify the statement for commutators between elements of $\A \sauf \A_0$. We can divide these commutators into three classes. \proclaim{Both operators contain a $\boldsymbol{\d_p}$.} We notice that these operators can all be written in the form $G^{\alpha_i} W_i(q) \d_{p_i}$. The commutator between two such elements is given by \equs{ [G^{\alpha_i} W_i(q) \d_{p_i},G^{\alpha_j} W_j(q) \d_{p_j}] =&\; G^{-1}(\d_{p_i}G)G^{\alpha_i}W_i(q)G^{\alpha_j}W_j(q)\d_{p_j} \\ & - G^{-1}(\d_{p_j}G)G^{\alpha_j}W_j(q)G^{\alpha_i}W_i(q)\d_{p_i}\;. } Both terms belong to $\CY^1(\A)$, because $G^{-1}(\d_p G) \in \F$. \proclaim{One operator contains a $\boldsymbol{\d_p}$, one contains a $\boldsymbol{\d_q}$.} Let us compute the commutator between $G^{(2i+2)\alpha}V_L^{(i)}\d_{q_i}$ and $G^{(2j+1)\alpha}V_L^{(j)}\d_{p_j}$. We have \equs{ [G^{(2i+2)\alpha}V_L^{(i)}\d_{q_i},G^{(2j+1)\alpha}V_L^{(j)}\d_{p_j}] =&\; G^{(2i+2)\alpha}V_L^{(i)}(\d_{q_i} G)G^{-1}G^{(2j+1)\alpha}V_L^{(j)}\d_{p_j} \\ &+ G^{(2i+1)\alpha}V_L^{(i)}(\d_{p_i} G)G^{-1}G^{(2i+2)\alpha}V_L^{(i)}\d_{q_i} \\ &+ G^{2i\alpha}V_L^{(i)}G^{2\alpha}f_{ij}G^{(2j+1)\alpha}V_L^{(j)}\d_{p_j}\;. } All those terms belong to $\CY^1(\A)$. The computation is similar if we take for example \equ{ G^{(2j+1)\alpha}V_R^{(j)}\d_{p_{N-j}} } instead of $G^{(2j+1)\alpha}V_L^{(j)}\d_{p_j}$. 
\proclaim{Both operators contain a $\boldsymbol{\d_q}$.} The computation is similar to the preceding case and is left to the reader. It is now easy to give the \begin{proof}[of Proposition~\ref{prop:estDeltaPrime}] We have just verified that the hypotheses of Theorem \ref{theo:princ} are satisfied. Applying it, we obtain the estimate \equ{ \|\tilde\Delta^\eps f\| \le C(\|Kf\| + \|f\|)\;, } where $\tilde\Delta$ is given by \equ{ \tilde\Delta = 1 + \sum_{A \in \A} A^*A\;. } It is easy to see that $\tilde\Delta$ has exactly the form \eref{e:defDeltaPrime}. This completes the proof of Proposition~\ref{prop:estDeltaPrime}. \end{proof} \mysection{Proof of Theorem \ref{theo:Chain}} \label{sec:theoChain} It is now possible to prove that the operator $K$ has compact resolvent, which is one of the main results of this paper. Before we start the proof itself, we need two preliminary results. The first one states \begin{lemma} \label{prop:sumEps} Let $\tilde\Delta$ be the closure in $\Ltwo(\R^n)$ of the operator acting on $\cOinf[\R^n]$ as \equ{ \tilde\Delta = \sum_{i=1}^{\bar N} L_i^*L_i + a_0\;, } where the $L_i$ are smooth vector fields with bounded coefficients spanning $\R^n$ at every point and $a_0$ is a smooth positive function. Let $V : \R^n \to \R$ be a continuous function such that for every constant $C>0$, there exists a compact $\CK_C \subset \R^n$ with the property that $V(x) > C$ for every $x \in \R^n \backslash \CK_C$. We moreover assume that $V(x) \ge 1$. Define the operator $H$ as the closure in $\Ltwo(\R^n)$ of the operator acting on $f \in \cOinf[\R^n]$ as \equ{ \bigl(H f\bigr)(x) = \bigl(\tilde\Delta f\bigr)(x) + V(x) f(x)\;. } Then the operator $H$ is self-adjoint. Suppose $V$ and the $L_i$ are such that the function \equ[e:defBound]{ 2a_0 V + \sum_{i=1}^{\bar N}\Bigl((L_i^* + L_i)[L_i,V] - \bigl[L_i,[L_i,V]\bigr]\Bigr) } is bounded. 
We then have the estimate \equ[e:sumexp]{ \scal{f,H ^\eps f} \le \scal{f,\tilde\Delta^\eps f} +\scal{f,V^\eps f} + C\scal{f,H^{\eps-1}f}\;,\qquad 0<\eps < 1\;, } which holds for any $f \in \cOinf[\R^n]$. \end{lemma} \begin{proof} The result concerning the self-adjointness of $H$ and of $\tilde \Delta$ is classical; we will not prove it here. The interested reader can find a proof in \cite[Thm.~3.2]{Ag}. We use the fact that if $T$ is a strictly positive self-adjoint operator and $\alpha= 1-\eps \in (0,1)$, we can write \equ{ T^{-\alpha} = C_{\alpha} \int_0^\infty z^{-\alpha} (z+T)^{-1}\,dz\;,\qquad C_\alpha = \frac{\sin(\pi \alpha)}{\pi}\;, } and thus \equ{ T^{\eps} = C_{\alpha} \int_0^\infty z^{\eps-1} \frac{T}{z+T}\,dz\;. } Moreover, a core of $T$ is again a core of $T^\eps$, so \eref{e:sumexp} makes sense. For a proof of these statements, see \cite[\S V.3]{Ka}. This allows us to write inequality \eref{e:sumexp} as \equs[e:inequWanted]{ \int_0^\infty z^{\eps-1} \Bigl\langle f,\frac{H}{z+H}f\Bigr\rangle\,dz \le &\; \int_0^\infty z^{\eps-1} \Bigl\langle f,\frac{\tilde\Delta}{z+\tilde\Delta}f\Bigr\rangle\,dz + \int_0^\infty z^{\eps-1} \Bigl\langle f,\frac{V}{z+V}f\Bigr\rangle\,dz \\[2mm] & + C\int_0^\infty z^{\eps-1} \Bigl\langle f,\frac{1}{z+H}f\Bigr\rangle\,dz\;. } \def\tilde\Delta{\tilde\Delta} In order to prove \eref{e:inequWanted}, let us first show that the operator $\tilde\Delta V + V \tilde\Delta$ is lower bounded. This is an immediate consequence of \eref{e:defBound} and the equality \equs{ L_i^* L_i V + V L_i^* L_i &= 2L_i^* V L_i + (L_i + L_i^*)[L_i,V] - \bigl[L_i,[L_i,V]\bigr]\;, } which is easily verified, using the fact that $L_i + L_i^*$ is simply a function. Therefore, there exists a constant $C> 0$ such that \equ{ \scal[b]{g, (\tilde\Delta V + V \tilde\Delta) g} + C\scal{g,g} \ge 0\;,\quad \forall g\in \cOinf[\R^n]\;. 
} Since $H \ge 1$ in the sense of quadratic forms, we find \equ{ \scal[b]{g, (\tilde\Delta V + V \tilde\Delta) g} + C\scal{g,(z+H)g} \ge 0\;, } which holds for every $z \ge 0$. Since $\tilde\Delta$ and $V$ are positive self-adjoint operators, this immediately implies \equ[e:inter1]{ \scal[B]{g, V\frac{\tilde\Delta}{z+\tilde\Delta}V g} + \scal[B]{g, \tilde\Delta\frac{V}{z+V}\tilde\Delta g} + \scal[b]{g, (\tilde\Delta V + V \tilde\Delta) g}+ C\scal{g,(z+H)g} \ge 0\;. } We can easily check the following identities \equs{ V \tilde\Delta(z+\tilde\Delta)^{-1} V + \tilde\Delta V &= (z+H)\tilde\Delta(z+\tilde\Delta)^{-1}V\;, \\[1mm] \tilde\Delta V(z+ V)^{-1} \tilde\Delta + V \tilde\Delta &= (z+H)V(z+V)^{-1}\tilde\Delta \;. } Inserting this in \eref{e:inter1}, we get \equ{ \scal[b]{g,(z+H)\bigl(\tilde\Delta(z+\tilde\Delta)^{-1}V + V(z+V)^{-1}\tilde\Delta\bigr)g} + C\scal{g,(z+H)g} \ge 0\;, } and thus \equ{ \scal[b]{g,(z+H)Hg} \le \scal[b]{g,(z+H)\bigl(H+\tilde\Delta(z+\tilde\Delta)^{-1}V + V(z+V)^{-1}\tilde\Delta\bigr)g} + C\scal{g,(z+H)g}\;. } We can check the equalities \equs{ \tilde\Delta(z+\tilde\Delta)^{-1}V + \tilde\Delta &= \tilde\Delta(z+\tilde\Delta)^{-1}(z+H)\;,\\[1mm] V(z+V)^{-1}\tilde\Delta + V &= V(z+V)^{-1}(z+H)\;, } which allow us to write \equ{ \scal[b]{g,(z+H)Hg} \le \scal[b]{(z+H)g,\bigl(\tilde\Delta(z+\tilde\Delta)^{-1} + V(z+V)^{-1}\bigr)(z+H)g} + C\scal{g,(z+H)g}\;. } Let us define $f \equiv (z + H)g$. This immediately yields the estimate \equ[e:rel]{ \scal[B]{f,\frac{H}{z+H}f} \le \scal[B]{f,\frac{\tilde\Delta}{z+\tilde\Delta}f} + \scal[B]{f,\frac{V}{z+V}f} + C \scal[B]{f,\frac{1}{z+H}f}\;, } which holds for any $f$ in $\CW \equiv (z+H)\cOinf[\R^n]$. But we know that $\cOinf[\R^n]$ is a core for $H$, therefore $\CW$ is dense in $\Ltwo(\R^n)$. Since the operators appearing in \eref{e:rel} are all bounded, the inequality \eref{e:rel} holds for every $f \in \Ltwo(\R^n)$ and thus in particular also for $f \in \cOinf[\R^n]$. 
This implies the desired estimate \eref{e:inequWanted}. \end{proof} The second result we want to use is \begin{proposition} \label{prop:compRes} Let $\tilde\Delta$, $V$ and $H$ be as in Lemma~\ref{prop:sumEps}. Then $H$ has compact resolvent. \end{proposition} \begin{proof} We know that $\tilde \Delta$ is a positive self-adjoint operator, so \equ{ T = (\tilde\Delta + 1)^{-1} } exists and $\|T\| \le 1$. The proof of compactness is a modification of the standard proof of the same theorem with $\tilde \Delta$ replaced by the true Laplacian $\Delta$, which can be found \eg in \cite{Ag}. It is based on the fact that if $\chi$ is a function with compact support, then the multiplication operator $\chi$ is relatively compact with respect to $\tilde\Delta$. We want to prove that $\chi T$ is a compact operator, \ie that the closure of \[ Y = \{ \chi T f \; | \; f \in \cOinf[\R^n] \quad \hbox{and} \quad \|f\| \le 1\} \] is compact. Let us define $\CK = \supp \chi$. By hypothesis, $\CK$ is compact. Moreover, we have $Y \subset \cOinf(\CK)$. It is well-known that if $\CK$ is a compact domain of $\R^n$, then the set \[ \{ u\in \cOinf(\CK) \; | \; \|u\| \le 1;\; \scal{u,\Delta u} \le 1\} \] is compact (see \eg \cite[Thm.~XIII.73]{RS}). This implies that the closure of $Y$ is compact if we are able to prove that there are strictly positive constants $c_1$ and $c_2$ such that $u\in Y$ implies \[ \|u\| \le c_1 \qquad\hbox{and}\qquad \scal{u,\Delta u} \le c_2 \;. \] We take any element $u$ in $Y$ and write it as $u=\chi T f$. We have \[ \|u\| \le \|\chi\|_\infty \, \|T\| \, \|f\| \le c_1\;. \] Recall that we assumed the vector fields $L_i$ appearing in the construction of $\tilde \Delta$ span $\R^n$ at any point and that $a_0$ is a strictly positive function. 
Together with the compactness of the support of $u$, this implies that there are constants $C$ and $k_1$ such that \equs[equ:est1]{ |\scal{u,\Delta u}| &\le C|\scal{u,\tilde\Delta u}| = C|\scal{u,\tilde\Delta \chi T f}| \le C\|u\|\|\tilde\Delta \chi T f\| \\ & \le C\|\chi \tilde\Delta T f\| + C\|[\tilde\Delta,\chi] T f\| \le k_1 + C\|[\tilde\Delta,\chi] T f\|\;, } where the last inequality is a consequence of $T = (1 +\tilde \Delta)^{-1}$. We therefore only need to bound the term containing the commutator of $\tilde\Delta$ and $\chi$. Explicit calculation yields \equ{ [\tilde\Delta,\chi] = \sum_{i=1}^{\bar N} \Bigl( -2[L_i, \chi]L_i + \bigl[L_i,[L_i,\chi]\bigr] + (L_i^* + L_i)[L_i,\chi]\Bigr) \equiv \sum_{i=1}^{\bar N} \eta_i L_i + \eta_0\;, } where the $\eta_i$ are bounded functions with $\supp \eta_i \subset \CK$. So the only terms that remain to be bounded are of the form $\|\eta_i L_i T f\|$. As $\eta_i$ is bounded, it is enough to bound $\|L_i T f\|$. We have \equs[equ:est2]{ \|L_i Tf\|^2 &= \scal{Tf, L_i^* L_i Tf} \le \scal{Tf, \tilde\Delta Tf} \le \|f\|^2\;. } This completes the proof of the statement about the relative compactness of $\chi$. This implies that we can add to $H$ any function with compact support without changing its essential spectrum (see \cite[Thm.~XIII.14]{RS}). But the assumption we made concerning $V$ and the positivity of $\tilde \Delta$ imply that for any constant $C$, we can raise the spectrum of $H + \chi$ above $C$ by taking for $\chi$ a smooth function satisfying \equ{ \chi(x) = \left\{ \begin{array}{cc@{}l} C \quad& x \in \CK_C &\;,\\ 0 \quad& d(x,\CK_C) > 1&\;. \end{array} \right. } Therefore, the essential spectrum of $H$ is empty and thus $H$ has compact resolvent. 
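For the last step, note that if we choose the interpolation so that $0 \le \chi \le C$ everywhere, then $V + \chi \ge C$ on all of $\R^n$: on $\CK_C$ we have $\chi = C$ and $V \ge 1$, while outside $\CK_C$ we have $V > C$ and $\chi \ge 0$. Since $\tilde\Delta \ge 0$, this yields, in the sense of quadratic forms, \equ{ \scal[b]{f, (H + \chi) f} \ge \scal[b]{f, (V + \chi) f} \ge C\|f\|^2\;,\qquad f \in \cOinf[\R^n]\;, } so the spectrum of $H + \chi$ indeed lies above $C$. 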
\end{proof} It is now easy to give the \begin{proof}[of Theorem \ref{theo:Chain}] By Propositions \ref{prop:estG} and \ref{prop:estDeltaPrime}, we can choose a constant $\eps$ small enough to have, for every $f \in \cOinf[\CX]$, the estimate \equ{ \|\tilde\Delta^\eps f\| \le C(\|Kf\| + \|f\|) \quad\text{and}\quad \|G^\eps f\| \le C(\|Kf\| + \|f\|)\;. } We moreover define \equ[e:defHchain]{ H \equiv \tilde\Delta + G\;. } By the proof of Proposition~\ref{prop:estDeltaPrime}, we see that the assumptions of Lemma~\ref{prop:sumEps} are satisfied. We can thus write \equs{ \|H^\eps f\|^2 &= \scal{f, H^{2\eps}f} = \scal{f,(\tilde\Delta + G)^{2\eps} f} \le \scal{f, \tilde\Delta^{2\eps} f} + \scal{f, G^{2\eps} f} + C\|f\|^2 \\ & \le \|\tilde\Delta^{\eps} f\|^2 + \|G^\eps f\|^2 + C\|f\|^2 \le C(\|Kf\| + \|f\|)^2\;. } Because $G$ is confining, we can apply Proposition~\ref{prop:compRes} to see that $H$, and therefore also $H^\eps$, have compact resolvent. Therefore Corollary~\ref{prop:Comp} applies, showing that $K$ has compact resolvent. \end{proof} \begin{remark}[1] The proof still works under slightly weaker assumptions. The coupling between the ends of the chain and the heat baths does not have to be of the dipolar type. It is enough for example that $F_L$ and $F_R$ belong to some $\CF_\beta$ with $\beta < n$. Moreover, the potentials $V_1$ and $V_2$ can be different for each particle. We only have to impose that assumptions {\bf A1}--{\bf A3} can be satisfied for every particle with the same constants $\ell$, $m$ and $n$. \end{remark} \begin{remark}[2] Throughout this paper, we restricted ourselves to the one-dimensional case, \ie each particle had only one degree of freedom. It is not very hard to generalize the results of this paper to the $d$-dimensional case. It is straightforward to generalize assumptions {\bf A1} and {\bf A2}, where $V'$ is now a vector. In assumption {\bf A3}, the inverse of $\V2''$ has to be read as the inverse matrix. 
A matrix or vector-valued function is said to belong to $\CF_\beta$ if each of its components belongs to $\CF_\beta$. The only point that could cause some trouble is the expression \eref{e:derV}, because the $\V2''(\tilde q_j)$ are now matrices which do not commute, so the expression for $\d_{q_{j}} V_L^{(i)}(\tilde q)$ will contain terms of the form \equ{ \V2''(\tilde q_{i}) \V2''(\tilde q_{i-1})\cdot\ldots\cdot \V2'''(\tilde q_{j}) \cdot\ldots\cdot\V2''(\tilde q_{1})\;, } where $\V2'''$ is a trilinear form. Such a term can be written as \equ{ \V2''(\tilde q_{i}) \V2''(\tilde q_{i-1})\cdot\ldots\cdot \V2'''(\tilde q_{j}) {\V2''(\tilde q_{j+1})}^{-1} \cdot\ldots\cdot {\V2''(\tilde q_{i})}^{-1} V_L^{(i)}\;. } If we want to get expressions similar to \eref{e:estVi1} and \eref{e:estVi2}, we have to make $|\alpha|$ very large (of the order of $N$), but this is not a problem. \end{remark} \begin{remark}[3] One important assumption was that $m > n$; in other words, the interparticle coupling is stronger at infinity than the single-particle potential. If this is not satisfied, our proof does not work. There could be some physical reason behind this. If a stationary state exists, this means that even if the chain is in a state of very high energy, the mean time to reach a region with low energy is finite (see \eg \cite{Ha}). But if $m<n$, the relative strength of the coupling versus the one-body potential goes to zero at high energy. The consequence is that there is almost no energy transmitted between particles. Since the only points where dissipation occurs are the ends of the chain, we see that the higher the energy of the chain is, the slower this energy will be dissipated. This is probably not sufficient to destroy the existence of a stationary state, but it could explain why the proof does not work in this situation. It is even possible that this phenomenon destroys the compactness of the resolvent of $K$. 
\end{remark} \mysection{The invariant measure} \label{sec:inv} This section is devoted to the proof of Theorem~\ref{theo:exist}. Throughout this section, we denote by ${\cal T}^t$ the semi-group generated by the system of stochastic differential equations \eref{e:stochChain}. We also assume that {\bf A1}--{\bf A3} are satisfied, so Propositions~\ref{prop:estG} and \ref{prop:estDeltaPrime} hold, as well as Theorem~\ref{theo:Chain}. The proof of Theorem~\ref{theo:exist} is divided into three separate propositions, showing respectively the following properties of the invariant measure $\mu$: \begin{list}{$\bullet$} {\setlength{\leftmargin}{9mm}\setlength{\topsep}{2mm}\setlength{\itemsep}{0mm}} \item[(i)] Existence and smoothness. \item[(ii)] Decay properties. \item[(iii)] Uniqueness and strict positivity. \end{list} \begin{proposition} If Assumptions {\bf A1}--{\bf A3} are satisfied, the Markov process given by \eref{e:stochChain} possesses an invariant measure $\mu$. It has a density $h$, which is a $\CC^\infty$ function on $\R^{2N+4}$. \end{proposition} \begin{proof} By Theorem~\ref{theo:Chain}, we know that $K$ has compact resolvent. This also implies the compactness of the resolvent of $L_{\CH}$ and thus of $L_0$. Since $G$ grows algebraically at infinity, we see that the constant function $1$ belongs to $\CH_0$. Moreover, we notice that $L_0 1 = 0$, thus the operator $L_0$ has an eigenvalue $0$, which is isolated because of the compactness of its resolvent. This in turn implies that $L_0^*$ also has an isolated eigenvalue $0$. We denote the corresponding eigenvector by $g$ and normalize it so that $\scal{g,1}_{\CH_0}=1$. Since $L_0^*$ is hypoelliptic, $g$ must be $\CC^\infty$. Assume first that $g \ge 0$. We then define \equ[e:defh]{ h(p,q,r) = Z_0^{-1} g(p,q,r) e^{-2\beta_0 G}\;, } where $Z_0$ is the normalization constant appearing in the definition of $\CH_0$. Set $\mu(dx) = h(x)\,dx$; we want to check that $\mu$ is the invariant measure we are looking for. 
Notice that $\mu(dx)$ is a probability measure because \equ{ \int \mu(dx) = Z_0^{-1}\int e^{-2\beta_0 G(x)}g(x)\,dx = \scal{g,1}_{\CH_0} = 1\;. } Let $A$ be a Borel set of $\R^{2N+4}$. Then the characteristic function $\chi_A$ of $A$ belongs to $\CH_0$. We have \equs{ \bigl(({\cal T}^t)^* \mu\bigr)(A) &= \int \bigl({\cal T}^t \chi_A\bigr)(x)\,\mu(dx) = Z_0^{-1} \int e^{-2\beta_0 G(x)} g(x) \bigl({\cal T}^t \chi_A\bigr)(x)\, dx \\ & = Z_0^{-1} \int e^{-2\beta_0 G(x)} \bigl(({\cal T}_0^t)^* g\bigr)(x)\chi_A(x)\, dx = \mu(A)\;, } thus $\mu$ is an invariant measure for the Markov process defined by \eref{e:stochChain}. The argument showing that it was indeed justified to assume $g$ positive can be taken over from \cite[Prop.~3.6]{EPR}. \end{proof} We next turn to the decay properties of the density $h$ of the invariant measure. We first introduce a convenient family of Hilbert spaces. \begin{definition} Choose $\gamma \in \R$. We define the Hilbert space $W^{(\gamma)}$ as \equ{ W^{(\gamma)} \equiv \Ltwo(\CX,\,G^{2\gamma}(x)\,dx) = \CD(G^\gamma)\;. } We will denote by $\scal{\cdot,\cdot}_{(\gamma)}$ and $\|\cdot\|_{(\gamma)}$ the corresponding scalar product and norm. We also define \equ{ W^{(\infty)} \equiv \bigcap_{\gamma > 0} W^{(\gamma)}\;, } which is the set of all functions that decay at infinity faster than any polynomial. \end{definition} We already know that $h$ is a $\CC^\infty$ function, so we want to show that it is possible to write \equ{ h(p,q,r) = \tilde h(p,q,r) e^{-\beta_0 G(p,q,r)}\;,\qquad \tilde h \in W^{(\infty)}\;. } Since the function $\tilde h$ is an eigenfunction of the operator $K$, the decay properties of the invariant measure are a consequence of the following result. \begin{proposition} \label{prop:decay} The eigenfunctions of $K$ and $K^*$ belong to $\CC^\infty(\CX) \cap W^{(\infty)}$. \end{proposition} We will show Proposition~\ref{prop:decay} only for the eigenfunctions of $K$. 
It is a simple exercise left to the reader to retrace the proof for the eigenfunctions of $K^*$. We already know that $K$ and $K^*$ are hypoelliptic, so their eigenfunctions belong to $\CC^\infty(\CX)$. It remains to be proven that they also belong to $W^{(\infty)}$. To prove the proposition, we will show the implication \equ[e:implSpaces]{ f \in W^{(\gamma)} \quad\text{and}\quad K f \in W^{(\gamma)} \quad\Rightarrow\quad f \in W^{(\gamma + \eps)}\;, } which immediately implies that the eigenvectors of $K$ belong to $W^{(\infty)}$. For this purpose, we introduce the family of operators $K_\gamma$ defined by \equs[2]{ K_\gamma : \;&\CD(K_\gamma) &\;\to\;& W^{(\gamma)} \\ &\quad f & \;\mapsto\; & K f\;, } where $\CD(K_\gamma)$ is given by \equ{ \CD(K_\gamma) = \{ f \in W^{(\gamma)}\;|\; K f \in W^{(\gamma)}\}\;. } The expression $K f$ has to be understood in the sense of distributions. We have the following preliminary result. \begin{lemma} \label{lem:coregamma} $\cOinf[\CX]$ is a core for $K_\gamma$. \end{lemma} \begin{proof} The proof uses the tools developed in Appendix~\ref{App:ops} and is postponed to Appendix~\ref{App:core}. \end{proof} The key lemma for the proof of Proposition~\ref{prop:decay} is the following. \begin{lemma} \label{lem:KDecay} There are an $\eps>0$ and constants $C_\gamma > 0$ such that for every $\gamma > 0$ and every $u \in \CD(K_\gamma)$, the relation \equ[e:estKDecay]{ \|G^\eps u\|^2_{(\gamma)} \le C_\gamma\bigl(\|K_\gamma u\|^2_{(\gamma)} + \|u\|^2_{(\gamma)}\bigr) } holds. \end{lemma} \begin{proof} Since we know that $\cOinf[\CX]$ is a core for $K_\gamma$, it suffices to show \eref{e:estKDecay} for $u \in \cOinf[\CX]$. Let $L$ be the first-order differential operator associated to a divergence-free vector field. 
Then we have for $f,g \in \cOinf$, \equs{ \scal{Lf,g}_{(\gamma)} &= -\scal{f,L G^{2\gamma}g} = -\scal{f,G^{2\gamma} L g} - 2\gamma\scal{f,G^{2\gamma} G^{-1}(LG)g} \\[1mm] &= -\scal{f,L g}_{(\gamma)} - 2\gamma\scal{f, G^{-1}(LG) g}_{(\gamma)}\;. } We write this symbolically as \equ{ L_\gamma^* = - L_\gamma - 2\gamma G^{-1}(LG)\;. } We can use the latter equality to show that there are constants $c^{(1)}_\gamma$ and $c^{(2)}_\gamma$ such that \equ{ -\frac{(L_\gamma)^2 + (L_\gamma^*)^2}{2} = L_\gamma^* L_\gamma + c^{(1)}_\gamma G^{-2}(LG)^2 + c^{(2)}_\gamma G^{-1}(L^2G)\;. } Using the explicit form of $K$, this in turn yields the useful relation \equs[e:ReKbeta]{ \Re\scal{u,K u}_{(\gamma)} =&\, c_L^2 \|\d_{r_{\!L}} u\|_{(\gamma)}^2 + c_R^2 \|\d_{r_{\!R}} u\|_{(\gamma)}^2 \\[1mm] & + a_L^2 \|({r_{\!L}} - \lambda_L q_0) u\|_{(\gamma)}^2 + a_R^2 \|({r_{\!R}} - \lambda_R q_N) u\|_{(\gamma)}^2 + \scal{u,f_K u}_{(\gamma)}\;, } where $f_K$ is some bounded function. We now have the tools to prove the validity of \eref{e:estKDecay}. We use Proposition~\ref{prop:estG} to write \equs{ \|u\|^2_{(\gamma+\eps)} &= \|G^\eps G^\gamma u\|^2 \le C(\|K G^\gamma u\|^2 + \|G^\gamma u\|^2) \\[1mm] &\le C(\|G^\gamma K u\|^2 + \|[K,G^\gamma] u\|^2 + \|G^\gamma u\|^2)\;. } An explicit computation yields \equ{ [K,G^{\gamma}]u = G^{\gamma}\bigl(f_L\,\d_{r_{\!L}} + f_R\,\d_{r_{\!R}} + f_0\bigr)u\;, } for some smooth bounded functions $f_L$, $f_R$ and $f_0$. We are thus able to write \equ[e:estuDecay]{ \|u\|^2_{(\gamma+\eps)} \le C\bigl(\|K u\|^2_{(\gamma)} + \|u\|^2_{(\gamma)} + c_L^2 \|\d_{r_{\!L}} u\|^2_{(\gamma)} + c_R^2 \|\d_{r_{\!R}} u\|^2_{(\gamma)}\bigr)\;. } Using \eref{e:ReKbeta}, we can write \equs{ c_L^2 \|\d_{r_{\!L}} u\|^2_{(\gamma)} + c_R^2 \|\d_{r_{\!R}} u\|^2_{(\gamma)} &\le |\Re\scal{u,Ku}_{(\gamma)}| + C\|u\|_{(\gamma)}^2 \\[1mm] &\le C\bigl(\|Ku\|_{(\gamma)}^2 + \|u\|_{(\gamma)}^2\bigr)\;. } This, together with \eref{e:estuDecay}, completes the proof of the assertion. 
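As an aside, the constants $c^{(1)}_\gamma$ and $c^{(2)}_\gamma$ appearing above can be made explicit. Writing $h = 2\gamma G^{-1}(LG)$, so that $L_\gamma^* = -L_\gamma - h$, and denoting by $(Lh)$ and $(L^2G)$ the corresponding functions, we find \equs{ (L_\gamma^*)^2 &= L_\gamma^2 + 2hL_\gamma + (Lh) + h^2\;,\qquad L_\gamma^* L_\gamma = -L_\gamma^2 - hL_\gamma\;, } and therefore \equ{ -\frac{(L_\gamma)^2 + (L_\gamma^*)^2}{2} = L_\gamma^* L_\gamma - \frac{(Lh) + h^2}{2} = L_\gamma^* L_\gamma + \gamma(1-2\gamma)\, G^{-2}(LG)^2 - \gamma\, G^{-1}(L^2G)\;, } \ie $c^{(1)}_\gamma = \gamma(1-2\gamma)$ and $c^{(2)}_\gamma = -\gamma$. 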
\end{proof} \begin{proof}[of Proposition~\ref{prop:decay}] Lemma~\ref{lem:KDecay} immediately shows that $\CD(K_\gamma) \subset W^{(\gamma + \eps)}$ for every $\gamma > 0$. This proves the assertion \eref{e:implSpaces}. Let $f$ be an eigenfunction of $K$. We know that $f \in \Ltwo(\CX)$ and, because it is an eigenvector of $K$, we have $K f \in \Ltwo$. Thus, by \eref{e:implSpaces}, $f \in W^{(\eps)}$. Of course $K f \in W^{(\eps)}$ as well, so $f \in W^{(2 \eps)}$. This can be continued {\it ad infinitum}, and so we have $f \in W^{(\infty)}$, which is the desired result. \end{proof} Finally, we want to show the strict positivity and the uniqueness of the invariant measure. The proof of this result will only be sketched, as it simply retraces the proof of Theorem~3.6 in \cite{EPR2}. \begin{proposition} The density $h$ of the invariant measure $\mu$ is a strictly positive function. Moreover, the invariant measure is unique. \end{proposition} \proclaim{Sketch of proof.} The idea is to show that the control system associated with the stochastic differential equation \eref{e:stochChain} is strongly completely controllable. This means that, given an initial condition $x_0$, a time $\tau$ and an endpoint $x_\tau$, it is possible to find a realization of the Wiener process $w$ such that $\xi(\tau;x_0,w) = x_\tau$. The main assumption needed for this is that the gradient of the two-body potential is a diffeomorphism. This is ensured by assumption {\bf A3}. The consequence is that, for every time $\tau$, every initial condition $x_0$ and every open set $U$, the transition probability $P(\tau,x_0,U)$ is strictly positive. Because $\mu$ is invariant, we have \equ{ \mu(U) = \int P(t,x,U)\,\mu(dx) > 0\;. } This implies the strict positivity of $h$. Uniqueness follows from an elementary ergodicity argument. 
{\hfill\qed}\makeappendix{Proof of Lemma \ref{lem:power}} \label{App:main} Throughout this appendix, we will make use of the same notations as in Section~\ref{sec:Hormander}, \ie $\CH = \Ltwo(\R^n)$, $\CD = \cOinf[\R^n]$ and ${\mathfrak D}$ is the set of differential operators with smooth coefficients. Moreover, $\A$ denotes some finite subset of ${\mathfrak D}$, whose elements are identified with closed operators on $\CH$. The operator $\Lambda^2$ is defined as \equ[e:defLambdaApp]{ \Lambda^2 \equiv 1 + \sum_{A\in\A}A^*A\;. } We will moreover assume that \hyp{1} and \hyp{3} concerning $\A$ and $\F$ hold, \ie $A,B \in \A$ and $f \in \F$ imply \equ[e:H1]{ [A,B] \in \CY^1(\A)\;,\quad A^* \in \CY^1(\A)\;,\quad [A,f] \in \F\;. } In order to prepare the proof of Lemma \ref{lem:power}, we need a few auxiliary results. \begin{lemma} \label{lem:estj} Let $\A$, $\F$, $\CD$ and $\Lambda$ be as above and assume \hyp{1} and \hyp{3} hold. Then, if $A \in \CY^j_\F(\A)$, the operator $A\Lambda^{-j}$ is bounded. \end{lemma} The proof of this lemma will be a consequence of \begin{lemma} \label{lem:estTwo} Let $\A$, $\F$, $\CD$ and $\Lambda$ be as above and assume \hyp{1} and \hyp{3} hold. Then, if $A_1, A_2 \in \A$, the operators $A_1\Lambda^{-1}$ and $A_1 A_2\Lambda^{-2}$ are bounded. \end{lemma} \begin{proof} Let us show first that $A_1 \Lambda^{-1}$ is bounded. Since $\CD$ is a core for $\Lambda$, it suffices to show that there is a constant $C$ such that \equ{ \|A_1 f\|^2 \le C\|\Lambda f\|^2 \qquad\forall\, f\in \CD\;. } This is an immediate consequence of \equ{ \|\Lambda f\|^2 = \|f\|^2 + \sum_{A \in \A} \|Af\|^2\;. } In order to show that $A_1A_2\Lambda^{-2}$ is bounded, we will show that there are constants $\tau$ and $C$ such that \equ[e:estTwo]{ \|A_1A_2 f\|^2 \le C\|\Lambda^2f + (\tau - 1)f\|^2\;. 
} We can write the following equality \equs{ \|(\Lambda^2-1)f + \tau f\|^2 &= \tau^2\|f\|^2 + 2\tau \sum_{A\in \A} \|A f\|^2 + \sum_{A,B\in \A} \scal{f,A^*AB^*Bf} \\ &= \tau^2\|f\|^2 + 2\tau \sum_{A\in \A} \|A f\|^2 + \sum_{A,B\in \A} \bigl(\|ABf\|^2 + \scal{f,[A^*A,B^*]Bf}\bigr)\;. } We can write the operator appearing in the last term as \equ{ [A^*A,B^*]B = A^*[A,B^*]B + [B,A]^*AB\;. } Because of \hyp{1}, this implies that there are positive constants $C_{ABC}$ such that \equs{ \|(\Lambda^2-1)f + \tau f\|^2 \ge&\; \tau^2\|f\|^2 + 2\tau \sum_{A\in \A} \|A f\|^2 + \sum_{B,C\in \A}\|BCf\|^2 \\ &- \sum_{A,B,C \in \A} C_{ABC} \|Af\|\|BCf\|\;. } If we now use \equ{ 2xy \le x^2s^2 + \frac{y^2}{s^2}\;,\qquad x,y \ge 0\,,\; s > 0\;, } we see that we can choose $\tau$ large enough to have \equ{ \|(\Lambda^2-1)f + \tau f\|^2 \ge \tau^2\|f\|^2 + \frac 12 \sum_{A\in \A} \|A f\|^2 + \frac 12 \sum_{B,C\in \A}\|BCf\|^2\;. } This immediately implies \eref{e:estTwo}. \end{proof} This lemma can now be used to prove Lemma \ref{lem:estj}. \begin{proof}[of Lemma \ref{lem:estj}] We want to show that $A \in \CY^i_\F(\A)$ implies $A\Lambda^{-i}$ bounded. We already treated the cases $i=1$ and $i=2$. For the other cases, we proceed by induction. Let us fix $j>2$ and assume the assertion has been proved for $i<j$. It remains to show that the operators of the form \equ[e:estj]{ A_1 A_2 \cdot\ldots\cdot A_j \Lambda^{-j}\qquad A_i \in \A\;, } are bounded. We distinguish two cases. \begin{list}{}{\setlength{\leftmargin}{1.3truecm}} \item[$\boldsymbol{j=2n}$.] We write the operator of \eref{e:estj} as \equ{ A_1A_2 \Lambda^{-2}\cdot\Lambda^2 A_3A_4\Lambda^{-4}\cdot\ldots\cdot\Lambda^{2n-2}A_{2n-1}A_{2n}\Lambda^{-2n}\;. } We show that operators of the form \equ{ \Lambda^{2m-2}AB\Lambda^{-2m}\quad A,B\in \A\,,\;m \le n\;, } are bounded. We write \equ{ \Lambda^{2m-2}AB\Lambda^{-2m} = AB\Lambda^{-2} + [\Lambda^{2m-2},AB]\Lambda^{-(2m-1)}\Lambda^{-1}\;. } The first term is bounded by Lemma \ref{lem:estTwo}. 
The second term is bounded by noticing that $[\Lambda^{2m-2},AB] \in \CY^{2m-1}_\F(\A)$ and using the induction hypothesis. \item[$\boldsymbol{j=2n+1}$.] We write the operator of \eref{e:estj} as \equ{ A_1A_2 \Lambda^{-2}\cdot\Lambda^2 A_3A_4\Lambda^{-4}\cdot\ldots\cdot\Lambda^{2n}A_{2n+1}\Lambda^{-2n-1}\;. } The first terms are bounded exactly the same way as before. Concerning the last term, we have \equ{ \Lambda^{2n}A_{2n+1}\Lambda^{-2n-1} = A_{2n+1}\Lambda^{-1} + [\Lambda^{2n},A_{2n+1}]\Lambda^{-2n}\Lambda^{-1}\;, } which is bounded by Lemma \ref{lem:estTwo} and the induction hypothesis, noticing that the commutator belongs to $\CY^{2n}(\A)$. \end{list} This completes the proof of the lemma. \end{proof} We need another result from \cite{EPR}. \begin{lemma} Let $\{A(z)\} \subset \B(\H)$ be a family of uniformly bounded operators, $\Lambda \ge 1$ a self-adjoint operator and let $F(\lambda,z)$ be a real, positive bounded function. Then \equ[e:op1]{ \left\| \int_0^\infty A(z)\, F(\Lambda,z) f\, dz\right\| \le \sup_{y \ge 0} \|A(y)\|\|f\|\int_0^\infty \sup_{\lambda \ge 1} F(\lambda,z)\,dz\;, \qquad \forall\, f\in\H\;. } If furthermore $A = A(z)$ is independent of $z$, one has the bound \equ[e:op2]{ \left\| \int_0^\infty A\, F(\Lambda,z) f\, dz\right\| \le \|A\|\|f\|\sup_{\lambda \ge 1} \int_0^\infty F(\lambda,z)\,dz\;, \qquad \forall\, f\in\H\;. } \end{lemma} \begin{lemma} Let $\Lambda$, $\F$ and $\A$ be as above and assume \hyp{1} and \hyp{3} hold. If $X \in \CY_\F^j(\A)$, then the operators \equ{ \Lambda^\beta X \Lambda^\gamma \quad\text{with}\quad \beta+\gamma \le -j } are bounded. If $Y \in \fL$ is such that $[Y,\Lambda^2] \in \CY_\F^j(\A)$, then the operators \equ{ \Lambda^\beta[\Lambda^\alpha,Y]\Lambda^\gamma \quad\text{with}\quad \alpha + \beta + \gamma \le 2-j } are bounded. 
If $X,Y \in \fL$ are such that \equ{ [X,\Lambda^2] \in \CY_\F^j(\A)\;\;,\quad [Y,\Lambda^2] \in \CY_\F^k(\A)\quad\text{and}\quad\bigl[[\Lambda^2,X],Y\bigr] \in \CY_\F^{j+k-2}(\A)\;, } then the operators \equ{ \Lambda^\beta\bigl[[\Lambda^\alpha,X],Y\bigr]\Lambda^\gamma \quad\text{with}\quad \alpha + \beta + \gamma \le 4-j-k } are bounded. \end{lemma} \begin{proof} Let us prove the first assertion. The case $\gamma = 0$ is handled by noticing that \equ{ \Lambda^\beta X = \Lambda^{\beta+j}\bigl(X^*\Lambda^{-j}\bigr)^*\;, } and that both operators of the latter product are bounded by Lemma \ref{lem:estj}. The case $\beta=0$ is handled in the same way by considering the adjoint. The proof for the other cases follows exactly \cite{EPR}. We will demonstrate the techniques involved by proving the third assertion, assuming the first two assertions hold. The second assertion can be proved in a similar way without using the third one. We will first assume that $\alpha \in (-2,0)$. In this case, we can write (see \eg \cite[\S~V.3.11]{Ka}) \equ[e:exprPower]{ \Lambda^{\alpha} = C_{\alpha} \int_0^\infty z^{\alpha/2} (z+\Lambda^2)^{-1}\,dz\;,\qquad C_\alpha = -\frac{\sin(\pi \alpha/2)}{\pi}\;. } We notice moreover that it is possible to write \equs[e:trans]{ \bigl[[(z + \Lambda^2)^{-1},X],Y\bigr] =&\; (z + \Lambda^2)^{-1}\bigl[[\Lambda^2,X],Y\bigr] (z + \Lambda^2)^{-1} \\ &+ (z + \Lambda^2)^{-1}[\Lambda^2,X](z + \Lambda^2)^{-1}[\Lambda^2,Y](z + \Lambda^2)^{-1} \\ &+ (z + \Lambda^2)^{-1}[\Lambda^2,Y](z + \Lambda^2)^{-1}[\Lambda^2,X](z + \Lambda^2)^{-1}\;. } If we substitute the expression \eref{e:exprPower} in $\Lambda^\beta\bigl[[\Lambda^\alpha,X],Y\bigr]\Lambda^\gamma$ and use \eref{e:trans}, we get three terms, which we call $T_1$, $T_2$ and $T_3$, and which will be estimated separately. 
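Identity \eref{e:trans} follows by iterating the elementary resolvent formula \equ{ \bigl[(z + \Lambda^2)^{-1},X\bigr] = (z + \Lambda^2)^{-1}\bigl(X(z + \Lambda^2) - (z + \Lambda^2)X\bigr)(z + \Lambda^2)^{-1} = -(z + \Lambda^2)^{-1}[\Lambda^2,X](z + \Lambda^2)^{-1}\;: } one first commutes $X$ through the resolvent, then takes the commutator with $Y$ and applies the same formula once more. Only the structure of the resulting three terms, not their signs, plays a role in the estimates below. 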
\proclaim{Term $\boldsymbol{T_1}$.} This term is given by \equ{ T_1 = C_{\alpha} \int_0^\infty z^{\alpha/2} \frac{\Lambda^\beta}{z + \Lambda^2}\bigl[[\Lambda^2,X],Y\bigr] \frac{\Lambda^\gamma}{z + \Lambda^2}\,dz\;. } We define $B= \bigl[[\Lambda^2,X],Y\bigr] \in \CY^{j+k-2}(\A)$ and write \equs{ T_1 &= C_{\alpha} \int_0^\infty z^{\alpha/2} \Lambda^\beta B \frac{\Lambda^\gamma}{(z + \Lambda^2)^2}\,dz + C_{\alpha} \int_0^\infty z^{\alpha/2} \frac{\Lambda^\beta}{z + \Lambda^2} [\Lambda^2,B]\frac{\Lambda^\gamma}{(z + \Lambda^2)^2}\,dz\\ &\equiv C_{\alpha}\bigl(T_{11} + T_{12}\bigr)\;. } The term $T_{11}$ is estimated by writing, for any $f \in \H$, \equs{ \|T_{11} f\| &= \left\|\Lambda^{\beta}B\Lambda^{2-j-k-\beta} \int_0^\infty z^{\alpha/2} \frac{\Lambda^{\gamma + \beta + j + k - 2}}{(z + \Lambda^2)^2}f\,dz\right\|\\ &\le \|f\|\bigl\|\Lambda^{\beta}B\Lambda^{2-j-k-\beta}\bigr\|\sup_{\lambda \ge 1} \int_0^\infty z^{\alpha/2} \frac{\lambda^{\gamma + \beta + j + k - 2}}{(z + \lambda^2)^2}\,dz \\ &= \|f\|\bigl\|\Lambda^{\beta}B\Lambda^{2-j-k-\beta}\bigr\|\sup_{\lambda \ge 1} \int_0^\infty s^{\alpha/2} \frac{\lambda^{\alpha + \gamma + \beta + j + k - 4}}{(s + 1)^2}\,ds\;. } Since the assumption yields $B \in \CY^{j+k-2}(\A)$, the norm is bounded. The integral is also bounded because, by assumption, we have $\alpha + \gamma + \beta \le 4 - j - k$. To bound $T_{12}$, we observe that $[\Lambda^2, B] \in \CY^{j+k-1}(\A)$. Using \eref{e:op2}, we find the bound \equs{ \|T_{12}f\| &= \biggl\| \int_0^\infty z^{\alpha/2} \frac{\Lambda^\beta}{z + \Lambda^2} [\Lambda^2,B]\Lambda^{3-j-k-\beta}\frac{\Lambda^{\gamma+\beta+j+k-3}}{(z + \Lambda^2)^2}f\,dz\biggr\| \\ &\le \|f\|\; \sup_{y>0} \;\Bigl\|\frac{\Lambda^\beta}{y + \Lambda^2} [\Lambda^2,B]\Lambda^{3-j-k-\beta}\Bigr\| \int_0^\infty z^{\alpha/2} \sup_{\lambda \ge 1} \frac{\lambda^{\gamma+\beta+j+k-3}}{(z + \lambda^2)^2}\,dz\;. } This expression is bounded when $\alpha + \beta + \gamma \le 4 - j - k$ and $\alpha \in (-2,0)$. 
This can be seen by making as before the substitution $z \mapsto \lambda^2 s$. Before we go on, we introduce the notation $\Lambda_z \equiv (z + \Lambda^2)^{-1}$. \proclaim{Term $\boldsymbol{T_2}$.} This term is given by \equ{ T_2 = C_{\alpha} \int_0^\infty z^{\alpha/2} \frac{\Lambda^\beta}{z + \Lambda^2}A\frac{1}{z + \Lambda^2}B\frac{\Lambda^\gamma}{z + \Lambda^2}\,dz\;, } where we defined \equ{ A = [\Lambda^2,X] \qquad\text{and}\qquad B= [\Lambda^2,Y]\;. } Since $[\Lambda_z, B] = \Lambda_z[B,\Lambda^2]\Lambda_z$, the term appearing under the integral can be written as \equ{ \Lambda^\beta \Lambda_z A \Lambda_z B \Lambda_z \Lambda^\gamma = \Lambda^\beta \Lambda_z A B \Lambda_z^2 \Lambda^\gamma + \Lambda^\beta \Lambda_z A \Lambda_z [B,\Lambda^2] \Lambda_z^2 \Lambda^\gamma\;. } According to this, the term $T_2$ is split into two terms $T_{21}$ and $T_{22}$. We have \equ{ \|T_{21}f\| \le \|f\| \;\sup_{y>0}\;\bigl\|\Lambda^\beta \Lambda_y A B \Lambda^{-\beta-j-k}\bigr\|\int_0^\infty s^{\alpha/2} \sup_{\lambda \ge 1}\frac{\lambda^{\alpha+\beta+\gamma+j+k-4}}{(s+1)^2}\,ds\;. } The integral is bounded by hypothesis. The norm is also bounded, because $AB \in \CY^{j+k}(\A)$. For the second term, we have \equ{ \|T_{22}f\| \le \|f\| \;\sup_{y>0}\;\bigl\|\Lambda^\beta \Lambda_y A \Lambda_y [\Lambda^2,B] \Lambda^{-\beta-j-k}\bigr\|\int_0^\infty s^{\alpha/2} \sup_{\lambda \ge 1}\frac{\lambda^{\alpha+\beta+\gamma+j+k-4}}{(s+1)^2}\,ds\;. } This is bounded in the same fashion, noticing that \equ{ \sup_{y>0}\;\bigl\|\Lambda^\beta \Lambda_y A \Lambda_y [\Lambda^2,B] \Lambda^{-\beta-j-k}\bigr\| \le \sup_{x>0}\;\bigl\|\Lambda^\beta \Lambda_x A \Lambda^{-\beta-j}\bigr\| \sup_{y>0}\;\biggl\|\frac{\Lambda^2}{y + \Lambda^2} \Lambda^{j+\beta-2}[\Lambda^2,B] \Lambda^{-\beta-j-k}\biggr\|\;. } \proclaim{Term $\boldsymbol{T_3}$.} It can be bounded in the same way as $T_2$ by symmetry. We now have to check the assertion for the other values of $\alpha$. 
If $\alpha = 0$ or $\alpha = 2$, it holds trivially. For $\alpha > 0$, we proceed by induction, using the equality \equs[e:equInd]{ \bigl[[\Lambda^{\alpha+2},X],Y\bigr] &= \Lambda^2\bigl[[\Lambda^{\alpha},X],Y\bigr] + \Lambda^\alpha\bigl[[\Lambda^2,X],Y\bigr] + [\Lambda^\alpha,Y][\Lambda^2,X] + [\Lambda^2,Y][\Lambda^\alpha,X]\;. } For $\alpha = -2$, the assertion is proved using equality \eref{e:trans} with $z = 0$. For $\alpha < -2$, we also proceed by induction, using \eref{e:equInd} with $2$ replaced by $-2$. This completes the proof of Lemma \ref{lem:power}. \end{proof} \makeappendix{Proof of Proposition~\ref{prop:Tt}} \label{App:ops} \def{\tilde\CL}{{\tilde\CL}} \begin{proposition} ${\cal T}^t$, as defined in \eref{e:defTt}, extends uniquely to a quasi-bounded strongly continuous semi-group on $\Ltwo(\CX,\,dx)$. Its generator $L$ acts like $\CL$ on functions in $\cOinf[\CX]$. \end{proposition} \begin{proof} See the proof of Lemma~A.1 in \cite{EPR}. \end{proof} We now turn to the question of the domain of the generator $L$. Recall that $\fL$ is the set of all formal expressions of the form \equ{ \sum_{|l| \le k} a_l(x) D^l\;,\qquad k \ge 0\;,\quad a \in \CC^\infty(\R^n)\;. } To any element $L \in \fL$ having the above form, we associate its formal adjoint $L^* \in \fL$ in an obvious way. In the sequel, the notation $\scal{f,g}$ will be used to denote the scalar product in $\Ltwo$ if $f,g \in \Ltwo$ and the evaluation $f(g)$ if $f$ is a distribution and $g \in \cOinf[\R^n]$. We hope this slight ambiguity will not be too misleading. We associate to every $L \in \fL$ the operator $T_L : \CD(T_L) \to \Ltwo(\R^n)$ by \equ{ \bigl(T_L f\bigr)(x) = L f(x)\quad\text{and}\quad \CD(T_L) = \{f \in \Ltwo\;|\; Lf \in \Ltwo\}\;, } where $Lf$ has to be understood in the sense of distributions, \ie \equ{ \bigl(Lf\bigr)(g) \equiv f(L^* g)\quad\text{for all}\quad g\in \cOinf[\R^n]\;. 
} We also define the operator $S_L : \CD(S_L) \to \Ltwo(\R^n)$ by \equ{ S_L = \overline{T_L \upharpoonright \cOinf}\;. } The operators $T_L$ and $S_L$ are usually called the \emph{maximal operator} and the \emph{minimal operator} constructed from the \emph{formal operator} $L$. The following result is classical, so we do not give its proof here. \begin{proposition} \label{prop:maxmin} For every $L \in \fL$, we have $T_L^* = S_{L^*}$ and $S_L^* = T_{L^*}$. In particular, this shows that $T_L$ is closed.\phantom{a}\nobreak\hfill $\qedsquare$ \end{proposition} We now prove the quasi {\it m}-dissipativity of $S_\L$. We define \equ{ {\tilde\CL} \equiv \L - \sum_{i=1}^M \gamma_i -1 \;. } By definition, if $S_{\tilde\CL}$ is strictly {\it m}-dissipative, then $S_\L$ is quasi {\it m}-dissipative. It is well known that an equivalent characterization of strict {\it m}-dissipativity is that \begin{list}{$\bullet$}{\setlength{\leftmargin}{9mm} \setlength{\topsep}{0mm}\setlength{\parsep}{0mm}} \item[(a)] $S_{\tilde\CL}$ is strictly dissipative and \item[(b)] $\text{Range}(S_{\tilde\CL}) = \CH$. \end{list} \begin{proposition} \label{prop:maccr} Assume {\bf A0} holds. Then $S_{\tilde\CL}$ is strictly {\it m}-dissipative. \end{proposition} \begin{remark} It is clear that the statement holds if we consider the minimal operator in $\Ltwo(\CK, dx)$, where $\CK$ is some compact domain of $\CX$. The idea is to approximate $\CX$ by a sequence of increasing compact domains and to control the remainder terms. This proposition fills a gap in \cite{EPR}, since the statement ``$\Re(f, L^*f) = -\frac{1}{2}\|\sigma^T \nabla f\|^2 + (f,\div b\; f) \le B\|f\|^2$'' in the proof of Lemma~A.1 is not justified for every $f \in \CD(L^*)$. \end{remark} \begin{demo} Property (a) is immediate. 
By the closed-range theorem, property (b) is equivalent to the statement \begin{list}{$\bullet$}{\setlength{\leftmargin}{9mm} \setlength{\topsep}{0mm}\setlength{\parsep}{0mm}} \item[(b')] $f \in \Ltwo$ and ${\tilde\CL}^* f = 0$ imply $f = 0$. \end{list} Assume on the contrary that there exists a non-vanishing function $f\in \Ltwo$ for which ${\tilde\CL}^* f = 0$ holds in the sense of distributions. Since ${\tilde\CL}^*$ is hypoelliptic, $f$ must be a $\CC^\infty$ function. Let us choose some function $\phi \in \cOinf(\R_+)$ such that $\phi(x) = 1$ if $x \in [0,1]$. We also define \equs[2]{ \phi_n : \;&\CX &\;\to\; &\R \\ & x &\;\mapsto\; & \phi\bigl(G(x)/n\bigr)\;. } By assumption, ${\tilde\CL}^* f = 0$, so we have \equ{ 0 = 2\Re \scal{\phi_n f, {\tilde\CL}^* f} = \scal{\phi_n f, {\tilde\CL}^* f} + \scal{{\tilde\CL}^* f, \phi_n f}\;. } Since $\phi_n \in \cOinf$ and all the other functions are $\CC^\infty$, we can make all the formal manipulations we want. In particular, we have \equ[e:altzero]{ \scal{{\tilde\CL}^* f, \phi_n f} = \scal{f, {\tilde\CL} \phi_n f} \quad \Rightarrow\quad \scal[b]{f,(\phi_n {\tilde\CL}^* + {\tilde\CL} \phi_n) f} = 0\;. } Recall that ${\tilde\CL}$ is given by \equs[e:genChainApp]{ {\tilde\CL} &= \sum_{i=1}^M \lambda_i^2\gamma_i T_i \d_{r_i}^2 - \sum_{i=1}^M \gamma_i\bigl(r_i - \lambda_i^2 F_i(p,q)\bigr)\d_{r_i} + X^{H_{\!S}} - \sum_{i=1}^M r_i X^{F_i} - \sum_{i=1}^M \gamma_i -1\\ &\equiv \sum_{i=1}^M \zeta_i \d_{r_i}^2 + Y_0 - 1\;. 
} Straightforward computation yields \equs[e:developOp]{ \phi_n {\tilde\CL}^* + {\tilde\CL} \phi_n &= 2\sum_{i=1}^M \zeta_i \d_{r_i}\phi_n \d_{r_i} + \sum_{i=1}^M \zeta_i \bigl(\d_{r_i}^2 \phi_n\bigr) + [Y_0, \phi_n] - \phi_n\\[2mm] &= 2\sum_{i=1}^M \zeta_i \d_{r_i}\phi_n \d_{r_i} + \sum_{i=1}^M \zeta_i \Bigl(\frac 1n (\d_{r_i}^2 G) \phi''(G/n) + \frac 1{n^2}(\d_{r_i} G)^2 \phi'(G/n)\Bigr) \\[1mm] &\quad + \frac 1n \sum_{i=1}^M \frac{\gamma_i}{\lambda_i^2}\bigl(r_i - \lambda_i^2 F_i(p,q)\bigr)^2 \phi'(G/n) - \phi_n\\[2mm] &\equiv 2\sum_{i=1}^M \zeta_i \d_{r_i}\phi_n \d_{r_i} + \Phi_n - \phi_n\;. } Using {\bf A0}, we next verify that $|\Phi_n(x)| \le \tilde C$ for all $x \in \CX$ and for all $n \ge 1$. We define \equ{ c_1 \equiv \sup_{x \ge 0} \phi''(x)\quad\text{and}\quad c_2 \equiv \sup_{x \ge 0} x\phi'(x)\;. } An elementary computation shows that {\bf A0} implies that there exist constants $c_3,\ldots, c_5 > 0$ for which \equ{ \bigl| \d_{r_i}^2 G(x)\bigr| \le c_3\;, \quad \bigl| \d_{r_i} G(x)\bigr|^2 \le c_4 G(x) \;,\quad\text{and}\quad \bigl(r_i - \lambda_i^2 F_i(p,q)\bigr)^2 \le c_5 G(p,q,r)\;. } We thus have \equs{ |\Phi_n(x)| &\le \sum_{i=1}^M \Bigl( \frac{\zeta_i c_3}{n} \bigl|\phi''(G/n)\bigr| + \frac{\zeta_i c_4}{n} \bigl|(G/n) \phi'(G/n)\bigr| + \frac{\gamma_i c_5}{\lambda_i^2} \bigl| (G/n) \phi'(G/n)\bigr| \Bigr)\\ &\le \sum_{i=1}^M \Bigl( \zeta_i \frac{c_1 c_3 + c_2 c_4}{n} + \frac{\gamma_i c_2 c_5}{\lambda_i^2}\Bigr) \le \tilde C\;, } as asserted. Moreover, the first part of {\bf A0} implies that there exist constants $C, \alpha > 0$ such that \equ[e:suppPhi]{ \supp \Phi_n \subset \{x \in \CX \;|\; \|x\|^\alpha \ge n/C\}\;. } Substituting \eref{e:developOp} back into \eref{e:altzero}, we get \equ[e:altzero2]{ 0 = - 2\sum_{i=1}^M \zeta_i \bigl\|\sqrt{\phi_n} \d_{r_i} f\bigr\|^2 - \|\sqrt{\phi_n} f\bigr\|^2 + \int_\CX \Phi_n(x) |f(x)|^2\; dx\;. } Since $f \in \Ltwo(\CX)$, one has \equ{ \lim_{n \to \infty} \|\sqrt{\phi_n} f\bigr\|^2 = \|f\|^2\;. 
} Moreover, the uniform boundedness of $\Phi_n$ together with property \eref{e:suppPhi} implies that \equ{ \lim_{n \to \infty} \int_\CX \Phi_n(x) |f(x)|^2\; dx = 0\;. } This supplies the required contradiction to \eref{e:altzero2}, thus establishing the strict {\it m}-dissipativity of $S_{\tilde\CL}$. \end{demo} We now complete the \begin{demo}[of Proposition~\ref{prop:Tt}] It only remains to prove that $L = S_\L$ and that $L^* = S_{\L^*}$. It is clear that the generator $L$ of $\CT^t$ satisfies $S_\L \subset L$. Since $S_\L$ is quasi {\it m}-dissipative, \ie has no proper quasi dissipative extension, and since the generator of a quasi-bounded semi-group is always quasi {\it m}-dissipative, we must have $L = S_\L$. Concerning the adjoint, we have, by Proposition~\ref{prop:maxmin}, $L^* = T_{\L^*}$. It is possible to retrace the above argument for $\L^*$ to show that $S_{\L^*}$ is quasi {\it m}-dissipative. Since $L^*$ is also quasi {\it m}-dissipative and $S_{\L^*} \subset L^*$, we must have $L^* = S_{\L^*}$. \end{demo} \makeappendix{Proof of Lemma~\ref{lem:coregamma}} \label{App:core} Using the technique developed in Appendix~\ref{App:ops}, we can now turn to the proof of Lemma~\ref{lem:coregamma}. Recall that $K$ is given by \eref{e:defKchain} and that \equ{ W^{(\gamma)} = \Ltwo(\CX, G^{2\gamma}\,dx)\;. } Moreover, $K_\gamma$ is the maximal operator constructed from $K$ when considering it as a differential operator in $W^{(\gamma)}$. We have \begin{proposition} $\cOinf[\CX]$ is a core for $K_\gamma$. \end{proposition} \begin{proof} We introduce the unitary operator $U : W^{(\gamma)} \to \Ltwo(\CX)$ defined by \equ{ \bigl(U f\bigr)(x) = G^\gamma (x) f(x)\;. } We also define $K_\gamma^0 \equiv \overline{K_\gamma \upharpoonright \cOinf[\CX]}$. The operators $K_\gamma$ and $K_\gamma^0$ are unitarily equivalent to the operators $\tilde K_\gamma$ and $\tilde K_\gamma^0$, respectively, by the following relations. 
\[ \newdimen\UHeight \settoheight{\UHeight}{$\scriptstyle{U}$} \advance\UHeight 4mm \def\rule[-2mm]{0mm}{\the\UHeight}{\rule[-2mm]{0mm}{\the\UHeight}} \def\lef#1{\vcenter{\llap{$\scriptstyle{#1}$}}} \def\rig#1{\vcenter{\rlap{$\scriptstyle{#1}$}}} \begin{array}{CCCCCCC} \CD(K_\gamma) & \stackrel{K_\gamma}{\longrightarrow} & W^{(\gamma)} & \qquad\qquad\qquad & \CD(K_\gamma^0) & \stackrel{K_\gamma^0}{\longrightarrow} & W^{(\gamma)}\\[1mm] \lef{U \rule[-2mm]{0mm}{\the\UHeight}} \Big\downarrow \Big\uparrow \rig{\rule[-2mm]{0mm}{\the\UHeight} U^{-1}} & & \lef{U \rule[-2mm]{0mm}{\the\UHeight}}\Big\downarrow\Big\uparrow\rig{\rule[-2mm]{0mm}{\the\UHeight} U^{-1}} && \lef{U \rule[-2mm]{0mm}{\the\UHeight}} \Big\downarrow \Big\uparrow \rig{\rule[-2mm]{0mm}{\the\UHeight} U^{-1}} & & \lef{U \rule[-2mm]{0mm}{\the\UHeight}}\Big\downarrow\Big\uparrow\rig{\rule[-2mm]{0mm}{\the\UHeight} U^{-1}}\\[2mm] \CD(\tilde K_\gamma) & \underset{\tilde K_\gamma}{\longrightarrow} & \Ltwo(\CX) && \CD(\tilde K_\gamma^0) & \underset{\tilde K_\gamma^0}{\longrightarrow} & \Ltwo(\CX) \end{array} \] By construction, $\tilde K_\gamma$ is maximal. Thus, by Proposition~\ref{prop:maxmin}, its adjoint $\tilde K_\gamma^*$ is minimal. It is immediate that the formal expressions for $\tilde K_\gamma^*$ and $K_\gamma^0$ are given by \equ{ \tilde K_\gamma^* = G^{-\gamma} K^* G^\gamma \qquad\text{and}\qquad \tilde K_\gamma^0 = G^\gamma K G^{-\gamma}\;. } It is now a simple exercise to retrace the proof of Proposition~\ref{prop:maccr} to see that $\tilde K_\gamma^*$ and $\tilde K_\gamma^0$ are both {\it m}-accretive. The remark of Section~\ref{sec:defs} concerning the adjoints of {\it m}-accretive operators implies that $\tilde K_\gamma$ is also {\it m}-accretive. Since $\tilde K_\gamma^0 \subset \tilde K_\gamma$, we must have $\tilde K_\gamma^0 = \tilde K_\gamma$ and thus $K_\gamma^0 = K_\gamma$. This proves the assertion. 
\end{proof} \markboth{\sc \refname}{\sc \refname} \def\Rom#1{\uppercase\expandafter{\romannumeral #1}} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\section{Introduction} \label{sec:intro} It is widely known that a large fraction of local galaxies contain emission line nuclei that are the result of low nuclear activity. \citet{ho95apjs:optspec,ho97apjs:specparam,ho97apjs:broadHal} showed that more than 40\% of 486 nearby galaxies with $B_{T}\le12.5$~mag could be considered as active, with optical spectra classified as Seyfert nuclei, low-ionization nuclear emission-line regions \citep[LINERs,][]{heckman80aap}, or transition objects (objects having intermediate spectra between LINERs and H~II nuclei). Pure LINER sources are the most common class, representing 20\% of all 486 galaxies. The ionization mechanism responsible for the excitation of the emission lines in LINER sources is an ongoing matter of debate and could be explained in terms of: shock heated gas \citep{dopita95apj:shocliner}, starburst activity \citep{alonso-herrero00ApJ:starburstinliners,terlevich95mnras:starburstliner}, or a low luminosity active galactic nucleus (AGN). Many multiwavelength studies were devoted to this subject, looking for a radio, sometimes variable, core \citep{4278nagar05aap} or a variable UV core \citep{maoz05apj:linervarUV} in nearby LINER sources. Nevertheless, the most widely used tool to search for an active nucleus in a LINER is to look for a hard 2-10~keV unresolved core that could not be due to diffuse emission from shock heated gas or from unusually hot stars \citep{terashima00ApJ:liners,ho01apjl,dudik05apj:hardcoreliner,flohic06apj,gonzalezmartin06aa,gonzalezmartin09aa,zhang09apj:llagnxray}. How do LINERs harboring a low luminosity active nucleus compare to luminous Seyfert galaxies and quasars? \citet{maoz07MNRAS}, using high angular resolution multiwavelength observations of 13 LINER sources, demonstrated that the luminosity ratios in different wavebands, mainly UV to X-ray and radio to UV luminosities, follow the same trend as in luminous Seyfert galaxies. 
The authors did not find any sharp change in the spectral energy distribution (SED) of their sample of 13 LINERs compared to more luminous Seyfert and quasar nuclei, suggesting that a thin accretion disk may persist at low accretion rates. Moreover, \citet{pianmnras10} detected up to 30\%\ flux variations on a half-day time-scale in two (NGC~3998 and M81) of the four LINER and low luminosity AGN sources they observed in X-rays with the XRT onboard {\sl Swift}. They combined their X-ray fluxes with simultaneous UV fluxes coming from the UVOT instrument and showed that the SED and the UV to X-ray flux ratios of their four-source sample are consistent with those of more luminous sources, and hence that LINERs may have accretion and radiative processes at their center similar to those of luminous Seyfert nuclei. On the other hand, the faintness of LINER sources compared to luminous Seyfert galaxies and quasars has been attributed to a different accretion mechanism, owing to several observational contrasts between the two classes: neither a broad nor a narrow Fe~K$\alpha$ emission line at 6.4~keV has been detected in the spectra of the LINER sources with the highest signal to noise ratio \citep{ptak04apj:ngc3998,binder09apj:ngc3226}, X-ray short time-scale variability has been detected in only a few sources \citep{ptak98apj:variance,awaki01pasj:varliner}, and the big blue bump in the UV band disappears from the SED of LINER sources \citep{ho99sed,ho08aa:review}. All these signatures could indicate the disappearance of the thin accretion disk at low luminosities and that a different accretion mechanism is responsible for the emission in LINER sources. It has been suggested that when the mass accretion rate falls below a critical value $\dot{M}_{crit}$, the density of the disk could become too low for radiative cooling to be effective. 
The trapped heat will then expand the thin accretion disk into a pressure-supported thick disk with a very low radiative efficiency \citep[see ][for reviews]{quataert01aspc:riaf,narayan08:riafreview}. Such radiatively inefficient accretion flow (RIAF) models successfully explained the spectral energy distribution of a large number of LINER sources \citep{ptak04apj:ngc3998,nemmen06apj:ngc1097,nemmen10:lineradaf}. Another way to assess the geometry of the accretion mode in AGN is to compare them to their less massive counterparts, X-ray binaries (XRBs). \citet{shemmer06apj:rqagn} showed that the X-ray spectral slope, $\Gamma$, of Seyfert~1 galaxies and quasars and the Eddington ratio, $L_{bol}/L_{Edd}$, are positively correlated, similar to XRBs in their high/soft state \citep[see also ][and references therein]{shemmer08apj:gamvsedd}. Such a behavior could be explained by an AGN accretion mode consistent with an optically thick, geometrically thin accretion disk \citep{shakura73aa}. \citet{gu09mnras:gamVSeddllagn} performed a similar study on a broad sample of LINERs and low luminosity Seyfert galaxies. They found a significant anticorrelation between $\Gamma$ and the Eddington ratio for the local Seyfert galaxies in their sample, analogous to XRBs in the low/hard state where a RIAF mode of accretion takes place. However, no strong correlation was found when considering only the LINER sources in their sample, owing, as the authors suggest, to the heterogeneous fitting models used in the different studies from which the data were collected. In a separate study, \citet{constantin09ApJ:liners} analyzed the X-ray emission of a sample of 107 nearby galaxies including low luminosity Seyferts, LINERs, transitions (nuclei with spectra between Seyferts and LINERs), H~II regions, and passive galaxies (lacking optical emission-line activity), {\sl none of which show broad-line components}. 
Using a Spearman-rank correlation, the authors found an anticorrelation for their sample between $\Gamma$ and $L_{bol}/L_{Edd}$. By considering each class separately, a Spearman-rank test showed that the anticorrelation persists for the different objects, except for the low luminosity Seyfert galaxies. Finally, broad optical emission lines, a characteristic property of classical Seyferts and quasars, are also found in nuclei of much lower luminosities. Thirty three sources out of the 221 nuclei classified as Seyfert, LINER, or transition objects in the \citet{ho95apjs:optspec} sample of nearby galaxies show a definite detection of a broad H$\alpha$ emission line; 16 of those ($\sim$17\%\ of all the pure LINER sources) are LINERs \citep[noted as LINER~1.9 in ][LINER~1s hereinafter]{ho97apjs:broadHal}. In this paper, we aim to study the X-ray characteristics of these LINER~1s observed with the current generation of X-ray telescopes, \xmm\ and \chandra. Such a sample ensures that accretion onto a supermassive black hole (SMBH) is responsible for the formation of the broad emission lines (given the early-type nature of these hosts, where outflows from massive stars and/or supernovae are not expected to be relevant), guarantees the absence of heavy obscuration, and enables an X-ray comparison of this class with both XRBs and type 1 AGN. We introduce our sample in \S~2 and present the observations and the data reduction in \S~3. Temporal and spectral results are given in \S~4. In \S~5 we discuss the results in the context of LINER~1s-Seyfert-XRB connections, and a conclusion summarizing the main results is given in \S~6. We report, in Appendix A, some notes on the individual sources, and in Appendix B we give spectral results for the sources surrounding the centers of the galaxies observed with \chandra. In the remainder of this paper, luminosities are calculated using the distances given in Table~\ref{galaxy-param}, derived with a Hubble constant $H_{0}=75$~km~s$^{-1}$~Mpc$^{-1}$. 
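All luminosities quoted below follow from observed fluxes and the Table~\ref{galaxy-param} distances via $L = 4\pi d^{2} F$. A minimal Python sketch of this conversion (the flux value used in the example is hypothetical, chosen only for illustration, not a measurement from this paper):

```python
# Flux-to-luminosity conversion: L = 4*pi*d^2*F, with d in Mpc (Table 1)
# and F in erg/s/cm^2.  The example flux below is made up for illustration.
import math

CM_PER_MPC = 3.086e24  # centimetres per megaparsec

def luminosity(flux_cgs, distance_mpc):
    """Isotropic luminosity in erg/s from a flux (erg/s/cm^2) and a distance (Mpc)."""
    d_cm = distance_mpc * CM_PER_MPC
    return 4.0 * math.pi * d_cm ** 2 * flux_cgs

# e.g. a hypothetical 2-10 keV flux of 1e-12 erg/s/cm^2 at the distance of
# NGC 4278 (16.1 Mpc, Table 1) gives L_X of order a few times 1e40 erg/s:
print(f"L_X = {luminosity(1e-12, 16.1):.2e} erg/s")
```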
\section{The sample} \label{sec:sample} We selected objects classified as LINER~1.9 (LINER~1s) sources in \citet{ho97apjs:broadHal} showing a definite detection of a broad H$\alpha$ emission line. This implies the definite existence of an AGN at the center of all of the sixteen selected galaxies and its responsibility for the excitation of the detected optical emission lines. \begin{table*}[!th] \caption{Properties of the 13 LINER~1s showing definite detection of a broad H$\alpha$ emission (taken from \citealt{ho97apjs:broadHal} sample).} \label{galaxy-param} \newcommand\T{\rule{0pt}{2.6ex}} \newcommand\B{\rule[-1.2ex]{0pt}{0pt}} \begin{center}{ \begin{tabular}{c c c c c c} \hline \hline Galaxy Name\T\B & R.A. & Dec. & Hubble Type & Distance$^{a}$ & log($M_{BH}$)$^{b}$ \\ \T & & & & (Mpc) & ($M_{\odot}$) \\ \hline \object{NGC-266} \T& 00 49 47.8 & +32 16 40& SB(rs)ab & 62.4 & 8.44 \\ \object{NGC-315} \T& 00 57 48.9 & +30 21 09& E+: & 65.8 & 9.06 \\ \object{NGC-2681} \T& 08 53 32.7 & +51 18 49& (R')SAB(rs)0/a & 17.2 & 6.78 \\ \object{NGC-2787} \T& 09 19 18.5 & +69 12 12& SB(r)0+ & 7.48 & 8.15 \\ \object{NGC-3226} \T& 10 23 27.0 & +19 53 55& E2:pec & 23.6 & 8.06 \\ \object{NGC-3718} \T& 11 32 34.8 & +53 04 05& SB(s)a pec & 17.0 & 7.61 \\ \object{NGC 3998} \T& 11 57 56.1 & +55 27 13& SA(r)0? & 14.1 & 9.07 \\ \object{NGC 4143} \T& 12 09 36.0 & +42 32 03& SAB(s)0 & 15.9 & 8.18 \\ \object{NGC-4203} \T& 12 15 05.0 & +33 11 50& SAB0-: & 15.1 & 7.73 \\ \object{NGC-4278} \T& 12 20 06.8 & +29 16 51& E1+ & 16.1 & 8.72 \\ \object{NGC-4750} \T& 12 50 07.2 & +72 52 28& (R)SA(rs)ab & 26.1 & 7.27 \\ \object{NGC-4772} \T& 12 53 29.1 & +02 10 06& SA(s)a & 16.3 & 7.46 \\ \object{NGC-5005} \T& 13 10 56.2 & +37 03 33& SAB(rs)bc & 21.3 & 7.79 \\ \hline \end{tabular}} \end{center} \begin{list}{}{} \item[{\bf Notes.}]$^{a}$Distances adapted from \citet{tonry01apj:dist}, otherwise from \citet{tully88agn:dist}. 
$^{b}$Black hole mass calculated using the \citet{graham10:bhmass} updated M-$\sigma$ relation of \citet{termaine02ApJ:Mbh}, with stellar velocity dispersions taken from \citet{ho09apjs:veldisp}. \end{list} \end{table*} We excluded three sources from the sample: NGC~3642, NGC~4636, and NGC~1052. NGC~3642 did not have any archived \xmm\ or \chandra\ observations. As for NGC~4636, all of the X-ray archived observations were studied in extreme detail \citep{jones02apj:ngc4636,xu02apj:ngc4636,ohto03pasj:ngc4636,osullivan05apj:ngc4636,baldi09apj:ngc4636,xu10raa:ngc4636} and show a complicated spectrum that requires, for a reliable measurement of the spectral parameters, detailed imaging analysis and modeling that includes sophisticated shock models, temperature and density gradients, and, last but not least, steep abundance gradients in the core. Finally, the broad H$\alpha$ emission line detected in the spectrum of NGC~1052 is attributed to polarization due to electron scattering within the opening cone of an obscuring torus \citep{barth99apj:ngc1052}. NGC~1052 was classified as an obscured AGN showing a large intrinsic absorption in the X-ray spectrum \citep[$N_{H}\approx10^{23}$~cm$^{-2}$,][]{guainazzi99mnras:ngc1052}. Table~\ref{galaxy-param} lists the galaxies along with their corresponding right ascension and declination, Hubble type, distance (taken from \citealt{tonry01apj:dist}, otherwise from \citealt{tully88agn:dist}), and the mass of the black hole derived from the M-$\sigma$ relation \citep{termaine02ApJ:Mbh,graham10:bhmass}, where the velocity dispersion is taken from \citet{ho09apjs:veldisp}. Multiple snapshot observations (exposure time $\le5$~ks) were excluded from the analysis because of high background contamination (NGC~4143, obs. ID: 0150010201), a low number of detected counts (NGC~2787, obs. ID: 388), or severe pile-up (NGC~4203 and NGC~4278, obs. IDs: 397 and 398, respectively). 
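The black-hole masses in Table~\ref{galaxy-param} come from an M-$\sigma$ relation. As a hedged illustration, the sketch below uses the original \citet{termaine02ApJ:Mbh} calibration, $\log_{10}(M_{BH}/M_{\odot}) = 8.13 + 4.02\,\log_{10}(\sigma/200~\mathrm{km\,s^{-1}})$; the table itself uses the \citet{graham10:bhmass} update, whose coefficients differ, so both the coefficients and the example dispersion here are assumptions for illustration only:

```python
# M-sigma black-hole mass estimate (a sketch).  The coefficients are the
# Tremaine et al. (2002) values, NOT the Graham (2010) update used for
# Table 1, so the output is indicative only.
import math

def log_mbh_msigma(sigma_kms, alpha=8.13, beta=4.02):
    """log10 of the black-hole mass (in solar masses) from the stellar
    velocity dispersion sigma (km/s)."""
    return alpha + beta * math.log10(sigma_kms / 200.0)

print(log_mbh_msigma(200.0))  # -> 8.13 by construction
print(round(log_mbh_msigma(100.0), 2))
```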
The final sample consists of 13 LINER~1s with a total of 31 observations summarized in Table~\ref{obs-param}. \begin{table*}[!th] \caption{Log of the \chandra\ and \xmm\ X-ray observations.} \label{obs-param} \newcommand\T{\rule{0pt}{2.6ex}} \newcommand\B{\rule[-1.2ex]{0pt}{0pt}} \begin{center}{ \resizebox{\textwidth}{!}{ \begin{tabular}{c c c c c c} \hline \hline \textbf{Source Name} & \textbf{Satellite} \T \B & \textbf{Instrument} & \textbf{Start Date} & \textbf{Obs. ID} & \textbf{Net Exposure-time}\\ & \T \B & & & & (ks)\\ \hline NGC-266 & \chandra \T & ACIS-S & 2001 June 01 & 1610 & 2.0 \\ NGC-315 & \chandra \T & ACIS-S & 2000 October 08 & 855 & 4.7 \\ & \chandra \T & ACIS-S & 2003 February 22 & 4156 & 55.0 \\ & \xmm \T & EPIC & 2005 July 02 & 0305290201 & 13.2/23.2/23.2$^b$ \\ NGC-2681 & \chandra \T & ACIS-S & 2001 January 30 & 2060 & 80.9 \\ & \chandra \T & ACIS-S & 2001 May 02 & 2061$^{a}$ & 79.0 \\ NGC-2787 & \chandra \T & ACIS-S & 2004 May 18 & 4689 & 30.8 \\ & \xmm \T & EPIC & 2004 October 10 & 0200250101 & 14.2/28.5/28.0$^b$ \\ NGC-3226 & \chandra \T & ACIS-S & 1999 December 30 & 860 & 46.6 \\ & \chandra \T & ACIS-S & 2001 March 23 & 1616 & 2.2 \\ & \xmm \T & EPIC & 2002 March 29 & 0101040301 & 30.5/36.9/36.9 \\ & \xmm \T & EPIC & 2008 January 09 & 0400270101 & 94.3$^b$ \\ NGC-3718 & \chandra \T & ACIS-S & 2003 February 08 & 3993 & 4.9 \\ & \xmm \T & EPIC & 2004 May 02 & 0200430501$^{a}$ & 9.6\\ & \xmm \T & EPIC & 2004 November 04 & 0200431301$^{a}$ & 8.8\\ NGC 3998 & \chandra \T & ACIS-S & 2006 July 01 & 6781 & 13.6 \\ & \xmm \T & EPIC & 2001 May 09 & 0090020101 & 8.9/12.5/12.5 \\ NGC 4143 & \chandra \T & ACIS-S & 2001 March 26 & 1617 & 2.5\\ & \xmm \T & EPIC & 2003 November 22 & 0150010601$^{a}$ & 9.3/11.9/11.9 \\ NGC-4203 & \chandra \T & ACIS-S & 2009 March 10 & 10535$^{a}$ & 41.6 \\ NGC-4278 & \chandra \T & ACIS-S & 2005 February 02 & 4741 & 37.5 \\ & \chandra \T & ACIS-S & 2006 March 16 & 7077 & 110.3 \\ & \chandra \T & ACIS-S & 2006 July 25 & 7078 
& 51.4 \\ & \chandra \T & ACIS-S & 2006 October 24 & 7079 & 105.1 \\ & \chandra \T & ACIS-S & 2007 February 20 & 7081 & 110.7\\ & \chandra \T & ACIS-S & 2007 April 20 & 7080 & 55.8 \\ & \xmm \T & EPIC & 2004 May 23 & 205010101 & 30.3/35.2/35.2\\ NGC-4750 & \chandra \T & ACIS-S & 2003 August 27 & 4020 & 4.9 \\ NGC-4772 & \chandra \T & ACIS-S & 2003 February 14 & 3999$^{a}$ & 4.7 \\ NGC-5005 & \chandra \T & ACIS-S & 2003 August 19 & 4021 & 4.9 \\ & \xmm \T & EPIC & 2002 December 12 & 0110930501 & 8.7/13.1/13.1 \\ \hline \end{tabular}}} \end{center} \begin{list}{}{} \item[{\bf Notes.}]$^{a}$Observations reported for the first time for the LINER~1 nucleus study. $^{b}$Exposure time corrected for solar flare intervals. \end{list} \end{table*} \section{X-ray observations and data reduction} \label{sec:reduction} \subsection{\chandra\ observations} \label{chan-obs} All of the LINER~1s in our sample have at least one \chandra\ observation. Snapshot observations with an exposure time of less than 5~ks were performed for eight sources (NGC~266, NGC~315, NGC~3226, NGC~3718, NGC~4143, NGC~4750, NGC~4772, and NGC~5005). Seven sources have observations with a sufficient exposure time for a detailed temporal and spectral study (NGC~315, NGC~2681, NGC~2787, NGC~3226, NGC~3998, NGC~4203, and NGC~4278). All of the \chandra\ observations were obtained with the spectroscopic array \citep[ACIS-S;][]{weisskopf02PASP}, with the nucleus placed at the aim point of the back-illuminated ACIS-S3 chip (except for NGC~3226). They were taken in either Faint or Very Faint mode to increase their sensitivity. All of the observations are \chandra\ archival data obtained from chaser\footnote{\label{chaser}http://cda.harvard.edu/chaser/Descriptions.}. The log of the \chandra\ observations is given in Table~2. 
All \chandra\ observations were reduced and analyzed in a systematic, homogeneous way \citep[as in][hereinafter Y10]{younes10aa:ngc4278} using the CIAO software package version 4.2, \chandra\ Calibration Database, CALDB, version 4.3.1, and the ACIS Extract (AE) software package version 3.175 \footnote{\label{AEnote} The {\em ACIS Extract} software package and User's Guide are available at http://www.astro.psu.edu/xray/acis/acis\_analysis.html.} \citep{broos2010AE}. We started by using the level 1 event file produced by the \chandra\ X-ray Center (CXC) to suppress the position randomization applied by the CXC Standard Data Processing when creating a level 2 event file. We also corrected for the effects of charge-transfer inefficiency on event energies and grades. We filtered out bad event grades (only ASCA grades 0, 2, 3, 4 and 6 are accepted) and hot columns to account for several potential issues such as cosmic rays and bad pixels. Good time intervals, supplied by the pipeline, were applied to the final products. The LINER nucleus source position is determined by running a wavelet transform detection algorithm, the {\sl wavdetect} program within the CIAO data analysis system \citep{wavdetect:freeman02apjs}. This position is then given to the AE software, which refines it, extracts source photons, constructs local backgrounds, extracts source and background spectra, computes redistribution matrix files (RMFs) and auxiliary response files (ARFs) by spawning the {\sl mkarf} and {\sl mkacisrmf} routines of CIAO, and performs spectral grouping and fitting. Source events are extracted around the source centroid, inside a polygonal shape of the local PSF, generated by MARX\footnote{http://space.mit.edu/ASC/MARX/} at the energy 1.497 keV using the {\sl ae\_make\_psf} tool implemented in AE. The background region is defined following the AE procedure. 
The background region is an annulus centered on the source position, whose inner radius is set to 1.1 times the 99\% encircled-energy radius and whose outer radius is set such that the background includes between 100 and 200 counts (depending on the brightness of the source). This background is obtained from a special image where all events within the $\sim1.1\times99$\% PSF circles of all the sources in the field were excluded ({\sl swiss cheese image}). The background was modeled for snapshot observations. Piled-up observations were accounted for by excluding the core of the PSF (see Y10 for more details). We used the tool {\sl dmextract}, called by the AE software, to create spectra over the energy range 0.5--8~keV, and the tool {\sl ae\_group\_spectrum}, implemented in AE, to group the spectra. Channels between 0.5 and 8 keV are grouped to have a three sigma ($3\sigma$) signal-to-noise ratio, corresponding to a minimum of 20 counts per bin, to enable the use of the $\chi^{2}$ statistic in the spectral analysis. The Cash statistic (C-stat) was used to derive spectral parameters for the snapshot \chandra\ observations, with the background modeled with the {\sl cplinear} background model developed by \citet{broos2010AE}. This background model is chosen to consist of continuous piecewise-linear ({\sl cplinear}) functions with 2 to 10 vertexes, the 2 to 10 model parameters representing the X-ray fluxes at the different vertexes. These vertexes are placed on the energy scale so that they divide the energy range into intervals with approximately equal numbers of observed counts in the background spectrum (0.1 to 10 keV). Vertex energies are chosen to coincide with the energies of actual events in the background, which helps to prevent the vertex flux from being driven to the hard limit of zero during the fitting process \citep[see \S~7.5 of ][]{broos2010AE}.
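The vertex-placement scheme just described can be sketched in a few lines. The following Python illustration (the function name, toy event list, and vertex count are invented here for illustration; this is not the actual AE implementation) places each vertex at an actual event energy while splitting the background events into approximately equal-count intervals:

```python
import random

def cplinear_vertices(event_energies, n_vertices):
    """Place vertex energies so that they divide the background events
    into (n_vertices - 1) intervals with approximately equal counts,
    snapping each vertex to an actual event energy (sketch of the
    scheme described by Broos et al. 2010)."""
    ev = sorted(event_energies)
    last = len(ev) - 1
    # equally spaced ranks across the sorted event list
    idx = [round(i * last / (n_vertices - 1)) for i in range(n_vertices)]
    return [ev[i] for i in idx]

# toy background spectrum: 150 events between 0.1 and 10 keV
random.seed(0)
events = [random.uniform(0.1, 10.0) for _ in range(150)]
verts = cplinear_vertices(events, 5)  # first/last vertexes bracket the band
```

By construction the vertexes are monotonically increasing and the first and last coincide with the softest and hardest background events, so no flux parameter sits outside the populated part of the spectrum.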
\subsection{\xmm\ observations} \label{xmmobs} The log of the \xmm\ observations is listed in Table~2. Eight sources were observed at least once with \xmm\ (NGC~315, NGC~2787, NGC~3226, NGC~3718, NGC~3998, NGC~4143, NGC~4278, and NGC~5005) and two have multiple observations (NGC~3226 and NGC~3718). In all of the observations, the EPIC-pn \citep{struder01aa} and MOS \citep{turner01aa} cameras were operated in Imaging, Prime Full Frame or Large Window Mode (except for the long NGC~3226 observation, where the MOS cameras were operating in Small Window Mode\footnote{NGC~3226 is off axis during the long observation and hence is not detected with the MOS cameras when operating in a Small Window Mode}) using the thin or medium filter. The Reflection Grating Spectra show only a few counts for all the different observations and were therefore not included in our analysis. We did not make use of the optical/UV data taken with the optical/UV monitor (OM) instrument \citep{mason01aa:om} since this paper concentrates on the X-ray characteristics of this sample; a multiwavelength study of our sample will be presented in a forthcoming paper. All data products were obtained from the XMM-Newton Science Archive (XSA)\footnote{http://xmm.esac.esa.int/xsa/index.shtml} and reduced using the Science Analysis System (SAS) version 9.0. Data were selected using event patterns 0--4 and 0--12 for pn and MOS, respectively, keeping only good X-ray events (``FLAG$=$0''). None of the EPIC observations were affected by pile-up, but severe intervals of enhanced solar activity, where the background count rate even exceeds the source count rate, were present during several observations (NGC~315, NGC~2787, NGC~3226). In these cases, we reduced the background contribution to $5\%$ by excluding the high-background intervals, which reduces the observation time usable for spectral analysis, sometimes to less than 30$\%$ of the raw exposure time (NGC~315).
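The flare screening described above amounts to discarding the time bins with the highest background rates until the surviving background falls below the desired fraction of the source counts. A schematic version in Python (the rates, the greedy accept-quietest-bins-first scheme, and the bookkeeping are illustrative assumptions; the actual screening was done on good-time intervals with the SAS tasks):

```python
def filter_flares(src_rate, bkg_rate, max_bkg_frac=0.05):
    """Keep the quietest time bins first, rejecting any bin whose
    inclusion would push the accumulated background above
    `max_bkg_frac` of the accumulated source counts."""
    order = sorted(range(len(bkg_rate)), key=lambda i: bkg_rate[i])
    kept, src_tot, bkg_tot = [], 0.0, 0.0
    for i in order:  # quietest background bins first
        new_src = src_tot + src_rate[i]
        new_bkg = bkg_tot + bkg_rate[i]
        if new_src > 0 and new_bkg / new_src <= max_bkg_frac:
            kept.append(i)
            src_tot, bkg_tot = new_src, new_bkg
    return sorted(kept)

# ten illustrative 1 ks bins; the last two mimic solar-flare intervals
src = [1.0] * 10
bkg = [0.02] * 8 + [2.0, 3.0]
good = filter_flares(src, bkg)  # the two flare bins are rejected
```

As in the NGC~315 case mentioned above, when the flaring is strong the usable exposure shrinks to the small set of quiet bins.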
\xmm\ source events of all of the LINER~1s in our sample were extracted from a circle centered at the nucleus using two different radii of 10\hbox{$^{\prime\prime}$}\ and 25\hbox{$^{\prime\prime}$}\footnote{The lower 10\hbox{$^{\prime\prime}$}\ limit was taken so as to encircle at least 50\%\ of the EPIC \xmm\ PSF.}. We compared light curves and spectra of both extraction regions to check whether any of the components (jet emission, diffuse emission, and/or unresolved point-like sources) detected in the \chandra\ image between 10\hbox{$^{\prime\prime}$}\ and 25\hbox{$^{\prime\prime}$}\ around the nucleus contaminate the \xmm\ nucleus emission (see Appendix~B and online Fig.~1--4). No change is seen in the light curves, and the fit parameters of the two extracted spectra were consistent within the error bars. Therefore, and to achieve better statistics and better constrain fit parameters, source events of all of the LINER~1s in our sample observed with \xmm\ were taken from the 25\hbox{$^{\prime\prime}$}-radius circle centered on the nucleus. We added the spectral contribution of the different components (jet emission, diffuse emission, and/or unresolved point-like sources), derived from the \chandra\ observation and detected in a 25\hbox{$^{\prime\prime}$}-radius circle around the nucleus (see Appendix~B for more details), to the spectral model used to fit the \xmm\ spectrum. The particular case of NGC~4278 is discussed in detail in Y10. Background events are extracted from a source-free circle on the same CCD, with a radius twice that of the source region. We generated response matrix files using the SAS task {\sl rmfgen}, while ancillary response files were generated using the SAS task {\sl arfgen}. The EPIC spectra were created in the energy range 0.5--10~keV to enable flux and model-parameter comparison with \chandra\ spectra. They were grouped to have a signal-to-noise ratio of 3 with a minimum of 20 counts per bin to allow the use of the $\chi^2$ statistic.
\onlfig{1}{ \begin{figure}[] \centerline{\includegraphics[angle=0,width=0.47\textwidth]{16806fg1.pdf}} \caption{\chandra\ image of the central 25\hbox{$^{\prime\prime}$}\ of NGC~315. Jet spectrum is extracted from an ellipse with a semi-major axis of 11.3\hbox{$^{\prime\prime}$}\ and a semi-minor axis of 5.6 \hbox{$^{\prime\prime}$}. Core emission is extracted from a circle centered on the source with a radius comprising 99\%\ of the PSF ($\sim$2.7\hbox{$^{\prime\prime}$}). The rest of the medium inside a 25\hbox{$^{\prime\prime}$}\ circle is considered diffuse emission. See Appendix~B for more details.} \label{chanim1} \end{figure} } \onlfig{2}{ \begin{figure}[] \centerline{\includegraphics[angle=0,width=0.47\textwidth]{16806fg2.pdf}} \caption{\chandra\ image of the central 25\hbox{$^{\prime\prime}$}\ of NGC~2787. A point-like source south-east of the central LINER is present with a luminosity comparable to the core luminosity. Another six point-like sources, marked in white, are present in the field. See Appendix~B for more details.} \label{chanim2} \end{figure} } \onlfig{3}{ \begin{figure}[] \centerline{\includegraphics[angle=0,width=0.47\textwidth]{16806fg3.pdf}} \caption{\chandra\ image of the central 25\hbox{$^{\prime\prime}$}\ of NGC~3226. Two sources are present in the field marked source~1 and source~2. See Appendix~B for more details.} \label{chanim3} \end{figure} } \onlfig{4}{ \begin{figure}[] \centerline{\includegraphics[angle=0,width=0.47\textwidth]{16806fg4.pdf}} \caption{\chandra\ image of the central 25\hbox{$^{\prime\prime}$}\ of NGC~3998. Only one source, much fainter than the core, is present in the field. See Appendix~B for more details. 
The horizontal bright line corresponds to the readout streak events.} \label{chanim4} \end{figure} } \section{Results} \label{resultsec} \subsection{Light curves and hardness ratios} \label{lightcurvesec} Temporal analysis was done only for the long-exposure observations, excluding the snapshot \chandra\ observations. Light curves and corresponding hardness ratios, defined as $HR=(H-S)/(H+S)$, where $S$ is the count rate in the soft 0.5-2~keV band and $H$ is the count rate in the hard 2-10~keV band, were extracted for all of the long observations. We corrected the net count rate of the piled-up sources for the excluded fraction of the PSF. \chandra\ and \xmm\ light curves were all binned with a time bin size of 1~ks for a reliable rms variability analysis. We first conducted a Kolmogorov-Smirnov (K-S) test to examine any potential variability within each observation. Based on this test, we do not find short time-scale (hours to days) variability in 5/10 sources (NGC~315, NGC~2681, NGC~3718, NGC~3998, and NGC~5005), with a K-S test probability $>$10$\%$ that the nuclear emission originates from a constant source. Three sources (NGC~2787, NGC~4143, and NGC~4203) indicate possible variability, with a K-S test probability between 4\%\ and 2\%\ that the nuclear emission originates from a constant source. Two \xmm\ observations of two different sources exhibit significant short time-scale variability, both already reported in the literature: NGC~4278 (obs. ID: 205010101, Y10) and NGC~3226 \citep[obs.ID: 0400270101, ][]{binder09apj:ngc3226}, where the K-S test gives a probability of less than $1\%$ that the core emission originates from a constant source. NGC~4278 shows a 10$\%$ increase at the beginning of the observation on a time-scale of $\sim$1.5~hours; following that hint of variability, the emission remains constant for the rest of the observation.
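The hardness ratio defined above is computed per time bin; a minimal sketch in Python (the count rates here are invented for illustration, not taken from the actual light curves):

```python
def hardness_ratio(soft, hard):
    """HR = (H - S) / (H + S) per time bin, with S the soft (0.5-2 keV)
    and H the hard (2-10 keV) count rate; None where the total is zero."""
    return [(h - s) / (h + s) if (h + s) > 0 else None
            for s, h in zip(soft, hard)]

soft = [0.30, 0.28, 0.32]   # illustrative 1 ks-binned count rates
hard = [0.10, 0.12, 0.08]
hr = hardness_ratio(soft, hard)
```

By construction HR lies between $-1$ (all soft counts) and $+1$ (all hard counts), so a rising HR at constant total rate signals spectral hardening rather than a flux change.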
As for NGC~3226, variability is clear throughout the whole observation, with a total increase of $\sim$60$\%$ detected between the beginning and the end of the $\sim$100~ks observation \citep{binder09apj:ngc3226}. The \xmm\ light curve of NGC~3226 is shown in Fig.~\ref{light-hardness}, and the \xmm\ and \chandra\ light curves of the other sources are given in online Fig.~\ref{LCxmmallLINERs} and Fig.~\ref{LCchandraallLINERs}. \begin{figure}[] \includegraphics[angle=0,width=0.5\textwidth]{16806fg5.pdf} \caption{Light curve ({\sl upper panel}) and hardness ratio ({\sl lower panel}) of the $\sim$100~ks \xmm\ observation of NGC~3226, binned to a 1~ks resolution. The dashed lines show the average count rate and hardness ratio.} \label{light-hardness} \end{figure} \onlfig{6}{ \begin{figure*}[] \begin{center} \includegraphics[height=.19\textheight,angle=0,width=.33\textheight]{16806fg6-1.pdf} \includegraphics[height=.19\textheight,angle=0,width=.33\textheight]{16806fg6-2.pdf}\\ \includegraphics[height=.19\textheight,angle=0,width=.33\textheight]{16806fg6-3.pdf} \includegraphics[height=.19\textheight,angle=0,width=.33\textheight]{16806fg6-4.pdf}\\ \includegraphics[height=.19\textheight,angle=0,width=.33\textheight]{16806fg6-5.pdf} \includegraphics[height=.19\textheight,angle=0,width=.33\textheight]{16806fg6-6.pdf}\\ \includegraphics[height=.19\textheight,angle=0,width=.33\textheight]{16806fg6-7.pdf} \includegraphics[height=.19\textheight,angle=0,width=.33\textheight]{16806fg6-8.pdf} \caption{Light curves and hardness ratios of the LINER~1s observed with \xmm, all binned with a 1~ks time bin-size.} \label{LCxmmallLINERs} \end{center} \end{figure*} } \onlfig{7}{ \begin{figure*}[] \begin{center} \includegraphics[height=.19\textheight,angle=0,width=.33\textheight]{16806fg7-1.pdf} \includegraphics[height=.19\textheight,angle=0,width=.33\textheight]{16806fg7-2.pdf}\\ \includegraphics[height=.19\textheight,angle=0,width=.33\textheight]{16806fg7-3.pdf}
\includegraphics[height=.19\textheight,angle=0,width=.33\textheight]{16806fg7-4.pdf}\\ \includegraphics[height=.19\textheight,angle=0,width=.33\textheight]{16806fg7-5.pdf} \includegraphics[height=.19\textheight,angle=0,width=.33\textheight]{16806fg7-6.pdf}\\ \includegraphics[height=.19\textheight,angle=0,width=.33\textheight]{16806fg7-7.pdf} \end{center} \caption{Light curves and hardness ratios of the LINER~1s observed with \chandra\ with a long exposure time, all binned with a 1~ks time bin-size.} \label{LCchandraallLINERs} \end{figure*} } \begin{table}[!th] \newcommand\T{\rule{0pt}{2.6ex}} \begin{center}{ \caption{Normalized excess variance for LINER~1s with a relatively long exposure time.} \label{varsigma} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{c c c c c} \hline \hline Galaxy Name \T & $\sigma_{NXS}^2$& Exposure Time$^a$ & $\sigma_{NXS}^2$ & Exposure time$^a$ \\ \T & \multicolumn{2}{c}{\xmm} & \multicolumn{2}{c}{\chandra} \\ \hline NGC-315 \T & $<0.0027$ & 49.7 & $<0.0160$ & 55.0 \\ NGC-2681 \T & & & $<0.0400$ & 159.9 \\ NGC-2787 \T & $<0.0170$ & 37.1 & $<0.0453$ & 30.8 \\ NGC-3226 \T & $(0.02\pm0.002)^{b}$ & 94.3 & $<0.0700$ & 46.6 \\ NGC-3718 \T & $<0.0078$ & 18.4 & & \\ NGC 3998 \T & $<0.0001$ & 8.9 & $<0.0016$ & 13.6 \\ NGC 4143 \T & $<0.0060$ & 9.3 & & \\ NGC-4203 \T & & & $<0.0026$ & 41.6 \\ NGC-4278 \T & $<0.0012$ & 30.3 & $<0.0202$ & 470.8 \\ NGC-5005 \T & $<0.0042$ & 8.7 & & \\ \hline \end{tabular}}} \begin{list}{}{} \item[{\bf Notes.}]$^a$The exposure times in ks used to calculate the value of $\sigma_{NXS}^2$. $^{b}$Value corresponding to only the longest observation of NGC~3226. Including the other \xmm\ observation would result in a $\sigma_{NXS}^2$ upper limit of 0.013. 
\end{list} \end{center} \end{table} To quantify more accurately the intrinsic variability amplitude of the different sources, we calculated the normalized excess variance \citep{nandra97apj:variance} for all of the long observations using the following expression: \begin{equation} \sigma^2_{NXS}=\frac{1}{N\mu^2}\sum\limits_{i=1}^{N}[(X_i-\mu)^2-\sigma_{i}^2] \end{equation} where $N$ is the number of bins in a given light curve, $X_{i}$ and $\sigma_{i}$ are the count rate and uncertainty of each bin, respectively, and $\mu$ is the arithmetic mean of the count rates. To enable a $\sigma^2_{NXS}$ comparison between the different light curves, the bin size and the light curve segment duration should be the same. For that purpose, we first decided to use light curve segments of 20~ks, as usually done for luminous Seyfert galaxies, splitting any longer observations into multiple ones. That limited our sample to 6 sources observed with \chandra\ and 5 observed with \xmm, not enough to draw any safe conclusions. Therefore, and owing to the heterogeneous sampling of the observations for this type of study, we decided to use the whole corrected exposure time of all of the long observations. The mean of the $\sigma^2_{NXS}$ is taken for every source with multiple \chandra\ or \xmm\ observations. The time bin size of 1~ks for all of the observations was chosen to ensure a good signal-to-noise ratio, with at least 20 counts in each bin and an acceptable number of bins in each light curve. Estimating the error on $\sigma^2_{NXS}$ is not straightforward. The variability in an AGN light curve depends, on one hand, on the measurement errors of the data (e.g. Poisson noise) and, on the other hand, on the stochastic nature of the process underlying AGN variability \citep[e.g.
red noise, see][for a detailed discussion on this issue]{vaughan03mnras:varagn}; even if a source is not intrinsically variable, the mean and the variance of different light curves based on observations performed at different times will not be identical. We estimated the error due to Poisson noise using the equation of \citet{vaughan03mnras:varagn} \begin{equation} \label{errsig} \resizebox{.43\textwidth}{!}{$err(\sigma^2_{NXS})=\sqrt{\left(\sqrt{\frac{2}{N}}\frac{\overline{\sigma_{err}^{2}}}{\overline{x}^2}\right)^2+\left(\sqrt{\frac{\overline{\sigma^{2}_{err}}}{N}}\frac{2\sigma_{NXS}}{\overline{x}}\right)^2}$}. \end{equation} \begin{figure}[!t] \centerline{\includegraphics[angle=0,width=0.5\textwidth]{16806fg8.pdf}} \caption{$\sigma_{NXS}^2$ derived from the \xmm\ observations as a function of the BH mass for our sample of LINER~1s. Arrows represent upper limits. NGC~3226 is the only source showing clear short time-scale ($\sim$1~day) variability and thus a non-upper-limit value of $\sigma_{NXS}^2$.} \label{NXSvsMbh} \end{figure} The uncertainty owing to the red noise process was shown by \citet{vaughan03mnras:varagn} to depend on the power-spectrum shape of the source, which we do not know a priori. \citet{oneill05mnras:varagn} therefore estimated the error on the red noise process directly from the data; however, our observations are not sampled well enough to use their method to determine the error due to the stochastic nature of the AGN X-ray variability. The error on $\sigma_{NXS}^2$ given in equation~\ref{errsig} was also used to estimate upper limits on the excess variance when no variability was detected, i.e., whenever $\sigma_{NXS}^2$ is negative or consistent with zero. Only one object in our sample, NGC~3226, shows clear short time-scale variability, during the longest 100~ks observation, with $\sigma_{NXS}^2=0.02\pm0.002$ (comparable to the value of 0.014 found by \citealt{binder09apj:ngc3226}).
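Equations (1) and (2) translate directly into code. A minimal sketch in Python (the count rates and errors are invented for illustration; the only assumption beyond the two equations is that $\sigma_{NXS}$ in the second error term is floored at zero when $\sigma^2_{NXS}$ comes out negative):

```python
import math

def excess_variance(rates, errors):
    """Normalized excess variance (Nandra et al. 1997, Eq. 1) and its
    Poisson-noise error (Vaughan et al. 2003, Eq. 2)."""
    n = len(rates)
    mu = sum(rates) / n
    s2 = sum((x - mu) ** 2 - e ** 2
             for x, e in zip(rates, errors)) / (n * mu ** 2)
    mean_err2 = sum(e ** 2 for e in errors) / n      # mean squared error
    term1 = math.sqrt(2.0 / n) * mean_err2 / mu ** 2
    term2 = math.sqrt(mean_err2 / n) * 2.0 * math.sqrt(max(s2, 0.0)) / mu
    return s2, math.sqrt(term1 ** 2 + term2 ** 2)

rates = [1.0, 1.2, 0.9, 1.1, 1.0]   # illustrative 1 ks-binned count rates
errors = [0.03] * 5
s2, err = excess_variance(rates, errors)
# quote err as an upper limit when s2 is negative or consistent with zero
```

In practice the Poisson variances $\sigma_i^2$ are subtracted bin by bin, so a light curve whose scatter is fully explained by counting noise yields $\sigma^2_{NXS}\approx0$ and only the upper limit is reported, as in Table~3.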
Upper limits were obtained for the rest of the sample. Table~\ref{varsigma} and Fig.~\ref{NXSvsMbh} summarize the results, which are discussed in \S~\ref{xrayvar}. \subsection{X-ray spectral results} \label{specresulsec} The spectral analysis was performed using XSPEC \citep{arnaud96conf} version 12.6.0. The photo-electric cross sections and the solar abundances of \citet{wilms00ApJ} are used throughout to account for absorption by neutral gas. A Galactic absorption column density, derived for each source from \citet{kalberla05aa:nh} (obtained with the W3NH tool\footnote{http://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3nh/w3nh.pl}), was applied to the different spectral models. Spectral uncertainties are given using a $\Delta$\chisq\ of 2.71, corresponding to 90\% confidence for one interesting parameter, and to 95\% confidence for upper limits. \subsubsection{Snapshot observations} We began our spectral analysis with the study of the \chandra\ snapshot observations. Table~\ref{specfit-param-snapshot} gives the best fit parameters for the snapshot observations of our sample of LINER~1s. More complicated models, such as partial covering and/or two power-law components, could not be tested because of the low number of counts. A power-law modified by Galactic absorption gave a good fit in the case of NGC~266 and NGC~4143. Additional intrinsic neutral absorption was needed in the remaining cases. The photon indices vary between $1.3\pm0.2$ for the hardest spectra and $2.1\pm0.7$ for the softest ones, with a mean value of about 1.7. The hydrogen column density of the intrinsic absorber had an upper limit of $\sim4\times10^{21}$~cm$^{-2}$ in the case of NGC~4750 and NGC~5005. A value consistent with $10^{21}<N_{H}<10^{22}$~cm$^{-2}$ was derived for the rest of the snapshot observations. In one case, NGC~3718, we find a somewhat larger column density, with $N_{H}$ of the order of $\sim10^{22}$~cm$^{-2}$.
For NGC~5005, a thermal component \citep[{\sl mekal} model,][using the abundance table of \citealt{wilms00ApJ}]{mewe85aaps:mekal} with a $0.8_{-0.2}^{+0.3}$~keV temperature was included in the model to account for some low-energy features, most likely due to diffuse emission from hot gas. In order to rigorously confirm the validity of our best fit spectral-parameter values derived using the C-stat and a modeled background, we compared our results to those derived from fits applying the $\chi^2$ statistic to all of the snapshot observations. Spectral parameters derived using the C-stat were all in agreement, within the error bars, with the results derived using the $\chi^2$ statistic, with smaller deviations from the central value of one interesting parameter. We therefore used the C-stat fits to calculate model fluxes in the soft, 0.5-2~keV, and hard, 2-10~keV, bands. Table~\ref{specfit-fluxes} gives the corresponding 0.5-2~keV and 2-10~keV observed fluxes and corrected luminosities. \begin{table*}[!th] \caption{Best fit parameters to the \chandra\ snapshot observations of our sample of LINER~1s.} \label{specfit-param-snapshot} \newcommand\T{\rule{0pt}{2.6ex}} \newcommand\B{\rule[-1.2ex]{0pt}{0pt}} \begin{center}{ \resizebox{0.8\textwidth}{!}{ \begin{tabular}{l c c c c c c} \hline \hline Galaxy Name\T\B & Obs. ID & N$_{h}$ & $\Gamma$ & Pl Norm.
at 1~keV & kT & EM$^{(a)}$ \\ \T\B & & ($10^{20}$~cm$^{-2}$) & & ($10^{-5}$~Photons~keV$^{-1}$~cm$^{-2}$~s$^{-1}$) & (keV) & (10$^{62}$~cm$^{-3}$) \\ \hline NGC~266\T & 1610 & ($\ldots$) & 1.4~[0.9-1.9] & 2~[1-3] & ($\ldots$) & ($\ldots$) \\ NGC~315\T & 855 & 20~[10-30] & 1.3~[1.1-1.5] & 13~[10-16] & ($\ldots$) & ($\ldots$) \\ NGC~3226\T & 1616 & 74~[47-105] & 1.7~[1.3-2.1] & 22~[15-34] & ($\ldots$) & ($\ldots$) \\ NGC~3718\T & 3993 & 114~[97-132] & 1.5~[1.4-1.7] & 66~[53-81] & ($\ldots$) & ($\ldots$) \\ NGC~4143\T & 1617 & ($\ldots$) & 1.9~[1.6-2.1] & 7.2~[6.6-8.3] & ($\ldots$) & ($\ldots$) \\ NGC~4750\T & 4020 & $<$31 & 1.8~[1.4-2.3] & 5~[3-8] & ($\ldots$) & ($\ldots$) \\ NGC~4772\T & 3999 & 46~[24-53] & 1.69~[1.29-1.74]& 7~[4-11] & ($\ldots$) & ($\ldots$) \\ NGC~5005\T & 4021 & $<$39 & 2.1~[1.4-2.8] & 5~[3-11] & 0.8~[0.6-1.1] & 3~[1-4] \\ \hline \end{tabular}}} \end{center} \begin{list}{}{} \item[{\bf Notes.}]$^{(a)}$The emission measure (EM) of the {\sl mekal} model, EM=$\int$n$_{e}$n$_{H}$dV. \end{list} \end{table*} \subsubsection{Long-exposure observations} \begin{table*}[!th] \caption{Best fit parameters to the LINER~1s in our sample observed with a relatively long \chandra\ and \xmm\ exposure time.} \label{specfit-param} \newcommand\T{\rule{0pt}{2.6ex}} \newcommand\B{\rule[-1.2ex]{0pt}{0pt}} \begin{center}{ \resizebox{0.8\textwidth}{!}{ \begin{tabular}{l c c c c c c c c} \hline \hline Galaxy Name\T\B & Obs. ID & N$_{h}$ & $\Gamma$ & Pl Norm. at 1~keV & kT & EM$^{(a)}$ & $\chi^2_\nu$ & d.o.f. \\ \T\B & & ($10^{20}$~cm$^{-2}$) & & ($10^{-5}$~Photons~keV$^{-1}$~cm$^{-2}$~s$^{-1}$) & (keV) & (10$^{62}$~cm$^{-3}$) & & \\ \hline NGC~315 \T & 4156 & 10~[9-13] & 1.5~[1.4-1.6] & 18~[16-21] & 0.55~[0.47-0.59] & 15~[13-17] & \multirow{2}{*}{1.13} & \multirow{2}{*}{465} \\ \T & 0305290201 & L. & 2.1~[1.9-2.2] & 20~[17-25] & L. & L. 
& & \\ \hline NGC~2681 \T & 2060 & $<$29 & 1.5~[1.2-1.8] & 0.6~[0.5-0.8]& 0.67~[0.63-0.70] & 0.4~[0.3-0.5] & \multirow{2}{*}{0.90} & \multirow{2}{*}{74} \\ \T & 2061 & L. & L. & L. & L. & L. & & \\ \hline NGC~2787 \T & 4689 & 16~[8-24] & 2.4~[2.1-2.6] & 3~[2-4] & ($\ldots$) & ($\ldots$) & \multirow{2}{*}{1.12} & \multirow{2}{*}{118} \\ \T & 0200250101 & L. & L. & 4~[3-5] & ($\ldots$) & ($\ldots$) & & \\ \hline NGC~3226 \T & 860 & 25~[$<$60] & 1.7~[1.5-2.0] & 13~[9-19] & ($\ldots$) & ($\ldots$) & \multirow{3}{*}{0.98} & \multirow{3}{*}{467} \\ \T & 0101040301 & 89~[82-96] & 1.8~[1.7-1.9] & 25~[23-27] & ($\ldots$) & ($\ldots$) & & \\ \T & 0400270101 & 42~[39-44] & 2.05~[2.0-2.1] & 27~[26-28] & ($\ldots$) & ($\ldots$) & & \\ \hline NGC~3718 \T & 0200430501 & 138~[121-155]&1.8~[1.7-1.9] & 57~[48-66] & ($\ldots$) & ($\ldots$) & \multirow{2}{*}{0.88} & \multirow{2}{*}{122} \\ \T & 0200431301 & L. &L. & 47~[40-55] & ($\ldots$) & ($\ldots$) & & \\ \hline NGC~3998 \T & 6781 & 3~[2-4] &2.1~[2.0-2.2] & 282~[267-298] & ($\ldots$) & ($\ldots$) & \multirow{2}{*}{1.10} & \multirow{2}{*}{590} \\ \T & 0090020101 & L. & 1.84~[1.82-1.85]& 323~[318-328] & ($\ldots$) & ($\ldots$) & & \\ \hline NGC~4143 \T & 0150010601 & 6~[3-9] & 2.2~[2.1-2.3] & 17~[15-19] & ($\ldots$) & ($\ldots$) & 0.98 & 118 \\ \hline NGC~4203 \T & 10535 & ($\ldots$) & 2.3~[2.2-2.4] & 83~[78-89] & ($\ldots$) & ($\ldots$) & 0.83 & 51 \\ \hline NGC~4278 \T & 4741 & $<$6.78 & 2.1~[2.0-2.3] & 43~[39-47] & 0.62~[0.58-0.66] & 2.6~[2.3-3.0] & \multirow{6}{*}{0.93} & \multirow{6}{*}{310} \\ \T & 7077 & L. & 2.3~[2.2-2.4] & 18~[17-20] & L. & L. & & \\ \T & 7078 & L. & 2.3~[2.2-2.5] & 42~[39-46] & L. & L. & & \\ \T & 7079 & L. & 2.4~[2.3-2.5] & 38~[35-41] & L. & L. & & \\ \T & 7080 & L. & 2.0~[1.8-2.2] & 11~[10-13] & L. & L. & & \\ \T & 7081 & L. & 2.1~[2.0-2.3] & 12.5~[11.4-12.9]& L. & L. 
& & \\ \T & 0205010101 & 3.8~[3.1-4.6] & 2.05~[2.03-2.07]& 81~[79-82] & ($\ldots$) & ($\ldots$) & 1.01 & 487 \\ \hline NGC~5005 \T & 0110930501 & 9~[2-18] & 1.7~[1.5-1.8] & 8~[6-9] & 0.64~[0.61-0.67] & 5.2~[4.7-5.7] & 1.2 & 107 \\ \hline \end{tabular}}} \end{center} \begin{list}{}{} \item[{\bf Notes.}](L.) represents a linked parameter in the fit. $^{(a)}$The emission measure (EM) of the {\sl mekal} model, EM=$\int$n$_{e}$n$_{H}$dV. \end{list} \end{table*} \begin{table*}[!th] \caption{Absorbed fluxes and corrected luminosities derived from the best fit model to our sample of LINER~1s and the corresponding $L_{2-10~keV}/L_{Edd}$.} \label{specfit-fluxes} \newcommand\T{\rule{0pt}{2.6ex}} \newcommand\B{\rule[-1.2ex]{0pt}{0pt}} \begin{center}{ \resizebox{0.9\textwidth}{!}{ \begin{tabular}{l c c c c c c c} \hline \hline Galaxy Name\T\B & Obs. ID & 0.5-2~keV Flux & 2-10~keV Flux & Corr. 0.5-2~keV Lum. & Corr. 2-10~keV Lum. & Pl$^{a}$ & Log($L_{2-10~keV}$/$L_{Edd}$)\\ \T\B & & \multicolumn{2}{c}{(Logarithmic scale; erg~s$^{-1}$~cm$^{-2}$)} & \multicolumn{2}{c}{(10$^{41}$~erg~s$^{-1}$)} & \%\ & \\ \hline NGC~266\T&1610& -13.40~[-13.50 -13.30] & -12.90~[-13.10 -12.80] & 0.23~[0.14 0.29] & 0.59~[0.37 0.74] & 100 & -5.77\\ NGC~315\T&855 & -12.74~[-12.78 -12.71] & -11.99~[-12.03 -11.95] & 1.56~[1.42 1.71] & 5.42~[4.94 5.94] & 100 & -5.42\\ \T&4156& -12.80~[-12.82 -12.78] & -12.04~[-12.05 -12.02] & 2.42~[2.36 2.53] & 5.06~[4.83 5.18] & 96 & -5.45\\ \T&0305290201& -12.80~[-12.82 -12.78] & -12.37~[-12.39 -12.35] & 2.42~[2.36 2.53] & 2.37~[2.26 2.48] & 94 & -5.78\\ NGC~2681\T&2060&-13.56~[-13.58 -13.55] & -13.47~[-13.49 -13.45] & 0.011~[0.010 0.012] & 0.012~[0.011 0.013] & 73 & -5.80\\ \T&2061&-13.56~[-13.58 -13.55] & -13.47~[-13.49 -13.45] & 0.011~[0.010 0.012] & 0.012~[0.011 0.013] & 73 & -5.80\\ NGC~2787\T&4689& -13.30~[-13.33 -13.26]& -13.28~[-13.32 -13.25] & 0.0054~[0.0049 0.0058] & 0.004~[0.002 0.007]&100&-7.70\\ \T&0200250101&-13.34~[-13.38 -13.31] & -13.33~[-13.37 -13.29] &
0.0048~[0.0044 0.0052]&0.0032~[0.0029 0.0034]&100&-7.74\\ NGC~3226\T& 1616 &-12.76~[-12.81 -12.71] & -12.07~[-12.12 -12.02] & 0.32~[0.28 0.36] &0.59~[0.51 0.66] &100 & -5.38\\ \T&860& -12.76~[-12.80 -12.73] & -12.32~[-12.36 -12.29] & 0.19~[0.17 0.20] & 0.33~[0.30 0.35] & 100 & -5.64\\ \T&0101040301&-12.77~[-12.77 -12.76] & -12.07~[-12.07 -12.06] & 0.37~[0.36 0.38] & 0.61~[0.60 0.62]&100 & -5.37\\ \T&0400270101&-12.56~[-12.57 -12.56] & -12.21~[-12.22 -12.21] & 0.39~[0.38 0.40] & 0.42~[0.41 0.43]&100 & -5.53\\ NGC~3718\T&3993 &-12.40~[-12.41 -12.37] & -11.50~[-11.52 -11.48] & 0.50~[0.48 0.54] & 1.15~[1.10 1.23]&100 & -4.64\\ \T&0200430501&-12.58~[-12.59 -12.56]&-11.76~[-11.78 -11.74]&0.44~[0.42 0.45] & 0.66~[0.63 0.68]&100 & -4.88\\ \T&0200431301&-12.65~[-12.68 -12.63]&-11.84~[-11.86 -11.82]&0.36~[0.34 0.38] & 0.55~[0.51 0.58]&100 & -4.96\\ NGC~3998\T&6781 & -11.25~[-11.26 -11.23] & -11.19~[-11.21 -11.17] & 1.47~[1.44 1.54] & 1.54~[1.47 1.61]&100 & -5.98\\ \T&0090020101&-11.183~[-11.186 -11.180]&-10.975~[-10.978 -10.973]&1.70~[1.69 1.71] & 2.53~[2.51 2.54]&100 & -5.76\\ NGC~4143\T& 1617 &-12.81~[-12.87 -12.75] & -12.63~[-12.70 -12.57] & 0.05~[0.04 0.06] & 0.07~[0.06 0.08]&100 & -6.43\\ \T&0150010601&-12.53~[-12.54 -12.52]&-12.51~[-12.52 -12.49]&0.112~[0.109 0.115] & 0.096~[0.092 0.098]&100 & -6.30\\ NGC~4203\T&10535&-11.75~[-11.77 -11.72]& -11.86~[-11.89 -11.84] & 0.51~[0.48 0.53] & 0.38~[0.35 0.40] & 100 & -5.25\\ NGC~4278\T&4741&-11.99~[-12.02 -11.97] & -12.05~[-12.08 -12.02] & 0.33~[0.31 0.35] & 0.28~[0.26 0.30] & 94 & -6.37\\ \T&7077&-12.30~[-12.32 -12.28] & -12.50~[-12.52 -12.48] & 0.16~[0.15 0.17] & 0.10~[0.09 0.11] & 83 & -6.83\\ \T&7078&-12.00~[-12.02 -11.97] & -12.18~[-12.21 -12.16] & 0.33~[0.31 0.35] & 0.20~[0.19 0.21] & 93 & -6.51\\ \T&7079&-12.04~[-12.06 -12.02] & -12.25~[-12.27 -12.23] & 0.30~[0.29 0.32] & 0.17~[0.16 0.18] & 90 & -6.58\\ \T&7080&-12.46~[-12.49 -12.44] & -12.56~[-12.59 -12.54] & 0.11~[0.10 0.12] & 0.08~[0.07 0.09] & 80 & -6.89\\ 
\T&7081&-12.42~[-12.44 -12.40] & -12.57~[-12.59 -12.55] & 0.12~[0.11 0.13] & 0.08~[0.07 0.09] & 81 & -6.90\\ \T&0205010101&-11.808~[-11.812 -11.804]&-11.718~[-11.722 -11.714]&0.55~[0.54 0.56]&0.60~[0.59 0.61]& 100&-6.04\\ NGC~4750\T& 4020 &-13.14~[-13.21 -13.07] & -12.84~[-12.92 -12.78] & 0.08~[0.07 0.09] & 0.12~[0.10 0.14]&100 & -5.30\\ NGC~4772\T& 3999 &-13.25~[-13.33 -13.17] & -12.67~[-12.76 -12.59] & 0.04~[0.03 0.05] & 0.07~[0.06 0.08]&100 & -5.72\\ NGC~5005\T& 4021 &-12.86~[-12.92 -12.81] & -12.95~[-13.01 -12.90] & 0.08~[0.06 0.11] & 0.06~[0.05 0.07]& 72 & -6.11\\ \T&0110930501&-12.55~[-12.57 -12.54]&-12.49~[-12.51 -12.48]&0.17~[0.16 0.18] & 0.18~[0.17 0.19] & 76 & -5.64\\ \hline \hline \end{tabular}}} \end{center} \begin{list}{}{} \item[{\bf Notes.}]$^{(a)}$The power-law component fraction to the 0.5-10~keV corrected luminosity. \end{list} \end{table*} \begin{figure}[] \includegraphics[angle=0,width=0.47\textwidth]{16806fg9-1.pdf}\\ \includegraphics[angle=0,width=0.47\textwidth]{16806fg9-2.pdf} \caption{{\sl Upper left panel}. Data and best fit model of the different spectra of NGC~315. A hardening in the \chandra\ ACIS spectrum (in blue) above $\sim$1.5~keV is seen, relative to the \xmm\ spectra (burgundy, green, and orange). {\sl Upper right panel}. Residuals of the best fit model in terms of sigma. {\sl Lower left panel}. Data and best fit model of the different spectra of NGC~3226. It is clear that more absorption below 2~keV from cold material is taking place between the two \xmm\ observations (black representing the long $\sim$100~ks observation). {\sl Lower right panel}. 
Residuals of the best fit model in terms of sigma.} \label{bestfitmod} \end{figure} \onlfig{10}{ \begin{figure*}[] \begin{center} \includegraphics[height=.19\textheight,angle=0]{16806fg10-1.pdf} \includegraphics[height=.19\textheight,angle=0]{16806fg10-2.pdf}\\ \includegraphics[height=.19\textheight,angle=0]{16806fg10-3.pdf} \includegraphics[height=.19\textheight,angle=0]{16806fg10-4.pdf}\\ \includegraphics[height=.19\textheight,angle=0]{16806fg10-5.pdf} \includegraphics[height=.19\textheight,angle=0]{16806fg10-6.pdf}\\ \includegraphics[height=.19\textheight,angle=0]{16806fg10-7.pdf} \caption{Data and best fit model of the spectra of the LINER~1s in our sample with a relatively long exposure time. Residuals of every fit are given in terms of sigma.} \label{bestfitallLINERs} \end{center} \end{figure*} } We then turned to the analysis of observations with relatively long exposure times. We started with a simple absorbed power-law fit to each of the spectra of the different sources, separately. The fits were acceptable for 6/10 sources, but residuals at energies below 2~keV persisted in the other 4 sources (NGC~315, NGC~2681, NGC~4278, and NGC~5005), suggesting the presence of diffuse hot gas. In order to obtain a better signal-to-noise ratio and photon statistics, we decided to fit the different \chandra\ and/or \xmm\ spectra of each source simultaneously (the normalizations of the different models in a fit were left free between the different EPIC instruments to account for any potential cross-calibration uncertainties). Simultaneous fitting of different observations performed at different times is done here for the first time for a sample of LINER sources.
For this purpose, whenever a source was observed with both \chandra\ and \xmm, we carried out a careful imaging analysis of the \chandra\ observations to disentangle the different components (diffuse emission, LMXBs, and/or jet emission) that are blended into one point-like source in the 25\hbox{$^{\prime\prime}$}-radius circular \xmm\ extraction region (see Appendix~B and online Fig.~1--4). These components, which we assume to be non-variable, are included in the simultaneous \chandra/\xmm\ fit. We do not expect diffuse hot gas to vary on a time-scale of a few years \citep{fabbiano89aa:difgaz}, although off-nuclear point-like sources and jet X-ray emission could exhibit variation on such time-scales \citep[e.g.,][]{harris03nar:m87}. We present in Appendix~B the surrounding medium around the different LINER~1s that were observed with both \chandra\ and \xmm, and we give spectral results for the different components. We tested multiple models on the data in order to determine the mechanism responsible for the observed X-ray spectra, namely: (1) a simple absorbed power-law; (2) same as (1) but including a thermal {\sl mekal} component to account for any diffuse hot-gas features in the soft band; (3) two power-law components with different photon index values, one representing the hard 2-10~keV emission and the other representing a possible 0.5-2~keV soft excess emission commonly seen in the nuclei of luminous galaxies \citep[e.g.,][]{porquet04aa:pgquasar}. We investigated spectral variability in a source by permitting one spectral parameter of a given fit to vary independently between different observations. We then used the F-test to evaluate the improvement in the fit, considering an improvement significant at the 99\%\ confidence level. The case of NGC~4278 was already analyzed in Y10, and best fit spectral parameters and fluxes are taken from there.
We find that 6/10 sources are best fit with model (1), with no additional need for any thermal or second power-law component (NGC~2787, NGC~3226, NGC~3718, NGC~3998, NGC~4143, and NGC~4203). The remaining sources (NGC~315, NGC~2681, NGC~4278, and NGC~5005) were best fit with model (2), showing features at energies below 2~keV that indicate emission from diffuse hot gas. Model (3) did not improve the quality of the fit in any of the cases, at times giving worse fits. We found that the intrinsic hydrogen column density varies significantly in NGC~3226, decreasing from $(8.9\pm0.7)\times10^{21}$~cm$^{-2}$ to $4.2_{-0.3}^{+0.2}\times10^{21}$~cm$^{-2}$. This is clearly seen in the lower panels of Fig.~\ref{bestfitmod}, where the soft part of the spectrum during the short \xmm\ observation is much more absorbed than during the long observation. Additionally, the power-law photon index varies in seven sources (NGC~315, NGC~3226, NGC~3718, NGC~3998, NGC~4143, NGC~4278, and NGC~5005), with the most drastic change being the one observed in NGC~315, where $\Gamma$ increased from $1.5\pm0.1$ during the \chandra\ observation to $2.1_{-0.2}^{+0.1}$ during the \xmm\ one (upper panels of Fig.~\ref{bestfitmod}). This increase is accompanied by a decrease in the 2-10~keV flux from $9.8\times10^{-13}$ to $4.6\times10^{-13}$~erg~s$^{-1}$~cm$^{-2}$. This behavior is typical of X-ray emission originating in a RIAF structure, which is the accretion flow believed to exist at the center of NGC~315 \citep{wu07apj:riaf}, where the Eddington ratio, which is proportional to L$_{2-10~keV}$, is anticorrelated with the photon index $\Gamma$ \citep{gu09mnras:gamVSeddllagn}. Best fit models and residuals of the other LINER~1s in our sample are shown in online Fig.~\ref{bestfitallLINERs}. The photon indices we derived for all of the sources in our sample observed with a relatively long exposure time range between $1.5\pm0.3$ and $2.4_{-0.3}^{+0.2}$, with a mean value of 2.0.
The intrinsic column density covers two orders of magnitude, with $N_{H}$ ranging from $\sim10^{20}$~cm$^{-2}$ for unabsorbed sources up to $\sim10^{22}$~cm$^{-2}$ for the only mildly absorbed source, NGC~3718. The thermal component has a mean temperature of 0.63~keV, consistent with values found for LINER-type sources embedded in diffuse emission \citep{flohic06apj}. Table~\ref{specfit-param} gives the best fit model parameters for our sample of LINER~1s with a relatively long exposure time, and Table~\ref{specfit-fluxes} gives the 0.5-2~keV and 2-10~keV observed fluxes and corrected luminosities, as well as the corresponding ``Eddington ratios'', $L_{2-10~keV}/L_{Edd}$. The hard, 2-10~keV, luminosity spans three orders of magnitude, from 3.2$\times10^{38}$~erg~s$^{-1}$ to 5.4$\times10^{41}$~erg~s$^{-1}$, which results in a $L_{2-10~keV}/L_{Edd}$\ range from $2.0\times10^{-8}$ to $2.3\times10^{-5}$, at least one to two orders of magnitude smaller than the $L_{2-10~keV}/L_{Edd}$\ seen in luminous AGN \citep[e.g.,][]{porquet04aa:pgquasar,nandra07mnras:felinesey}. \begin{figure}[] \centerline{\includegraphics[angle=0,width=0.5\textwidth]{16806fg11.pdf}} \caption{Photon index, $\Gamma$, as a function of $L_{2-10~keV}/L_{Edd}$\ for our sample of LINER~1s. The two quantities are strongly anticorrelated. The solid black line designates the least-squares best fit to a straight line. Snapshots, \chandra, and \xmm\ observations are shown in different colours.} \label{gmaVSedd} \end{figure} \begin{figure*}[!t] \includegraphics[angle=0,width=0.47\textwidth]{16806fg12-1.pdf} \includegraphics[angle=0,width=0.47\textwidth]{16806fg12-2.pdf}\\ \includegraphics[angle=0,width=0.47\textwidth]{16806fg12-3.pdf} \includegraphics[angle=0,width=0.47\textwidth]{16806fg12-4.pdf} \caption{{\sl Top left panel.} The positive correlation between the 2-10~keV luminosity and the Eddington ratio.
{\sl Top right panel.} The positive correlation between the 2-10~keV luminosity and the BH mass. {\sl Bottom panels.} No correlation is found between the BH mass and either the Eddington ratio ({\sl bottom-left}) or $\Gamma$ ({\sl bottom-right}). See text for more details.} \label{other-corr-fig} \end{figure*} \begin{table*}[!th] \newcommand\T{\rule{0pt}{2.6ex}} \newcommand\B{\rule[-1.2ex]{0pt}{0pt}} \begin{center}{ \caption{ Spearman-rank correlation and the \citet{bevington03book} method to check for the dependence between the fit parameters and the intrinsic parameters of our sample of LINER~1s. See text for details.} \label{spear-rank} \resizebox{0.8\textwidth}{!}{ \begin{tabular}{l c c c c} \hline \hline Correlations \T \B & Coefficient $r_s$ & Probability (\%) & Coefficient $r$ & Probability (\%)\\ \T \B & \multicolumn{2}{c}{Spearman-rank} & \multicolumn{2}{c}{\citet{bevington03book}} \\ \hline $\Gamma-$$L_{2-10~keV}/L_{Edd}$ \T & -0.62 & $>$99.9 & -0.65 & $>$99.9 \\ $L_{2-10~keV}-$$L_{2-10~keV}/L_{Edd}$ \T & 0.61 & $>$99.9 & 0.65 & $>$99.9 \\ $L_{2-10~keV}-M_{BH}$ \T & 0.43 & 99.0 & 0.70 & $>$99.9 \\ $L_{2-10~keV}/L_{Edd}$$-M_{BH}$ \T & -0.41 & 97.9 & -0.11 & 72 \\ $\Gamma-M_{BH}$ \T & 0.33 & 90 & 0.26 & 90 \\ \hline \end{tabular}}} \end{center} \end{table*} We looked for any sign of a narrow Fe~K$\alpha$ emission line at 6.4~keV in the EPIC-pn \xmm\ observations of our sample of LINER~1s. None of the observations shows clear evidence for the line. A good signal-to-noise ratio around 6~keV is found only in the spectra of 4 sources, enabling us to estimate upper limits on the equivalent width (EW): 112~eV for NGC~3718 (obs.ID: 0200430501), 38~eV for NGC~3226 (obs.ID: 0400270101), 33~eV for NGC~3998, and 22~eV for NGC~4278.
Even though a hint of an Fe~K$\alpha$ emission line around 6.4~keV seems to be apparent in the EPIC-pn spectrum of NGC~315, the addition of a Gaussian line to the best fit model does not improve the quality of the fit, with an F-test probability of 60\%\ for the improvement to occur by chance. \subsection{X-ray correlations} \label{correlations} We looked for correlations between the X-ray properties, mainly the photon index $\Gamma$ and the 2-10~keV luminosity, and the LINER~1s intrinsic parameters, the black hole mass and the ratio $L_{2-10~keV}/L_{Edd}$, which can be directly linked to the Eddington ratio $L_{bol}/L_{Edd}$ considering that $L_{bol}=const.\times L_{2-10~keV}$ \citep[$const.\approx16$,][]{ho09apj:riaf}. We investigated the validity of a correlation by fitting the data with a least-squares fit to a straight line, using the equations of \citet{york66cjp:linefit} and minimizing the weighted residuals in both parameters $x$ and $y$. We assessed the goodness of the fit following the criteria explained in \citet{bevington03book}. If a dependent variable $y$ is correlated with a variable $x$ through a line of slope $b$, then the reciprocal fit of $x$ as a function of $y$ should give a line of slope $b'$. Therefore, the linear correlation coefficient $r$, $r\equiv\sqrt{bb'}$, ranges from 0, when there is no correlation, to $\pm1$, when there is complete correlation. This correlation coefficient $r$ cannot be used directly to indicate the degree of correlation; instead, one calculates the probability that a random sample of $N$ uncorrelated data points would yield a linear-correlation coefficient as large as or larger than $r$. We found a strong anticorrelation between the photon index $\Gamma$ and $L_{2-10~keV}/L_{Edd}$\ for our sample.
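The reciprocal-slope construction of $r$ described above can be sketched numerically. The data here are synthetic, standing in for ($\log(L_{2-10~keV}/L_{Edd})$, $\Gamma$) pairs rather than the measured sample values:

```python
# Sketch of the Bevington (2003) linear-correlation coefficient r = sqrt(b*b'):
# b is the slope of the y-on-x fit, b' the slope of the reciprocal x-on-y fit.
# The data below are synthetic.
import numpy as np

def linear_corr_coeff(x, y):
    b = np.polyfit(x, y, 1)[0]        # slope of y as a function of x
    b_prime = np.polyfit(y, x, 1)[0]  # slope of x as a function of y
    return np.sign(b) * np.sqrt(b * b_prime)

rng = np.random.default_rng(0)
x = rng.normal(size=200)                        # stand-in for log(L_X/L_Edd)
y = -0.3 * x + rng.normal(scale=0.1, size=200)  # anticorrelated "Gamma"
r = linear_corr_coeff(x, y)                     # close to -1
```

For an unweighted straight-line fit, this $\sqrt{bb'}$ coincides with the usual Pearson coefficient, which is why it ranges from 0 (no correlation) to $\pm1$ (complete correlation).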
A least-squares fit of these two quantities to a straight line yields: \begin{equation} \Gamma=(-0.31\pm0.06)\log(L_{2-10~keV}/L_{Edd})+(0.11\pm0.40) \end{equation} with a linear-correlation coefficient $r\approx-0.65$ and a probability smaller than 0.01\%\ that uncorrelated data points would yield a linear-correlation coefficient as large as $|r|$. Additionally, we performed a Spearman-rank correlation between $\Gamma$ and $L_{2-10~keV}/L_{Edd}$\ and found that these two quantities are anticorrelated with a probability greater than 99.99\%, with a Spearman-rank correlation coefficient of $-0.62$. Figure~\ref{gmaVSedd} shows the anticorrelation between $\Gamma$ and $L_{2-10~keV}/L_{Edd}$, and the least-squares best fit to a straight line. Snapshots, \chandra, and \xmm\ observations are shown in different colours. Using the same criteria as above, we found a positive correlation between the hard X-ray luminosity, $L_{2-10~keV}$, and $L_{2-10~keV}/L_{Edd}$, with a linear-correlation coefficient $r\approx0.65$ and a probability $P\ge99\%$ that these two quantities are correlated. Another strong positive correlation is the increase of the 2-10~keV luminosity with increasing BH mass, with $r\approx0.7$ and a probability greater than 99.9$\%$. We did not find any strong dependence of the spectral slope $\Gamma$ or $L_{2-10~keV}/L_{Edd}$\ on the BH mass, with correlation coefficients of 0.26 and $-0.11$, respectively, and a probability $\le90\%$ that these quantities are correlated. To strengthen our conclusions, we performed a Spearman-rank test on the different correlations; the results are shown in Table~\ref{spear-rank}. Our previous results are in agreement with the Spearman-rank test, except for a weak anticorrelation emerging between $L_{2-10~keV}/L_{Edd}$\ and the BH mass, not seen with our principal correlation test.
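The Spearman-rank check can be sketched as follows. Again, the data points are synthetic, built to mimic an anticorrelated ($\log(L_{2-10~keV}/L_{Edd})$, $\Gamma$) sample rather than our measured values:

```python
# Sketch of the Spearman-rank test on a synthetic anticorrelated sample;
# rho plays the role of the coefficient r_s quoted above.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
log_ratio = rng.uniform(-8.0, -5.0, size=30)  # synthetic log(L_X/L_Edd)
gamma = 2.0 - 0.3 * (log_ratio + 6.5) + rng.normal(scale=0.15, size=30)

rho, p_value = spearmanr(log_ratio, gamma)    # rho < 0: anticorrelation
```

Because it works on ranks, the Spearman test makes no assumption of linearity, which is why it is a useful cross-check on the straight-line fit.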
Fig.~\ref{other-corr-fig} shows the dependence of the fit parameters on the LINER~1 intrinsic parameters. We discuss these results in \S~\ref{accmode} and \S~\ref{other-corr}. \section{Discussion} In the following discussion, we compare the X-ray timing and spectral results of our sample of LINER~1s to results derived for broad samples of LINERs (including type~1 and type~2 LINERs, and transition nuclei) and/or low luminosity AGN. We compare our findings, X-ray variability and the absence of an Fe~K$\alpha$ emission line, to type~1 luminous AGN (Seyfert galaxies and quasars). We discuss correlations between the fit parameters, $\Gamma$ and $L_{2-10~keV}$, and the intrinsic parameters of our LINER~1 sample, $L_{2-10~keV}/L_{Edd}$\ and $M_{BH}$, and compare our results to luminous AGN and XRBs in order to constrain the accretion mechanism in LINER~1s. \subsection{X-ray variability} \label{xrayvar} One of the most important characteristics of AGN is X-ray flux variability on different time-scales. An anticorrelation between the variability amplitude, characterized by the normalized excess variance $\sigma_{NXS}^2$, and both the 2-10~keV luminosity and the black hole mass has been established for a considerable number of AGN \citep{nandra97apj:variance, turner99apj:seyvar, papadakis04mnras:rmsVSmbh, oneill05mnras:varagn}. Such variability on time-scales of less than a day was never detected for LINER sources in the past with observations taken by low spatial resolution telescopes, e.g., {\sl ASCA} \citep{komossa99aa:liners, terashima02apjs:LLAGNASCA}. \citet{ptak98apj:variance} showed that LINER and low luminosity AGN sources do not follow the anticorrelation of luminous AGN, whereby variability strengthens with decreasing luminosity. The authors attributed the non-detection of short time-scale variability in LINERs and low luminosity AGN to a larger X-ray emitting region, e.g., a RIAF, compared to luminous AGN.
Three sources in our sample of LINER~1s (NGC~2787, NGC~4143, and NGC~4203) show hints of inter-day variability, with a K-S test probability between 2\%\ and 4\%\ that the X-ray emission originates from a constant source. Two sources in our sample exhibit significant short time-scale variability, both already reported in the literature (NGC~4278: Y10, and NGC~3226: \citealt{binder09apj:ngc3226}). NGC~4278 exhibited short time-scale variability ($t\sim1.5$~h) during the \xmm\ observation, where the X-ray flux level was highest (compared to the other 6 \chandra\ observations, where no short time-scale variability was detected). On this 1.5~h time-scale, the flux of NGC~4278 increased by 10\%. Y10 proposed that this variability could be the result of a more compact X-ray emission region, e.g., an accretion disk truncated at a smaller radius during the \xmm\ observation compared to the \chandra\ observations. On the other hand, NGC~3226 \citep{binder09apj:ngc3226}, which is the only source observed for $\sim$100~ks with \xmm, shows significant flux variability during the entire observation, increasing almost continuously. \citet{binder09apj:ngc3226} reported $\sigma_{NXS}^2\approx0.014$, comparable to the value of 0.02 reported here, which is a variability amplitude similar to the one observed in more luminous AGN, but on shorter time-scales \citep{oneill05mnras:varagn}. \citet{binder09apj:ngc3226}, assuming a BH mass of $1.4\times10^8$~M$_{\odot}$ and an Eddington ratio of $2\times10^{-5}$, predicted, using the \citet{mchardy06nat:var} relation, a variability amplitude of $\sim2-3\times10^{-4}$ on a one-day time-scale. The discrepancy between the observed and predicted values of $\sigma_{NXS}^2$ may arise from the fact that the \citet{mchardy06nat:var} relation is derived for objects in a high/soft state. As we show in \S~\ref{accmode}, LINER sources, in contrast to luminous Seyfert galaxies, could be in a state analogous to the low/hard state of XRBs.
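The normalized excess variance used above \citep{nandra97apj:variance} can be sketched as follows; the light curve is simulated, with a fractional rms chosen near the NGC~3226 value, and is not real data:

```python
# Sketch of the normalized excess variance
#   sigma^2_NXS = (S^2 - <sigma_err^2>) / <x>^2,
# i.e. the light-curve variance in excess of the measurement errors,
# normalized by the squared mean rate.  The light curve below is simulated.
import numpy as np

def nxs_variance(rate, err):
    s2 = np.var(rate, ddof=1)            # total light-curve variance S^2
    return (s2 - np.mean(err**2)) / np.mean(rate)**2

rng = np.random.default_rng(2)
n, mean_rate, frac_rms = 200, 1.0, 0.14  # frac_rms^2 ~ 0.02, as for NGC 3226
rate = mean_rate * (1.0 + frac_rms * rng.standard_normal(n))
err = np.full(n, 0.01)                   # small, uniform measurement errors
sigma2_nxs = nxs_variance(rate, err)     # ~ frac_rms^2, i.e. ~0.02
```

When the intrinsic variance falls below the mean squared measurement error, this estimator goes negative, which is why non-detections are quoted as upper limits.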
The $L_{2-10~keV}/L_{Edd}$\ derived for the longest NGC~3226 observation, where the variability was observed, did not increase compared to the other three observations; in fact, it decreased from 4.27$\times10^{-6}$ to 2.95$\times10^{-6}$. Therefore, the significant variability seen during the $\sim100$~ks observation cannot be attributed to an increase in $\dot{m}$ (as also excluded by \citealt{binder09apj:ngc3226}). \citet{markowitz05apj:psdllagn} showed that the PSD break time-scale of the LLAGN NGC~4258 is greater than 4.5~days at the $>90$\%\ confidence level. The authors suggested that LLAGN, like XRBs in their low/hard state, might have longer break time-scales than luminous AGN and XRBs in the high/soft state. The X-ray variability detected in NGC~3226 could be the result of a break time-scale of $\sim1$~day ($\sim10^{-5}$~Hz) in the NGC~3226 PSD; however, this assumption is purely speculative, and a PSD measurement of NGC~3226, not possible with the present 100~ks observation owing to the low number of counts, could help confirm or refute this idea. Similar to NGC~3226, variability on short time-scales (half a day to several days) was observed in the LINER~1 source NGC~3998 \citep{pianmnras10} and in the low luminosity AGN M~81 \citep{ishisaki96PASJ:m81, iyomoto01MNRAS:m81, pianmnras10}. \begin{figure*}[!t] \centerline{\includegraphics[angle=90,width=0.99\textwidth]{16806fg13.pdf}} \caption{Long term X-ray variability of the LINER~1s observed more than once. Red ticks represent the error on the 2-10~keV corrected flux.} \label{longtermflux} \end{figure*} Although all of the normalized excess variances derived for our sample are upper limits, except for NGC~3226, we show in Fig.~\ref{NXSvsMbh} $\sigma_{NXS}^2$ as a function of BH mass.
Fig.~\ref{NXSvsMbh} rules out any variability on time-scales shorter than 50~ks in our sample of LINER~1s, of the kind commonly observed in more luminous AGN \citep{oneill05mnras:varagn}, except for NGC~4278, which shows a 10\%\ flux increase on a $\sim1.5$~h time-scale. Variability on time-scales of months to years is common in our sample. Seven out of nine sources observed more than once show variability on month (NGC~5005 and NGC~4278) to year (NGC~315, NGC~3226, NGC~3718, NGC~3998, NGC~4143, and NGC~4278) time-scales\footnote{All of the sources reported to show variability on year time-scales and not on month time-scales lack intra-year observations; therefore, monthly variability cannot be tested.}, with a factor of 5 flux increase in the most variable source, NGC~4278, and a factor of 1.4 flux increase in the least variable source, NGC~4143. The only two sources that do not show variability on long time-scales are the ones with a very low $L_{2-10~keV}/L_{Edd}$: NGC~2681 and NGC~2787 (Table~\ref{specfit-fluxes}). Both sources were observed twice within less than 5 months; therefore, variability on year time-scales cannot be tested. For illustration purposes, we report in Fig.~\ref{longtermflux} the fluxes derived for the sources observed more than once. All but NGC~2681 and NGC~2787 exhibit month and/or year time-scale variability. With the increasing number of X-ray observations of LINER and low luminosity AGN sources, more sources are revealing short time-scale variability ($\le1$~day). This variability is detected with the help of the current generation of X-ray telescopes (\chandra, \xmm, and {\sl Swift}), which have sufficient spatial resolution to isolate the nuclear X-ray emission from contaminating sources. Nevertheless, before any firm conclusions can be drawn about variability in low accretion rate sources, a homogeneous, well-sampled population of LINERs and/or low luminosity AGN with long X-ray observations ($\sim100$~ks) should be available.
\subsection{X-ray spectral shape} Most recently, \citet{zhang09apj:llagnxray} studied the X-ray nuclear activity of a distance-limited sample (D$<$15~Mpc) of 187 galaxies. The authors found that the X-ray emission from $\sim$60\%\ of the elliptical and early type spiral galaxies is consistent with an accreting nuclear supermassive BH. They fit the spectrum of each source with an absorbed power-law and found a photon index $\Gamma\approx1.5-2$ and an intrinsic column density covering almost 4 orders of magnitude, from $10^{20}$~cm$^{-2}$ to $10^{24}$~cm$^{-2}$. The authors found luminosities ranging from $\sim10^{38}$~erg~s$^{-1}$ to $\sim10^{42}$~erg~s$^{-1}$, corresponding to a L$_{0.3-8~keV}/$L$_{Edd}$ ratio between $10^{-8}$ and $10^{-4}$. Similar results were derived in earlier work, such as that of \citet{gonzalezmartin06aa,gonzalezmartin09aa} on a sample of 82 nearby LINER sources. The authors found that 60\%\ of the sample shows a hard unresolved X-ray point-like source in the 4.5-8.0~keV band, consistent with the presence of an AGN. The data were fit mainly by an absorbed power-law, with a mean photon index $\Gamma=2.1\pm0.5$ and an intrinsic column density ranging from $10^{20}$~cm$^{-2}$ to $10^{24}$~cm$^{-2}$, and a thermal component with a mean temperature $kT=0.5\pm0.3$~keV. The results we derive for our homogeneous, optically selected sample of LINER~1s showing definite detection of broad H$\alpha$ emission are in agreement with the results derived for broad samples of LINERs and low luminosity AGN in the past. The X-ray spectra of the majority of our sample (9/13) are well fit over the whole 0.5-10~keV band with an absorbed power-law. This could mean that the X-ray emission in the soft and in the hard band originates from the same region. In the remaining 4 sources (NGC~315, NGC~2681, NGC~4278, and NGC~5005), we included a thermal component to account for some low-energy residuals, most likely from diffuse hot gas.
We found a photon index ranging from $1.3\pm0.2$ for the hardest source to $2.4^{+0.2}_{-0.3}$ for the softest one, with a mean value of $1.9\pm0.2$ and a dispersion $\sigma=0.3$, similar to the values reported in \citet{komossa99aa:liners} and \citet{terashima02apjs:LLAGNASCA}. The absorption column density observed in our sample spans two orders of magnitude, from $10^{20}$~cm$^{-2}$ for unabsorbed sources (e.g., NGC~3998, NGC~4143) to $10^{22}$~cm$^{-2}$ for the mildly absorbed source NGC~3718. This is consistent with the fact that strong absorption is not expected in objects showing broad optical emission lines, e.g., LINER~1s, as found for luminous type~1 AGN. This result was already confirmed by \citet{terashima02apjs:LLAGNASCA}, who studied the {\sl ASCA} X-ray observations of 21 LINER and 17 low luminosity Seyfert galaxies. The authors found a discrepancy in the intrinsic absorbing column density between type 1 LINERs, with $N_{H}<10^{22}$~cm$^{-2}$ (except for NGC~1052, which is not included in our sample, see \S~\ref{sec:sample}), and type 2 LINERs, absorbed with column densities of a few $10^{22}$~cm$^{-2}$. The slight excess absorption in some of the sources in our sample of LINER~1s could be due to the host galaxy and/or some Compton-thin material intrinsic to the central engine. We find a mean temperature of the thermal component of $0.63\pm0.06$~keV, typical of diffuse hot gas in early type galaxies \citep[$kT\approx0.5$--$2.0$~keV,][]{fabbiano89aa:difgaz}. Similar $kT$ values were reported in \citet{flohic06apj} for a sample of 19 LINER sources observed with \chandra. The 2-10~keV luminosities of the LINER~1s in our sample span a range from 3.2$\times10^{38}$~erg~s$^{-1}$ to 5.4$\times10^{41}$~erg~s$^{-1}$, resulting in a $L_{2-10~keV}/L_{Edd}$\ range from $2.0\times10^{-8}$ to $2.3\times10^{-5}$, in agreement with results derived for broad samples of LINERs and low luminosity AGN.
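The $L_{2-10~keV}/L_{Edd}$\ bookkeeping behind these ranges can be sketched with the standard hydrogen Eddington luminosity, $L_{Edd}=1.26\times10^{38}\,(M_{BH}/M_{\odot})$~erg~s$^{-1}$. The inputs below are the numbers quoted in the text for the faint end of the sample (a $1.4\times10^8$~M$_{\odot}$ BH and $L_{2-10~keV}=3.2\times10^{38}$~erg~s$^{-1}$); the pairing is for illustration:

```python
# Sketch of the L_2-10keV / L_Edd bookkeeping, with
# L_Edd = 1.26e38 * (M_BH / M_sun) erg/s (standard hydrogen Eddington limit).
# The inputs are the faint-end numbers quoted in the text; the pairing of
# this luminosity with this mass is illustrative.
def eddington_ratio(l_2_10_keV_erg_s, m_bh_msun):
    l_edd = 1.26e38 * m_bh_msun     # erg/s
    return l_2_10_keV_erg_s / l_edd

ratio = eddington_ratio(3.2e38, m_bh_msun=1.4e8)   # ~2e-8, the low end quoted
```

A massive BH with a faint nucleus thus sits at the extreme low end of the quoted $2.0\times10^{-8}$ to $2.3\times10^{-5}$ range.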
These corresponding ``Eddington ratios'' are at least an order of magnitude smaller than those reported for luminous AGN \citep{porquet04aa:pgquasar,nandra07mnras:felinesey}. Although the photon indices measured for our sample of LINER~1s are similar to those of type~1 Seyfert galaxies \citep{nandra97apj:SEYfekline}, this does not necessarily mean that the X-ray emission in LINER~1s originates from an accretion flow similar to that of more luminous galaxies \citep[e.g., NGC~3998,][]{ptak04apj:ngc3998}. Other X-ray timing and spectral aspects could help shed light on the accretion mechanism in LINER~1s. \subsection{The absence of an Fe K$\alpha$ line} It is now believed that a narrow Fe~K$\alpha$ emission line at 6.4~keV is a common feature in the X-ray spectra of Seyfert galaxies. The origin of this neutral narrow emission line is generally attributed to fluorescence originating at distances ranging from the parsec-scale torus down to the broad line region \citep{shu10apjs:iteffect}. An X-ray Baldwin effect was discovered by \citet{iwasawa97apj:iteffect}, whereby the EW of the Fe line decreases with increasing luminosity. \citet{page04mnras:iteffect} suggested that this Baldwin effect observed in X-rays could be the result of a luminosity-dependent covering fraction of the putative torus. The increase in radiation pressure flattens the torus, leading to a larger opening angle and, hence, a smaller covering factor \citep{konigl94apj:torus}. This effect was later confirmed for larger samples of radio-quiet and radio-loud AGN (\citealt{nandra97apj:iteffect}; \citealt{bianchiaa07:iteffect}; \citealt{chaudharyaa10:iteffect}; but see also \citealt{jim05aa:iteffect}).
LINER~1s individually studied in the literature with a high signal-to-noise ratio around 6.4~keV do not show any sign of an Fe~K$\alpha$ emission line, with stringent upper limits on the EW: 25~eV for NGC~3998 \citep{ptak04apj:ngc3998}, 35~eV for NGC~3226 \citep{binder09apj:ngc3226}, and 22~eV for NGC~4278 (Y10). Therefore, it appears that the X-ray Baldwin effect does not hold down to very low luminosity AGN. We do not detect any significant Fe~K$\alpha$ emission line in our sample, with upper limits obtained for the sources with the highest signal-to-noise ratio around 6~keV (38~eV: NGC~3226, 112~eV: NGC~3718, 33~eV: NGC~3998, and 22~eV: NGC~4278). If the broad line region were responsible for the emission of an Fe~K$\alpha$ line, as is the case for the intermediate Seyfert-LINER source NGC~7213 \citep{lobban10mnras:ngc7213,bianchi08mnras:ngc7213}, we would expect to detect it in at least the sources with a high signal-to-noise ratio around 6.4~keV, since our sample consists of LINER~1s showing a definite detection of {\sl broad} H$\alpha$ emission. Instead, the disappearance of the torus structure at low Eddington ratios, as suggested by \citet{ho08aa:review} and \citet{zhang09apj:llagnxray}, could explain the lack of Fe emission lines in our sample, assuming that the torus is responsible for the formation of the fluorescent Fe line. Interestingly, the highest upper limit on the EW of the Fe~K$\alpha$ line is found for NGC~3718, which has the highest hydrogen column density ($N_H\approx10^{22}$~cm$^{-2}$) in our sample of LINER~1s. \subsection{Accretion mode in LINER~1s?} \label{accmode} The accretion mechanism responsible for the bulk of the energy from radio to X-rays in LINER sources is still poorly understood. \citet{ho09apj:riaf} recently demonstrated that, through local mass loss from evolved stars and Bondi accretion of hot gas, the accretion rate needed to supply the luminosities observed in LINERs and other LLAGN is easily attained.
The author argued that the gas supply present at the center of nearby galaxies should generate more active nuclei, and that the luminosity deficit seen in the nearby universe is the result of a low radiative efficiency. Indeed, radiatively inefficient accretion flow \citep[RIAF, see][for reviews]{narayan08:riafreview,quataert01aspc:riaf} models have been applied to a growing number of LINERs and LLAGN to explain their energy budget and their spectral energy distribution: M~81 and NGC~4579 \citep{quataert99apj:m81ngc4579}, NGC~4258 \citep{gammie99apj:ngc4258}, NGC~3998 \citep{ptak04apj:ngc3998}, and NGC~1097 \citep{nemmen06apj:ngc1097}. \citet{wu07apj:riaf} fit the SEDs of a small sample of eight FR~I sources with RIAF and/or jet models and confirmed the prediction of \citet{yuan05apj:jetvsriaf} that below a critical value of the Eddington ratio ($L_{2-10~keV}/L_{Edd}$$\approx10^{-6}$) the X-ray emission becomes dominated by the jet rather than the RIAF. \citet{ho09apj:riaf} suggested an even lower value to distinguish between objects in the ``low'' state, where an outer thin disk persists, and those in the ``quiescent'' state containing a pure RIAF. Since we are only considering the X-ray properties of this sample in our study, we decided to examine the $\Gamma-$$L_{2-10~keV}/L_{Edd}$\ relation, which was shown to be a good indicator of the accretion rate in luminous AGN \citep{shemmer06apj:rqagn}. In luminous AGN, a positive correlation is found between the hard X-ray power-law slope and the ratio of the 2-10~keV luminosity to the Eddington luminosity \citep{wang04apjl:rqagn, shemmer06apj:rqagn, sobolewska09mnras:gamvsedd}. \citet{greene07apj:imagn} showed that the relation holds for intermediate-mass ($10^5-10^6$~M$_\odot$) BHs in active galaxies, and \citet{porquet04aa:pgquasar} showed that the relation extends to more luminous objects in a sample of 21 low-redshift quasars.
A viable explanation is that whenever the disk emission increases, the corona, the origin of the hard X-ray emission, cools more efficiently, steepening the hard X-ray spectrum. This relation was examined for LLAGN (local Seyferts and LINERs) by \citet{gu09mnras:gamVSeddllagn} (see also \citealt{constantin09ApJ:liners}), who found a significant anticorrelation between the hard X-ray photon index $\Gamma$ and the Eddington ratio, L$_{bol}$/L$_{Edd}$=30$\times$$L_{2-10~keV}/L_{Edd}$, for the local Seyfert galaxies in their sample. However, no strong correlation was found when considering only the LINER sources in their sample, most likely owing to heterogeneous fitting models, as their data points were collected from different studies. The authors suggested that this anticorrelation found in their sample, in contrast to the positive correlation found for more luminous AGN, could signify that LLAGN resemble XRBs in the low/hard state, where a similar anticorrelation is found \citep{yamaoka05cjaa:XRBhs, yuan07ApJ:xrbs, wu08apj:XRBhs}. \citet{wu08apj:XRBhs} suggested that the anticorrelation could indicate an accretion mode consistent with a RIAF, whereas a positive correlation could indicate the existence of a classical thin accretion disk. A plausible explanation for the hardening of the spectrum as the accretion rate increases in a RIAF context (as seen in the $\Gamma$-$L_{2-10~keV}/L_{Edd}$\ anticorrelation) is the increase of the optical depth of the RIAF, which leads to an increase in the Compton $y$-parameter and, hence, a harder X-ray spectrum. Our well-defined, optically selected sample of LINER~1s and our homogeneous data analysis techniques allowed us to establish a strong anticorrelation between $\Gamma$ and $L_{2-10~keV}/L_{Edd}$\ for our sample of LINER~1s (see \S~\ref{correlations}), not seen in the LINER sample of \citet{gu09mnras:gamVSeddllagn}.
This strong anticorrelation supports the idea suggested by \citet{gu09mnras:gamVSeddllagn} that LLAGN might be similar to XRBs in the low/hard state, where the emission is presumably generated in a RIAF structure. \citet{qiao10pasj:LHS} predicted such an anticorrelation in the low/hard state for their accretion flow model, consisting of an outer cool, optically thick disk and an inner hot, optically thin RIAF, within the framework of a disk and corona with mass evaporation \citep{liu02apj:LHSriaf}. \citet{qiao10pasj:LHS} found that their model can reproduce the anticorrelation between the X-ray photon index and the Eddington ratio observed for the X-ray binary \object{XTE~J1118+480}. \begin{figure}[!t] \centerline{\includegraphics[angle=0,width=0.5\textwidth]{16806fg14.pdf}} \caption{Dependence of $\Gamma$ on the Eddington ratio. The positive and negative correlations represent the ones found for a sample of luminous AGNs \citep{shemmer08apj:gamvsedd} and for our sample of LINER~1s, respectively. Note that the intercept of our anticorrelation has changed because we now consider $L_{bol}$ instead of $L_{2-10~keV}$. The intersection of the two lines represents a probable transition point from a standard thin accretion disk to a RIAF in AGNs.} \label{diff-state} \end{figure} \citet{wu08apj:XRBhs} noted that the transition between the two accretion modes in their sample of XRBs differs from source to source but roughly converges to the transition point where $\Gamma=1.5\pm0.1$ and $\log(L_X(0.5-25$~keV$)/L_{Edd})=-2.1\pm0.2$.
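The intercept change mentioned in the caption of Fig.~\ref{diff-state} is simple arithmetic: with the bolometric correction $L_{bol}\approx16\,L_{2-10~keV}$ used in this work, $\log(L_{2-10~keV}/L_{Edd})=\log(L_{bol}/L_{Edd})-\log_{10}16$, so rewriting the $\Gamma$ relation in terms of $L_{bol}$ leaves the slope unchanged and shifts only the intercept. A minimal sketch (the resulting number illustrates the shift and is not a value quoted in the paper):

```python
# Sketch of the intercept shift when rewriting
#   Gamma = -0.31*log10(L_X/L_Edd) + 0.11
# in terms of L_bol = 16*L_X: only the constant term changes, by
# -slope*log10(16) ~ +0.37; the slope is unaffected.
import math

slope, intercept_x = -0.31, 0.11
intercept_bol = intercept_x - slope * math.log10(16.0)
```
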
Assuming that the accretion mode in the high luminosity AGN sample of \citet{shemmer08apj:gamvsedd} is the standard thin disk and that the accretion mode in our sample of LINER~1s is a RIAF, the transition point between the two accretion modes in AGNs would be at $\Gamma\approx1.3_{-0.3}^{+0.2}$ and $\log(L_{bol}/L_{Edd}) \approx -2.6_{-0.8}^{+0.7}$ (Fig.~\ref{diff-state}), in good agreement, within the error bars, with the values reported for XRBs and for the sample of \citet{constantin09ApJ:liners}. One should keep in mind that these two values are affected by several parameters, mainly the BH mass and the bolometric luminosity, which are calculated by different means in \citet{shemmer08apj:gamvsedd} and in this work. Indeed, \citet{shemmer08apj:gamvsedd} calculated the BH masses and Eddington ratios, $L_{bol}/L_{Edd}$, of their sample of luminous AGNs using $\nu L_{\nu}$(5100\AA) and FWHM(H$\beta$), whereas our BH masses are based on the $M-\sigma$ relation and $L_{bol}/L_{Edd}$ is calculated using the bolometric correction $L_{bol}=16~L_X$ of \citet{ho08aa:review}. We do not detect any sharp break below the critical value of $L_{2-10~keV}/L_{Edd}$\ of 10$^{-6}$ that would indicate a change in the X-ray emission to being dominated by synchrotron emission from a jet \citep{yuan05apj:jetvsriaf}. This could be due to selection effects, as none of our sources, except for NGC~315, harbors strong jets. Moreover, a transition between a ``low'' and a ``quiescent'' state whenever the accretion rate drops below a critical value ($\dot{m}\approx10^{-6}$), as suggested most recently by \citet{ho09apj:riaf}, could not be tested since, as stressed by the author, this kind of analysis should be conducted on large samples. \subsection{Other correlations and implications} \label{other-corr} We found a positive correlation between the hard X-ray luminosity, $L_{2-10~keV}$, and $L_{2-10~keV}/L_{Edd}$.
An increase in the hard X-ray luminosity with increasing $L_{2-10~keV}/L_{Edd}$\ is independent of the accretion model and is seen in different types of galaxies, from transition nuclei to high-luminosity Seyfert nuclei and quasars \citep{ho09apj:riaf,greene07apj:seyquas}. Another strong positive correlation we found for our sample of LINER~1s is the increase of the 2-10~keV luminosity with increasing BH mass. A similar trend is seen in a sample of 112 early-type galaxies within a distance of 67~Mpc \citep{pellegriniapj10:xemiearlygal}. Therefore, both the BH mass and $L_{2-10~keV}/L_{Edd}$\ (equivalent to the Eddington ratio L$_{bol}$/L$_{Edd}$) are driving factors of the hard $2-10~$keV luminosity. This can be seen in the right upper panel of Fig.~\ref{other-corr-fig} in \S~\ref{correlations}, where NGC~2787, with a relatively high BH mass of $1.4\times10^8$~M$_{\odot}$, has a low 2-10~keV luminosity as a consequence of a low $L_{2-10~keV}/L_{Edd}$\ ($2\times10^{-8}$). We did not find any strong dependence of the spectral slope $\Gamma$ or the Eddington ratio on the BH mass (although a Spearman-rank correlation showed that $L_{2-10~keV}/L_{Edd}$\ is weakly anticorrelated with the BH mass). This is in contrast with the anticorrelation between the X-ray spectral slope and the BH mass seen in a sample of low-redshift PG quasars \citep{porquet04aa:pgquasar}. It appears that in our sample of LINER~1s, which have a very low accretion rate, the BH mass -- here spanning 2 orders of magnitude in log($M_{BH}$) space -- is not the main driver of the X-ray photon index. \section{Summary and conclusion} In the present work, we studied the X-ray properties of a sample of nearby LINER galaxies showing definite detection of broad H${\alpha}$ emission \citep{ho97apjs:broadHal}.
Such a sample ensures that accretion onto a SMBH is responsible for the broad emission lines, guarantees the absence of heavy obscuration, and enables an X-ray comparison of this class with both XRBs and type 1 AGN. Only two sources in our sample exhibit significant hours-to-days time-scale variability. The NGC~4278 flux increased by 10\%\ over a $\sim1.5$ hour period, remaining constant for the rest of the observation (see Y10). On the other hand, NGC~3226 shows variability over the whole $\sim100$~ks observation, increasing continuously. Three other sources show hints of inter-day variability, with a K-S test probability between 2\%\ and 4\%\ that the X-ray emission originates from a constant source. Short time-scale variability studies of a homogeneous, well-sampled population of LINERs and/or low luminosity AGN, with long X-ray observations ($\sim100$~ks), should be conducted before any firm conclusions are made about variability from low accretion rate sources. On the other hand, variability on longer (months to years) time-scales is common in our sample, where 7 out of 9 sources exhibit long time-scale variability. The two sources not exhibiting variability are the ones with a very low $L_{2-10~keV}/L_{Edd}$. The X-ray spectra of our sample of LINER~1s are typical of all types of LINER sources, well fit with an absorbed power-law or a combination of a thermal component and an absorbed power-law. We found a photon index for our sample between $1.3\pm0.2$ for the hardest source and $2.4^{+0.2}_{-0.3}$ for the softest one, with a mean value of $1.9\pm0.2$ and a dispersion $\sigma=0.3$. None of the sources in our sample is heavily absorbed, with NGC~3718 having the highest intrinsic hydrogen column density of $\sim10^{22}$~cm$^{-2}$. The thermal component has a mean temperature kT$\approx0.6$~keV, typical of other LINER sources embedded in diffuse emission \citep{flohic06apj}.
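The K-S constancy test quoted above can be sketched as follows: for a constant source, photon arrival times are uniformly distributed over the exposure, so a one-sample Kolmogorov-Smirnov statistic against the uniform CDF quantifies variability. This is a minimal illustration with synthetic, evenly spaced event times, not our actual event lists; the $1.36/\sqrt{n}$ threshold is the standard $\sim$5\%\ critical value for large $n$.

```python
# Hedged sketch of a K-S constancy test: compare the empirical CDF of
# photon arrival times with the uniform CDF expected for a constant
# source.  The event list below is synthetic, for illustration only.

def ks_statistic_uniform(times, exposure):
    """Maximum distance between the empirical CDF of arrival times and
    the uniform CDF expected for a constant source."""
    t = sorted(times)
    n = len(t)
    d = 0.0
    for i, ti in enumerate(t, start=1):
        u = ti / exposure
        d = max(d, abs(i / n - u), abs((i - 1) / n - u))
    return d

# Synthetic "constant" light curve: 200 evenly spaced events over 100 ks
exposure = 1.0e5
events = [(i + 0.5) * exposure / 200 for i in range(200)]
d = ks_statistic_uniform(events, exposure)
constant_ok = d < 1.36 / 200 ** 0.5   # constancy not rejected at ~5% level
```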
Our sample spans three orders of magnitude in both luminosity and $L_{2-10~keV}/L_{Edd}$\ space, ranging from 10$^{38}$ to 10$^{41}$~erg~s$^{-1}$ and from 10$^{-8}$ to 10$^{-5}$ respectively, the latter being at least an order of magnitude smaller than the Eddington ratios observed in luminous AGN (Seyferts and quasars). We do not detect any significant Fe~K$\alpha$ emission line at 6.4~keV in the spectra of our sample of LINER~1s. We obtained upper limits on the Fe line equivalent width for the four sources with the highest signal-to-noise ratio around 6.4~keV (38~eV: NGC~3226, 112~eV: NGC~3718, 33~eV: NGC~3998, and 22~eV: NGC~4278). The lack of a narrow Fe line could be due to the disappearance of a torus structure in LLAGN and LINER sources \citep{ho08aa:review,zhang09apj:llagnxray}. This implies that the X-ray Baldwin effect or ``Iwasawa-Taniguchi'' effect of decreasing Fe~K$\alpha$ EW with increasing 2-10~keV luminosity does not extend down to LINER~1s. Finally, we looked for correlations between the X-ray properties and the AGN properties of our LINER~1 sample. We found a strong anticorrelation between the power-law photon index and the Eddington ratio, suggesting that LINER~1s differ from the more luminous Seyfert galaxies and quasars, which show a positive correlation between the photon index and the Eddington ratio. This anticorrelation, established for the first time for LINER~1s, suggests that the accretion mode of LINER~1s could be similar to that of XRBs in their low/hard state, and that a RIAF could be responsible for the energy emitted from the nucleus. We found that the 2-10~keV luminosity in our sample is positively correlated with two parameters, the BH mass and the Eddington ratio, $L_{2-10~keV}/L_{Edd}$, confirming the results found for broad samples of LINERs and low luminosity AGN.
On the other hand, it appears that in our sample of LINER~1s, which have a very low accretion rate, the BH mass is not the main driver of the X-ray photon index, as the two quantities do not show any strong correlation. \acknowledgements This research has made use of data obtained from the {\sl Chandra Data Archive} and the {\sl Chandra Source Catalog}, and software provided by the {\sl Chandra X-ray Center} (CXC) in the application packages CIAO and ChIPS. This work is based on observations with \xmm, an ESA science mission with instruments and contributions directly funded by ESA Member States and the USA (NASA). This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. G.Y. would like to thank I. Papadakis for helpful and enlightening discussions. The authors would also like to thank the referee for fruitful comments that improved the quality of the manuscript. \begin{appendix} \section{Notes on individual sources} \label{appa} {\sl NGC~266.} \citet{terashima03apj:rloud} studied the 2~ks snapshot observation made with \chandra\ and discussed here. They modeled the X-ray spectrum with an absorbed power-law and derived a hydrogen column density and a photon index of $<0.82\times10^{22}$~cm$^{-2}$ and 1.4, respectively. They found a 2-10 keV flux of $1.6\times10^{-13}$~erg~s$^{-1}$~cm$^{-2}$. The values of the photon index and the 2-10~keV flux are in good agreement with the values we report here (within the error bars), although no additional absorption is required in our model. {\sl NGC~315.} \citet{worrall94apj:ngc315} first suggested the presence of an active galactic nucleus at the center of NGC~315 using \rosat\ data.
This assumption was later confirmed by \citet{matsumoto01pasj:ngc315} when studying a 37~ks ASCA observation (see also \citealt{terashima02apjs:LLAGNASCA}). The authors fit the hard 2-10~keV spectrum with a power-law and find a photon index of $\sim2$ and a luminosity of $3.1\times10^{41}$~erg~s$^{-1}$. A resolved X-ray jet emission was first reported by \citet{worrall03mnras:ngc315} when studying the \chandra\ snapshot observation (obs. ID: 855). The fit to the jet emission with a power-law gave a photon index of $2.5\pm0.7$ and a $3.5\times10^{40}$~erg~s$^{-1}$ luminosity. The authors fit the unresolved core emission with a moderately absorbed power-law with an intrinsic hydrogen column density of $\sim5\times10^{21}$~cm$^{-2}$ and a photon index of $1.4\pm0.4$. \citet{worrall07mnras:ngc315} found similar results when studying the longer \chandra\ observation, reporting a harder core spectrum than the jet, with photon indices of $\sim1.6$ and $\sim2.2$, respectively. Moreover, \citet{croston08mnras:radgal}, after solar flare cleaning, studied the only \xmm\ observation and fit the $60\hbox{$^{\prime\prime}$}$ core spectrum with a combination of a mekal and a power-law. Our analysis of the two \chandra\ observations and the \xmm\ one gives similar results to all of the above studies, with a simultaneous fit to the different extracted spectra in which a combination of a mekal ($kT\approx0.5$~keV) and a mildly absorbed ($N_{h}\approx10^{22}$~cm$^{-2}$) power-law ($\Gamma$ between 1.5 and 2) was used. The increase in the power-law slope from $1.5\pm0.1$ during the \chandra\ observation to $2.1_{-0.2}^{+0.1}$ during the \xmm\ one is accompanied by a decrease in the 2-10~keV flux from $9.8\times10^{-13}$ to $4.6\times10^{-13}$~erg~s$^{-1}$~cm$^{-2}$. This behavior (see \S~\ref{accmode}) is typical of X-ray emission originating in a RIAF structure, which is believed to be the accretion mechanism responsible for the bulk of the energy emitted from radio to X-rays in NGC~315 \citep{wu07apj:riaf}.
{\sl NGC 2681.} One of the two observations performed with \chandra\ has already been reported in \citet{satyapalapj05:sfVSliner}. The authors fit the 0.5-8~keV spectrum with a combination of a thermal component with $kT\approx0.7$~keV and a power-law with a photon index $\Gamma\approx1.6$. No intrinsic absorption was required. The same observation was analyzed in \citet{gonzalezmartin09aa}, and similar results were derived after fitting the spectrum with a {\sl mekal} and a power-law. \citet{gonzalezmartin09aa} derived a 2-10~keV flux of $\sim2\times10^{-13}$~erg~s$^{-1}$~cm$^{-2}$. We fit the spectra of the two \chandra\ observations of NGC~2681 simultaneously with a combination of a thermal component and an absorbed power-law. We found results similar to those derived in the previous works ($\Gamma\approx1.5$ and $kT\approx0.6$~keV), with a 2-10~keV flux of $3\times10^{-13}$~erg~s$^{-1}$~cm$^{-2}$. {\sl NGC 2787.} \citet{ho01apjl}, after studying a \chandra\ snapshot, gave this source a class III X-ray morphology, showing a hard X-ray nucleus embedded in diffuse emission. \citet{terashima03apj:rloud} derived a 2-10~keV flux of about $3\times10^{-14}$~erg~s$^{-1}$~cm$^{-2}$, assuming a photon index of 2 and Galactic absorption. \citet{gonzalezmartin09aa} analyzed both the \chandra\ and \xmm\ long observations, fitting the \chandra\ spectrum with a power-law with a rather soft $\Gamma$ of 2.3 and the \xmm\ spectrum with a combination of two power-laws and a thermal component, with an absorption of $\sim10^{22}$~cm$^{-2}$ on the power-law. We found a good fit to both the \chandra\ and \xmm\ spectra simultaneously with a single absorbed power-law, with little absorption ($\sim2\times10^{21}$~cm$^{-2}$), a soft power-law photon index, $\Gamma=2.4$, and a 2-10~keV flux of $4\times10^{-14}$~erg~s$^{-1}$~cm$^{-2}$.
{\sl NGC~3226.} \citet{georgeapj01:ngc3226} fit the 0.5-10~keV spectrum extracted from the long \chandra\ observation with an absorbed ($N_{H}\approx5\times10^{21}$~cm$^{-2}$) power-law ($\Gamma\approx1.9$). \citet{terashima03apj:rloud} fit the spectrum of the 2.5~ks snapshot \chandra\ observation with a moderately absorbed power-law with $N_{H}\approx10^{22}$~cm$^{-2}$ and $\Gamma\approx2.2$. \citet{gondoin0411:ngc3226} fit the data of the 35~ks \xmm\ observation with a bremsstrahlung model modified by a partial covering absorber and found that the X-ray emitting region, with a temperature $kT\approx0.9$~keV, is 90\% covered by an absorber with $N_{H}\approx5\times10^{21}$~cm$^{-2}$. \citet{binder09apj:ngc3226} studied the $\sim$100~ks \xmm\ observation and fit the spectrum with a partially covered power-law with $\Gamma\approx1.9$, a covering fraction of 90\%, and an intrinsic hydrogen column density of $10^{21}$~cm$^{-2}$. We fit the spectra of both the \xmm\ and the long \chandra\ observations simultaneously with an absorbed power-law and found an intrinsic column density varying between the observations from $\sim3\times10^{21}$~cm$^{-2}$ to $\sim9\times10^{21}$~cm$^{-2}$ and a mean photon index $\Gamma=1.9$. {\sl NGC~3718.} \citet{satyapalapj05:sfVSliner} studied the snapshot \chandra\ observation and fit the spectrum with an absorbed ($N_{H}\approx10^{22}$~cm$^{-2}$) power-law ($\Gamma\approx1.5$), in excellent agreement with our fit results for the same observation. We studied two \xmm\ observations of NGC~3718, which lies in the field of view of the observations of the heavily absorbed Seyfert~2 galaxy \object{UGC~6527}. We fit the spectra with an absorbed power-law and found an intrinsic hydrogen column density similar to the one derived for the \chandra\ observation, $\approx10^{22}$~cm$^{-2}$, but a somewhat softer power-law with $\Gamma\approx1.8$.
This softening is accompanied by a 2-10~keV flux decrease from $3.3\times10^{-12}$~erg~s$^{-1}$~cm$^{-2}$ to $1.6\times10^{-12}$~erg~s$^{-1}$~cm$^{-2}$. {\sl NGC~3998.} \citet{ptak04apj:ngc3998} studied the 10~ks \xmm\ observation and fit the spectrum with a slightly absorbed ($N_{H}\approx10^{20}$~cm$^{-2}$) power-law ($\Gamma\approx1.9$). Similar results were found for observations made with {\sl BeppoSAX} \citep{pellegriniaa00:mgc3998} and {\sl ASCA} \citep{ptak99apjs:TCM}. \citet{gonzalezmartin09aa} fit the \chandra\ spectrum with a combination of two absorbed power-laws and a thermal component. We fit the \xmm\ and the \chandra\ spectra simultaneously with a mildly absorbed ($N_{H}\approx10^{20}$~cm$^{-2}$) power-law and found $\Gamma$ varying from 1.8 to 2.1, respectively, accompanied by a flux decrease from $1.1\times10^{-11}$~erg~s$^{-1}$~cm$^{-2}$ to $6.5\times10^{-12}$~erg~s$^{-1}$~cm$^{-2}$. {\sl NGC~4143.} \citet{terashima03apj:rloud} fit the \chandra\ snapshot observation with an absorbed power-law with $N_{H}<10^{21}$~cm$^{-2}$ and $\Gamma\approx1.7$. We fit the same snapshot observation with a power-law without requiring intrinsic absorption and found a similar power-law photon index within the error bars, $\Gamma\approx1.9$. We fit the \xmm\ observation with an absorbed power-law and found $N_{H}=6\times10^{20}$~cm$^{-2}$ and $\Gamma\approx2.2$. {\sl NGC~4203.} A power-law fit to the {\sl ASCA} spectrum resulted in $\Gamma\approx1.8$ \citep{iyomotoapj98:ngc4203}. \citet{ho01apjl} gave NGC~4203 a class I X-ray morphology, showing a dominant X-ray nucleus. We find that the 40~ks \chandra\ spectrum is well fitted with a simple power-law affected by Galactic absorption with $\Gamma\approx2.3$, softer than the result reported for {\sl ASCA}, most likely due to contamination from X-ray sources in the {\sl ASCA} extraction region of 1\hbox{$^\prime$}. {\sl NGC~4278.} See Y10.
{\sl NGC~4750.} Based on the only \chandra\ snapshot observation, \citet{dudik05apj:hardcoreliner} gave this source a morphological X-ray type II, exhibiting multiple hard off-nuclear point sources of comparable brightness to the nuclear source. We fit the spectrum of this same observation with an absorbed power-law with $N_{H}<3\times10^{21}$~cm$^{-2}$ and $\Gamma=1.8$. {\sl NGC~4772.} We fit the spectrum of the only observation, a \chandra\ snapshot, with an absorbed ($N_{H}\approx5\times10^{21}$~cm$^{-2}$) power-law ($\Gamma\approx1.7$). {\sl NGC~5005.} \citet{terashima02apjs:LLAGNASCA} fit the {\sl ASCA} spectrum with a combination of an absorbed ($N_{H}<9\times10^{21}$~cm$^{-2}$) power-law ($\Gamma\approx1$) and a thermal component ($kT\approx0.8$~keV). \citet{dudik05apj:hardcoreliner} gave NGC~5005 a morphological X-ray type III, showing a hard nuclear point source embedded in diffuse emission. \citet{gonzalezmartin09aa} fit the \xmm\ spectrum with a combination of a thermal component with $kT\approx0.3$~keV and an absorbed ($N_{H}\approx6\times10^{21}$~cm$^{-2}$) power-law ($\Gamma\approx1.5$). We find a hotter thermal component when fitting the same data set, with $kT\approx0.6$~keV, and a mildly absorbed power-law with $N_{H}\approx10^{21}$~cm$^{-2}$ and $\Gamma\approx1.7$. \section{Surrounding sources of the centers of galaxies observed with \chandra} \label{appb} In this Appendix, we report the spectral analysis of resolved and/or unresolved off-nucleus sources detected with the \chandra\ telescope but blended with the central LINER in the \xmm\ extraction region (\S~\ref{xmmobs}). Figs.~1-4 show the surrounding sources of these LINER~1s detected with \chandra\ within a 25\hbox{$^{\prime\prime}$}-radius circle. The surrounding medium of NGC~4278 is already reported in Y10. {\sl NGC~315.} NGC~315 is the only source in our sample that shows a resolved X-ray jet.
We extracted from the longer \chandra\ observation the spectrum of the jet from an ellipse with a semi-major axis of about 11.3\hbox{$^{\prime\prime}$}\ and a semi-minor axis of 5.6\hbox{$^{\prime\prime}$}. The base of the ellipse extends down to the $1.1\times99\%$ PSF of the central source. We fit the spectrum with a combination of an absorbed power-law and a thermal {\sl mekal} component and found a good fit with a reduced $\chi^2$ of 0.9 for 42 d.o.f. We find a hydrogen column density upper limit of $2\times10^{21}$~cm$^{-2}$ and a photon index $\Gamma=2.0^{+0.4}_{-0.2}$. The thermal component has a temperature of $0.6^{+0.5}_{-0.9}$~keV. We find a corrected 0.5-10~keV flux for the jet emission of $(1.2\pm0.1)\times10^{-13}$~erg~s$^{-1}$~cm$^{-2}$, which corresponds to a 0.5-10~keV luminosity of $6\times10^{40}$~erg~s$^{-1}$, or 8\%\ of the nuclear flux. The power-law emission contributes 95\%\ of the total emission of the jet. For the diffuse emission, we extracted the spectrum from an annulus with an inner circle delimited by the $1.1\times99\%$ PSF of the central source and an outer radius of 25\hbox{$^{\prime\prime}$}, excluding the jet extraction region. The same model as fit to the jet gave a good fit with a reduced $\chi^2$ of 1.3 for 61 d.o.f. We find an intrinsic absorption on the power-law component of $N_{H}=6^{+12}_{-10}\times 10^{21}$~cm$^{-2}$ and a photon index $\Gamma=1.7\pm0.8$, possibly representing emission from unresolved X-ray binaries. The thermal component has a temperature similar to the one derived for the jet emission, with $kT=0.6\pm0.2$~keV. We found a corrected 0.5-10~keV flux of $2.9^{+0.2}_{-0.3}\times10^{-13}$~erg~s$^{-1}$~cm$^{-2}$, with the power-law contributing only 30\%\ of the total emission. This corresponds to a luminosity of $\sim10^{41}$~erg~s$^{-1}$, which is $\sim14$\%\ of the total core luminosity.
{\sl NGC~2787.} An X-ray source lies south-east of the nucleus of NGC~2787, at a distance of less than 10\hbox{$^{\prime\prime}$}. We fit the spectrum of this source with an absorbed power-law. We used the same redshift and Galactic absorption as for the NGC~2787 nucleus, assuming that this X-ray source is located in NGC~2787 and is not a background quasar. The fit is acceptable with a reduced $\chi^2$ of 1.0 for 19~d.o.f. We found a power-law photon index $\Gamma=1.5^{+0.5}_{-0.4}$, typical of X-ray binaries in nearby galaxies \citep{irwin03apj:lmxbpop,fabbiano06aa:lmxbpop}, and an upper limit on the intrinsic hydrogen column density of $2\times10^{21}$~cm$^{-2}$. We derived a 0.5-10~keV corrected flux of $(7\pm1)\times10^{-14}$~erg~s$^{-1}$~cm$^{-2}$, which, adopting the NGC~2787 distance of 7.48~Mpc, results in a luminosity of $(5\pm1)\times10^{38}$~erg~s$^{-1}$. This luminosity is close to the NGC~2787 core luminosity of $\sim9\times10^{38}$~erg~s$^{-1}$. Such a source could be a luminous low mass X-ray binary (LMXB) similar to some seen in early-type galaxies \citep{fabbiano06aa:lmxbpop}. The rest of the medium in an annulus of outer radius 25\hbox{$^{\prime\prime}$}\ around NGC~2787 comprises six other X-ray sources, much fainter than the one closest to the center. We could not perform spectral analysis on the individual sources separately, so we fit the spectra of the six sources simultaneously with a power-law affected only by Galactic absorption. We found a photon index of $\sim$2 for the six sources and a total corrected 0.5-10~keV flux of $8^{+1}_{-2}\times10^{-15}$~erg~s$^{-1}$~cm$^{-2}$, resulting in a 0.5-10~keV luminosity of $5^{+1}_{-2}\times10^{37}$~erg~s$^{-1}$, or 5\%\ of the nucleus luminosity, when adopting the NGC~2787 distance of 7.48~Mpc. {\sl NGC~3226.} Two X-ray sources lie within a $\sim$12\hbox{$^{\prime\prime}$}\ distance from the nucleus of NGC~3226 (source 1: CXOU~J102334.1+195347, source 2: CXOU~J102326.7+195407).
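The luminosities quoted in this appendix follow from the standard conversion $L = 4\pi d^{2} F$; as a check, the sketch below reproduces the value for the NGC~2787 off-nuclear source above from its flux and the adopted 7.48~Mpc distance.

```python
# Sketch of the flux-to-luminosity conversion used throughout this
# appendix: L = 4*pi*d^2*F, with the distance converted from Mpc to cm.
# The numbers reproduce the NGC 2787 off-nuclear source above.

import math

CM_PER_MPC = 3.086e24  # centimeters per megaparsec

def luminosity(flux_cgs, distance_mpc):
    """Luminosity [erg/s] from a flux [erg/s/cm^2] and a distance [Mpc]."""
    d_cm = distance_mpc * CM_PER_MPC
    return 4.0 * math.pi * d_cm**2 * flux_cgs

l = luminosity(7e-14, 7.48)   # ~5e38 erg/s, as quoted in the text
```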
Both sources were reported in \citet{georgeapj01:ngc3226}. Based on the hardness ratio between the counts in the 0.3-2~keV band and the counts in the 2-10~keV band, the authors estimated the sources to have a flux of a few $10^{-14}$~erg~s$^{-1}$~cm$^{-2}$, and therefore a luminosity between a few times $10^{38}$~erg~s$^{-1}$ and a few times $10^{39}$~erg~s$^{-1}$. We fit the spectrum of both sources with an absorbed power-law, using the Cash statistic owing to the low number of counts. We find that source 1 and source 2 have intrinsic absorption upper limits of $6\times10^{22}$~cm$^{-2}$ and $5\times10^{21}$~cm$^{-2}$, respectively. The photon index we found for source 2, $\Gamma=1.2^{+0.9}_{-0.6}$, is typical, within the error bars, of accreting objects. On the other hand, we found a harder spectrum for source 1, with $\Gamma=0.2\pm1.3$. The corrected 0.5-10~keV fluxes we derive for source 1 and source 2 are $\sim4\times10^{-14}$~erg~s$^{-1}$~cm$^{-2}$ and $\sim2\times10^{-14}$~erg~s$^{-1}$~cm$^{-2}$, respectively. This implies, assuming the distance of NGC~3226 for both sources, corrected 0.5-10~keV luminosities of $3\times10^{39}$~erg~s$^{-1}$ and $2\times10^{39}$~erg~s$^{-1}$ for source~1 and source~2, respectively. Both luminosities are well beyond the luminosity of a typical neutron star LMXB of $\sim3\times10^{38}$~erg~s$^{-1}$, and hence the sources could be black holes of a few solar masses or more, very young supernovae, or microquasars \citep{georgeapj01:ngc3226}. {\sl NGC~3998.} Only one source is detected within a 25\hbox{$^{\prime\prime}$}\ circle around NGC~3998 in the \chandra\ image. We fit the source with an absorbed power-law, using the Cash statistic. We found an upper limit on the intrinsic column density of $10^{21}$~cm$^{-2}$ and a power-law photon index of $1.4^{+0.9}_{-0.5}$.
We derived a corrected 0.5-10~keV flux of $(3\pm1)\times10^{-14}$~erg~s$^{-1}$~cm$^{-2}$, which corresponds to a luminosity of $(7\pm1)\times10^{38}$~erg~s$^{-1}$, adopting the NGC~3998 distance of 14.1~Mpc. This corresponds to only 0.2\%\ of the total core luminosity of NGC~3998 and matches the luminosities of XRBs in nearby galaxies. \end{appendix}
\section{Introduction}\label{sec:1} Emergent properties of artificial or natural complex systems have attracted growing interest recently. Some of them are conveniently modeled with a network, where the constituting ingredients and interactions are represented with vertices and links, respectively. Watts and Strogatz demonstrated that real-world networks display the small-world effect and the clustering property, which cannot be explained with regular or random networks~\cite{Watts98}. Later on, in a study of the WWW network, Albert {\em et al.} found that the degree, the number of attached links, of each vertex follows a power-law distribution~\cite{Albert99}. These works triggered a burst of research on the structure and the organization principles of complex networks~(see Refs.~\cite{Albert02,Dorogovtsev02,Newman03SIAM} for reviews). Many real-world networks, e.g., in biological, social, and technological systems, are found to obey the power-law degree distribution~\cite{Albert02}. A network with the power-law distribution is called a scale-free~(SF) network. One of the possible mechanisms for the power law is successfully explained with the Barab\'asi-Albert~(BA) model~\cite{Barabasi99}. The model assumes that a network is growing and that the rate at which an existing vertex acquires a new link is proportional to its popularity, measured by its degree. The popularity-based growth appears very natural since, when creating a new web site, for example, one would link it preferentially to popular sites having many links. With the BA and related network models, structural and dynamical properties of networks have been explored extensively. On the other hand, there exists another class of networks which have a group structure. Consider, for example, online communities such as the ``Groups'' operated by Yahoo~(\url{http://www.yahoo.com}) and the ``Cafes'' operated by the Korean portal site Daum~(\url{http://www.daum.net}).
These communities consist of individual members and groups, gatherings of members with a common interest, and the growth of the community is driven not only by members but also by groups. A community evolves as individuals register as new members. The newcomers can create new groups with existing members or join existing groups. The online community is a rapidly growing social network~\cite{comment_daum}. The emerging structure would be distinct from that observed in networks without the group structure. In this paper, we propose a growing network model for the community with the group structure. We model the community with a bipartite network consisting of two distinct kinds of vertices representing members and groups, respectively. A link may exist only between a member vertex and a group vertex, representing a membership relation. The bipartite network~\cite{Newman01a} has been considered in the study of the movie actor network~\cite{Watts98} consisting of actors and movies, the scientific collaboration network~\cite{Newman01a,Goldstein04} of scientists and articles, and the company director network~\cite{Newman01a} of directors and boards of directors. Usually those networks are treated as unipartite by projecting out the kind of vertices of less interest~\cite{Newman01b,Newman03}. Some biological and social networks are known to have a modular structure~\cite{Girvan02,Ravasz03}, where vertices in a common module are densely connected while vertices in different modules are sparsely connected. The modular structure is coded implicitly in the connectivity between vertices. Unipartite network models with the modular structure were also studied in Refs.~\cite{Ravasz03,Watts02,Motter03,Skyrms00,Jin01,Gronlund04,DHKim03}, where vertices form modules which in turn form bigger modules hierarchically~\cite{Watts02,Ravasz03,Motter03} or the modular structure emerges dynamically as a result of social interactions~\cite{Skyrms00,Jin01,Gronlund04,DHKim03}.
In Ref.~\cite{DHKim03}, each vertex is assigned a Potts-spin-like variable pointing to its module. These studies on the group structures of networks have mainly focused on groups with a finite number of members. However, there are groups in real-world online communities which keep growing as the community evolves. Reflecting the growing dynamics of the real-world online community, our model takes account of the group structure explicitly with a bipartite network consisting of member and group vertices. Upon growing, both the member and group vertices evolve in time. We study the dynamics of the size of groups and the activity of the members. The size of a group is defined as the number of members in the group, and the activity of a member is the number of groups in which the member participates. When the community grows large enough, the group size distribution follows a power law, unlike in the network models studied previously~\cite{Watts02,DHKim03}. To test our model, we analyze the empirical data from the online communities, the ``Groups'' in \url{http://www.yahoo.com} and the ``Cafes'' in \url{http://www.daum.net}, and show that both communities indeed display power-law group size distributions over wide ranges of group sizes. This paper is organized as follows. In Sec.~\ref{sec:2}, we introduce the growing network model. Depending on the choice of detailed dynamic rules, one may consider a few variants of the model. Characteristics such as the group size distribution, the member activity distribution, and the growth of the number of groups are studied analytically in a mean field theory and numerically in Sec.~\ref{sec:3}. Those characteristics are also calculated for the real-world online communities and compared with the model results. We conclude the paper with a summary in Sec.~\ref{sec:4}. \section{Model}\label{sec:2} We introduce a model for a growing community with the group structure.
The community grows by adding a new member at each time step, who may open a new group or join an existing group~\cite{comment_Simon}. The following notations are adopted: A member entering the community at time step $i$ is denoted by $I_i$. The activity, the number of participating groups, of $I_i$ is denoted by $A_i$. As members enter the community, new groups are created or existing groups expand. The $\alpha$th group is denoted by $G_\alpha$, its creation time by $\tau_\alpha$, and its size by $S_\alpha$. The total numbers of members and groups are denoted by $N$ and $M$, respectively. Initially, at time $t=0$, the community is assumed to be inaugurated by $m_0$ members, denoted by $I_{-(m_0-1)},\ldots,I_0$, belonging to an initial group $G_1$. That is, we have $N(t=0)=m_0$, $M(t=0)=1$, $A_j(t=0)=1$ for $j=-(m_0-1),\cdots,0$, $\tau_1=0$, and $S_1(t=0)=m_0$. At time $t$, a new individual $I_t$ is introduced into the community and becomes a member by repeating the following procedures until its activity reaches $m$: \begin{itemize} \item {\textbf{Selection}} : It selects a partner $I_{j}$ among the existing members $\{I_{k<t}\}$ with a selection probability $P^S_j$. \item {\textbf{Creation or Joining}} : With a creation probability $P^C_j$, it creates a new group $G_{M+1}$ with the partner $I_j$. Otherwise, it selects one of the groups of $I_j$ at random with equal probability and joins it. If $I_t$ is already a member of the selected group, then the procedure is canceled. \end{itemize} A specific feature of the model varies with the choice of the probabilities $P^S$ and $P^C$. Regarding the selection, the simplest choice is a random one among the existing members with equal probability, $P^S_j = 1/(m_0+t-1)$. Note that the selection may be regarded as an invitation of a new member by existing members. Then, it may be natural to assume that active members invite more newcomers. Such a case is modeled with a preferential selection probability $P^S_j = A_j / (\sum_{k<t} A_k)$.
After selecting a partner $I_j$, the newcomer may create a new group or join one of $I_j$'s groups with equal probability. In that case the creation probability is variable, $P^C_j = 1 / (A_j + 1 )$. Alternatively, it may create a new group with a fixed probability $P^C_j = \omega$. Combining the strategies in the two procedures, we consider four possible growth models, denoted by RV, RF, PV, and PF, respectively. Here, R~(P) stands for the random~(preferential) selection, and V~(F) for the group creation with the variable~(fixed) probability. For example, the RF model has the selection probability $P^S_j = 1/(m_0+t-1)$ and the creation probability $P^C_j = \omega$. The growth rules are summarized in Table~\ref{table1}. \begin{figure}[t] \includegraphics*[width=0.7\columnwidth]{fig1.ps} \caption{A network for the RV model with $m_0=m =1$ and $N=10$ with six groups. The symbols {\large\textcircled{{\small $i$}}} and \fbox{$\alpha$} represent a member $I_i$ and a group $G_\alpha$, respectively.} \label{fig1} \end{figure} \begin{figure}[t] \includegraphics*[width=\columnwidth]{fig2.ps} \caption{ (color online) A network for the RV model with $m_0=m =1$ and $N=1000$. A square~(circle) symbol stands for a group~(member).} \label{fig2} \end{figure} The whole structure of the community is conveniently represented with a bipartite network of two kinds of vertices, one for the groups and the other for the members. A link exists only from a member vertex to a group vertex to which it belongs. The member activity and the group size correspond to the degrees of the corresponding vertices. Figure~\ref{fig1} shows a typical network configuration for the RV model with $m_0=m=1$. To help readers understand the growth dynamics, we add the indices for members $I_i$ and groups $G_\alpha$ in the figure. It is easily read off that $I_1$ selects $I_0$ and becomes a member of $G_1$ at $t=1$, that $I_2$ opens a new group $G_2$ with $I_0$ at $t=2$, and so on.
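The growth rules above can be condensed into a short simulation. The sketch below implements the RV variant (random selection, variable creation probability); the function and variable names are ours, chosen for illustration, and are not taken from the paper.

```python
# Minimal simulation sketch of the RV model (random selection, variable
# creation probability 1/(A_j+1)), following the growth rules above.
# Names are illustrative; this is not the authors' code.

import random

def grow_rv(n_steps, m=1, m0=1, seed=0):
    """Grow an RV-model bipartite network; return (groups, member_groups)."""
    rng = random.Random(seed)
    groups = [set(range(m0))]                 # G_1 holds the m0 founders
    member_groups = [{0} for _ in range(m0)]  # groups each member belongs to
    for _ in range(n_steps):
        new = len(member_groups)              # index of the entering member
        member_groups.append(set())
        while len(member_groups[new]) < m:    # repeat until activity reaches m
            j = rng.randrange(new)            # random selection of a partner
            a_j = len(member_groups[j])       # partner's activity A_j
            if rng.random() < 1.0 / (a_j + 1):
                gid = len(groups)             # create a new group with I_j
                groups.append({j, new})
                member_groups[j].add(gid)
                member_groups[new].add(gid)
            else:                             # join one of I_j's groups
                gid = rng.choice(sorted(member_groups[j]))
                if new not in groups[gid]:    # cancel if already a member
                    groups[gid].add(new)
                    member_groups[new].add(gid)
    return groups, member_groups

groups, member_groups = grow_rv(1000)
```

For $m_0=m=1$ and $N=1000$, the number of groups in such runs fluctuates around a value comparable to the $M=452$ groups of the realization shown in Fig.~\ref{fig2}.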
Figure~\ref{fig2} shows a configuration of an RV network with $m=m_0=1$ grown up to $N=1000$ members with $M=452$ groups. It is noteworthy that hub groups with a large number of members emerge. The emerging structure of the network will be studied in the next section. \begin{table} \caption{Model description and mean field results for the group size distribution exponent $\gamma$. Here, $\Theta_{RV}$ and $\Theta_{PV}$ are the group number growth rates given in Eqs.~(\ref{eq:Theta}) and (\ref{eq:Theta_PV}), respectively. The activity distribution follows a power law only for the PF model, with the exponent $\lambda=2+1/\omega$. } \label{table1} \begin{ruledtabular} \begin{tabular}{l|cc} & R $\left(P_j^S = \frac{1}{m_0+t-1}\right)$ & P $\left(P_j^S = \frac{A_j}{\sum_{k<t}A_k}\right)$ \\ \hline V $\left(P_j^C=\frac{1}{A_j+1}\right)$ & $1+\Theta_{RV}^{-1}$ & $1 + \Theta_{PV}^{-1}$ \\ F $\left(P_j^C=\omega\right)$ & $ 2 / (1-\omega)$ & $ 2 / (1-\omega)$ \end{tabular} \end{ruledtabular} \end{table} \section{Network Structure}\label{sec:3} The number of groups $M(t)$, the activity of each member $A_i(t)$, and the size of each group $S_\alpha(t)$ increase as the network grows. With those quantities, we characterize the growth dynamics and the network structure. In the following, we study the dynamics of those quantities averaged over network realizations. For simplicity, we use the same notation for the averaged quantities. The network dynamics implies that they evolve in time as follows: \begin{eqnarray} A_i(t+1) &=& A_i(t) + m P^S_i P^C_i \label{delA} \\ M(t+1) &=& M(t) + m \sum_{j\leq t} P^S_j P^C_j \label{delM} \\ S_\alpha(t+1) &=& S_\alpha(t) + m \sum_{j\leq t} P^S_j \chi_{j\alpha} (1-P^C_j)/A_j \ , \label{delS} \end{eqnarray} where $\chi_{j\alpha}=1$ if $I_j$ belongs to $G_\alpha$ and 0 otherwise. The initial conditions are given by $A_i(t=i)=m$, $M(t=0)=1$, and $S_\alpha(t=\tau_\alpha)=2$ with $\tau_\alpha$ the creation time of $G_\alpha$.
We analyze the equations in a continuum limit and in a mean field scheme, neglecting any correlation among dynamic variables. Firstly we consider the RV model. Using the corresponding $P^C$ and $P^S$ in Table~\ref{table1}, Eqs.~(\ref{delA},\ref{delM},\ref{delS}) become \begin{eqnarray} \frac{dA_i}{dt} &=& \frac{m}{(A_i+1)(m_0+t)} \\ \frac{dM}{dt} &=& \frac{1}{(m_0+t)} \sum_{j\leq t} \frac{m}{(A_j+1)} \\ \frac{dS_\alpha}{dt} &=& \left(\frac{1 }{m_0+t}\right) \left(\frac{S_\alpha}{m_0+t}\right) \sum_{j\leq t} \frac{m}{(A_j+1)} \ , \label{eq:dotS_RV} \end{eqnarray} where we approximate $\chi_{j\alpha}$ in Eq.~(\ref{delS}) with $\frac{S_\alpha}{(m_0+t)}$, the fraction of members of $G_\alpha$ among all members. The solution for $A_i(t)$ is given by \begin{equation}\label{eq:Ai} A_i(t) = -1 + \sqrt{ (m+1)^2 + 2m \ln \left[\frac{m_0+t}{m_0+i}\right]} \ . \end{equation} It shows that an older member with smaller $i$ has a larger activity and that the activity grows very slowly in time. With the solution for $A$, one can easily show that $\sum_{j\leq t}m/(A_j+1)\simeq \Theta_{RV} (m_0 + t )$ for large $t$ with \begin{equation}\label{eq:Theta} \Theta_{RV} = \int_0^1 du \frac{m}{\sqrt{ (m+1)^2 -2m \ln u}} \ . \end{equation} Hence, the average number of groups increases linearly in time as $M(t) \simeq \Theta_{RV} t$ with the group number growth rate $\Theta_{RV}$. The group size increases algebraically as \begin{equation}\label{eq:S_RV} S_\alpha(t) \simeq 2 \left( \frac{m_0+t}{m_0+\tau_\alpha} \right)^{\Theta_{RV}} \ . \end{equation} We have obtained the activity of each member and the size of each group, which allow us to derive the distribution function $P_a(A)$ and $P_s(S)$ for the activity and the group size, respectively. The activity distribution function is given by the relation $P_a(A) = P_{in}(i) |di/dA|$ with the uniform individual distribution, $P_{in}(i)= 1/(m_0+t)$. 
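As an aside, the integral defining $\Theta_{RV}$ in Eq.~(\ref{eq:Theta}) is easily evaluated numerically. The following is an illustrative sketch (not code from this work), using the substitution $u=e^{-x}$ to map the integral onto the half line:

```python
import math

def theta_rv(m, n=20000, x_max=60.0):
    """Trapezoidal evaluation of the growth-rate integral after the
    substitution u = exp(-x), which gives
    Theta_RV = int_0^inf m exp(-x) / sqrt((m+1)^2 + 2 m x) dx."""
    h = x_max / n
    total = 0.0
    for k in range(n + 1):
        x = k * h
        f = m * math.exp(-x) / math.sqrt((m + 1) ** 2 + 2.0 * m * x)
        total += 0.5 * f if k in (0, n) else f
    return h * total

# Group size exponent of the RV model: gamma_RV = 1 + 1/Theta_RV.
gamma_rv = 1.0 + 1.0 / theta_rv(1)
```

For $m=1$ this gives $\Theta_{RV}\approx 0.43$ and hence $\gamma_{RV}\approx 3.3$; $\Theta_{RV}$ grows with $m$ and tends to 1 as $m\to\infty$, so $\gamma_{RV}$ approaches 2.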
The differentiation can be done through Eq.~(\ref{eq:Ai}), which yields the activity distribution $P_{a}(A) = (A+1) \exp\{ - ( (A+1)^2 - (m+1)^2 )/(2 m) \}/m$; its tail decays faster than exponentially, so the activity distribution is bounded. Similarly, the group size distribution is given by $P_s(S) = P_\alpha(\tau) |d\tau/dS|$ with the group creation time distribution $P_\alpha(\tau)$. We assume that the group creation time is distributed uniformly, which is justified by the linear growth of $M \simeq \Theta_{RV}(m_0+t)$. Then the group size distribution follows a power law $P_s(S) \sim S^{-\gamma_{RV}}$ with the exponent \begin{equation} \gamma_{RV} = 1 + \Theta_{RV}^{-1} \ . \end{equation} Note that the distribution exponent is determined by the group number growth rate $\Theta_{RV}$. We now turn to the PF model. With the corresponding selection and creation probabilities, Eqs.~(\ref{delA},\ref{delM},\ref{delS}) are written as \begin{eqnarray} \frac{d{A}_i}{dt} &=& \frac{m\omega A_i}{\sum_{j\leq t} A_j} \label{eq:delA_PF}\\ \frac{dM}{dt} & = & m\omega \label{eq:M_PF}\\ \frac{dS_\alpha}{dt} &=& (1-\omega) S_\alpha \frac{m}{\sum_{j\leq t} A_j} \ . \end{eqnarray} We again used the approximation $\chi_{j\alpha} = S_\alpha/(m_0+t)$ in Eq.~(\ref{delS}). We trivially find that the group number grows in time as $M(t) =m\omega t +1$. For $A_i$ and $S_\alpha$, one needs to evaluate the quantity $\sum_{j\leq t}A_j$. Summing both sides of Eq.~(\ref{eq:delA_PF}) over all $i$, one obtains $\sum_{i\leq t}(dA_i/dt) = m\omega$. Note that $d({\sum_{i\leq t} A_i})/dt = \sum_{i\leq t} (dA_i/dt)+ m = (1+\omega)m$, since each newcomer adds an activity $m$; this yields $(\sum_{j\leq t} A_j) = m(1+\omega)t + m_0$. Hence we obtain the algebraic growth of the activity and the group size as \begin{eqnarray} A_i(t) &=& m \left( \frac{ m(1+\omega)t+m_0 } { m(1+\omega)i+m_0}\right)^{\frac{\omega}{1+\omega}} \label{eq:A_PF} \\ S_\alpha(t) &=& 2 \left( \frac{ m(1+\omega)t + m_0 }{m(1+\omega)\tau_\alpha + m_0 } \right)^\frac{1-\omega}{1+\omega} \label{eq:S_PF} \ .
\end{eqnarray} These results allow us to find the distribution functions $P_a(A)$ and $P_s(S)$. They follow power laws, $P_a(A) \sim A^{-\lambda_{PF}}$ and $P_s(S) \sim S^{-\gamma_{PF}}$, with the exponents \begin{equation} \lambda_{PF} = 2 + 1 / \omega \quad\mbox{and}\quad \gamma_{PF} = 2/(1-\omega) \ . \end{equation} Here we also assumed the uniform distribution of $\tau_\alpha$ in Eq.~(\ref{eq:S_PF}), which is supported by the linear growth $M(t) \simeq m \omega t$. In contrast to the RV model, both distributions follow power laws. The exponents do not depend on the parameter $m$, but only on the group creation probability $\omega$. For the PV and the RF models, the following can be shown easily. The PV model behaves similarly to the RV model: the group number increases linearly in time as $M(t) \simeq \Theta_{PV} t$ with the group number growth rate $\Theta_{PV}$. Unfortunately, we could not obtain a closed-form expression for it. However, if we adopt the assumption that the selection probability $P_i^S$ is proportional to $A_i+1$ instead of $A_i$, it can be evaluated analytically as \begin{equation}\label{eq:Theta_PV} \Theta_{PV} \simeq \left( \sqrt{m^2+6m+1} - (m+1) \right)/2 \ . \end{equation} The approximation becomes better for larger values of $m$. The group size grows algebraically as in Eq.~(\ref{eq:S_RV}) with $\Theta_{PV}$ in place of $\Theta_{RV}$. Therefore, the group size distribution follows a power law with the exponent $\gamma_{PV}$ given in Table~\ref{table1}. The RF model also displays a power-law group size distribution; its exponent $\gamma_{RF}$ is given in Table~\ref{table1}. Note that $\gamma_{RF}$ and $\gamma_{PF}$ coincide. On the other hand, the activity distribution is exponential in both the RF and the PV models. The origin of the power-law group size distribution is easily understood.
In all models considered, the size of a group increases when one of its members invites a new member. The larger a group is, the more chances it has to invite new members. Hence the group size grows preferentially, which is known to lead to a power-law distribution~\cite{Barabasi99}. The activity of a member increases when a newcomer selects it and creates a new group. With the random selection probability, such a process does not occur preferentially for members with higher activity, which results in the exponential-type activity distribution of the RV and RF models. In the PV model, although the selection probability is proportional to the activity, the creation probability is inversely proportional to it; hence the PV model does not have a preferential growth mechanism in the member activity either. Only in the PF model is the activity growth rate proportional to the activity of each member. Therefore, the activity distribution follows a power law only in the PF model. \begin{figure}[t] \includegraphics*[width=\columnwidth]{fig3.eps} \caption{(a) The group size distribution and (b) the activity distribution. The model parameters are $m=4,1$ for the RV and the PV model, respectively. The RF model has $m=4$ and $\omega=0.6$, and the PF model has $m=4$ and $\omega=0.5$. The community has grown up to $N=10^6$ and the distributions are averaged over $10^4$ samples.} \label{fig3} \end{figure} \begin{figure}[t] \includegraphics*[width=\columnwidth]{fig4.eps} \caption{(a) Numerical results for $\gamma$ for the RV and the PV model. The solid~(dashed) curve represents the analytic mean field results for the RV~(PV) model. (b) Numerical results for $\gamma$~(open symbols) of the RF and the PF model, and for $\lambda$~(filled symbols) of the PF model.
The solid~(dashed) curve represents the analytic results for $\gamma$~($\lambda$) in Table~\ref{table1}.} \label{fig4} \end{figure} The analytic mean field results are compared with numerical simulations. In the simulations, we chose $m_0 =m$, and all data were averaged over at least 10,000 samples. We present the numerical data in Fig.~\ref{fig3}. In accordance with the mean field results, the group size distribution follows a power law in all cases. The activity distribution also shows the expected behavior: a power-law distribution for the PF model and exponential-type distributions for the other models. We summarize the distribution exponents in Fig.~\ref{fig4}. The measured values of the distribution exponents are in good agreement with the analytic results. Our network models display behaviors distinct from those of previously studied bipartite networks such as the movie actor network, the scientific collaboration networks, and the director board network. For the first two examples, the growth is driven only by the member vertices, the actors and the scientists, respectively. The activity of members may increase in time; however, the group vertices, the movies and the papers, are dynamically frozen and their sizes are practically bounded. For the last example, both the members~(directors) and the groups~(boards) may evolve in time; however, the group size distribution was shown to be bounded as well~\cite{Newman01a}. Our model is applicable to evolving networks with a group structure in which the size of a group may grow without limit. The online community is a good example of such networks. To test this possibility, we study the empirical data obtained from the Groups and the Cafe, operated by Yahoo at \url{http://www.yahoo.com} and by Daum at \url{http://www.daum.net}, respectively.
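A power-law exponent can be read off from cumulative group size data with a simple least-squares slope on the log-log cumulative distribution. The sketch below is illustrative only (the function name and the cutoff `s_min` are our own choices, not the procedure actually used for the figure): since $P_s(S)\sim S^{-\gamma}$ implies $P_>(S)\sim S^{-(\gamma-1)}$, the exponent is recovered as $\gamma = 1 - \text{slope}$.

```python
import math
import random

def gamma_from_ccdf(sizes, s_min):
    """Least-squares slope of log P_>(S) versus log S for S >= s_min;
    for P_s(S) ~ S^-gamma the cumulative tail is P_>(S) ~ S^-(gamma-1),
    so the estimate is gamma = 1 - slope."""
    pts = sorted(sizes)
    n = len(pts)
    xs, ys = [], []
    for rank, s in enumerate(pts):
        if s >= s_min:
            xs.append(math.log(s))
            ys.append(math.log((n - rank) / n))   # empirical P_>(S)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1.0 - slope
```

As a sanity check, synthetic Pareto samples drawn with a known $\gamma=3$ are recovered to within a few percent.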
As of August 2004, there were 1,516,750~(1,743,130) groups~(cafes) with 76,587,494~(351,565,837) cumulative members in the Yahoo~(Daum) site. The numbers of members of the groups are available via the web sites. Figure~\ref{fig5} presents the cumulative distribution $P_>(S) = \sum_{S' > S} P_s (S')$ of the group size. The distribution has a fat tail~\cite{comment_data}. Although the distribution function in the log-log scale shows a non-negligible curvature over the entire range, it can still be fitted reasonably well by a power law over a range of two decades~(see the straight lines drawn in Fig.~\ref{fig5}). From the fits, we obtain the group size distribution exponents $\gamma_{\text{Yahoo}} \simeq 2.8$ and $\gamma_{\text{Daum}} \simeq 2.15$. The power-law scaling suggests that the online community may be described by our network model. Unfortunately, information on the activity distribution is not publicly available, so we could not compare the activity distribution of the communities with the model results. We would like to add the following remark: A real-world online community evolves in time as new members are introduced and new groups are created. At the same time, it also evolves as members leave and groups are closed. The latter processes are not incorporated into the model: our model is a minimal model of the online community in which the effects of leaving members and closed groups are neglected. \begin{figure} \includegraphics*[width=\columnwidth]{fig5.eps} \caption{Cumulative group size distribution of the online communities of Yahoo and Daum.} \label{fig5} \end{figure} \section{Summary}\label{sec:4} We have introduced a bipartite network model for a growing community with a group structure. The community consists of members and groups, i.e., gatherings of members. These ingredients are represented by distinct kinds of vertices, and a membership relation is represented by a link between a member and a group.
As the community grows, a group increases its size when one of its members introduces a new member. Hence, a larger group preferentially grows faster than a smaller one. With analytic mean field approaches and computer simulations, we have shown that this preferential growth leads to a power-law distribution of the group size. On the other hand, the activity distribution follows a power law only for the PF model, with the preferential selection probability and the fixed creation probability~(see Table~\ref{table1}). We have also studied the empirical data obtained from the online communities, the Groups of Yahoo and the Cafe of Daum. Both communities display a power-law distribution of the group size, which suggests that our network model may be useful in studying their structure. \acknowledgments{This work was supported by Grant No. R14-2002-059-01002-0 from the KOSEF-ABRL program and by Grant No. KRF-2004-015-C00188. JDN and HCJ would like to thank KIAS for the support during the visit.}
\section{\label{sec:intro}Introduction\protect\\} High energy diffractive hadronic scattering at small values of the four-momentum transfer squared $t$ is dominated by an exchange of the Pomeron trajectory, a color-singlet object with the quantum numbers of the vacuum~\cite{barone,donnachie}. The calculation of cross-sections for small-$t$ scattering requires a non-perturbative approach in QCD, and its theoretical treatment is still being developed. The experimental data therefore provide significant constraints for theoretical approaches and models~\cite{buttimore,trueman3}. The coupling of the Pomeron to the nucleon spin is of special interest, since it is predicted to be sensitive to the internal dynamics of the nucleon~\cite{buttimore,trueman3}. Studies of the spin dependence of proton-proton (pp) scattering at small momentum transfers, at the highest energies presently available at RHIC, offer an opportunity to reveal important information on the nature of the Pomeron. There are several theoretical approaches which predict non-zero spin-dependent Pomeron amplitudes for elastic scattering. Examples include an approach in which the Pomeron-proton coupling is modeled via two-pion exchange~\cite{pumplin}, an impact picture model assuming that the spin-flip contribution is sensitive to the impact parameter distribution of matter in a polarized proton~\cite{bourrely}, and a model which treats the Pomeron helicity coupling analogously to that of the isoscalar anomalous magnetic moment of the nucleon~\cite{ryskin}. Still another approach assumes a diquark enhanced picture of the proton~\cite{diquark}, in which a non-zero spin-flip amplitude may arise if the proton wave function is dominated by an asymmetric configuration, such as a quark-diquark one.
Here we present a high precision measurement of the transverse single spin asymmetry $A_N$ in elastic scattering of polarized protons at $\sqrt{s} = 200 \:\rm{GeV}$ in the $t$-range $0.003 \leqslant |t| \leqslant 0.035$ ${\rm (GeV/}c)^2$ by the STAR experiment~\cite{starnim} at RHIC. The single spin asymmetry $A_N$ is defined as the left-right cross-section asymmetry with respect to the transversely polarized proton beam. In this range of $t$, $A_N$ originates predominantly from the interference between electromagnetic (Coulomb) spin-flip and hadronic (nuclear) non-flip amplitudes \cite{buttimore}. However, it was realized that $A_N$ in the Coulomb-nuclear interference (CNI) region is also a sensitive probe of the hadronic spin-flip amplitude \cite{diquark}, which will be discussed in more detail in Section~\ref{sec:hadronic}. A previous measurement of $A_N$ in a similar $t$-range and the same $\sqrt{s}$, but with limited statistics, has been reported by the pp2pp collaboration \cite{pp2ppAN}. Other measurements of $A_N$ performed at small $t$ were obtained at significantly lower energies. They include high precision results from the RHIC polarimeters obtained at $\sqrt{s} = 6.8 - 21.7~\rm{GeV}$ for elastic proton-proton \cite{jetcal,alekseev,bazilevsky} and proton-carbon~\cite{carboncal} scattering, as well as earlier results from the BNL AGS for pC scattering~\cite{e950} at $\sqrt{s} = 6.4~\rm{GeV}$ and from FNAL E704 for pp scattering~\cite{e704} at $\sqrt{s} = 19.4~\rm{GeV}$. The combined analysis of all results, which covers a wide energy range and different targets, will help to disentangle contributions of various exchange mechanisms relevant for elastic scattering in the forward region \cite{trueman}. In particular, such an analysis will allow us to extract information on the spin dependence of the diffractive mechanism which dominates at high energies. 
\section{\label{sec:hadronic}Hadronic spin-flip amplitude in elastic collisions} Elastic scattering of two protons is described by five independent helicity amplitudes: two helicity conserving ($\phi_1$ and $\phi_3$), two double helicity-flip ($\phi_2$ and $\phi_4$), and one single helicity-flip amplitude ($\phi_5$) -- see~\cite{buttimore} for definitions. At very high $\sqrt{s}$, such as available at RHIC, and very small $|t| <$ 0.05~${\rm (GeV/}c)^2$, the proton mass $m$ can be neglected with respect to $\sqrt{s}$ and $t$ can be neglected with respect to $m$, which simplifies kinematical factors in the following formulas. The elastic spin-averaged cross-section is given by: \begin{linenomath} \begin{equation} \frac{d\sigma}{dt} = \frac{2\pi}{s^2} {(|\phi_1|^2 + |\phi_2|^2 + |\phi_3|^2 + |\phi_4|^2 + 4 |\phi_5|^2)} \;, \end{equation} \end{linenomath} while the single spin-flip amplitude $\phi_5$ gives rise to the single spin asymmetry, $A_N$, through interference with the remaining amplitudes: \begin{linenomath} \begin{equation} A_N \frac{d\sigma}{dt} = -\frac{4\pi}{s^2} {\rm Im}\:\{\phi_5^*(\phi_1+\phi_2+\phi_3-\phi_4)\} \;. \end{equation} \end{linenomath} Each of the amplitudes consists of Coulomb and hadronic contributions: $\phi_i=\phi_i^{\rm em} + \phi_i^{\rm had}$, with the electromagnetic one-photon exchange amplitudes $\phi_i^{\rm em}$ described by QED using the measured anomalous magnetic moment of the proton~\cite{buttimore78}. The optical theorem relates the hadronic amplitudes to the total cross-section: \begin{linenomath} \begin{equation} \sigma_{\rm total} = \frac{4\pi}{s}{\rm Im\:}(\phi_1^{\rm had} + \phi_3^{\rm had})|_{t=0}\;, \end{equation} \end{linenomath} which provides an important constraint on the parameterization of these dominant helicity conserving hadronic amplitudes.
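To make the two relations above concrete, the following sketch evaluates the spin-averaged cross-section and the normalized asymmetry for arbitrary complex amplitudes; the amplitude values used in the check are purely illustrative toy numbers, not fitted amplitudes.

```python
import math

def dsigma_dt(phis, s):
    """Spin-averaged cross-section from the five helicity
    amplitudes (complex numbers), as in the first relation above."""
    p1, p2, p3, p4, p5 = phis
    return (2.0 * math.pi / s**2) * (abs(p1)**2 + abs(p2)**2 + abs(p3)**2
                                     + abs(p4)**2 + 4.0 * abs(p5)**2)

def a_n(phis, s):
    """A_N from the interference term, normalized by dsigma/dt."""
    p1, p2, p3, p4, p5 = phis
    interference = (p5.conjugate() * (p1 + p2 + p3 - p4)).imag
    return -(4.0 * math.pi / s**2) * interference / dsigma_dt(phis, s)
```

In particular, $A_N$ vanishes identically when $\phi_5=0$, and only the part of $\phi_5$ out of phase with $\phi_1+\phi_2+\phi_3-\phi_4$ contributes.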
The contribution of the two double spin-flip hadronic amplitudes $\phi_2^{\rm had}$ and $\phi_4^{\rm had}$ to the asymmetry $A_N$ is small, as indicated by both experimental results~\cite{doublespin,spin2010} and theoretical predictions~\cite{trueman2}. Thus, the main contribution to $A_N$ is given by: \begin{linenomath} \begin{equation} A_N \frac{d\sigma}{dt} = -\frac{8\pi}{s^2} {\rm Im\:}(\phi_5^{{\rm em}*}\phi_+^{\rm had} + \phi_5^{{\rm had}*}\phi_+^{\rm em}) \;, \end{equation} \end{linenomath} where $\phi_+=(\phi_1+\phi_3)/2$. The parametrization of $\phi_5^{\rm had}$ is usually done in terms of \mbox{$\phi_+^{\rm had}\:$: $\phi_5^{\rm had}(s,t) = (\sqrt{-t}/m){\:\cdot\:}r_5(s){\:\cdot\:}{\rm Im\:}\phi_+^{\rm had}(s,t)$,} where $m$ is the proton mass. Thus $r_5$ is a measure of the ratio of the hadronic single spin-flip amplitude ($\phi_5$) to the hadronic non-flip amplitudes ($\phi_1$ and $\phi_3$). Using this parametrization, the following representation of $A_N$ can be derived~\cite{buttimore}: \begin{widetext} \begin{linenomath} \begin{equation} A_N = \frac {\sqrt{-t}}{m} \: \frac {[\kappa (1 - \rho \:\delta) + 2 (\delta \:{\rm{Re}} \:r_5 - {\rm{Im}} \:r_5)] \frac{t_c}{t} - 2 ({\rm{Re}} \:r_5 - \rho \:{\rm{Im}} \:r_5)}{ (\frac{t_c}{t})^2 - 2 (\rho + \delta)\frac{t_c}{t} + (1 + \rho ^2)}\,, \label{cnicurve} \end{equation} \end{linenomath} \end{widetext} where $t_c = -8 \pi \alpha /\sigma _{\rm total}$, $\kappa$ is the anomalous magnetic moment of the proton, $\rho=\rm{Re\:}\phi_+/\rm{Im\:}\phi_+$ is the ratio of the real to imaginary parts of the non-flip elastic amplitude, and $\delta $ is the relative phase between the Coulomb and hadronic amplitudes~\cite{buttimore}: \begin{linenomath} \begin{equation} \delta = \alpha\: \ln\:\frac{2}{|t|(B+8/\Lambda ^2)} - \alpha\:\gamma \;, \label{delta} \end{equation} \end{linenomath} where $B$ is the slope of the forward peak in elastic scattering, $\alpha = 1/137$ is the fine structure constant, $\gamma = 0.5772$
is Euler's constant, and $\Lambda ^2 = 0.71 \:{\rm (GeV/}c)^2$. \section{Detection of elastic proton-proton collisions at RHIC} The protons, which scatter elastically at small angles ($\lesssim$ 2 mrad), follow the optics of the RHIC magnets and are detected by a system of detectors placed close to the beam inside movable vessels known as ``Roman Pots" (RPs)~\cite{pp2ppNIM}. The Roman Pot stations are located on either side of the STAR interaction point (IP) at 55.5 m and 58.5 m, with horizontal and vertical insertions of the detectors, respectively. The coordinate system of the experiment is described in Fig.~\ref{fig:setup}. There are eight Roman Pots, four on each side of the IP: four approach the beam horizontally, WHI and WHO (EHI and EHO), and four approach the beam vertically, WVU and WVD (EVU and EVD), as shown in Fig.~\ref{fig:setup}. The location of the RPs was optimized so that, combined with proper accelerator magnet settings, it provides so-called ``parallel-to-point focusing", i.e. the $(x,y)$ position of the scattered protons at the RPs depends almost exclusively on their scattering angles and is nearly insensitive to the transverse position of the interaction point. As shown in Fig.~\ref{fig:setup}, there are five major magnets between the RPs and the collision point: two dipole magnets, DX and D0, which bend the beams into collision, and the focusing triplet of quadrupoles Q1-Q3. The dipole magnets sweep away particles whose momenta are not close to the beam momentum. The detector package inside each RP consists of four 0.4 mm thick silicon micro-strip detector planes with a strip pitch of about 100~$\mu$m, two of them measuring the horizontal ($x$) and two the vertical ($y$) position of a scattered proton. The sensitive area of the detectors is 79 $\times$ 48 mm$^2$. Scintillation counters covering this area are used to form a trigger for elastic events. More details on the experiment and the technique can be found in Refs.~\cite{pp2ppplb04, pp2ppNIM}.
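For orientation, the CNI expression of Eqs.~(\ref{cnicurve}) and (\ref{delta}) can be evaluated numerically. The sketch below uses input values ($\sigma_{\rm total}$, $\rho$, $B$) chosen by us purely for illustration, with $r_5=0$ (no hadronic spin flip); it is not the analysis code of the experiment.

```python
import math

ALPHA = 1.0 / 137.0    # fine structure constant
KAPPA = 1.793          # proton anomalous magnetic moment
EULER = 0.5772         # Euler's constant
LAMBDA2 = 0.71         # (GeV/c)^2

def a_n_cni(t, sigma_tot, rho, B, re_r5=0.0, im_r5=0.0, m=0.938):
    """Eq. (cnicurve) with the phase of Eq. (delta); t < 0 in (GeV/c)^2,
    sigma_tot in GeV^-2 (1 mb ~ 2.568 GeV^-2), B in (GeV/c)^-2."""
    tc = -8.0 * math.pi * ALPHA / sigma_tot
    delta = ALPHA * math.log(2.0 / (abs(t) * (B + 8.0 / LAMBDA2))) - ALPHA * EULER
    r = tc / t                      # t_c / t is positive, since t, t_c < 0
    num = ((KAPPA * (1.0 - rho * delta) + 2.0 * (delta * re_r5 - im_r5)) * r
           - 2.0 * (re_r5 - rho * im_r5))
    den = r * r - 2.0 * (rho + delta) * r + 1.0 + rho * rho
    return (math.sqrt(-t) / m) * num / den
```

With illustrative values such as $\sigma_{\rm total}\approx 51$~mb, $\rho\approx 0.13$, and $B\approx 16~({\rm GeV}/c)^{-2}$, the curve rises to a few percent near $|t|\approx\sqrt{3}\,|t_c|$ and falls off toward larger $|t|$, reproducing the familiar CNI shape.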
\begin{figure} \includegraphics[width=130mm]{PotStarPLB2012.eps} \caption{(color online) The layout of the experiment. The Roman Pot stations are located on both sides of the STAR IP. The positive $z$ direction is defined along the outgoing ``Blue" beam (the West direction). Positive $y$ is pointing up and positive $x$ is pointing away from the center of the RHIC ring. The detectors are placed on the outgoing beams. The figure is not to scale.} \label{fig:setup} \end{figure} The preliminary alignment was done by surveying the detector packages during their assembly and after installation inside the Roman Pots with respect to the beam line of the accelerator. The displacement of the RPs during data taking was measured by linear variable differential transformers (LVDTs). The final alignment was done using elastic events in the overlapping regions of horizontal and vertical RPs, which allowed a relative position measurement of the RPs on each side of the IP with a precision better than 0.1~mm. Collinearity of the elastic events and Monte-Carlo simulations of the acceptance boundaries due to limiting apertures in the quadrupole magnets were used to further constrain the geometry and to estimate systematic errors. The data were taken during four dedicated RHIC stores between June 30 and July 4, 2009 with special beam optics of $\beta^*$ = 22~m in order to minimize the angular divergence at the IP~\cite{angelika}. The average luminosity over the four stores during which the data were collected was ${\mathcal L}$ $\approx$ 2$\cdot$10$^{29}$cm$^{-2}$s$^{-1}$. The closest approach of the first strip to the center of the beam was about 10~mm or about 12 $\sigma$ of the transverse beam size. A total of 33 million elastic triggers were recorded. \section{Data selection and reconstruction of elastic scattering events} The selection of elastic events in this experiment is based on the collinearity of the scattered proton tracks. 
A single track was required on each side of the IP. Noisy and dead strips were rejected; there were a total of five such strips out of $\approx$ 14,000 in the active detector area. Track reconstruction started with the search for hits in the silicon detectors. First, adjacent strips with collected charge values above $5\:\sigma$ from their pedestal averages were found and combined into clusters. A threshold depending on the cluster width was applied to the total charge of the cluster, thus improving the signal-to-noise ratio for clusters of 3 to 5 strips, while wider clusters were rejected. The cluster position was determined as a charge-weighted average of strip coordinates. For each RP a search was performed for matching clusters in the pairs of planes measuring the same coordinate. Two clusters in such planes were considered matched if the distance between them was smaller than 200~$\mu$m, approximately the width of two strips. The matching pair with the smallest matching distance was chosen and its cluster coordinates were averaged. If only one cluster was found in the pair of planes, its coordinate alone was used for the analysis. If more than one unmatched cluster, or no match at all, was found, no output from this RP was selected. An $(x,y)$ pair found in an RP was considered a track. About 1/3 of all reconstructed tracks were found in the region of overlapping acceptance between the horizontal and the vertical RPs; for those tracks the average of the kinematic variables was used. To minimize the background contribution from beam halo particles, products of beam-gas interactions, and detector noise, fiducial areas were selected to exclude the edges of the silicon detectors near the beam and the boundaries of the magnet apertures.
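The clustering and plane-pair matching steps described above can be sketched as follows. This is an illustrative reimplementation with our own function names and units (mm), not the experiment's reconstruction code.

```python
def cluster_position(strips):
    """Charge-weighted average of strip coordinates; `strips` is a list
    of (coordinate_mm, charge) pairs forming one cluster."""
    total_charge = sum(q for _, q in strips)
    return sum(x * q for x, q in strips) / total_charge

def match_planes(clusters_a, clusters_b, max_dist=0.200):
    """Match cluster positions (mm) from the two planes measuring the
    same coordinate: the closest pair within 200 um wins and its
    coordinates are averaged; a lone cluster is used as-is; otherwise
    the Roman Pot gives no output (None)."""
    if clusters_a and clusters_b:
        dist, a, b = min((abs(a - b), a, b)
                         for a in clusters_a for b in clusters_b)
        return 0.5 * (a + b) if dist < max_dist else None
    lone = clusters_a or clusters_b
    return lone[0] if len(lone) == 1 else None
```

An `(x, y)` pair of matched coordinates from the two plane pairs of an RP then constitutes a track candidate.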
Planar angles $\theta_x^{\rm RP},\theta_y^{\rm RP}$ and coordinates $x^{\rm RP}, y^{\rm RP}$ of protons at a given RP relate to the angles $\theta_x, \theta_y$ and coordinates $x, y$ at the IP by the transport matrix ${\mathbf M}$: \begin{linenomath} \begin{equation} \begin{bmatrix} x^{\rm RP} \\ \theta_x^{\rm RP} \\ y^{\rm RP} \\ \theta_y^{\rm RP} \end{bmatrix} = {\mathbf M} \begin{bmatrix} x \\ \theta_x\\ y \\ \theta_y \end{bmatrix} = \begin{bmatrix} a_{11} & L_x^{\rm eff} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & L_y^{\rm eff} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} \begin{bmatrix} x \\ \theta_x\\ y \\ \theta_y \end{bmatrix} . \label{eq:tm} \end{equation} \end{linenomath} For example, the transport matrix ${\mathbf M}$ for the horizontal Roman Pot in the West side of the IP (WHI,WHO) is: \begin{linenomath} \begin{displaymath} {\mathbf M} = \left[ \begin{array}{D{.}{.}{9} D{.}{.}{9} D{.}{.}{9} D{.}{.}{9} } -0.0913 & 25.2566~{\rm m} & -0.0034 & 0.0765~{\rm m} \\ -0.0396~{\rm m^{-1}} & 0.0137 & -0.0001~{\rm m^{-1}} & 0.0057 \\ -0.0033 & -0.1001~{\rm m} & 0.1044 & 24.7598~{\rm m} \\ 0.0002~{\rm m^{-1}} & 0.0083 & -0.0431~{\rm m^{-1}} & -0.6332 \end{array} \right ] . \end{displaymath} \end{linenomath} For the case of parallel-to-point focusing, and in the absence of $x$-$y$ mixing, the transport matrix is simplified and the so-called ``effective" length, $L^{\rm eff}$, terms dominate. The $L^{\rm eff}$ values are in the range of 22-26~m for this experiment. The angles of the scattered protons at the IP can then be reconstructed independently for the East~(E) and West~(W) arms with respect to the IP: \begin{linenomath} \begin{eqnarray} \theta_x &=& x^{\rm RP} / L_x^{\rm eff}, \\ \theta_y &=& y^{\rm RP} / L_y^{\rm eff}. 
\label{eq:stm} \end{eqnarray} \end{linenomath} Because non-dominant terms in the transport matrix are small and result in a negligible correction of about 4 $\mu$rad to the reconstruction of the scattering angles, we used a 2$\times$2 matrix ($L_x^{\rm eff}$, $a_{14}$ ; $a_{32}$, $L_y^{\rm eff}$), which was obtained by neglecting those small terms of the transport matrix. Once the planar angles at IP were reconstructed, a collinearity requirement was imposed using $\chi^2$ defined as: \begin{linenomath} \begin{equation} \chi^2 = {[(\delta\theta_x - {\delta\bar{\theta}_x})/\sigma_{\theta_x}]^2 + [(\delta\theta_y-{\delta\bar{\theta}_y})/\sigma_{\theta_y}]^2}\,, \label{eq:chi2} \end{equation} \end{linenomath} where $\delta\theta_{x,y} = [\theta_{x,y}^{W} - \theta_{x,y}^{E}]$ and the mean values ${\delta\bar{\theta}_{x,y}}$ and widths $\sigma_{\theta_{x,y}}$ are taken from the fits to data performed for each data sample. An example is shown in Fig.~\ref{fig:coll}. The small non-zero mean values ($\approx$ 10 $\mu$rad) are consistent with the uncertainties of angle determinations discussed in the next section. Fig.~\ref{fig:coll} shows a typical distribution of $\delta\theta_y$ vs. $\delta\theta_x$ and its projections, fitted with a Gaussian and a linear background. Based on these fits, the non-collinear background contribution is estimated to be 0.3-0.5\%. The requirement of $\chi^2<9$ left about 21 million events for the asymmetry calculations. \begin{figure} \includegraphics[width=130mm]{collinearity.eps} \caption{(color online) Distribution of $\delta\theta_y$ vs. $\delta\theta_x$ for both detector pairs in horizontal RPs (a) and their projections in $\delta\theta_y$ (b) and $\delta\theta_x$ (c). The overlaid curves represent the fits with a Gaussian signal and a linear background. 
The $\sigma$ values of the distributions are $\approx$ 58 $\mu$rad, consistent with the beam angular divergence, and the background-to-signal ratio under the Gaussian distributions within ${\pm 3~\sigma}$ is $\approx$ 0.4\%.} \label{fig:coll} \end{figure} The polar scattering angle $\theta$ and azimuthal angle $\varphi$ (measured counterclockwise from the positive x-axis) for an event were then calculated as an average of those obtained from the East and West arms, and the four-momentum transfer squared, $t$, was assigned to the event using $t = -2p^2(1 - {\rm cos}\theta) \approx -p^2\theta^2$ with $p=100.2$~GeV/$c$. \section{Single Spin Asymmetries} \label{sec:ssa} The azimuthal angle dependence of the cross-section for the elastic collision of vertically polarized protons is given~\cite{leader} by: \begin{linenomath} \begin{eqnarray} \frac{d^2\sigma}{dt d\varphi}&=&\frac{1}{2\pi}\frac{d\sigma}{dt}\cdot[1+({\mathcal P}_B+{\mathcal P}_Y)A_N(t){\rm cos}\varphi \nonumber\\ &+&{\mathcal P}_B{\mathcal P}_Y(A_{NN}(t){\rm cos}^2\varphi + A_{SS}(t){\rm sin}^2\varphi)]\;, \end{eqnarray} \end{linenomath} where higher order terms are ignored, $d\sigma/dt$ is the spin-averaged cross-section, and ${\mathcal P}_B$ and ${\mathcal P}_Y$ are the beam polarizations of the two colliding beams (called Blue and Yellow). The double spin asymmetry $A_{NN}$ is defined as the cross-section asymmetry for scattering of protons with spin orientations parallel and antiparallel to the unit vector $\hat{n}$, normal to the scattering plane. The asymmetry $A_{SS}$ is defined analogously for both beams fully polarized along the unit vector $\hat{s}$ in the scattering plane and normal to the beam. For each of the four RHIC stores, the event sample satisfying the requirements for elastic scattering was divided into five $t$-bins. Within each $t$-bin, the $\varphi$ distributions were subdivided into bins of 10$^\circ$.
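The kinematic reconstruction described above can be sketched numerically. The effective lengths below are illustrative values of the order of those quoted in the text, and the two-arm averaging is a simplification of the actual treatment; this is not the experiment's code.

```python
import math

P_BEAM = 100.2  # beam momentum in GeV/c

def reconstruct(x_w, y_w, x_e, y_e, lx=25.26, ly=24.76):
    """Planar angles from theta = coordinate / L_eff for each arm
    (coordinates and effective lengths in meters), averaged over the
    West and East arms, then t = -p^2 theta^2 and
    phi = atan2(theta_y, theta_x)."""
    theta_x = 0.5 * (x_w / lx + x_e / lx)
    theta_y = 0.5 * (y_w / ly + y_e / ly)
    theta2 = theta_x**2 + theta_y**2
    t = -P_BEAM**2 * theta2
    phi = math.atan2(theta_y, theta_x)
    return t, phi
```

For instance, a proton registered in both arms at $x = L_x^{\rm eff}\cdot 10^{-3}$ corresponds to $\theta_x = 1$~mrad and hence $|t| \approx 0.01~({\rm GeV}/c)^2$.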
The raw asymmetry, $\varepsilon_N(\varphi)$, was calculated using geometric means~\cite{squareroot} (the so-called ``square root formula'') for each pair of $\varphi$ and $\pi-\varphi$ bins in the range $-\pi/2 < \varphi < \pi/2$: \begin{widetext} \begin{linenomath} \begin{equation} \varepsilon_N(\varphi) = \frac{({\mathcal P}_B+{\mathcal P}_Y)A_N \cos(\varphi)}{1 + \nu(\varphi)} = \frac{\sqrt{N^{\uparrow\uparrow}(\varphi)N^{\downarrow\downarrow}(\pi-\varphi)} - \sqrt{N^{\downarrow\downarrow}(\varphi)N^{\uparrow\uparrow}(\pi-\varphi)}}{\sqrt{N^{\uparrow\uparrow}(\varphi)N^{\downarrow\downarrow}(\pi-\varphi)} + \sqrt{N^{\downarrow\downarrow}(\varphi)N^{\uparrow\uparrow}(\pi-\varphi)}} \,, \label{eq:squareroot} \end{equation} \end{linenomath} \end{widetext} where the ``$\uparrow$'' and ``$\downarrow$'' indicate the spin direction of the transversely polarized colliding proton beam bunches, $N$ is the number of events detected in the respective spin states and $\varphi$ bins, and $\nu(\varphi) = {\mathcal P}_B{\mathcal P}_Y(A_{NN}\cos^2(\varphi)+A_{SS}\sin^2(\varphi))$. \begin{figure} \includegraphics[width=92mm]{Fig3_6.eps} \caption{(color online) The asymmetry $\varepsilon_N(\varphi)/({\mathcal P}_B+{\mathcal P}_Y)$ for the five $t$-intervals as given in Table~\ref{tab:results} (a)-(e). The asymmetry $\varepsilon'(\varphi)$ for the whole measured $t$-range (f). The red curves represent the best fits to Eq.~(\ref{eq:squareroot}) (a)-(e) and Eq.~(\ref{eq:forbidden}) (f).} \label{fig:rawasymm} \end{figure} In the square root formula~(\ref{eq:squareroot}), the relative luminosities of the different spin direction combinations cancel out. In addition, the detector acceptance and efficiency also cancel out, provided they do not depend on the bunch polarization.
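The cancellation of relative luminosities and acceptances in the square-root formula can be made explicit with a toy rate model (a sketch only: $P$ stands for ${\mathcal P}_B+{\mathcal P}_Y$, double-spin terms are dropped, and all numerical inputs are invented for illustration):

```python
import math

def sqrt_asym(n1, n2, n3, n4):
    # "square root formula": n1 = N^{++}(phi), n2 = N^{--}(pi - phi),
    #                        n3 = N^{--}(phi), n4 = N^{++}(pi - phi)
    a, b = math.sqrt(n1 * n2), math.sqrt(n3 * n4)
    return (a - b) / (a + b)

# toy rates: counts = luminosity * acceptance(phi) * (1 +/- P * A_N * cos(phi))
P, A_N, phi = 1.2, 0.03, 0.5
L_uu, L_dd = 2.0, 3.5        # deliberately different spin-state luminosities
acc_phi, acc_pip = 0.8, 0.6  # deliberately different acceptances at phi, pi - phi

n_uu_phi = L_uu * acc_phi * (1 + P * A_N * math.cos(phi))
n_dd_phi = L_dd * acc_phi * (1 - P * A_N * math.cos(phi))
n_uu_pip = L_uu * acc_pip * (1 + P * A_N * math.cos(math.pi - phi))
n_dd_pip = L_dd * acc_pip * (1 - P * A_N * math.cos(math.pi - phi))

eps = sqrt_asym(n_uu_phi, n_dd_pip, n_dd_phi, n_uu_pip)
print(eps, P * A_N * math.cos(phi))  # equal: luminosities and acceptances drop out
```

Both square roots carry the common factor $\sqrt{L_{\uparrow\uparrow}L_{\downarrow\downarrow}\,{\rm acc}(\varphi)\,{\rm acc}(\pi-\varphi)}$, which cancels in the ratio, leaving exactly $P\,A_N\cos\varphi$.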
Results of Ref.~\cite{doublespin} and preliminary results of this experiment~\cite{spin2010} show that both $A_{NN}$ and $A_{SS}$ are very small, $\approx 0.005$, and compatible with zero, constraining $\nu(\varphi)$ to $\approx 0.002$, which can be safely neglected. For each RHIC store, the obtained raw asymmetries were divided by the sum of the polarizations of both beams for that particular store and then averaged over the stores. The resulting asymmetries for each $t$ bin are shown in Fig.~\ref{fig:rawasymm}(a-e) as a function of $\varphi$. The solid lines represent the best fits to Eq.~(\ref{eq:squareroot}). Along with the raw asymmetry, $\varepsilon_N$, which is proportional to the sum of the beam polarizations $({\mathcal P}_B+{\mathcal P}_Y)$, other asymmetries can be obtained using different combinations of bunch spin directions. For instance, the asymmetry proportional to the beam polarization difference $({\mathcal P}_B-{\mathcal P}_Y)$ is defined as follows: \begin{widetext} \begin{linenomath} \begin{equation} \varepsilon'(\varphi) = \frac{({\mathcal P}_B-{\mathcal P}_Y)A_N \cos(\varphi)}{1 - \nu(\varphi)} = \frac{\sqrt{N^{\uparrow\downarrow}(\varphi)N^{\downarrow\uparrow}(\pi-\varphi)} - \sqrt{N^{\downarrow\uparrow}(\varphi)N^{\uparrow\downarrow}(\pi-\varphi)}}{\sqrt{N^{\uparrow\downarrow}(\varphi)N^{\downarrow\uparrow}(\pi-\varphi)} + \sqrt{N^{\downarrow\uparrow}(\varphi)N^{\uparrow\downarrow}(\pi-\varphi)}} \; . \label{eq:forbidden} \end{equation} \end{linenomath} \end{widetext} Provided that the beam polarizations $({\mathcal P}_B$ and ${\mathcal P}_Y)$ have the same value, which is approximately the case in this experiment, one would expect $\varepsilon'=0$. The derived values of $\varepsilon'$ may be used to estimate false asymmetries, which remain after applying the ``square root'' method. The distribution of the asymmetry $\varepsilon'$, obtained for the whole $t$-range, together with its fit, is shown in Fig.~\ref{fig:rawasymm}(f).
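A sketch in the same toy rate model (illustrative luminosities and acceptances; double-spin terms dropped) shows why $\varepsilon'$ vanishes for equal polarizations and is otherwise proportional to $({\mathcal P}_B-{\mathcal P}_Y)A_N\cos\varphi$:

```python
import math

def sqrt_asym(n1, n2, n3, n4):
    a, b = math.sqrt(n1 * n2), math.sqrt(n3 * n4)
    return (a - b) / (a + b)

# rates for the mixed-spin bunch combinations, dropping double-spin terms:
# N^{ud}(phi) ~ L_ud * acc(phi) * (1 + (P_B - P_Y) * A_N * cos(phi)), etc.
def eps_prime(P_B, P_Y, A_N, phi, L_ud=1.7, L_du=2.4, acc=(0.8, 0.6)):
    d = (P_B - P_Y) * A_N
    n_ud_phi = L_ud * acc[0] * (1 + d * math.cos(phi))
    n_du_phi = L_du * acc[0] * (1 - d * math.cos(phi))
    n_ud_pip = L_ud * acc[1] * (1 + d * math.cos(math.pi - phi))
    n_du_pip = L_du * acc[1] * (1 - d * math.cos(math.pi - phi))
    return sqrt_asym(n_ud_phi, n_du_pip, n_du_phi, n_ud_pip)

print(eps_prime(0.61, 0.61, 0.03, 0.5))  # equal polarizations: zero
print(eps_prime(0.62, 0.60, 0.03, 0.5))  # small residual (P_B - P_Y) * A_N * cos(phi)
```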
During data taking, 64 bunches (16$\uparrow\uparrow$, 16$\downarrow\downarrow$, 16$\uparrow\downarrow$, 16$\downarrow\uparrow$) of the 90 proton beam bunches collided with usable spin patterns and were used for the $\varepsilon_N$ and $\varepsilon'$ calculations. The major systematic uncertainties of the experiment are due to the error of the beam polarization measurement, the reconstruction of $t$, and a small background contribution (see Fig.~\ref{fig:coll}). The two main contributions to the uncertainty in the $t$ reconstruction are the uncertainties of the $L^{\rm eff}$ values and of the position of the beam center at the RP location. The former arises mostly from the uncertainty in the magnetic field strength of the Q1-Q3 focusing quadrupoles, which in turn is dominated by uncertainties in the magnet current and field measurements. The correction to the strength was derived using the correlation between the angle and position in the RPs for tracks in the regions where the detector acceptances overlap. An overall correction of 0.5\% to the strength of the focusing quadrupoles was applied. The residual systematic error of the field calculation was estimated to be $\approx$ 0.5\%, leading to $\approx$ 1\% uncertainty in $L^{\rm eff}$ and $\approx$ 1.4\% uncertainty in $t$~\cite{phil}. The position of the beam center is the reference point for the scattering angle calculations and effectively absorbs a large set of geometrical unknowns such as beam crossing angles and transverse beam positions at the IP, beam shifts from the beam pipe center at the RP location, as well as survey errors. To accommodate all these uncertainties, corrections to the survey were introduced based on the comparison of the simulated to the measured $(x,y)$ distributions at the horizontal RPs on both sides of the IP. Elastically scattered protons were transported in simulation through the RHIC magnets and apertures, and the detector acceptance was calculated.
The acceptance boundaries from that simulation and the data were compared. No correction was found for the West side, while for the East side a correction of ($\Delta x, \Delta y$) = (2.5, 1.5) mm was obtained. The uncertainty of that correction was estimated to be 400 $\mu$m. After applying that alignment correction, the collinearity, defined as the average angle difference ${\delta\bar{\theta}_{x,y}}$ (see Eq.~(\ref{eq:chi2})), was reduced from $\approx$ 55 $\mu$rad to $\approx$ 10 $\mu$rad. The remaining alignment uncertainty leads to a value of $\delta t/t$ = 0.0020~[GeV/$c$]$/\sqrt{t}$ and was added in quadrature to the uncertainty due to $L^{\rm eff}$. The number of background events in the data is less than 1\% in all $t$-bins (e.g. see Fig.~\ref{fig:coll}). Assuming the background is beam polarization independent, the asymmetry will be diluted by the same amount, $\delta A_N/A_N<0.01$. This value results in a negligible contribution to the total error, when statistical and systematic errors are added in quadrature. The polarization values of the proton beams were determined by the RHIC CNI polarimeter group. Polarizations and their uncertainties (statistical and systematic combined) for the four stores were: 0.623$\pm$0.052, 0.548$\pm$0.051, 0.620$\pm$0.053, 0.619$\pm$0.054 (Blue beam), 0.621$\pm$0.071, 0.590$\pm$0.048, 0.644$\pm$0.051, 0.618$\pm$0.048 (Yellow beam)~\cite{cnigroup}. The overall luminosity-weighted average polarization values for all four stores are $\langle {\mathcal P}_B+{\mathcal P}_Y \rangle$~=~1.224$\pm$0.038 and \mbox{$\langle {\mathcal P}_B-{\mathcal P}_Y \rangle$~=~$-$0.016$\pm$0.038. } Taking into account the overall uncertainty for normalization in polarization measurements, the total polarization error $\delta\langle{{\mathcal P}_B+{\mathcal P}_Y}\rangle/$$\langle {\mathcal P}_B+{\mathcal P}_Y \rangle$ is 5.4\%. 
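As a rough cross-check of the quoted averages (a sketch only: the values in the text are luminosity-weighted, and the store luminosities are not quoted here, so plain unweighted means are used), the per-store polarizations come close to the quoted combinations:

```python
# per-store polarizations quoted above
blue = [0.623, 0.548, 0.620, 0.619]
yellow = [0.621, 0.590, 0.644, 0.618]

mean_b = sum(blue) / len(blue)
mean_y = sum(yellow) / len(yellow)
print(mean_b + mean_y)  # ~ 1.221, close to the quoted <P_B + P_Y> = 1.224
print(mean_b - mean_y)  # ~ -0.016, matching the quoted <P_B - P_Y>
```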
If the false asymmetry $\varepsilon_F$ were proportional to the beam polarization values, it would be indistinguishable from $A_N$. By contrast, if it does not depend on the polarization, it contributes equally to both $\varepsilon_N$ and $\varepsilon'$: \begin{linenomath} \begin{eqnarray} \varepsilon_N &=& A_N({\mathcal P}_B+{\mathcal P}_Y)+\varepsilon_F \;, \\ \varepsilon' &=& A_N({\mathcal P}_B-{\mathcal P}_Y)+\varepsilon_F \;, \end{eqnarray} \end{linenomath} and a direct estimate of the false asymmetry can be obtained: \begin{linenomath} \begin{equation} \varepsilon_F=\frac{\varepsilon'({\mathcal P}_B+{\mathcal P}_Y)-\varepsilon_N({\mathcal P}_B-{\mathcal P}_Y)} {2{\mathcal P}_Y} \approx\varepsilon'-\varepsilon_N\frac{{\mathcal P}_B-{\mathcal P}_Y}{{\mathcal P}_B+{\mathcal P}_Y} \,. \end{equation} \end{linenomath} The values of the raw asymmetries, measured in the whole $t$-range, are $\varepsilon_N$ = 0.0276$\pm$0.0004 and $\varepsilon'$ = $-$0.0007$\pm$0.0004. This gives a false asymmetry of $\varepsilon_F$ = $-$0.0004$\pm$0.0010. Thus the false asymmetry is consistent with zero and very small compared to the measured raw asymmetry $\varepsilon_N$. The results of the $A_N$ measurements in the five $t$-bins are summarized in Table~\ref{tab:results}, together with the associated uncertainties and $-t$ range boundaries. Two independent analyses of the data, performed with slightly different selection criteria by two different groups, gave consistent results. We have also cross-checked the extraction of $A_N$ using the beam polarizations of the two beams separately; the resulting $A_N$ were found to be compatible with those in Table~\ref{tab:results} within their statistical uncertainties. \begin{table}[htbp] \centering \begin{tabular}{@{} lccccc @{}} \toprule $-t$ [(GeV/$c$)$^2$] & 0.003--0.005 & 0.005--0.01 & 0.01--0.015 & 0.015--0.02 & 0.02--0.035 \\ \midrule No. of Events & 444045 & 2091977 & 2854764 & 2882893 & 2502703 \\ $\langle-t\rangle$ [(GeV/$c$)$^2$] & 0.0039 & 0.0077 & 0.0126 & 0.0175 & 0.0232 \\ $\delta t$ [(GeV/$c$)$^2$] (syst.) & 0.0001 & 0.0002 & 0.0003 & 0.0004 & 0.0004 \\ \midrule $A_N$ & 0.0403 & 0.0299 & 0.0227 & 0.0196 & 0.0170 \\ $\delta A_N$ (stat.) & 0.0016 & 0.0008 & 0.0007 & 0.0007 & 0.0007 \\ $\delta A_N$ (syst.) & 0.0021 & 0.0016 & 0.0012 & 0.0010 & 0.0009 \\ \bottomrule \end{tabular} \caption{$A_N$ values in five $t$ ranges with associated uncertainties. Statistical errors for $t$ are negligible, and the combined systematic errors are shown (see the text for details). Statistical and systematic errors on $A_N$ are also shown, where $\delta A_N$ (syst.) is a scale error due to the beam polarization.} \label{tab:results} \end{table} \begin{figure} \includegraphics[width=88mm]{Fig4.eps} \caption{(color online) The measured single spin asymmetry $A_N$ for five $-t$ intervals. Vertical error bars show statistical uncertainties. Statistical error bars in $-t$ are smaller than the plot symbols. The dashed curve corresponds to theoretical calculations without hadronic spin-flip, and the solid one represents the $r_5$ fit.} \label{fig:an} \end{figure} \begin{figure} \includegraphics[width=80mm]{contour_new.eps} \caption{(color online) Fitted value of $r_5$ with contours corresponding to statistical errors only (solid ellipse and cross) and statistical+systematic errors (dashed ellipse and cross) of $1\sigma$.} \label{fig:r5} \end{figure} \section{Results and Conclusions} The measured values of $A_N$ are shown in Table~\ref{tab:results} and presented in Fig.~\ref{fig:an}, together with parameterizations based on formula~(\ref{cnicurve}): the dashed line corresponds to no hadronic spin-flip contribution, i.e., $r_5=0$, while the solid line is the result of the fit using $r_5$ as a free parameter.
The other parameter values used in the fit are $\sigma_{\rm total} = 51.79\pm0.12$~mb and $\rho = 0.1278\pm0.0015$, taken from fits to the world pp and ${\rm p}\overline{{\rm p}}$ data~\cite{compete,bourrely2}, and $B=16.3\pm1.8$~(GeV/$c$)$^{-2}$ from Ref.~\cite{pp2ppplb04}. The value of $r_5$ resulting from the fit described above is shown in Fig.~\ref{fig:r5} together with $1\sigma$ confidence-level contours. \begin{table}[htbp] \centering \begin{tabular}{@{} lcrr @{}} \toprule & central value & Re$\:r_5$ = 0.0017 & Im$\:r_5$ = 0.007 \\ \midrule & uncertainties & $\delta$Re$\:r_5$ & $\delta$Im$\:r_5$ \\ \midrule 1 & statistical & 0.0017 & 0.030 \\ 2 & $\delta t$($L^{\rm eff}$) & 0.0008 & 0.005 \\ 3 & $\delta t$(alignment) & 0.0011 & 0.011 \\ 4 & ${\delta\mathcal P}$ & 0.0059 & 0.047 \\ \midrule 5 & $\delta\sigma_{\rm total}$ & 0.0003 & 0.002 \\ 6 & $\delta\rho$ & $<$ 0.0001 & $<$ 0.001 \\ 7 & $\delta B$ & $<$ 0.0001 & $<$ 0.001 \\ \midrule & total syst. error & 0.0061 & 0.049 \\ & total stat. + syst. error & 0.0063 & 0.057 \\ \bottomrule \end{tabular} \caption{The fitted $r_5$ values and their uncertainties. (1): Statistical uncertainty. (2)-(4): Systematic uncertainties associated with this measurement. (5)-(7): Systematic uncertainties associated with the values used in the fit function. See the text for details.} \label{tab:systematics} \end{table} \begin{figure} \includegraphics[width=80mm]{r5_compare_paper_extended.eps} \caption{(color online) Measurements of Im($\:r_5$) values for (a) this experiment, (b) RHIC pp2pp at $\sqrt{s}$=200 GeV~\cite{pp2ppAN}, (c) RHIC H-jet target at $\sqrt{s}$=21.7 GeV~\cite{bazilevsky}, (d) FNAL E704 at $\sqrt{s}$=19.4 GeV~\cite{e704}, (e) RHIC H-jet target at $\sqrt{s}$=13.7 GeV~\cite{jetcal}, (f) RHIC H-jet target at $\sqrt{s}$=7.7 GeV~\cite{bazilevsky}, and (g) RHIC H-jet target at $\sqrt{s}$=6.8 GeV~\cite{alekseev}.
Theoretical calculations shown are (1) anomalous moment~\cite{ryskin}, (2) quark-diquark picture~\cite{diquark}, (3) two-pion exchange model~\cite{pumplin}, and (4) impact picture~\cite{bourrely}. The theoretical calculations are either energy independent (1,2,3) or done at $\sqrt{s}$=200 GeV (4). The vertical dashed line indicates where Im($\:r_5$)=0. All error bars shown include both statistical and systematic errors.} \label{fig:r5compare} \end{figure} In Table~\ref{tab:systematics} we show the central value of the fit and the uncertainties on Re$\:r_5$ and Im$\:r_5$ due to the listed effects. The first row of the table gives the statistical error of the fit with the central values of the parameters. The remaining rows show the changes in Re$\:r_5$ and Im$\:r_5$ when each parameter was varied, one at a time, by $\pm 1\:\sigma$ in the fit procedure. Rows 2 and 3 show the effects of the systematic uncertainties in $L^{\rm eff}$ and in the alignment, row 4 the effect of the beam polarization (a vertical-scale uncertainty of $A_N$), and rows 5-7 the systematic contributions due to the uncertainties of the fit parameters. The dominant source of systematic uncertainty is the beam polarization. The total systematic uncertainty, including the effects in rows 2-7 of Table~\ref{tab:systematics}, is obtained by adding the error covariance matrices. The final result on $r_5$ is shown in Fig.~\ref{fig:r5} together with both statistical and systematic uncertainties. The obtained values Re$\:r_5$ = 0.0017$\pm$0.0063 and Im$\:r_5$ = 0.007$\pm$0.057 are consistent with the hypothesis of no hadronic spin-flip contribution at the energy of this experiment. Since the maximum $A_N$ in the CNI region can be evaluated as $\kappa - 2{\rm Im}\:r_5$ in Eq.~(\ref{cnicurve}), theoretical calculations emphasize values of Im$\:r_5$.
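The text combines the systematic sources by adding error covariance matrices; as a simplified sketch that keeps only the diagonal (quadrature) part and takes the ``$<$'' entries of Table~\ref{tab:systematics} at their quoted limits, the totals in the table are reproduced:

```python
import math

# per-source uncertainties on Re r5 and Im r5 (rows 2-7 of the table);
# the "<" upper limits are taken at their quoted bounds
syst_re = [0.0008, 0.0011, 0.0059, 0.0003, 0.0001, 0.0001]
syst_im = [0.005, 0.011, 0.047, 0.002, 0.001, 0.001]
stat_re, stat_im = 0.0017, 0.030

def quad(xs):
    # combine independent uncertainties in quadrature
    return math.sqrt(sum(x * x for x in xs))

tot_syst_re, tot_syst_im = quad(syst_re), quad(syst_im)
tot_re = quad([stat_re, tot_syst_re])
tot_im = quad([stat_im, tot_syst_im])
print(round(tot_syst_re, 4), round(tot_syst_im, 3))  # 0.0061 0.049
print(round(tot_re, 4), round(tot_im, 3))            # 0.0063 0.057
```

The agreement indicates that the off-diagonal (Re-Im correlation) terms have little effect on the quoted one-dimensional errors.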
Measurements of Im$\:r_5$ at different energies in the range 6.8 GeV $\leqslant \sqrt{s} \leqslant$ 200 GeV are shown in Fig.~\ref{fig:r5compare}, together with predictions of theoretical models of the hadronic spin-flip amplitude as discussed above. All of the experimental results, including that reported here, are consistent with the assumption of no hadronic spin-flip contribution to the elastic proton-proton scattering. The high accuracy of the current measurement provides strong limits on the size of any hadronic spin-flip amplitude at this high energy, hence significantly constraining theoretical models which require hadronic spin-flip.
\section{Introduction} Game theory deals with situations in which two or more parties compete to maximize their respective payoffs by playing suitable strategies according to a known payoff matrix. The extension of game theory to the quantum domain, with quantization of the strategy space, has shown a clear advantage over classical strategies \cite{meyer,eisert,marinatto,nawaz1}. Detailed descriptions of classical and quantum game theory can be found in \cite{neumann,lee}. In the quantum version of a game, the arbiter prepares an initial quantum state and passes it on to the players (generally referred to as Alice and Bob). After applying their local operators (or strategies), the players return the state to the arbiter, who then announces the payoffs by performing a measurement with suitable payoff operators depending on the payoff matrix of the game. The role of the initial quantum state has remained an interesting issue in quantum games \cite{eisert,marinatto,azhar,nawaz1}. However, the importance of the payoff operators used by the arbiter to perform the measurement that determines the payoffs of the players has remained largely unnoticed. In our earlier paper \cite{nawaz} we pointed out the importance of the measurement basis in quantum games. It was shown that if the arbiter is allowed to perform the measurement in an entangled basis, interesting situations can arise that are not possible in the frameworks of Eisert et al. \cite{eisert} and Marinatto et al. \cite{marinatto}. In this paper we further extend our earlier work to investigate the role of the measurement basis in quantum games, taking the Prisoner's Dilemma as an example. In this scenario the quantum payoffs fall into four different categories according to the initial state and the measurement basis. These different situations arise from the possibility of having a product or entangled initial state and then applying a product or entangled basis for the measurement \cite{pati,xyz}.
In the context of our generalized framework for quantum games, the four different types of payoffs are: (i) $\$_{PP}$, the payoff when the initial quantum state is of product form and a product basis is used for the measurement that determines the payoff; (ii) $\$_{PE}$, the payoff when the initial quantum state is of product form and an entangled basis is used for the measurement; (iii) $\$_{EP}$, the payoff when the initial quantum state is entangled and a product basis is used for the measurement; and (iv) $\$_{EE}$, the payoff when the initial quantum state is entangled and an entangled basis is used for the measurement. Our results show that these payoffs obey the relation $\$_{PP}<\$_{PE}=\$_{EP}<\$_{EE}$ at the Nash equilibrium (NE). It is also interesting to note that the role of entangled and/or product input and entangled and/or product measurement in this relation is very similar to its role in the known relation for the classical capacities of quantum channels: it is shown in Ref. \cite{king} that the classical capacity $C$ of a quantum channel, i.e., its capability to transmit the maximum amount of classical information, obeys a relation of the form $C_{PP}<C_{PE}=C_{EP}<C_{EE}$. In this paper we do not attempt to investigate the possible relationship between channel capacities and payoffs. \section{\label{prisoner}Prisoner's Dilemma} In the game of Prisoner's Dilemma, two prisoners are being interrogated in separate cells for a crime they have committed together. The two possible moves for these prisoners are to cooperate ($C$) or to defect ($D$).
They are not allowed to communicate but have access to the following payoff matrix: \begin{equation} \text{Alice}\ \begin{array}{c} C \\ D \end{array} \overset{\text{Bob}}{\overset{\begin{array}{cc} C\text{ \ \ \ \ } & D \end{array}}{\left[ \begin{array}{cc} \left( 3,3\right) & \left( 0,5\right) \\ \left( 5,0\right) & \left( 1,1\right) \end{array} \right] }}, \label{classical payoff} \end{equation} It can be seen from Eq. (\ref{classical payoff}) that $D$ is the dominant strategy for both players. Therefore, rational reasoning forces each player to play $D$, making $(D,D)$ the Nash equilibrium of the game, with payoffs (1,1), i.e., 1 for each. The players could have obtained higher payoffs had both of them decided to play $C$ instead of $D$. This is the dilemma in this game \cite{flood}. Eisert et al. \cite{eisert} analyzed this game in the quantum domain and showed that there exists a suitable quantum strategy for which the dilemma is resolved. They also pointed out a quantum strategy that always wins over all classical strategies. In our generalized version of quantum games, the arbiter prepares an initial state of the form \begin{equation} \left| \psi _{in}\right\rangle =\cos \frac{\gamma }{2}\left| CC\right\rangle +i\sin \frac{\gamma }{2}\left| DD\right\rangle . \label{state in} \end{equation} Here $\left| C\right\rangle $ and $\left| D\right\rangle $ represent vectors in the strategy space corresponding to Cooperate and Defect, respectively, with $\gamma \in \left[ 0,\pi \right] $. The strategy of each of the players can be represented by a unitary operator $U_{i}$ of the form \begin{equation} U_{i}=\cos \frac{\theta _{i}}{2}R_{i}+\sin \frac{\theta _{i}}{2}P_{i}, \label{combination} \end{equation} where $i=1$ or $2$ and $R_{i}$, $P_{i}$ are the unitary operators defined as: \begin{align} R_{i}\left| C\right\rangle & =e^{i\phi _{i}}\left| C\right\rangle ,\text{ \ \ }R_{i}\left| D\right\rangle =e^{-i\phi _{i}}\left| D\right\rangle , \notag \\ P_{i}\left| C\right\rangle & =-\left| D\right\rangle ,\text{ \ \ \ \ \ }P_{i}\left| D\right\rangle =\left| C\right\rangle . \label{oper} \end{align} Here we restrict our treatment to the two-parameter set of strategies ($\theta _{i},\phi _{i}$) for mathematical simplicity, in accordance with Ref. \cite{eisert}. After the application of the strategies, the initial state given by Eq. (\ref{state in}) transforms to \begin{equation} \left| \psi _{f}\right\rangle =(U_{1}\otimes U_{2})\left| \psi _{in}\right\rangle , \label{final} \end{equation} and using Eqs. (\ref{oper}) and (\ref{final}) the above expression becomes \begin{align} \left| \psi _{f}\right\rangle & =\cos \left( \gamma /2\right) \left[ \cos \left( \theta _{1}/2\right) \cos \left( \theta _{2}/2\right) e^{i\left( \phi _{1}+\phi _{2}\right) }\left| CC\right\rangle -\cos \left( \theta _{1}/2\right) \sin \left( \theta _{2}/2\right) e^{i\phi _{1}}\left| CD\right\rangle \right. \notag \\ & -\left. \cos \left( \theta _{2}/2\right) \sin \left( \theta _{1}/2\right) e^{i\phi _{2}}\left| DC\right\rangle +\sin \left( \theta _{1}/2\right) \sin \left( \theta _{2}/2\right) \left| DD\right\rangle \right] \notag \\ & +i\sin \left( \gamma /2\right) \left[ \cos \left( \theta _{1}/2\right) \cos \left( \theta _{2}/2\right) e^{-i(\phi _{1}+\phi _{2})}\left| DD\right\rangle +\cos \left( \theta _{1}/2\right) \sin \left( \theta _{2}/2\right) e^{-i\phi _{1}}\left| DC\right\rangle \right. \notag \\ & +\left. 
\cos \left( \theta _{2}/2\right) \sin \left( \theta _{1}/2\right) e^{-i\phi _{2}}\left| CD\right\rangle +\sin \left( \theta _{1}/2\right) \sin \left( \theta _{2}/2\right) \left| CC\right\rangle \right] . \label{state fin} \end{align} The operators used by the arbiter to determine the payoffs of Alice and Bob are \begin{align} P_{A}& =3P_{CC}+P_{DD}+5P_{DC}, \notag \\ P_{B}& =3P_{CC}+P_{DD}+5P_{CD}, \label{pay-operator} \end{align} where \begin{subequations} \label{oper a} \begin{align} P_{CC}& =\left| \psi _{CC}\right\rangle \left\langle \psi _{CC}\right| ,\text{ \ }\left| \psi _{CC}\right\rangle =\cos \left( \delta /2\right) \left| CC\right\rangle +i\sin \left( \delta /2\right) \left| DD\right\rangle , \label{oper 1} \\ P_{DD}& =\left| \psi _{DD}\right\rangle \left\langle \psi _{DD}\right| ,\text{ \ }\left| \psi _{DD}\right\rangle =\cos \left( \delta /2\right) \left| DD\right\rangle +i\sin \left( \delta /2\right) \left| CC\right\rangle , \label{oper 2} \\ P_{DC}& =\left| \psi _{DC}\right\rangle \left\langle \psi _{DC}\right| ,\text{ \ }\left| \psi _{DC}\right\rangle =\cos \left( \delta /2\right) \left| DC\right\rangle -i\sin \left( \delta /2\right) \left| CD\right\rangle , \label{oper 3} \\ P_{CD}& =\left| \psi _{CD}\right\rangle \left\langle \psi _{CD}\right| ,\text{ \ }\left| \psi _{CD}\right\rangle =\cos \left( \delta /2\right) \left| CD\right\rangle -i\sin \left( \delta /2\right) \left| DC\right\rangle , \label{oper 4} \end{align} with $\delta \in \left[ 0,\pi \right] $. The above payoff operators reduce to those of Eisert's scheme for $\delta $ equal to $\gamma $, the parameter that represents the entanglement of the initial state \cite{eisert}, and for $\delta =0$ they transform into those of the Marinatto and Weber scheme \cite{marinatto}. 
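That these measurement states form a genuine basis for any $\delta $ can be verified directly; the following sketch (the component ordering $|CC\rangle ,|CD\rangle ,|DC\rangle ,|DD\rangle $ is an assumption of the illustration) checks the orthonormality of the states in Eqs. (\ref{oper 1})-(\ref{oper 4}) and their reduction to the product basis at $\delta =0$:

```python
import math
import itertools

def basis(delta):
    c, s = math.cos(delta / 2), math.sin(delta / 2)
    # components ordered as |CC>, |CD>, |DC>, |DD>
    psi_cc = [c, 0, 0, 1j * s]
    psi_dd = [1j * s, 0, 0, c]
    psi_dc = [0, -1j * s, c, 0]
    psi_cd = [0, c, -1j * s, 0]
    return [psi_cc, psi_dd, psi_dc, psi_cd]

def inner(u, v):
    # Hermitian inner product <u|v>
    return sum(a.conjugate() * b for a, b in zip(u, v))

# the four measurement states are orthonormal for every delta in [0, pi]
for delta in (0.0, math.pi / 4, math.pi / 2, math.pi):
    b = basis(delta)
    for i, j in itertools.product(range(4), repeat=2):
        expected = 1.0 if i == j else 0.0
        assert abs(inner(b[i], b[j]) - expected) < 1e-12

print(basis(0.0))  # at delta = 0: the product states |CC>, |DD>, |DC>, |CD>
```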
\end{subequations} In our generalized quantization scheme, the payoffs of the players are calculated as \begin{eqnarray} \$^{A}(\theta _{1},\phi _{1},\theta _{2},\phi _{2}) &=&\text{Tr}(P_{A}\rho _{f}), \notag \\ \$^{B}(\theta _{1},\phi _{1},\theta _{2},\phi _{2}) &=&\text{Tr}(P_{B}\rho _{f}), \label{payoff} \end{eqnarray} where $\rho _{f}=\left| \psi _{f}\right\rangle \left\langle \psi _{f}\right| $ is the density matrix of the quantum state given by Eq. (\ref{state fin}) and Tr denotes the trace of a matrix. Using Eqs. (\ref{state fin}), (\ref{oper a}), and (\ref{payoff}), we obtain the following payoffs \begin{eqnarray} \$^{A}\left( \theta _{i},\phi _{j}\right) &=&\sin ^{2}\left( \theta _{1}/2\right) \sin ^{2}\left( \theta _{2}/2\right) \left[ \cos ^{2}\left( \frac{\gamma -\delta }{2}\right) +3\sin ^{2}\left( \frac{\gamma -\delta }{2}\right) \right] \notag \\ &&+\cos ^{2}\left( \theta _{1}/2\right) \cos ^{2}\left( \theta _{2}/2\right) \left[ 2+\cos \gamma \cos \delta +\cos 2\left( \phi _{1}+\phi _{2}\right) \sin \gamma \sin \delta \right] \notag \\ &&-\sin \theta _{1}\sin \theta _{2}\sin \left( \phi _{1}+\phi _{2}\right) \left[ \sin \gamma -\sin \delta \right] +\frac{5}{4}\left[ 1-\cos \theta _{1}\cos \theta _{2}\right] \notag \\ &&+\frac{5}{4}\left( \cos \theta _{2}-\cos \theta _{1}\right) \left[ \cos \gamma \cos \delta +\cos \left( 2\phi _{1}\right) \sin \gamma \sin \delta \right] . \label{payoff-general-a} \end{eqnarray} The payoff of player $B$ can be found by interchanging $\theta _{1}\longleftrightarrow \theta _{2}$ and $\phi _{1}\longleftrightarrow \phi _{2}$ in Eq. (\ref{payoff-general-a}). There can be four types of payoffs for each player, corresponding to the different combinations of $\delta $ and $\gamma $. 
In the following, $\$_{PP}\left( \theta _{1},\theta _{2}\right) $ denotes the payoffs of the players when the initial state of the game is a product state and the payoff operator used by the arbiter for the measurement is also of product form $(\gamma =0,\delta =0)$, and $\$_{EP}\left( \theta _{1},\theta _{2},\phi _{1},\phi _{2}\right) $ denotes the payoffs for an entangled input state when the payoff operator used for the measurement is of product form, i.e., $(\gamma \neq 0,\delta =0)$. The notations $\$_{PE}\left( \theta _{1},\theta _{2},\phi _{1},\phi _{2}\right) $ and $\$_{EE}\left( \theta _{1},\theta _{2},\phi _{1},\phi _{2}\right) $ are interpreted analogously. Therefore, for different values of $\delta $ and $\gamma $ the following four cases can be identified: \textbf{Case (a)} When $\delta =\gamma =0,$ Eq. (\ref{payoff-general-a}) becomes \begin{subequations} \label{SPP-prisoner} \begin{equation} \$_{PP}^{A}\left( \theta _{1},\theta _{2}\right) =3\cos ^{2}\left( \theta _{1}/2\right) \cos ^{2}\left( \theta _{2}/2\right) +\sin ^{2}\left( \theta _{1}/2\right) \sin ^{2}\left( \theta _{2}/2\right) +5\sin ^{2}\left( \theta _{1}/2\right) \cos ^{2}\left( \theta _{2}/2\right) . \label{SPP-prisoner-a} \end{equation} \end{subequations} This situation corresponds to the classical game, where each player plays $C$ with probability $\cos ^{2}\left( \theta _{i}/2\right) $, $i=1,2$ \cite{eisert1}. The Nash equilibrium corresponds to $\theta _{1}=\theta _{2}=\pi ,$ i.e., $(D,D)$, with payoffs for both players \begin{equation} \$_{PP}^{A}(\theta _{1}=\pi ,\theta _{2}=\pi )=\$_{PP}^{B}(\theta _{1}=\pi ,\theta _{2}=\pi )=1. \label{SPP-Nash} \end{equation} \textbf{Case (b)} When $\gamma =0$ and $\delta \neq 0$ in Eq. (\ref{payoff-general-a}), the game has two Nash equilibria, one at $\theta _{1}=\theta _{2}=0$ when $\sin ^{2}\left( \delta /2\right) \geq \frac{2}{3}$ and the other at $\theta _{1}=\theta _{2}=\pi $ when $\sin ^{2}\left( \delta /2\right) \leq \frac{1}{3}$. The corresponding payoffs for these Nash equilibria are \begin{eqnarray} \$_{PE}^{A}(\theta _{1} &=&0,\theta _{2}=0)=\$_{PE}^{B}(\theta _{1}=0,\theta _{2}=0)=3-2\sin ^{2}\left( \delta /2\right) , \notag \\ \$_{PE}^{A}(\theta _{1} &=&\pi ,\theta _{2}=\pi )=\$_{PE}^{B}(\theta _{1}=\pi ,\theta _{2}=\pi )=1+2\sin ^{2}\left( \delta /2\right) . \label{SPE-nash} \end{eqnarray} In this case the payoffs at the NE are independent of $\phi _{1}$ and $\phi _{2}$. It is clear that for all allowed values of $\delta $ the above payoffs remain less than 3, which is the optimal payoff for the two players if they cooperate. \textbf{Case (c)} For $\gamma \neq 0$ and $\delta =0,$ Eq. (\ref{payoff-general-a}) again gives two Nash equilibria, one at $\theta _{1}=\theta _{2}=0$ when $\sin ^{2}\left( \gamma /2\right) \geq \frac{2}{3}$ and the other at $\theta _{1}=\theta _{2}=\pi $ when $\sin ^{2}\left( \gamma /2\right) \leq \frac{1}{3}$. The corresponding payoffs are \begin{eqnarray} \$_{EP}^{A}(\theta _{1} &=&0,\theta _{2}=0)=\$_{EP}^{B}(\theta _{1}=0,\theta _{2}=0)=3-2\sin ^{2}\left( \gamma /2\right) , \notag \\ \$_{EP}^{A}(\theta _{1} &=&\pi ,\theta _{2}=\pi )=\$_{EP}^{B}(\theta _{1}=\pi ,\theta _{2}=\pi )=1+2\sin ^{2}\left( \gamma /2\right) . \label{SEP-nash} \end{eqnarray} Again, the payoffs at both Nash equilibria remain less than 3 for the allowed values of $\sin ^{2}\frac{\gamma }{2}$. From Eqs. (\ref{SPE-nash}) and (\ref{SEP-nash}) it is also clear that $\$_{EP}^{A}(0,0)=\$_{PE}^{A}(\pi ,\pi )$ only for $\delta =\gamma .$ \textbf{Case (d)} When $\gamma =\delta =\pi /2,$ Eq. 
(\ref{payoff-general-a}) becomes \begin{subequations} \label{SEE-prisoner} \begin{align} \$_{EE}^{A}\left( \theta _{1},\theta _{2},\phi _{1},\phi _{2}\right) & =3\left[ \cos \left( \theta _{1}/2\right) \cos \left( \theta _{2}/2\right) \cos \left( \phi _{1}+\phi _{2}\right) \right] ^{2} \notag \\ & +\left[ \sin \left( \theta _{1}/2\right) \sin \left( \theta _{2}/2\right) +\cos \left( \theta _{1}/2\right) \cos \left( \theta _{2}/2\right) \sin \left( \phi _{1}+\phi _{2}\right) \right] ^{2} \notag \\ & +5\left[ \sin \left( \theta _{1}/2\right) \cos \left( \theta _{2}/2\right) \cos \phi _{2}-\cos \left( \theta _{1}/2\right) \sin \left( \theta _{2}/2\right) \sin \phi _{1}\right] ^{2}. \label{SEE-prisoner-b} \end{align} \end{subequations} This payoff is the same as that found by Eisert et al. \cite{eisert}, and $\theta _{1}=\theta _{2}=0,\phi _{1}=\phi _{2}=\frac{\pi }{2}$ is the Nash equilibrium \cite{eisert} of the game, which gives the payoffs for both players as \begin{equation} \$_{EE}^{A}(0,0,\frac{\pi }{2},\frac{\pi }{2})=\$_{EE}^{B}(0,0,\frac{\pi }{2},\frac{\pi }{2})=3. \label{SEE-nash} \end{equation} Comparing Eqs. (\ref{SPP-Nash}), (\ref{SPE-nash}), (\ref{SEP-nash}), and (\ref{SEE-nash}), it is evident that \begin{equation} \$_{EE}^{l}(0,0,\frac{\pi }{2},\frac{\pi }{2})>\left( \$_{PE}^{l}(\theta _{1}=k,\theta _{2}=k),\$_{EP}^{l}(\theta _{1}=k,\theta _{2}=k)\right) >\$_{PP}^{l}(\theta _{1}=\pi ,\theta _{2}=\pi ) \end{equation} and \begin{equation} \$_{PE}^{l}(\theta _{1}=k,\theta _{2}=k)=\$_{EP}^{l}(\theta _{1}=k,\theta _{2}=k)\text{ for }\gamma =\delta , \end{equation} with $k=0,\pi $ and $l=A,B$. This expression shows the crucial role of entanglement in quantum games: the combination of an initial entangled state with entangled payoff operators gives higher payoffs than all other combinations of $\gamma $ and $\delta $. 
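The ordering of the four NE payoffs can be checked numerically from the case formulas above. The sketch below uses $\gamma =\delta =\pi /3$ for the mixed cases, so that $\sin ^{2}(\delta /2)=1/4\leq 1/3$ and the $(\pi ,\pi )$ equilibria of Eqs. (\ref{SPE-nash}) and (\ref{SEP-nash}) apply, together with the Eisert point for $\$_{EE}$:

```python
import math

def s_pp():          # product input, product measurement: classical NE (D, D)
    return 1.0

def s_pe(delta):     # product input, entangled measurement, NE at (pi, pi);
    # valid when sin^2(delta/2) <= 1/3
    return 1.0 + 2.0 * math.sin(delta / 2) ** 2

def s_ep(gamma):     # entangled input, product measurement, NE at (pi, pi);
    # valid when sin^2(gamma/2) <= 1/3
    return 1.0 + 2.0 * math.sin(gamma / 2) ** 2

def s_ee():          # entangled input and measurement: Eisert NE, dilemma resolved
    return 3.0

d = math.pi / 3      # gamma = delta = pi/3, sin^2(d/2) = 1/4
print(s_pp(), round(s_pe(d), 6), round(s_ep(d), 6), s_ee())  # 1.0 1.5 1.5 3.0
assert s_pp() < s_pe(d) == s_ep(d) < s_ee()
```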
\section{Conclusion} In quantum games the arbiter (the referee) prepares an initial quantum state and passes it on to the players (Alice and Bob). After applying their local operators (their strategies), the players return the state to the arbiter. The arbiter then performs a measurement on the final state, applying the payoff operators to determine the payoffs of the players on the basis of the payoff matrix of the game. In our earlier paper \cite{nawaz} we pointed out the importance of measurement in quantum games. Here we have extended that work by taking the Prisoner's Dilemma game as an example and have shown that, depending on the initial state and the type of measurement (product or entangled), the quantum payoffs can be categorized into four different types, $\$_{PP},\$_{PE},\$_{EP},\$_{EE}$, where $P$ and $E$ are abbreviations for product and entangled at the input and the measurement, respectively. It is shown that a relation of the form $\$_{PP}<\$_{PE}=\$_{EP}<\$_{EE}$ holds among the different payoffs at the NE.